Have you ever had that moment when you knew exactly what you were searching for online and didn’t want to wade through all the sites that appear in your results? That’s why I tried ChatGPT last week for the first time, for something as simple as gift ideas for a friend. A search that normally takes me 10-30 minutes or longer took seconds. It also got me thinking, though: are people unintentionally giving out personal data by using this new generative AI software?
ChatGPT came on the scene only a few months ago, and now more than ever, people are incorporating it into their everyday lives.
People have used it for everything from recipes to coding entire websites. ChatGPT is a large language model developed over the past several years by a tech venture called OpenAI, which has had the financial backing of big players in the industry, including Elon Musk and Microsoft.
The capabilities are endless.
And that may have many of us fellow millennials wondering if that’s a good thing. Millennials were the last generation to experience a world without the Internet. Sites like Xanga, MySpace, and AIM (AOL Instant Messenger) started changing how we interacted as we moved into our teens and early 20s. When your coming of age includes the Internet, and an entire generation was burned by sites like LimeWire, it builds a natural skepticism and hesitancy to trust artificial intelligence implicitly. Combine that with growing up on movies like “Smart House” and “I, Robot,” where the technology starts out very chill and suddenly wants to take over the world, and the wariness makes sense.
In this new frontier of tech, how do we use this to improve our lives while also protecting our information?
As we’re all still learning together, thoughtfully consider what information you give your new bestie, ChatGPT.