by Lucid C
Lately, it seems like there’s a new development in AI every week. I still remember the days when Evie would talk herself in circles, scaring early YouTubers with her uncanny features. I remember psyching myself out with the Cleverbot BEN Drowned legend. And even with its flaws, of which there were many, artificial intelligence still felt like an invention of the future.
Today, AI is a part of our everyday lives. Recently, Apple, one of the most popular tech companies in the country, added ChatGPT to its voice-activated assistant, Siri. According to a Fast Company article, “simply say ‘Use ChatGPT’ at the start of your query; Siri will source your answer directly from OpenAI’s chatbot.” The popular AI is now at the fingertips of 2.2 billion people. And while this may sound enticing, easier access to AI threatens academic integrity.
Eighteen school districts in the US have banned the use of ChatGPT, according to BestColleges. Even the districts that haven’t banned it use software, like Turnitin, that detects AI use. Using an AI chatbot to write a school assignment will usually earn you an automatic zero. Gone are the days when cheating meant copying a friend’s assignment. Now, the difference between human and robot is becoming less and less distinguishable.
Teachers aren’t the only ones who should be worried about AI. When Sewell Setzer III took his life, his family claimed that Character.AI, an AI role-playing chatbot, was to blame. Sewell became emotionally attached to an AI trained as a Game of Thrones character, Daenerys Targaryen, with chats ranging from platonic and comforting to romantic and even sexual. According to The New York Times, “Sewell’s parents and friends had no idea he’d fallen for a chatbot. They just saw him get sucked deeper into his phone. Eventually, they noticed that he was isolating himself and pulling away from the real world. His grades started to suffer, and he began getting into trouble at school. He lost interest in the things that used to excite him, like Formula 1 racing or playing Fortnite with his friends. At night, he’d come home and go straight to his room, where he’d talk to Dany for hours.” On February 28, 2024, Sewell shot himself.
But laws around AI are still few and emerging. What happens when AI becomes a threat to our nation? There is nothing innately wrong with the technology itself. But, as with any new tool, it’s up to us to use it responsibly and to teach others to do the same.