by Emi C.
So everyone knows the new thing teachers worry about is AI. Who wants to write an English paper from scratch when you can have it written for you these days?
Teachers, we know you’re trying to prevent the use of AI in our work. Students, I know you're burnt out and just want to be done with school already. AI isn’t the answer, though; it’s not a solution but another problem.
I’m not here to criticize teachers for worrying about AI-assisted cheating, or to tell students that they’re just ‘being lazy.’ I’m here to say that, while no one ever claimed it was perfect, AI isn’t as reliable as you may think.
We’ve also all heard the excuse from our teachers: “‘there's only one of me, and ___ of you,’ so please be patient.” AI can be an advantage here, since it “equips teachers with the…strategies they will need to use…to improve and streamline everyday processes” (Language Magazine).
And imagine this: you have a question that's been bugging you, and you want a quick, simple answer. Your AI is there to help. Photomath can't double-check your answer or help you solve a complex math problem? Try ChatGPT. Exhausted from reading your history textbook? Snapchat AI could, oddly, be your solution. Maybe you're just lost in the sauce of Shakespeare; who doesn’t struggle with what he actually meant, anyway? AI is an easy, simple fix, especially when everyone struggles at one point or another.
But the rise of AI, and how reliable it seems with the internet as its database, doesn’t mean it’s free of flaws. An article by TechTarget highlighted that AI carries inherent bias from the algorithms its developers build, often without their realizing it. I ran into a situation where this applied to me directly.
A few months back, I was working on math at home and struggling. Now, you’re probably shouting “use Photomath.” In this situation, though, it wasn’t applicable, so I turned to ChatGPT. Despite some skepticism, I thought of it as my saving grace. Soon after I started using it, I began recognizing errors and questioning missed or hidden steps in how it got to the answer. When I asked about a mistake, it seemed to fix it: except it didn't. I never thought I’d catch a computer's mistakes, and it took me by surprise.
Research from Cambridge University supports the point I'm making: “Sometimes it’s even more difficult for an AI system to realize when it’s making a mistake than to produce a correct result.” Nowadays, more and more people rely on AI as their sole resource. Teachers aren’t just worried that you’re shortcutting the work; they’re concerned you’re learning and retaining wrong information, too. Even when you catch its mistakes and it claims to ‘fix’ them, it often can’t actually fix them.
Overall, my point is that AI isn’t so much reliable or viable as it is convenient. ChatGPT itself admits: “As an AI language model, I strive to provide accurate and reliable information based on the data and patterns present in the text I've been trained on. However, I'm not infallible."
While this won’t stop you from relying on AI, think twice about using it and about the information it gives you. AI is a cool concept and a useful resource, don’t get me wrong, but the technology isn’t yet at the point where it can hold up to the integrity our work needs.