
Credit: Allison Shelley / The Verbatim Agency for EDUimages
A few weeks ago, my high school chemistry class sat through an "AI training." We were told it would teach us how to use ChatGPT responsibly. We worked through worksheets with questions like "When is it acceptable to use ChatGPT on written assignments?" and "How can it support, rather than replace, your thinking?" Another asked: "What are the risks of relying too heavily on ChatGPT?"
Most of us had just used ChatGPT to finish the worksheet. Then we moved on to other things.
Schools rushed to regulate AI based on a hopeful fiction: that students are curious, self-directed learners who will use the technology responsibly if given the right guardrails. But most students don't use AI to think or refine ideas; they use it to get work done faster. And school policies, grounded in optimism rather than observation, have done little to stop it.
Like many districts across the country, our school's policy encourages students to use ChatGPT to brainstorm, organize and even generate ideas, but not to write. If we use generative AI to produce the actual content of an assignment, we are supposed to receive a zero.
In practice, that line is meaningless. I later spoke with my chemistry teacher, who said she had started checking the Google Docs histories of the papers she assigned and discovered that huge chunks of students' writing had been pasted in. "It's just disappointing," she said. "There's nothing I can do."
In Bible class, students quoted ChatGPT's output word for word during presentations. One student projected a slide listing the minor prophets alongside the sentence: "Would you like me to format this as a table for you?" Another spoke confidently about the "post-exilic" period, shortly after mispronouncing "patriarchy." At one point, Mr. Knoxville stopped on a slide and asked, "Why does it say BCE?" Then, beaming, he answered his own question: "Because it's ChatGPT using secular language." Everyone laughed and moved on.
It is safe to say that, in reality, most students do not use AI to deepen their learning. They use it to bypass the learning process entirely. And the real frustration is not just that students are cutting corners; it's that schools keep pretending they aren't.
This doesn't mean AI should be banned. I'm not an AI alarmist. There is enormous potential in the thoughtful, controlled integration of these tools into the classroom. But handing them to students without restrictions and with little oversight undermines the core purpose of school.
This isn't just a high school problem. At the CSU system, administrators have doubled down on AI integration with the same blind optimism, assuming students will use these tools responsibly. But widespread adoption is not the same as responsible use. A recent study from the National Education Association found that 72% of high school students use AI to finish homework without really understanding the material.
"AI didn't corrupt deep learning," said Tiffany Noel, education researcher and professor at SUNY Buffalo. "It revealed that many assignments never required critical thinking in the first place. Just performance. AI is simply the faster actor; the problem is the script."
Exactly. AI didn't ruin education; it exposed what was already broken. Students respond to the incentives the education system has given them. We are told that grades matter more than understanding. So if there's an easy shortcut, why wouldn't we take it?
This also penalizes the students who don't cheat. They spend an hour wrestling with an assignment that another student finishes in three minutes with a chatbot and a text humanizer. Both get the same grade. It's discouraging and painfully absurd.
Of course, none of this is new. Students have always found ways to lighten their workload: copying homework, sharing answers, glancing at a neighbor's test. But this is different, because it's a technology that is supposed to help schools, and under the current paradigm, it doesn't. It leaves schools vulnerable to misuse and fails to reward students for doing things the right way.
So what should we do?
Start by admitting the obvious: if an assignment is done at home, it will likely involve AI. If students have internet access in class, they'll use it there, too. Teachers can't stop it; they see the phones under desks and the tabs flipped the second their backs are turned. Teachers simply cannot police 30 screens at once, and most won't try. Nor should they have to.
We need firm rules and clearer limits. AI should never be used to do a student's actual academic work, just as calculators aren't allowed on multiplication drills and spell-checkers aren't accepted on spelling tests. School is where you learn the skill, not where you offload it.
AI is built to answer prompts. So is homework. Of course students cheat. The only real solution is to make cheating structurally impossible. That means returning to the basics: pen-and-paper tests, in-class writing, oral defenses, live problem-solving, source-based analysis where every quotation is annotated, explained and verified. If an AI can complete an assignment in five seconds, it probably wasn't a good assignment in the first place.
But that doesn't mean AI has no place. It simply means we should put it where it belongs: behind the teacher's desk, not on the student's. Let it help teachers grade quizzes. Let it generate practice problems for students, or serve as a Socratic tutor that asks questions instead of answering them. Generative AI should be treated as a useful aid after mastery, not a replacement for learning.
Students are not idealized learners. They are strategic, social, overloaded and deeply attuned to what the system rewards. That is the reality of our education system, and the only way to keep up is to base policies on how students actually behave, not on how educators wish they would.
Until that happens, AI will keep writing our essays. And our teachers will keep grading them.
•••
William Liang is a high school student and education journalist living in San Jose, California.
The opinions expressed in this commentary represent those of the author. EdSource welcomes commentaries representing diverse points of view. If you would like to submit a commentary, please review our guidelines and contact us.
