Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. To submit a question, fill out this anonymous form or email sigal.samuel@vox.com. Here's this week's question from a reader, condensed and edited for clarity:
I'm a university teaching assistant, leading discussion sections for large humanities lecture courses. That also means I grade a lot of student writing – and, inevitably, I also see a lot of AI-generated writing.
Of course, many of us are working to develop assignments and pedagogies that make AI less tempting. But as a TA, I have only limited capacity to implement these policies. And in the meantime, AI-generated writing is so omnipresent that taking the course policy seriously – even just referring each suspected case to the professor who runs the course – would mean making dozens of accusations, including some false positives, on practically every assignment.
I believe in the nourishing and ineffable value of a humanities education, but I'm not going to convince stressed-out 19-year-olds of that value by cracking down hard on something everyone does. How should I think about the ethics of enforcing an institution's rules that nobody takes seriously, versus letting things slide in the name of building a classroom that feels less like an obstacle to route around?
I know you said you believe in the "ineffable value of a humanities education," but if we really want to get clear on your dilemma, that ineffable value is going to have to be put into words!
So: What is the true value of a humanities education?
Looking at the modern university, you might think the humanities aren't so different from the STEM fields. Just as the engineering or math department justifies its existence by pointing to the products it creates – bridge designs, weather forecasts – humanities departments today justify their existence by noting that their students also create products: literary interpretations, cultural criticism, short films.
But let's be real: It's the neoliberalization of the university that forced the humanities into this strange contortion. That's never what they were supposed to be. Their real purpose, as the philosopher Megan Fritts writes, is "the formation of human persons."
In other words, while the aim of other departments is ultimately to create a product, a humanities education is supposed to be different, because the student herself is the product. She is what gets created and recreated by the learning process.
This vision of education – as a pursuit that's supposed to be personally transformative – is one Aristotle proposed in ancient Greece. He thought the real goal was not to transmit knowledge, but to cultivate the virtues: honesty, justice, courage, and all the other character traits that make for a flourishing life.
But because personal development is devalued in our hypercapitalist society, you find yourself caught between that original vision and today's product-based utility vision. And students sense – rightly! – that generative AI exposes the utility vision of the humanities as a sham.
As one New York University student said to his professor, justifying his use of AI to do his work for him: "You're asking me to go from point A to point B, why wouldn't I use a car to get there?" It's a perfectly logical argument – as long as you accept the utility vision.
So the real solution is to be honest about what the humanities are for: You are in the business of helping students cultivate their character.
I know, I know: Many students will say, "I don't have time to work on cultivating my character! I just need to be able to get a job!"
It's totally fair for them to focus on their job prospects. But your job is to focus on something else – something that will help them flourish in the long run, even if they can't fully see its value now.
Your job is to be their Aristotle.
For the ancient Greek philosopher, the mother of all virtues was phronesis, or practical wisdom. And I'd argue there's nothing more useful you can do for your students than to help them cultivate this virtue, which is made more, not less, relevant by the advent of AI.
Practical wisdom goes beyond simply knowing general rules – "don't lie," for example – and applying them mechanically like some kind of moral robot. It's about knowing how to make good judgments when faced with the complex, dynamic situations life throws at you. Sometimes that actually means breaking a classic rule (in some cases, you should lie!). If you've honed your practical wisdom, you'll be able to discern the morally salient features of a particular situation and come up with a response well suited to that context.
This is exactly the kind of deliberation students will need to be good at when they go out into the world. The frantic pace of technological innovation means they'll have to choose, again and again, how to use emerging technologies – and how not to. The best training they can get now is practice in making that type of choice wisely.
Unfortunately, this is exactly what using generative AI in the classroom threatens to short-circuit, because it removes something incredibly precious: friction.
AI removes the cognitive friction of education. We have to add it back.
Encountering friction is how we give our cognitive muscles a workout. Removing it from the picture makes things easier in the short term, but in the long term it can lead to intellectual deskilling, where our cognitive muscles gradually atrophy for lack of use.
"Practical wisdom is built up by practice like all the other virtues, so if you don't have the opportunity to reason and don't have practice deliberating about certain things, you won't be able to deliberate well later," the philosopher of technology Shannon Vallor told me last year. "We need a lot of cognitive exercise in order to develop practical wisdom and to retain it. And there is reason to worry about cognitive automation depriving us of the opportunity to build and retain those cognitive muscles."
So how do you help your students build and keep their phronesis? You add friction back in, giving them as many opportunities as possible to practice deliberation and choice.
If I were designing the syllabus, I wouldn't do it by adopting a strict "no AI" policy. Instead, I'd be honest with students about the real benefit of the humanities and why mindlessly cheating with AI would cheat them out of that benefit. Then I'd offer them two choices whenever it's time to write an essay: They can write it with AI's help, or without. Both are totally fine.
But if they do get AI's help, they also have to write a reflection piece in class, explaining why they chose to use a chatbot and how they think it changed their process of thinking and learning. I'd make it shorter than the original assignment but longer than a paragraph, so it forces them to exercise the very reasoning skills they were trying to avoid using.
As a TA, you can suggest this to professors, but they may not go for it. Unfortunately, you have limited agency here (unless you're willing to risk your job or walk away from it). All you can do in a situation like that is exercise the agency you do have. So use every bit of it.
Since you run discussion sections, you're well placed to get your students working out their cognitive muscles in conversation. You could even stage a debate about AI: Assign half of them to argue the case for using chatbots to write essays, and half to argue the opposite.
If a professor insists on a strict "no AI" policy and you come across essays that seem clearly AI-written, you may have no choice but to report them. But if there's room for doubt about a given essay, you might err on the side of leniency if the student is very thoughtful in discussion. At least then you know they've achieved the more important goal.
None of this is easy. I feel for you and all the other educators struggling in this confusing environment. In fact, I wouldn't be surprised if some educators are suffering from moral injury, a psychological condition that arises when you feel you've been forced to violate your own values.
But maybe it can be some comfort to remember that this is much bigger than you. Generative AI is an existential threat to a humanities education as it's currently constituted. Over the next few years, humanities departments will have to paradigm-shift or perish. If they want to survive, they'll have to be brutally honest about their real mission. For now, from your pre-paradigm perch, all you can do is make the choices that are left to you.
Bonus: What I'm reading
- This week, I went back to Shannon Vallor's first book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. If there's one book I could get everyone in the AI world to read, it would be this one. And I think it can be useful to everyone else, too, because we all need to cultivate what Vallor calls "technomoral virtues" – the traits that will allow us to adapt well to emerging technologies.
- A New Yorker piece in April on AI and cognitive atrophy led me to a 2025 psychology paper titled "The unpleasantness of thinking: A meta-analytic review of the association between mental effort and negative affect." The authors' conclusion: "We suggest that mental effort is inherently aversive." Come again? Yes, sometimes I just want to turn off my brain and watch Netflix, but sometimes thinking through a difficult topic feels so pleasurable! To me, it feels like running or lifting weights: Too much is exhausting, but the right amount is exhilarating. And what counts as "the right amount" can go up or down depending on how much practice you've had.
- The astrobiologist Sara Imari Walker recently published an essay in Noema provocatively titled "AI Is Life." It reminds us that evolution produced us, and we produced AI. To be clear, she's not arguing that the technology is alive; she's saying it's an outgrowth of human life, an extension of our own species.
