During a session of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses to his students the same impossible question he often wrestles with in the research he leads with MIT's Computer-Assisted Programming Group:
“How can we make sure that a machine does what we want, and only what we want?”
At this moment, in what some consider the golden age of generative AI, that may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humanity itself.
He begins by recounting the Greek myth of King Midas, the monarch who was granted the godlike power to turn anything he touched into solid gold. As the story goes, the wish backfired when Midas accidentally turned everyone he loved into gold.
“Be careful what you ask for, because it might be granted in ways you don't expect,” he says, warning his students, many of them aspiring mathematicians and programmers.
Digging into the MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming: from the 1970s Pygmalion machine, which required incredibly detailed cues, to the computer software of the late 1990s that took teams of engineers and an 800-page document to program.
However remarkable in their time, these processes took too long to reach users, leaving no room for spontaneous discovery, play, and innovation.
Solar-Lezama talks about the risks of building modern machines that do not always respect a programmer's cues or red lines, and that are as capable of doing harm as they are of saving lives.
Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles, weighing who is morally responsible when one hypothetically strikes and kills a pedestrian. His argument questions the assumptions behind technical advances and considers multiple valid points of view, drawing on the philosophical theory of utilitarianism. “Roughly, according to utilitarianism, the moral thing to do is whatever brings about the greatest good for the greatest number of people,” Roesler explains.
MIT philosopher Brad Skow, with whom Solar-Lezama developed and teaches the course, leans forward and takes notes.
A class that requires technical and philosophical expertise
Ethics of Computing, offered for the first time in fall 2025, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.
The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline's lens to examining the broader implications of today's ethical questions, while Solar-Lezama, who is also the associate director and chief operating officer of the MIT Computer Science and Artificial Intelligence Laboratory, offers perspective through his.
Skow and Solar-Lezama attend each other's lectures and adjust their follow-up sessions in response. Introducing the element of learning from one another in real time has made for more dynamic and responsive class conversations. A recitation that breaks down the week's topic with graduate students from philosophy or computer science, along with a lively discussion, rounds out the course content.
“An outsider might think this is going to be a class that makes sure the new computer programmers MIT sends out into the world always do the right thing,” Skow says. However, the class is intentionally designed to teach students a different skill set.
Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors because he knew they could do something deeper than that.
“Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There are no other courses at MIT that place the two side by side,” Skow explains.
That is exactly what drew senior Alek Westover to enroll. The math and computer science double major explains: “A lot of people are talking about what the trajectory of AI will look like in five years. I thought it was important to take a course that will help me think more about that.”
Westover says he is drawn to philosophy by an interest in ethics and a desire to distinguish right from wrong. In math classes, he has learned to write down a problem statement and get instant clarity on whether he has solved it. In Ethics of Computing, however, he has learned how to make written arguments for “tricky philosophical questions” that may not have a single right answer.
For example, “One problem we might worry about is, what happens if we build powerful AI agents that can do any job a human can do?” Westover asks. “If we are interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?”
There is no easy answer, and Westover expects he will encounter many more such dilemmas in the workplace in the future.
“So, is the internet destroying the world?”
The semester began with a deep dive into AI risk, or the notion of “whether AI poses an existential risk to humanity,” unpacking free will, the science of how our brains make decisions under uncertainty, and debates about AI's long-term liabilities and regulation. A second, longer unit zeroes in on “the internet, the World Wide Web, and the social implications of technical decisions.” The end of the term looks at privacy, bias, and free speech.
One class session was provocatively devoted to asking: “So, is the internet destroying the world?”
Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these kinds of issues is precisely why the self-described “technology skeptic” enrolled in the course.
Growing up with a mom who is hard of hearing and a little sister with a developmental disability, Ogoe became the default family member whose role was to call providers for tech support or to program iPhones. She leveraged her skills into a part-time job fixing cell phones, which paved the way to a deep interest in computation and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics behind how consumers were affected by the technology she was helping to program.
“Everything I've done with technology is from the perspective of people, education, and personal connection,” Ogoe explains. “It's a niche that I love. Taking humanities classes about public policy, technology, and culture is one of my big passions, but this is the first course I've taken that also involves a philosophy professor.”
The following week, Skow lectures on the role of bias in AI, and Ogoe, who enters the workforce next year but may eventually focus on regulating related issues, raises her hand four times to ask questions or offer counterpoints.
Skow digs into COMPAS, a controversial AI software package that uses an algorithm to predict the likelihood that people accused of crimes will reoffend. According to a 2018 ProPublica article, COMPAS was likely to flag Black defendants as future criminals and gave false positives at twice the rate it did for white defendants.
The class session is devoted to determining whether the article justifies the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories of fairness:
“Substantive fairness is the idea that a particular outcome might be fair or unfair,” he explains. “Procedural fairness is about whether the procedure by which an outcome is produced is fair.” A variety of conflicting fairness criteria are then introduced, and the class discusses which are plausible, and what conclusions they warrant about the COMPAS system.
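To make that tension concrete, here is a minimal Python sketch; it is an illustration of the underlying statistics, not material from the course, and the group names, function names, and numbers are all hypothetical. It shows how a risk score can satisfy one fairness criterion (equal precision: people it flags reoffend at the same rate in both groups) while violating another (equal false positive rates) when the two groups' underlying reoffense rates differ.

```python
# A minimal, hypothetical illustration of conflicting fairness criteria.
# prediction: 1 = flagged high-risk, 0 = not flagged
# outcome:    1 = reoffended,        0 = did not reoffend

def false_positive_rate(predictions, outcomes):
    """Among people who did NOT reoffend, the share flagged high-risk."""
    flags_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

def precision(predictions, outcomes):
    """Among people flagged high-risk, the share who actually reoffended."""
    outcomes_for_flagged = [o for p, o in zip(predictions, outcomes) if p == 1]
    return sum(outcomes_for_flagged) / len(outcomes_for_flagged)

# Two made-up groups with different base rates of reoffending.
group_a = ([1, 1, 1, 0, 0, 0, 0, 0],   # predictions
           [1, 1, 0, 0, 0, 0, 0, 0])   # outcomes (base rate 2/8)
group_b = ([1, 1, 1, 1, 1, 1, 0, 0],   # predictions
           [1, 1, 1, 1, 0, 0, 0, 0])   # outcomes (base rate 4/8)

for name, (pred, out) in [("Group A", group_a), ("Group B", group_b)]:
    print(f"{name}: precision={precision(pred, out):.2f}, "
          f"false positive rate={false_positive_rate(pred, out):.2f}")
# Group A: precision=0.67, false positive rate=0.17
# Group B: precision=0.67, false positive rate=0.50
```

In this toy example, the score is equally precise for both groups, yet Group B's false positive rate is three times higher, which mirrors the shape of the COMPAS dispute; known impossibility results in the algorithmic fairness literature show that when base rates differ, criteria like these generally cannot all be satisfied at once.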
Later, the two professors head upstairs to Solar-Lezama's office to debrief on how the exercise went that day.
“Who knows?” Solar-Lezama says. “Maybe five years from now, everybody will laugh about how worried people were about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond the media discourse, and getting to the bottom of thinking rigorously about these questions.”
