Who Designs, Decides: Rethinking AI in Education Through the Lens of Co-Creation – PA Times Online



The opinions expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Wilson Wong
June 20, 2025

When OpenAI released ChatGPT at the end of 2022, it sparked a wave of excitement and speculation across the education sector. Two years later, however, the long-awaited transformation has failed to materialize.

Despite the rapid advancement of AI tools, their actual integration into classrooms remains limited and uneven. According to an October 2025 Education Week national survey, while more teachers have received AI-related professional development, 58% had still received no training, and only 2% said they frequently use generative AI tools in their teaching. This striking contrast between the promises of AI and its real-world use highlights a growing disconnect.

A separate study published in Education Next sheds light on this gap. Over the past decade, American schools have widely adopted online mathematics platforms such as Khan Academy, DreamBox, i-Ready and IXL. Although research suggests these tools can improve student performance when used as intended, only a tiny fraction of students, around 5%, actually follow the recommended usage guidelines. Most of the gains observed in studies are concentrated among motivated, high-performing students from more advantaged backgrounds. This phenomenon, dubbed the "5% problem," raises a troubling question: do AI learning tools actually help close achievement gaps, or do they quietly widen them?

This challenge, in which advanced tools benefit a small subset of advantaged students while bypassing the majority, has deep implications for education systems in developed countries. It suggests that the very students AI is supposed to help most are the least likely to engage with it or benefit from it.

Stanford University's CS293/EDUC473 course, titled Empowering Educators via Language Technology, offers a comprehensive framework for understanding this disconnect. The course brings together students, educators, technologists and researchers to examine the limitations of current AI tools in real classroom environments. One of its core insights is that AI excels at automating standardized, quantifiable tasks, but education, particularly in K-12 contexts, is deeply human, contextual and relational.

When designers build AI tools as efficiency engines, automating grading, generating feedback or simulating tutoring, they risk reducing education to a series of mechanical tasks. But learning is not merely data processing. It involves failure, reflection, dialogue and growth. Even the most advanced AI models cannot truly understand why a student made a mistake, what kind of support they need or how to guide them empathetically through a learning journey.

The CS293 team also warns that many tools are designed for the few: those who already excel in digital environments. The study notes that many educational AI tools are created by engineers and scientists who are themselves high-achieving learners. As a result, these tools often reflect the assumptions, habits and learning styles of their creators, leaving behind students who struggle with language, motivation or access.

This bias is not malicious; it is structural. Developers often lack in-depth classroom experience and unintentionally create tools that assume a level of digital fluency, linguistic sophistication and self-regulation that many students simply do not have. At the heart of these challenges lies a fundamental question: who gets to design educational technologies, and who decides what counts as "good learning"? This question is more than philosophical; it is political, cultural and deeply practical.

Who designs, decides. AI systems do not arrive with neutral values; they encode the assumptions, priorities and blind spots of their creators. When decisions about how students should learn, what constitutes success and how performance is evaluated are made by engineers far removed from classrooms, the risk of misalignment is enormous.

Instead of top-down solutions, we need a new paradigm of co-design and co-creation, in which teachers, students and communities are embedded at every stage of development. This approach recognizes that educators are not mere users of technology; they are co-creators of educational meaning. They should be empowered to define the problems worth solving, the metrics that matter and how technology should serve, rather than dictate, pedagogy.

This concern is especially urgent as AI adoption accelerates. Developed countries in North America, Europe and Asia are the most likely to adopt AI tools quickly, often under political or market pressure to modernize. If we continue to outsource educational decision-making to private vendors or remote developers, we risk building systems that serve compliance more than creativity, and efficiency more than equity. Without addressing the equity, usability and pedagogical alignment of these tools, they risk reinforcing systemic disparities.

The "5% problem" is not a technological failure. It is a failure of design and implementation. It reminds us that access to tools is not the same as access to learning, and that digital solutions must be grounded in human realities.

The future of educational AI must be designed with, not simply delivered to, educators and students. The question of who holds the power to shape learning environments is fundamental. AI can support learning, but only if we ensure that the right people, namely teachers, students and communities, are the ones who decide what learning should look like.

As AI continues to evolve, we would do well to remember: the future of education is determined not by code, but by values, and by who gets to define them.


Author: Wilson Wong is the founding director and associate professor of Data Science and Policy Studies (DSPS), School of Governance and Policy Science, at the Chinese University of Hong Kong (CUHK). He is also a senior researcher at the School of Management, UCL, and a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University. His main research areas include AI and big data, digital governance, ICT and comparative public administration.

