While AI reshapes how educators teach and how learners acquire and develop skills, the panel considered the potential risks of using AI in education and how they can be mitigated to ensure its power can still be harnessed to improve outcomes.
Sian Cooke, responsible for evidence and adoption of education technology at the UK Department for Education (DfE), highlighted the resources on AI recently published by the government. The documents aim to help schools and colleges use AI safely and effectively.
“There is also a material challenge,” said Cooke, noting that the DfE places strong emphasis on getting the basics right, such as access to wifi and laptops. “Teachers already have so much to worry about, they want technology that works,” she told delegates.
However, the DfE also wishes to promote the use of AI to increase efficiency. “We want to promote the use of this technology so that teachers can focus on what they do best,” said Cooke. While she hoped that AI could unlock potential and help deliver excellent teaching to every child, she also shared concerns that unequal access could create a widening digital divide.
Professor Manolis Mavrikis, professor of artificial intelligence in education at IOE, UCL’s Faculty of Education and Society, told delegates that the discussion around the use of new technologies in education had become polarized. “Even with positive uses of edtech, we see confused arguments around screen time,” he said.
Mavrikis argued that the accumulating evidence supports the application of AI in the classroom. He also warned that students will continue to use the technology even if educators do not. “If we do not show them the best way to use it, they will use it in a lazy way,” he said, warning against its unregulated use.
Guadalupe Sampedro, a partner at the law firm Cooley LLP, explained that any legal framework on the use of AI will be complicated to implement. “From a legal point of view, AI is very new and it is constantly evolving,” she said. “We already have a very comprehensive legal framework that is difficult for businesses to navigate.”
With many companies struggling to train voice recognition for children because of data protection rules, Sampedro said that some had found a workaround using synthetic data. “It's not easy, but it's doable,” she said.
While Sampedro added that many want reduced complexity, she explained that the EU General Data Protection Regulation (GDPR) and the UK GDPR make this difficult. That said, she admitted that legislation in the United States was currently “a bit of a Wild West” and called for consistency in the global approach.
Joshua Wohle, CEO and co-founder of the learning platform Mindstone, said that AI-powered technology is already here and in use. He felt it was important that organizations work with this reality, rather than trying to prevent it from being used. “The worst case is that you get employees using personal accounts and data being used in an unsafe way,” he said. “We must make employees feel that they can use it.”
However, he recognized that appropriate data is needed to effectively power AI tools. “If you are afraid of providing it with data, this will not work,” added Wohle. “AI can only be as useful as the data you feed it.”
