How to navigate the future of AI in education and education in AI


Decision-makers and education leaders play a crucial role in mitigating the risks and challenges associated with AI in education, which they can do by implementing robust strategic frameworks. The adoption of AI-driven digital tools can widen the educational divide, and it introduces other risks as well, including threats to academic integrity, with concerns about plagiarism, cheating in assessments and the loss of critical skills. There are also risks linked to inaccuracy, discrimination, bias and data privacy.

Teachers will remain essential in education, even with AI-powered tutors. AI lacks the nuanced understanding and empathy of human educators, which makes it less effective at meeting complex learning needs. In addition, AI tools have limits in recency and accuracy of content, often "hallucinating" or fabricating information. Students and teachers must learn to navigate these limitations and verify AI outputs. Consequently, AI tutors cannot yet be trusted to educate young children, who may not recognize errors or hallucinations.

Transparency and explainability

AI models can operate as "black boxes", with their inner workings often hidden or poorly understood. This lack of transparency makes it hard for users to understand the rationale behind decisions about learning paths, grading or student performance, potentially introducing biases or errors. Without transparency and explainability, students, parents and teachers cannot assess the fairness or accuracy of AI-based recommendations.

To promote broad acceptance and effective use of AI in education, it is essential to establish model development protocols that clarify how AI recommendations are generated and validated. Building transparency and explainability into AI models is vital to fostering trust.

Data privacy and security

Data privacy and security are fundamental to building trust in AI-powered educational tools, promoting technology that improves learning without compromising students' safety and rights. In education, AI tools often rely on large quantities of student data to provide personalized learning experiences, track progress or adapt to individual needs. However, the collection and storage of sensitive information, such as academic performance and behavioral patterns, raises concerns about who has access to this data and how it may be used.

Impact on learning quality and academic integrity

Educators fear that excessive dependence on AI tools, in particular generative AI (GenAI), can undermine students' research, critical thinking and communication skills. GenAI raises academic integrity concerns because it can produce essays, solve mathematical problems and complete assessments that appear acceptable. Consequently, the emphasis has often been placed on banning AI use or detecting AI-generated submissions. However, students need clear guidance and policies on the ethical and effective use of AI.

As schools and universities adopt AI-focused approaches, they should regularly monitor and assess outcomes while gathering stakeholder feedback, particularly with regard to equity. This allows the adjustments needed to improve effectiveness and close any gaps.

Lack of digital and AI skills among staff and students

Most teachers have not yet integrated AI into their daily workflows, mainly due to a lack of knowledge about how to do so. They need guidance, support and resources to use AI tools effectively to improve learning. A 2025 EdWeek Research Center report found that the main reason American teachers had not integrated AI into their practice was a "lack of knowledge and support".

Lack of digital infrastructure

To take advantage of AI in education, teachers and learners need access to digital devices, high-speed internet and secure platforms. Without universal access to technology, the digital divide and educational inequity will worsen, particularly in underserved communities lacking reliable internet and computing resources. Promoting equitable AI-enhanced education requires investments in broadband internet, connected devices and, in some regions, stable electricity.

AI educational tools such as adaptive learning platforms and intelligent tutoring systems require significant financial investment for setup, maintenance, updates and data security. Implementing AI requires sophisticated data systems to manage student data, which must comply with strict privacy standards, increasing costs for software, data protection and staff training. Many schools, especially in underserved areas, struggle to fund traditional resources, let alone advanced technologies and their security needs.
