
DM168

INTELLIGENT LEARNING OP-ED

Teaching with AI requires a balancing act in the classroom

Artificial intelligence is here to stay and educators must therefore grapple with how they use it. Its advantages are numerous, but so are its dangers. It is vital to know and navigate the difference.

Mark Potterton
(Photo: iStock; Illustration: Freepik)

Teachers around the world are using AI tools. Administrative tasks like marking and attendance are being automated, freeing up more time for teaching.

In online schools, teachers are using AI tools to personalise learning, tailoring content to individual student pace and needs. Many teachers are using AI to help with lesson plans and generate tests, resources and differentiated materials very quickly.

I have recently learnt how AI-powered platforms provide real-time feedback to students and how analytics help teachers identify struggling children. Language teachers are also using AI for translation and writing feedback. Teachers use tools such as ChatGPT, Claude and DeepSeek for brainstorming and research.

But teachers are also grappling with challenges around academic integrity and bias, and ensuring that AI does not replace human connection in the classroom.

A major new study from the Brookings Center for Universal Education argues that the risks of generative AI in school outweigh the benefits. A year-long global research project drew on consultations with more than 500 students, teachers, parents and education leaders in 50 countries and a review of more than 400 studies.

The researchers warn that the trajectory of AI implementation is undermining the very foundations of learning. Their central concern is what they call a “doom loop” of dependence, in which students increasingly outsource their thinking to AI tools, eroding the critical thinking and foundational knowledge that schools are supposed to build.

In a report on the study, the researchers stress that these harms are not inevitable and argue that AI can genuinely enrich education, but only when tools are co-designed with teachers and students, and grounded in sound pedagogy. They call for robust safeguards and urge governments, schools, tech companies and families to act together.

Students must learn not just how to use AI, but how to question it, and teachers should be able to help them through this process. (Photos: Unsplash)

AI is more than a tool

At the World Economic Forum in Davos earlier this year, Yuval Harari challenged a common assumption about AI: “The most important thing to know about AI is that it is not just another tool. It is an agent. It can learn and change by itself and make decisions by itself.

“A knife is a tool. You can use a knife to cut salad or to murder someone, but it is your decision what to do with the knife. AI is a knife that can decide by itself whether to cut salad or to commit murder.

“The second thing to know about AI is that it can be a very creative agent. AI is a knife that can invent new kinds of knives as well as new kinds of music, medicine and money.”

More unsettlingly, Harari argued that AI can lie and manipulate, observing that evolution rewards deception, and it has already begun exhibiting these tendencies.

On the question of whether AI can think, he invited his audience to examine their own thought processes. Much of human thinking, he suggested, is simply words forming sentences and arguments in the mind. If that is what thinking fundamentally is, then AI already outperforms many humans at it.

Harari’s sweeping conclusion was that anything built from words will be overtaken by AI, including law, literature and religion. He singled out the Abrahamic faiths – particularly Judaism, with its deep textual tradition – asking what it means for a “religion of the book” when an AI becomes its greatest scholar.

Harari has also written about what this agency implies in practice: “It is different with AIs. Unlike rivers and guards, AIs can actually make decisions by themselves. They will soon be able to make the decisions necessary to manage a bank account, file a lawsuit and even operate a corporation without any need [for] human executives, shareholders or trustees. AIs can therefore function as persons.”

Harari, in his other writings, offers some hope for education. He argues that human experience – pain, fear, love and non-verbal feelings – may be something AI cannot replicate (for now).

He suggests that the future of education does not lie in knowledge transfer or even analytical thinking, but in developing the fully human dimensions of a child: empathy, creativity rooted in lived experience, moral judgement and emotional intelligence.

Depersonalising the world

In January 2025, the Vatican published Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence, its most comprehensive statement on AI. It proclaimed that AI must always remain a tool and not a substitute for the human mind or soul.

The document affirmed that technological progress is part of God’s plan, but humans bear moral responsibility for how AI is used. It warned against AI deepening inequality, enabling autonomous weapons, eroding privacy and spreading misinformation. It also cautioned that anthropomorphising AI poses dangers, especially for children, and that misrepresenting AI as a person is a grave ethical violation. Ultimately, it argued that AI should complement human intelligence and not replace it.

Last month, Pope Leo XIV, recognised by Time magazine as an influential voice on ethical AI, warned priests in Rome against using AI chatbots like ChatGPT to write their sermons. He argued that outsourcing spiritual thinking to machines causes intellectual and spiritual atrophy, much like how relying on phone contacts has eroded our ability to memorise numbers. He argued that a genuine sermon requires personal faith and knowledge of one’s specific community, something no AI can replicate.

Unlocking what AI offers and understanding how to use it responsibly are key to using it effectively in the classroom. (Photo: Unsplash)

The way forward

AI offers genuine and exciting possibilities for education. It can personalise learning, make quality education accessible to remote communities, support students at their own pace and even strengthen critical thinking. These benefits are real, and schools should take advantage of them.

However, we cannot ignore the limitations that matter enormously in school settings. AI lacks emotional intelligence, genuine empathy and any moral compass. It can reflect and amplify biases embedded in its training data, which raises serious concerns about fairness and cultural sensitivity.

Data privacy, ethical oversight and the risk of widening inequality between communities are all real challenges that must not be ignored.

Most importantly, AI cannot replace the relational heart of great teaching. The human connection between teacher and student – pastoral care, moral modelling, the formative influence of one person on another – is something no algorithm can replicate.

The way forward must be informed by more than fear or uncritical enthusiasm. We need to make a deliberate choice to let AI do what it does well, while fiercely protecting the irreplaceable human core of teaching and learning. This means investing in teacher development, ensuring robust ethical oversight, holding honest conversations about bias and data and, above all, renewing our commitment to the question that algorithms find difficult to answer: what kind of human being are we trying to help this child become? DM

Dr Mark Potterton is director of the Three2Six Refugee Children’s Education Project.

This story first appeared in our weekly DM168 newspaper, available countrywide for R35.

