I recently finished a gig as an external examiner for a graduate-level course at a large university. The job of the external examiner is to mark a subset of an examination that has already been marked, to ensure consistency in grading. This is a long-accepted practice in universities across the world.
But now that AI has inserted itself everywhere, including tertiary education, a new and sometimes angry debate has opened in the genteel world of pedagogy.
The submitted materials that I marked were research papers, each about 25 pages long. The topic was broad and deep, and the course designer’s rubric was carefully thought out — requiring the students to conduct wide-ranging research, then synthesise disparate sources, analyse, cogitate, and come up with new ideas, observations and arguments relating to the topic.
Here’s the interesting part. The course designer (Prof Herman Singh) has a policy of allowing students to access AI however they see fit. This had consequences.
The first (and most positive) was that every paper was well-written. Some of the students were not English first-language speakers, but the submissions showed no sign of this — the language was uniformly articulate and sharp. There is little doubt that many, if not all, of the students had received AI assistance in drafting their manuscripts. This struck me as an excellent leveller — any unconscious bias in marking due to poorly constructed English was off the table.
The papers ranged from quite good to excellent. How much of it was due to AI?
I have no idea.
Even the part of the paper that was supposed to show novel thinking and abstract thought could well have been constructed by AI. If a user knows how to prompt, these systems are capable of astonishing stuff.
So I took the position that even if this was so, the students certainly learned a great deal in the process of guiding the AI to a completed product. And isn’t that the goal?
Turns out that it is not so simple.
I told this story to someone who has a deep and abiding interest (and expertise) in pedagogy. Her name is Pamela Scully, and she is the Samuel Candler Dobbs Professor (Women’s, Gender and Sexuality Studies, and African Studies) at Emory University in Atlanta, and also ex-provost and ex-president of the Association for Undergraduate Education at Research Universities. In other words, she has thought deeply about these things for a long time.
I told her my external examiner/AI story. She agreed that the students were being educated as they plumbed the new AI tools to produce a paper that satisfied the rubric.
A thin gruel
But she made a comment that brought me up short (I paraphrase here):
If you untether writing from the education process, you have torn the heart out of the process. It is the painstaking (and sometimes painful) process of composing, writing and editing a paper that emulsifies the learning process. Separating the research from the writing threatens to produce a thin gruel of partial and perhaps even quickly forgotten understanding.
AI is too new for there to have been any data collected to prove or disprove this, but I suspect she is right. Many people I know (not only in academia) have quietly surrendered the act of writing, finding AI to be better suited to the task. And AI is increasingly becoming better at the task of uncovering, arguing and articulating entirely new perspectives — a skill once the province of the best students. But surely something is being lost.
Scully is a proponent of a style of pedagogy called “slow teaching”, and she calls herself a “slow professor”, a term taken from the title of the book The Slow Professor: Challenging the Culture of Speed in the Academy, by Maggie Berg and Barbara K Seeber. Scully’s courses at Emory are carefully designed to give her students the time for full expression and unhurried contemplation and (as importantly) full reflection on “how” they are learning.
In addition, she aggressively insulates her students from the many technologies that now pepper US campuses (called Learning Management Systems — LMSs) — online schedules, calendar popups, submission deadlines, constant reminders and myriad prods and pokes delivered to their cellphones all day. They just make students anxious, she avers; the pressure of these apps constantly demanding attention and obeisance from students (many of whom are already overworked and stressed) impedes the learning process and, worse, results in students not having any fun at all. Which is the other point of going to university.
In the name of efficiency, employing AI speeds up a process that should not be hurried. I would argue that between the extremes of books spread out on desks and the hours-long battle of reading, writing and editing, and the instant gratification of AI, there must be some happy medium that actually works, but no one seems to know what it is, at least not yet.
There is a postscript here. After I had marked all the papers, I fed the rubric into an AI and asked it to do the marking again. It did a fantastic job — making perceptive comments, uncovering weaknesses, praising strengths and recommending remedial actions. Mercifully, its marks were very close to mine.
It just took me longer. And I’ll bet I was more satisfied than ChatGPT was when I finished. DM