The recently announced University of Cape Town decision to disable Turnitin’s AI detection feature is to be welcomed – and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.
The problems with Turnitin’s AI detector extend far beyond technical glitches. The software’s notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty.
Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.
A system built on flawed logic
As Rebecca Davis has pointed out in Daily Maverick, detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin’s AI detector doesn’t identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns associated with AI-generated text.
The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as “delve into” or “crucial” – a writing preference that has nothing to do with artificial intelligence.
This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated.
One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero.
Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own.
The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context.
The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.
The absurdity exposed
The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, another AI detection service, hoping to clear her name. The results were revealing: the programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations.
Even more telling, both systems flagged the professor’s own assignment questions as AI-generated, though the Turnitin software flagged Question 2 while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers.
This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor’s own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?
Gaming the system
While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning.
A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin’s terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.
The contrast with Turnitin’s similarity detection tool is important. While that feature has its own problems, chiefly academics assuming that the similarity percentage indicates the amount of plagiarism, it at least provides transparent, visible comparisons that students can review and make sense of.
The AI detection feature operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.
Undermining educational relationships
Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers.
Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.
The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements.
This violates basic principles of fairness and due process that should govern any academic integrity system.
A path forward
UCT’s decision to disable Turnitin’s AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. Other institutions following suit demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.
This doesn’t mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge.
The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology.
Detection should give way to education, suspicion to support and surveillance to guidance. When we position students as already guilty, we shouldn’t be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises.
The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.
UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around.
As more institutions grapple with AI’s impact on higher education, South Africa’s approach offers a valuable model: one that chooses trust over surveillance, education over detection and relationships over algorithms.
In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty.
The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM
Sioux McKenna is professor of higher education studies at Rhodes University.
Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.