MACHINE UNLEARNING OP-ED

Turning off AI detection software is the right call for SA universities

Universities across South Africa are abandoning problematic artificial intelligence detection tools that have created a climate of suspicion.
ChatGPT. (Photo: Unsplash)

The University of Cape Town’s recently announced decision to disable Turnitin’s AI detection feature is to be welcomed, and other universities would do well to follow suit. This move signals a growing recognition that AI detection software does more harm than good.

The problems with Turnitin’s AI detector extend far beyond technical glitches. The software’s notorious tendency towards false positives has created an atmosphere where students live in constant fear of being wrongly accused of academic dishonesty.

Unlike their American counterparts, South African students rarely pursue legal action against universities, but this should not be mistaken for acceptance of unfair treatment.

A system built on flawed logic

As Rebecca Davis has pointed out in Daily Maverick, detection tools fail. The fundamental issue lies in how these detection systems operate. Turnitin’s AI detector doesn’t identify digital fingerprints that definitively prove AI use. Instead, it searches for stylistic patterns associated with AI-generated text.

The software might flag work as likely to be AI-generated simply because the student used em-dashes or terms such as “delve into” or “crucial” – writing preferences that have nothing to do with artificial intelligence.

This approach has led to deeply troubling situations. Students report receiving accusatory emails from professors suggesting significant portions of their original work were AI-generated.

One student described receiving such an email indicating that Turnitin had flagged 30% of her text as likely to be AI-generated, followed by demands for proof of originality: multiple drafts, version history from Google Docs, or reports from other AI detection services like GPTZero.

Other academics have endorsed the use of services like Grammarly Authorship or Turnitin Clarity for students to prove their work is their own.

The burden of proof has been reversed: students are guilty until proven innocent, a principle that would be considered unjust in any legal system and is pedagogically abhorrent in an educational context.

The psychological impact cannot be overstated; students describe feeling anxious about every assignment, second-guessing their natural writing styles, and living under a cloud of suspicion despite having done nothing wrong.

The absurdity exposed

The unreliability of these systems becomes comically apparent when examined closely. The student mentioned above paid $19 to access GPTZero, another AI detection service, hoping to clear her name. The results were revealing: the programs flagged different portions of her original work as AI-generated, with only partial overlap between their accusations.

Even more telling, both systems flagged the professor’s own assignment questions as AI-generated, though the Turnitin software flagged Question 2 while GPTZero flagged Question 4. Did the professor use ChatGPT to write one of the questions, both, or neither? The software provides no answers.

This inconsistency exposes the arbitrary nature of AI detection. If two leading systems cannot agree on what constitutes AI-generated text, and both flag the professor’s own questions as suspicious, how can any institution justify using these tools to make academic integrity decisions?

Gaming the system

While South African universities have been fortunate to avoid the litigation that has plagued American institutions, the experiences across the Atlantic serve as a stark warning.

A number of US universities have abandoned Turnitin after facing lawsuits from students falsely accused of using AI. Turnitin’s terms and conditions conveniently absolve the company of responsibility for these false accusations, leaving universities to face the legal and reputational consequences alone.

The contrast with Turnitin’s similarity detection tool is important. While that feature has its own problems, chiefly that academics treat the similarity percentage as a measure of how much plagiarism has occurred, at least it provides transparent, visible comparisons that students can review and make sense of.

The AI detection feature operates as a black box, producing reports visible only to faculty members, creating an inherently opaque system.

Undermining educational relationships

Perhaps most damaging is how AI detection transforms the fundamental relationship between educators and students. When academics become primarily focused on catching potential cheaters, the pedagogical mission suffers.

Education is inherently relational, built on trust, guidance and collaborative learning. AI detection software makes this dynamic adversarial, casting educators as judges, AI detection as the evidence and students as potential criminals.

The lack of transparency compounds this problem. Students cannot see the AI detection reports that are being used against them, cannot understand the reasoning behind the accusations and cannot meaningfully defend themselves against algorithmic judgements.

This violates basic principles of fairness and due process that should govern any academic integrity system.

A path forward

UCT’s decision to disable Turnitin’s AI detector represents more than just abandoning a problematic tool. It signals a commitment to preserving the educational relationship and maintaining trust in our universities. Other institutions following suit demonstrate that the South African higher education sector is willing to prioritise pedagogical principles over technological convenience.

This doesn’t mean ignoring the challenges that AI presents to academic integrity. Rather, it suggests focusing on educational approaches that help students understand appropriate AI use, develop critical thinking skills and cultivate a personal relationship with knowledge.

The goal should be advocacy for deep learning and meaningful engagement with coursework, not policing student behaviour through unreliable technology.

Detection should give way to education, suspicion to support and surveillance to guidance. When we position students as already guilty, we shouldn’t be surprised that they respond by trying to outwit our systems rather than engaging with the deeper questions about learning and integrity that AI raises.

The anxiety reported by students who feel constantly watched and judged represents a failure of educational technology to serve educational goals. When tools designed to protect academic integrity instead undermine student wellbeing and the trust essential to learning, they have lost their purpose.

UCT and other South African universities deserve recognition for prioritising student welfare and educational relationships over the false security of flawed detection software. Their decision sends a clear message: technology should serve education, not the other way around.

As more institutions grapple with AI’s impact on higher education, South Africa’s approach offers a valuable model: one that chooses trust over surveillance, education over detection and relationships over algorithms.

In an era of rapid technological change, this commitment to fundamental educational values provides a steady foundation for navigating uncertainty.

The future of academic integrity lies not in better detection software, but in better education about integrity itself. DM

Sioux McKenna is professor of higher education studies at Rhodes University.

Neil Kramm is an educational technology specialist in the Centre of Higher Education Research, Teaching and Learning (CHERTL) at Rhodes University. He is currently completing his PhD on AI and its influence on assessment in higher education.

Comments (7)

Peter Geddes Jul 26, 2025, 03:09 PM

Interesting article about a huge problem. Full of lofty ideals and considerations, but maybe a bit leftie/woke. My experiences as a student fifty years ago at an almost all-white university led me to the conclusion that some students will stop at nothing to pass their courses.

Frans Flippo Jul 27, 2025, 10:33 AM

Abandoning “automatic AI detection software” doesn’t mean abandoning calling out plagiarism or non-original work submitted as being original. It just means that the professors will need to do these checks themselves again, and in a traceable way. That only seems fair, both to honest students and to those trying to take shortcuts.

stan the man Jul 27, 2025, 09:52 AM

...... and policies should be developed to focus on ethical and responsible use in order to maintain academic integrity.

Geoff Krige Jul 27, 2025, 10:16 AM

Fascinating analysis. Thank you Sioux and Neil. A follow-up article about how universities aim to teach integrity will be enlightening. In our context where too many senior appointments have been based on fake qualifications, too many fortunes have been built on corruption, and too many political and business leaders rely on lies, students unfortunately do not have good role models for integrity.

Rod MacLeod Jul 27, 2025, 12:01 PM

Is the answer to ditch tests for plagiarism and/or AI generated works, or to improve the detection models? Forensic sciences are there to call out the cheats and liars. Look at how we have moved from eye-witness evidence, to fingerprint technology, to blood and hair analysis, to DNA testing, to relational DNA in identifying criminals. Why not a similar developmental process for calling out academic cheats?

Betsels R Jul 27, 2025, 02:31 PM

Is the problem the tool or the user? Students are discouraged from disproportionate use of AI writing tools if they know their work will be tested. Where we've needed to reach out to students with problematic AI reports, 90% immediately admit they are in the wrong. Being asked to outline the academic research process supports the goal of developing accountable researchers. What is however needed are clear institutional guidelines on using the tool fairly, consistently and transparently.

Johan Buys Jul 27, 2025, 03:39 PM

If students have free use of AI in non-exam circumstances, why not just let students use AI under exam conditions? It is farcical - third year students can make nice money doing assignments for first year students.

Alan Salmon Jul 27, 2025, 08:29 PM

"The future of academic integrity lies not in better detection software, but in better education about integrity itself." She must be joking - copying and plagiarism have been common at universities for years, and AI has made it worse. Turning off detection is a big mistake.