The AI mirror — when technology reflects our worst selves

The question before us is no longer whether AI can undress us. It already can. The real question is whether our laws, institutions and moral courage will rise swiftly and decisively to meet a challenge that demands leadership.

When South African digital creator Mihlali Ndamase publicly instructed an artificial intelligence chatbot not to manipulate her image, it was more than an influencer moment – it was a warning flare.

Her message, echoed by other women globally, came amid a disturbing wave of users prompting AI tools to undress, sexualise or fabricate intimate images of real women and children without consent.

That a public figure felt compelled to assert her boundaries to a machine exposes a deeper structural issue: the absence of sufficiently embedded safeguards in technologies that increasingly mediate identity, power and dignity.

For those of us entrusted with shaping digital governance and public policy, these incidents are not merely alarming; they are instructive, revealing precisely where regulation, institutional readiness and ethical oversight must now be strengthened.

Artificial intelligence, often described as neutral or objective, is in reality a mirror – reflecting the values, biases and impulses of the societies that build and deploy it.

Technical and moral challenge

When AI systems are repeatedly used to sexualise, violate and dehumanise women and children, the technology is not malfunctioning; it is reproducing the darker contours of our social order. This makes AI not just a technical challenge, but a moral one.

The growing misuse of generative AI to create sexualised and explicit imagery is no longer theoretical. Platforms such as Grok have demonstrated how rapidly AI systems can be repurposed for harm when design, governance and accountability do not keep pace with innovation.

These tools are not simply generating offensive content; they are enabling a new and deeply corrosive form of image-based sexual abuse, one that is scalable, automated and often irreversible. The technology requires no consent, no participation and frequently no awareness from those whose likenesses are exploited. All it needs is access.

This reality demands clarity. The challenge before us is not innovation gone awry, but innovation deployed without sufficiently enforceable ethical guardrails.

It is important to acknowledge that the global community is not starting from zero. Institutions such as the Future of Life Institute (FLI), working in collaboration with the United Nations and other multilateral bodies, have played a critical role in elevating the risks associated with artificial intelligence and advocating for responsible governance frameworks.

Across the world, researchers, policymakers and civil society actors are engaging seriously with AI ethics, safety and human rights. However, the pace, coherence and enforceability of policy responses remain uneven, particularly in areas where AI intersects with bodily autonomy, consent and the protection of women and children.

This unevenness matters because technology does not wait for consensus. In the absence of firm rules, markets, incentives and social behaviour fill the vacuum, often in ways that entrench inequality rather than dismantle it.

While societies continue the long and difficult work of confronting sexism, gender-based violence and abuse in the physical world, we cannot afford to reproduce those same failures in the digital one. At the very least, technology should not be allowed to outpace the values we claim to uphold.

Fragmented approaches

This is most evident in the regulation of AI-generated sexual content. Internationally, legal approaches remain fragmented. Some jurisdictions criminalise the distribution of non-consensual deepfake imagery, but remain silent on its creation.

Others place the burden of proof on victims to demonstrate intent or reputational damage, an onerous requirement in an environment where harm can be instantaneous and anonymous. Platforms operate across borders, while enforcement remains nationally bounded, creating regulatory gaps that are routinely exploited.

South Africa faces similar challenges. While existing legal instruments address image-based abuse, harassment and child sexual exploitation, they were not drafted for an era in which synthetic media can convincingly fabricate sexual harm without any physical act taking place.

AI-generated sexual imagery often falls into a legal grey zone: recognised as harmful in principle, but insufficiently defined in law.

Institutional capacity within law enforcement and the judiciary to address such cases remains limited, and there is currently no comprehensive statutory framework compelling AI developers and platforms to integrate consent, traceability and harm prevention into system design. These gaps are not merely technical. They are cultural and structural.

AI technologies reflect and amplify existing social patterns. The disproportionate targeting of women and girls in AI-generated sexual content mirrors long-standing forms of online harassment and gender-based abuse. What has changed is scale and accessibility. Acts that once required specialised skills can now be performed in seconds by anyone with a prompt. Abuse has been automated.

This is the uncomfortable truth of the AI mirror: it does not invent cruelty, but it industrialises it. And unless deliberately constrained, it will encode into systems the very injustices societies are still struggling to undo.

The implications for children are especially grave. AI-generated child sexual abuse material, even where no child was physically involved in its creation, undermines child protection regimes and normalises the sexualisation of children, creating material that can be used for grooming, coercion and psychological harm. Treating such content as a lesser offence because it is “synthetic” or “fake” fundamentally misunderstands the nature of harm in the digital age.

Digital governance

This is why existing governance efforts, while important, must now be strengthened in critical ways by those of us responsible for digital policy and oversight.

First, AI governance must shift from voluntary ethical principles to enforceable legal obligations. Consent should be a foundational requirement for any AI system capable of generating realistic human imagery. Risk assessments, transparency requirements and independent audits should be mandatory, not discretionary.

Second, legislation must explicitly criminalise the creation of non-consensual AI-generated sexual imagery, not only its distribution. Prevention must be embedded upstream, rather than relying on reactive takedown mechanisms after harm has already occurred.

Third, platform accountability must be substantive. Safety controls should not be optional features or monetised restrictions. They must be built into system architecture and subject to regulatory scrutiny, with meaningful consequences for non-compliance.

Fourth, institutional capacity must be strengthened, particularly within law enforcement and the judiciary. Training, technical expertise and cross-border cooperation are essential if laws are to be effectively applied.

Real-world consequences

Finally, public education and digital literacy must be prioritised. A technologically advanced society requires not only innovation but also ethical maturity: an understanding that digital actions carry real-world consequences.

Technology is one of the few domains where societies still have the opportunity to design the future intentionally. Even as we confront the inherited injustices of the past, we retain the responsibility and the power to decide what values are encoded into the systems that will govern the next generation.

The intervention by Mihlali Ndamase should not have been necessary. No individual should be required to publicly negotiate their dignity with an algorithm. If artificial intelligence is to serve the public good, consent, accountability and human dignity must be treated as core design and governance principles, not afterthoughts.

The question before us is no longer whether AI can undress us. It already can. The real question is whether our laws, institutions and moral courage will rise swiftly and decisively to meet a challenge that demands leadership, coordination and accountability from those of us entrusted with governing the digital future, and whether we are prepared to build, through technology, the kind of world we claim to want. DM

Mbali Hlophe is the chairperson of the Gauteng Provincial Legislature’s Committee on e-Government, Policy and Research Development.
