Shocked? Sure. Surprised? Not one bit in today’s rapidly evolving digital jungle. That was the prevailing reaction from delegates at an information integrity summit in Stellenbosch to the recent Grok AI “undressing” scandal, which laid bare the reckless, harmful abuses of unchecked generative AI.
Kick-starting a three-year collaborative project to strengthen information integrity in the Global South, the Canadian International Development Research Centre (IDRC) – Centre for Information Integrity in Africa (CINIA) partners summit convened a multidisciplinary mix of activists, researchers, policy advisers, AI tech experts, media practitioners, scientists and academics to offer a range of perspectives and consider solutions to what keeps them up at night.
The creation and public sharing of at least 1.8 million nonconsensual sexualised images of women and children over nine unchecked days, thanks to Elon Musk’s Grok, provided a stark example of escalating risks. This incident, alongside new efforts by various countries to tighten systems against platforms’ societal harms, underscores the relentless emergence of new capabilities and tools: addictive and manipulative algorithms, digital authoritarianism, deepfakes, AI hallucinations, AI slop, hate amplification, etc.
Describing the Grok “undressing” as an example of harm within a much larger “harmscape”, Jonathan Shock, associate professor and interim director of the University of Cape Town’s AI Initiative, said the personal harm caused by the AI image generator stemmed from “very little oversight from a governmental level on what is allowed on these platforms. These are massively powerful systems that the vast majority of people, at least in some places, have access to. It’s incredibly worrying that it is so easy to produce information that can cause so much harm, at such a pace. It’s an arms race. We don’t have systems at the moment that can keep up with the pace at which misinformation and disinformation can be produced.”
Shock, a researcher, scientist and academic within the Department of Mathematics and Applied Mathematics, said generative AI’s capabilities require rigorous testing to prevent things from “going wrong”. Existing in-house platform safety testing procedures were inadequate and opaque. In the same way that products are safety-tested and licensed through a legal framework, independent oversight was required to determine the parameters of what a digital company could put out into the world, including early-warning systems. “This should be no different,” said Shock, whose multifaceted work ranges from testing AI systems’ capabilities to mitigate harms, to medical imaging – one of the positive ways that AI can “potentially solve really important scientific questions”.
Describing the Grok “undressing” as a “moment within a continuing moment”, associate professor with the Tayarisha Centre for Digital Governance at Wits University’s School of Governance, Geci Karuri-Sebina, who advises the public sector on tech governance, said that we are living in a constantly evolving, technologically driven environment that requires continual adaptability and effective scanning systems to help sense, anticipate and prevent harm.
However, she also cautioned against “retreating into a corner of fear, being too worried about what’s going to hit us next”. This would cede our agency, along with insights into how generative AI could be used positively to help tackle important societal problems and create meaningful opportunities.
During the summit, the collective impact of TBGBV (technology-based gender-based violence) was examined, with discussants exploring issues such as:
- The aggravated human harm caused by platforms that enable multiple attackers, rather than a single individual, to “create a whirlwind that hammers you” repeatedly; and
- The need to determine the causal connection between online and offline violence.
Sharing her perspective from the summit sidelines, Dianna H English, director of programmes at the Canada-based Centre for International Governance Innovation, said that “to allow tools like Grok to run wild” without criminal consequence indicated a “significant regression in terms of accountability for the platforms and surrounding political and social permission structures.
“It indicates the culture of impunity for online harms at this point.
“Increasingly, we are seeing the generation of sexualised images, the nonconsensual use of people’s images and sexualised content as a form of sexual assault,” said English, who leads the CIGI’s programme on Africa’s Digital Transformation.
Agreeing that it was a sign of regression, assistant professor at Chulalongkorn University, Janjira Sombatpoonsiri, whose book Death by a Thousand Cuts: Digital Repression and Pro-Democracy Movements in Thailand is soon to be released, said that among the things that kept her awake at night was the “marriage of political power, political elites and governments with tech power”.
The Grok “undressing” indicated the latest stage of US-dominated tech monopolisation, where platforms are driven by one thing: monetisation, she said. Between 2016 and US President Donald Trump’s second term there were systematic efforts to address digital harms, with platforms feeling some pressure. “Now, with the erosion of the liberal order and human rights protection, gender equality, any gains made in regulations and content moderation are now out the window.”
It’s not a technical fix, it’s a lived condition
“Embodied data” was another frequently referenced concept at the explorative summit. Sharing a feminist perspective, independent researcher and consultant Anja Kovacs was among many delegates who expressed frustration with the continued application of “outdated 20th century concepts” to make sense of today’s digital ecosystem.
Elaborating to CINIA later, Kovacs explained that the Grok “undressing” controversy, for instance, was generally still being viewed as a data privacy transgression, rather than sexual assault. “From an embodied data perspective, this is really a massive sexual assault on a large scale, happening in full public view without there being general public outrage. There was anger, there was discontent, and they had to shut it down in the end, but really, what should have happened is people should have been arrested, and struck down on day one; there should have been massive public outrage. Yet there isn’t. We treat it as just information; we find it uncomfortable, but we let it pass,” said Kovacs, a senior fellow at Research ICT Africa.
During the five-day summit, which was convened by CINIA director Herman Wasserman, a pertinent just-published article was circulated in which World Wide Web founder and computer scientist Tim Berners-Lee used terms such as “commercialised, extractive, surveillance-heavy and optimised for nastiness” to describe harmful encroachments on his 1989 creation. Academically driven and intended to be free for everyone, the web has been overpowered by addictive social media sites.
Launching what he calls the “battle for the soul of the Web”, Berners-Lee told The Guardian it was “not too late to fix it”, and collaboration and compassion could prevail. But when it comes to unchecked generative AI, he cautioned that the “horse is bolting”, with guardrails urgently needed before it is too late.
Agreeing about developing user-first internet alternatives, Olivia Bandeira, a researcher at Intervozes, a freedom of speech and digital rights collective in Brazil, said: “There is an alternative, we can construct another internet, this current model does not serve society, and is producing harm. It is not enough to just regulate platforms. We need another model, and we can construct that. Universities and social movements can do that; the internet basically began at universities; we can do it again,” said Bandeira, a member of the Association of Progressive Communications’ Resistance and Rebellion collaborative movement set up in response to online attacks on environmental defenders in Brazil, Mexico, Kenya and the Philippines.
Summing up the work ahead, English concluded: “If we say that it’s impossible, we remove the accountability for the choices that can be made by all actors to make our shared digital space safer… It’s only impossible if we don’t make it possible.” DM
The jargon:
Embodied Data: Re-centring the body in debates about data and AI to ensure that technological systems expand rather than restrict the possibilities for human dignity, autonomy and diversity – Anja Kovacs, see here;
AI harmscapes: Dynamic landscapes of cumulative, interconnected socioecological harms that are produced or amplified by AI systems and infrastructures, becoming normalised through power, data extraction and uneven governance, but also containing spaces for community-led repair and harm reduction – Hübschle, A. and Shearing, C. (in press). Artificial Intelligence Harmscapes: Rethinking Governance in the Age of Cognitive Tools. Contemporary Justice Review: Issues in Criminal, Social, and Restorative Justice;
TBGBV: Technology-based gender-based violence – the weaponisation of technology and online platforms to attack women and children based on their gender. It has no limits or geographical boundaries and can start online and escalate to physical spaces, or vice versa – UN, see here.
This article was first published by the Centre for Information Integrity. Watch a video clip here with the centre’s director and principal investigator of the project, Herman Wasserman, who convened the meeting.
Illustrative image: Phone screen displaying the Grok app and logo. (Photo: Anna Barclay / Getty Images) | Elon Musk. (Photo: Win McNamee / Getty Images)