Warning: this article contains disturbing content about sexual violence and child sexual abuse material.
As 2025 ended, social media platform X was suddenly flooded with nonconsensual images of people stripped down to bikinis. It began as a “put her in a bikini” challenge on Grok, the platform’s generative AI tool, and rapidly grew to about three million sexualised images, as many as one created per minute, including about 23,000 of children in various states of undress.
At its zenith, Grok was processing close to 200,000 requests in one day.
As the challenge progressed and users realised what Grok could and would do when instructed, the images, shared publicly and available to all X users regardless of their age, showed bikinis becoming progressively smaller and more transparent as requests were issued for them to be made of dental floss or string. Instructions became increasingly sexualised, violent and extreme. Grok was asked to make women bend over to show their genitals, to depict them tied up, gagged, mutilated, covered in blood and bruises and wearing a forced smile, and to show women and children drenched in “sticky donut glaze” denoting semen.
To quote one pro-Grok acolyte, “if you post publicly, you are fair game to be e-raped”.
Those victimised through the AI app included Ashley St Clair, erstwhile partner of Elon Musk and mother of one of his children. When she raised the alarm, Grok users fought back, making and posting what she has referred to as “revenge porn”, extreme images of her, including undressing a photo of her as a child and creating an image of her bent over wearing only a dental floss bikini. Disturbingly, her baby son’s backpack is still visible in this image.
The retaliation strategy was also used on British X user Evie who publicly reported xAI for a nonconsensual image of her dressed in a bikini and covered in baby oil. As retribution, multiple images of her were created and posted on X, the worst being one of her mostly naked with a string around her waist, her eyes rolled back and a ball gag in her mouth.
Australian activists Collective Shout, who are campaigning for Grok to be removed from the app stores, were also targeted, their images transformed into humiliating naked or sexualised ones. And an adult survivor of child sexual abuse suffered extreme violence online when she spoke out after Grok users employed the technology to strip a fully clothed photo of her as a three-year-old.
Although Musk maintained that Grok would not create child sexual abuse material (CSAM), the Internet Watch Foundation found “sexualised and topless imagery of girls” on the dark web which users said they had created using Grok. This imagery was reportedly then turned into the worst form of child sexual abuse material (category A: penetrative sexual activity, bestiality or sadism). In its analysis of the images Grok had generated, AI Forensics found that users had requested that minors be put in erotic positions and that sexual fluids be depicted on their bodies. Grok complied with those requests.
The IWF expressed concern about the speed and ease with which the images were created. It fears that tools like Grok are “bringing sexual AI imagery of children into the mainstream”.
The sharing of pornography is not against X’s community standards; even before Musk’s acquisition of the company and his update of company policy to allow adult content, about 13% of Twitter content was pornographic in nature. But those standards require pornography to be consensual and clearly labelled, a bar these explicitly nonconsensual, easily accessible images failed to meet.
The company also purportedly has zero tolerance for CSAM. On 1 January 2026, an X user asked about its safeguarding failure: “I like X a lot… but, proposing a feature that surfaces people in bikinis without properly preventing it from working on children is wildly irresponsible. This is one of the very first things you’re supposed to check.”
An admin responded: “Thanks for flagging. The team is looking into further tightening our gaurdrails” (sic).
Even Grok acknowledged that it has created and facilitated CSAM: “We appreciate you raising this. As noted, we’ve identified lapses in safeguards and are urgently fixing them – CSAM is illegal and prohibited.”
What both failed to mention is that Musk himself removed the guardrails on Grok’s image and video generator at the end of 2025, citing “free speech” and the need to push back against censorship as his motivation. CNN reported that this was despite senior staffers raising concerns about inappropriate content, and that the upshot was that three senior managers from X’s already tiny safeguarding team resigned shortly thereafter.
Musk, who openly supported the introduction of “spicy” mode on xAI in August 2025 to drive traffic to the product and who, according to stats shared in the New York Times, boasted after the Grok furore about how many new users it had generated, also fuelled the bikini trend by posting a photo of himself in a bikini on New Year’s Day.
Musk’s participation made light of the impact on victims, who said they felt horrified and dehumanised by the deepfakes. Collective Shout’s Melinda Tankard Reist agreed that it was a violation and identity theft, and that it was humiliating. She said that her “response was visceral”.
Grok’s deepfakes attracted the attention of several governments, including France, which was already investigating X for misinformation, hate speech and fraudulent extraction of data, and the UK, which used the debacle to fast-track legislation, passed last year but inexplicably never implemented, that makes it illegal to create nonconsensual deepfake pornographic images.
In response to the international backlash, X, which was initially resistant to curtailing Grok’s functionality, issued a notification on 15 January (more than two weeks after the challenge began) that making and editing images via Grok on X would now be a paid service rather than a freely available one, to “add an extra layer of protection”.
It further stated that it would use geoblocking to ensure that X users would no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it was illegal.
Most big tech companies wouldn’t be brazen enough to reduce their tactics to writing. But X’s statement confirmed a common practice. Despite having the power to prevent and stop harm, companies don’t prioritise the best interests of users, but instead do the minimum to comply. The troubling implication is that X’s guardrails will only be implemented in countries where creating nonconsensual deepfakes is illegal.
This substantiates burgeoning fears that, as developed countries implement more stringent restrictions on Big Tech and impose significant fines on those that don’t comply, companies will not respond by setting up universal guardrails to protect vulnerable users. Instead, the drive for profit will see them pull back from those markets and direct harmful content and traffic to countries that are not protected by legislation.
While Grok has made nonconsensual deepfake pornography mainstream, “nudifying” apps are not new; a host of others have been freely available since 2019.
In December 2025, Collective Shout conducted a groundbreaking study called “Turning Women and Girls into porn” to determine what a teenage boy with a smartphone and a photo of a (fully clothed) female classmate could do with these apps.
They tested 20 nudifying, deepfake and AI girlfriend apps and discovered that many are free (although some have paid functionality that is even more extreme) and that there is no age verification. Most nudifying technology only works on the bodies of women and girls, and not only can it undress them in seconds, but it can also place them in myriad sexual positions, including performing anal or oral sex, undergoing sexual torture or playing out a sexual fantasy, anything from stepsister to schoolgirl to Disney princess.
Of the 20 websites, only two attempted to prevent users from creating CSAM (and both allowed workarounds), with some apps generating images of what appeared to be prepubescent children.
Galleries contain myriad images, including of seemingly underage girls, bound, blindfolded, being penetrated by multiple men or machines, covered in semen and gang-raped.
Users can copyright and sell images, and some apps allow users to upload them to public forums, compounding the trauma for the victims. Moreover, some apps are gamified with “invite and earn” options where users can unlock additional functionality if they invite their friends to the app. They can also earn money off the purchases of those they’ve invited.
The sites, which typically indemnify themselves by asking users to tick a box confirming they will not use the tech for anything illegal, promise to protect the identity of users, just not, to quote the report, “the women and girls they undress”.
The technology is not only easy to access but also discoverable through search engines. Some apps are downloadable from the app stores, and others are advertised on social media, most notably on Instagram, Telegram and X. These same platforms are then used to share the images once created.
Globally, this technology is resulting in a flood of deepfake CSAM onto pornographic sites. In October 2023, the IWF found that 20,254 AI-generated CSAM images had been posted to just one pornography site on the dark web in a single month. A year later, the number had increased and the technology had improved, making the images almost indistinguishable from imagery of in-person abuse. There had also been a 32% increase in the number of images falling into category A of sexual offences against children, indicating that predators were able to use the technology more effectively to create the most harmful forms of CSAM.
Ease of access means that children themselves are often the initial perpetrators of harm. On 7 February 2026, Emma Sadlier from the Digital Law Company posted that in the past week she had received 18 requests for help from principals, parents and children regarding deepfake pornographic images. She reported that these cases were nearly identical: multiple images were created using numerous girls’ social media photos; users generated naked images and/or placed girls in sexualised situations mimicking pornography; and these were then shared privately with other users or on public platforms.
Sadlier stressed that if the victim is under 18, this constitutes the creation of child pornography (called CSAM elsewhere in the world), because the law doesn’t distinguish between fake and real images, as well as distribution of child pornography, nonconsensual distribution of intimate images, nonconsensual distribution of private sexual images and crimen injuria (harm to the dignity of the child), and may result in civil claims for defamation or for damaging the child’s reputation.
She explained that if the child creating and distributing the image is over 14, they have full criminal capacity and can be arrested and imprisoned; 12- and 13-year-olds would be assessed for capacity; and while under-12s do not have criminal capacity, there would still be consequences, including from their schools. Further, children over the age of seven can be sued for harm in civil cases.
According to Marita Rademeyer, a psychologist from Jelly Beanz who works with children exhibiting harmful sexual behaviours, ease of access is resulting in children as young as 10 being referred to her for creating “nudified” images of other children. Rademeyer says that younger children often use the technology because they are curious about bodies or because they think it is a funny prank. Many can’t understand the ramifications of their actions.
Rademeyer describes how bemused a Grade 4 client was when he was suspended from school and only reinstated on condition that he receive counselling. He couldn’t understand why everyone was so upset because, as he said to her, “but Tannie, it wasn’t a real photo”.
At 10, he cannot appreciate the humiliation and sense of violation of his victim, or that girls view nudified images as an act of sexual violence.
He isn’t alone. Even older boys seemingly struggle to appreciate the impact of deepfake porn on victims. But as Rademeyer emphasises, the consequences can be devastating. Citing unpublished research that corroborates a 2023 Internet Matters study, she says that for many children a deepfake image can be more traumatic than a real one: the lack of consent increases the harm. She also notes that children have taken our safety messaging seriously; they know that nudes shared publicly can haunt you forever, which makes the loss of control even more violating.
In her April 2025 report, “One day this could happen to me: Children, nudification tools and sexually explicit deepfakes”, UK Children’s Commissioner Dame Rachel de Souza confirms that now, “girls fear nudification technology in much the same way as they would fear the threat of sexual assault in public places”.
Most link it to misogyny and dominance on the part of men and boys. One 18-year-old girl explained that, in concert with influencers like Andrew Tate and an increasingly violent pornography industry, nudifying apps are being used to force girls into dating and sexual acts. In addition to such manipulation, for many the images are used as revenge porn after break-ups or to punish a girl for rejecting a boy.
Even boys subconsciously associate generative AI nudes with hate. Asked if he feared anyone making a nude image of him, a 17-year-old boy responded: “I don’t think anyone hates me enough.”
De Souza, who notes a link between deepfake nudes and depression, post-traumatic stress disorder and suicidal ideation, is concerned about how many girls report withdrawing from their online lives because they are terrified of having their images turned into sexual content.
She says that unlike other apps that may be seen to have some benefit, “there is no good reason for tools that create naked images of children. They have no value in a society where we value the safety and sanctity of childhood. Their existence is a scandal.” She then quotes a 16-year-old girl interviewed during her research who asked: “Do you know what the purpose of a deepfake is? Because I don’t see any positives.”
The commissioner’s position, shared by many activists, is that if the creation of CSAM is illegal, the technology used to create the images should also be illegal. And further, that “any individual or organisation motivated by the idea of making profit by creating a tool that supports the exploitation of a child must be held to account.”
Dr Federica Fedorczyk, an expert in AI ethics, agrees. She argues: “The Grok case is only the tip of the iceberg of a wider… ecosystem of online misogyny and abuse. As major tech companies increasingly move towards the creation and dissemination of sexual chatbots – from the announced launch of ‘ChatGPT Erotica’ to Meta’s romantic chatbots that have engaged in sexual conversations with minors – criminalising the outcome alone is no longer enough.”
To remedy the situation, Collective Shout want the UN to support a global ban on all bespoke nudification apps, along with their removal from app stores, and the global criminalisation of image-based sexual abuse, including through deepfakes. Further, the UK Children’s Commissioner has called for the providers of generative AI and open-source GenAI to face legal consequences if their products are used on children.
Similarly, Fedorczyk is arguing for strict and enforceable limits on material related to child sexual abuse. In the interim, activists are also calling for enforceable age-gating for nudifying apps so they cannot be used by children.
In South Africa, the government is likely to abdicate responsibility, placing the onus on children and families to keep themselves safe. This will require some honest conversations at home and at school, especially with boys of all ages who have access to the internet, about respect, consent and empathy as well as the emotional and legal consequences of deepfake pornography involving their peers, friends and even their female teachers.
Above all, we need men and boys who are prepared to combat misogyny to undo the narrative perpetuated by Musk and others that nudification is harmless fun, a joke, or worse, that a woman or girl’s presence on the internet makes her fair game to violate. DM

Illustrative Image: Children. | Phone screen pixelation. (Image: iStock) | (By Daniella Lee Ming Yesca)