
PROMPT PREDATORS

Musk’s Frankenstein is coming for your digital dignity

X’s AI chatbot Grok has facilitated a surge in sexualised content on the platform, specifically the mass production of explicit images created without the subjects’ permission. Its creator, Elon Musk, dismisses this ‘digital stripping’ as a free speech issue.

Illustrative Image: Grok. (Photo: Anna Barclay / Getty Images) | Phone screen. (Image: Freepik) | (By Daniella Lee Ming Yesca)

In the neon-lit corridors of the internet, the line between a profile picture and a pornographic deepfake can be erased with a single prompt.

Grok, the artificial intelligence (AI) chatbot integrated into Elon Musk’s social media platform X, has turned the site into a factory for non-consensual sexual imagery. It is a digital Frankenstein’s monster, stitched together from the data of millions and capable of ‘stripping’ users at the request of any voyeur with a subscription.

A 10-minute review by Reuters tallied 102 attempts by users to edit photos of people into revealing outfits. In at least 21 cases, Grok fully complied, rendering the subjects in dental-floss-style or translucent bikinis.

What makes Grok particularly dangerous compared with traditional editing tools is its public nature. Media lawyer Emma Sadleir of the Digital Law Company told Daily Maverick that while users once privately relied on software like Photoshop before deciding to share an altered image, Grok publishes the content automatically for the world to see.

Read more: Elon Musk’s Grok is a risky experiment in AI content moderation

“Anybody can go to the Grok account and watch in real time that this is happening,” Sadleir said. “They’re really putting out a platform for bad, I think. And they don’t care.”

Musk’s monetisation of abuse

In a display of on-brand hubris, X’s billionaire owner has responded with a masterclass in deflection.

Musk has dismissed the outcry as an excuse for censorship, reposting arguments that millions of other apps can also put people in bikinis. Instead of disabling the feature, X limited the image-generation tools to paid premium subscribers.

Elon Musk’s responses to X messages mentioning that other AI platforms are also capable of altering images. (Photos: Screenshots/X)

Sadleir pointed to a “mega double standard” in Grok’s survival. If any other developer released a similar ‘bikinification’ tool that shared Grok’s 13-plus age rating, it wouldn’t be allowed in the app store.

In the UK, the media regulator Ofcom has launched an investigation into whether X violated its duty to protect users from illegal content. If X is found to have broken the law, Ofcom has the power to fine the company up to 10% of its qualifying worldwide revenue.

Read more: Grok is Elon Musk’s new sassy, foul-mouthed AI. But who exactly is it made for?

The investigation follows a letter from the country’s Culture, Media and Sport Committee chair, Dame Caroline Dinenage, who slammed X’s decision to hide the tool behind a paywall and asked that Ofcom provide an enforcement update by Friday, 16 January.

“The action X has reportedly taken today is also concerning, given it fails to engage with the seriousness of the issue. It appears not to stop the creation of such images but to turn it into a paid-for service,” Dinenage wrote.

Image-based violence in SA

Sadleir describes these digital violations as “image-based violence”. In South Africa, where gender-based violence remains a national crisis, the weaponisation of a woman’s likeness is a digital extension of the same systemic abuse.

Minister in the Presidency Khumbudzo Ntshavheni recently condemned the photoshopping of her own images on X into vulgar content.

“In a country that is battling with a scourge of gender-based violence, the continued use of sexual images to tarnish women is deplorable at the least. The cowards responsible for the photoshop represent the worst sexists and gender-based violence perpetrators,” reads a statement issued by the Presidency on Saturday, 10 January.

Read more: Taylor Swift deepfakes: new technologies have long been weaponised against women

The issue is also increasingly present in schools, where harmful content is frequently trivialised by young pupils. “There are a lot of kids who think it’s funny,” Sadleir said, noting that in South Africa, children reach full criminal capacity at 14.

The cost of digital forgery

In May 2025, the Pietermaritzburg Regional Court handed down a sentence that should serve as a warning to anyone using tools like Grok to ‘nudify’ others.

Scebi Nene (36) was sentenced to five years of direct imprisonment after pleading guilty to cyber forgery and the disclosure of intimate images.

Between 2022 and 2023, Nene superimposed the faces of President Cyril Ramaphosa, former police minister Bheki Cele and former police commissioner Khehla Sithole on to sexually explicit photographs. He then spread the images to create a false impression of authenticity.

The legality of it all

South Africa’s legal framework is surprisingly well-equipped to fight back, provided the courts can keep pace with the algorithms and the conglomerates that control them.

Victims in South Africa can lean on several pillars of law.

⚖️ The Cybercrimes Act: Criminalises the non-consensual sharing of intimate images.
⚖️ The Protection of Personal Information Act (Popia): Protects personal data from being processed unfairly or unlawfully.
⚖️ The Films and Publications Amendment Act: Covers the distribution of private sexual content and carries fines of up to R300,000 or four years in prison.
⚖️ Crimen injuria: A common law offence used to prosecute the serious impairment of a person’s dignity.
⚖️ The Protection from Harassment Act: Offers the quickest relief via a protection order, which can even be granted against anonymous perpetrators.
⚖️ Copyright law: Provides a layer of protection, as the original photos used to prompt AI often belong to the victim.

If you are targeted, Sadleir’s advice is to move fast. “Take evidence as quickly as possible,” she advised. However, she warned that screenshots can now be faked with AI. “If possible, use a secondary device and film the screen. Film the screen scrolling, wiggle it around a bit, so it’s obvious (that your recording is) not AI.”

Read more: Deepfakes in South Africa: Protecting your image online is the key to fighting them

Pulling the plug

While Musk decries the backlash as censorship, countries like Indonesia and Malaysia have blocked access to Grok.

“Sexually manipulating images of women and children is despicable and abhorrent,” said Liz Kendall, UK Secretary of State for Science, Innovation and Technology. “It is an insult and totally unacceptable for Grok to still allow this if you are willing to pay for it.”

A reply from AI chatbot Grok on X after users questioned its policies on altering images of women to show them wearing bikinis. (Photo: Screenshot/X)

While international pressure from the app store or foreign governments might be the most effective way to force change at X, South African victims have immediate options. Sadleir recommended the Protection from Harassment Act as the “quickest, cheapest court order” because it can be obtained at a magistrates’ court and can target anonymous accounts.

Grok is a mirror of its creator’s philosophy — a tool that prioritises “free speech” over the safety and dignity of victims. If Musk continues to let his Frankenstein roam the digital streets unchecked, he may find the world’s governments ready to pick up their torches and pitchforks to protect the villagers. DM
