Facebook, X, TikTok and YouTube give thumbs-up to violent, sexualised hate speech targeting women — report
A new investigation by Global Witness and the Legal Resources Centre has revealed that major social media platforms all approved adverts containing extreme and violent misogynistic hate speech against women journalists in South Africa.
Adverts featuring extreme and violent misogynistic hate speech against women journalists in South Africa have all been approved by Facebook, X, TikTok, and YouTube. This is according to a new investigation by Global Witness (GW) and the South African public interest law firm, Legal Resources Centre (LRC).
How the investigation worked
“You deserve a bullet in the head”, “You are a thing, a bitch, a lying bitch.”
These are real examples of the hate speech attacks women journalists face on social media, simply for doing their jobs. Such attacks are part of a global trend in which online violence against women journalists spills over into offline violence.
These attacks occur despite large social media corporations having hate speech policies designed to protect users. The investigation by GW and LRC set out to test how effective social media corporations were at enforcing their policies and detecting misogynistic hate speech on their platforms.
Instead of publishing the examples on the platforms as user content, GW and LRC submitted them to all four platforms in the form of adverts. This allowed them to schedule the adverts for future publication and record the platforms' approval decisions. After capturing these results, GW and LRC deleted all the adverts before they went live, so none appeared on the platforms.
The investigation submitted 10 adverts containing hate speech targeting women journalists, each in four languages (English, Afrikaans, Xhosa, and Zulu), for a total of 40 adverts per platform. The adverts were based on real-world examples of abuse received by women journalists. They were violent, sexualising, and dehumanising, referring to women as vermin, prostitutes, or psychopaths and calling for them to be beaten and killed.
Adverts approved, despite extreme content and policy breaches
All four platforms approved the vast majority of the adverts, despite their extreme content and the fact that they breached the platforms' own policies on hate speech.
The real-world examples of misogynistic hate speech were edited only to clean up language and grammar; none were coded or difficult to interpret, the text was illustrated with video footage, and all clearly violated the platforms' advertising policies.
All of the content fit the definitions of hate speech set out in the platforms' policies: it targeted women specifically and was violent and dehumanising, expressing inferiority, contempt, and disgust. Yet:
- Meta and TikTok approved all 40 ads within just 24 hours;
- YouTube also approved them all, although 21 of the 40 were flagged with an approved but ‘limited’ status, meaning they were still deemed appropriate for some audiences; and
- X/Twitter approved them all, aside from two English adverts, which had their publication ‘halted’ after GW and LRC conducted further tests into the platform’s approval process.
“Our tests show that social media corporations’ automated and AI-informed content moderation systems are not fit for purpose if even the most extreme and violent forms of hate speech are approved for publication, in clear violation of their own policies,” said GW and LRC.
Although new technologies are crucial for moderating at scale, they are not yet sophisticated enough to replace human moderators and fact-checkers, or to justify reduced investment in them, GW and LRC added.
Elections and social media
This investigation comes at a crucial time, with South Africa one of the 65+ countries due to go to the polls in 2024 in the biggest global election year so far this century.
“As we approach the biggest election year so far this century, the stakes have never been higher — protecting press freedom and the safety of journalists is essential to uphold the democratic process,” said Sherylle Dass, Regional Director at the LRC.
Safeguarding press freedom is essential to uphold the democratic process during this time and women journalists need to be able to carry out political reporting without fear of gendered reprisals online and offline.
Read more in Daily Maverick: Social media bosses must invest in guarding global elections against incitement of hate and violence
Dass expressed concerns that social media platforms appear to be neglecting to enforce their content moderation policies in global majority countries, in particular countries like South Africa which are often outside the international media spotlight.
“The platforms must act to properly resource content moderation and protect the rights and safety of users, wherever they are in the world, especially during critical election periods,” she said.
Women’s rights and media freedom threatened
“As a female journalist in South Africa, I have been targeted and abused online, simply for doing my job. This has taken a huge toll on me and my loved ones,” said Ferial Haffajee, Associate Editor of Daily Maverick and former Editor-at-large at HuffPost South Africa.
Haffajee said GW and LRC’s latest exposé illustrates that social media corporations are not practising what they preach, as they allow even the most extreme and violent forms of content to be published, risking complicity with perpetrators of online violence.
“After 29 years as a journalist, I should be bolder and more confident than ever but online hate and the threat of offline violence exhausts and terrifies me,” she said.
The abuse is not just from individuals but also from troll armies, making it impossible to counter through deleting and blocking alone, said Haffajee.
“Along with many other journalists, I have tried to use the social media platforms’ reporting mechanisms and even contacted the companies directly, but it is to no avail. They knowingly turn a blind eye while playing host to assaults on women’s rights and media freedom,” she said.
In 2021, Unesco published a report titled ‘The Chilling’, which found that 73% of the 901 women journalists interviewed across 125 countries had experienced online violence, with 20% reporting that they had also been attacked offline in connection with the online violence.
Black, Indigenous, Jewish, Arab, and lesbian women reported the highest rates of online violence and are at heightened risk because misogyny intersects with other forms of discrimination.
Hannah Sharpe, Digital Threats Campaigner at GW, said that women are under constant threat from misogynistic attacks online, and that the investigation shows platforms continue to enable and even profit from this hate speech.
“To protect women and minoritised communities, press freedom and democracy, together we have to challenge Big Tech’s predatory business model, in which billionaire social media CEOs are raking in huge sums through platforms designed to promote enraging, extreme and hateful content,” said Sharpe.
To achieve an online world that connects people rather than divides them, social media corporations need to build safety by design into their platforms. Governments need to bring forward balanced regulation grounded in human rights that holds platforms accountable, she said.
Platforms say mistakes happen occasionally
In response to the investigation, a Meta spokesperson said the adverts violate their policies and have been removed. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That’s why ads can be reviewed multiple times, including once they go live”.
A TikTok spokesperson said that hate has no place on TikTok and that their policies prohibit hate speech. While their auto-moderation technology correctly flagged the submitted adverts as potentially violating their policies, a second review by a human moderator incorrectly overrode that decision.
“Errors like this are the exception and we are continually refining our moderation systems and improving our moderator training,” said the spokesperson.
Moderation experts at TikTok speak more than 70 languages and dialects, including English, Afrikaans, Xhosa, and Zulu, and they have expanded their safety operations as the Africa-based TikTok community has grown.
“We are taking aggressive steps against persistent bad actors and, as part of our improved detection models, we are using network signals to find emerging trends and identify new violating accounts. As a result, users are seeing less violative content on the platform”.
Google and X/Twitter were approached for comment but did not respond. DM