Week 7: Weekly trends – the haters
Through Real411, Media Monitoring Africa has been tracking disinformation trends on digital platforms since the end of March 2020. For Real411, we are focusing on combating anti-vaccine content, and we are also gearing up for local elections. Hate speech angers, hurts, dehumanises and insults, and it has real potential to cause public harm, but just what do we mean by the term, and what should be done about it? We will take a quick look at the definitions of the major platforms and then the definition we apply.
Unlike its rather ambiguous position on disinformation (calling it false news for example, as we highlighted here), Facebook offers one of the better approaches to hate speech, offering nuance and guidelines on its exceptions. It’s pretty long for a short definition but it highlights some of the complexity from the word go:
“We define hate speech as a direct attack against people on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation. We consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants and asylum seekers from the most severe attacks, though we do allow commentary and criticism of immigration policies. Similarly, we provide some protections for characteristics such as occupation, when they’re referenced along with a protected characteristic.” Facebook Community Standards
It’s a useful definition as it includes elements of attacks – which include violent and/or dehumanising speech – and it also includes the notion of protected characteristics. Protected characteristics commonly include race, ethnicity, disability, sexual orientation, sex and serious disease. In other words, Facebook doesn’t allow attacks against protected characteristics. It then sets out other groups based on criteria including age, occupation in some instances, and also refugees, migrants and asylum seekers.
This sounds clear, and for some of the obvious forms of hate speech, it works well. Calling for all black people to be washed into the sea, for instance, is a relatively clear-cut example. It is, however, in the millions of other posts that the complexities arise. Facebook seeks to address the complexities by dividing the different forms of hate speech into tiers. Perhaps more useful is that it addresses the importance of context, intent, and mistakes. So not only does it offer a tiered system for assessment, it also offers other critical factors taken into consideration for each post.
YouTube’s approach is seemingly simpler, and unsurprisingly it also has a two-minute video explaining its policy on hate speech. It offers some good basic examples.
“Hate speech is not allowed on YouTube. We remove content promoting violence or hatred against individuals or groups based on any of the following attributes […]” (YouTube Hate Speech Policy)
On the surface, it looks pretty similar to the Facebook policy, in that it expressly prohibits hate speech. It also defines hate speech with an element of attacks and includes protected groups. But there are a few differences. For YouTube, the content needs to promote violence or hatred against the protected group.
This is significantly narrower than the Facebook definition of attack: “as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation”. For YouTube, the content must be promoting violence or hatred.
Someone saying ‘I hate fxxking White people because they smell’ might not meet the threshold for YouTube, as there is no element of promotion, it is merely an expression of an opinion. But the same content might meet the threshold on Facebook as it is a clear expression of disgust.
Interestingly, YouTube has expanded the protected characteristics to include “victims of a major violent event” and “veteran status”. In this context it might be an expression like: “More people need to see and understand the victims of Marikana were trouble-seeking evil scumbags and they deserved what they got.” Because victims of a major violent event are a protected group for YouTube, this may meet the threshold of its test, but not necessarily Facebook’s, as they fall outside the protected groups Facebook identifies.
Twitter avoids the term hate speech and refers instead to “hateful conduct” and it applies the following definition:
“Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.” (Twitter Hateful Conduct Policy)
Again, the common elements of attack and categories of protected characteristics are included as core elements. Twitter also includes the element of promoting violence, but adds “directly attack” or “threaten”. So even though it is called hateful conduct, the element of hatred common to the other definitions thus far is not explicitly included in the Twitter definition. Twitter also interestingly adds another element of incitement and harm for accounts that have that as their primary purpose. But wait, there’s more! Twitter, unlike the others, draws a distinction between kinds of content, from text to images and display names.
“Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.” (Twitter Hateful Conduct Policy)
Twitter has also expanded its definition further in a blog,
“In July 2019, we expanded our rules against hateful conduct to include language that dehumanises others on the basis of religion or caste. In March 2020, we expanded the rule to include language that dehumanises on the basis of age, disability, or disease. Today, we are further expanding our hateful conduct policy to prohibit language that dehumanises people on the basis of race, ethnicity, or national origin.” (Twitter Blog)
So, it would seem that the Twitter definition, although a little unclear, in addition to attacks and protected groups, also includes elements of incitement, hateful imagery and dehumanising language. A person might be able to say “I hate the Jews” – it is anti-Semitic and racist, but is it an attack? Some may argue it is, but on the surface it might not be. At the same time, the use of a swastika would be seen as hateful imagery and would not be allowed, on the surface. Of course, each example requires detailed context and scrutiny, but we use them to highlight some of the potential differences in how the different definitions may come up with divergent results.
TikTok also prohibits hate speech and it also interestingly does not explicitly include the element of hate in its definition:
“TikTok is a diverse and inclusive community that has no tolerance for discrimination. We do not permit content that contains hate speech or involves hateful behavior and we remove it from our platform. We suspend or ban accounts that engage in hate speech violations or which are associated with hate speech off the TikTok platform.
Attacks on the basis of protected attributes
We define hate speech or behaviour as content that attacks, threatens, incites violence against, or otherwise dehumanises an individual or a group on the basis of the following protected attributes […]” (TikTok Community Guidelines)
TikTok directly refers to incitement, and it includes “behaviour” as well as content. It also includes elements of attack and protected characteristics. To a degree then, it seems TikTok has tried to cover the possible gap by including both behaviour and content.
The United Nations (UN) in its approach to hate speech defines it as:
“The United Nations Strategy defines hate speech as ‘any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor’.”
The UN includes the element of attack and broadens it to include any behaviour or writing that attacks or uses pejorative or discriminatory language. The UN definition does not use the term “protected” for its groups; it spells out the more common group characteristics but also includes “other identity factor”, so it is potentially very broad.
Then we get to South Africa, where we find ourselves awaiting the Constitutional Court ruling on how we will define it. The case was argued in 2020, see here (if any of the issues have jingled your bells, trust us and watch the livestream – some utterly brilliant, fascinating and deeply thoughtful inputs on the complexity of hate speech and its definition, better than any current streaming series). The issues are fascinating and require their own analysis. For current purposes, what’s so interesting is that the case seeks to find a balance between combating hate speech and protecting freedom of expression: allowing speech we might find offensive and repugnant, but drawing a line at what we won’t tolerate as a society.
Given the complexity, it should come as no surprise that the definition we apply as Real411 draws on our Constitution and our Equality law. The criteria we use are:
“How does the DCC (Digital Complaints Committee) determine hate speech from free speech?”
As and when the Constitutional Court hands down judgment we will amend the criteria, but what should be immediately apparent is that in South Africa we include the elements of “advocacy of hatred” and “incitement to cause harm”, and it must be based on one of the protected characteristics. We have also included a clear carve-out to allow for content that is bona fide engagement in artistic creativity, reporting, and/or information in the public interest. In other words, the threshold that we have in South Africa is significantly higher than it is for almost all the platforms.
It is an area, therefore, where the platforms are more likely to remove content before it meets our requirements. For something to be hate speech currently, it needs to be advocacy of hatred – so it cannot just be nastiness or an expression, it must also be incitement and it must also meet the threshold for causing harm; finally, it must be on the basis of a protected ground. Our threshold is also significantly greater than that envisaged by the UN. So hating people who work in call centres, or for City Power as a group, and calling on others to blow them up, might be threatening and could result in other legal action – but neither call centre agents nor City Power workers are a protected group, so it likely wouldn’t count as hate speech.
Why does all this matter? It matters because once again we see how big platforms demonstrate significantly different approaches to an issue that cuts to the heart of freedom of expression and is so often intertwined with disinformation. It matters because again we see how big platforms regulate content using divergent approaches and do not take into account local legislation and context. Through Real411, we not only have a common approach in line with our law, we also have a common standard being applied across the platforms. This means that we won’t have the same content having different outcomes on different platforms.
If you come across content on social media that could potentially be hate speech, incitement, harassment or disinformation, report it to Real411. To make it even simpler, download the Real411 mobile app. DM
William Bird is director of Media Monitoring Africa (MMA) and Thandi Smith heads the Policy & Quality Programme at MMA.
Daily Maverick © All rights reserved