South Africa

OP-ED

Disinformation in a time of Covid-19: Weekly Trends in South Africa

(Photo: Unsplash / Patrick Fore)

As the Covid-19 pandemic continues to wreak havoc in our country, those who seek to cause confusion, chaos and public harm have powerful tools of mis- and disinformation to do just that. This week, we look at another issue that is often used as a tool to help spread disinformation. It’s even more slippery a concept, but just as dangerous: Hate speech.

Week 7: Weekly trends – the haters

Through Real411, Media Monitoring Africa has been tracking disinformation trends on digital platforms since the end of March 2020. At Real411, we are currently focusing on combating anti-vaccine content, and we are also gearing up for the local elections. Hate speech angers, hurts, dehumanises and insults, and it has real potential to cause public harm, but just what do we mean by the term, and what should be done about it? We will take a quick look at the definitions of the major platforms and then at the definition we apply.

Facebook

Unlike its rather ambiguous position on disinformation (calling it false news, for example, as we highlighted here), Facebook offers one of the better approaches to hate speech, offering nuance and guidelines on its exceptions. It’s pretty long for a short definition, but it highlights some of the complexity from the word go:

“We define hate speech as a direct attack against people on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation. We consider age a protected characteristic when referenced along with another protected characteristic. We also protect refugees, migrants, immigrants and asylum seekers from the most severe attacks, though we do allow commentary and criticism of immigration policies. Similarly, we provide some protections for characteristics such as occupation, when they’re referenced along with a protected characteristic.” (Facebook Community Standards)

It’s a useful definition as it includes elements of attack – which include violent and/or dehumanising speech – and the notion of protected characteristics. Protected characteristics commonly include race, ethnicity, disability, sexual orientation, sex and serious disease. In other words, Facebook doesn’t allow attacks on the basis of protected characteristics. It then sets out other protections, based on criteria including age and, in some instances, occupation, and also covering refugees, migrants and asylum seekers.

This sounds clear, and for some of the obvious forms of hate speech, it works well. Calling for all black people to be washed into the sea, for instance, is a relatively clear-cut example. It is, however, in the millions of other posts that the complexities arise. Facebook seeks to address these by dividing the different forms of hate speech into tiers. Perhaps more useful is that it addresses the importance of context, intent and mistakes. So not only does it offer a tiered system for assessment, it also sets out other critical factors taken into consideration for each post.

YouTube

YouTube’s approach is seemingly simpler, and unsurprisingly the platform also has a two-minute video explaining its policy on hate speech. It offers some good basic examples:

“Hate speech is not allowed on YouTube. We remove content promoting violence or hatred against individuals or groups based on any of the following attributes:

  • Age
  • Caste
  • Disability
  • Ethnicity
  • Gender Identity and Expression
  • Nationality
  • Race
  • Immigration Status
  • Religion
  • Sex/Gender
  • Sexual Orientation
  • Victims of a major violent event and their kin
  • Veteran Status”

(YouTube Hate Speech Policy)

On the surface, it looks pretty similar to the Facebook policy, in that it expressly prohibits hate speech. It also defines hate speech with an element of attack and includes protected groups. But there are a few differences. For YouTube, the content needs to promote violence or hatred against the protected group.

This is significantly narrower than the Facebook definition of an attack as “violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation”. For YouTube, the content must actively promote violence or hatred.

Someone saying ‘I hate fxxking White people because they smell’ might not meet the threshold for YouTube, as there is no element of promotion – it is merely an expression of an opinion. But the same content might meet the threshold on Facebook, as it is a clear expression of disgust.

Interestingly, YouTube has expanded the protected groups to include “victims of a major violent event” and “veteran status”. In this context it might be an expression like: “More people need to see and understand the victims of Marikana were trouble-seeking evil scumbags and they deserved what they got.” As victims of a major violent event, a protected group on YouTube, this may meet the threshold of YouTube’s test, but not necessarily Facebook’s, as such victims are not among the protected groups Facebook identifies.
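To make the divergence concrete, here is a minimal sketch – our own simplification for illustration only, with hypothetical names throughout; it is not any platform’s actual moderation logic – of how the same post could fail a YouTube-style test (which requires promotion of violence or hatred) while meeting a Facebook-style test (where an expression of contempt or disgust counts as an attack):

```python
# Illustrative sketch only: a crude boolean model of the two definitions
# discussed above. Real moderation weighs context, intent and nuance;
# none of this is any platform's actual implementation.

from dataclasses import dataclass

@dataclass
class Post:
    targets_protected_characteristic: bool  # e.g. race, religion, sex
    promotes_violence_or_hatred: bool       # active promotion, not mere opinion
    expresses_contempt_or_disgust: bool     # expression without promotion

def youtube_style_violation(post: Post) -> bool:
    # The YouTube definition requires content *promoting* violence or
    # hatred against a protected group.
    return (post.targets_protected_characteristic
            and post.promotes_violence_or_hatred)

def facebook_style_violation(post: Post) -> bool:
    # The Facebook definition of an attack also covers expressions of
    # contempt, disgust or dismissal, so no promotion element is needed.
    attack = (post.promotes_violence_or_hatred
              or post.expresses_contempt_or_disgust)
    return post.targets_protected_characteristic and attack

# The example from the text: a racist expression of disgust, no promotion.
post = Post(targets_protected_characteristic=True,
            promotes_violence_or_hatred=False,
            expresses_contempt_or_disgust=True)

print(youtube_style_violation(post))   # False - no promotion element
print(facebook_style_violation(post))  # True - expression of disgust counts
```

The point of the sketch is simply that the same post can produce different outcomes depending on which elements a definition requires.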

Twitter

Twitter avoids the term hate speech, referring instead to “hateful conduct”, and applies the following definition:

“Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.” (Twitter Hateful Conduct Policy)

Again, the common elements of attack and categories of protected characteristics are included as core elements. Twitter also includes the element of promoting violence, but adds “directly attack” or “threaten”. So even though it is called hateful conduct, the element of hatred common to the other definitions thus far is not explicitly included in the Twitter definition. Twitter also, interestingly, adds a further element – inciting harm – for accounts that have that as their primary purpose. But wait, there’s more! Twitter, unlike the others, draws a distinction between kinds of content, from text to images and display names.

“Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.” (Twitter Hateful Conduct Policy)

Twitter has also expanded its definition further in a blog post:

“In July 2019, we expanded our rules against hateful conduct to include language that dehumanises others on the basis of religion or caste. In March 2020, we expanded the rule to include language that dehumanises on the basis of age, disability, or disease. Today, we are further expanding our hateful conduct policy to prohibit language that dehumanises people on the basis of race, ethnicity, or national origin.” (Twitter Blog)

So it would seem that the Twitter definition, although a little unclear, includes, in addition to attacks and protected groups, elements of incitement, hateful imagery and dehumanising language. A person might be able to say “I hate the Jews” – it is anti-Semitic and racist, but is it an attack? Some may argue it is, but on the surface it might not be. At the same time, the use of a swastika would, on the surface, be seen as hateful imagery and would not be allowed. Of course, each example requires detailed context and scrutiny, but we use them to highlight how the different definitions may produce divergent results.

TikTok

TikTok also prohibits hate speech and, interestingly, also does not explicitly include the element of hatred in its definition:

“TikTok is a diverse and inclusive community that has no tolerance for discrimination. We do not permit content that contains hate speech or involves hateful behavior and we remove it from our platform. We suspend or ban accounts that engage in hate speech violations or which are associated with hate speech off the TikTok platform.

Attacks on the basis of protected attributes

We define hate speech or behaviour as content that attacks, threatens, incites violence against, or otherwise dehumanises an individual or a group on the basis of the following protected attributes:

  • Race 
  • Ethnicity
  • National origin 
  • Religion
  • Caste 
  • Sexual orientation
  • Sex
  • Gender
  • Gender identity
  • Serious disease
  • Disability
  • Immigration status”

(TikTok Community Guidelines)

TikTok directly refers to incitement, and it includes “behaviour” as well as content. It also includes elements of attack and protected characteristics. To a degree then, it seems TikTok has tried to cover the possible gap by including both behaviour and content.

The United Nations

The United Nations (UN), in its approach to hate speech, defines it as:

“The United Nations Strategy defines hate speech as ‘any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor’.”

(United Nations Strategy and Plan of Action on Hate Speech: Detailed Guidance on implementation for United Nations Field Presences)

The UN includes the element of attack and broadens it to cover any behaviour or writing that attacks or uses pejorative or discriminatory language. The UN definition avoids the term “protected” and spells out the more common group characteristics, but then also includes “other identity factor”, so it is potentially very broad.

Then we get to South Africa, where we find ourselves awaiting the Constitutional Court’s ruling on how we will define it. The case was argued in 2020 – see here (if any of the issues have jingled your bells, trust us and watch the livestream: some utterly brilliant, fascinating and deeply thoughtful inputs on the complexity of hate speech and its definition, better than any current streaming series). The issues are fascinating and deserve their own analysis. For present purposes, what is so interesting is that the case seeks to find a balance between combating hate speech and protecting freedom of expression: allowing speech we might find offensive and repugnant, but drawing a line at what we won’t tolerate as a society.

Given the complexity, it should come as no surprise that the definition we apply at Real411 draws on our Constitution and our Equality law. The criteria we use are:

How does the DCC (Digital Complaints Committee) distinguish hate speech from free speech?

  1. “In order for the DCC to determine a complaint to be hate speech, as contemplated in terms of section 16(2) of the Constitution, the following elements must be met:
    1. There has been advocacy of hatred against another person;
    2. It is based on one or more prohibited grounds, including race, ethnicity, gender or religion;
    3. It constitutes incitement to cause harm; and
    4. It does not constitute bona fide engagement in artistic creativity, academic and scientific inquiry, fair and accurate reporting in the public interest or publication of any information, advertisement or notice in accordance with section 16 of the Constitution.” (Real411)

As and when the Constitutional Court hands down judgment we will amend the criteria, but what should be immediately apparent is that in South Africa we include the elements of “advocacy of hatred” and “incitement to cause harm”, and the speech must be based on one of the protected characteristics. We have also included a clear carve-out to allow for content that is bona fide engagement in artistic creativity, reporting and/or information in the public interest. In other words, the threshold in South Africa is significantly higher than it is for almost all the platforms.

It is an area, therefore, where the platforms are more likely to remove content before it meets our requirements. For something to be hate speech here, it needs to be advocacy of hatred – so it cannot just be nastiness or an expression of dislike – it must also constitute incitement, it must meet the threshold for causing harm, and it must be on the basis of a protected ground. Our threshold is also significantly higher than that envisaged by the UN. So hating people who work in call centres, or for City Power, as a group, and calling on others to blow them up, might be threatening and could result in other legal action – but neither call centre agents nor City Power workers are a protected group, so it likely wouldn’t count as hate speech.
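As a rough illustration – again our own simplification, with hypothetical names; the DCC’s actual assessment is a contextual human judgment, not a checklist – the four-element test above can be modelled as a conjunction with a carve-out:

```python
# Illustrative sketch of the Real411/section 16(2) four-element test.
# All names are our own; the real DCC assessment weighs context and
# cannot be reduced to booleans.

from dataclasses import dataclass

@dataclass
class Complaint:
    advocacy_of_hatred: bool        # element 1: advocacy of hatred
    prohibited_ground: bool         # element 2: race, ethnicity, gender, religion...
    incitement_to_cause_harm: bool  # element 3: incitement to cause harm
    bona_fide_exemption: bool       # element 4: art, academia, fair reporting...

def is_hate_speech(c: Complaint) -> bool:
    # All three positive elements must be present, and the bona fide
    # carve-out must not apply.
    return (c.advocacy_of_hatred
            and c.prohibited_ground
            and c.incitement_to_cause_harm
            and not c.bona_fide_exemption)

# The City Power example from the text: threatening and inciting, but
# the target is not a protected group, so the prohibited-ground element
# fails and the complaint is not hate speech under this test.
complaint = Complaint(advocacy_of_hatred=True,
                      prohibited_ground=False,
                      incitement_to_cause_harm=True,
                      bona_fide_exemption=False)
print(is_hate_speech(complaint))  # False
```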

Why does all this matter? It matters because once again we see how the big platforms take significantly different approaches to an issue that cuts to the heart of freedom of expression and is so often intertwined with disinformation – and because those divergent approaches do not take local legislation and context into account. Through Real411, we not only have a common approach in line with our law, we also have a common standard applied across the platforms. This means the same content won’t have different outcomes on different platforms.

If you come across content on social media that could potentially be hate speech, incitement, harassment or disinformation, report it to Real411. To make it even simpler, download the Real411 mobile app. DM

Download the Real411 App on Google Play Store or Apple App Store.

William Bird is director of Media Monitoring Africa (MMA) and Thandi Smith heads the Policy & Quality Programme at MMA.
