
OP-ED

Disinformation in a time of Covid-19: What’s in a definition?


As the Covid-19 pandemic continues to wreak havoc in our country, those who seek to cause confusion, chaos and public harm have powerful tools of mis- and disinformation to do just that.

Through Real411, Media Monitoring Africa has been tracking disinformation trends on digital platforms since the end of March 2020. Last week we had the good fortune to talk about disinformation at the South African Internet Governance Forum, as well as during a workshop hosted by CST and DFRLab. One of the central issues raised was just how tricky it is to deal with disinformation across the various platforms, not just because of its different forms but because of how the term is or isn’t used, and what actions can be taken. The big platforms have also been strongly criticised for their role in the spread of disinformation. This week we look at how the big social media platforms define disinformation and see if we can make any sense of it.

Each of the platforms usually has blogs and press releases on dealing with disinformation, but for our purposes we are looking at the content of their policies. Both Facebook and Twitter, for example, have taken strong action against anti-vaccine disinformation, with Facebook deciding to remove it and Twitter introducing a new strike system. Most users, though, are unlikely to go hunting for the latest policy views.

Facebook

Our first challenge with Facebook is that they generally avoid the term “disinformation” and instead talk about “false news”. Their policy has this to say:

  1. Policy rationale

“Reducing the spread of false news on Facebook is a responsibility that we take seriously. We also recognise that this is a challenging and sensitive issue. We want to help people stay informed without stifling productive public discourse. There is also a fine line between false news and satire or opinion. For these reasons, we don’t remove false news from Facebook, but instead significantly reduce its distribution by showing it lower in the News Feed. Learn more about our work to reduce the spread of false news here.” (Facebook Community Standards)

As can be seen, Facebook’s use of “false news” effectively means they are unable to draw a distinction between satire, opinion and false news. It’s a poor definition in that it is circular and doesn’t help you understand what false news really is. Many journalists, including our own legendary Joe Thloloe, have said of the term, “if it’s fake, it can’t be news”. There might be a difference between “false” and “fake”, but the link to news, where we understand news to be about credible journalism, is a concern. Be that as it may, what is clear is that Facebook’s approach is to limit the spread of false news and disincentivise it rather than remove it; until early March they had no default to simply remove such content, though, as noted above, they may now remove false news about vaccines.

Let’s look at WhatsApp.

You might assume that because WhatsApp is owned by Facebook it has the same, or a similar, policy. You would be wrong. One of the key results that comes up for the WhatsApp policy on disinformation is this blog piece on tips to help prevent the spread of rumours and “fake news”. Aside from the challenge of using the term “fake news”, the tips are helpful. It is tempting, given that WhatsApp is end-to-end encrypted and that it is very difficult to remove any content from the platform, to say it is fine for them not to have a policy on disinformation. A general reading of the terms and conditions didn’t reveal any mention of “fake news” or disinformation. We did, however, find this under the “Acceptable use of our services” section:

“Legal and Acceptable Use. You must access and use our Services only for legal, authorized, and acceptable purposes. You will not use (or assist others in using) our Services in ways that: (a) violate, misappropriate, or infringe the rights of WhatsApp, our users, or others, including privacy, publicity, intellectual property, or other proprietary rights; (b) are illegal, obscene, defamatory, threatening, intimidating, harassing, hateful, racially, or ethnically offensive, or instigate or encourage conduct that would be illegal, or otherwise inappropriate, including promoting violent crimes; (c) involve publishing falsehoods, misrepresentations, or misleading statements; (d) impersonate someone; (e) involve sending illegal or impermissible communications such as bulk messaging, auto-messaging, auto-dialing, and the like; or (f) involve any non-personal use of our Services unless otherwise authorized by us.” (WhatsApp Terms of Service, our emphasis in text)

While it isn’t clear what, beyond disabling your account, they might do, it is interesting that their threshold for such a violation is pretty low. All that is needed is a misleading statement: if it is brought to WhatsApp’s attention, they could remove not just the offending post but your account. If, for example, we were to send a message to a team member we don’t like saying “team building is cancelled”, and the rest of us then met and had a great session, the message would be both false and misleading. Account removal on that basis seems unlikely, but the terms don’t suggest otherwise. Curiously, key elements are missing, including potential harm, and there doesn’t appear to be a three-strike system of the kind other platforms have adopted.

Google and YouTube

These two are linked, not just because both are owned by Alphabet, but because they have many similarities; a Google search for Google’s policy on disinformation takes you to the YouTube policy. They also have documents like these that seek to bring commonality and clarity to the approach taken by Google. What is interesting is that misinformation in the YouTube Community Guidelines appears under the label “Covid Misinformation Policy”. While this gives users a clear idea of where YouTube stands on Covid misinformation, it is not clear about other kinds of misinformation, or about disinformation in particular. This is what they say:

“YouTube doesn’t allow content about Covid-19 that poses a serious risk of egregious harm.

“YouTube doesn’t allow content that spreads medical misinformation that contradicts local health authorities’ or the World Health Organisation’s [WHO] medical information about Covid-19. This is limited to content that contradicts WHO or local health authorities’ guidance on:

  • Treatment
  • Prevention
  • Diagnostic
  • Transmission
  • Social distancing and self isolation guidelines
  • The existence of Covid-19” (YouTube Community Guidelines)

While no definition is overtly stated, the shift away from “fake news” is to be welcomed, and YouTube offers significantly greater detail on what they mean and what kind of information will not be tolerated on the platform. YouTube also lists these guidelines for misinformation more broadly.

“What policies exist to fight misinformation on YouTube?

Several policies in our Community Guidelines are directly applicable to misinformation.

“Our guidelines against deceptive practices include tough policies against users that misrepresent themselves or who engage in other deceptive practices. This includes deceptive use of manipulated media [e.g. “deep fakes”] which may pose serious risks of harm. We also work to protect elections from attacks and interference, including focusing on combating political influence operations.

“We also have a policy against impersonation. Accounts seeking to spread misinformation by misrepresenting who they are via impersonation are clearly against our policies and will be removed.

“And finally, our hate speech policy prohibits content that denies well-documented, major violent events took place.” (YouTube Community Guidelines Misinformation)

Google offers this up as a definition:

“[T]here is something objectively problematic and harmful to our users when malicious actors attempt to deceive them. It is one thing to be wrong about an issue. It is another to purposefully disseminate information one knows to be inaccurate with the hope that others believe it is true or to create discord in society.

“We refer to these deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web as ‘disinformation’.” (How Google Fights Disinformation)

YouTube users benefit from greater clarity and refinement, which makes it easier to know what is and isn’t acceptable. What is useful is that they provide examples of each kind of misinformation, and include a carve-out for educational, documentary, scientific and/or artistic purposes. They are also clear that if you violate their guidelines the content will be removed, and there is a three-strike system after which your channel may be terminated. We can also determine that for Google, disinformation is about deliberate efforts to deceive and mislead. Interestingly, the element of harm is not mentioned.

Twitter

Twitter refers to misleading content and generally avoids the terms mis- and disinformation. They address the challenge slightly differently by looking at what they term “civic integrity”. (We note that Google and Facebook also address election misinformation.)

“You may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process. In addition, we may label and reduce the visibility of Tweets containing false or misleading information about civic processes in order to provide additional context.

“The public conversation occurring on Twitter is never more important than during elections and other civic events. Any attempts to undermine the integrity of our service is antithetical to our fundamental rights and undermines the core tenets of freedom of expression, the value upon which our company is based.” (Twitter Civic Integrity Policy)

Like Google, Twitter includes the issue of intention in how they address disinformation, but they frame it by saying they will act against misleading information about how to participate in a civic process, or against information that seeks to suppress and intimidate, for example.

Twitter has a Covid-19 misleading information policy. The core elements of this policy are:

“You may not use Twitter’s services to share false or misleading information about Covid-19 which may lead to harm.

“Even as scientific understanding of the Covid-19 pandemic continues to develop, we’ve observed the emergence of persistent conspiracy theories, alarmist rhetoric unfounded in research or credible reporting, and a wide range of unsubstantiated rumors, which left uncontextualized can prevent the public from making informed decisions regarding their health, and puts individuals, families and communities at risk.

“Content that is demonstrably false or misleading and may lead to significant risk of harm [such as increased exposure to the virus, or adverse effects on public health systems] may not be shared on Twitter. This includes sharing content that may mislead people about the nature of the Covid-19 virus; the efficacy and/or safety of preventative measures, treatments, or other precautions to mitigate or treat the disease; official regulations, restrictions, or exemptions pertaining to health advisories; or the prevalence of the virus or risk of infection or death associated with Covid-19. In addition, we may label tweets which share misleading information about Covid-19 to reduce their spread and provide additional context.” (Twitter Covid-19 misleading information policy)

In the Covid-19 policy Twitter introduces the element of harm: there needs to be a significant risk of harm. Many of the elements here can be seen in the Google policy. Twitter also notes that its actions can include labelling, removal of tweets and account suspension. Twitter has a “five strikes and you are out” system.

The final platform we look at is TikTok, which does have a policy on misinformation.

Misinformation

“Misinformation is defined as content that is inaccurate or false. While we encourage our community to have respectful conversations about subjects that matter to them, we do not permit misinformation that causes harm to individuals, our community, or the larger public regardless of intent.

Do not post, upload, stream, or share:

  • Misinformation that incites hate or prejudice
  • Misinformation related to emergencies that induces panic
  • Medical misinformation that can cause harm to an individual’s physical health
  • Content that misleads community members about elections or other civic processes
  • Conspiratorial content that attacks a specific protected group or includes a violent call to action, or denies a violent or tragic event occurred
  • Digital forgeries (synthetic media or manipulated media) that mislead users by distorting the truth of events and cause harm to the subject of the video, other persons, or society.

Do not:

  • Engage in coordinated inauthentic behaviours (such as the creation of accounts) to exert influence and sway public opinion while misleading individuals and our community about the account’s identity, location, or purpose.” (TikTok misinformation policy)

Curiously, the TikTok definition seems very broad, but they qualify it by adding the notion of harm, as when content incites hatred or induces panic. For TikTok the main action seems to be removing content that violates the guidelines. It may also remove or ban offending accounts, and report transgressors to legal authorities. Interestingly, of all the platforms reviewed, TikTok seems to be one of the few that offers an appeals process to those who have had content removed.

So where does this leave us? Our take is that it highlights one reason why Real411 fulfils such a critical role. Why? Because a quick review of the different platforms reveals that some content may be allowed on some platforms and not on others, which means differing standards are being applied to the same thing. We saw a particularly stark example of this last year with Trump, where the same content was removed from Twitter but allowed to remain on Facebook. Aside from that, it is clear that platforms have very different baseline approaches: some refer to harm, others include intention, and some have clear carve-outs for satire, scientific education or documentary use. Another issue to flag is jurisdiction. What may be “more acceptable” in one country may not be in another. These policies set broad international standards and are not tailored to country context, which can often affect how disinformation is defined.

It is clear that the platforms do, to varying degrees, take the issues seriously. However, unless we carry out a specific analysis, we cannot be sure that all the definitions and interpretations meet the standards of our laws, rights and our Constitution.

The definitions used in the Real411 system for digital harms have been carefully designed by legal experts, drawing on our own Constitution and existing laws. The criteria we use for reviewing complaints can be found as you submit a complaint. The definition we use for disinformation on Real411 is “false, inaccurate or misleading information designed, presented and promoted online to cause public harm”. We have added the element of public harm, as opposed to just harm, because something may be harmful to an individual and not to the public. Our definition is drawn from work carried out in the EU and other jurisdictions, as well as work by Unesco and our own electoral laws. In reviewing complaints, reviewers apply a three-part test to determine whether something is disinformation or not. Real411 also has an appeals process. Perhaps the most valuable aspect of Real411 is that relevant content is assessed on the same criteria regardless of platform, and in a manner in line with our laws and Constitution.
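As an aside for technically minded readers, below is a minimal sketch of how such a three-part test could be modelled as a simple checklist. The article does not spell out the exact parts of the test, so the three checks are inferred from the definition quoted above; the names and structure are our own illustration, not Real411’s actual criteria or software.

# Illustrative sketch only: the three checks below are inferred from the
# published Real411 definition ("false, inaccurate or misleading information
# designed, presented and promoted online to cause public harm"). They are
# an assumption for illustration, not Real411's actual review criteria.
from dataclasses import dataclass

@dataclass
class Complaint:
    is_false_inaccurate_or_misleading: bool  # part 1: content is false, inaccurate or misleading
    is_designed_and_promoted_online: bool    # part 2: deliberately designed, presented and promoted online
    causes_public_harm: bool                 # part 3: public harm, not merely harm to an individual

def is_disinformation(c: Complaint) -> bool:
    # All three parts of the test must be met for content to qualify.
    return (c.is_false_inaccurate_or_misleading
            and c.is_designed_and_promoted_online
            and c.causes_public_harm)

# A false claim that harms only one individual fails the public-harm part,
# so it would not be classified as disinformation under this definition.
print(is_disinformation(Complaint(True, True, False)))  # prints: False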

Remember, if you come across content on social media that could potentially be disinformation, report it to Real411. To make it even simpler, download the Real411 mobile app!

Download the Real411 app on Google Play Store or Apple App Store. DM
