
WEB OF LIES (PART ONE)

Anatomy of a disinformation campaign: The who, what and why of deliberate falsehoods on Twitter

(Illustrative image | Sources: Rawpixel | Unsplash / Thiebaud Faix / Nsey Benajah / Noah Buscher / Dmitry Vechorko)

In recent years the UK’s Oxford Internet Institute has tracked the manipulation of public opinion online. Since 2018, South Africa has featured on a growing list of countries where social media is used to spread disinformation and computational propaganda.

Twitter is a prominent platform for social media manipulation in South Africa, the institute found. In this three-part series, fact-checking organisation Africa Check and the Atlantic Council’s Digital Forensic Research Lab (DFRLab) take a closer look at disinformation on Twitter in the country. Part One focuses on disinformation actors, their behaviour and content.

The people behind disinformation – deliberate falsehoods spread online – exploit an information ecosystem that “prioritises what is popular over what is true” to cause widespread harm.

The root of the problem is that political conversations take place on platforms built for viral advertising, says Renée DiResta, research manager at Stanford University’s Internet Observatory, a US programme that focuses on the abuse of social media.

“[Social media algorithms] will show you what you want to see, but they don’t have any kind of value judgement and this is where we see things like radicalisation beginning to be an increasing problem because the recommendation engine does not actually understand what it is suggesting.”

The foreign influence operations of countries like Russia, and increasingly China, loom large in discussions about online disinformation. But the threat of domestic disinformation is just as real.

The recently released 2020 edition of the Oxford Internet Institute’s Global Inventory of Organised Social Media Manipulation identified 77 countries where government or political party actors used disinformation on social media to manipulate public opinion. These disinformation campaigns are mostly run within the country, the researchers told Africa Check.

Twelve African countries are among the 77.

They include South Africa, where government agencies, politicians and political parties, private contractors, citizens and influencers are involved in social media manipulation. This ranges from attacking the opposition to trolling those who disagree into silence.

Examples are the ANC’s pre-election “boiler room”, which reportedly organised “party activists” to attack opposition leaders on Twitter, and the EFF’s Twitter “troll army”, which targeted journalists.

Amelie Henle, research assistant at the Oxford Internet Institute’s Computational Propaganda Research Project, says the Gupta-backed online influence operation, in which the now defunct British public relations firm Bell Pottinger had a hand, was the biggest example of disinformation the project had identified in South Africa in recent years.

Disinformation ultimately damages democracy. University of Washington Associate Professor Kate Starbird writes: “While a single disinformation campaign may have a specific objective – for instance, changing public opinion about a political candidate or policy – pervasive disinformation works at a more profound level to undermine democratic societies.”

A disinformation ABC

One way to look at online disinformation is to break it down into three “vectors”: manipulative actors, deceptive behaviour and harmful content.

The three are “often intertwined”, explains Camille François, chief innovation officer at network analysis company Graphika.

But each campaign may have a different primary vector.

For example, a manipulative actor might distribute “uplifting, empowering” content, but the way it is distributed – the behaviour – makes it disinformation.

“We’ve seen this in many information operations: the first set of posts in the first months is often just putting out this uplifting, engaging, kind of feel-good content, because you want to amass followers and create an audience around those accounts,” says François. “And then once you have an audience… you can start ‘weaponising’ these accounts and use them for more divisive and more political content.”

Dr Danil Mikhailov, executive director of the data-science-for-good platform data.org, describes this as investing time capital to acquire social capital, as people like and follow your content. Search engines and social media algorithms then amplify that content, earning the cultural capital – recognition – that is used to influence communities.

Manipulative actors

Disinformation actors design their campaigns to hide their identities and intentions, writes François.

Manipulative actors include the creators of sock-puppet accounts, which use false identities to spread or boost disinformation, and trolls who bully those who get in the way of its spread.

You might have encountered a troll when questioning the veracity of a tweet. According to a report on information disorder, what trolls do better than bots (automated accounts that mimic human behaviour) is launch personal attacks to silence those who question whether something is true.

The report – by Dr Claire Wardle, co-founder of the anti-misinformation non-profit First Draft, and media researcher Hossein Derakhshan – says the actors in a disinformation campaign don’t necessarily share motives. “For example, the motivations of the mastermind who ‘creates’ a state-sponsored disinformation campaign are very different from those of the low-paid ‘trolls’ tasked with turning the campaign’s themes into specific posts.”

Possible reasons for creating and spreading mis- and disinformation include:

  • Money – pushing traffic to false-news websites for advertising income;
  • The desire to connect with an online “tribe”, such as fellow supporters of a political party or a cause; and
  • Wanting to influence public opinion by, for example, discrediting a political opponent.

The money motive was at work when a South African municipal employee posed as a racist white woman on Twitter in 2020 to drive traffic to his websites.

Seeking to influence public opinion featured prominently in the Radical Economic Transformation (RET) disinformation campaign on Twitter in South Africa in 2016 and 2017.

The African Network of Centers for Investigative Reporting (ANCIR) found that the RET network set out to undermine the institutions that ran South Africa’s economy, pushed the term “white monopoly capital” to divert attention from State Capture, and attacked critics of the Gupta family and former president Jacob Zuma.

ANCIR analysed 200,247 of the RET network’s tweets. A full 98% of these were retweets, showing how a network of fake accounts was used to amplify messages “to give the illusion that the content they are sharing resonates with a wider group”.

Typically, a fake account would tweet something, which was then boosted by more fake accounts. Prominent – and real – Twitter users who were tagged in some of these tweets created a “bridge” between the network of fake accounts and the rest of Twitter.
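
The retweet-heavy pattern ANCIR describes is simple to measure. Below is a minimal Python sketch that computes the retweet share of a set of tweets and lists the accounts being boosted. It assumes tweets in Twitter’s v1.1 API format, where a retweet carries a “retweeted_status” field; the function and its threshold of ten top accounts are illustrative, not ANCIR’s actual methodology.

    from collections import Counter

    def amplification_profile(tweets):
        """Summarise how retweet-heavy a set of tweets is, and which
        source accounts are being boosted. `tweets` is a list of dicts
        in Twitter v1.1 API form, where retweets carry "retweeted_status"."""
        retweets = [t for t in tweets if "retweeted_status" in t]
        share = len(retweets) / len(tweets) if tweets else 0.0
        # Count which source accounts the network amplifies most.
        boosted = Counter(
            rt["retweeted_status"]["user"]["screen_name"] for rt in retweets
        )
        return share, boosted.most_common(10)

A network behaving like the RET accounts would score a retweet share of about 0.98.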

There is still an RET community on South African Twitter. A 2020 analysis of 14 million tweets by the Superlinear blog found that it had merged with the EFF community. These groups have displaced “the mainstream media from the centre of the conversation”, the blog, run by a data scientist, says.
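
Analyses like Superlinear’s typically build a graph of who retweets whom and then cluster it to find communities. The sketch below uses networkx’s modularity-based community detection on a toy retweet graph; the edges and the choice of algorithm are assumptions for illustration, not the blog’s actual pipeline.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy retweet graph: an edge (a, b) means account a retweeted account b.
    # Real analyses build this from millions of tweets.
    edges = [
        ("ret_fan1", "ret_hub"), ("ret_fan2", "ret_hub"),
        ("eff_fan1", "eff_hub"), ("eff_fan2", "eff_hub"),
        ("ret_fan1", "eff_hub"),  # cross-retweeting pulls two groups together
    ]
    G = nx.Graph(edges)

    # Accounts that retweet the same sources more than outsiders do
    # end up in the same community.
    for community in greedy_modularity_communities(G):
        print(sorted(community))

When two formerly separate clusters start retweeting each other’s hubs, the algorithm merges them into one community, which is the kind of RET/EFF convergence the Superlinear analysis reports.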

Deceptive behaviour

“Deceptive behaviours have a clear goal,” says François. “To enable a small number of actors to have the perceived impact that a greater number of actors would have if the campaign were organic.”

In January 2021, Twitter and Facebook removed accounts that used deceptive behaviour to benefit Ugandan President Yoweri Museveni ahead of the country’s election.

Facebook said a network linked to Uganda’s ICT ministry was involved in “coordinated, inauthentic behaviour”.

The platform defines this as groups of pages or people who “work together to mislead others about who they are or what they are doing… When we take down one of these networks, it’s because of their deceptive behaviour. It’s not because of the content they are sharing.”

AFP reported that the Ugandan network’s tactics included using “fake and duplicate accounts to manage pages, comment on other people’s content, impersonate users [and] re-share posts in groups to make them appear more popular than they were”.

The DFRLab identified related “suspicious behaviour” on Twitter, including accounts that responded to negative tweets about Museveni with identical copied and pasted text.
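
Identical copied-and-pasted replies are among the easier coordination signals to detect programmatically. The Python sketch below groups replies by normalised text and flags any message posted verbatim by several distinct accounts; the (account, text) input format and the threshold are illustrative assumptions, not the DFRLab’s method.

    from collections import defaultdict

    def find_copypasta(replies, min_accounts=3):
        """Flag reply texts posted verbatim by several distinct accounts.
        `replies` is an iterable of (account, text) pairs."""
        by_text = defaultdict(set)
        for account, text in replies:
            # Normalise whitespace and case so trivial edits still match.
            key = " ".join(text.lower().split())
            by_text[key].add(account)
        return {text: accounts
                for text, accounts in by_text.items()
                if len(accounts) >= min_accounts}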

Read: How the #PutSouthAfricansFirst disinformation network coordinated to create the perception of a legitimate movement.  

Online disinformation actors use both manual and automated techniques to deceive.

In 2020, the Oxford Internet Institute identified 58 countries where “cyber troops” – government or political party actors – had used bots to manipulate public opinion. Fake human-run social media accounts were even more widespread, found in 81 countries.

Based in part on an analysis of media reports, the institute found that cyber troops in South Africa used bots, fake human-run accounts, and hacked or stolen accounts.

The second part of this series gives a snapshot of disinformation on Twitter in South Africa, but a disinformation campaign typically spans multiple platforms.

Disinformation can, for example, arrive on open social networks like Twitter after making its way from anonymous online spaces, through closed or semi-closed groups such as WhatsApp or Twitter direct messages, and on to conspiracy communities on Reddit or YouTube, explains First Draft’s Wardle.

Journalists might pick it up from an open social network like Twitter, particularly if a politician or influencer repeats the falsehood, she says. Malicious actors often bank on this media amplification (more on this in Part Three).

If an operation is effective, writes DiResta, “sympathetic real people” will find the message in their feeds and amplify it too.

These “unwitting agents” might even be in the majority, says Starbird.

A Russian influence operation exposed in 2020 went as far as deceiving both the people who consumed its content and some of those who created content for the campaign. Ghanaians linked to a human rights NGO appeared to be unaware that they were part of an operation targeting black communities in the US.

Dubbed Double Deceit, the operation is an example of the evolving tactics disinformers use and the challenge these shifts pose, says François.

South Africans were involved in an operation, linked to a financier of the Russian Internet Research Agency troll farm, that targeted the Central African Republic and southern Africa. It used locals from the Central African Republic and South Africa to manage activities and create content, “likely to avoid detection and help appear more authentic”, according to Facebook.

Harmful content

Content that is manipulated to deceive can take many forms.

As part of the #PutSouthAfricansFirst disinformation campaign, a photo taken in a Nigerian hospital in 2019 was used out of context on Twitter to falsely claim that South Africans were sleeping on hospital floors in 2020 because foreigners were occupying the beds.

This is an example of false context – where content is taken out of its original context and recirculated.

The information disorder report highlights other types of false content. These include falsely creating the impression that content was created by an official source (imposter content) and making something up (fabricated content).

To increase the likelihood that a message will be shared, it might include a “powerful visual component”.

The Media Manipulation Casebook identifies several tactics that use visuals. These include memes, misinfographics (false or misleading infographics) and evidence collages (“compiling information from a number of sources into a single, shareable document, usually as an image, to persuade or convince a target audience”).

Disinformation actors have also had success with content that triggers an emotional response.

For this reason, one thing a user can do to avoid falling for disinformation is to pause before taking action if a social media post makes them scared or angry.

Get more advice on dealing with disinformation in Part Three of this series. 

As DiResta reminds us: in an information war, our minds are the territory. “If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.” DM

In Part Two we ask: How much damage can a hashtag cause? And in Part Three we consider what individual social media users can do about disinformation.
