2019 election guide: How to detect misinformation, bots, trolls and sockpuppets
Terms such as disinformation, fake news and bots surge in popularity whenever elections are around the corner.
We already know that social media can sway elections. Take the 2016 US elections as an example: Researchers are only just starting to fully understand the influence Russian bot farms and misinformation campaigns had on Trump being elected. And closer to home, the impact of the Bell Pottinger scandal is still being felt by politicians and everyday citizens alike, with the racial divisiveness promoted by the Gupta-supported company likely to rear its head as we near 8 May.
Disinformation vs misinformation
In 2018 Dictionary.com chose “misinformation” as its word of the year, while the Collins English Dictionary settled on “fake news” as its 2017 option. The three terms, misinformation, disinformation and fake news, are often used interchangeably, but have significantly different meanings.
The primary difference between disinformation and misinformation is intent. Misinformation is the spread of misleading or false information, either wittingly or unwittingly. So when your aunt forwards you a WhatsApp message purportedly from the president that says the ANC is changing its name to the CNA, she is spreading misinformation. Whether she believes that the ANC is changing its name or not is irrelevant — by forwarding the message, the fake information is spread.
Disinformation, on the other hand, is the spread of misleading or false information with the intent to cause harm, spread propaganda and manipulate public perception. So the person who creates the WhatsApp message saying the ANC is changing its name to the CNA is purposefully spreading fake information, with the intent to damage the ANC’s reputation. This is disinformation.
Both misinformation and disinformation have negative effects, from minor confrontations on Facebook to actual manipulation of elections. And social media is at the forefront of the battle against facts, simply because it is so easy to spread mis- and disinformation online.
Media Monitoring Africa, in conjunction with the IEC and other organisations, recently launched a platform that allows members of the public to report misinformation, primarily in the form of social media posts. Fighting disinformation on social media is difficult, particularly considering that trends online often trickle down into traditional media. But if everyday Twitter users report mis- and disinformation to organisations such as Media Monitoring Africa, we can slowly start to combat “fake news”.
The concept of “fake news” was popularised by US President Donald Trump, who has repeatedly labelled any negative media coverage as fake news. While editors and journalists may argue that news cannot be fake and the concept of fake news is itself inherently false, the term has become a part of our everyday language.
The primary difference between fake news and disinformation is often context. Information is frequently labelled as fake when it disagrees with a particular world view (we’re looking at you, Donald). But disinformation is usually more insidious, despite also technically being fake news.
Although the ANC changing its name is a fictitious example, in reality misinformation and fake news are spread via messaging platforms such as WhatsApp every day. And they are very difficult to monitor. WhatsApp is a closed messaging app, meaning that what you send to a friend is not public, as a Facebook or Twitter post would be. WhatsApp also uses end-to-end encryption, which means the company itself should not be able to see the messages you send (although it is owned by Facebook, aka Big Brother).
But all that makes detecting misinformation very difficult. In India, WhatsApp has come under fire for allowing the spread of false information that led to a spike in killings in 2018. In response, the messaging service limited the number of times a user could forward messages in an attempt to curb the spread of false information. With the Indian elections set to start on 11 April, WhatsApp on Tuesday launched a fact-checking service named Checkpoint Tipline in an attempt to further battle the scourge of fake news in the country, according to Reuters.
The launch of the tip line comes just a day after WhatsApp owner Facebook announced it had removed 687 pages and accounts linked to India’s main opposition party, for “co-ordinated inauthentic behavior and spam”.
Bots, trolls and sockpuppets
A bot is an automated account made to look as if it were human. Bots are easy to make — all you need is an email address to sign up for Twitter, and email addresses can be generated at random. Working together, bots can exert enormous power over social media narratives and even influence trends or promote disinformation. A bot farm can be bought for a few hundred rand or a few million, depending on whether the farmers are part of the Gupta family or not.
According to Ben Nimmo, senior fellow for information defence at the Digital Forensic Research Lab, one of the primary uses of a bot farm is to promote and amplify social media posts. A relatively obscure post can go viral out of the blue if it garners significant attention. So, if a politician pays for a bot farm to like and retweet a post that contains disinformation, that post will be spread to more people.
Amplifying a post also increases its legitimacy in the eyes of everyday users. If a random post with two likes says that the ANC is changing its name to the CNA, most Twitter users will scroll past it without giving it much thought — it doesn’t have legitimacy because it hasn’t garnered attention. But if that same post had 10,000 likes and 8,000 retweets, suddenly it appears more legitimate. Seemingly legitimate posts are then promoted by real humans when they comment or like the tweet, and thus the disinformation reaches a greater audience.
Twitter encourages bots by making it difficult for everyday users to check who has retweeted or liked a post. If you click on the number of likes or retweets a post has, you’re only able to see 25 random accounts that have liked or retweeted the original post. Although very few people will actually check, anyone interested in examining the accounts engaging with a post will be frustrated by how little data they can access.
But bots rarely create original content; they amplify it. Instead, a sockpuppet working with a network of bots will create a post that contains misinformation and set the bots on it. Sockpuppets are real people, often pretending to be someone else. Their social media accounts can look convincingly real, complete with profile pictures, bios and original posts.
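The tells described above (brand-new accounts, inhuman posting rates, lopsided follower counts, missing profile photos) can be combined into a rough rule of thumb. The short Python sketch below is purely illustrative: it is not how Twitter or any real detection tool works, and every field name and threshold is an assumption made for the example.

```python
# Toy heuristic for spotting bot-like accounts. All signals and
# thresholds here are illustrative assumptions, not a real detector.

def bot_score(account: dict) -> int:
    """Count suspicious signals; a higher score means more bot-like."""
    score = 0
    if account["age_days"] < 30:            # very new account
        score += 1
    if account["tweets_per_day"] > 50:      # inhumanly prolific posting
        score += 1
    if account["following"] > 0 and account["followers"] / account["following"] < 0.1:
        score += 1                          # follows many, followed by few
    if not account["has_profile_photo"]:    # default avatar
        score += 1
    return score

suspect = {"age_days": 5, "tweets_per_day": 200,
           "followers": 3, "following": 1200, "has_profile_photo": False}
print(bot_score(suspect))  # → 4: every signal fires, so look closer
```

A high score is only a prompt to look more closely, not proof of automation: plenty of real people have new accounts or no profile photo, which is why serious researchers combine many more signals than these four.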
Unlike bots, trolls are real human beings, albeit unpleasant ones. Trolls are often found lurking in comments sections posting inflammatory remarks, or attacking people on social media. Their intention is to be malicious, but they often have little sway over legitimate debate.
“Don’t feed the trolls” is the best way to deal with online actors who strive to incite conflict. Don’t engage and don’t take their bait, but be prepared to deal with them when they appear.
Trolls thrive on divisive issues, so genuine political debate can quickly become overrun with social media trolls. If you see a post online relating to the elections that instantly makes you angry, just remember not to feed the trolls. DM