In a polluted information ecosystem, our actions – even if well intentioned – can make the disinformation problem worse.
“Online, everyday actions like responding to a falsehood in order to correct it or posting about a conspiracy theory in order to make fun of it – case in point, QAnon – can send pollution flooding just as fast as the falsehoods and conspiracy theories themselves,” writes Whitney Phillips, the co-author of You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape.
“Once we publish or send or retweet, our messages are no longer ours; in an instant they can ricochet far beyond our own horizons, with profound risks to the [information] environment. At least potentially.”
Is Phillips saying we should stop tweeting?
No. Some things, she says, are worth saying even if they draw attention to polluted information, a category that includes disinformation.
Not responding to polluted information “prevents people from telling the truth, educating those around them… and pushing back against bigots, manipulators and chaos agents”.
Instead, her advice is to be more strategic about who and what we amplify. We should question what we don’t know, whether we might be giving free publicity to malicious actors, and if the possible benefits of our actions outweigh the pollution we might cause.
Who and what you amplify is an important consideration for journalists too.
Dr Claire Wardle, co-founder of anti-misinformation nonprofit First Draft, warns that reporting on disinformation prematurely – before it reaches a “tipping point” where it is increasingly visible – could boost misleading content. “All journalists and their editors should now understand the risks of legitimising a rumour and spreading it further than it might have otherwise travelled… ”
The flip side is also true: wait too long and a falsehood may turn into a “zombie rumour” that refuses to die.
Why we fall for disinformation
Herman Wasserman, professor of media studies at the University of Cape Town (UCT), has researched “fake news” in South Africa, Kenya and Nigeria. What surprised him about surveys in these countries, he told Africa Check, was how many social media users shared false information despite suspecting that it was unverified or made up.
What Mandy Jenkins learnt while studying consumers of disinformation during her John S Knight journalism fellowship at Stanford University in the US was that the people she interviewed overestimated their ability to distinguish between “fake” and real. They tended to rely on search engines for verification, but, unfortunately, search engines often “give you what you want to see”.
They were also overwhelmed with information. “It’s very tempting to close it off and just say: ‘You know what… I only want this stuff from my friends and my circle,’” Jenkins said.
The challenge for those who want to counter disinformation is that the way we process information isn’t always rational.
Many factors are at play, including our biases, the fact that familiarity might make something seem true and our need to belong.
In their report on information disorder, Wardle and media researcher Hossein Derakhshan argue that when we share news on social media we’re not simply transmitting information. We become performers for “tribes” of followers or friends.
“This tribal mentality partly explains why many social media users distribute disinformation when they don’t necessarily trust the veracity of the information they are sharing: they would like to conform and belong to a group, and they ‘perform’ accordingly.”
How best to deal with people who fall for disinformation is not yet clear, says Ben Nimmo, former director of investigations at network analysis company Graphika.
“A lot of the time, what we see is that people will share the false content, either because they believe that it’s true or because they want to believe that it’s true – it’ll confirm some kind of political leaning or political bias that they already have.
“And so part of the question is: who is [going to tell them that] they’ve shared a piece of disinformation? Because if it’s somebody from what is seen as the other side… then there’s a danger that you’ll reinforce their resistance to the truth.”
Correcting false information doesn’t always work.
The News Literacy Project’s advice for rectifying falsehoods spread by friends and family includes trying to find common ground and using “an empathetic and respectful tone”.
With health information, researchers Leticia Bode and Emily K Vraga recommend including a link to a credible source in your correction to increase its chances of success.
Disinformation researcher Nina Jankowicz’s book, How To Lose the Information War: Russia, Fake News, and the Future of Conflict, makes a case for solutions that consider the divisions in society that make us vulnerable to disinformation in the first place.
She writes that in countries where disinformation has long existed, “empowering people to be active and engaged members of society through investments in the information space and in people themselves is always part of the solution”.
Estonia, for example, focused on education and invested in both media and contact between people to “repair the gaps in trust and crises of identity” that made the country’s Russian speakers an “easy target” of Russian disinformation campaigns.
Don’t be an accidental co-conspirator
People who spread disinformation rely on unsuspecting social media users to amplify their content. Renée DiResta, research manager at the Stanford Internet Observatory in the US, noticed a shift in the tactics used on Twitter in 2018. “Twitter’s self-imposed product tweaks have already largely relegated automated bots to the tactical dustbin. Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically aligned people to inadvertently spread real, ideologically aligned content instead.”
So how can we avoid becoming accidental co-conspirators in a disinformation campaign?
UCT’s Wasserman advises:
- Actively look for “good information”. Examples include “independent, critical, rigorous journalism” and – in the case of Covid-19 – official sources of health information;
- Don’t share unverified information “just in case” it might be true. “It’s like passing on a virus – your fake post or false information can go on to multiply, infect many others, and do real harm”; and
- Verify before you share, and develop the necessary skills to do so.
(Illustrative image | Sources: Rawpixel | Unsplash / Ilias Chebbi / Laura Chouette)