Tech, truth and the future of our democracies

Kavisha is a social justice activist and founding director of the Campaign On Digital Ethics (CODE). She is the former Head of Stakeholder Relations and Campaigns at Corruption Watch.

In this pivotal year of elections, where the impact of who we elect will shape the trajectory of our collective and interconnected futures for many years, we must be aware of the subtle and not-so-subtle influences on how we behave, perceive each other, and vote. One important factor that consumers of information must take note of is the way disinformation, microtargeting and social media have not only enhanced our lives but also polarised our societies.

In 2024, an estimated four billion people will go to the polls to elect leaders of their countries, in what has been called a global year for democracy. More than 60 countries, including El Salvador, the UK, Rwanda, Russia, Taiwan, Indonesia, India, South Africa and the US, will host elections – some will be free and fair, others deeply compromised. 

All of these elections, however, will be consequential. 

On the global ballot, our votes will determine whether we can arrest and contain the climate crisis, whether the warmongers and their profiteers will be allowed to upend international human rights frameworks, and whether our democracies are resilient enough to manage the rise of right-wing authoritarianism. Complicating and contributing to all of these issues is the rampant use of disinformation and political microtargeting on social media, which has played a profound role in polarising our societies. 

Can you hear me from your echo chamber? 

Social media has undoubtedly revolutionised the way we communicate, engage, think and perceive each other and the world around us. It helps mobilise public views and political opinions, and has democratised the dissemination of information, allowing for a more inclusive and participatory political environment in which politicians can connect directly with the electorate.

However, the days of naively trusting in the benevolent powers of the tech industry and its messiahs are, thankfully, over. Since 2016, evidence from elections in the US, India, Brazil, the UK and the Philippines has steadily grown that, if left unregulated, social media can have a destabilising impact on the very tenets of our democracies. We have moved from the principle that “information is power” to the present moment in which disinformation is power – those who control and manage the information ecosystem influence how people vote, behave, interact and perceive the world.

Disinformation is not a new phenomenon – it has been around for centuries. What is new are the algorithms deployed on social media, which show users – often without their knowledge or understanding – content they will find agreeable, based on their tastes, preferences and biases. The echo chambers we find ourselves in reinforce our beliefs (right or wrong!), isolate us from diverse perspectives, and exacerbate polarisation in both the virtual and real worlds.

Microtargeting – the invisible hand 

Added to the phenomenon of disinformation is the use of microtargeting, which involves analysing vast amounts of data across social media and the internet to identify potential voters and tailor messages designed to resonate with them. It is a powerful tool used by political campaigns across the world to persuade voters directly, based on their preferences, habits and ideologies.

Barack Obama’s campaign team did this successfully in 2008 and 2012, reaching young people in the US directly and targeting them with messages on the issues they cared about most – healthcare, education and student debt. At the time, this type of campaigning was hailed as innovative and impressive, using the direct power of social media to increase voter turnout.

Fast-forward to 2016, when both the Trump and Brexit Vote Leave campaigns used the same tactics as Obama, but deployed significantly more sinister strategies: using disinformation to microtarget voters – appealing to their innate fears, biases and belief systems – without users’ express knowledge that they were being intentionally targeted on the basis of demographic and behavioural data that social media companies had collected and sold to third parties.

While microtargeting can seem benign in some instances – where users are served specific adverts based on their shopping, food, commercial or lifestyle preferences – it is far more dangerous when they are targeted politically and socially, to influence whether and how they vote, based on their scrolling habits and engagement behaviours.

A New CODE 

As a social justice activist with a keen interest in data and technology, I have long been interested in how rapidly our societies and behaviours are shifting because of the little tech devices we carry around with us. Seeing how Donald Trump, Narendra Modi, Jair Bolsonaro and Rodrigo Duterte all rose to power using highly targeted digital campaign strategies was a turning point for many.

Our world, elections and democracies will probably never be the same again because social media companies allowed their platforms, services and users’ data to be used and manipulated by the world’s most dangerous despots. Furthermore, our ever-increasing reliance on digital technologies is shaping the way we act, connect, transact and interact. Those who write the code behind our screens write the rules by which the rest of us live. Software engineers are becoming the social architects of our time, framing the way we see ourselves and each other.

For these reasons, and with the belief that activism can be pivotal in the digital domain, I have established the Campaign On Digital Ethics (CODE), a nonprofit organisation aimed at advancing your digital rights and shaping a digital future based on human rights frameworks. At CODE our strategy is simple: improve the digital literacy of internet users, and advocate for ethical and legal frameworks in the development and deployment of algorithms and artificial intelligence (AI).

It may seem untenable for a small NGO in South Africa to take on the giants of the big tech industry, but by working directly with users and consumers of social media and digital technology, we can revolutionise the digital space, making it safer, more transparent and more accountable.

In this pivotal year of elections, where the impact of who we elect will shape the trajectory of our collective and interconnected futures for many years, we have to be aware of the subtle and not-so-subtle influences on how we behave, perceive each other and vote. The digital landscape, with its vast potential to both empower and mislead, is crucial to this dynamic. 

As users of digital technology and online platforms, it is incumbent on us to critically evaluate the information we consume and to understand why we are being served particular types of messages and content.

We will never be able to roll back the technological advances of our times, so in that spirit we should insist that technology, social media, algorithms and AI are built to enhance, rather than undermine, our humanity and democratic values. DM
