In a recent interview, Hillary Clinton criticised Facebook CEO Mark Zuckerberg for continuing to allow false political advertising on his platform.
In an article published on 25 January in The Atlantic, Clinton argued that Zuckerberg tipped the scales in favour of Donald Trump in the 2016 election and that Facebook is “not just going to re-elect Trump but intend[s] to re-elect Trump”.
According to Business Insider, Clinton was promoting a new Hulu documentary about her 2016 presidential bid at the Sundance Film Festival. She was, as mentioned, referring to Facebook’s controversial decision to allow falsehoods in political ads and is also of the opinion that Zuckerberg has been “somehow persuaded […] that it’s to his and Facebook’s advantage not to cross Trump. That’s what I believe. And it just gives me a pit in my stomach.”
Zuckerberg originally maintained that he did not believe fake news on Facebook influenced Trump’s win and described the suggestion as “a pretty crazy idea”, only to moderate his position somewhat at a later stage, after it became impossible to defend such a ridiculously callous position.
Many commentators have pointed to the damaging implications of Facebook’s refusal to take responsibility for the consequences of misinformation that is spread on the platform. As Emilie Gambade has previously argued, defending the truth has never been so imperative — humans can transform fake information into fake memories, especially when the fabricated details align with one’s system of values and political beliefs. As Gambade explains, fake news and falsehoods spread like wildfire across social media and when you hear the same thing over and over again, with time, it becomes truth. More to the point here is how it plays on people’s worst fears and insecurities.
Take, for example, the hate speech that surged on Facebook in Myanmar in August 2017. At that point, Myanmar had 53 million people residing within its borders and 20 million Facebook users. The hate speech and propaganda called for a “Muslim-free” Myanmar, targeting the Rohingya, a Muslim minority group. Much of the propaganda was created and disseminated by military personnel, and after Rohingya militants coordinated an attack on the police, the Burmese military capitalised on the online support it had cultivated. The military then systematically killed, raped and maimed tens of thousands of Rohingya. Other groups joined the ethnic cleansing, and support for the slaughter continued to circulate on Facebook. Villages were burned down and more than 700,000 Rohingya fled across the border into Bangladesh.
Facebook was warned repeatedly by international and local organisations about the situation in Myanmar. The company banned one Rohingya resistance group but left the military and pro-government groups up long enough for them to spread propaganda and cause massive damage before taking them down.
The United Nations clearly stated that what was transpiring in Myanmar was a textbook example of ethnic cleansing.
In 2018, the UN concluded that Facebook had played a determining role in the ethnic cleansing of the Rohingya people and that the violence was enabled by the network’s frictionless architecture. And although some are quick to point to the fact that Facebook was not responsible for the violence, the social network was forewarned about the dangerous propaganda that could potentially lead to violence. More to the point, Facebook enabled the spreading of hate speech through an entire population at a speed previously unthinkable.
Facebook, in the face of the deaths of 40,000 people and the displacement of 700,000, responded with the same line it has used before — “[t]here is no place for hate speech or content that promotes violence on Facebook, and we work hard to keep it off our platform”. Later, the social network admitted it had failed to do enough to prevent its platform from being used to fuel political division and bloodshed in Myanmar, stating in a blogpost that it “can and should do more”.
What Facebook does not work hard to keep off its platform is false statements in political advertisements, even those that can easily be debunked. In fact, Zuckerberg has dug in his heels. While the tech industry found itself on the receiving end of increased scrutiny from lawmakers and the public throughout 2019 in the US, Facebook, as stated above, decided, in October, to quietly revise its policy of banning false claims in advertising to exempt politicians.
While TikTok and Twitter banned almost all political advertising and Google announced it would no longer allow political advertisers to target voters based on their political affiliations, Mark Zuckerberg delivered a speech at Georgetown University in October 2019, calling for freedom of political voice and touting Facebook as a champion of free expression.
Invoking Frederick Douglass, Martin Luther King Jr and Black Lives Matter, Zuckerberg defended the company’s decision to allow misinformation in political advertising on the platform, stating that “[p]olitical ads are an important part of voice — especially for local candidates, up-and-coming challengers and advocacy groups that may not get much media attention otherwise”. The irony of Zuckerberg invoking Douglass, Black Lives Matter and Martin Luther King Jr while his platform refuses to take down neo-Nazi propaganda by hiding behind the First Amendment seemed to escape him. Further, I am not sure how paying to amplify false claims amounts to freedom of expression.
In his speech, Zuckerberg also attempted to rewrite Facebook’s origins, re-imagining the actual history of the site — originally created to rate the hotness of women at Harvard — as a platform for sharing perspectives on the war in Iraq. In relaying this alternative account, he stated:
“I remember feeling that if more people had a voice to share their experiences, then maybe it could have gone differently.”
Facebook further attempted to launch a digital financial service in 2019. This was after several years characterised by antitrust issues and major privacy scandals, not least Cambridge Analytica.
Kari Paul, for The Guardian, sets out some of the privacy scandals and antitrust issues that dogged social media and tech giants in 2019. After years of unchecked growth, the tech industry was subjected to scrutiny from lawmakers and the public in the US. According to Paul, regulators and lawmakers seemed to have embraced the type of criticism regularly levelled at tech firms by the European Union. Peter Yared, CEO and founder of data compliance firm InCountry, states that “the techlash we have seen in the rest of the world is just now catching up in the US [and] it’s been a long time coming”.
This assertion by Yared cannot be overstated — as early as 2011, the Federal Trade Commission reached a settlement with Facebook over charges that it had systematically “deceived consumers by telling them that they could keep their Facebook information private, and then repeatedly allowing it to be shared and made public”. This systematic deception included website changes that made private information public, third-party access to users’ personal data, leakage of personal data to third-party apps, a “verified apps” program in which nothing was verified, enabling advertisers to access personal information, allowing access to personal data after accounts were deleted, and violations of the Safe Harbour Framework. It is, of course, exactly this “verified apps” program that, five years later, allowed Cambridge Analytica to easily access users’ data in order to influence US voters.
Lest we forget, Cambridge Analytica, funded by conservative billionaire Robert Mercer and, in part, birthed by alt-right all-rounder Steve Bannon, exploited the information of almost 100 million Facebook users and distributed false narratives that played on racist, sexist and anti-immigration stereotypes.
The Cambridge Analytica revelations led to some of the largest multinational investigations into data crime yet, and together with the Cambridge Analytica whistle-blower’s account of the firm’s operations, a clear picture emerged — the use of private Facebook user data was central to information operations that successfully cultivated pro-Trump and pro-Brexit opinion through falsehoods and misinformation, the very influence Zuckerberg had at the time dismissed as a “pretty crazy idea”.
Zuckerberg defied three requests to testify before the British Parliament. He also refused to be interviewed by 15 other national parliaments, representing one billion citizens on six continents. Clinton, in the same interview mentioned above, has therefore rightly described Zuckerberg as “authoritarian”, stating that in some of her dealings with the social network, it felt like she was “negotiating with a foreign power”.
The so-called “techlash” didn’t change anything at Facebook. Neither did the congressional hearings in October of 2019 that saw Democratic lawmakers grill Zuckerberg. Alexandria Ocasio-Cortez, for example, asked “so you won’t take down lies or you will take down lies?” as Zuckerberg struggled to answer questions relating to fact-checking of political advertisements.
Elizabeth Warren, a presidential candidate, even went so far as to take out advertisements on Facebook containing false statements in order to expose Facebook’s policy change and how easily disinformation campaigns can be created and distributed on the network. It is also well known now that Zuckerberg does not want Elizabeth Warren to be president. In leaked audio of an internal Facebook meeting that emerged in September 2019, he referenced Warren’s interest in regulating Facebook and said he would “go to the mat and… fight” her.
Joe Biden stated recently that he is not a “fan” of Zuckerberg. Biden himself is currently the subject of a Facebook ad run by the Trump campaign that makes the debunked claim that “Joe Biden promised Ukraine $1-billion if they fired the prosecutor investigating his son’s company”. According to Business Insider, from September 25 to October 1 2019, the Trump campaign spent more than $1.6-million on Facebook ads, many of which included false or misleading claims.
Facebook took down one of these ads, which referred to Joe Biden as a “bitch”, because it violated its ad policy against profanity. The Trump campaign then revised the ad to include the debunked claim mentioned above. This ad was, therefore, allowed to stay up because Facebook ads from politicians are not eligible for third-party fact-checking. In other words, Facebook has confirmed that Donald Trump is allowed to lie in Facebook ads, but he can’t curse.
In the article by Emilie Gambade mentioned above, she quotes Jason Stanley, Yale professor of philosophy and author of How Fascism Works: “The key thing is that fascist politics is about identifying enemies, appealing to the in-group (usually the majority group), and smashing truth and replacing it with power.”
This notion is echoed by Christopher Wylie, the man who helped set up Cambridge Analytica and then blew the whistle on it. In his discussion of how targeted political advertising and information campaigns work, he explains that when attempting to change culture or “hack a person’s mind”, you need to identify cognitive biases and then exploit them. Wylie explains that in information operations, “you […] first identify which people are susceptible to weaponise[d] messaging, determine the traits that make them vulnerable to […] a narrative, and then target them with an inoculating counter-narrative in an effort to change behaviour”.
The exploitation of cognitive biases plays on people’s worst fears and insecurities and it affects a person’s judgment of information by pulling certain information to the front of the mind. In psychology, this is called priming and Wylie asserts that “this is, in essence, how you weaponise data: you figure out which bits of salient information to pull to the fore to affect how a person feels, what she believes, and how she behaves”.
What was particularly useful in Cambridge Analytica’s strategies was the social identity threat, whereby social discord is sown by framing minorities as “threats” to identity and resources.
What is also important to recognise is that everyone thinks they are immune from the influence of cognitive biases. In reality, we are all subject to cognitive and emotional vulnerabilities. Wylie further explains that data researchers at the University of Cambridge’s Psychometrics Centre demonstrated how, using Facebook likes, a computer model could outperform humans in predicting a person’s behaviour. With 10 likes, the model predicted a person’s behaviour more accurately than one of their co-workers could. With 150 likes, better than a family member. And with 300 likes, the model knew the person better than their own spouse.
Wylie argues that “[w]e can already see how algorithms competing to maximise our attention have the capacity to not only transform cultures, but redefine the experience of existence. Algorithmically reinforced user ‘engagement’ lies at the heart of our outrage politics, call-out culture, selfie-induced vanity, tech addiction and eroding mental well-being”.
Wylie described some of the Cambridge Analytica operations (through Facebook user data) as building “societies in silico”:
“The underlying ideology within social media is not to enhance choice or agency, but rather to narrow, filter and reduce choice to benefit creators and advertisers. Social media herds the citizenry into surveilled spaces where the architects can track and classify them and use this understanding to influence their behaviour. If democracy and capitalism are based on accessible information and free choice, what we are witnessing is their subversion from the inside”.
As Wylie further asserts, shared experience is the basis for solidarity among citizens in a modern pluralistic democracy. Therefore, if you want to tear at the social fabric, you socially isolate segments of society, make sure they see the same information over and over again and play on their fears and insecurities. People will start confirming information to and for one another by clicking, liking and sharing. Also, make sure that one segment doesn’t see what the other sees. This creates mistrust and allows for effective control of that segment — the raw materials for conspiracism and populism.
Wylie reminds us that “[m]any of us forget that what we see in our newsfeeds and our search engines is already moderated by algorithms whose sole motivation is to select what will engage us, not inform us. With most reputable news sources now behind paywalls, we are already seeing information inch toward becoming a luxury product in a marketplace where fake news is always free”.
In November 2019, Sacha Baron Cohen delivered a speech calling out Big Tech and its enabling of hate and violence. Baron Cohen argued that hate is being facilitated by a handful of internet companies (specifically Facebook, Google, Twitter and YouTube) that amount to the greatest propaganda machine in history.
“These companies pretend they’re something bigger, or nobler, but what they really are is the largest publishers in history — after all, they make their money on advertising, just like other publishers. They should abide by basic standards and practices just like the ones that apply to newspapers, magazines, television and movies.”
Facebook has of course repeatedly insisted that it is not a publisher, but a platform. Andrew Marantz of The New Yorker, author of Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation, has drawn attention to the fact that Zuckerberg and other Facebook executives have repeatedly retreated to nebulous rhetoric about free speech and political neutrality.
Marantz argues that Facebook has never been a neutral platform. Rather, “it is a company whose business model depends on monitoring its users, modifying and manipulating their behaviour and selling their attention to the highest bidder”. Further, refusing to describe yourself as a media company or a publisher does not make you any less of one. As Marantz further states:
“A publisher, after all, could be expected to make factual, qualitative, even moral distinctions; a publisher would have to stand behind what it published; a publisher might be responsible, reputationally or even legally, for what its content was doing to society. But a platform, [is] nothing but pure, empty space”.
“This rhetoric sounds nice — ‘free expression’ and ‘in a democracy’ are the phrasal equivalents of American-flag lapel pins—but it doesn’t amount to much. It’s one thing for Zuckerberg to build the world’s biggest microphone and then choose to rent that microphone to liars, authoritarians, professional propagandists, or anyone else who can afford to pay market rate. It’s another, more galling thing for him to claim that he is doing so for everyone’s benefit.”
Facebook’s “fundamental belief in free expression” is not even shared by its own employees. After the company revised its policy in October to exempt politicians from its ban on false claims in advertising, more than 250 of its employees wrote a letter decrying the new ad policy, arguing that “free speech and paid speech are not the same thing”.
In 2018, The Guardian journalist Arwa Mahdawi wrote about how Mark Zuckerberg, in 2004 when he was just starting to build Facebook, sent a Harvard friend a series of instant messages marvelling that 4,000 people had volunteered their personal information to his social network: “People just submitted it […] I don’t know why […] They ‘trust me’ […] dumb fucks”.
By November 2019, Facebook had 2.45 billion users trusting Zuckerberg with their personal information. Reflecting on those instant messages in 2010, he told The New Yorker: “I think I’ve grown and learned a lot.”
Exactly what Zuckerberg has learned since 2010 is unclear. Zuckerberg’s continued denial and determination to avoid being perceived as a gatekeeper is becoming tiring and ludicrous. Marantz asks “what else to call people whose algorithms influence what billions of people [see], [hear], and [know] about the world?”
What is clear is that the only way in which Facebook can be touted as a champion of free expression and political voice, is if political language is as Orwell says it is — designed to make lies sound truthful… and to give an appearance of solidity to pure wind. ML
Yvonne Jooste is a former senior lecturer in law. She taught at Stellenbosch University and the University of Pretoria, where she also obtained her Doctorate in Jurisprudence in 2016. Jooste is currently a freelance academic editor and proofreader and has embarked on a career in freelance writing. She is specifically interested in how technology impacts our lives and the legal implications of dominant digital technologies. She has written on legal tech and the intersection between education and travel, and has published a number of articles in academic journals on gender and the law, post-apartheid jurisprudence and legal education.
"Plato is dear to me, but dearer still is truth" ~ Aristotle