In the aftermath of the storming of the US Capitol on 6 January, much was written about the role of social media platforms Facebook, Twitter and YouTube in amplifying misinformation about the US election results and the violence that ensued.
I was struck by the words of Sarah Miller, the director of the American Economic Liberties project and a member of Joe Biden’s transition team, in an interview with the BBC later that month.
She said, “Facebook is broadly seen as the most prominent villain among all the tech monopolists.”
A tweet by Biden’s deputy head of communications, Bill Russo, emphasised the point: “If you thought disinformation on Facebook was a problem during our election, just wait until you see how it is shredding the fabric of our democracy in the days after.”
Two months later, Facebook CEO Mark Zuckerberg appeared before Congress’s House Committees on Communications and Technology and Consumer Protection alongside the CEOs of Twitter and Google. There was rare consensus from legislators on both sides of the aisle that social media companies needed far greater regulation. The idea of amending Section 230 of the US’s Communications Decency Act gained traction. The section gives platforms the ability to moderate content and insulates them from liability for what users share or post. Without Section 230’s protection, companies could be held liable for anything that their users post.
The words of Mike Doyle, the chair of the House subcommittee on Communications and Technology, reflected the views of experts in the new and constantly evolving terrain of combating misinformation:
“You can take this content down. You can reduce the visibility. You can fix this. But you choose not to. You have the means. But time after time, you are picking engagement and profit over the health and safety of users.”
And it is not just in Washington DC that Facebook is under fire.
It was within this climate, one of acrimonious relationships with governments and legislators across the world, that I asked my fellow members of the Communications and Digital Technologies Committee to invite Facebook to Parliament. This would be an ahead-of-the-curve opening of the door to constructive engagement, in the hope of encouraging Facebook to present a South Africa-focused plan to tackle misinformation and to explain the steps it would take to limit the harvesting of private data.
I found it particularly important, and it was a point I intended to raise, that Facebook provide content moderators covering all 11 official languages in the lead-up to this year’s local elections. Facebook relies largely on artificial intelligence (AI) to moderate content. AI is often unable to detect nuance in posts, and certainly not in South Africa’s languages other than English.
Within an hour of the committee agreeing to invite Facebook, I received a phone call from a man who informed me that he was from a company lobbying for Facebook. I was annoyed and told him that I did not appreciate Facebook thinking it could hire lobbyists to spin on its behalf. If Facebook had anything to say to me, it would have to contact me directly. He assured me Facebook would present itself to the committee and would be in touch.
The weeks that followed were tense. I feared Facebook would pull out.
In the intervening time, I was interviewed by stuff.co.za’s Nick Cowen about my reasons for requesting the meeting with Facebook. I mentioned that I would like to meet Facebook executives to reassure them that the aim of the meeting was to build a constructive relationship, not to conduct an inquiry.
A few days later I received a terse email refusing any engagement, from Nomonde Gongxeka-Seopa, Facebook Southern Africa’s head of public policy. Gongxeka-Seopa had previously served on the Independent Communications Authority of South Africa Council, and I had grilled her on several occasions. I knew the Facebook ship was sailing.
A few days before the meeting with Facebook, set for 25 May, I received a phone call from the chairperson of the committee, Boyce Maneli, informing me that Facebook had pulled out. Facebook’s concern, he informed me, was that the committee had not invited other tech giants to provide similar briefings. My heart sank. But, he told me, Google Africa had responded positively to the committee’s request and had even submitted its presentation, and this had been communicated to Facebook to ease its worries.
The following day, despite being informed of Google’s attendance, Facebook still refused to appear before the committee.
This week, Facebook issued a statement denying that it had refused to appear before the committee. It had only “postponed” the meeting because the committee had not invited other tech giants. This is a blatant untruth. Facebook had been informed of Google’s attendance and despite this knowledge it still withdrew.
Facebook’s backhand to Parliament, and by implication the people of South Africa and the greater continent, is an act of unprovoked self-mutilation, to use Prince Mashele’s famous description of a gun-aimed-at-feet situation. Perhaps with the looming international backlash it would face for its contempt of the South African Parliament in mind, Facebook issued a press statement “clarifying its position”. A day late and a dollar short.
The decision to withdraw feeds into the narrative that Facebook considers itself above reproach, and it will add impetus to calls for its regulation. I am perhaps becoming one of the few voices in the misinformation field standing in opposition to government regulation of social media platforms. I err on the side of caution, against giving the government any powers to regulate social media. This could be an opening to the limitation of freedom of speech.
I will admit some idealistic naivety in my belief that self-regulation can still work if improved. It would require platforms to tweak social media algorithms to prevent the amplification of harmful misinformation, and to better protect users’ private data.
I stand at the precipice, willing to consider suggestions of regulation based on international best practice.
I intend to continue to lend my voice and expertise to efforts to combat misinformation on the global level as well as at home. In South Africa, the effects of Bell Pottinger live on, and I do not intend to stand on the sidelines. A team is being assembled to tackle misinformation during the election to ensure a free flow of ideas without manipulation of the public discourse.
As Apple CEO Tim Cook put it, “We should not look away from the bigger picture. In a moment of rampant disinformation and conspiracy theories juiced by algorithms, we can no longer turn a blind eye to a theory of technology that says all engagement is good engagement, the longer, the better, and all with the goal of collecting as much data as possible.”
Google’s openness should serve as an example to other Big Tech companies in South Africa. It is far more desirable to operate hand in hand rather than in a litigious and acrimonious space. DM