The internet, mobile phones and gadgets have increased the likelihood of users of social media, especially children, coming across illegal and harmful content. What exacerbates the problem is that information expressed in social media such as Twitter is done without any organisational or institutional filters, hence some of it can easily incite violence or cause harm, especially if it is propagandist in nature.
Fake news is defined by Lazer et al as “fabricated information that mimics news media content in form but not in organisational process or intent”. Fake content lacks the media’s editorial norms and processes for ensuring accuracy and credibility of information, and often escapes the regulatory teeth designed to protect citizens. It is illusory and misleading, amounts to misinformation and is, often, propagandist.
More recently, we have seen fake news abused during electioneering as a point-scoring mechanism. And in South Africa, at the start of the Covid-19 pandemic, Minister of Communications Stella Ndabeni-Abrahams on 26 March 2020 gazetted requirements for the dissemination of Covid-19 information to citizens. This was in response to the proliferation of fake news and inappropriate Covid-19 content.
In the month before the 2016 election in America, the average American encountered between one and three stories from known publishers of fake news. The challenge with false information on social media platforms is that it gets liked, retweeted and shared by thousands of people, spreading fake news and disinformation in seconds. “Fake news is misleading and at its worst is an attempt to undermine national security,” writes Damian Tambini of the London School of Economics.
So, to those of us who are still questioning whether or not we have come into contact with fake news and/or “deepfakes”: do TikTok, DeepArt, or Face Swap ring a bell? The reality of deepfakes infiltrating every facet of our lives has arrived.
How are deepfakes created?
“Deepfake” is a blend of “deep learning” and “fake”. Using artificial intelligence (AI), deepfaking is a technique for synthesising human imagery. The method combines and overlays existing images and video clips on to source images or videos using a machine-learning technique known as a “generative adversarial network” (GAN). So picture this: if you have produced enough sound bites or videos and they have been distributed on various social media platforms, these technicians or perpetrators are able to cut, strip and combine pieces to create a completely new piece of content, possibly featuring you in ominous situations.
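For readers curious about the mechanics, the adversarial idea can be sketched in a few lines of code. This is not any deepfake app's actual implementation: it is a minimal toy in which a "generator" learns to produce numbers that a "discriminator" cannot tell apart from real data. Real deepfake tools play the same game with images; the names, learning rates and distributions below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centred at 4.0 (stand-in for real faces).
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator: maps random noise z to a sample, G(z) = wg*z + bg.
wg, bg = 0.1, 0.0
# Discriminator: logistic classifier, D(x) = sigmoid(wd*x + bd).
wd, bd = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(size=batch)
    fake = wg * z + bg
    real = rng.normal(REAL_MEAN, REAL_STD, size=batch)

    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    g_real = d_real - 1.0          # cross-entropy gradient, label 1
    g_fake = d_fake                # cross-entropy gradient, label 0
    wd -= lr * np.mean(g_real * real + g_fake * fake)
    bd -= lr * np.mean(g_real + g_fake)

    # Generator update: adjust G so the discriminator labels fakes as real.
    d_fake = sigmoid(wd * (wg * z + bg) + bd)
    g_logit = d_fake - 1.0         # non-saturating generator loss gradient
    wg -= lr * np.mean(g_logit * wd * z)
    bg -= lr * np.mean(g_logit * wd)

# After training, the generator's output mean should sit near REAL_MEAN.
gen_mean = float(np.mean(wg * rng.normal(size=1000) + bg))
```

The two models improve each other in lockstep: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones, which is exactly why GAN-made faces are so hard to detect.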
Since 2017 there has been a proliferation of deepfake apps, making this a relatively new phenomenon of the internet age. Initially, deepfake technology was used in research and academic settings, and by amateur developers. It soon spread to the political sphere, aiming to alter people’s perceptions of political leaders, and then entered the entertainment realm. Millions of people around the globe saw face-synthesis techniques in the 2016 movie Rogue One, which digitally recreated a young Princess Leia (played by Carrie Fisher, who died shortly after the film’s release), and in 2018, when a widely shared fan-made deepfake inserted a young Harrison Ford’s face on to Han Solo in footage from Solo: A Star Wars Story.
The list of deepfake apps is seemingly endless and includes apps such as DeepFaceLab, Face Swap Live, Deep Art and AvengeThem — available to anyone, including a child, who has access to the internet. These apps have the capability of allowing anyone, within seconds, to replace the original face in a video with someone else’s face as well as to change the voice. Deeptrace Labs, a company that researches and detects deepfake apps, found that since 2018, the number of deepfake videos increased by an alarming 84%.
So, what’s the fuss? Political interference and the erosion of democracy
In the context of an election, fake news tends to undermine legitimate opposition. It is a threat to democracy and freedom of expression. It infringes upon the right to dignity of the people it targets. Ironically, this harmful content attracts a lot of public attention because people are not immediately aware that the information is inaccurate.
Fake news has the ability to erode and undermine the very fabric of our global institutions and country-to-country associations. What should be considered is the development of a single regulatory framework, such as an SADC, AU and/or BRICS convention with associated resources to turn to when assistance is required, because fake news is a means of undermining or interfering with the sovereignty of member countries. As members of a global society we should be concerned about the possibility of state sovereignty being undermined, and these countries must therefore be vigilant and form a united, harmonised defence that discourages the growth of these tendencies.
How will we hold each other accountable to this harmonised regulatory framework or convention? We need to appraise and evaluate the damaging effect of fake news on nation states’ sovereignty. The use of fake news is subversive and contrary to the essence of the nation state. The establishment of global economic blocs such as BRICS is in and of itself an expression of a desire for autonomy (self-determination) and of not being beholden to hegemonic powers.
Imagine how easily fake news could be used by bad actors to sway the views of a nation: the potential for damage is endless. We would be remiss in our responsibility as nation states to defend our citizens and to build on the gains made towards a global citizenship in which people can move as freely as goods across borders and international solidarity becomes a reality.
It is, therefore, critical that as member countries we monitor fake news and consider how to prevent and punish it legislatively.
Societal faultlines: Fake news used to stir up emotional faultlines that lead to violence and genocide
Twenty-six years ago, in April 1994, all hell broke loose in Rwanda: hordes of members of the Hutu majority, armed with machetes, spears, nail-studded clubs and other rudimentary weapons, moved house to house in villages, hunting for Tutsis, the second-largest of Rwanda’s three ethnic groups. The radio station RTLM, together with leaders of the government, had been inciting Hutus against the Tutsi minority, repeatedly describing the latter as inyenzi, “cockroaches”, and as inzoka, “snakes”. The radio station had many listeners. The country suffered ethnic cleansing and genocide. The potential for a radio programme to help wipe out a people had become real.
Call of Duty game ‘Black Ops’ recklessly promotes conspiracy theories
As reported recently in The New Yorker, the developers of the game Call of Duty: Black Ops Cold War used a video clip of a Soviet defector in the game’s trailer without contextualising it. The trailer repurposes a real 1984 Cold War-era interview with Soviet defector Yuri Bezmenov to promote a new addition to the game. Bezmenov is shown stating how the US would gradually be undermined to allow minorities such as African-Americans, women and the LGBTQI communities to destabilise American society.
Bezmenov warns Americans about these social movements, implying that the Soviet Union had a role in destabilising American society. Many Americans who have seen or read this have taken it as gospel truth. So, think about this: an impressionable 18-year-old with access to a gun listens to this, plays the game Call of Duty: Black Ops and potentially acts on it. A case in point was the 2011 Norwegian massacre, where Anders Breivik trained using Call of Duty: Modern Warfare, and then went on to murder 77 people.
This is how popular culture is used to insidiously normalise abnormal behaviour. This is how unprogressive ideas are sold to the public and, more ominously, how gaming and internet platforms are used to stir up hatred or commit identity fraud.
The internet has no firewall for patriarchy: Tools that become weaponised against women
When powerful women stir the hornet’s nest, they are vulnerable to vicious online attacks. The capability to reinvent a character allows for the creation of endless hoaxes, fake news, nude and pornographic scenes and revenge porn, and thereby targets girls and women. Mutale Nkonde, a fellow at the Data and Society Research Institute in New York, states that “the DeepNude App (a deepfake app) proves our worst fears about the unique way audiovisual tools can be weaponised against women”, ultimately altering our perceptions of people and controlling women’s bodies. Many celebrities, politicians and, alarmingly, non-celebrities have experienced this as deepfake software can be purchased online for as little as R45.
Worryingly, children are exposed to misinformation and other forms of illegal content, such as child pornography. The use of deepfake apps tends to perpetuate cybermisogyny, as women remain vulnerable to the manipulation of their pictures by those acting maliciously in cyberspace. Social media is a key conduit of fake news and, given the access to cellphones and other gadgets in the Fourth Industrial Revolution era, it becomes crucial to regulate such content.
Initially, deepfakes appear to be innocent and exciting, but the individual could be lured into the destructive side of the app. The result could be an individual who is ill-informed about the harmful effects of their actions, not only to themselves, but to those who they target through using deepfake apps.
More disturbingly, deepfake apps especially target children. Apps form a key element of children’s social lives and steer their beliefs, value systems and behaviours. This all occurs while children are physically, emotionally and psychologically still developing. The use of deepfake apps among children may result in an inability to differentiate between reality and fake information, ruined relationships with significant others, damaged reputations and a distorted online reality, with cyberbullying becoming a constant element in their lives.
The manipulation of images and videos using artificial intelligence could become a destructive mass phenomenon. Arwa Mahdawi of The Guardian believes that the key factor behind the creation of deepfake pornography is the desire to humiliate and control women. Artificial intelligence researcher Alex Champandard opines that, due to the inability to differentiate between real and fake media, humanity has entered an age in which it is impossible to know when content represents truth.
Taking Action against the abuse of information and fake news
Globally, jurisdictions such as the UK, the EU, the US and SA have moved against deepfake apps, with some closing them down and making the malicious creation and distribution of deepfakes punishable by law.
In South Africa, the Film and Publication Board, through the Films and Publications Amendment Act of 2019, together with the 2017 Cybercrimes and Cybersecurity Bill, aims to rationalise the laws of South Africa that deal with cybercrime and cybersecurity, and to criminalise the creation and distribution of malicious communications as an interim protection measure.
We acknowledge the increased demand for online content and technologically advanced content creation, and hence the concomitant need for content regulators in BRICS countries to increase their monitoring of digital platforms and social media. Arguably, deepfake apps will affect the way we perceive life and place further pressure on our societal norms and values. But the key concern should be the effect deepfake apps have on women, vulnerable minority groups and children.
Real 411 is a prime example of a civil society-led initiative in South Africa aimed at addressing harmful online content. It emerged out of a need to curb fake news by setting up a system that would enable members of the public to report misinformation. Real 411 has a code of conduct which applies to offences that comprise harmful false information, hate speech, incitement of violence and harassment of journalists. The code of conduct established the Digital Complaints Committee. The code of conduct seeks to strike a balance between competing rights and interests by taking into account factors such as freedom of expression, satire and public interest.
Additional recourse for the public is to apply to the equality court for relief, to approach the South African Human Rights Commission for assistance, or to consult a fact-checking organisation for verification.
South Africa has a few regulatory frameworks that seek to combat the distribution of illegal content and inaccurate information, including fake news.
Hate speech has become even more dangerous in the age of social media, where people freely express their opinions in the public arena.
Means of regulation
There are three possible mechanisms for regulating the distribution of illegal content: self-regulation by the platforms, direct government intervention, or co-regulation.
Direct government regulation
This approach entails the establishment of rules by the state to regulate private business. Direct government regulation, however, could be construed as censorship, given the constitutional rights of citizens in different nation states. In South Africa, for example, the Bill of Rights protects the right to freedom of speech and expression, albeit with certain limitations. Moreover, government regulators may not easily maintain objectivity in defining and imposing regulations; they may end up developing regulations that suit their political interests.
Governments by their nature have a responsibility to balance the public interest, ensuring a free flow of news while simultaneously protecting the dignity of their citizens through their legislative regimes. In South Africa, people who felt their characters were defamed through social media have approached the equality court to seek justice. The law on defamation therefore becomes crucial as a means to control fake news that is defamatory and harmful.
From self-regulation to co-regulation
Co-regulation refers to a combination of self-regulation and government regulation, where consensus is reached between actors in the regulatory space such as judges, legislators, civil society and the regulatory authority, in this case the Film and Publication Board. This process enhances the legitimacy and efficacy of regulation, since the interests of all actors are considered. The board uses a co-regulatory model, working with industry, government and the public. Laws and policies are in place for public accountability; industry ensures application of and compliance with the law, and in the instance of a transgression an independent enforcement committee reviews it.
Another option is self-regulation, which allows industry, in this case the media, to have regulatory mechanisms in place such as codes of conduct, rules and standards that the industry should comply with, as well as mechanisms to monitor compliance with those rules and standards. However, self-regulation of the internet has been criticised over problems of legitimacy and accountability, a lack of credibility and transparency, concerns about the protection of freedom of expression, and weak enforcement of sanctions.
The Press Council, the Press Ombud and the Appeals Panel are independent co-regulatory mechanisms set up by the print and online media to provide impartial, expeditious and cost-effective adjudication to settle disputes between newspapers, magazines and online publications, on the one hand, and members of the public, on the other, over editorial content of publications. It is based on two pillars: a commitment to freedom of expression, including freedom of the media, and to high standards in journalistic ethics and practice. The South African Press Code guides journalists in their daily practice of gathering and distributing news and opinion and guides the Press Ombud and the Appeals Panel to reach decisions on complaints from the public.
Our original objective through the AU, SADC and BRICS, according to the AU Mission Statement, was to “improve our global economic position and reform our financial institutions through building an integrated, prosperous and peaceful Africa, driven by its own citizens representative of a dynamic force in the global arena”.
We should consider moving towards harmonisation of a single regulatory regime or a convention where a single code of conduct may be adopted across member countries. This may call for a network of global or multiple players brought together to tackle these challenges and participate in a single harmonised body to regulate media content.
Principles of accountability, transparency and respect for human dignity, and the safety of the child — among others — may be considered and adopted as guiding principles for such a convention and/or regulatory framework.
The overall wellness of a nation is measured in far more complex terms than its mere economy: the essence of our democracies, societies and social interactions is under threat. Deepfakes are anathema to truth. DM
Laurie Less is the Shared Services executive at the Film and Publication Board. She was previously the executive manager at Wits University Clear-AA, a specialist M&E centre and in various government agencies. In the development sector she worked as a senior programme manager for both Swedish SIDA and the Open Society Foundation of South Africa.
Dr Tebogo Umanah is a manager for Research, Policy and Advocacy at the Film and Publication Board. She was previously employed as GM: Policy Analysis, Research and Strategic Projects at the Tourism Business Council of South Africa. She has over 25 years of experience in research, which spans the public sector, academia, NGO sector and the private sector.