Revelations by The Guardian that Facebook has secret guidelines on content, from pornography to terrorism, present an opportunity for us to engage with how we think social media platforms should moderate content, and for us to really get to grips with some of the more difficult ethical dilemmas and issues. By WILLIAM BIRD and THANDI SMITH.
Yesterday, 21 May 2017, The Guardian revealed that it had been conducting an ongoing investigation known as ‘The Facebook Files’. Part of this investigation concerns the policies regulating content: internal guidelines, until now secret, which have been made public for the first time. According to the paper, “The Guardian has seen more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm”.
The recent revelations raise various questions and highlight various challenges, not just with the policies themselves, but with the general air of secrecy surrounding these processes and guidelines.
It must be acknowledged that the areas and issues Facebook seeks to moderate are not only expansive but also among the most hotly contested issues politically and culturally. They deal with terrorism, violence, sexual abuse material and everything in between. That Facebook has clear policies on the multiplicity of issues is to be commended. We also need to acknowledge the efforts by Facebook and other social media giants in combating child sexual abuse material (often referred to incorrectly as Child Porn).
What is more concerning however is that the policies have been kept so confidential. There are two clear issues that need to be addressed from the leaks. The first is how we can encourage social media giants like Facebook to be more open about these guidelines, and indeed to make country-specific guidelines available, and encourage comment on them. The second issue is to ensure that the guidelines are in line with South Africa’s legal framework and the rights enshrined in our Constitution.
The revelations present an incredible opportunity for us to engage with how we think social media platforms should moderate content, and for us to really get to grips with some of the more difficult ethical dilemmas and issues, as well as to ensure compliance with our own laws. The issues are simply too numerous to do any justice to without considered discussion and engagement, but one area where we can outline a potential problem rests in how Facebook deals with the non-sexual abuse of children.
According to Facebook, “some photos of non-sexual physical abuse and bullying of children do not have to be deleted or ‘actioned’ unless there is a sadistic or celebratory element”. While we support the argument that there may conceivably be situations where such images may be justifiably utilised, we believe that a rider would need to be added to the guidelines regarding the identification of children.
Currently our law and media (to a greater extent) exercise great care in identifying children who have been physically abused. Our law is for the most part clear on this issue – that nobody may name or identify a child who has been a victim of physical abuse. Not only would identification of a child who has been abused constitute a legal violation, it also clearly goes against Section 28 (2) of our Constitution which states that in every matter concerning the child the best interests of the child are paramount. Accordingly, if one is able to identify the child in content showing physical abuse of the child, it poses a fundamental threat to her/his well-being and is clearly not in the best interest of the child to be identified. There is a clear contradiction between what is allowed by moderators and South African law. It is precisely these kinds of conflicts that the revelations in The Guardian enable us to confront directly.
Another concerning element is around the policy on “credible violence”. The example given in The Guardian’s Facebook Files allows for the statement “someone shoot Trump” to be removed from Facebook, but the statement “to snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” is allowed. The question to be debated in light of possible hate crimes legislation is whether the second example, which may be viewed as hate speech, could be illegal, while a desire for someone to kill the President of the United States may be protected speech. These are not easy issues, and require careful consideration. They also differ from nation to nation, which is again an important reason why we should be engaging on where the boundaries for protected speech lie.
Given that Facebook has country-specific community guidelines, one would assume that the examples above vary according to country law and regulations. We call on Facebook to embrace the revelations and to encourage broader engagement on them, so that people are not only made more aware of the potential conflicts, but are also empowered to know what can and should be published and reported more often. We call on Facebook to lead the way in ensuring greater transparency about how it regulates, why it regulates, and the procedures in place to adhere to country-specific laws. DM
Photo: The Facebook icon is displayed in Taipei, Taiwan, 28 April 2017 (re-issued 16 May 2017). EPA/RITCHIE B. TONGO