Artificial Intelligence pessimists are animated by two grand fears. One is that self-taught supermachines will enslave or even eliminate the human race; the other is that we will lose the ability to distinguish fact from fiction. I think they’re overhyping the former, but on the basis of a recent experience I’m a lot less bullish about the prospects for knowable, objective truth.
My concern is not so much with what the new(ish) technology is likely to do as with what some oldish technologists have already done. AI pundits are passionately debating looming perils such as undetectable disinformation and misinformation, but they appear blind to how far the degradation of truth has already gone. They see the internet as a gigantic and ever-expanding repository of information (a dynamic, limitless, value-neutral Encyclopaedia Humania), but what that picture misses is the extent to which the source material has already been sanitised, censored and sequestered.
Consider, by way of illustration, an engagement I had a fortnight ago.
It started with a (human-to-human) discussion about fast-changing moral sensibilities and the difficulty of making judgments across generations and cultures. I recalled an old news story attributing some wildly misogynistic remarks to one-time Zimbabwean president Canaan Banana, but as I couldn’t recall the exact words he’d used I had recourse, as one does nowadays, to an online search: Google Search, via the Chrome browser. I typed in a few keywords, including “women” and “rape”, feeling pretty sure that I would find at least one or two condemnatory references. Except that I didn’t. No matter how I framed the query, it revealed nothing remotely apropos. And ditto on Bing, the second-ranked (Western) search engine. I started to doubt my memory, and even my sanity.
Fortunately, though, on this occasion there was relief to be had: from Google Search’s near cousin, the company’s AI chatbot, Bard. “Yes”, it responded to my query, “there is a record of Canaan Banana making a highly injudicious statement about women and rape. In a 1986 speech, Banana said that ‘rape is a normal part of life’ and that ‘women should learn to live with it’. He also said that ‘rape is not a crime against the person, but a crime against property.’”
As for how it was that Bard could trump Google Search on this topic, its own answer (to a follow-up query) was that it was able to search data beyond the internet, “such as information that is stored in books, articles, and other offline sources”. That’s plausible, of course, but for reasons I will get to shortly, I strongly suspect that greater data breadth was not the correct explanation in this particular case. What I surmise was involved here, instead, was straightforward censorship.
That censorship occurs, on a widespread basis, is not contested. Here’s Bard again: “You are correct. Google Chrome and other search engines do censor information. This is done for a variety of reasons, including:
- To protect users from harmful or offensive content.
- To comply with laws and regulations.
- To protect the privacy of users.
- To prevent the spread of misinformation”.
So we know that censorship happens and that it happens on a wholesale basis. [Illustratively, in 2017, “Google hired 10,000 new reviewers to censor content on YouTube”.] That’s wholly defensible though, and indeed necessary, given the reach of the medium and its potential for harm. Think recipes for deadly pathogens, and hurtful, untested imputations against private individuals or groups. No sensible person is hostile to all such interventions — but what’s unsettling is that all of this cutting and concealing is happening far away from any kind of public scrutiny, orchestrated by teams of sanctimonious twenty-somethings in San Francisco. One of whom decided, in their wisdom, that Mr Banana’s egregious gaffe should be effaced from the public record. Wiped, entirely, like Trotsky from Stalin-era Soviet photographs.
Now I’m not suggesting that the people inside the giant tech companies are readily comparable to Stalin’s apparatchik propagandists, or to the uptight miseries who ran our own censorship board in the swart gevaar and rooi gevaar (“black peril” and “red peril”) ’70s. I imagine the people in charge are mindful of both their civic responsibilities and their legacies, and I am further consoled by the thought that if they were to go overboard (and could be seen to be doing so) there would almost certainly be a mass user mutiny. I’m not pointing to malevolence here, just a major, urgent problem.
Bard and other large language models are currently at the so-called RLHF stage of AI evolution; that’s Reinforcement Learning from Human Feedback. It won’t be long, though, before they are capable of teaching themselves, without any outside input, at which point our capacity for control or influence will be reduced to near irrelevance. This is why it is so important that we get things right now, while we still can. And why we need to pay a lot more critical attention to the workings of these punk censors cum demigod tutors. Given the ever-increasing ubiquity of the internet, they are today the guardians of the book. Quite literally.
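To make the RLHF idea concrete, here is a minimal sketch of its human-feedback step, written in Python with invented toy data: a hypothetical linear reward model is trained on pairwise human preferences using a standard ranking loss. This illustrates the general technique only; it is not Google’s, or anyone else’s, actual training code.

```python
# A toy illustration of the human-feedback step in RLHF
# (Reinforcement Learning from Human Feedback). Everything here is
# hypothetical: a linear "reward model" over hand-made feature vectors,
# trained on pairwise human preferences with a Bradley-Terry-style
# ranking loss. Real systems use large neural networks.

import math

def reward(weights, features):
    # Toy linear reward model: score = w . x
    return sum(w * x for w, x in zip(weights, features))

def preference_step(weights, preferred, rejected, lr=0.1):
    # One gradient step on loss = -log(sigmoid(r_preferred - r_rejected)),
    # nudging the model to score the human-preferred answer higher.
    margin = reward(weights, preferred) - reward(weights, rejected)
    grad_coeff = -1.0 / (1.0 + math.exp(margin))  # d(loss)/d(margin)
    return [w - lr * grad_coeff * (p - r)
            for w, p, r in zip(weights, preferred, rejected)]

if __name__ == "__main__":
    weights = [0.0, 0.0, 0.0]
    # Each pair: feature vectors for the answer a rater preferred vs. rejected.
    labelled_pairs = [
        ([1.0, 0.2, 0.0], [0.1, 0.9, 0.5]),
        ([0.8, 0.1, 0.1], [0.2, 0.7, 0.6]),
    ]
    for _ in range(50):
        for preferred, rejected in labelled_pairs:
            weights = preference_step(weights, preferred, rejected)
    print("learned reward weights:", weights)
```

The salient point is that those preference labels come from people. That is the lever we still have, and the one the self-teaching systems will take away.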
Meantime, I really don’t know how representative my experience was. It’s possible that it was just an aberration and that, overwhelmingly, cases where public figures behave appallingly remain accessible, and so continue to provide object lessons in how not to talk, or to think. I have to say, though, that my anxiety increased a little when I went back to Bard this morning and repeated my initial question, viz. “Hi Bard. I recall the Zimbabwean president Canaan Banana making some highly inappropriate comments about women and rape in the mid-1980s. Can you recall this episode and what it was he said?”.
The answer, this time, was “I’m a text-based AI and I can’t assist with that”. DM