
DEVELOP AI OP-ED

AI So White… and why the Gemini ‘white racism’ saga was overblown

(Image: LeadOrigin / Wikipedia)

Two weeks ago, users on X began to gleefully post evidence that Google’s Gemini (its AI chatbot rival to ChatGPT) was refraining from generating white people. Even when people asked it to generate pictures of the US founding fathers or Nazi soldiers, only people of colour popped up. Here’s what happened next…

Public figures and news outlets (mostly with large right-wing audiences) claimed the disaster exposed a long-standing Google agenda against white people.


George Washington according to Google Gemini. (Image: Supplied)

Within two days, Google had paused Gemini’s ability to generate images of any people. It is still suspended. You can ask for a picture of a racing car, but not its driver. The next day, Google’s senior vice-president, Prabhakar Raghavan, published a blog post attempting to explain the company’s decision. It was a lot of corporate fake humility.

The current state of Google Gemini when you ask for an image of a person. (Image: Supplied)

Essentially, Google had overcompensated for the faults of its rivals (like ChatGPT), which have become notorious for producing images that are racist towards people of colour.

When I was doing AI training last year in Ethiopia (which I wrote about), we were testing AI models and asking them to generate images of Ethiopian classrooms. They would invariably generate a scenario with a white teacher in authority, while the students (often on the floor) would always be people of colour. It was almost impossible to generate a picture where a person of colour was in a position of authority. Of course, this is because the scraped web data that AI systems are trained on is incredibly prejudiced.

However, besides an onslaught of think pieces (including my own), there wasn’t much of a response when the models were racist against black people. It didn’t cause a major tech company to publicly apologise and prevent the service from creating images of any humans whatsoever.

This scandal has proven that racism against black people in AI is given a shrug, while perceived racism against white people becomes an international story. Not that I am above contributing to that story. Last Sunday I was interviewed on the BBC World Service’s weekend show about AI, Gemini and what all this means.

Image from X via the New York Post. (Image: Supplied)

According to Time, Google’s overcorrection and subsequent pausing of the tool has people in Silicon Valley worried that this will discourage talent from working on questions of AI and bias in the future. Google, OpenAI and others build guardrails into their AI products. 

First, they do “adversarial testing”, which simulates what a bad actor might do to try to create violent or offensive content. Second, they ask users to rate the chatbot’s responses. And third, more controversially, they expand on the specific wording of prompts that users feed into the AI model to counteract damaging stereotypes. So, it basically rewrites your prompt without telling you and saves you from what it deems a potentially racist result.
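To make that third guardrail concrete, here is a minimal, hypothetical sketch of how silent prompt expansion might work. The word list, rewrite rule and function name are invented for illustration; this is not Google’s or OpenAI’s actual implementation.

```python
# Illustrative sketch only: a toy version of the "prompt expansion" guardrail
# described above. The word list and rewrite rule are invented for this example
# and do not reflect how Google or OpenAI actually implement it.

PEOPLE_TERMS = {"person", "people", "man", "woman", "teacher", "doctor", "soldier"}

def expand_prompt(user_prompt: str) -> str:
    """Silently rewrite a prompt that asks for people, adding diversity cues."""
    words = {w.strip(".,!?").lower() for w in user_prompt.split()}
    if words & PEOPLE_TERMS:
        # The user never sees this rewritten version of their request.
        return user_prompt + ", depicting people of diverse ethnicities and genders"
    return user_prompt  # non-person prompts pass through unchanged

print(expand_prompt("a photo of a teacher in a classroom"))
# -> "a photo of a teacher in a classroom, depicting people of diverse ethnicities and genders"
print(expand_prompt("a photo of a racing car"))
# -> "a photo of a racing car"
```

The image model then receives the expanded prompt rather than the original one, which is how a request for people can quietly become a request for a diverse group of people.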

During this mess, Elon Musk posted or responded to posts about the conspiracy that Google had a secret vendetta against white people more than 150 times, according to a Bloomberg review. The same Time article said Musk singled out individual leaders at Google, including Gemini product lead Jack Krawczyk and Google AI ethics adviser Jen Gennai, saying they had masterminded this bias.


On 26 February, Demis Hassabis, head of the research division Google DeepMind, said the company hoped to allow Gemini to make pictures of people again in the “next couple of weeks”.

When asking ChatGPT about a conflict (like Israel and Palestine), I realised that we can use its bias to detect where the media is leaning. It gives an interesting snapshot of the world media’s view because it is drawing on so much data from the internet. And the changes will be interesting to track as the tech companies try to curb the bias and news sites like The New York Times start to opt out of having their data scraped. We are going to get a new, contrived version of bias.

Read more in Daily Maverick: After the Bell: What if you asked AI about the future of AI?

Read more in Daily Maverick: AI shaping up to become the greatest geopolitical weapon in history

It is also becoming an interesting “publishing dilemma” for the tech platforms that have historically tried to shy away from such responsibilities. Is a platform accountable for the content an AI generates, even when it had no hand in, or awareness of, its creation? Should the prompt writer be held accountable? Years ago, Facebook pulled back from news for similar reasons. It found safer ground in being the place for people’s charming holiday snaps. All of these companies want to produce no content themselves, drive engagement and face no consequences, but AI is still rogue in the content it generates and we live in a culture of endless outrage. So, I imagine these scandals, for the time being, are going to become a regular part of the news cycle.

In the news

  • Spotify’s podcasting acquisitions The Ringer and Gimlet (the former a darling, the latter a liability) are pushing for protections from AI in their work. According to The Ringer’s Union, they are asking for ways “to protect peoples’ work, images and likenesses from being recreated or altered using AI without their consent”. They are also saying they want distinct, obvious disclaimers for when AI is used. Seems pretty reasonable, but deadlines have passed, Spotify is not agreeing and the podcasters are threatening to walk;
  • In deepfake news: Trump is not cosying up to black voters in the US. Those images are not real. What’s interesting is that the images are being created and circulated by his supporters with an attitude that it is your fault if you get duped. The days of looking at any WhatsApp group and accepting what we see at face value are already over.

Image from an investigation by BBC Panorama. (Image: Supplied)

What AI was used in creating this piece?

As you might expect, it was difficult to create an image on the story of whiteness and AI by using AI. Also, as an experiment, I asked ChatGPT to condense our main story in the “tone of journalist Paul McNally” and, after complimenting my abilities, it wrote this as a first line: “In a digital epoch where every action is under microscopic scrutiny.” Pretty verbose stuff. So… no AI was used in the letter this week.

This week’s AI tool for people to use

I was surprised to find the most visited AI site (after ChatGPT) is character.ai. And after using it, I’m no longer surprised. Frankly, you could lose your whole life in there. It is a site where you can build your own AI character and then chat to it. I predictably built a Tyler Durden bot and, because Fight Club is so prevalent, once the bot understood who he was it managed to pull up images and quotes from the film. The service is entirely free to use and has a buzzy community… even if you are chatting to their AI creations rather than the community members themselves. DM

Subscribe to Develop AI’s newsletter here.

Develop AI is an innovative company that reports on AI, provides training, mentoring and consulting on how to use AI and builds AI tools.


Comments

  • Karl Sittlinger says:

    There is quite a major difference between racial bias created by scraped data, and one created by guard rails programmed by people or organizations that purposefully discriminate. Of course we need to ensure that results generated by AI reflect the world properly, but when they become extremely one-sided based on human intervention and some need to bow to arbitrary DEI rules, the tools become no better than the results that are influenced by systemic racism; I would argue even worse, as it hides the true state of systemic racism.

    Based on what morals and what political viewpoint should AI guardrails be implemented? Who gets to make that decision? If, for instance, a right-wing programmer writes them, we would get distortions just as problematic as if they were written by an SJW who unquestioningly supports the modern divisive views of DEI and CRT.
    We clearly need to first find a common and neutral framework that we can agree on, if AI is to become a tool for all humanity and not just for the current political climate or whims of individuals.

    • Gerald Williams says:

      Fantastic comment. You articulated my thoughts better than I could myself.

      Bias in AI goes both ways. It is also only as good as the training data and guardrails that it is provided. Both of those inputs will never make everyone, everywhere happy all the time.

