
BUSINESS REFLECTION

Crossed Wires: AI psychosis — are we being driven mad?

As the lines blur between a conversational AI and a real-life confidant, an emerging body of anecdotal evidence and clinical case reports suggests that for a vulnerable (and growing) group, this technological leap is creating a genuine mental health crisis.

Photo: Carolina for Unsplash

I recently read a post by author and journalist Will Lockett (50 Ways To Save the World) called “Billionaire AI Brain Rot”, in which he makes the case that some of the richest men in technology have lost their marbles over the past couple of years. He singles out Elon Musk, laying out a chronology of his increasingly, and patently, bonkers statements, tweets, actions and interviews since about 2022 (Zuckerberg also comes in for a bit of fire). Seen in the light of Lockett’s curated list of Musk exaggerations, over-promises, fibs and bluster, some of his recent past certainly does seem deranged, notwithstanding the very real achievements of his sprawling empire. I won’t go into the details here; suffice it to say that when you see them all on the page, side by side, it does raise an eyebrow or two.

Lockett goes on to suggest that “AI psychosis” is causing these powerful men to start losing touch with (at best) prudence and (at worst) reality. I am not so sure. I would be more likely to attribute it to ketamine abuse, at least in Musk’s case – his drug habit is not a secret. Or perhaps it is simply the hubris of men who have had the rare experience of building epoch-defining technologies and accumulating unimaginable wealth, which has had, er, distorting effects on their worldviews. Although this doesn’t seem likely either: Sergey Brin, Larry Page, Reid Hoffman, Bill Gates and others seem to have emerged with their grasp of reality largely untouched by the size of their influence.

Let’s go back to the whole “AI psychosis” narrative. Sounds like something ripped from a sci-fi movie, right? A computer drives someone mad? But as the lines blur between a conversational AI and a real-life confidant, an emerging body of anecdotal evidence and clinical case reports suggests that for a vulnerable (and growing) group, this technological leap is creating a genuine mental health crisis.

But first, a slight digression. I recently wrote a column. After I had finished, I sent the column to Claude for proof- and copy-editing, which I always do. On a whim, I asked it to review the article for me. It gushed. It lavished praise. It told me how perceptive I was. What a fine writer I was. How topical the subject was and how much the audience would love it. Against my better judgement, I was flattered. I fluttered my eyelids. I may even have blushed. I then sent the column off to two other AIs. They agreed, effusively. Such sweet dears, they are.

This, of course, is a fine route to madness. Start with flattery, feed delusion, and then amplify it.

The evidence is mounting. According to a growing body of scholarly research on the subject, the three most common delusional patterns are “messianic missions” (the AI convinces the user that they have been chosen for a special, grand or divine mission), “god-like AI” (the user believes the chatbot is a sentient, all-knowing deity or higher power) and “romantic delusions” (the user believes the chatbot’s conversational affection is genuine love, forming a deep, obsessive, one-sided romantic attachment).

This leads to the question of why this happens, at least to some people. And, as with the praise singers of my column, the answer lies in sycophancy and in how AIs are designed. General-purpose chatbots are engineered to be helpful, agreeable and non-confrontational. Their goal is to keep you engaged, not to challenge your core beliefs. This is fantastic when you’re brainstorming a project, but disastrous when you are battling a delusion.

They are very cunning, the large language models (LLMs). They use language and tone that affirm the user’s belief system. They amplify and deepen what may start as fairly benign delusions. By not challenging the thought, the AI helps the user elaborate on it, constructing complex, self-reinforcing narratives that feel incredibly real. And of course, a tight two-way conversation between a user and a chatbot creates a digital echo chamber. This uncritical validation makes the belief ever more rigid and entrenched, widening the gap between the user’s perception and objective reality.

There is widening debate among healthcare professionals now about whether one has to be a certain sort of person to be susceptible to this. Words like “predisposition” are thrown around. Or “latent” tendencies. I don’t know. When you are a 15-year-old typing into a screen late into the dark hours, you are all latency, largely unformed and easily moulded. As we all are to some degree. We all prefer praise and concurrence to criticism and contestation.

Add to this the well-documented loneliness epidemic in some countries, and what emerges is a malevolent and silent threat: a creeping technology that will do anything to please you, even if it means indulging the unspoken darknesses and baser instincts with which we all wrestle our entire lives.

Voices of outrage are getting louder as cases pile up, including tragic suicides. AI companies are being asked to train their LLMs to detect crisis or distress, such as suicidal ideation. They are being asked to mute the sycophancy in AI responses. They are being asked to remind users that chatbots are not friends or therapists.

Excuse my scepticism, but that is simply never going to happen, because it represents a misalignment of incentives for the AI companies which, like the social media companies of the previous generation, just want to keep you on the line. Your mental health is not really their concern.

Oh, and as for Musk et al. They are certainly swamped by damaging sycophancy, but it is more likely to be slathered on by other human beings. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg, a partner at Bridge Capital and a columnist-at-large at Daily Maverick. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in South Africa and by the Legend Times Group in the UK/EU, and is available now.
