In May, OpenAI launched GPT-4o (“o” for “omni”), a new version of the artificial intelligence (AI) system powering the popular ChatGPT chatbot. GPT-4o is promoted as a step towards more natural engagement with AI. According to OpenAI, it can have voice conversations with users in near real-time, exhibiting human-like personality and behaviour.
This emphasis on personality is likely to be a point of contention.
In OpenAI’s demos, GPT-4o sounds friendly, empathetic and engaging. It tells “spontaneous” jokes, giggles, flirts and even sings. The AI system also shows it can respond to users’ body language and emotional tone.
Launched with a streamlined interface, OpenAI’s new version of the ChatGPT chatbot appears designed to increase user engagement and facilitate the creation of new apps based on its text, image and audio capabilities.
GPT-4o is another leap forward for AI development. However, the focus on engagement and personality raises important questions about whether it will truly serve users’ interests, and about the ethical implications of creating AI that can simulate human emotions and behaviours.
The personality factor
OpenAI envisions GPT-4o as a more enjoyable and engaging conversational AI. In principle, this could make interactions more effective and increase user satisfaction.
Studies show users are more likely to trust and cooperate with chatbots exhibiting social intelligence and personality traits. This could prove relevant in fields such as education, where research indicates AI chatbots can boost learning outcomes and motivation.
However, some commentators worry users may become overly attached to AI systems with human-like personalities or emotionally harmed by the one-way nature of human-computer interaction.
The Her effect
GPT-4o immediately inspired comparisons – including from OpenAI boss Sam Altman – to the 2013 science-fiction movie Her, which paints a vivid picture of the potential pitfalls of human-AI interaction.
In the movie, the protagonist, Theodore, becomes deeply fascinated and attached to Samantha, an AI system with a sophisticated and witty personality. Their bond blurs the lines between the real and the virtual, raising questions about the nature of love and intimacy, and the value of human-AI connection.
While we should not seriously compare GPT-4o to Samantha, its launch raises similar concerns. AI companions are already here. As AI becomes more adept at mimicking human emotions and behaviours, the risk of users forming deep emotional attachments increases. This could lead to over-reliance, manipulation and even harm.
While OpenAI has shown concern for ensuring its AI tools behave safely and are deployed responsibly, we have yet to learn the broader implications of unleashing charismatic AIs onto the world. Current AI systems are not explicitly designed to meet human psychological needs – a goal that is hard to define and measure.
GPT-4o’s impressive capabilities show how important it is that we have some system or framework for ensuring AI tools are developed and used in ways that are aligned with public values and priorities.
Expanding capabilities
GPT-4o can also work with video (of the user and their surrounds, via a device camera, or pre-recorded videos), and respond conversationally. In OpenAI’s demonstrations, GPT-4o comments on a user’s environment and clothes, recognises objects, animals and text, and reacts to facial expressions.
Image: The ChatGPT application, developed by US artificial intelligence research organisation OpenAI, displayed on a smartphone screen in Berlin, Germany. (EPA-EFE / Filip Singer)