2023 was the year of generative AI. What can we expect in 2024?

Illustrative image generated with AI.

Generative AI has changed the ways we work, study and even pray. Here are some highlights of an astonishing year of change – and what we can expect next.

In 2023, artificial intelligence (AI) truly entered our daily lives. The latest data shows four in five teenagers in the United Kingdom are using generative AI tools. About two-thirds of Australian employees report using generative AI for work.

At first, many people used these tools because they were curious about generative AI or wanted to be entertained. Now, people ask generative AI for help with their studies or for advice, or use it to find or synthesise information. Other uses include getting help with coding and making images, videos or audio. So-called “prompt whisperers” or prompt engineers offer guides not just on designing the best AI prompts, but on how to blend different AI services to achieve fantastical outputs.

AI uses and functions have also shifted over the past 12 months as technological development, regulation and social factors have shaped what’s possible. Here’s where we’re at, and what might come in 2024.

AI changed how we work and pray

Generative AI made waves early in the year when it was used to enter and even win photography competitions, and tested for its ability to pass school exams.

ChatGPT, the chatbot that’s become a household name, reached a user base of 100 million by February 2023. Some musicians used AI voice cloning to create synthetic music that sounds like popular artists, such as Eminem. Google launched its chatbot, Bard. Microsoft integrated AI into Bing search. Snapchat launched MyAI, a ChatGPT-powered tool that allows users to ask questions and receive suggestions.

GPT-4, the latest iteration of the AI model that powers ChatGPT, launched in March. This release brought new features, such as the ability to analyse documents and longer pieces of text.

Also in March, corporate giants like Coca-Cola began generating ads partly through AI, while Levi’s said it would use AI for creating virtual models. The now-infamous image of the Pope wearing a white Balenciaga puffer jacket went viral. A cohort of tech evangelists also called for an AI development pause.

Amazon began integrating generative AI tools into its products and services in April. Meanwhile, Japan ruled there would be no copyright restrictions on training generative AI in the country. In the United States, screenwriters went on strike in May, demanding a ban on AI-generated scripts. Another AI-generated image, purporting to show the Pentagon on fire, went viral.

In July, worshippers experienced some of the first religious services led by AI.

In August, two months after AI-generated summaries became available in Zoom, the company faced intense scrutiny for changes to its terms of service around consumer data and AI. The company later clarified its policy and pledged not to use customers’ data without consent to train AI.

In September, voice and image functionalities came to ChatGPT for paid users. Adobe began integrating generative AI into its applications like Illustrator and Photoshop.

By December, we saw an increased shift to “Edge AI”, where AI processes are handled locally, on devices themselves, rather than in the cloud, which has benefits in contexts where privacy and security are paramount. Meanwhile, the EU announced the world’s first “AI Law”.

Given the whirlwind of AI developments in the past 12 months, we’re likely to see more incremental changes in the next year and beyond. In particular, we expect to see changes in these four areas.

Increased bundling of AI services and functions

ChatGPT was initially just a chatbot that could generate text. Now, it can generate text, images and audio. Google’s Bard can now interface with Gmail, Docs and Drive, and complete tasks across these services.

By bundling generative AI into existing services and combining functions, companies will try to maintain their market share and make AI services more intuitive, accessible and useful. At the same time, bundled services make users more vulnerable when inevitable data breaches happen.

Higher quality, more realistic generations

Earlier this year, AI struggled with rendering human hands and limbs. By now, AI generators have markedly improved on these tasks. At the same time, research has shown how biased many AI generators can be.

Some developers have created models with diversity and inclusivity in mind. Companies will likely see a benefit in providing services that reflect the diversity of their customer bases.

Growing calls for transparency and media standards

Various news platforms have been slammed in 2023 for producing AI-generated content without transparently communicating this. AI-generated images of world leaders and other newsworthy events abound on social media, with high potential to mislead and deceive.

To improve public trust, the media industry will need standards that transparently and consistently denote when AI has been used to create or augment content.

Expansion of sovereign AI capacity

In these early days, many have been content playfully exploring AI’s possibilities. However, as these AI tools begin to unlock rapid advancements across all sectors of our society, more fine-grained control over who governs these foundational technologies will become increasingly important.

In 2024, we will likely see future-focused leaders incentivising the development of sovereign AI capabilities through increased research and development funding, training programs and other investments.

For the rest of us, whether we’re using generative AI for fun, work or study, understanding the technology’s strengths and limitations is essential for using it in responsible, respectful and productive ways.

Understanding how others – from governments to doctors – are increasingly using AI in ways that affect you is equally important. DM

This story was first published on The Conversation. T.J. Thomson is a Senior Lecturer in Visual Communication and Digital Media at RMIT University. Daniel Angus is a Professor of Digital Communication at the Queensland University of Technology.

Comments

  • John P says:

    How can this get through moderation?

  • O C says:

    As always, the intent of the person using the AI will be the determining factor in the outcomes achieved. As proven in 2023, AI can be manipulated and it does not understand human intent – we do not live in a black and white world.
    Until AI can determine right from wrong, the humans utilizing its services will distort the outcomes to serve their purpose and narrative.

  • Johan Buys says:

    AI relies entirely on scraping billions of articles, research papers, images and pieces of music from the internet, making some changes or sometimes none at all, and presenting this output as new. Without those originally created sources, AI has as much chance of creating an article, image or sound as a monkey in front of a computer.

    That model will be stopped. The crunch legal case is NYT vs OpenAI. The NYT has already shown that if one feeds ChatGPT-4 part of a NYT article and asks it to complete it, the completion is copied virtually verbatim from the original, copyright-protected article. And OpenAI presents this to a paying audience as a piece of artificial intelligence it has authored: that is fraud.

    Without scanning Picasso art or Beatles music, AI cannot produce a new artwork or song in the style of Picasso or Beatles.

    So where this ends, if left unchecked and without new, actually original intelligence, is a world of bastardized rehashing of history.

    AI is vastly exaggerated in both “artificial” and “intelligence”.

  • Peter Utting says:

    I am disappointed to find that Mott MacDonald, the designers of the Channel Tunnel and the Hong Kong Airport, still don’t appreciate, recognise or reward innovation in engineering projects. While working there in the late 1980s, I would often be sought out by Directors for advice and discussion.

    In 1989, after returning from an interview and appointment to a Chair in Civil Engineering at the University of Natal, two Directors approached me while I was training a group of mechanical engineers on using an Artificial Intelligence (AI) system I had set up to simulate emergency operation of the Channel Tunnel in case of fire.

    The Directors asked me which part of the Hong Kong Airport project they should do. Without hesitation, I told them the whole project. They were stunned and asked how. They showed me the Expression of Interest documents and I told them I would set up an Expert System (AI) to allow them to enter all the documentation and interactively plan the project. They asked whether it would work, and I told them it was similar to an approach I had successfully used twenty years earlier with Professor Ronald Woodhead for the installation of ventilation systems in the Concert Chamber of the Sydney Opera House in 1967 (uniquely, my third of the ASCE Top Ten Construction Projects in the 20th Century).

    I set up the system the next day, allowing Mott’s to submit a successful set of Project Documents within weeks and be awarded the Hong Kong Airport Contract well before the end of the year.

    As far as I know, Mott’s have never acknowledged my innovative contributions to construction projects using AI. Perhaps they do not know the power of AI.

    My philosophy is that I don’t believe in anything. If I don’t know I’ll find out. When I’ve found out I’ll know, and can’t believe what I know.
