Sci-tech

GUEST ESSAY

AI — what happens when there is nothing left to learn?

Modelling. This is the pot of gold at the end of the AI rainbow and the great missing link between humans and ‘human level’ AI.

By some estimates, the voracious generative AI models developed by the likes of Google, Microsoft/OpenAI, Meta and a few others will have slurped up every bit of human text by about 2027, with the world’s photographs, graphics, video and audio not too far behind. There will be some bumps along the way, such as copyright and privacy lawsuits, but let’s roll out the scenario anyway and assume that all recorded history is finally ingested and used to train AI.

What then?

A number of organisations have already started preparing for this end-of-training-data era by getting AIs to output millions of new words, sentences, paragraphs, narratives, poems, images and audio – under the catch-all term “synthetic data” – to feed the beast. It seems to me, however, that feeding AI with AI-generated output is akin to genetically reckless inbreeding, which can only result in AI systems with three eyes, six toes and some serious cognitive issues. 
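
To see why this worries people, here is a minimal sketch of the feedback problem, a toy illustration of my own rather than a description of how any lab actually trains its models: each new generation is “trained” only on a finite sample of the previous generation’s output, so rare items drop out of the sample and can never come back.

```python
# Toy illustration of training on your own output (not any real lab's pipeline):
# each generation only ever sees a finite sample of the previous generation's
# output, so rare items fall out and are lost for good.
import random

random.seed(42)

# Generation 0: a "human" corpus of 1,000 distinct items (words, styles, ideas).
corpus = list(range(1000))

for generation in range(1, 11):
    # The next model only ever sees samples drawn from the previous one.
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
    print(f"generation {generation}: {len(set(corpus))} distinct items survive")
```

Run it and the count of distinct surviving items only ever falls, generation after generation; whatever diversity is lost at each step is gone for good, which is the dry version of the three-eyes-and-six-toes problem.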

In any event, all of this data-ingest and training is eye-wateringly expensive, which means only a few huge companies will be left in charge of the repositories of humanity’s entire intellectual and creative output. This is disquieting in a number of ways, but that’s where we are right now, notwithstanding a lot of work going on to try to democratise the field. 

What are the AIs going to do with all of this data, beyond powering the clever (and useful) chatbots where most of the action is happening now? Surely there are more gems to mine, greater insights to be uncovered, a deeper intelligence lurking in there somewhere?

There is. The average human is exposed to about 500 million words through talking, listening and reading by the time they reach their thirties. AI systems are now being trained on trillions of words. And not one AI system is as smart as a human. Something is missing from the current approach to AI training that leaves these systems unable to do some things a toddler can do, notwithstanding their startling abilities in specific areas.
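
To make the scale of that gap concrete, a quick back-of-the-envelope calculation; the 10-trillion-word corpus is an assumed round figure for a current frontier model, not a number from this essay.

```python
# Back-of-the-envelope comparison; the 10-trillion-word corpus is an assumption.
human_words = 500_000_000           # roughly what a person meets by their thirties
model_words = 10_000_000_000_000    # assumed round figure for a training corpus
print(f"the model is trained on ~{model_words // human_words:,}x more text than a human ever meets")
# -> roughly 20,000x more text, and still no system is as smart as a person
```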

Dr Leslie Valiant of Harvard is a computer scientist famous in some circles – he is a recipient of the Turing Award, which is like a Nobel Prize for computer science research. His most recent book has the wonky title The Importance of Being Educable, and in it he takes on the ambitious task of unpacking human uniqueness. 

In a recent episode of Sean Carroll’s podcast Mindscape, Valiant clearly articulates the gap between human intelligence and machine intelligence and, in doing so, he cuts through a whole lot of nonsense. He describes human “educability” or, more simply, the way we learn, as follows.

There are three ways in which we acquire knowledge. 

The first is by experience. Touch a hot stove as a toddler and you are unlikely to do it again. 

The second is by example. Someone tells you something, or you read it in a book or see it in a film. 

The last method of educability described by Valiant is the one that really seems to distinguish us both from other species and from AI in its current state. It is our ability to build models in our minds and then act on them, revising the model when acting on it doesn’t go as expected.

For instance, consider our internal model for a forthcoming vacation. We first plan for the sequence of events – choosing the holiday, booking, packing, organising our transport and accommodation, doing all the fun stuff and then coming home again. We have a model of how it’s all going to play out. And then we do each step; we act on the model. 

If our return flights are cancelled due to bad weather, we can change our plans midstream and take a train home instead, and we remember to check the weather before our next trip. We are wonderful modellers and remodellers. 
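
For the technically minded, the loop the vacation example describes can be sketched in a few lines; the steps, the cancelled flight and the train fallback below are hypothetical stand-ins, purely to illustrate the model-act-revise cycle rather than any real system.

```python
# Toy sketch of the model-act-revise loop: hold a plan (the model), act on it
# step by step, and revise it when reality refuses to cooperate.
def attempt(step, world):
    """Act on one step of the plan; it succeeds unless the world blocks it."""
    return step not in world["blocked"]

def run_trip(plan, fallbacks, world):
    carried_out = []
    for step in plan:
        if attempt(step, world):
            carried_out.append(step)
        else:
            # Reality disagreed with the model: revise the plan mid-stream.
            alternative = fallbacks[step]
            print(f"'{step}' failed -> revising: '{alternative}' instead")
            carried_out.append(alternative)
    return carried_out

plan = ["choose holiday", "book", "pack", "do the fun stuff", "fly home"]
fallbacks = {"fly home": "take the train home"}
world = {"blocked": {"fly home"}}   # bad weather cancels the return flight

print(run_trip(plan, fallbacks, world))
# And the remembered revision for next time: check the weather before booking.
```

The interesting part is not the code but the structure: a model held in advance, actions checked against reality, and the lesson folded back into the model for next time.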

Where does this leave the field of machine learning? 

AI has made impressive strides in learning by example – that’s where ChatGPT and its friends shine. As for learning by experience, that project is just getting going (some startling, even creepy robots have begun to show their, erm, faces in various research labs, sporting sight, hearing and touch capabilities that help them to learn).

But building complex internal models of the world in order to construct future scenarios, to act on them and then revise them if necessary? That is the pot of gold at the end of the AI rainbow and the great missing link between humans and “human level” AI.

There is a great deal of work being done by some of the smartest people in the world to drag AI up to our level, including model-building. The question of whether this is a good idea, and whether we will be able to control the beast once it is animated, seems somewhat irrelevant in the larger scheme of things. This is what we appear to want, and it is unlikely that anyone will be able to effectively stop, retard or direct the process, even with regulatory pressure. 

The best we can hope for, as we hurtle headlong into this new world, is that we learn something about ourselves. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in South Africa and the Legend Times Group in the UK/EU, available now.

Comments

  • Steve Davidson says:

    Well said Valiant. None of the AI stuff will ever get close to the human brain. It’s just regurgitating what’s gone before.

  • Phil Baker says:

    One thing that rattles me is that humans also have a finite period for learning and then deciding on actions – our life span.
    AI doesn’t – it’s immortal.
    That must have a bearing on how and how much we learn, and what actions we take subsequently…
    AI has no such deadlines or motivations.

  • Ian Mann says:

    I only hope that AI will help our politicians to learn from the mistakes our past politicians have made. Every generation seems to usher in a politician who is either too young to remember the last stupid decision made by a former leader, or has no grasp of history, having never read a history book. The result is that they never ever learn and, having disgraced themselves, disappear onto the speaking circuit, leaving the door open for the next minister. Crazy, but true.
