AI — what happens when there is nothing left to learn?

Modelling. This is the pot of gold at the end of the AI rainbow and the great missing link between humans and ‘human level’ AI.

By some estimates, the voraciously hungry generative AI models developed by the likes of Google, Microsoft/OpenAI, Meta and a few others will have slurped up every bit of human text by about 2027, with the world’s photographs, graphics, video and audio not too far behind. There will be some bumps along the way, such as copyright and privacy lawsuits, but let’s roll out the scenario anyway and assume that all recorded history is finally ingested and used to train AI. 

What then?

A number of organisations have already started preparing for this end-of-training-data era by getting AIs to output millions of new words, sentences, paragraphs, narratives, poems, images and audio – under the catch-all term “synthetic data” – to feed the beast. It seems to me, however, that feeding AI with AI-generated output is akin to genetically reckless inbreeding, which can only result in AI systems with three eyes, six toes and some serious cognitive issues. 

In any event, all of this data-ingest and training is eye-wateringly expensive, which means only a few huge companies will be left in charge of the repositories of humanity’s entire intellectual and creative output. This is disquieting in a number of ways, but that’s where we are right now, notwithstanding a lot of work going on to try to democratise the field. 

What are the AIs going to do with all of this data, beyond the clever (and useful) chatbots, which is where a lot of the action is happening now? Surely there are more gems to mine, greater insights to be uncovered, a deeper intelligence lurking in there somewhere?

There is. The average human is exposed to about 500 million words through talking, listening and reading by the time they reach their thirties. AI systems are now being trained on trillions of words. And not one AI system is as smart as a human. Something is missing in the current approach to AI training that leaves these systems unable to do things a toddler can do, notwithstanding their startling abilities in specific areas. 

Dr Leslie Valiant of Harvard is a computer scientist famous in some circles – he is a recipient of the Turing Award, which is like a Nobel Prize for computer science research. His most recent book has the wonky title The Importance of Being Educable, and in it he takes on the ambitious task of unpacking human uniqueness. 

In a recent episode of Sean Carroll’s podcast Mindscape, Valiant clearly articulates the gap between human intelligence and machine intelligence and, in doing so, he cuts through a whole lot of nonsense. He describes human “educability” or, more simply, the way we learn, as follows.

There are three ways in which we acquire knowledge. 

The first is by experience. Touch a hot stove as a toddler and you are unlikely to do it again. 

The second is by example. Someone tells you something, or you read it in a book or see it in a film. 

The last method of educability described by Valiant is the one that really seems to distinguish us both from other species and from AI in its current state. It is our ability to build models in our minds and then act on them, revising the model if it doesn’t work so well when we act on it. 

For instance, consider our internal model for a forthcoming vacation. We first plan for the sequence of events – choosing the holiday, booking, packing, organising our transport and accommodation, doing all the fun stuff and then coming home again. We have a model of how it’s all going to play out. And then we do each step; we act on the model. 

If our return flights are cancelled due to bad weather, we can change our plans midstream and take a train home instead, and we remember to check the weather before our next trip. We are wonderful modellers and remodellers. 

Where does this leave the field of machine learning? 

AI has made impressive strides in learning by example – that’s where ChatGPT and its friends shine. As for learning by experience, that project is just getting going (some startling, even creepy robots have begun to show their, erm, faces in various research labs, sporting sight, hearing and touch capabilities that help them to learn).

But building complex internal models of the world in order to construct future scenarios, to act on them and then revise them if necessary? That is the pot of gold at the end of the AI rainbow and the great missing link between humans and “human level” AI.

There is a great deal of work being done by some of the smartest people in the world to drag AI up to our level, including model-building. The question of whether this is a good idea, and whether we will be able to control the beast once it is animated, seems somewhat irrelevant in the larger scheme of things. This is what we appear to want, and it is unlikely that anyone will be able to effectively stop, retard or direct the process, even with regulatory pressure. 

The best we can hope for, as we hurtle headlong into this new world, is that we learn something about ourselves. DM

Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in South Africa and the Legend Times Group in the UK/EU, available now.

