The fast and the tedious — artificial intelligence tries to take over Formula 1

It was an event poised to showcase the zenith of AI, but it unfolded more like a cautionary tale than a triumph. (Image: Created by ChatGPT)

Would human drivers soon be replaced by algorithms? Reality quickly veered into view.

Under the dazzling floodlights of the Yas Marina Formula 1 circuit in Abu Dhabi, the Autonomous Racing League sought to pioneer a new era of motorsport. Not a single human was driving anywhere on the track. It was an event poised to showcase the zenith of AI, but it unfolded more like a cautionary tale than a triumph. The promise was that human drivers would soon be replaced by algorithms. But the crowd quickly realised that dystopia was not going to be delivered.

The stage was set for a spectacle: eight teams from various corners of the globe, each equipped with a state-of-the-art Dallara racing car loaded with the latest in lidar, radar, cameras and an intricate mesh of sensors. These vehicles, crafted for speed and precision, were programmed to navigate the demanding track autonomously at speeds exceeding 250km/h.

However, reality quickly veered into view. The teams, including the well-prepared squad from the Technical University of Munich (TUM), encountered a string of setbacks.

During the qualifying time trials, the cars, outfitted with cameras and software, struggled to complete a full lap, marred by technical glitches and crashes. Most dispiriting, though, was when cars simply pulled over for no apparent reason and took a little break on the side of the track.

TUM, despite their rigorous preparation and technical acumen, only managed a third-place finish in the time trials due to these unforeseen issues.

Read more in Daily Maverick: AI — what happens when there is nothing left to learn?

The final race, intended to be a seamless display of advanced technology, turned chaotic when one car spun out on the very first lap, triggering a domino effect. The remaining vehicles, programmed for safety first, halted abruptly behind it, unable to navigate the unforeseen obstacle of a car out of place. The race ground to a premature stop, with thousands of spectators witnessing the limitations of AI. It was a stark reminder of the formidable challenge of replicating human intuition and reflexes. But, I can imagine there was also relief in that crowd, because do we really want everything to be replaced and automated?

The aftermath saw feeble humans scramble around to reset their vehicles for another attempt. During this second chance, despite a more cautious start, mechanical and software issues persisted.

Read more in Daily Maverick: Not all languages are equal in the artificial intelligence boom

Amid the technological turmoil, the TUM team, led by Professor Markus Lienkamp and team leader Simon Hoffmann, rallied their collective expertise to address each challenge. Their vehicle, equipped with an array of sensors processing massive data streams, showcased brief moments of brilliance, hinting at the potential that might one day be realised.

The event closed with an exaggerated shrug. Yet, with the right kind of eyes you could possibly see progress. Lienkamp viewed the event as a critical learning experience. He said that it brought them closer to understanding the complex interplay of technology and racing dynamics. Though, apparently plenty of spectators were so bored they left before the race even finished. They wouldn’t even have seen TUM win the race.

I have to say, from developing entirely AI-generated content (like our podcast), it is easier to have a “big red button” approach and make the whole project AI-dependent, but that doesn’t necessarily bring the best results. It is a cute gimmick to say no humans were used in producing a podcast (or driving an F1 car), but the future is certainly a less glitzy mix of people and emerging tech. So, expect to see drivers in F1 cars maybe forever, but for their jobs to get infinitely easier.

What AI was used in creating this newsletter?

I asked ChatGPT to create the image for this newsletter and to help write the main story. Initially, it got the verdict of the F1 race completely wrong. Despite my giving the AI a range of stories and information, it wrote the article as if all had transpired perfectly. It was only after I told ChatGPT that the event had been a disaster that it picked out those negative facts and included them in the story. As always, I had to rewrite the article to remove the generic “ChatGPT voice” that has emerged in the past year.

In the news…

  • The bad: OpenAI is finally paying for news content (but way too late). In a move that feels surprising for modern media, OpenAI is paying the Financial Times for its content. Intuitively this feels like good news, as it cements the AI giant’s ability to ingest training material without legal risk while throwing a few bucks to providers for their work. Though I would say this is too little, too late: basic lip service and good PR for OpenAI after gobbling up oceans of data to train its models for free. It is particularly interesting against the backdrop of The New York Times’s lawsuit against OpenAI (and a whole bunch of other papers suing it). We have a long way to go before we figure out the rules of engagement between AI and content;
  • The good: Deepfakes are being outlawed in the UK. The UK government has already pledged that the creation of sexually explicit “deepfake” images will be made a criminal offence in England and Wales. Next, musicians want to be protected. Professional whiners Mumford & Sons and Sam Smith (plus others) say that AI is taking their voices and faces and is “a destroyer of creators’ livelihoods”. In response, MPs in the UK are scrambling to update their archaic laws for modern times. But what should really have artists riled up is that AI can pump out music so easily at this point that we might not even need their original tunes soon enough.

This week’s AI tool for people to use

I have started testing the various AI platforms to see which one can best plan my day. I have extreme to-do-list-making disease: I will happily make comprehensive lists of what I need to do rather than do any work. I have also found that allowing the internet into these AI platforms hasn’t necessarily been for the best. Microsoft’s Copilot is obsessed with giving out links as if it were Google instead of doing what I asked, but ChatGPT will take your to-do list and build a comprehensive plan, particularly if you tell it your working hours and deadlines. DM

Subscribe to Develop AI’s newsletter here.

Develop AI is an innovative company that reports on AI, provides training, mentoring and consulting on how to use AI and builds AI tools.


