GUEST ESSAY
The tortoise and the hare revisited – the inevitable failure of AI regulation
AI is nothing if not a slippery customer – the field will simply expand organically and any attempts at regulation will be frustrated or swept aside. So, what will such a world look like?
Growth in the regulation industry feeds on the new, the novel, the unprecedented. Some new tech or device, invention, behaviour or process appears which does not seem to fit neatly into any existing boxes and, inevitably, someone decides that it has to be contained and constrained with the force of law.
So the powers that be (consultative bodies in partnership with legislators, politicians or regulators) get to work considering the matter at hand, researching and arguing, reading and debating, before finally writing long draft recommendations. Then come the feedback rounds and revisions, and eventually all of it is shoved into a long tunnel of administrative hell which, much, much later, ends up on someone’s desk for signature. With that signature begins a whole new decision-making process – how to police the new rules and how to sanction breaches.
Anyone who has watched this process unfold at close quarters would have found themselves, on occasion, losing the will to live. In a previous life I brushed up against one such process. It was trying.
On the other hand, it generally works quite well for those of us who have to interact with the new thing. Because a new thing which is regulated knows its place in the world, as do its guardians, and the rest of us are, at least most of the time, its beneficiaries.
Like who can keep our personal data and for how long. What age one may drink. Or drive. Who can vote, and how. How taxes should be levied and paid. How many nuclear bombs one can make, and who can make them. Whether scientists can mess around with certain kinds of genetic engineering. You get the picture.
But the world has changed.
The change began with the explosion of new digital technologies in the 1970s. It was catalysed by the first very large-scale integration (VLSI) chips, which begat PCs and cellphones and fast servers and the internet and apps and all the rest. But the really new thing here was not the laptop or smartwatch or whatever. It was that everything connected with this rush of new technologies got cheaper and faster. Much cheaper and much faster, continuously, year after year, including the rate of innovation itself.
So much so that the speed of innovation in digital technologies got its own memetic law, Moore’s Law, which predicted that the number of transistors on a chip would double roughly every two years. It was then applied promiscuously to everything else, and it remained remarkably accurate for 40 years.
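The compounding that Moore’s Law describes is worth pausing on, because it is what made everything "much cheaper and much faster, year after year". A minimal sketch of the doubling arithmetic, using Intel’s 4004 (1971, roughly 2,300 transistors) as an illustrative starting point – the function name and parameters are mine, not from any standard library:

```python
def projected_transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count forward, assuming a fixed doubling period."""
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

# 40 years of doubling every two years is 20 doublings: 2**20,
# roughly a million-fold increase over the starting count.
print(projected_transistors(2300, 1971, 2011))  # ~2.4 billion
```

The point of the arithmetic is the essay’s point: a fixed doubling period looks gentle over any one budget cycle, but compounds into a million-fold change over a regulator’s working lifetime – and the author’s argument is that AI is now moving faster even than this.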
But the rate of change in AI is way too fast for Moore.
Pause for a moment to consider the fact that AI, which has been determinedly grinding on since the 1950s, had its big moment a little more than a year ago. It was not so much a new development of the technology (the smarts behind ChatGPT had been around for a while), but it was its breakout into “general” usefulness. It was simple to operate, spoke English (and other languages) and was of immediate benefit across almost every field of human endeavour, notwithstanding its staggering into potholes and crashing into walls every now and again.
Regulators, who have been wrestling with AI for many years, are suddenly feeling the sting of urgency. For instance, the UK National AI Strategy, a carefully researched, well-written and persuasively argued document, was in the works for years before its release in October 2021, a year before ChatGPT. It was considered a model policy paper for future legislation. It did not even mention generative AI. A mere year after publication, the document was suddenly incomplete, useless until updated.
It gets worse. The weekly (and sometimes daily) torrent of innovations within the AI world is challenging for even the most dogged observer to fully digest and understand. Not only new types of LLMs (or, more broadly, “foundation” systems), but massive advances in AI chips, other non-LLM machine learning approaches, new architectures, training data and data storage.
And applications! Forget about chatbots. Choose a field, any field. AI is burrowing in there using wildly different approaches. There are scores of announcements daily, way beyond even the early days of the internet, when at the very least one had to know how to code in order to build an app. Now anyone can dream up an AI application, develop it and launch it in days with almost no programming knowledge at all.
Your correspondent keeps himself glued to this daily torrent of new stuff, and it’s pretty much a fool’s errand. There is simply too much happening; the forest gets lost for the trees. Where does this leave the attempt to regulate the field, with its stolid and slow-moving approach requiring consultation, collaboration, feedback, research and compromise? No one at the forefront of research is going to stop and wait for it, and nor is anyone going to wait in the marketplaces of the digital world.
The field will simply expand organically, and any attempts at regulation will be frustrated or swept aside, because no one can really constrain its development, especially at a global level. Research and deployment of AI will probably be decentralised, borrowing directly from the technologies of blockchain and crypto, whose entire purpose is to escape the various embraces and prisons of regulation.
Will there be any regulation at all? Of course. Regulators will control what little they can, as they have with the internet (where they failed to contain harmful content) and with crypto (where the technology aggressively resists oversight and different countries take wildly different approaches). But AI is nothing if not a slippery customer, and the likelihood is that it will constantly wriggle out of reach.
What will a world with toothless AI regulation look like? I am not sure we know the answer to that any more than we know how many new things will be pouring out of the AI hosepipe tomorrow, next week, next month, next year.
What we do know is that regulation and AI innovation are rewriting the old fable of the tortoise and the hare, but in this version the hare is not going to stop for a nap. DM
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book It’s Mine: How the Crypto Industry is Redefining Ownership is published by Maverick451 in South Africa and Legend Times Group in the UK/EU, available now.