I recently read an exchange on social media. The original poster, an author, was asking her friends about Anthropic’s recent R25.67-billion ($1.5-billion) settlement – the amount offered to authors whose work had been used without authorisation in the training of its AI chatbot, Claude. The responses swirled around who is entitled to make a claim, how to make a claim and how much an individual author could expect.
But that part was less interesting to me than the reply from one poster, who asked, “Who’s Anthropic when they’re home?” Identifying Anthropic as the maker of the Claude chatbot was no more enlightening.
I know about this lawsuit in detail because I write about tech a lot, and so I read about tech a lot. I listen to podcasts, follow smart YouTubers, Substackers and tweeters. Of course, I understand that most people have other matters that interest them, and could not be less interested in keeping up with the latest developments in the very frothy world of AI tech.
‘Yawning inside-outside gap’
Which brings me here: there is a popular tech podcast from the New York Times called Hard Fork. In a recent episode, co-host Casey Newton said this: “I follow AI adoption pretty closely, and I’ve never seen such a yawning inside-outside gap”.
Innovation in AI has accelerated so dramatically since ChatGPT’s first launch in November 2022 that it is hard to find a descriptor. The cliché “firehose” doesn’t come close. Not only in the frontier models like ChatGPT and its 10 or 15 or so US and Chinese competitors, but in the myriad other elements of the great AI stack – especially useful AI applications for the average citizen.
It is tempting to compare this with the early Internet.
In the late 1990s and early 2000s, the gap between internet “haves” and “have-nots” was significant, but its nature was largely access-based. The primary differentiator was connectivity – who had a dial-up modem and who did not. Once online, the core user experience was relatively uniform: email, web browsing on Netscape or Internet Explorer, and nascent search engines.
A gap certainly existed – some were more adept at finding information or building Geocities pages – but the fundamental toolkit and its potential were broadly similar for most users. The internet was a new place to visit, a powerful library and communication channel, but if any individual chose not to engage or to rarely engage (or didn’t have access), they were not unduly prejudiced in life.
But AI is different. The emerging divide isn’t merely between people who “use AI” and people who don’t. The real gap is between those who treat AI as a query engine – an unusually fluent search bar – and those who treat it as a toolkit: a set of reasonably reliable interns, editors, analysts, tutors, coders, designers, planners, graphic artists, writers, simulators and shoppers that can be orchestrated into infinite workflows simply by using English (or other human language) instructions.
Widening separation
Here’s the bottom line. Most people on the planet don’t know and don’t care about almost all of this. Like the social media poster, they haven’t ever heard of Anthropic (even if they occasionally use ChatGPT), and probably never will. Which leaves those who do know and do care and do engage with this stuff in a very peculiar place – assembling and mastering tools of unprecedented power across multiple domains into their personal lives, profoundly separating themselves from the rest of the population.
And the distance between those groups is getting wider because the gains are not linear. They are compounding. The up-to-date AI users don’t just do tasks faster; they do different tasks entirely, and they do them with a casual swagger. I am not one of these. While I tinker more than most, I don’t hold a candle to those who are swimming in the deep end.
They find creative, fun and learning opportunities that others don’t. They automate the boring bits of life – admin, scheduling, summarising, comparing, drafting, formatting – until the friction that most people accept as the natural tax of adulthood starts to evaporate. The time they save is reinvested in learning the next tool, creating better prompts, refining templates, building libraries of reusable instructions, and stitching together small automations that either quietly eliminate another hour of drudgery or unearth exciting new experiences. It’s not that they’re smarter than us – it’s that they’re running a different operating system for their lives.
The people who don’t keep up aren’t irrational. They are simply busy with other stuff. They are not paid to experiment; they may not have the interest or incentive to get into the weeds. They suspect – often correctly – that half the hype is the overenthusiastic chatter of vendors, tech nerds and zealots.
Self-exclusion
They also carry a normal human fear: if I start relying on this thing, will I become incompetent without it? Will my boss expect more? Will it get my facts wrong? Will it leak my information? Will I look foolish? Will I understand the tools? These are sensible questions. The tragedy is that “sensible” can become a form of self-exclusion when the curve is steep.
The early internet divide never got quite so sharp because the internet’s value was obvious and social. If your friends were on email, you joined. If your airline tickets were cheaper online, you joined. If your bank required online access, you joined. It was a mass migration driven by network effects, and the late adopters were pulled along by gravity.
AI adoption doesn’t have the same automatic drag – because the biggest benefits are private, procedural and personal. The best AI workflows look like invisible scaffolding around someone’s life. None of it is socially contagious like “send me an email” was.
So yes, we should expect polarisation. Not in the cartoonish sense of a tiny priesthood of AI sorcerers and a mass of peasants banging rocks together. But in a more mundane, more plausible way: a small group accrues disproportionate practical power – more knowledge, more time, more fun, more options, more resilience – and everyone else uses AI like they use Google: occasionally and reactively.
We’re sleepwalking towards a new form of inequality – not based primarily on wealth or education in the traditional sense, but on willingness and ability to expand one’s cognitive capabilities through AI tools. And unlike the internet gap of 2003, this one looks likely to define life opportunities. DM
Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg, a partner at Bridge Capital and a columnist-at-large at Daily Maverick. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in SA and Legend Times Group in the UK/EU, available now.