Business Maverick


After the Bell: A subtext to the OpenAI debacle — Sam Altman’s kooky economics

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on 6 November 2023 in San Francisco, California. (Photo: Justin Sullivan / Getty Images)

The ideas are big, his tone is declarative, his solutions permeated with high morality, and I hate to say this, but it’s all kinda spacey.

Over the past fortnight, the international news about artificial intelligence (AI) has been in a tizzy: perhaps the leading character in this world-shifting new field, Sam Altman, was dramatically fired and then equally dramatically rehired as CEO of OpenAI. It’s been, as we all know, a cluster … er, debacle. Members of the board that fired him were largely fired themselves, and the 500+ staff of the organisation mostly threatened to resign, then didn’t.

Apart from the sheer dramatic comedy of the events, one of the oddities is that we don’t really know what reason the board had for his firing in the first place, apart from its statement that he had not been completely candid with the board. 

Manifold, the prediction markets website, asked participants to bet on the reason the drama began. Revealingly, even now that the issue is technically done and dusted, no clear consensus has emerged. Some people think he was ousted for trying to remove a board member, Helen Toner. Many are betting that we won’t know for a year. There is some support for the notion that there was a philosophical argument about AI safety.

My bet is that at root, there was a subterranean battle between those who wanted to build a profitable business, led by CEO Altman, and the company’s non-profit board. It so often happens that startups want to change the world until precisely the moment when the money truck backs up at the loading zone.

What strikes me about all this is how little we know about the people who are suddenly at the forefront of what are sure to be enormous changes to our world. The sense was enhanced when Daily Maverick’s CEO, Styli Charalambous, sent me a blog post by Altman from 2021.

The post was titled Moore’s Law for Everything and it essentially proposed some ideas about how to deal with inequality, especially if the AI revolution upends traditional value systems. The post is imbued with ideas rooted in the effective altruism movement. It also shows how much Altman is worried about the dramatic economic consequences of AI, rather than the “end of the world” scenarios associated with its slightly unhinged critics.

The ideas are big, his tone is declarative, his solutions permeated with high morality, and I hate to say this, but it’s all kinda spacey. Altman basically believes there is going to be an enormous swing from a world where labour is dominant to one where capital is dominant. 

“AI will lower the cost of goods and services because labour is the driving cost at many levels of the supply chain. If robots can build a house on land you already own from natural resources mined and refined onsite, using solar power, the cost of building that house is close to the cost to rent the robots.”

Hence, we will have to find new ways to deal with inequality because the traditional way to address it has been by progressively taxing incomes. He proposes — and I am not making this up — the creation of the American Equity Fund, which would be capitalised by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund, and by taxing 2.5% of the value of all privately held land, payable in dollars.

Wowzer. You know, it’s nuts … but I like it. My own proposal for wealth redistribution is a much more modest version, but based on broadly the same idea. The government should put collected dividend taxes into a fund that would then be reinvested in the stock market on behalf of the population. My back-of-the-cigarette-box calculation is that if the government had adopted this proposal in 1994, all SA families would by now be getting a dividend cheque every month of around R1,000. That’s not a king’s ransom, but if you increase dividend tax just a bit, the amount would treble — and it’s forever.

So, what’s wrong with Altman’s much more ambitious idea? Well, frankly, a lot. Taxing companies in newly issued shares looks appealing at first glance: it seems like a free tax on capital. Companies issue shares all the time and, if the amounts are modest, investors mostly ignore them.

But every issue of shares technically reduces return on equity and, consequently, in the ultimate analysis, it’s really not that different from cash. The value of the company is being diminished and, at the rate Altman is proposing, it will hurt a lot. The value of companies would halve over 20 years and, perhaps more importantly, the wealth machine that is the stock market will be smashed into reverse.
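A quick sketch of the arithmetic behind that “halve over 20 years” figure. The simple, non-compounding reading (2.5% × 20 years) gives 50%; if instead, as I assume here for illustration, the 2.5% levy applies each year to whatever stake the original shareholders still hold, the transfer compounds to roughly 40%:

```python
# Sketch of the dilution arithmetic for a 2.5% annual share levy.
# Assumption (mine, not the column's): under compounding, each year's
# levy is taken from the shareholders' remaining stake.

RATE = 0.025
YEARS = 20

# Simple (non-compounding) reading: 2.5% x 20 years = 50% transferred.
simple_transfer = RATE * YEARS

# Compounding reading: the remaining stake shrinks geometrically.
remaining = (1 - RATE) ** YEARS          # ~0.603 of the original stake left
compounded_transfer = 1 - remaining      # ~0.397 transferred to the fund

print(f"simple: {simple_transfer:.0%} transferred")
print(f"compounded: {compounded_transfer:.1%} transferred")
```

Either way, the order of magnitude is the same: over two decades, something close to half of existing shareholders’ value migrates to the fund.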

The more fundamental problem is this notional split between labour and capital. The labour theory of value is such a tantalising idea. It’s the one conceptual notion I know of shared by Adam Smith and Karl Marx. And yet, it’s also very tricky. If it were true that the value of products is contingent on the labour required to create them, then Toyota Corollas would be worth the same as Ferraris.

The problem is that you can argue it both ways: you could say all tax is currently derived from labour since personal tax comes off income; VAT comes from what people buy from what they earn; and corporate tax is the result of the value workers create. Or you could say all tax is derived ultimately and only from capital because wages, corporate tax and VAT are derived from the process of corporate capital accumulation.

Altman seems to think that AI is going to make capital much more efficient and labour much less consequential. I think most economists would agree that this is a very rudimentary understanding of how economics works; capital and labour are not mirror images of each other. They interact in complex ways, which, frankly, we struggle to understand. It’s by no means clear to me that Altman is correct that global stock markets are going to be enormously boosted by AI. But who knows; he could well be right.

Anyway, I think the good part of the idea is that it would give ordinary people an avenue into the greatest wealth creator we have ever invented: the stock market. If you look at how inequality in a country is calculated, it’s obvious that the top 10% are heavily invested in stock markets around the world, which of course stands to reason.

When you have more money than you need to pay for the basic items you need, and for a respectable quantity of luxuries, you would naturally tend to concentrate on investments. When markets decline, the top 10% are suddenly only grotesquely richer than the rest of us, rather than outrageously richer than the rest of us. Or the other way around … you know what I mean.

The real takeaway from Altman’s treatise is how offbeat and eccentric his ideas are, presumably driven by his offbeat and eccentric job, which of course was especially offbeat and eccentric over the past few weeks. But if the people within the AI industry are thinking this way, it’s only a matter of time before these ideas become the subject matter of the new generation. Get used to it. DM


Comments

  • Bruce Sobey says:

    You say “the ‘end of the world’ scenarios associated with its slightly unhinged critics”. I am not so sure that they are so unhinged. From my reading, it would appear that there are very real reasons for concern. I suggest reading the Uncharted Territories author Tomas Pueyo, who does a detailed analysis in his article “OpenAI and the Biggest Threat in the History of Humanity”. One needs to read that article for the full story, but basically, if the target that AI is trained to optimise does not align with ours, its optimisation may lead it to take steps to do away with us. And there are surprising ways that it may be able to do this.

  • Paul T says:

    Apart from the apocalyptic destruction of useless humans by the superior machines, the biggest threat of AI, and technology in general, must be that it shifts wealth from labour to capital. So when machines can write books better than authors, and can churn them out at a much faster rate, all the money people pay for books goes to the owners of the machines and the authors are on the street, as their creativity and energy are now worth close to zero. In a world dominated by machines, the author would struggle to reinvent themselves fast enough to add value elsewhere. So the idea of a redistribution mechanism back to the citizenry is not a bad one. In a world where the divide between the haves and the have-nots is increasing, wealth redistribution should be high on the agenda, or else the have-nots will come for the haves, as has happened many times in history.

    On the other hand, in a world in which the machines do all the work, controlled by a tiny group of people, with everyone else sucking passively off the generated value, I shudder to think what those idle minds will produce.

    Still so many questions, not enough answers.
