Business Maverick


After the Bell: What if you asked AI about the future of AI?

AI tools and ChatGPT (Photos: iStock | Wikipedia)

Artificial intelligence, it turns out, has a very positive outlook for itself. But the devil is in the detail — and some of its predictions are not entirely impressive. In the end, there are lots of things we don’t know; what we do know is that it’s here to stay.

Perhaps the most salient event in business, and society for that matter, in 2023 was how artificial intelligence burst into public prominence. So a big question for 2024 is whether this will continue.

Well, this is artificial intelligence we are talking about, so let’s ask it. That way we can see if AI is a good predictor of something it presumably knows a lot about – itself!

So I asked ChatGPT: In 2023 artificial intelligence burst into public consciousness. Do you think that will continue in 2024?

Shockingly, or not, ChatGPT was positively exuberant about itself.

Its answer was “Absolutely! The pace of AI development has been consistently accelerating, and it’s likely to continue capturing public attention in 2024.

“New breakthroughs, applications, and debates around AI ethics, governance, and its impact on various industries and society will likely keep it in the spotlight. The advancements in AI are so multifaceted that they touch almost every aspect of our lives, from healthcare to entertainment, economics to education. So, buckle up for more exciting developments and discussions on AI in the coming year!”

This sounds very much like a press release. What about some more probing questions? It asked me: “What specific aspects of AI are you most interested in?”


One of the things I’m interested in, and I presume the entire investment community, is what shares you should buy to take advantage of the AI trend. There is a kind of technical warning that you should take care when investing, but generally it suggests the obvious: Google, Microsoft, Amazon because they are heavily invested in AI research; Nvidia and AMD because they provide hardware; Palo Alto Networks and CrowdStrike for cybersecurity; and iRobot and ABB for robotics.

One of the interesting aspects of large language models is that, after continual questioning, they can sometimes “hallucinate”. But it’s possible that, even at question number two, hallucinations are already evident.

Take Nvidia, for example. The company makes the computer chips that are necessary to create AI models. Crucial, right? It has a market cap of $1.2-trillion and a price-to-earnings ratio of 65.

This is just outrageous. Ten years ago it had a market cap of $10-billion; it’s now the world’s sixth-largest company.

In many circumstances, a 65 price-to-earnings would not be ultra-demanding for a high-growth company, which Nvidia undoubtedly is.

But combined with its existing market cap, there is a good argument that revenue-driven growth would be difficult from here. Just for example, if you wanted to decrease the price-to-earnings to, say, 50 – high, but a little closer to reality – earnings per share would have to double.

That doesn’t seem impossible historically: they have doubled over the past year. But doubling again? Tough call. Perhaps for this reason the Nvidia share price has wavered in a range ever since it exploded in May.
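The multiple arithmetic above can be sketched in a few lines. The $1.2-trillion market cap and the price-to-earnings figures come from the column; the reading that a doubling of earnings goes together with a rising share price, rather than a flat one, is an assumption.

```python
# Back-of-envelope Nvidia multiple arithmetic (figures from the column).
market_cap = 1.2e12   # ~$1.2-trillion market cap
pe_now = 65           # current price-to-earnings ratio
pe_target = 50        # "high, but a little closer to reality"

# Trailing earnings implied by the current multiple: ~$18.5-billion.
earnings_now = market_cap / pe_now

# Earnings needed to reach a P/E of 50 with the share price flat: ~30% growth.
earnings_flat_price = market_cap / pe_target

# If earnings instead double, a P/E of 50 is consistent with a ~54% higher price.
market_cap_if_double = 2 * earnings_now * pe_target

print(round(earnings_now / 1e9, 1))                 # implied earnings, $bn
print(round(earnings_flat_price / earnings_now, 2))  # growth factor, flat price
print(round(market_cap_if_double / market_cap, 2))   # price uplift factor
```

In other words, "earnings would have to double" only follows if the share price keeps climbing while the multiple compresses; at today's price, 30% earnings growth would get the ratio to 50.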

ABB’s share price was up 20% in 2023, but iRobot’s was down by about the same margin, so if AI is going to assist robotics, it’s not visible yet. The same applies to e-commerce and retail: Alibaba had a terrible year, but Shopify had a fantastic one. As that inconsistency demonstrates, AI is not visible in their earnings yet, either.


I suspect where ChatGPT is most interesting about itself is in a field a little more at a tangent: healthcare. The numbers that are being floated are just incredible. One research house, Precedence Research, valued the AI healthcare market at $11-billion in 2021, and projects it to be worth $187-billion by 2030.

The areas where AI could be really valuable are in diagnosis and pharmacology, both of which are very expensive at the moment. In both cases, the initial research delivered good results. A published study found, for example, that AI recognised skin cancer better than the best international dermatologist. Harvard’s School of Public Health found that using AI to make diagnoses may reduce treatment costs by up to 50% and improve health outcomes by 40%.

The prospect is that AI could reduce the need to test potential drug compounds physically, which would result in enormous cost savings. In an article in Harvard Law School’s publication Bill of Health earlier this year, the author, Matthew Chun, said it’s possible that high-­fidelity molecular simulations can run on computers without incurring the high costs of traditional discovery methods.

“AI also has the potential to help humans predict toxicity, bioactivity, and other characteristics of molecules or create previously unknown drug molecules from scratch,” he writes.

But it is worth noting that the two stocks ChatGPT suggests in this space, Intuitive Surgical and Medtronic, have both more or less held their own, but have not increased substantially in the past year.

Rough ride

Okay, another question. What would make the shares associated with AI fall dramatically over the next year? Well, there is a lot, but at the top of the list would be regulatory changes and hype correction. Regulators – in fact, the public at large – are a little freaked out by the speed with which AI is becoming mainstream.

Even the crazy firing and rehiring of OpenAI’s CEO, Sam Altman, in some ways reflects a high level of scepticism, doubt and uncertainty that currently surrounds the industry. The answer to some of the biggest questions surrounding AI is simply that we just don’t know.

We just don’t know, for example, whether AI will be a net job displacer. It certainly seems that way, but, historically, tech innovations have often created as many jobs as they destroyed.

We just don’t know what precisely the security and privacy issues will be, and even whether AI will be a useful tool for creatives, even though the early examples are impressive.

Most of all, we don’t know whether AI can reach something called “the singularity”, a kind of superintelligence beyond human capability.

What we do know is that AI will be one of perhaps the top five crucial issues of 2024. Hold on to your socks; it’s going to be a rough ride. DM



Comments

  • Johan Buys says:

    Here is my prediction: the new court case by NYT against OpenAI will be crucial.

    1. The facts are obvious. Feed ChatGPT4 a portion of an old NYT article, it “completes” the article virtually verbatim as per the real article. There is no AI, it is doing a copy paste, no “intelligence” involved. It is theft / copyright infringement and since ChatGPT4 is a paid service, it is fraud.

    2. A few years back, publishers lost their case about Google digitizing books. That was a major legal misstep, even though the difference is that Google did not in any way present that it produced that content. That is a KEY difference now.

    3. If AI is allowed to scrape billions of images, texts and songs from the internet and make money presenting “new” works as its own, the risk to all of us is that the real creators stop creating. AI is most obviously NOT creating new. Whether it is a painting in the style of Picasso, a song in the style of Beatles, or a very efficient piece of software code that some youngster developed but which is now replicated as if AI developed it.

    People (and investors) are imho chasing a myth if they believe AI is actually creative or intelligent. Without a stolen reference library, ChatGPT cannot produce anything. Or at least not any more so than a monkey with a piano can produce a tune.

    They should rename it PocketMBA = very politely summarize other people’s knowledge.

  • Étienne M says:

    ChatGPT, like data analytics more broadly, is great when you’re scientific in your reasoning and specific in your pursuit. For example, ask a room full of philosophers what the meaning of life is and chances are you’ll leave with a list of inherently contestable and contradictory answers.
    The same is true for any AI.
    Ask broad questions – get broad answers.
    So what’s my point? Well, if you’re questioning progress through automation and/or generative potential—particularly human progress—you’re likely to arrive at two answers:
    1) An inherently contestable one, as discussed.
    2) An answer that we already know, but quicker and more accurately [?].
    These two outcomes present scope for a third, more capricious role, that of innovation. Arguably, the primary purpose of Generative AI…
    But innovation needs to achieve desirable outcomes which remain, for the present, intrinsically human.
    To conclude, if you posit the outlook for AI in industries where outcomes depend more on human “desirability”, and where the contestable nature is inherently changeable, you’ll probably leave with a long list of options you’ll have to choose from anyway.

    • Johan Buys says:

      Etienne: if you asked ChatGPT that question about the meaning of life without the billions of articles, it could not come up with anything at all. It is a large natural language synthesizing model that cannot develop an original thought.

      Watch the NYT case going forward and please read the filings to date.

      Complete joke


