Psychologist Philip Tetlock has an ongoing project with thousands of subjects to assess human predictive powers on a wide range of topics in the broad social, economic and political sphere, such as “Will Russia annex additional territory in the next three months?” or “Will any country withdraw from the Euro Zone in the next year?”. Over time, as world events unfold, he scores predictive successes and failures. Tetlock has teamed up with science writer Dan Gardner to bring the project to a wider audience. If you wanted to, you could probably be one of the subjects.
Tetlock’s goal is not just to show that we’re bad at predicting (although that’s very important to him) but to find out whether there are any cognitive traits that differentiate good from bad predictors. It turns out that there are. In their most recent book, Gardner and Tetlock term the roughly 2% of people whose predictions are around 60% more accurate than everyone else’s “superforecasters”.
What do these people have in common? Education? Expert domain knowledge? Doctorates? Access to classified information? One or another political affiliation? None of the above, actually. Unsurprisingly, they tend to be more intelligent than average, but that doesn’t distinguish them from the experts we recognise. There’s something else going on.
What superforecasters have in common is being a cognitive fox, a reference to a line by the ancient Greek poet Archilochus, made famous by Isaiah Berlin: the fox knows many things, but the hedgehog knows one big thing. Cognitive foxes don’t have a single view or ideology. They are always generating alternative hypotheses, because they are sensitive to new information. As such, they are typically tentative in their views, and will change their minds if new information comes in. They are addicted to information-gathering and will even hunt for information that might contradict their hypotheses. They see the world as a complex place, and they have no single organising principle with which to make sense of it.
Foxes are much more likely to be superforecasters.
Superforecasters come from all walks of life, including relatively low-status ones. They are not more likely to be White House staffers, TV pundits or political analysts. They are not necessarily experts in economics, like Klaus Schwab, nor historians with a grasp of deep history like Yuval Noah Harari. In short, they are not our go-to people when we want predictions.
The contrasting cognitive style is that of the hedgehog. Hedgehogs are so called because they have one big idea, through which they make sense of everything, including new information pertaining to that idea. They tend to discount contrary evidence and accept favourable evidence. They don’t hunt for new information unless it’s about why their opponent’s views might be wrong, and they certainly don’t seek information as to why their own view is wrong. That’s pointless; they know it’s true.
Hedgehogs make great advisers. Their assertions are less likely to be qualified and conditional. They are confident. They know what’s going on, and they’re able to tell you clearly what you should do about it. Hedgehogs are also great debaters. The hedgehog will devote her intellectual energy to faulting her opponent’s argument. The fox, uncertain of her position, is more likely to want to reflect on or modify it, in light of what the hedgehog says. To the observer, it looks like the hedgehog knows what she’s talking about while the fox doesn’t. Who are you going to believe? Public discourse is dominated by hedgehogs: people who are clear, confident – and wrong. Unfortunately, the discourse about the Fourth Industrial Revolution is no different.
Commentators on 4IR have offered wonderfully acute analyses of contemporary social and political trends, and of the way that technology is interacting with them. But I have not seen any of these commentators asking, for example, whether recent improvements in AI might be step improvements that are followed by a plateau of relatively slow progress. This is what happened with space travel between the Apollo era of the 1960s and the advent of Elon Musk’s SpaceX. Why not with AI?
Or maybe new technologies won’t have economically viable applications. Concorde was not the future of intercontinental flight; it was ultimately retired in favour of huge jets that are not significantly faster than the older passenger planes. Perhaps something similar is true of voice recognition. Even if the technology continues to improve, it may turn out that typing is the more economically viable human–machine interface.
Or perhaps the scientific investigations underlying envisaged future technologies won’t yield the understanding we expect. Science is replete with false projections that the answer is just around the corner. Perhaps, just as advances in physics succeeded only in uncovering deeper mysteries, so will advances in biology fail to unravel the supposed “algorithms” of the human brain. As much as science has deepened our understanding of the world, it has brought us ever more starkly into contact with its mysterious deep nature. Science, in any domain, is far from complete.
What excites me about 4IR is the focus on critical thinking and creative skills that it brings. These are sorely lacking. Public debate is without any kind of nuance. And technology does offer both the means and the imperative to address this. There’s a market for hour-long podcasts. And people had better be able to think if they’re going to work out what is going on in a world where change and uncertainty are the norm.
I’m fully behind such prescriptions. But I doubt that many, if any, of the accompanying predictions will come true. And that matters, because these predictions are shaping policy decisions, research funding and even where we invest our hard-earned personal savings.
If we’re serious about critical thinking skills, then we need to apply them at home. Hedgehogs use critical thinking as much as foxes do, but they apply it to their opponents. When evaluating their own views, they aren’t critical: they look for evidence that supports those views and discount evidence that contradicts them. No matter how good you are at critical thinking, it won’t be more than a debating tool if you can’t apply it to your own ideas.
Gardner and Tetlock were in the intellectual limelight with the fox/hedgehog idea a few years ago. Now they’re not. Likewise, it’s only a few years since Daniel Kahneman hit the intellectual headlines by showing how we fall prey to fallacies such as the base rate fallacy and confirmation bias.
These past lessons seem to have been forgotten, at just the time they are most necessary. And if we don’t apply them to 4IR and especially to the predictions upon which we are basing our policies, investments and education, it too will soon be gathering dust on the archive shelf, to be uncovered with bemusement by the intellectual historians of the future – whether human or machine.
Prof Broadbent is the Executive Dean in the Faculty of Humanities, Director of the African Centre of Epistemology of Science and Professor of Philosophy at the University of Johannesburg. He writes in his personal capacity.