We apply the scientific method to generate reliable evidence on the nature of our reality. The scientific method requires, for example, that we gather the right data (reflecting the death rate, for instance); conduct controlled experiments (think of hydroxychloroquine); discriminate among rival hypotheses (lockdown or herd immunity); and replicate experiments (remdesivir treatment). None of this was possible before authorities had to intervene in Covid-19, a new disease and a new pandemic.
Complex problems involve numerous interconnected causes and effects and delayed feedback; health policies involve a complex network of factors: biological, human, technical, economic, social, political, ethical and others. John Sterman, the system dynamics scholar from MIT, pointed out that the evidence may not reflect reality; our decisions may be guided by input other than the evidence; our interventions add further complexity, and they change the nature of the problem.
A “Manhattan Project” approach is one way of being guided by the evidence: a panel of experts secretly provides advice to inform policy decisions based on the evidence it gleans. This approach has been used in South Africa during Covid-19; not unexpectedly, the framing of a war against a viral enemy was confirmed by the appointment of the National Coronavirus Command Council (NCCC). However, the pandemic is not a war-like situation, and a command-and-control approach is unlikely to be the most effective when widespread behavioural change is required of a free society.
Measurement in science cannot provide complete information:
Measurement is an act of selecting a fraction of possible experiences or events of which we are aware; and we may not be aware of important events and thus innocently omit data when we assemble the evidence. For example, it took weeks before epidemiologists in some countries became aware of the numerous deaths of old people in “care” facilities.
Measurement also is accompanied by distortions, delays, biases, and errors:
But that does not mean the information is not useful – we just need to understand and be humble about the limitations of what the evidence does and does not represent. For example, the confirmed number of new cases published each day is not the real rate of occurrence of Covid-19 in the population.
Groups of experts may help overcome the challenges of individual experts, but also can be thwarted:
Groups have their own challenges, even if the participants receive excellent information and reason well as individuals. Defensive routines may be used to save face; untested inferences may be offered in a way that makes them seem like relevant facts; individuals may advocate their positions while appearing to be neutral. Individuals may make strong attributions about other members that are not grounded in evidence or are irrelevant. Of course, the composition of the group is also a crucial factor, eg the presence and role of “Natjoints” on the NCCC rendered an enforcement approach to social change almost inevitable.
Groups of experts, including scientists, are not immune to the problems of groups. On the other hand, some groups appear functional, but may suppress dissent, and even seal themselves off from those with different views and/or from disconfirming evidence. They may suffer from deeply entrenched groupthink: members of a group have a shared mindset and mutually reinforce their current beliefs. This has the potential benefit of efficient collaboration within the group, but the members will struggle to think out of the box and be creative – that requires different thinking that may reside only outside the group.
It is a special and an enviable group that not only encourages diverse views, but constructively harnesses the energy they generate to create new thinking.
Double loop learning is the process by which evidence results in an amendment of our mindsets – the way we think. The same information, interpreted by a different model, yields different decisions and different policies! We tend to protect our mindsets and may not adapt them, despite the evidence; consequently, we may repeatedly develop non-solutions within an entrenched mindset (more or revised regulations to control what people purchase at the supermarket may not change the spread of the virus).
There is abundant evidence that the command-and-control mindset and approach to changing behaviour in society is unpopular and may be ineffective. Prohibition of the sale of tobacco products is failing dismally – and causing the side-effects of crime and damage to the economy.
One reason for the failure to change policy despite the evidence is that change may be costly (not just in monetary terms; ego and reputation too); and some decisions and outcomes may be irreversible. Therefore, continuation of past decisions or ineffective adaptations often overrides needed change or experimentation. Was an extension of the lockdown really likely further to ‘flatten the curve’?
Our judgments, decisions and behaviours:
Are influenced not only by evidence, but also by issues outside the scope of evidence and the remit of science. Ethics and political ideology also guide decisions on how best to manage complex problems such as the pandemic. Should we continue testing randomised samples in the population when we do not have sufficient kits for testing patients and healthcare workers? The answer will be guided by more than evidence. Decision-makers also may be influenced by their personal traits, emotions, reflexes, unconscious motivations, and other non-rational factors such as pressures created by the systems in which they act. The logic of inference is not always simple – as Dan Ariely puts it: we are predictably irrational. When biased mindsets prevail, decisions and policies based on even good evidence are likely to fail the test of rationality.
“Scientists advised Cabinet to go to Level 1”; the WHO suggested gradual re-opening of lockdown; “the government chose middle ground – Ramaphosa”. Was this an example of existing policy guiding the selection of evidence, or wise selection among competing evidence?
Judgment on issues is also strongly affected by the frame in which evidence is presented or by its obvious implications (eg evidence presented to counter extension of the lockdown may have been viewed less favourably than evidence that supports continued lockdown – just because it countered a preconceived notion).
Other latent biases in judgments are common; these include overconfidence (“we’ll develop an effective vaccine by year end”), wishful thinking (“this virus will just disappear”), and confirmation bias (highlight that early results with a technology show promise, but not that the technology has yet to be used for vaccine development).
Social systems contain intricate networks of feedback:
Implementation of social policy affects people. They seek to achieve their own goals and may act to restore the balance that has been upset by the policy (eg buy cigarettes and alcohol on the black market). Citizens also become cynical, non-compliant and actively resistant when they think that those with power and authority manipulate the policy-making process for ideological, political, or pecuniary purposes (consider the outcry about the prohibition measures included in lockdown, and the unproven inferences on the reasons behind the measures).
Citizens’ reactions generate intended consequences (less alcohol-related violence and therefore hospital admissions) and unintended consequences (a boon to the black market). The problem intensifies when those in authority respond to negative feedback (the ban on sale of tobacco products is not working) by pulling harder on their policy levers, thus creating a vicious cycle (we will arrest you if you do not have an invoice for the cigarettes in your car).
Problem complexity hinders the successful implementation of evidence-based policies (eg the shortage of test kits). Even if policy follows the best course of action guided by the evidence, implementation may be distorted by asymmetric distribution of information and understanding in society, private agendas, and game-playing within the whole system. These sometimes counterintuitive behaviours can and often do delay, dilute or derail well-intentioned policies.
Unexpected or seemingly aberrant behaviours in others, in response to the policies, may be attributed to their undisciplined personal habits, attitudes, or failure to follow procedure – implied personal qualities that tarnish the credibility of the “deviant”. The reaction of those in authority may be to blame and scapegoat (investigate Prof Glenda Gray). This reaction provokes resistance and non-compliance (eg a petition by academics in defence of Prof Gray). These unwelcome responses may just strengthen the authorities’ erroneous belief that these “deviants” (including scientists with alternate views) require still greater monitoring and control, and so they resort to forced compliance (“I have been told I may not speak to the press”).
The power of ‘the system’ to shape behaviour is an opportunity for policymakers:
It is an opportunity for them to be humble, less adamant and to share more widely their information and the rationale of their decisions and policy-making; citizens do not take well to being treated as passively compliant, mindless minions.
Policymakers should surely direct their efforts where they have highest leverage: to design enabling systems and processes through which people outside the formal decision-making structures can contribute to decisions and policy and to their successful implementation. Encouraged by influential examples of mutual care and self-discipline, citizens voluntarily can achieve what commands and controls cannot – a real test of leadership!
A fundamental question: do those in authority have the wherewithal to interpret the complex evidence? If they do, are they willing to alter their mindsets if the evidence gainsays their decisions or policies? History reveals that those with the authority to set and/or force implementation of policy may just prefer to exercise authority and demand compliance.
In complex social problems, such as this coronavirus pandemic, there are many unknowns and uncertainties; evidence is seldom without distortions; it is human to make errors of judgement and implementation is often beset by unanticipated problems.
Complex problems do not have right or wrong solutions; outcomes of “solutions” are in the range of good/better to bad/worse. It is sometimes just not possible to know in advance the myriad so-called side effects of implementing evidence-based policies. And the continually changing problem may require continual re-solving in the hope it will attenuate and become tolerable to society (which may just become inured). DM