Evidence-Based Decision Making During COVID-19: How to Navigate Extreme Uncertainty and Urgency

Evidence-based policy generation is an aspiration more than reality at the best of times. With COVID-19 generating extreme uncertainty and urgency of action across the world, it has become harder than ever. This blog considers three areas where both researchers and policymakers need to engage more effectively: asking better questions; being clearer about how well we can answer them; and broadening collaboration and challenge. It concludes with four propositions for better decision-making under great uncertainty: the pursuit of cognitive diversity, regular updating of information, flexibility in decision-making, and a high degree of transparency.

In the context of COVID-19, what questions are policymakers asking?

“How do we protect the health system from being overwhelmed by the coronavirus?”

“How can we keep excess deaths down during the coronavirus pandemic?”

“How do we protect health as effectively as possible over the next year?”

Listening to the science is only useful insofar as the scientists are being asked the most useful questions. Which questions are asked, and which go unasked, shapes the policy response; whether the crisis in care homes–which supplanted hospitals as the central theatre of the pandemic–was a feature or a bug of the UK’s coronavirus response depends on whether policymakers were seeking answers to the first question, or to one of the latter two.

For well-established problems–take youth unemployment–theory and prior research help us to anticipate complications, and to design research and advice in ways that take these complications into account as questions are asked. Even if a policymaker asks only how to minimise youth unemployment, good researchers know they need to consider whether labour market policies that encourage the employment of young people do so simply by replacing older workers with younger ones. For a novel problem, identifying the right questions to ask is especially hard. The scaffolding of prior research and theory is much more rickety, and as a result both policy and research have many more blind spots.

This problem is particularly acute in developing countries. Their economic and healthcare resources are more limited than those of high-income countries, as are the health, education, and incomes of their populations. This makes the trade-offs they face in designing policy very different, and often harder to navigate, with fewer “good” choices available. Asking the right policy questions therefore becomes even more important: with so many sources of large-scale morbidity and mortality to manage, asking how to minimise one without considering the effects on the others may be disastrous.

Getting to the right questions quickly requires engaging a diverse set of viewpoints early and investigating many questions, including long shots. Many of these lines of inquiry will turn out to be dead ends, but because extreme uncertainty limits our ability to predict which will eventually prove useful, a “portfolio” approach to investigating policy questions is likely to pay off, making it easier to change tack as emerging evidence demands.

How well can research answer these questions?

Everything about the COVID-19 pandemic feels exponential: the spread of the virus, the emergence of armchair epidemiologists, and the avalanche of rapid research into it. Medical and economic research alike have been produced at such a pace that even full-time researchers struggle to keep up. One consequence of the enormous appetite for learning about coronavirus in developing countries has been the use of rapid convenience surveys–researchers collecting data from whichever groups of people they were able to contact quickly. This means that most of these surveys describe a slightly different universe of respondents: some are tilted towards more educated respondents; others towards those in the greatest distress; still others towards urban rather than rural populations.

While these studies are all informative, they pose problems for policymakers who need to design policies that work across different groups. Researchers are incentivised to maximise the accuracy of the estimates from their particular sample, and to sell these results as uniquely informative. This pushes researchers to be as careful in their own analyses as they can be, but it means that their focus is on reporting uncertainty around their specific estimate, not the variability of the thing they’re measuring.

A good analogy is how many points a basketball player scores against different teams. We may know how many points a player scores on average against very big, physical teams, but a coach also needs to know how they tend to play against opponents with a different style–who may be small and skilful. While individual research teams have invested in precise estimates of the effect of lockdowns on incomes for specific populations (such as migrants in Nepal), policymakers, in order to allocate their scarce resources most effectively, need comparable data for all the populations they must consider: rural wage labourers, urban informal workers, and so on. The failure to generate and collate comparable data demonstrating the variability of income shocks across these groups has been marked and disappointing. Policymakers should use their funding power to incentivise researchers to coordinate their work and to locate their estimates among the other credible estimates being produced.
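
To make the statistical point concrete, here is a minimal sketch in Python, with entirely hypothetical subpopulations and numbers: each team’s large sample makes its own estimate very precise, but that precision says nothing about how much the income shock varies across the groups a policymaker must weigh.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical subpopulations, each with a different true average
    # income shock from lockdown (share of income lost).
    true_shock = {
        "rural wage labour": 0.55,
        "urban informal workers": 0.40,
        "migrant workers": 0.70,
    }

    estimates = {}
    for group, mu in true_shock.items():
        # Each research team surveys one group with a large sample,
        # so its own estimate is very precise.
        sample = rng.normal(mu, 0.25, size=2000)
        est = sample.mean()
        se = sample.std(ddof=1) / len(sample) ** 0.5
        estimates[group] = est
        print(f"{group}: estimate {est:.3f} (standard error {se:.3f})")

    # The policy-relevant quantity is the spread of the shock *across*
    # groups, which no single study's standard error captures.
    between_sd = np.std(list(estimates.values()), ddof=1)
    print(f"between-group spread (SD): {between_sd:.3f}")

In this toy example the between-group spread comes out far larger than any single study’s standard error–and it is precisely the quantity that coordinated, comparable data collection would surface.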

Who collaborates in setting the questions and answering them?

Both policymaking and research are riddled with silos. In policymaking, different policy areas often proceed in isolation, even when policies pull in opposite directions. Among researchers, cross-disciplinary collaboration is still rarer than most of us would like.

These imperfections are magnified when high uncertainty and rapid policymaking collide. Research and expertise from a single field can dominate policy discussions, with the consequence that decisions are taken without due consideration of trade-offs. This can be true even within broad policy areas like health: in many countries the epidemiological response to COVID-19 has materially impacted ongoing work against other deadly conditions. These trade-offs may be worth making, but they must be made with full recognition of the costs and benefits. This makes collaboration crucial: experts in one field may not ask obvious questions about secondary impacts on another if they’re not used to thinking about them. Would care home deaths have been so high in so many places if frontline workers from that sector had had a voice in decision-making earlier? How much better might India’s migrant labourers have fared if the organisations that represent them had been involved from the start?

The need for speed in policymaking and research generation should be balanced against the pursuit of the cognitive diversity and range of viewpoints that broad collaboration brings. It is difficult to look at the UK’s SAGE group of experts without being struck by how uniform its composition is: it is dominated by medical scientists, statisticians, and those with AI/computing backgrounds. Who offers advice on trade-offs between, say, health and education? The final choice may well be a political decision, but technical advice is still required. In general, these trade-offs have been poorly handled by researchers and policymakers alike, globally. It is surprising, to say the least, that the natural experiments in schooling and COVID-19 transmission in Scandinavia have yielded almost no data on the impact of opening schools on the spread of the disease.

Four ways to support evidence-based policy in the coming months

Dealing with novel problems at pace is difficult, and mistakes have been made and will continue to be made. But weaknesses in the process of generating and acting on evidence have made things worse. To mitigate or resolve these weaknesses, I propose four approaches:

  1. Cognitive diversity in policymaking and open challenge: No individual or discipline can claim a monopoly on insight into the primary or secondary impacts of COVID-19. Policymaking and research will both benefit from a wider range of voices being heard: not just across disciplines, but including practitioners, frontline workers, and the clients of government services. Similarly, minority voices have been given less prominence than we should expect, given the disproportionate burden of the disease they bear. Research shows that cognitive diversity pays off in problem solving. It should be a default principle of government decision-making bodies during crises.

  2. Regular updating of research: Both data and research findings need regular updating in the face of great uncertainty that is being resolved piece by piece. This is not the time for stubborn refusal to accept weaknesses in research, as the controversy over the Lancet’s publication of flawed work on hydroxychloroquine demonstrates. As the data come in, researchers need to update their conclusions–and it pays to keep a “running tally” of the results (see the pooling sketch after this list). And with peer review expedited, open peer review and post-publication criticism must be taken seriously, and concerns addressed.

  3. Flexibility and adaptation in policy: The corollary of updated research and data is updated policy. The phrase “flexible and adaptive programming” too often serves as a poor euphemism for “making it up as we go along.” But under high uncertainty, policy responses should change as new information comes to light. As Stefan Dercon pointed out recently, we must accept that U-turns may be necessary. Policies should be subject to constant review against specified data–an approach taken in Kerala’s much-lauded response. This seems particularly appropriate for school openings, where policy everywhere should be informed by the emerging data as countries reopen schools to different extents.

  4. Transparency of decision-making: The speed at which research is being produced and decisions have to be made makes it inevitable that models, data, or inferences will sometimes become influential despite weaknesses, or that certain costs or benefits will be omitted from decisions. This makes transparency of decision-making crucial. In recent days, the architects of policy in the UK and Sweden have admitted misgivings or mistakes. Publishing advice and data as they are received allows faster challenge and review, and the possibility of catching and rectifying mistakes earlier. Of the four proposals I make, this is perhaps the hardest to square with political incentives, which too often run to obfuscation. It may also be the most important.
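
As a concrete illustration of the “running tally” in point 2, here is a minimal sketch in Python, with entirely hypothetical study results: new estimates are pooled with earlier ones by inverse-variance weighting, so the tally tightens as evidence accumulates.

    # A minimal "running tally": pooling study estimates by
    # inverse-variance weighting as new results arrive.
    # All numbers below are hypothetical.

    def pooled(estimates, std_errors):
        """Fixed-effect pooled estimate and its standard error."""
        weights = [1 / se ** 2 for se in std_errors]
        total = sum(weights)
        mean = sum(w * e for w, e in zip(weights, estimates)) / total
        return mean, total ** -0.5

    estimates, std_errors = [], []
    for est, se in [(0.42, 0.08), (0.55, 0.05), (0.35, 0.10)]:
        estimates.append(est)
        std_errors.append(se)
        m, s = pooled(estimates, std_errors)
        print(f"after {len(estimates)} studies: pooled {m:.3f} (se {s:.3f})")

A real tally would also need to track heterogeneity between studies (a random-effects model, for instance), but even this bare version makes visible how conclusions should shift as each new result lands.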

There is no known recipe for a perfect response here. Mistakes will be made, and in the coming years current research and policy will come under intense scrutiny. It is incumbent upon policymakers and researchers to hew to good practice in advance of that scrutiny.

* This blog benefited from excellent comments from a number of colleagues.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.