Through forecasting the disease burden and comparing intervention strategies, modelling has been a key part of the public policy response to the COVID-19 pandemic.
Governments across the world have justified implementing policies based on science, data, and information gleaned from these models. However, as previous outbreaks have taught us, the science of modelling and forecasting an epidemic is uncertain.
Policies adopted by governments on the basis of disease forecasts will have wide-ranging consequences that extend well beyond the epidemic itself. Without appropriately considering this uncertainty, understanding model limitations, and contextualising local characteristics, policymakers risk misusing models.
Before COVID-19, the 2014-2016 West African Ebola epidemic (EVD epidemic) was one of the most heavily modelled outbreaks in history. Within the first two months of the COVID-19 pandemic, 31 mathematical models had already been developed. Despite the clear differences between the two outbreaks, the EVD epidemic can help us draw lessons to improve COVID-19 modelling and its reach in policymaking. We discuss some of those lessons in this piece.
Coordinated data collection efforts are urgently needed
Recommendations from the EVD epidemic on the collection and use of data for outbreak response are extensive. During the EVD epidemic, detailed individual-level data (e.g. line lists of reported cases) and exposure data (e.g. identifying where/how cases may have been infected) were collected, which proved invaluable in developing mathematical models.
Since 11th March 2020, when COVID-19 was declared a pandemic, the response by the international community to collect and share individual-level and exposure data has been tremendous—especially through the use of online repositories.
Online dashboards from multiple institutions, such as the ECDC, Imperial, Johns Hopkins University, IHME, and LSHTM, can now track new cases in real time and rapidly re-forecast the COVID-19 disease burden in multiple countries. However, as previously seen in the EVD epidemic, differences in how cases are defined and tested have hindered our ability to accurately assess mortality rates and compare the COVID-19 response across countries.
The global health community must improve surveillance, laboratory testing, and diagnostic capacity to ensure robust data, collected in the real world, is available to parameterise models. In early modelling exercises, some modellers assumed parameters (in the absence of information). This was stated openly as a limitation, but it led to greater uncertainty.
Targeted approaches to collecting data and filling those evidence gaps in models should be implemented as a matter of priority. In addition, collaborations between academic groups and national statistics offices should consider standardising the collection, use, storage, and management of data during an epidemic.
Forecasting is most useful when regularly updated
Especially at the beginning of an outbreak, data is often scarce and limited when modelling an emerging infectious disease. This scarcity can lead to tremendous uncertainty. In fact, research shows that forecasts consistently overestimate the number of cases, and forecasts during the EVD epidemic may only have been reliable enough to inform decision-making at horizons of up to three weeks.
This lesson has been readily taken up by the modelling community, as institutions provide weekly forecasts on a rolling basis as new information comes to light. COVID-19 policy reviews within governments are also occurring over shorter time frames. For example, in the UK, ministers are required by law to assess whether the rules are working, based on expert advice, every three weeks.
As more data is generated over the course of an outbreak, modelling needs to be responsive. Timely, regularly updated modelling is more useful to decision-makers. Modelling during the COVID-19 pandemic has, largely, implemented this lesson from the EVD epidemic, and has aimed to provide timely updates and estimates at the national level based on the data available, with some success. However, decision-makers who rely on modelling would benefit even further from clarity about what has been updated, and whether there are plans to integrate new data.
Assessing causality of intervention impacts can allow for improved resource allocation
During the EVD epidemic, many interventions (e.g. air-travel restrictions, reductions in human mobility, differing diagnostics, hygienic funeral rites) were being implemented by different groups and organizations simultaneously, each affecting disease transmission. This diverse response made it nearly impossible to draw firm conclusions about the effectiveness of any single intervention. So, without detailed data on when, where, and how interventions were conducted, modelling studies assessed the combined impact of all interventions in place by comparing transmissibility in the early phase (with no interventions) to that in later phases.
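As an illustration, the early-versus-late comparison can be sketched with a simple growth-rate calculation. The incidence figures below are synthetic, and the relation R ≈ 1 + rT is only a rough approximation valid for modest growth rates; this is not any specific group's published method.

```python
import numpy as np

def growth_rate(incidence):
    """Estimate the exponential growth rate r by log-linear
    regression on daily case counts."""
    t = np.arange(len(incidence))
    slope, _intercept = np.polyfit(t, np.log(incidence), 1)
    return slope

def reproduction_number(r, generation_time=15.0):
    """Rough SIR-style conversion R ~ 1 + r*T; a mean generation
    time of about 15 days is in the range reported for EVD."""
    return 1.0 + r * generation_time

# Synthetic daily incidence: growth before interventions, slow decline after
early = np.array([10, 11, 13, 14, 16, 18, 20], dtype=float)
late = np.array([30, 29, 28, 27, 26, 25, 24], dtype=float)

R_early = reproduction_number(growth_rate(early))
R_late = reproduction_number(growth_rate(late))

# The drop is attributable only to all interventions jointly
combined_reduction = 1 - R_late / R_early
```

Because every intervention in place contributes to the drop from R_early to R_late, this design estimates only their combined effect, which is exactly the limitation discussed above.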
A similar approach has been used when modelling COVID-19, assessing the combined impact of multiple interventions in place. This approach provides less compelling evidence of a causal effect and cannot disentangle the impact of different interventions performed at the same time (e.g. closing schools, self-isolation, and increased adherence to hand washing).
The lessons learned from assessing intervention effectiveness during the EVD epidemic have been difficult to implement during the COVID-19 pandemic. This has resulted in modelling blanket intervention policies that do not take local context into account (for example, social distancing may be impossible in many parts of India), so these policies will not be as effective in some countries.
More focus needs to be given to robustly evaluating both the effectiveness and the feasibility of interventions, which can allow for better control of the epidemic. This includes conducting randomised controlled trials during epidemics and modelling not only cases and fatalities but also the economic, societal, and wider health consequences.
Modelling should address the net impact of COVID-19 and related interventions including policy responses
Emerging infectious diseases and interventions to combat these epidemics, like lockdowns, can have important indirect impacts on health, the environment, and the economy. Studies that show the nature and magnitude of indirect health and economic impacts of the EVD epidemic cannot be ignored.
Over the course of the current pandemic, researchers have already tried to quantify the direct and indirect epidemiological impacts of COVID-19. For example, lockdown-related disruption to essential health services can reduce the coverage of existing interventions for childhood immunisation, malaria, HIV, and TB, leading to additional maternal and childhood deaths.
However, many of these indirect estimates focus only on deaths, as opposed to health-related quality of life, and are disease-specific. Costs, resource use, and assessing whether interventions offer value for money are equally important when health system funds are limited. Unfortunately, no study has determined how cost-effective COVID-19 interventions are compared with other interventions in the healthcare system.
Quantifying the net impact of COVID-19 and the policies that address it through integrated economic evaluation can aid in optimal resource allocation of new and existing interventions— thereby maximising health gains, at minimal cost.
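A minimal sketch of what such an economic evaluation computes is the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of health gained. All figures below are hypothetical placeholders, not real COVID-19 estimates.

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of health gained (e.g. per DALY averted)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical response packages (illustrative numbers only)
status_quo = {"cost": 2_000_000, "dalys_averted": 5_000}
expanded = {"cost": 3_500_000, "dalys_averted": 9_000}

ratio = icer(expanded["cost"], status_quo["cost"],
             expanded["dalys_averted"], status_quo["dalys_averted"])

# Compare against a willingness-to-pay threshold for the health system
threshold = 500  # cost per DALY averted; illustrative only
cost_effective = ratio <= threshold
```

Comparing the ratio against a country-specific willingness-to-pay threshold is what allows COVID-19 interventions to be ranked against other uses of the same funds.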
While it may have been difficult to carry out these economic evaluations, they are urgently needed, especially in low- and middle-income countries, where crowding out of essential health services can have devastating impacts in terms of deaths and disability for decades to come. Further, integrated economic evaluations, which have been implemented extensively in other areas of health, should become common practice for all infectious disease modelling so that it can better inform public policy related to COVID-19 and other communicable (HIV, malaria, tuberculosis (TB), neglected tropical diseases (NTDs)), maternal, neonatal, and nutrition-based diseases.
Worst case scenarios are not appropriate comparators for informing policy action
During the EVD epidemic, many models overestimated the true size of the outbreak. One forecast that gained particular attention at the beginning of the epidemic projected that there might be 1.4 million cases. This number was based on unmitigated growth without further intervention and proved a gross overestimate. Still, it was later highlighted as a “call to arms” that helped trigger the international response to avoid a worst-case scenario.
A parallel can be drawn between these over-estimates and COVID-19. Experience tells us that in many countries, press dissemination of these worst-case scenarios has led to mass panic or abrupt decisions. For example, in India, 1.3 billion people were given less than four hours' notice of the initial three-week lockdown, resulting in adverse impacts such as stranded migrant workers, loss of income and access to food, and people having to walk hundreds of miles home in unbearable heat.
As the COVID-19 pandemic continues, it is already evident that the unmitigated scenario is neither a possible nor a realistic scenario, since it assumes that governments take no action and citizens do not adjust their behaviour as an outbreak progresses. Instead, modelling estimates should communicate appropriately what a "do nothing" scenario represents, and attempt to incorporate national and individual responses (like behavioural dynamics and economics) into disease forecasts.
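One way to see why the unmitigated scenario overstates the burden is to let the contact rate respond to prevalence. The sketch below is a deliberately simplified discrete-time SIR model with made-up parameters, not a calibrated forecast; the feedback form beta(t) = beta0 / (1 + k·I/N) is one illustrative choice among many.

```python
def sir_peak_infected(beta0, gamma, behaviour=0.0, days=365, n=1_000_000):
    """Discrete-time SIR in which the effective contact rate falls as
    prevalence rises: beta(t) = beta0 / (1 + behaviour * I/N).
    behaviour=0 reproduces the unmitigated 'do nothing' scenario."""
    s, i, r = n - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        beta = beta0 / (1.0 + behaviour * i / n)
        new_inf = beta * s * i / n  # new infections this day
        new_rec = gamma * i         # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

# Illustrative parameters: R0 = beta0/gamma = 3
unmitigated = sir_peak_infected(beta0=0.3, gamma=0.1)
responsive = sir_peak_infected(beta0=0.3, gamma=0.1, behaviour=50)
```

Even this crude behavioural feedback substantially lowers the projected peak, which is why "do nothing" curves should be framed as a counterfactual bound rather than a forecast.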
Ensemble modelling outperforms any individual model
Decision-makers are often tasked with trying to build policy based on multiple evidence sources. Across countries, many institutions have used expert elicitation methods to synthesise information from multiple COVID-19 model estimates (e.g. SAGE in the UK). Many initiatives, like modelling consortia for HIV, NTDs, malaria, other epidemics, and now COVID-19, have also been created to validate modelling methodologies and forecasting estimates.
However, from the EVD epidemic, we know that ensemble forecasting—the statistical averaging of multiple model outputs—consistently outperformed any individual model.
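A minimal sketch of what this statistical averaging involves, using synthetic numbers from three hypothetical models:

```python
import numpy as np

# Synthetic three-week case forecasts from three hypothetical models
forecasts = {
    "model_a": np.array([130.0, 160.0, 200.0]),  # tends to overestimate
    "model_b": np.array([100.0, 130.0, 170.0]),  # tends to underestimate
    "model_c": np.array([115.0, 150.0, 185.0]),
}
stacked = np.vstack(list(forecasts.values()))

# Unweighted ensemble: the simple mean across models at each horizon
# (weighting models by past forecast skill is a common refinement)
ensemble = stacked.mean(axis=0)

def mae(pred, obs):
    """Mean absolute error of a forecast against observed counts."""
    return float(np.mean(np.abs(pred - obs)))

observed = np.array([115.0, 145.0, 185.0])
errors = {name: mae(f, observed) for name, f in forecasts.items()}
errors["ensemble"] = mae(ensemble, observed)
```

Here the individual models' opposing biases partly cancel, so the ensemble's error is lower than any single model's, mirroring the EVD finding; with real forecasts this advantage holds on average rather than in every instance.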
To date, only a handful of studies have aimed to use ensemble modelling, and there have been no attempts to statistically average modelling outputs from different institutions during the COVID-19 pandemic, likely due to the modelling community wanting to rush results to publication to be "first past the post."
An unbiased global platform allowing for modelling groups to upload model outputs for ensemble methods may be an extremely valuable tool for future outbreaks. However, such ensemble methods should be based on a prescriptive reference case when modellers want to inform policy.
Many lessons, like collecting and sharing individual-level and exposure data, or responsiveness in modelling, have already been rapidly implemented. However, other lessons, like those on intervention effectiveness, net impacts, or ensemble modelling, have not yet been applied. There is still an opportunity to integrate these lessons as new models inform exit strategies around the world.
However, modelling is not (and should not be) the only piece of evidence informing the pandemic policy response. Decision-making frameworks can be useful in highlighting the rationale for decisions and can provide transparency on how modelling has, or has not, influenced public policy. A transparent, independent, and consultative policy process that uses modelling as one of many inputs, and that prescribes the standards and scope of modelling exercises (as opposed to being driven by them), is essential for modelling to have a positive impact on decisions.