BLOG POST

Uncertainty Matters: Investing in Pragmatic Research for Learning Healthcare Systems

In June, Kenyan authorities announced they would use public funds to cover the cost of Xpert MTB/RIF, a point-of-care (POC) molecular test for tuberculosis (TB) previously endorsed by the WHO and supported by development partners such as UNITAID, GFATM, and the BMGF. The large price tag for the Xpert MTB/RIF machine, cartridges, and running costs amounts to roughly 0.2 percent of the country’s total public healthcare budget. Xpert MTB/RIF has been considered a breakthrough innovation because it dramatically shortens the time to diagnosis, and hence to treatment, to approximately two hours, where other tests take several weeks on average, and because it reduces patient attrition. Yet the test has not led to demonstrably better health outcomes or health system savings (see, for example, the UNITAID evaluation citing affordability and the lack of linkage between diagnosis and treatment as causes for concern). Some experts have also highlighted that it is ill-suited to health systems in low- and middle-income countries (LMICs) because it requires constant electricity and air conditioning. Thus, when rolled out in real-world settings, Xpert MTB/RIF has fallen well short of the performance shown in exploratory clinical trials.

Addressing uncertainty in practice: four steps

Even if not the technological panacea originally envisioned, the story of Xpert MTB/RIF can help the global health community and independent national payers think about the inevitable uncertainty that comes with taking an innovation from the lab to the real world, and about how to address that uncertainty. In their newly published PNAS article, Cassidy and Manski (C&M) do just that. They use decision-theoretic modelling to make the case for a policy of “diversification” in the face of what they describe as “deep uncertainty” when clinicians use new technologies in populations different from those studied in trials.

To help understand what happened when clinicians across LMICs ended up using Xpert MTB/RIF, and how this affected (or did not affect) health outcomes, C&M model how a clinician might decide whether to order a TB test and whether to treat a patient for TB, with or without test results. They highlight the role of uncertainty about the prevalence of TB and the accuracy of different tests for patients with different characteristics. They show that, given such uncertainty, a reasonable policy may be to diversify: randomly assigning patients with certain characteristics to different combinations of testing and treatment.
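To make the intuition concrete, here is a minimal, hypothetical sketch of that logic. It is not C&M’s actual model: the scenarios, test characteristics, and welfare weights below are invented purely for illustration. The point is only that when no single strategy wins in every plausible state of the world, splitting (diversifying) similar patients across strategies, and learning from the results, can be a reasonable response.

```python
# Illustrative sketch only -- not C&M's model. All prevalence, accuracy, and
# welfare numbers are invented to show why, under uncertainty, no single
# testing-and-treatment strategy may dominate.

# Two plausible states of the world the clinician cannot tell apart:
scenarios = {
    "low prevalence, accurate test": {"prev": 0.10, "sens": 0.90, "spec": 0.99},
    "high prevalence, weaker test":  {"prev": 0.50, "sens": 0.60, "spec": 0.95},
}

# Stylised welfare weights (assumptions): value of treating a true TB case,
# harm of treating a non-case, harm of missing a case.
BENEFIT_TREAT_TB, HARM_OVERTREAT, HARM_MISS = 1.0, -0.3, -1.0

def expected_welfare(policy, s):
    """Expected welfare per patient for a given policy in scenario s."""
    prev, sens, spec = s["prev"], s["sens"], s["spec"]
    if policy == "treat all":            # empirical treatment, no test
        return prev * BENEFIT_TREAT_TB + (1 - prev) * HARM_OVERTREAT
    if policy == "treat none":
        return prev * HARM_MISS
    if policy == "test, then treat":     # treat only if the test is positive
        tp = prev * sens                 # correctly treated cases
        fn = prev * (1 - sens)           # missed cases
        fp = (1 - prev) * (1 - spec)     # over-treated non-cases
        return tp * BENEFIT_TREAT_TB + fn * HARM_MISS + fp * HARM_OVERTREAT
    raise ValueError(policy)

policies = ["treat all", "treat none", "test, then treat"]
for name, s in scenarios.items():
    scores = {p: round(expected_welfare(p, s), 3) for p in policies}
    best = max(scores, key=scores.get)
    print(f"{name}: {scores} -> best policy: {best}")

# Here "test, then treat" wins in one scenario and "treat all" in the other.
# When neither scenario can be ruled out, assigning shares of similar patients
# to each undominated strategy (diversification) hedges the decision and
# generates the evidence needed to resolve the uncertainty.
```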

Their work relates to another widespread but understudied problem: medical technologies disproportionately undergo clinical trials in high-income countries (HICs), yet are introduced in LMICs with little evidence of how well they work in local systems or of their cost profile when scaled up locally. This makes it difficult for countries to make their own well-informed decisions about what to cover under their expanding universal healthcare coverage schemes, and about what to finance fully or in part, especially as global development partners reduce their support during a period of aid transition.

To make matters worse, the global health community is resistant to acknowledging uncertainty when it comes to endorsing, subsidizing, and encouraging the scale-up of these technologies across LMICs.

C&M’s work carries several important implications for researchers, their funders, and global and country policymakers. Here are four we have teased out:

1. Be honest about uncertainty

First, there can rarely be certainty in judging the effectiveness of an intervention in a real-world setting, whether it is a technological innovation such as Xpert MTB/RIF or a service delivery modality; in this case, concerns were reported from the field early on. What matters is that uncertainty be addressed to the extent possible through further research, and that its implications for budgets and health gains be acknowledged when decisions about adoption and scale-up are made, whether by WHO or by national payers using public funds, as in the case of the South African and, more recently, the Kenyan governments. Indeed, both early and more recent analyses highlight the potentially limited impact of the Xpert MTB/RIF test on budgets and, most importantly, on TB-related deaths. Ex ante models that extrapolate optimistic point estimates to the real world can be misleading if they are not bounded by realistic assessments of the uncertainty that accompanies them. Had all the concerns above been weighed from the outset, would Xpert MTB/RIF have been endorsed and funded so heavily?

2. Endorse pragmatic research

Second, as discussed by C&M, such uncertainty is best resolved through pragmatic research, oftentimes involving randomisation in the real world—what C&M call “diversification.” Only such pragmatic trials (and observation) can capture critical issues such as the behaviour of clinicians, patients and their families, and whole systems, which in turn influence the impact a technology has on health and spending. On clinician behaviour, C&M find that the “[only] partial substitution away from empirical treatment,” which may have accounted for some of the reduced effect of Xpert MTB/RIF in the real world, was “reasonable” given the uncertainty.

More broadly, health system weaknesses, such as electricity shortages, ineffective procurement (leading to cartridge stockouts in some countries), and contracts that disadvantage payers against the monopolist company, diluted the test’s effectiveness. Early hopes that more POC diagnostics would come to the market, increasing competition and putting pressure on prices, did not materialise; the result was effectively a monopoly, with high prices for Xpert commodities and continued use of the test contingent on external financial support. Such issues could have been identified from the outset through pragmatic research. Pragmatic research can also inform policy decisions, as in the case of HOPE4, which looked at managing high blood pressure in a context-sensitive fashion, accounting for the multiple factors that modify the clinical and cost effectiveness of alternative interventions.

3. Invest in a learning healthcare system

Third, pragmatic research can form the basis of a learning healthcare system (LHS): a system designed to generate evidence from policy implementation and care practices and to apply this evidence for better quality and value. Some of what C&M describe can be accomplished by taking advantage of the natural variation in practice that results from uncertainty, and that is common across healthcare systems the world over, to generate useful evidence. Such evidence can then change practice, especially in settings where technology adoption decisions must be based on some credible analysis of the incremental benefits and costs of the new over what is already in place. It may also help to develop an institutionalised health technology assessment function, so far widely adopted across national payers (see here, here, and here for relevant discussion) but less so among global norm-setting organisations such as the WHO or global purchasing agencies such as the GFATM, despite their heavily commoditised portfolios. An LHS would encourage, even mandate, research into the effectiveness of interventions as an integral part of the technology adoption or scale-up process, for example by making coverage conditional on research when there is uncertainty (see the Coverage with Evidence Development model of US Medicare; the Only in Research approach piloted by NICE in the UK; and Market Entry Agreements in LMICs, though these are much less well developed and diffused).

4. Revisit and reform ethics requirements and reporting norms

Fourth, and related to the above, for research to become more relevant to health policy, current ethics guidance must also change to accommodate the need for useful evidence to inform technology adoption decisions (see here for a discussion of how C&M’s diversification could work in “ethically robust learning health care systems”). Further, analysis and reporting ought to be done in ways that allow the evidence to usefully inform decisions. C&M describe, for example, how reporting a test’s positive and negative predictive values (PPV and NPV) may be far more useful to practising clinicians than the current convention of focusing on sensitivity and specificity.
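As a simple illustration of why this matters (the test characteristics and prevalence figures below are assumed for the example, not drawn from the Xpert MTB/RIF literature), the same sensitivity and specificity translate into very different predictive values depending on how common TB is among the patients a clinician actually sees:

```python
# Illustrative numbers only (assumed, not taken from the Xpert MTB/RIF literature):
# the same sensitivity and specificity imply very different predictive values at
# different TB prevalence levels, which is what a treating clinician needs to know.

def predictive_values(sens, spec, prev):
    """Turn sensitivity/specificity plus prevalence into PPV and NPV via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

sens, spec = 0.90, 0.98          # assumed test accuracy, fixed across settings
for prev in (0.01, 0.05, 0.20):  # assumed TB prevalence in different patient groups
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.1%}")

# With the same test, a positive result implies roughly a 31% chance of TB at
# 1% prevalence but about 92% at 20% prevalence -- a difference that reporting
# sensitivity and specificity alone obscures.
```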

Moving forward in the face of uncertainty: an evidence generation network

One can envisage the establishment of a pragmatic evidence generation network across sub-Saharan Africa, building on the successes of the European & Developing Countries Clinical Trials Partnership (EDCTP), while broadening its disease focus to include NCDs and its methods to further endorse pragmatic research. The Theron et al. 2013 pragmatic trial, which suggested Xpert MTB/RIF may not yield the anticipated benefits when rolled out in the real world, was funded by EDCTP. Such a network would place an emphasis, unprecedented for LMICs, on pragmatic real-world evidence and data, including post-marketing surveillance information and the collection of real-world effectiveness and cost data. The EU recognises this as critical, but such evidence is currently lacking for infectious diseases, NTDs, and NCDs in LMIC settings, where this research is most needed. Even in the conventional areas of TB, HIV, and malaria, evidence of real-world impact (as opposed to modelled analyses) is missing.

Evidence of what works in those countries—based on research carried out in Africa and ideally by African institutions and researchers—would strengthen healthcare systems, improve outcomes, enhance efficiency, and help build local African capacity. Importantly, it would help accelerate product introduction, inform decisions during crises such as an Ebola outbreak, and support more legitimate impact evaluations of development partner interventions than we currently see.

How to fund this network?

With the UK investing record amounts of ODA in research and development (R&D)—including health R&D—this could be something British aid could seed fund, as one of us has argued elsewhere. The Wellcome Trust, which has a track record of supporting the development of local research capacity and an interest in driving the development and uptake of good-value innovation across LMICs, is well placed to link research to policy more explicitly and systematically in the geographies in which it is active. Indeed, centres such as the Kenya-based KEMRI have been leading LHS efforts in the region. Ultimately, something like this has to be funded in a sustainable way out of the scarce but growing healthcare budgets of sub-Saharan African nations. Repeated calls for ring-fencing 2 percent or more of LMIC healthcare budgets for health research have not led to real commitments, and the WHO-led R&D Accelerator recommendations focus on a model of top-down coordination led by WHO, making little reference to the end users of research.

But if research is credibly seen as a means of improving people’s health and increasing the value of healthcare spending, and as an integral part of healthcare delivery rather than an unaffordable luxury, then perhaps African leaders will be more willing to make such commitments.

With the 2019 Nobel Prize in economics going to Western economists combating poverty in LMICs through RCT-generated evidence, the time is ripe for the global community and national leaders to invest in a similar yet different kind of capacity: local pragmatic research infrastructure for learning healthcare systems.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.


Image credit for social media/web: DFID/Flickr