
The Dynamics of Health Technology Assessment: Is it Just About the Evidence?

November 05, 2019


Decisions about the types of health services and medicines covered under health insurance schemes can spark fierce policy debates and, at times, outright public protest. In light of the politically and emotionally fraught space in which these coverage decisions must be made, many countries have adopted an approach known as health technology assessment (HTA) to review the relevant evidence through a systematic, standardized process for health priority setting that promotes fairness and good value for money. Though there are various definitions of HTA, and different approaches for considering evidence on clinical effectiveness, economics, social values, and ethics, there is a universal emphasis on generating transparent and evidence-informed recommendations for health coverage decisions. HTA, conducted by government or quasi-governmental bodies in most high-income countries and an increasing number of low- and middle-income countries (LMICs), has become a critical step for pricing, reimbursement, and access.

Given these aims, there has justifiably been tremendous focus on trying to imbue the HTA process with methodological rigor and consistent application of the same criteria across health interventions. As a result, we might expect such a process to yield relatively consistent positions on when to adopt certain types of health interventions, from both an internal consistency perspective within a single HTA body, as well as across HTA organizations with relatively similar contexts, criteria, and methods.

Yet, according to most of the comparative data collected to date, this is not the case, even when comparisons have focused on countries that have similar health systems or adjacent geography. Possible explanations for variability in these studies include differences in domestic priorities, methods for analyzing clinical and/or economic data, and weighting of different aspects of the evidence.

But are these the only explanations? Might more nuanced differences, such as variation in the structure, membership, roles, and group dynamics of the independent committees tasked with appraising the evidence, contribute to differences in conclusions and recommendations? What about social, political, or even psychological factors? It would be naïve to consider HTA appraisals immune to the potential influence of interpersonal dynamics, individual members' traits and biases, and the format of deliberations on the outcome of the appraisals. Yet, exploration of how these factors may affect recommendations continues to be an understudied aspect of HTA. To be sure, there is no single common method or guideline for the philosophy, governance, or structure of HTA; in fact, a recent review concluded that there is a dearth of documented best practices on these aspects.

So what does this mean for HTA, which relies on standards for almost everything else it does in the name of fairness and equity?

As more countries move to introduce HTA as a key component of their universal health coverage (UHC) strategy, and countries with existing HTA bodies revisit their processes, it is critically important to better understand how the structure of the HTA body, its membership, and its processes of appraisal and deliberation can impact the quality and consistency of HTA recommendations. Below we outline a set of features that may modify the way evidence feeds through the HTA process to inform health coverage decisions, highlighting where more research is needed.

Underexamined features of HTA dynamics

Committee size and composition

We are certainly not the first to note that the number and type of people involved in the appraisal process, and the interests they bring to the table, clearly have the potential to shape the direction of deliberations and decisions about health technologies (Culyer, 2016 and Sandman & Gustavsson, 2016). This is especially true when appraisal processes and decision rules are more flexible, allowing for consideration of other factors not directly captured by the clinical evidence or calculation of cost-effectiveness. These aspects may also influence how qualitative evidence and public participation feed into appraisals.
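When decision rules are less flexible, they often hinge on a comparison of the incremental cost-effectiveness ratio against a willingness-to-pay threshold. A minimal illustrative sketch of that calculation, with all figures hypothetical:

```python
def incremental_cost_effectiveness_ratio(cost_new, cost_old, qaly_new, qaly_old):
    """Extra cost per additional quality-adjusted life year (QALY) gained
    when replacing standard care with the new intervention."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical appraisal: the new therapy costs 60,000 vs 20,000 for standard
# care, and yields 4.5 vs 3.5 QALYs per patient.
ratio = incremental_cost_effectiveness_ratio(60_000, 20_000, 4.5, 3.5)
threshold = 50_000  # hypothetical willingness-to-pay per QALY

print(ratio)              # 40000.0 per QALY gained
print(ratio <= threshold)  # True: cost-effective under this threshold
```

In practice this ratio is only one input; as the text notes, flexible appraisal processes weigh it alongside clinical, social, and ethical considerations.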

The inclusion of the lay public, which has become increasingly common, brings different perspectives to the deliberation process, but only if the evidence is appropriately accessible, and if the committee dynamics support an adequate platform for these perspectives to be heard. For instance, the inclusion of a single lay member on a 25-person committee otherwise comprised of clinicians, health economists, and scientific experts might be viewed as tokenism, unlikely to give that perspective much bearing on the final recommendation (not to mention the flawed assumption that a single individual could authentically represent the diverse interests of an entire stakeholder group). On the other hand, overrepresentation of a particular type of stakeholder may skew the focus of deliberations toward certain criteria (whether innovation, allocative efficiency, compassionate use, etc.) without due consideration of other important values and evidence. Despite wide acknowledgment that the constituency matters, there are no good benchmarks for the optimal size or stakeholder distribution of appraisal committees. A critical analysis of these aspects would be helpful not only to inform what yields high quality appraisals, but also as input for governments scoping the required resources to establish new HTA units in support of their UHC objectives.

Committee dynamics and role of the chair

Beyond considering how many and which types of people are included in an appraisal committee, the particular dynamics of how those members engage with one another in discussions and deliberations may substantively impact the outcome of these efforts. Attention to dominant voices, the nature of responses and disagreements, and the general comfort level of committee members to speak candidly may be important to understand, particularly when deliberative processes aim for consensus positions rather than majority votes. The committee Chair can also play a critical role in shaping the dynamics of the group, mitigating disruptive behavior when it arises, and working to bring in underrepresented voices. But the Chair may also have additional opportunities to exert their own views and biases, implicitly or explicitly. To date, little empirical investigation has explored the ways in which committee Chairs influence the flow and focus of deliberations, or how different formats of appraisal meetings may contribute to the Chairs' potential influence (for better or worse).

Format of the Appraisal Meeting and Decision-making

This brings us to the meeting itself and how its structure and format contribute to the output of the appraisal. The literature is especially scant with respect to this topic, but it raises many questions. Does it matter how much of the meeting is open or closed to the public, and whether there is a role for external stakeholders to present their views and evidence on the day of deliberation? Are there important differences between HTA bodies that adopt a voting approach versus consensus positions? Are there specific opportunities for committee members to present dissenting opinions? How transparent is the process with regard to the information used, the content of the deliberations, and disclosures of potential conflicts of interest, and how is this balanced with the handling of confidential and/or proprietary information? Additionally, do we observe differences in how committees handle deliberations when the output is a recommendation rather than a binding decision? Of course, all of these questions relate to formalized meeting components, but informal interactions may play a role as well. For example, members of the public may have an opportunity to approach committee members during meeting breaks in some settings, while committees are sequestered in others.

Some reflections on HTA dynamics and the research landscape

So how much do these elements vary in practice? As noted above, despite the recognition of these influencing factors, there has been very little empirical investigation into how these vary across HTA bodies and the nature of the impacts they have on recommendations. In light of these gaps, we wanted to reflect on our own experiences to consider potential impacts and areas for further study.

One of us [Dan Ollendorf] was formerly Chief Scientific Officer at the US Institute for Clinical and Economic Review (ICER), a private HTA body that, in many ways, was inspired by the approach taken by England's National Institute for Health and Care Excellence (NICE), the most widely recognized HTA body worldwide. Despite their similarities, there are some major differences in the deliberative process between the two organizations, as depicted in the table below.

| Process characteristic                 | NICE                                                               | ICER                                 |
|----------------------------------------|--------------------------------------------------------------------|--------------------------------------|
| Payer/industry committee participation | Payer: Yes; Industry: Yes                                          | Payer: Non-voting only; Industry: No |
| Deliberation output                    | Consensus (vote as last resort)                                    | Majority vote                        |
| Role of chair                          | Manage agenda; facilitate discussion; lead deliberation and consensus | Manage agenda                     |
| Deliberation                           | Private (general public excused)                                   | Public                               |

As we have noted, the effects of differences such as these on deliberation results have been understudied. On their face, however, they certainly have the potential to affect committee dynamics in important ways. For example, the lack of robust industry and payer participation in deliberation might preclude frank and honest discussion about implementation or access challenges. On the other hand, committee members might act very differently during private deliberations than they would if those discussions were on full public display.

Beyond comparing the processes used by existing HTA bodies, there are new opportunities to explore these aspects through research to support wider HTA uptake in low- and middle-income countries. As LMICs pursue the establishment of HTA bodies and processes, there is room for experimentation and prospective observational study. For instance, one of us [Carleigh Krubiner] has been co-leading a project in a middle-income country that uses a methodology of "simulated HTA appraisal committees" to examine various aspects of an HTA approach that might be suited to the context. The research team will run five simulated committee meetings across various parts of the country, collecting qualitative data on the use of the decision criteria framework, as well as on the group dynamics across the five pilot meetings. All groups will assess the same health intervention, with the same information feeding into the appraisal. This will enable the research team to further examine influencing factors that vary across different simulated committees. This is just one example of how countries and research partners can embed research in activities supporting the development of HTA processes and institutions.

Next steps

Clearly, we cannot take for granted the various factors that contribute to HTA recommendations beyond the evidence itself. But we currently have a poor understanding of how these factors influence decisions for better or worse and what it means for setting up fair and effective HTA processes.

Here are some suggestions for generating empirical data:

  • Mature HTA bodies should explore their own composition and processes, examining variability among different committees as well as impacts of changes to process and composition over time
  • More comparative work should be done across HTA bodies to evaluate and catalogue aspects of HTA composition, format, and deliberative models, especially among multi-country initiatives using similar evidence inputs (e.g., the European Network for Health Technology Assessment, the BeNeLuxA Group)
  • As new HTA processes are developed and introduced in LMICs, research should be conducted to understand potential implications of different appraisal and committee models on the quality of the assessment and on resource requirements

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.


Image credit for social media/web: Images of Empowerment