
Sharing Research Results with Participants: An Ethical Discussion

It is widely recognized that social science research involving human participants should be based on the principles of “doing good” and “mitigating harm.” While research results are often shared with academics and policymakers alike, it is unclear whether, and how often, they are shared with the participants themselves. A key question, then, is whether there is also an obligation to share research results with participants. Under what circumstances is it desirable to do so? How much information should be shared, and when? And what form should the information take?

This blog post aims to address these questions in several ways. First, we discuss the ethical considerations around sharing research results with participants, drawing upon lessons from the medical literature. Second, we use examples of results-sharing from a number of projects in development economics and outline the different modalities by which results can be shared. Finally, we use this foundation to propose a framework for researchers to consider when deciding whether and how to share their own research results.

To share or not to share, that is the question

While researchers often believe in access to information, there are reasons why information-sharing can be harmful in certain contexts. Medical researchers have wrangled with this issue for decades, drawing heavily upon two ethical principles described in the Belmont Report: respect for persons and beneficence. “Respect for persons” supports sharing results with participants to enable them to make more informed decisions in their lives. “Beneficence” introduces a somewhat messier calculation by asking researchers not only to minimize risk but also to maximize benefits.

Ultimately, researchers must examine how these two principles interact. Conrad Fernandez and coauthors argue that, in order to treat human subjects as more than a means to an end, there is a “duty to disclose research results that flows from the principle of respect for persons.” Others, however, highlight the risks of sharing research results and suggest that each project consider how the risks and benefits, as outlined in the principle of beneficence, align with the goals of autonomy (see here, here, and here). Despite the difficulties of disclosing research results to participants, the medical field has largely moved toward endorsing the practice.

Elements of this concept are embodied in participatory research methods across disciplines. Still, there is room for greater reflection on whether this exercise can be incorporated more widely in development economics. After all, the ethical underpinnings of social science research with human subjects are rooted in the same principles as medical research, as argued by Rachel Glennerster and Shawn Powers.

Social science research: The case of RCTs in development economics

Empathetic and thoughtful enumerators, as well as clear protocols, were key to ensuring participants understood the information in Aker’s research in Niger. The research team went back to communities that had participated in their evaluation of a mobile-based adult education program. A follow-up study provided the opportunity to revisit households for its baseline surveys, during which enumerators also shared results from the initial study. Enumerators shared private, individualized feedback on each student’s learning outcomes (writing and math tests) with households in both the treatment and control groups, explaining what the results meant and how they compared with average learning outcomes. The control group also received a poster and booklet, printed in the local language, on how to use mobile phones for learning, which the research found to be key to the success of the mobile-based adult education approach.

Sharing results with the community helped strengthen Alan’s relationships with her partners, local education officials, and teachers in Turkey for future evaluations and effective scale-ups. After an evaluation of a program aiming to increase “grit” among students, the Ministry of Education hosted and funded a large community event to present the findings. Around 150 teachers, along with students, parents, and press, attended the event, which showcased best practices teachers could use to encourage student learning. The event featured students’ work and a video of Alan explaining the average treatment effects, rather than impacts for specific teachers or schools, in part to protect students’ privacy. Because the program was eventually rolled out to both treatment and control groups, all teachers were invited to the event. Sharing research results with teachers immensely improved the scale-up process: teachers in the original study automatically qualified as trainers, well equipped with the scientific evidence generated by their very participation in an RCT.

Udry’s agricultural transformation project in Ghana incorporated reporting results to participants throughout the project lifecycle. At the end of the study, the research team visited all 160 communities taking part in the evaluation and hosted one-hour events to share findings about improved seed varieties. Enumerators who spoke local languages used a script for these sessions, which were recorded to ensure the script was closely followed. Enumerators also distributed pamphlets with results summaries and contact information for community extension agents and suppliers of the best-performing seed variety.

Five questions for deciding whether and how to share study results

Informed by our work and conversations with other development economists, we have identified five key questions for researchers considering whether and how to share results with study participants:

1. What information should be shared?

Sharing results is likely most useful when a study has concrete, actionable findings that can inform participants’ decisions, such as a seed variety shown to improve crop yield or a curriculum shown to improve learning outcomes.

Typically, this will mean sharing average effects rather than the results of any one individual study participant. Discussing average effects across the entire sample can allow a researcher to highlight broad lessons about the effectiveness of behavior changes while not overstating conclusions about any one individual’s outcomes. Sharing individual results may be confusing in the case of outliers. It may also affect self-perceptions (e.g., a weak student compares his test scores pre- and post-intervention to those of a stronger student) or otherwise interfere with complicated relationships (e.g., an ineffective worker’s outputs are shared with her employer). However, there are some cases where sharing individual results may be useful in addition to aggregate results. We suggest that a researcher carefully consider the possible consequences of sharing both average and individual results with participants and decide what will be the most useful input for that particular context.

Furthermore, when presenting average effects, explaining heterogeneity is especially important when the direction or magnitude of an intervention’s impact differs for specific subsets of a community. Even if results include some disaggregation, though, point estimates will not predict the impact an intervention had (or could have) on every individual participant. Including these caveats and explaining these concepts is difficult but vital.
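To make the distinction concrete, the sketch below is a purely hypothetical illustration (written in Python with pandas; the data frame, column names, and subgroup variable are our own assumptions, not drawn from any of the studies described here). It contrasts a simple average effect, estimated as a difference in means between treatment and control, with effects estimated separately for each subgroup. Even the subgroup numbers remain averages, not predictions of what the intervention did for any one participant.

    import pandas as pd

    # Hypothetical data frame with one row per participant:
    #   'treated'  (1 = received the program, 0 = control)
    #   'score'    (an outcome such as a test score)
    #   'subgroup' (e.g., a region or baseline ability group)

    def average_effect(df: pd.DataFrame) -> float:
        # Simple difference in mean outcomes between treatment and control.
        treated_mean = df.loc[df["treated"] == 1, "score"].mean()
        control_mean = df.loc[df["treated"] == 0, "score"].mean()
        return treated_mean - control_mean

    def subgroup_effects(df: pd.DataFrame) -> pd.Series:
        # The same difference in means, computed within each subgroup,
        # showing how impacts can vary across parts of the sample.
        return df.groupby("subgroup").apply(average_effect)

A researcher reporting back to a community might present the overall number alongside the subgroup numbers, with the caveat that neither describes any single household’s experience.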

When research produces null results or particularly complicated findings, descriptive data may still be meaningful to the community. For example, it could be useful for community health workers to understand how vaccination rates in their areas compare to regional or national levels.

Our experiences lend anecdotal evidence on topics where sharing results may be more or less useful. For example, agricultural projects may be promising candidates because farmers have direct control over their inputs. Likewise, service providers may be particularly receptive to learning about strategies to improve their work because their efficacy is often measured directly (e.g., in their students’ test scores or patients’ health). Social protection programs such as cash transfers may be poorly suited, as participants are not responsible for delivery of transfers. Still, participants can provide valuable feedback on how they experienced the cash delivery process.   

2. When should the results be shared?

Analyses often evolve over time, and preliminary results can be misleading. Nevertheless, participants might value hearing results sooner rather than later. On the one hand, sharing preliminary results comes with the benefit of responding in a more timely manner. On the other, it introduces the worry that refined analysis might change the final interpretation of results. Perhaps unsurprisingly, this is not a topic for which we have a one-size-fits-all recommendation—timing will be very context-specific. In all cases, however, the timeline should be dictated not by researchers’ convenience but rather by when the information can be acted on (e.g., before a new season for farmers or a new school year for teachers or parents). It is a difficult balance between allowing time to adjust analyses and reacting to community needs.

Timing of results disclosure can also influence the integrity of follow-ups. Researchers must weigh the potential impacts against the value and validity of the follow-up to determine whether sharing results between rounds is feasible. At the same time, sharing initial results with participants during another round of data collection can save time and money.

3. With whom should this information be shared?

As researchers and staff who use randomized controlled trials (RCTs) to evaluate the impact of social policies and programs, we find this question of particular interest. As in clinical trials, the control group of an RCT often does not receive any intervention (for a discussion of the ethics of randomization, see Rachel Glennerster and Shawn Powers’ chapter in The Oxford Handbook of Professional Economic Ethics). Sharing results with the control group thus involves a potentially upsetting discussion of a program that others received. In our experiences in Ghana and Niger, control group members did not appear upset, although it is worth noting that in both cases they received some program components and thus were not pure controls. Moreover, it may be especially important to share results with control groups, who sacrificed time to participate without gaining access to a new program. Phased-in (“roll-out”) research designs, or designs in which the comparison group receives some alternative intervention, may be especially well suited to this work, since no group is entirely excluded.

Reporting results to treatment and control groups separately may allow groups to focus on questions most relevant to their experience. Other considerations include whether to separate dissemination events by gender or to share information with both service providers and their constituents (e.g., not only teachers but also students and families). Likewise, some people may face barriers to absorbing the information, such as young mothers with children present, people with low literacy levels, or those with cognitive impairments.

4. How should this information be shared?

If enumerators revisit study participants, there are several steps that may be helpful in ensuring accurate disclosure. First, written protocols and scripts for field staff are essential to precise communication. Scripts should be free of jargon and piloted to ensure a nontechnical audience can understand. The same is true for any printed materials or media. Communicating nuance in large meetings, in second languages, or to nontechnical audiences can be challenging. Working with native language speakers and those familiar with the context, taking time to fully explain details and answer questions, and being honest about uncertainties are potential first steps.

Second, as with any data collection, regular debriefings are helpful for enumerators to discuss difficult questions they have encountered. In Ghana and Niger, for example, we held daily debriefings.

Third, the research team can consider recording sessions for quality monitoring. However, recording sessions involves a trade-off: it may undermine trust with staff, which researchers should weigh carefully.

Research teams should be mindful of how biases could influence results dissemination. Teams who have worked on a project for many months may be invested in presenting results in a positive light. On the one hand, staff morale and effort may be higher when they believe in their potential to help others. On the other, eager staff could overstep—with the best intentions—in a way that negatively impacts the project or participants. One way to guard against this is to convey the importance of adhering to strict written protocols.

5. What are the potential benefits and costs of sharing results?

A large body of research, including in education, governance, and health, tells us information can change behavior. People may therefore base decisions on research results shared with them—which is, in part, a desired outcome of this work. But study results often contain some uncertainty. And if results are nuanced or require technical explanation, people may misinterpret them and respond in nonoptimal ways. It is therefore vital to think through—and yet ultimately impossible to fully predict—how people might respond and what the consequences might be.

While actionable results may influence individual behavior, we argue that ethics are an important justification for this work: the principle of beneficence considers the potential efficacy of sharing this information as one piece of a larger ethical balance.

Sharing research results with participants may also substantially improve the research, benefitting both research teams and participant communities. Follow-up conversations can shed light on nuanced perspectives for interpreting data. For example, sharing results in Udry’s project in Ghana revealed complexities of social networks that were difficult to discern from data alone and generated new research ideas. These follow-ups may also strengthen relationships with communities and implementing partners, as in Alan’s experience with education officials in Turkey, making future research and scale-up efforts more feasible. Governments in particular could benefit from the public seeing their involvement in research and efforts to improve policies based on evidence.

Individual research teams must ultimately weigh the potential benefits and costs of this work on a project-by-project basis. For example, there is a trade-off between the cost of a researcher’s time and the value of strengthening the partnership or infrastructure for future research. Relatedly, the monetary costs of sharing research results with study participants will inevitably differ with the research question, the context, the sample size, and more. Still, we hope to provide a starting point from which research teams may begin planning their own budget. Based on our experiences, the budget for this endeavor could mirror that of complementary qualitative research efforts, and costs could be reduced significantly by bundling this outreach with other research processes, such as the baseline for a new study, as Aker did in Niger. The costs we highlight do not include opportunity costs for beneficiaries or researchers; we are speaking from a budgetary rather than a societal costing perspective. The main cost drivers are as follows:

  • Enumerator and other research staff time
  • Transportation
  • Event space
  • Printing of posters, pamphlets, or other materials
  • Monitoring devices (e.g., recording devices if used for back-checks)

Finally, a thorny ethical question arises when an evaluation finds a program effective but the implementer decides not to continue it, whether for political, budgetary, or other reasons. In this case, disseminating positive findings may demotivate or anger people who are now aware that they lack access to an effective program. It may also damage the researcher’s relationship with the implementer.

Should these issues prevent researchers from sharing results with participants? Not necessarily: knowledge of positive results could allow the public to put pressure on governments or other implementers to fund programs that match their priorities, or to mobilize around a proven intervention without external support.

Ultimately, sharing results with participants may be one step towards solidifying a norm of using research results to inform policy decisions, complementing the research community’s existing policy outreach efforts. Empowering participants with learnings they helped generate can contribute to a cycle of accountability. This piece reflects on open questions surrounding the ethics, consequences, and logistics of how and when to disclose research findings; we hope the development economics community will continue these conversations.

*The views expressed in this article are those of the authors and don't necessarily reflect the position of the Abdul Latif Jameel Poverty Action Lab.


