Each year billions of dollars are spent on development programs with relatively few rigorous studies of whether they actually work. In 2004, CGD set out to address this lack of good-quality impact evaluations, and our recommendations led to the creation of the International Initiative for Impact Evaluation (3ie) in 2009. The number and quality of impact evaluations have risen significantly, but there is still a long way to go to ensure that future development interventions are based on evidence of what works.
In this paper we examine how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts. We show three things. First, as is well known, non-experimental estimates of a treatment effect comprise a causal treatment effect and a bias term due to endogenous selection into treatment. When non-experimental estimates vary across contexts, any claim for the external validity of an experimental result must assume that (a) treatment effects are constant across contexts, while (b) selection processes vary across contexts. This assumption is rarely stated or defended in systematic reviews of evidence. Second, as an illustration of these issues, we examine two thoroughly researched literatures in the economics of education, class-size effects and the gains from private schooling, which provide experimental and non-experimental estimates of causal effects from the same context and across multiple contexts.
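The decomposition in the first point can be made concrete with a small simulation. This sketch is illustrative only (it is not from the paper, and the data-generating process and parameter values are assumptions): when an unobserved trait drives both take-up and outcomes, a naive non-experimental comparison of means recovers the true effect plus a selection-bias term, while random assignment recovers the effect alone.

```python
# Illustrative sketch: non-experimental estimate = treatment effect + selection bias.
# All parameters (tau, the logistic selection rule, noise scales) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau = 2.0                       # true (constant) treatment effect

# Unobserved trait that raises outcomes AND the chance of taking the treatment
ability = rng.normal(0.0, 1.0, n)

# Endogenous selection: higher-ability individuals are more likely to take up
p_take = 1.0 / (1.0 + np.exp(-ability))
took = rng.random(n) < p_take
y_obs = ability + tau * took + rng.normal(0.0, 1.0, n)

# Naive difference in means across self-selected groups: tau plus a bias term
naive = y_obs[took].mean() - y_obs[~took].mean()

# Experimental benchmark: random assignment breaks the ability-treatment link
assigned = rng.random(n) < 0.5
y_rct = ability + tau * assigned + rng.normal(0.0, 1.0, n)
rct = y_rct[assigned].mean() - y_rct[~assigned].mean()

print(f"naive (non-experimental) estimate: {naive:.2f}")  # well above tau
print(f"randomized estimate:               {rct:.2f}")    # close to tau
```

Under the paper's point (b), the bias term in `naive` depends on the local selection process, so it can differ across contexts even when `tau` does not — which is exactly why conflicting non-experimental estimates need not imply varying treatment effects.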
The United Kingdom has been a stalwart funder and innovator in foreign assistance for almost 20 years. In 2011, it created the Independent Commission for Aid Impact (ICAI) to report to Parliament on the country's growing aid portfolio. ICAI is a QUANGO in Brit-speak (a quasi-autonomous non-governmental organization) with a four-year mandate that is undergoing review this year. Recently, I took a look at the reports it has produced to see whether the organization is fulfilling its role in holding the country's overseas development aid programs accountable. I found one fascinating report that shows what ICAI could be doing, and many more that made me wonder whether ICAI is duplicating work already within the purview of the Department for International Development (DFID), the agency that accounts for most of the UK's foreign assistance programs.
In recent weeks, the public health world and political pundits alike have been abuzz about results from the "Oregon Experiment," a study published in the New England Journal of Medicine that finds no statistical link between expanded Medicaid coverage and health outcomes such as high cholesterol or hypertension. Limitations of the study aside, the Oregon Experiment is a good example of the importance of rigorously testing all US health programs, rather than just assuming 'more care = better health'. The Innovation Center at the United States Centers for Medicare and Medicaid Services, created under the umbrella of the Affordable Care Act, represents a new and encouraging approach to address this problem, an approach that we think has important lessons for global health.
The New England Journal of Medicine recently published the results of "the Oregon experiment," based on the 2008 US Medicaid program expansion in Oregon. The study is one of very few randomized controlled trials on publicly subsidized health insurance that exist to guide health policy, and it found what some commentators considered a disappointing result: while health care utilization increased and households were protected from financial hardship, expanding Medicaid coverage had "no significant impact on measured physical health outcomes over a 2-year period."
In December CGD announced that Howard White had been selected as the first director of the International Initiative for Impact Evaluation, or 3ie ("Triple I E"). The announcement, a milestone in the creation of a new international entity independent from CGD, came just 20 months after the release of a CGD working group report that offered recommendations on how to close the "evaluation gap," that is, to dramatically increase the number of rigorous impact evaluations in areas such as health and education. White, who is currently based in Cairo, has set an ambitious agenda for getting 3ie up and running.
In 2006, CGD published a working group report that addressed the insufficient number of rigorous impact evaluations of social programs in low- and middle-income countries. Last week, marking 10 years since the report's release, CGD and J-PAL co-hosted the event "Improving Development Policy through Impact Evaluation," which echoed three key messages of the 2006 report: 1) conduct more and better evaluations; 2) connect evaluators and policymakers; and 3) recognize that impact evaluations are an important global public good that requires more unconstrained funding.
This brief outlines the problems that inhibit learning in social development programs, describes the characteristics of a collective international solution, and shows how the international community can accelerate progress by learning what works in social policy. It draws heavily on the work of CGD's Evaluation Gap Working Group and a year-long process of consultation with policymakers, social program managers, and evaluation experts around the world.
In September 2008 official aid donors and recipients will meet in Accra, Ghana, to discuss how to make development assistance more effective. CGD president Nancy Birdsall and co-author Kate Vyborny suggest that advocates of better aid who really want a win at Accra should stop haggling over broad conceptual issues and focus instead on getting a public commitment from donors to one or more very concrete steps to improve aid effectiveness and hold donors accountable.