
2014: A Year in USAID Evaluations

February 18, 2015

With Raj Shah stepping down as USAID Administrator last week, many are taking stock of the numerous accomplishments of his five-year tenure. One of the unsung achievements of his term was the announcement and implementation of USAID’s Evaluation Policy.

Launched in January 2011, this policy promised to improve development effectiveness through rigorous, independent, and public evaluations. The evaluations are available in the Development Experience Clearinghouse (DEC), USAID’s repository of technical and project evaluation materials. We took a spin around the DEC and were struck by what we found.

The DEC claims to contain nearly 200,000 program documents spanning the last 50 years. In 2014 alone, USAID published just over 150 evaluations, totaling more than 18,000 pages. We applaud USAID for this tremendous transparency; posting evaluations in the public domain is the hallmark of a transparent development actor.

Looking just at the evaluations from 2014, several themes are worth noting:

  • USAID evaluations value depth. The 2014 evaluations are quite lengthy, averaging just under 120 pages. Although length is not always a mark of quality, this is at least an initial sign of the complexity of these evaluations.
  • The evaluations don’t favor one technique. The vast majority used a mixed-methods approach, typically combining document review and key-informant interviews, and occasionally surveys. We found only nine evaluations that claimed to be “impact” evaluations, and of these, only two appeared to use a randomized controlled trial (RCT) or quasi-experimental design. We readily agree that RCTs are not the best evaluation plan for every intervention (see recent work by our colleagues Lant Pritchett and Justin Sandefur) and that evaluators should have the freedom to choose the most appropriate approach. Even with these caveats, this is a very low number.
  • The sectors with the highest funding levels usually had the most evaluations. Evaluations by sector largely track FY2014 sector spending, as Figure 1 below shows. (The ideal measure would compare the percent of projects completed with the percent of projects evaluated, but that project-level data is unavailable.) Although some sectors seem “over-evaluated,” it is intuitive that the “under-evaluated” sectors (namely Peace & Security and Humanitarian Assistance) would be difficult to evaluate rigorously.

Figure 1. USAID Evaluations by Sector, 2014

  • Evaluations roughly follow USAID country expenditures. Afghanistan, where USAID spends the most (almost $1 billion in FY2014), has more evaluations than any other country (11). However, we find 90 countries where USAID spent funds but posted no evaluations in 2014, totaling $2 billion in expenditures. Five countries (Iraq, Mali, Nigeria, Syria, and West Bank & Gaza) account for about half of this unevaluated $2 billion. One caveat: Nigeria was included in two multinational evaluations in 2014, and several projects in Iraq and Mali were evaluated in 2013.

Recommendations for USAID Evaluations in 2015

USAID has come a long way in pushing for more and better evaluations and in making them publicly available through the DEC. But there is still room to grow. First, we’d love to see these evaluations integrated into ForeignAssistance.gov, the US government’s online home for information on foreign aid. USAID should link any related evaluations to the transaction data on ForeignAssistance.gov; connecting expenditure data to evaluations could be transformative.

Second, USAID should begin thinking about how to build a universal rating system for all projects into its evaluations. Such systems are common at the World Bank and the regional development banks. The World Bank, for example, assigns a single outcome rating to every project, ranging from highly satisfactory to highly unsatisfactory, allowing Bank officials to compare project performance at the country, sector, and overall portfolio levels. These ratings have helped inform portfolio adjustments over time, such as the World Bank scaling down support for local development banks after consistently low IEG project ratings. The DEC could be an excellent place for USAID to institute a single comparable program rating that supplements more customized and detailed evaluation frameworks for individual projects.

