“Stunning Progress” or “Implausible and Invalid”: The Afghanistan Mortality Survey 2010?

June 06, 2012

This is a joint post with Kate McQueston.

Afghanistan accounted for 15 percent of all U.S. economic assistance allocated in FY2012, amounting to $2 billion. USAID has contributed at least $15 billion in aid to Afghanistan since 2001, with cumulative investments in the health sector at nearly $1 billion. But the impact of these investments has been difficult to judge because of a lack of reliable data and accurate measurement, leaving many wondering: What have these funds achieved? In particular, has this economic assistance improved the health of Afghans?

Until the early 2000s, data on mortality in Afghanistan had been virtually non-existent. In the 2000s, a series of surveys of mixed quality motivated the Afghanistan Mortality Survey 2010 (AMS). It was hoped that the AMS, using the best-available survey methodology and conducted with all due diligence (funded largely by USAID to the tune of $5 million or more), would produce better-quality data on Afghan health.

Given these significant efforts, it is not surprising that the survey has been cited as evidence of “stunning progress” in Afghanistan, perhaps because of pressure to proclaim positive results (the temptation to exaggerate progress has surfaced before). Yet the survey has not been without controversy, and the evidence is far from conclusive.

The survey’s quality and reliability were the subject of examination by Professor Kenneth Hill of Harvard University in a recent seminar at the Center for Global Development. Hill presented serious challenges to the survey’s quality, citing flaws and implausible estimates for child and adult mortality, even after correcting for some error patterns (see from slide #10 onward). Although there is agreement that the results for the southern region must be discarded entirely, Hill also finds the estimates for the northern and central regions implausible as judged by several consistency checks.

For example, Hill compared the estimates from the AMS (excluding the south) to other surveys over time – see the figure below. The adjusted estimates produced by the AMS are inconsistent and discontinuous with the MICS 1997 survey (the purple line), and closer to the estimates from surveys (of mixed quality) obtained in the 2000s during conflict.

Source: Kenneth Hill, 2012

Hill concluded that it is not uncommon for demographic surveys to produce incorrect estimates in conflict settings or fragile states, and called for accurate approaches to estimating mortality during wartime to be developed.

Bottom line: Although the AMS used the best survey methodology available and was conducted with due diligence (and although the survey was expensive), it does not inspire confidence as evidence of “stunning progress” in mortality. Obtaining reasonable estimates in conflict settings is difficult even with the best methods. The politicization of this survey and the exuberant communication of its results need to be tempered. With low-quality data, adjustments can only improve estimates so much; even after adjustment, the face validity of these findings is low.

So what could be done better in conflict settings to provide more reliable and higher-quality data to researchers and policy makers alike? We would be interested in your suggestions in the comments section of this blog.

The authors thank Amanda Glassman, John May, and Justin Sandefur for their helpful comments, and Kenneth Hill for his excellent presentation at CGD.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.