We Quantified the Quality of Health Aid! (So What?)

by
and Denizhan Duran
February 08, 2012
This is a joint post with Denizhan Duran.

Which donor provides the "best" health aid, and why is that a relevant question? We attempted to answer both questions by adapting the Quality of Official Development Assistance (QuODA) methodology to health aid. To be honest, one working paper later, we still do not have a definitive answer to either.

What we do know is that health aid matters: effective health aid has saved lives, and technologies like oral rehydration salts and vaccination are among the most cost-effective development interventions money can buy. Determining the quality of health aid also matters because Western donors spent $28 billion on global health aid in 2011; such a large amount, equal to almost one-fifth of all development aid spending, merits analysis. Finally, donors and recipients themselves seem to think aid effectiveness is important. Donors gathered in Busan in November 2011 to discuss and act on aid effectiveness; in the health sector, the International Health Partnership and the Health Systems Funding Platform are multi-donor/recipient efforts to improve aid effectiveness.

However, despite health aid's scale and scope, the literature on aid effectiveness in health is inconclusive. As we discuss in our paper, existing studies correlate "good" aid practices with better health outcomes, but cannot quantify how improved aid effectiveness (measured using the Paris Declaration indicators or others) translates into better health results. In an ideal world, we would be able to say "a decrease in donor fragmentation increases vaccination coverage by X%," but given data constraints, we cannot go beyond correlations (see here). If we can't make this link, should we bother measuring progress on aid effectiveness? Even conceptually, some aid effectiveness measures are not clearly linked with greater development effectiveness.
For example, many authors (including myself) have been critical of the volatility of health aid disbursements, arguing that because health aid finances the recurrent costs of providing basic health care, smooth and predictable disbursements are necessary. Yet procurement of medicines or vaccines can be lumpy over time, with big disbursements one year and none the next, without necessarily implying that this health aid is "bad."

Nevertheless, a subset of aid effectiveness indicators can, at least conceptually, be good proxies for development effectiveness: giving to countries with higher disease burdens and higher poverty, untying aid, channeling more aid through multilaterals, reducing transaction costs to alleviate the burden on recipients, and adopting transparency measures have all been consistently related to development effectiveness in the literature, as we cite in the methodology section of our paper. Calculating such indicators (see here) forms the basis of our index. We address issues that can be quantified, and leave out those that cannot. Those we omit, such as volatility, innovative financing, harmonization, ownership, and mutual accountability, may well be as important as the included indicators but simply cannot be measured for the whole sample of donors.

We rank donors across four dimensions of aid effectiveness: maximizing efficiency, fostering institutions, reducing burden, and transparency and learning. Rankings within each dimension differ: the Netherlands ranks best in the maximizing efficiency dimension, but fares worse in giving to countries with national health plans. Some donors, such as the United Kingdom, fare above average in every dimension.
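To make the ranking idea concrete, here is a minimal sketch of how donors might be scored and ranked within each dimension. The donor names, indicator scores, and the z-score aggregation below are all invented for illustration; the paper's actual indicators and aggregation method may differ.

```python
# Hypothetical sketch: standardize indicator scores within each dimension,
# then rank donors per dimension. All numbers here are invented.
from statistics import mean, stdev

# Invented scores per donor for the four dimensions named in the post
# (higher = better on that dimension).
scores = {
    "Donor A": {"maximizing_efficiency": 0.8, "fostering_institutions": 0.4,
                "reducing_burden": 0.6, "transparency_learning": 0.7},
    "Donor B": {"maximizing_efficiency": 0.5, "fostering_institutions": 0.9,
                "reducing_burden": 0.7, "transparency_learning": 0.6},
    "Donor C": {"maximizing_efficiency": 0.3, "fostering_institutions": 0.5,
                "reducing_burden": 0.9, "transparency_learning": 0.4},
}

def zscores(values):
    """Standardize values so donors are comparable within a dimension."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

donors = list(scores)
dimensions = list(next(iter(scores.values())))

# Rank donors within each dimension by their standardized score.
rankings = {}
for dim in dimensions:
    z = zscores([scores[d][dim] for d in donors])
    order = sorted(zip(donors, z), key=lambda t: t[1], reverse=True)
    rankings[dim] = [d for d, _ in order]

for dim, order in rankings.items():
    print(dim, "->", order)
```

Because standardization happens within each dimension, a donor can top one ranking while trailing in another, which is exactly the pattern the post describes for the Netherlands.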
By presenting rankings across different dimensions, we seek to nudge donors in the right direction: Australia, for example, could increase its overall aid effectiveness tremendously if it allocated more of its aid to countries with higher disease burdens and increased the share of its aid that makes it onto recipient budgets (see figure below). Similarly, the United States, the largest health donor, could improve tremendously if it focused on reducing the burden on recipients by cutting transaction costs, as we also recommend here. We hope donors take note of their relative rankings and capitalize on the results; we also hope recipients hold their donors accountable and report best (or worst) practices, and we have included analysis of a subset of indicators for aid-dependent countries.

Tracking progress from 2008 to 2009, we find that most donors regressed in terms of allocative efficiency: donors actually reduced giving to countries with higher disease burdens. Health aid also became more fragmented, which increased the burden on recipients. Comparing health aid to overall aid, we find that health aid is more focused geographically and less tied, but fares worse in going to poor or well-governed countries.

In the coming days, I'll blog on DAC aid purpose codes and donor reporting practices, and connect aid quality with the emerging value-for-money agenda in health aid. Meanwhile, we invite you to explore our data to see how donors do on different indicators and dimensions.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.