
Can We Assess Ag Aid Quality?

by and Edward Collins
August 29, 2012
This is a joint post with Edward Collins. Can we assess ag aid quality? The short answer: sort of.

For at least a decade, aid effectiveness has been in the spotlight, first because of concerns that, in some cases, aid may do more harm than good and, more recently, because of growing budget pressures. In 2005, with the Paris Declaration on Aid Effectiveness, donor and recipient countries agreed on a set of principles for more effective aid and a process to monitor implementation of those principles. Based on these principles, and with the objective of providing an independent assessment of donor performance, Nancy Birdsall, Homi Kharas, and colleagues launched a joint Center for Global Development and Brookings Institution project to assess the Quality of Official Development Assistance, QuODA for short. Now in its second edition, this project motivated CGD colleagues Amanda Glassman and Denizhan Duran to apply the QuODA methodology to health aid, and now we have done the same for agricultural aid.

It is an apropos time to examine the quality of agricultural aid because of the renewed interest in agricultural development triggered by the food price spikes of 2007-08 and sustained by predictions from some that the long-term trend of declining commodity prices may be coming to an end. The Food and Agriculture Organization of the United Nations, for example, projects that food production will have to rise by 60 percent by 2050 to meet rising demand due to growing populations, increased incomes, and the diversion of food crops for energy. There was an uptick in donor investment in the sector, but the chart suggests that the increases in aid are already tailing off, with both the level and the share of aid well below those reached during the last major commodity price spikes in the 1970s. In marked contrast to the L’Aquila summit in 2009, where the G8 pledged $20 billion over three years for a new food security initiative, most of the money on the table when President Obama announced the G8’s New Alliance for Food Security and Nutrition in Washington in May was $3.5 billion from the private sector. In sum, the combination of growing food security challenges and stagnant or shrinking budgets makes the effectiveness of agricultural aid even more critical.

But assessing the quality of aid is not easy, so before turning to the results of our efforts, a few caveats are in order. First, the measures that both the Paris Declaration monitoring survey and the QuODA project use are indicators of donor efforts to improve the quality of their aid. They are not direct measures of effectiveness, and donors and recipients need to put significantly more effort into monitoring and evaluating the actual impacts of aid. Second, methodological problems arise at the sectoral level because much of the information from the Paris Declaration surveys is only available for aggregate ODA. Thus, of the four QuODA dimensions, we can only assess donors on three:
  • Maximizing efficiency
  • Reducing burdens on recipient countries
  • Transparency and learning.
We have no information at all on how donors do in fostering institutions when they deliver agricultural aid, and we lose or have to adapt several other indicators on the remaining three dimensions. Finally, as shown in the chart above, agricultural aid is a small share of total ODA, only around 5 percent, and for some smaller donors there is very little to assess.

With those caveats in mind, the overall results are summarized in this chart, which sums the inverse of each donor’s rankings across the three dimensions*:

The International Development Association (IDA) of the World Bank and Ireland come out ahead when assessed across all three rankings, but each also has areas where it falls short: IDA on maximizing efficiency and Ireland on reducing burdens (at least for the indicators where we have data). The African Development Fund and the International Fund for Agricultural Development suffer on the transparency and learning dimension mainly because they choose not to voluntarily report certain information about their projects to the Creditor Reporting System (CRS), even though that information is readily available on their own websites and other multilateral institutions do choose to report it. That may seem unfair, but one motivation for the QuODA project is to encourage better and more consistent reporting by all donors. Switzerland, which heavily protects its own agricultural sector, is the biggest surprise, doing much better on the indicators of agricultural aid quality than it does in QuODA for all ODA (see figure 5 in the paper).

Overall, however, the results are broadly similar to those in the original QuODA for all ODA, which should probably not be surprising since the lead aid agencies in most countries also typically provide the bulk of agricultural aid. The British Department for International Development, for example, provides 99 percent of UK agricultural aid, while the US Agency for International Development is responsible for 78 percent of US agricultural aid. For 17 of the 28 donors in our analysis, the primary aid agency overall also provides at least 90 percent of agricultural aid.

Despite the overlap, if folks feel that it is important to analyze agricultural aid separately, then we need more and better sectoral data, as called for here by our colleagues working on QuODA Health. As we discuss in our paper, there are some improvements in the latest version of reporting to the Creditor Reporting System, but far more would be needed. In addition, as Prabhu Pingali pointed out recently in his presentation to the International Conference of Agricultural Economists (where I also presented the Ag QuODA project), there are a number of emerging donors, both public and private, about which we have almost no information at all. On the good news side, an important additional source of information is due to arrive this fall when the Gates Foundation starts reporting to the CRS on its activities in agriculture, which are sizeable. Look for an update from us then, as we plan to examine how the Gates Foundation does on these measures of aid quality relative to others.

* Each donor is ranked on each dimension based on its average (unweighted) score on the indicators in that dimension; in preparing the chart, the ranks were inverted so that a higher bar is associated with higher quality, and the donors are arrayed according to the sum across all three dimensions.
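For readers who want to see how the footnote's aggregation works in practice, here is a minimal sketch in Python. The donor names and indicator scores below are made-up placeholders, not actual QuODA data: scores are averaged within each dimension, donors are ranked on each dimension, the ranks are inverted so that higher means better, and the inverted ranks are summed to order the chart.

# Illustrative sketch of the rank-sum aggregation described in the footnote.
# The donors and scores are hypothetical placeholders, not QuODA indicator data.

# donor -> {dimension -> list of indicator scores}
scores = {
    "Donor A": {"efficiency": [0.4, 0.6], "burden": [0.9, 0.8], "transparency": [0.9, 0.7]},
    "Donor B": {"efficiency": [0.8, 0.7], "burden": [0.3, 0.5], "transparency": [0.8, 0.9]},
    "Donor C": {"efficiency": [0.5, 0.5], "burden": [0.6, 0.4], "transparency": [0.2, 0.3]},
}

dimensions = ["efficiency", "burden", "transparency"]
donors = list(scores)
n = len(donors)

# 1. Average (unweighted) indicator scores within each dimension.
avg = {d: {dim: sum(v[dim]) / len(v[dim]) for dim in dimensions}
       for d, v in scores.items()}

# 2. Rank donors on each dimension (rank 1 = best average score),
#    then invert the rank so a larger number means higher quality.
inverted = {d: {} for d in donors}
for dim in dimensions:
    ordered = sorted(donors, key=lambda d: avg[d][dim], reverse=True)
    for rank, d in enumerate(ordered, start=1):
        inverted[d][dim] = n + 1 - rank  # best donor gets n

# 3. Sum the inverted ranks across the three dimensions and sort for the chart.
totals = {d: sum(inverted[d].values()) for d in donors}
for d in sorted(totals, key=totals.get, reverse=True):
    print(d, totals[d])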

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.