Over the last few months, in the context of my new affiliation with CGD, I’ve been making a transition from “Forestry World” — which I inhabited for six years at the Center for International Forestry Research in Indonesia — to “Development Finance World,” headquartered here in Washington with the World Bank, the IMF, and myriad think tanks and advocacy groups interested in development.
CGD Policy Blogs
When we make presentations on COD Aid at development agencies, we are frequently told: “Oh, we’re already doing that.” The more we investigate, however, the fewer cases we find where agencies are really disbursing funds against independently verified outcomes in a hands-off fashion. We’re tempted to say “close but no cigar.”
World Bank President Jim Kim has said that the next frontier for the World Bank is to “help to advance a science of delivery.” But the problem is not that we are ignoring politics, as Kevin Watkins suggests: the problem is that we are ignoring complexity.
In a recent survey, 640 development policymakers and practitioners in 100 developing countries were asked about the best ways to improve foreign aid so that it can have the most beneficial impact possible.
As mentioned in our last post, aid agencies are experimenting with programs that incorporate the main features of COD Aid: paying for outputs and outcomes, giving the recipient greater discretion to spend as they see fit, independent verification, and transparency. Once these results-based programs are up and running, they face a critical test when the first results are reported. In particular, most programs create expectations by setting annual targets and are then judged relative to those targets rather than to their baseline. And this means that even successful programs will be viewed as failures (a point also made in an earlier blog). By refusing to set targets, a results-based program can avoid this pitfall. How is it that targets can create such a problem?
An increasing number of aid agencies are experimenting with programs that incorporate the main features of COD Aid: paying for outputs, giving the recipient greater discretion to spend as they see fit, independent verification, and transparency. (See our brief and book for more details.) We’ve argued that designing a COD Aid program can be rather easy, though the quality of the indicators chosen and the verification process are certainly critical to success. We have spent less time talking about what happens once the program is up and running. In particular, what happens when you find out how much progress actually occurred?