Why Does the Inter-American Development Bank Lag Behind in Development Effectiveness?

Ilan Goldfajn, the newly elected president of the Inter-American Development Bank (IDB), has made development effectiveness one of his top corporate priorities, as stated during the institution’s annual meeting in Panama City. He is correct in doing so, as the IDB lags significantly behind other multilateral development banks in successfully achieving project results. According to the IDB’s independent evaluation office, only 53 percent of IDB sovereign projects in the latest validation cycle were successful across the four core areas of relevance, effectiveness, efficiency, and sustainability. This falls considerably short of the World Bank's 80 percent, or the approximately 70 percent achieved by both the African Development Bank and the Asian Development Bank. The IDB lags even further behind on effectiveness itself, where only 27 percent of projects achieve their expected results. To bring its performance on effectiveness into line with other multilateral development banks, the IDB will need not only to reform its project selection and execution practices but also to undertake a cultural shift that rewards effectiveness results.

IDB’s Development Effectiveness Framework

Development effectiveness has been a contentious issue at the IDB in the past, and indeed was at the core of the discussion of the institution’s last capital increase in 2010. At the time, the primary criticism was not the low level of results achieved, but rather that it was impossible to verify any results. This was in part due to projects lacking basic standards and metrics for evaluation, but mostly to the fact that completion reports were not written or verified independently. As a result, the shareholders demanded and adopted the so-called Better Bank Agenda, which included specific commitments to enhance development effectiveness.

IDB management responded with the Development Effectiveness Framework (DEF), which created strong incentives for learning and accountability for development results. The DEF rested on two pillars: (1) doing the right things, and (2) doing things right. The first pillar focused on choosing and prioritizing development interventions better than before—making decisions based on knowledge of what works for particular development challenges, while understanding local and/or institutional limitations on implementing the proposed solutions. The second pillar focused on knowing how much of the intended value was delivered by a specific project. This meant that the intervention was properly executed: the process for transforming inputs into outputs was in place, and these outputs were delivered on time and within budget. More importantly, it required that evaluations ask the right questions to determine whether the planned outputs produced the desired outcomes, apply rigorous methods to answer them, and produce conclusions that are relevant for policy and programs.

These measures introduced strong incentives to incorporate evidence of what works into project design, as well as metrics to monitor implementation and rigorous methods to evaluate results. Specifically, a quality-at-entry checklist was introduced that included a numerical rating of the project proposal and required a minimum threshold to be reached for the project to be distributed to the board of directors for approval. If a project’s rating was above the minimum but relatively low, it became a topic of discussion at board meetings with executive directors, which created friction but also a greater level of accountability for results. Within a few years, not only did design improve, but rigorous impact evaluations were included in around 25 to 30 percent of project proposals, whereas the World Bank, for example, conducted experimental or quasi-experimental evaluations for 10 percent of projects over a decade.

What could possibly go wrong following this massive effort? This blog examines what went wrong at the IDB and what management could potentially do to enhance development effectiveness.

What went wrong?

Before delving into what went wrong, a few words on what went right. The IDB today has a fully implemented evaluation and validation framework for both its sovereign and non-sovereign projects. This allows shareholders and other partners to know exactly how each development intervention is supposed to create value. There is also empirical evidence that a high rating in the quality-at-entry matrix increases the probability of a development project achieving positive results. A second study verified these findings and also found that, on average, projects that see a higher share of their outputs discontinued, relative to their first results matrix, are more likely to be ineffective in achieving their objectives and to be rated as unsuccessful.

On the other hand, two components of the DEF clearly faltered. Doing the right things never took off. It entailed applying selectivity to country programming so that the IDB could concentrate its financing on projects with a higher probability of achieving positive results. For a demand-driven institution such as the IDB, this would only be possible if the proper incentives were applied to the selection of projects for preparation, which proved too demanding for the political-economic equilibrium at the time. There was no effective filter between country demands and programming, resulting in a selection of projects that did not adhere to existing evidence of what works, or that were based on faulty implementation assumptions. According to a Center for Global Development report, despite the extensive work on impact evaluation and the evidence now available, there is still a big lag between what is already known to work and its application to interventions.

The second piece that faltered was the monitoring of development results during project execution. There is empirical evidence that projects that deviate from the intended results, with outputs eliminated or changed, that execute a lower share of the approved amounts, or that experience implementation difficulties are more likely to be rated as unsuccessful. These alterations to outputs are made without adequate internal discussion and without proper authorization levels and processes, so they cannot be validated by the Evaluation Office. Moreover, the cultural reluctance to discontinue projects that are not working and are severely lagging on results (the “no loan left behind” syndrome in IDB jargon) adds dead weight to the portfolio, not to mention misusing IDB capital. In comparison, when a significant change is made to a World Bank project, a transparent authorization procedure is followed, allowing the evaluation office to validate the new outputs. This lack of transparency in execution is a significant contributor to the IDB's low performance in observed outcomes.

One final word on what we believe is not the main reason for poor project performance at the IDB: the Office of Evaluation's validation methodology. This methodology follows the common set of evaluation standards applied by all MDBs. IDB management's response to the Evaluation Office's report highlights measurement inconsistencies as the primary cause of the poor results. Yet management does not include a counterfactual success score showing what performance would look like if such inconsistencies were corrected. The data presented by management suggest a modest improvement at best, which is insufficient to close the gap with other multilateral development banks. The relatively slight differences in measurement methodology among MDBs cannot account for the vast disparity between the IDB and the others. The answer must reside elsewhere.

Three areas of focus for reforms

The DEF brought metrics and incentives to change behaviors in project design, but not in other key areas. The reform agenda that lies ahead must introduce measures that gradually move programming toward projects with a higher probability of achieving development results and that support a transparent and proactive execution process, cancelling interventions when they are not on track to achieve results. This means a cultural shift in which leaders have a key role to play in modelling desired behaviors. We suggest three areas where the reform agenda could concentrate efforts:

1. Focus on program selectivity to increase the number of projects that have a higher chance of achieving development results  

This requires that country dialogue be informed by evidence on what works in the priority areas selected by governments. Programming selectivity could be informed by an independent rating of project developmental risk, allowing for a portfolio approach with a combination of proven and more experimental projects. Finally, it would also allow for a more efficient operational model by fast-tracking the review and approval of projects with strong evidence of potential effectiveness and low development risk.

2. Make the redesign and cancellation of projects open and accountable

When projects must depart from their objectives due to unforeseen factors, project teams must obtain approval according to a clear, predetermined, and accountable procedure agreed with the Evaluation Office. This will allow the adjustments to be accepted in the final ex-post validation of results. Likewise, cancellation should proceed when objective criteria are met, with exceptions accepted only under extraordinary circumstances.

3. Enhance project management capacities and reward results achievement

There is substantial evidence that the quality of project managers is the most critical element influencing project outcomes. Two research papers by the World Bank and the Inter-American Development Bank, completed at the same time but independently of one another, provide persuasive evidence that project leaders matter far more for explaining project outcomes than the countries or sectors of intervention. As in other organizations that do project work, institutional incentives must reinforce and properly recognize project management responsibilities. When career rewards are proportional to effectiveness accomplishments, behaviors adapt, priorities shift, and cultures evolve.

Carola Alvarez is Deputy Chair of the Board of Directors of 3ie, and Koldo Echebarria is a CGD Non-Resident Fellow. Both are former IDB managers.


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
