Governments, impact investors, and philanthropists are increasingly looking for innovative ways to address tricky development challenges. USAID’s Development Innovation Ventures (DIV)—which celebrated its 10-year anniversary last year—was set up to do just that. By openly soliciting proposals for innovative solutions to development problems, selecting promising candidates to pilot and rigorously test, and/or helping scale those that prove successful, DIV is oriented around results. But when you’re in the business of trying new, untested approaches, not every investment will have a meaningful impact.
So how can USAID and other social investors, along with their key stakeholders, evaluate whether they are achieving the right balance of innovation and risk?
Professor Michael Kremer—co-recipient of the 2019 Nobel Prize in Economics and co-founder of DIV—along with coauthors Sasha Gallant, Olga Rostapshova, and Milan Thomas, offers an answer to this question. They argue that the case for investment in development innovation rests not on the success or failure of individual investments but on portfolio-wide returns. And, looking at DIV’s early portfolio, they present an exceptionally strong case: their initial estimates suggest this set of investments generated a 5-to-1 ratio of social benefits to costs. And it’s likely to be even higher! Forthcoming research by the same team, using updated impact estimates and capturing a wider array of benefits, suggests that DIV’s early portfolio yielded at least $17 in social benefit for each dollar invested, an impressive rate of return by any standard.
We had the privilege of hosting Kremer to discuss this research—the calculation of the social rate of return, the features of DIV’s structure and operational model that contribute to its success, and the implications for other social investors. This blog summarizes some of the key points and findings, and also extrapolates from the work to draw lessons for how new leadership at USAID might approach its broader programming.
The evidence case for innovation, explained
Social investment outfits like DIV are often compared to venture capital funds. But in contrast to venture capital investors, social investors focused on development outcomes seek information beyond expected financial return when making investment decisions. They want to know how to invest scarce foreign assistance or philanthropic funds so they can deliver the biggest social impact. But measuring the social benefit these investments achieve is complicated by limited data and, frankly, uncertainty about how best to do it.
Kremer et al.’s work makes an important contribution to filling this gap. They start by recognizing that not all of DIV’s investments have high-quality data on their impact or, for that matter, impacts that can be easily monetized. (It can be hard to assign a dollar value to the life improvements from some democracy and governance innovations.) Their portfolio approach establishes that if the social benefits of just those innovations whose impacts can be monetized and that have adequate data on impact, cost, and people reached exceed the cost of the entire portfolio, that alone makes a convincing case for investing in development innovation.
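To make the portfolio logic concrete, here is a minimal sketch of the calculation. The numbers below are made up for illustration; they are not DIV's actual grant figures.

```python
# Illustrative sketch of the portfolio-level benefit-cost calculation.
# All figures below are hypothetical, not DIV's actual data.

def portfolio_bcr(measured_benefits, total_portfolio_cost):
    """Benefit-cost ratio: monetized social benefits from the
    well-measured subset of innovations, divided by the cost of
    the *entire* portfolio (including investments that failed or
    whose impacts could not be measured or monetized)."""
    return sum(measured_benefits) / total_portfolio_cost

# Hypothetical: only 5 of a portfolio's grants have monetizable,
# well-measured impacts, but all grants count toward the cost.
benefits = [40e6, 25e6, 15e6, 10e6, 5e6]  # social benefits, in dollars
total_cost = 10e6                          # cost of the whole portfolio

ratio = portfolio_bcr(benefits, total_cost)
print(f"Portfolio-wide benefit-cost ratio: {ratio:.1f} to 1")
# → Portfolio-wide benefit-cost ratio: 9.5 to 1
```

Because the numerator counts benefits only from the measurable subset while the denominator counts the cost of every grant, the resulting ratio is a conservative lower bound on the portfolio's true social return.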
Here’s how this looked for DIV: of the 41 innovations that received a grant between 2010 and 2012, nine reached more than a million people. Of those nine grantees, five were able to measure impact with high-quality data. Those five were for road safety stickers, water treatment dispensers, affordable glasses, digital attendance monitoring, and software for community health workers. Kremer et al., in their forthcoming work, estimate that through 2019, these five innovations alone have yielded over $17 in social benefit for each dollar invested across the entire portfolio. And this is a conservative estimate of impact: after all, it’s possible that some of the less quantifiable, lower-scale investments also yielded social benefits, even though the authors didn’t measure them. Also not captured in the rate of return is the value of the knowledge generated about what does and does not work (and under what circumstances) to achieve particular development outcomes—an intangible benefit that can nevertheless improve the way future aid dollars are used.
The 17-to-1 ratio of benefits to costs is driven not only by the high cost-effectiveness of the innovations, which saved lives and improved worker productivity through relatively low-cost interventions, but also by the sheer number of people (over 75 million) who benefitted from the subset of five investments. Scalability emerges, therefore, as a necessary condition for high development impact (although scaling by itself isn’t sufficient, since scaling something that doesn’t work is a waste of time and money). And even though most pilot projects don’t scale, they’re a relatively inexpensive test of an intervention, so losses are small if they don’t work out—and we can learn a lot from them. When pilots do scale—as Kremer et al. show—gains can be large.
But which pilots are most likely to scale? Here, Kremer et al. provide new guidance. Comparing social innovations that scaled with those that didn’t, they find that, among the investments they studied, those more likely to scale: had a low cost per person reached; were based on established evidence; included an academic researcher in the design process to help test, iterate, and improve the innovation over time; leveraged established distribution networks (government or otherwise); and/or collaborated with existing institutions to build local capacity and bring evidence to bear on policy.
Choices about DIV’s structure, processes and funding approach offer implications for the design of other social innovation funds
While DIV retains that “venture capital” flavor, Kremer et al. explore how funds seeking social returns actually behave quite differently from those seeking financial returns, since social benefits accrue to society at large rather than only to the investor. This suggests social funds like DIV can uncover opportunities for large social returns that private and impact investors often overlook.
Highlighting successful aspects of DIV’s focus and operational model, Kremer notes several key characteristics that have helped DIV orient its funds toward social returns.
Tiered funding—offering smaller grants for early-stage pilots and larger grants for already-tested innovations to determine the most cost-effective paths to scale—allows for investment in early-stage innovations that commercial investors would deem too risky.
DIV also conducts extensive external peer review and provides feedback to rejected applicants in the interest of sharing knowledge and helping to improve innovative ideas. Private investors do very little of this, since these steps don’t serve their commercial interests.
Establishing a plan for rigorous evaluation early on helps grantees clarify their theory of change and plan more strategically for data collection.
And open innovation across sectors, applicants, geographies, methods, and approaches gives DIV the flexibility to focus on the most promising applicants. That said, the other factors above still apply to funds seeking to generate innovations within specific sectors or geographic areas.
What broader implications can USAID—and its Congressional overseers—take from DIV’s demonstrated success?
While Kremer et al.’s research focuses on conclusions specific to DIV and offers lessons from DIV’s experience for other social investment funds, we put forth two additional implications for DIV’s place within USAID.
First, there’s scope to expand DIV’s work.
Since 2018, Congress has provided DIV an annual set-aside—$30 million in the FY21 omnibus spending package passed in December 2020, up from $23 million in previous years. This funding has been a lifeline for the initiative, which in 2017 had closed its application window in the face of uncertain funding. But this is still only around one-tenth of one percent of USAID’s total budget. As Kremer et al.’s analysis shows, DIV can create enormous value from a fairly small pool of funds. A larger pool could likely accomplish more: DIV needs a sufficiently large and diversified portfolio to ensure it includes enough investments that do deliver big returns. And there is certainly no shortage of innovative proposals to explore.
Second, elevating DIV’s work and better linking it to USAID missions could expand impact.
DIV’s investment in evaluating development innovations can help missions home in on evidence-backed approaches to bring to their country programs and then evaluate their impact. One example of how this has worked is the partnership between DIV and USAID’s mission in Zambia to support the Zambian government’s rollout of Teaching at the Right Level (TaRL), an approach developed by Indian NGO Pratham to help build math and reading skills for primary school students. Rigorous evidence of TaRL’s impact in India (partly supported by DIV) encouraged decisions to expand the approach to other contexts, including by applying for—and winning—a stage 3 DIV grant to support the scale up of TaRL, adapted for Zambia.
Beyond a few good examples like this one, however, DIV has typically had limited reach into missions. This is partly structural—DIV is small, relatively new, and situated within a larger, almost entirely Washington-based bureau.
DIV’s limited reach to missions is also partly by design. With its open innovation mandate, DIV’s primary goal isn’t to serve or respond to mission needs. Nevertheless, there are almost certainly untapped opportunities for missions, in partnership with country governments and other local actors, to adapt and implement at scale some of the innovations DIV has tested and shown to be effective.
As part of USAID’s restructuring, DIV now sits within the new Bureau for Development, Democracy and Innovation (DDI). This will bring new opportunities and challenges. One of the chief goals of DDI is to better support missions; DIV may be able to turn that mandate into stronger ties with country-based teams. But DDI is big and will encompass a wide range of sectors and activities, raising the risk that, without prioritization from leadership, DIV will continue to be overshadowed by other higher-profile bureau-mates. It will be important that the White House nominee to lead DDI recognizes DIV’s outstanding social rate of return and uses this as a basis to effectively champion evidence-based innovation in a time of crisis (and beyond). Working with the new Administrator to assess how to expand DIV’s scope, reach, and budget should be central to these efforts.
Beyond DIV’s place in the organization, DIV’s approach offers some broader lessons for USAID. Overall, USAID’s “standard” portfolio would benefit from a more evidence-based approach. While evidence is at the center of DIV’s model, much of USAID’s standard programming isn’t as well informed by evidence. To demonstrate its seriousness about results, the agency should systematically take better stock of the existing evidence for proposed approaches in its procurement and program design processes.
As one part of that evidence-based approach, value for money is important but under-analyzed. One of the most compelling aspects of Kremer et al.’s work is that they talk not just about impact but about rate of return. What did DIV achieve per dollar spent? Outside of DIV, impact evaluations of USAID-funded work have been rare. But impact evaluations that take cost data into account to talk about results per dollar have been even rarer. Nascent efforts to advance a common costing methodology and explore cost-effectiveness analysis are encouraging, however.
Finally, failure is okay, but the failure to identify and learn from failure isn’t. As Kremer et al.’s work shows, failure is a natural product of experimentation—some pilots simply don’t work well enough to pursue further. But while DIV is more explicitly focused on experimentation than much of USAID, a lot of what the agency invests in is at least somewhat experimental, since there are still gaps in our understanding of how to use foreign aid to achieve development outcomes. This suggests two things. The first is that USAID should take a much more DIV-like approach when funding projects for which there is limited evidence: start small and test before pouring big money in. The second is that failure, frankly, is likely. But individual failures shouldn’t condemn the whole enterprise. Advancing our understanding of how best to invest aid resources requires an environment that encourages impact measurement and provides space for discussing and learning from the results, even—perhaps especially—when those results are disappointing.
The authors thank Arthur Baker, Sasha Gallant, Milan Thomas, and other reviewers for helpful comments on an earlier draft.