Ideas to Action:

Independent research for global prosperity


David Roodman's Microfinance Open Book Blog


In her invariably delicious prose, Kim Wilson sums up the Pitt-Khandker debate:

Dear Reader:

What part of “Y*i1 = Ziπ+ε1i where Yi1 = Y*i1 if Y*i1 >; Yi1 = c if Y*i1 ≤ c and Y2i = Yi1δ + ε2i” do you not understand?

If you are like me, you aren’t quite certain why these paired equations have touched off a tsunami of discussion in the world of microfinance. It seems that one set of researchers got the formula wrong (including a mismatched sign, bad dog), and the other set of researchers is ticked off, but if you are following the dog-fight, the wronged party is also wrong, assuming Wrong (causality) is more wrong than wrong (that little ole’ mismatched sign) … show that to your 6th grade teacher.

The scuffle opens up a larger issue: the growing discomfort among practitioners about evidence. Is there one path to the truth (Randomized Control Trials)? Are there several? And what if we don’t do RCTs – are we a pack of feckless losers?

(Except for a missing 0 after the ">", she got it right!)

But actually, I'm not sure I understand the point of the post, maybe because there isn't one, except to sprinkle ideas in preparation for a coming series of posts, "by researchers and practitioners who take the issue of program design and impact seriously, and who believe different approaches can help our services become more relevant, powerful, and efficiently deployed."

The confusing math, nota bene, is from a non-randomized study. The math for randomized controlled trials (RCTs) could hardly be simpler. You flip a coin a bunch of times to decide who is offered a financial service and who isn't. Then you come back later, and for each of the two groups---those offered and those not---you compute the average outcome (are you still with me?). Then you subtract one average from the other. Got that? Then you're done. Even my third grader can understand that math. Well, my third grader derived the formula for the area of a regular octagon...but his classmates can understand RCT math too. So let's not taint the method with obscurantism.
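The coin-flip-and-subtract recipe above really is the whole estimator. Here is a minimal sketch in Python, using made-up simulated outcomes (the numbers are hypothetical, not from any real study):

```python
import random
import statistics

random.seed(0)

# Hypothetical simulation: 1,000 people, a coin flip decides who is
# offered the financial service. Outcomes are invented numbers with a
# built-in treatment effect of 1.5, just to illustrate the arithmetic.
treated, control = [], []
for _ in range(1000):
    offered = random.random() < 0.5                    # the coin flip
    outcome = 10 + (1.5 if offered else 0) + random.gauss(0, 3)
    (treated if offered else control).append(outcome)

# The RCT estimate: average outcome of one group minus the other's.
effect = statistics.mean(treated) - statistics.mean(control)
print(f"estimated effect: {effect:.2f}")
```

With randomization doing the work, no censoring equations or selection corrections are needed; the subtraction of two averages is the entire analysis.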

My main conclusion about non-randomized quantitative studies is that they amount to obscurantism, however unintentional, and what their complexity hides, ultimately, is a failure to justify the assumptions needed to demonstrate cause and effect. Warren Buffett's investment rule---don't buy what you don't understand---works in microfinance research too. RCTs are easy to understand, so you should buy them more than non-randomized studies.

Kim's post then cites Chris Dunford's talk at the Microfinance USA conference on three disadvantages of RCTs, then adds a fourth of her own. But most of these points (limited generalizability over space and time; sometimes results conform with prevailing wisdom) apply to all types of studies. As for the asserted high cost of RCTs: seems to me the question is not just what the cost is, but what the benefit is compared to the cost. As Jonathan Morduch basically said on the same panel, citing Dean Karlan, an RCT priced in the six figures can influence the nine-figure flow of funds into microfinance. If an RCT makes that flow 0.1% more effective, it has paid for itself.
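The back-of-the-envelope version of that argument, using round illustrative figures (hypothetical, chosen only to match the orders of magnitude in the text: a six-figure study, a nine-figure flow):

```python
# Illustrative, hypothetical numbers -- not actual budgets.
rct_cost = 500_000            # a six-figure study
annual_flow = 500_000_000     # a nine-figure flow of funds
improvement = 0.001           # the flow becomes 0.1% more effective

benefit = annual_flow * improvement
print(f"benefit ${benefit:,.0f} vs. cost ${rct_cost:,.0f}")
print("pays for itself:", benefit >= rct_cost)
```

At these magnitudes a 0.1% gain in effectiveness matches the study's entire price tag in a single year, and any larger flow or larger improvement tips the balance further.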

In his fine new book, Adapt, Tim Harford writes eloquently on the morality of experimental testing of well-intended interventions in medicine:

The ethical agonising over such experiments continues today, but it is surprising that the scales remain heavily loaded against trials, even when there are two apparently equivalent treatments. A doctor who wants to run a properly controlled trial to test these two options needs approval from an ethics committee. A doctor who prescribes one or the other arbitrarily (there being no other basis for the decision), and who makes no special note of the results, needs to satisfy no higher authority. He's simply regarded as doing his job.

Not experimenting, Harford argues, puts development practitioners in the company of the leech doctors who dominated medicine for centuries. They were sure of their methods, but never proved them through controlled trials.

I suppose I have just contributed to Kim's blog, at least if you put me in the category of the "researchers from the real world" lined up to contribute. I'll let you judge if I am out of this world.

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.