
Six Distinctions in Mapping the Landscape of Data and Research

December 05, 2011

Last Thursday, I joined about 70 people at a meeting organized by CGAP in Washington, DC, on putting clients at the center. I was asked to lead off a session on the role of research. My talk aside, I was impressed with the quality of the presentations during the day. Jan Chipchase talked about his intense and remarkable work as a high-powered design consultant understanding the needs of clients all over the world. Nachiket Mor described KGFS, which impresses every time. Graham Wright told powerful stories about MicroSave’s behind-the-scenes roles in the rise of Equity Bank in Kenya and Grameen II in Bangladesh. And more.

Here’s what I said, more or less.

-------

Good morning! It’s a pleasure and a privilege to speak before such a distinguished group. I think I probably learned the majority of what I know about microfinance from the people in this room, so I’m not sure what I’m doing standing up here talking to you.

This session is about research, and the role it can play in “putting clients at the center.” I think the starting point is the observation that there are lots of kinds of research out there and lots of ways that it can be used, as represented by the diversity of professional roles among you: investors, donors, MFI managers, network groups, researchers, and more. The meeting organizers asked me to bring some conceptual order to this landscape as a starting point for our discussion.

In thinking over what I would say to you now, I got rather bewildered by the variety of research that is done to understand microfinance clients, and the variety of purposes to which it is put. There are lots of kinds of data and research.

This is a smart group. You don’t need me to tell you that what tools you should use depends on your goal.

But if that truism sufficed as an analysis of the role of research, we wouldn’t need this session—indeed, I don’t think we’d need this meeting. A lot of the challenges to the microfinance movement in recent years have to do with the lack of information about what is going on with clients (until it is too late); or the arrival of new and uncomfortable analysis; or disputes over what constitutes valid evidence of impact. In the last few years, microfinance has had a complex relationship with data and research.

I must tell you that in preparing for this talk, I struggled to go beyond that truism in a coherent way. I found myself mapping a web of ideas that is more complex than I realized. What I will do now is share with you the benefits of that struggle, such as they are, in the form of a list of key distinctions and tensions.

I may tilt my discussion toward research on impacts as opposed to, say, client needs. One reason for this is that it’s what I’ve thought about most. Another, deeper reason is that I, like all of you, am here to make the world a better place. Impact is our bottom line. Also, the evidence on impact is where there has been the most controversy recently.

The first distinction I’ll offer you is straightforward. It relates to the unit of observation. Some research generates data in which the microfinance institution is the unit of observation—social performance metrics and financial data, for example. Other research focuses on communities, households, or individuals. Some data are observed at the transaction level. In financial diaries, the raw data are individual transactions. I don’t have much to say about this distinction.

The second distinction provokes more discussion from me: that between qualitative and quantitative research. The human mind is wired to encode knowledge through stories. But our species has gradually discovered the power of numbers in understanding and manipulating our world. Therein lies a tension.

The microcredit impact studies that have grabbed headlines are quantitative. In Hyderabad, for instance, surveyors fanned out to knock on the doors of almost 7,000 homes (at least the ones that had doors) and proceeded to ask perhaps hundreds of questions, almost all of which elicited easily digitized answers—amount borrowed, whether sick in the last month. The idea is that while every human life is unique, if you look at enough of them, you can detect overall patterns.

Qualitative research plays into our abilities to absorb and distill small numbers of narratives—stories. As I put it in my book:
Qualitative research involves observing, speaking with, even living with, a few dozen or hundred people to grasp the complexities of a phenomenon of interest. Usually the core of qualitative research is a sequence of in-depth interviews with a defined set of subjects, such as women in a particular slum in Buenos Aires or vendors in a Ugandan market. The interviews can range from rigidly planned sequences of queries to natural conversations motivated by broad questions. Regardless, because of the intense exposure to one milieu, the researcher inevitably picks up extra information in unplanned ways.
Almost everyone here has spent many years in microfinance. As a result, you are wise about it like few others on this planet. You did not achieve your wisdom by fielding a 400-question survey to 7,000 subjects and plugging the data into a computer for analysis. You achieved it by living and breathing microfinance, hearing stories, living through your own, absorbing the unexpected, gradually learning the patterns.

I daresay there wouldn’t be a microfinance movement were it not for the power of stories and storytellers.

As usual with dichotomies, the distinction between qualitative and quantitative research is not perfectly sharp—qualitative researchers can turn the stories they collect into numbers too. And also as usual, each approach has its advantages. I learn from both kinds of evidence. I’ll let the matter rest there.

My first two distinctions were primarily about the nature of the data. The next two are more about how we extrapolate from it.

One way we extrapolate from data is by generalizing over time and space. This creates a tension between specificity and generalization. If survey respondents in Saigon and Delhi express a desire for better ways to save, then we might expect people of similar income in Bangkok to feel the same—or maybe not. The problem is that while all electrons are the same, making generalization easy in physics, people are not all the same, and societies and people’s positions within them are perhaps even more varied. In school my sons were taught: we’re all the same and we’re all different. We all fight temptation and need ways to discipline ourselves. But we all differ in the social and economic contexts in which we live.

So it is usually hard to know how much to generalize from the information we have. The standard comment on this issue from researchers, which I think is right, is that when we see similar data from many contexts, then it’s safer to generalize. That’s pretty obvious. But I think it’s worth saying because of the way memes based on overgeneralization have bounced around the microfinance movement, causing real trouble. “It lifts everyone out of poverty.” “If some credit helps, then more is always better.” “New studies show microfinance doesn’t work after all.”
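Why does consistency across contexts make generalization safer? Here is a minimal back-of-the-envelope sketch in Python; the study counts and the fifty-fifty assumption are hypothetical, purely for illustration. If an effect were really zero, and each independent study’s point estimate were as likely to land on one side of zero as the other, then agreement across several contexts would quickly become improbable by chance alone.

    # Hypothetical illustration: if the true effect were zero, and each independent
    # study were equally likely to report a positive or negative point estimate,
    # how often would k studies in k different contexts all point the same way?
    def prob_all_agree_by_chance(k: int, p_positive: float = 0.5) -> float:
        return p_positive ** k

    for k in (1, 3, 5, 10):
        print(f"{k:2d} consistent contexts by pure chance: {prob_all_agree_by_chance(k):.3f}")
    # 1 -> 0.500, 3 -> 0.125, 5 -> 0.031, 10 -> 0.001:
    # the more contexts that agree, the less likely the pattern is a fluke.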
The other way we extrapolate from data is along causal chains. This leads me to the distinction between describing reality and inferring causation. It’s one thing to say Rosita the microfinance client is doing well. It’s another thing to interpret that as proof that microfinance improved her life.

Studying causation is the sexy thing to do in the social sciences, and with good reason. Practitioners, funders, researchers, policy makers…everyone wants to know what causes what. Does eating more fiber reduce cholesterol? Does offering commitment savings accounts reduce poverty?

But I think in our eagerness for answers to causal questions, basic descriptive research often goes underappreciated. My favorite example of descriptive work in microfinance is Portfolios of the Poor. Of course it’s an exception to what I’m saying in getting the appreciation it deserves. The book does contain generalizations across space and time—it extrapolates from a few hundred subjects to the poor in general. But it is humble when it comes to inferring causation, and that gives it credibility. It makes no claims about whether microfinance or any other financial tool reduces poverty or smooths consumption.

At the opposite extreme on the spectrum between description and causal inference are the randomized studies, which are designed from the ground up to figure out what affects what.

Which brings me to a fifth distinction: between observing and experimenting. Before the recent randomized studies, most studies of the impact of microfinance took reality as they encountered it, mostly by using client or household surveys, and then struggled to infer what was causing what. For example, if people who had been borrowing for several years were doing better than those just starting, that would be taken (cautiously) as a sign that borrowing improved lives.

The huge challenge here is that in social systems—businesses, families, villages, slums, countries—everything affects everything else. Causal arrows run every which way. So while statistics can help us measure correlations—whether borrowers are better off than non-borrowers—it can only do so much to help us infer causation, to help us figure out whether the particular arrow we care about is the one that explains what we see. Maybe microcredit is making people better off. Or maybe better-off people just borrow more.

Earlier this year I heard a story on NPR about a study finding that women who get botox injections are less empathetic. The idea is that the inability to smile or frown interferes with the cognitive experience of empathy. But maybe causation goes the other way. Maybe the sorts of women who get botox injections are just less empathetic to begin with... [that’s a joke, readers!]

Recognizing this challenge, and empowered by cheap computers, econometricians developed increasingly clever methods to try to isolate the signal they care about from all that noise. In my experience, these clever methods are much more apt to obscure the fundamental problem than to solve it. It is this realization that has led to the randomization revolution in the study of impacts.

What makes randomized studies special is that they don’t just observe reality. They manipulate it. If observational studies analyze the chaotic flow of water in a stream in order to infer the laws of hydrodynamics, randomized ones drop a rock in a pond, measure the ripples, and use that to understand how water moves. They introduce variation into the object of study that is uncorrelated with anything else in the universe.
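To make the contrast concrete, here is a minimal simulation sketch in Python. Everything in it is invented for illustration: unobserved household “resources” drive both borrowing and income, and borrowing itself has zero true effect. The naive borrower-versus-non-borrower comparison nonetheless finds a large effect, while randomly assigned borrowing, uncorrelated with resources by construction, recovers the truth.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical data-generating process: unobserved "resources" drive both
    # borrowing and income; borrowing itself has zero true effect on income.
    resources = rng.normal(size=n)
    borrows_obs = resources + rng.normal(size=n) > 0   # better-off people borrow more
    income = resources + rng.normal(size=n)            # no borrowing term at all

    # Observational comparison: borrowers vs. non-borrowers, confounded by resources
    naive = income[borrows_obs].mean() - income[~borrows_obs].mean()

    # Experimental comparison: assignment is random, hence uncorrelated with resources
    borrows_rct = rng.random(n) < 0.5
    rct = income[borrows_rct].mean() - income[~borrows_rct].mean()

    print(f"naive observational estimate: {naive:.2f}")  # far from zero, spuriously
    print(f"randomized estimate:          {rct:.2f}")    # near the true effect of zero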
One thing I learned studying math 20 years ago is that all conclusions rest on assumptions. Euclid’s Elements, the classical symbol of pure reason, begins with assumptions, such as that for every two points there is one line to connect them. Randomized studies are more powerful because the assumptions on which they rest are credible—in particular, that the randomization process affected clients only through microfinance. The assumptions embedded in non-experimental studies, once exposed, are much more debatable.

Randomized studies have limitations, many of which they share with observational ones. They posit (or require) a static intervention, which might work well for an established product like group microcredit, but not so well for an organization that is experimenting and learning on the fly, as Chris Dunford has pointed out to me. The ones published so far also have short time frames, only following people for about 12–18 months.

In between humble descriptive work like the financial diaries and high-powered, causality-measuring randomized studies is a messy middle ground: types of research whose producers seem to want to ascribe impact but hedge—or who do claim to be showing impact and don’t convince me. Here is where I would put the non-experimental studies I mentioned before. Here also would go the report recently commissioned by the Microcredit Summit Campaign showing how many Bangladeshis have climbed out of poverty in the last decade or so.

Here also I would put measures such as the Progress out of Poverty Index, which is based on a series of questions about each client that are easy to answer, such as what her roof is made of. (Not to pick on that one: it’s just the best-branded of its type.) I think it is great for microfinance institutions to understand their clients better, though I’m not sure that measuring their poverty is the most practical path to designing better financial services for them. But I get the sense that a big reason people in the microfinance movement have pushed such metrics is that they can seem to generate evidence of positive impact. They do not measure impacts. They measure outcomes. Outcomes are observed; impacts are inferred.
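The distinction can be put in bare arithmetic. With made-up numbers: an impact estimate needs a counterfactual, and the same observed outcome is consistent with very different impacts.

    # Hypothetical numbers, purely for illustration. An outcome can be observed;
    # an impact is the gap between that outcome and an unobserved counterfactual.
    outcome_observed = 0.62  # e.g., share of clients above some poverty line

    counterfactual_a = 0.60  # if clients would have done nearly as well anyway
    counterfactual_b = 0.45  # if they would have done much worse without the service

    impact_a = outcome_observed - counterfactual_a  # 0.02: tiny impact
    impact_b = outcome_observed - counterfactual_b  # 0.17: large impact

    # Same measured outcome, very different inferred impacts; the counterfactual
    # is exactly what outcome metrics do not, and cannot, observe.
    print(round(impact_a, 2), round(impact_b, 2))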
My last distinction is akin to the point I made before about how interpreting data requires assumptions, which we might also call theories—for example, the theory that people are pretty much the same everywhere, which allows generalization of results.

In writing my book, my grand attempt to synthesize various kinds of research in order to evaluate microfinance, I realized that at least three notions of success were at play in the global conversation about microfinance. They correspond to different conceptions of the word “development.” And, crucially for this session, each tends to lead one to different kinds of research. The dominant one is development as escape from poverty. Another comes from Amartya Sen: development as freedom. Evaluating microfinance through this lens brings us to questions of the empowerment of women, usury, transparency, and the flexibility and reliability of financial services.

The third is “development as industry building,” the Schumpeterian conception of development as a process of constant churning that produces new technologies and firms. I believe it is in this sense that microfinance has been most successful: we have dozens of thriving microfinance institutions around the world employing thousands and serving millions, competing and innovating—BRAC, Grameen, BancoSol, Pro Mujer. I have trouble thinking of other examples of foreign aid and philanthropy doing so much to stimulate the development of an industry. Of course, the news is not all good here. Sometimes the creative destruction has been more destructive than creative. Sometimes the growth has not enriched the economic fabric over the long term, as when MFIs have crashed and burned.

My point is that when you are engaged in bringing data to the grand questions of whether microfinance “works,” and how you can make your work in microfinance do more good, you need to be clear about your theory of development and your theory of impact. How, in general, does poverty fall? How can creating financial service institutions contribute to that? Your answers to these questions should shape the proximate goals you pursue, such as expanding client rolls or reaching poorer people, as well as the kinds of research you lean on to judge success.

One top-level distinction here is between maximizing direct impact on clients and building industries. If you’re focused on the first, you’ll probably put primary emphasis on client surveys and impact studies. If the second, then you’ll be interested, for example, in how fast companies are growing, worrying that it might be too slow or too fast. You will then rely on a general theory, rooted in economic history, that links the building of a financial services industry to development generally. This theory is ambitious but, I think, reasonable. Industrialization has been the engine of poverty reduction throughout modern history.

In sum, clever experimentation and energetic data gathering can support human judgment, but they cannot replace it.

