The Qualitative Research Challenge to RCT4D: Part 1

Those following this blog (or my Twitter feed) know that I have some issues with RCT4D work.  I’m actually working on a serious treatment of the issues I see in this work (i.e., a journal article), but I am not above crowdsourcing some of my ideas to see how people respond.  Also, as many of my readers know, I have a propensity for really long posts.  I’m going to try to avoid that here by breaking this topic into two parts.  So, this is part 1 of 2.
To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, rigorous collection has long been a challenge in development research, and I have no doubt that the data being gathered is valid.  However, part of the reason I feel confident in this data is that, as I raised in an earlier post, it replicates findings from the qualitative literature . . . findings that are, in many cases, long-established through rigorously-gathered, verifiable data.  More on that in part 2 of this series.
One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.  In the qualitative realm, however, we spend a lot of time thinking about rigor and validity and how to achieve both – and we have tools to this end, ranging from discursive analysis to cross-checking interviews against focus groups and other forms of data.  Certainly, these are different means of establishing rigor and validity, but they are means nonetheless.
Without rigor and validity, qualitative research devolves into bad journalism.  As I see it, good journalism captures a story or an important issue and illustrates it through examples.  These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader.  When journalists attempt to move from illustration to explanation via these same few examples (as columnists like Kristof and Friedman far too often do), they start making unsubstantiated claims that generally fall apart under scrutiny.  People mistake this sort of work for qualitative social science all the time, but it is not.  Certainly there is some really bad social science out there that slips from illustration to explanation in just this manner, but it is hardly the majority of the work found in the literature.  Rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish an understanding of the events and processes at hand.
This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement.  For all the effort devoted to data collection in these studies, stunningly little time and energy is devoted to explaining the patterns seen in the data.  In short, RCT4D often reverts to bad journalism when it comes time for explanation: patterns gleaned from meticulously gathered data are explained in an offhand manner.  For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of “the three I’s”: ideology, ignorance and inertia.  This is a boggling oversimplification of why people do what they do.  Ideology is basically nondiagnostic – you need to define and interrogate it before you can do anything about it – and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again.  My own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes (see here, here and here).  Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism.  Sorry, but that’s the truth.
Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – horribly ironic for an approach that aims to move development research past the anecdotal.  This is not merely external observation: I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation.  In short, the bulk of the explanation for the interesting patterns of behavior that emerge from these studies rests on uninterrogated assumptions about human behavior that do not hold up to empirical reality.  What RCT4D has identified are patterns, not explanations – and explanation requires a contextual understanding of the social.
Coming soon: Part 2 – Qualitative research and the interpretation of empirical data

For everyone who doesn't understand social research . . .

OK, two posts for today, because I can’t help myself.  Yeah, I am a social scientist, which means that people either think I run controlled experiments on various populations (an idea that freaks me out)*, or they think that I have no method to my research at all – I just sort of run around, talk to a few people until I get bored or run out of money, and then come back and write it up.
Of course, both views are crap.  Good social science is founded on rigorous fieldwork and data whose validity can be verified.  How one collects that data, and verifies that validity, varies – it depends on what you are studying.  For whatever reason, though, people have a hard time understanding this.  Quick story: during a debate about field methods, a former chair of my department once actually asked me if it was really possible to teach someone to do interviews and participant observation.  My response: “I didn’t pop out of the womb able to do this, you know.”  End of discussion, thankfully.
But now I have found someone who has written this up nicely.  Wronging Rights (absolutely hilarious and totally awful at the same time – just go read for a bit, then feel bad about yourself for laughing; everyone does) has a great post on the subject that links to a series of even better posts at Texas in Africa (follow the links in the Wronging Rights post to find the relevant Texas in Africa posts).
Social scientists, get to reading.  Journalists, read this and understand why you are not social scientists.  Especially you, Thomas Friedman.  And the rest of you . . . never, ever ask me whether it is possible to teach someone to do social science . . .
*controlled experiment: what, am I supposed to pick two identical villages (no such thing), then work with one village while studiously ignoring the other no matter what happens to that community (e.g. drought, food insecurity, disease, what have you), all because I need to preserve the integrity of my control group?  There are other ways to establish the validity of one’s results . . .