Well, the response to part one was great – really good comments, and a few thoughtful response posts.  I appreciate the efforts of some of my economist colleagues/friends to clarify the terminology and purpose behind RCTs.  All of this has been very productive for me – and hopefully for others engaged in this conversation.

First, a caveat: On the blog I tend to write quickly and with minimal editing – so I get a bit fast and loose at times – well, faster and looser than I intend.  To be clear, I did not mean to suggest that nobody was doing rigorous work in development research – in fact, the rest of my post clearly set out to refute that idea, at least in the qualitative sphere.  But I see how Marc Bellemare might have read me that way.  What I should have said was that there has always been work, both in research and implementation, where rigorous data collection and analysis were lacking – and there is quite a lot of such work.  I think we can all agree this is true . . . and I should have been clearer.

I have also learned that what qualitative social scientists/social theorists mean by theory and what economists mean by theory are two different things.  Lee defined theory as “formal mathematical modeling” in a comment on part 1 of this series of posts, which is emphatically not what a social theorist might mean.  When I say theory, I am talking about a conjectural framing of a social totality such that complex causality can at least be contained, if not fully explained.  This framing should have reference to some sort of empirical evidence, and therefore should be testable and refinable over time – perhaps through various sorts of ethnographic work, perhaps through formal mathematical modeling of the propositions at hand (I do a bit of both, actually).  In other words, what I mean by theory (and what I focus on in my work) is the establishment of a causal architecture for observed social outcomes.  I am all about the “why it worked” part of research, and far less about the “if it worked” questions – perhaps mostly because I have researched unintended “development interventions” (i.e. unplanned road construction, the establishment of a forest reserve that alters livelihoods by changing access to resources, etc.) that did not have a clear goal, a clear “it worked!” moment to identify.  All I have been looking at are the outcomes of particular events, and I have been trying to establish the causes of those outcomes.  This translates readily to an RCT environment: we could control for the intervention and expected outcome, and then use my approaches to get at the “why did it work/not work” questions.

It has been very interesting to see the economists weigh in on what RCTs really do – they are interested, as Marc puts it, in “whether something works, not in how it works.”  (See also Grant’s great comment on the first post.)  I don’t think I would get much argument if I noted that without causal mechanisms, we can’t be sure why “what worked” actually worked, or whether the causes of “what worked” are in any way generalizable or transportable.  We might have some idea, but I would have low confidence in any research that ended at this point.  This, of course, is why Marc, Lee, Ruth, Grant and any number of other folks see a need for collaboration between quant and qual – so that we can get the right people, with the right tools, looking at different aspects of a development intervention: to rigorously establish the existence of an impact, and to build an equally rigorous understanding of the causal processes by which that impact came to pass.  Nothing terribly new here, I think.  Except, of course, for my continued claim that the qualitative work I do see associated with RCT work is mostly awful, tending toward bad journalism (see my discussion of bad journalism and bad qualitative work in the first post).

But this discussion misses a much larger point about epistemology – the point I intended to make in this second part of the series all along.  I do not see the dichotomy between measuring “if something works” and establishing “why something worked” as analytically valid.  Simply put, without some (at least hypothetical) framing of causality, we cannot rigorously frame research around either question.  How can you know if something worked if you are not sure how it was supposed to work in the first place?  Qualitative research provides the interpretive framework for the data collected via RCT4D efforts – a necessary framework if we want RCT4D work to be rigorous.  By separating qualitative work from the quant-oriented RCT work, we assume that we can somehow pull data collection apart from the framing of the research question.  We cannot – nobody is completely inductive, which means we all work from some sort of framing of causality.  The danger comes when we don’t acknowledge this simple point – in most RCT4D work, those framings are implicit and go completely uninterrogated by the practitioners.  Even where they come to the fore (Duflo’s “three I’s”), they are not interrogated – they are simply assumed as framings for the rest of the analysis.

If we don’t have causal mechanisms, we cannot rigorously frame research questions to see if something is working – we are, as Marc says, “like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, because the only place he can actually see is under the street lamp.”  Only I would argue that we are the drunk looking for his keys under the streetlamp with no idea whether they are there at all.

In short, I’m not beating up on RCT4D, nor am I merely advocating for more conversation – I am arguing that we need integration: teams with quant and qual skills that frame the research questions together, develop tests together, and interpret the data together.  This is the only way we will come to really understand the impact of our interventions, and how to frame future efforts more productively.  Of course, I can say this because I already work in a mixed-methods world, where my projects integrate the skills of GIScientists, land use modelers, climate modelers, biogeographers and qualitative social scientists – in short, I have a degree of comfort with this sort of collaboration.  So, who wants to start putting together some seriously collaborative, integrated evaluations?