Entries tagged with “integration”.


I just finished reading Geoff Dabelko’s “The Periphery isn’t Peripheral” on Ensia. In this piece, Geoff diagnoses what undermines efforts to address linked environmental and development problems, and offers some thoughts on how to do better. I love his typology of the tyrannies that beset efforts to build and implement good, integrative (i.e. cross-sectoral) programs. I agreed with his suggestions on how to make integrative work more acceptable and mainstream in development. And by the end, I was worried about how to make those suggestions a reality within the donors and implementers that most need to take on this message.

The four tyrannies Geoff sees crippling environment-and-development programming (the Tyranny of the Inbox, the Tyranny of Immediate Results, the Tyranny of the Single Sector, and the Tyranny of the Unidimensional Measurement of Success) are dead on. Those of us working on climate change are especially sensitive to tyranny #2, the Tyranny of Immediate Results. How the hell are we supposed to demonstrate results on an adaptation program meant to address challenges that are not just happening now, but will intensify over a 30-year horizon? Does our inability to see the future mean that this programming is inherently useless or inefficient? No. But because it is impossible to measure future impact now, adaptation programs are easy to attack…

As a geographer, I love Geoff’s “Tyranny of the Single Sector” – geographers generally cannot help but start integrating things across sectors (that’s what our discipline does, really). In my experience in the classroom and in the donor world, integrative thinking eludes a lot more people than I ever thought possible. Our absurd system of performance measurement in public education is not helping – trust me. But even when you find an integrative thinker, they may not be doing much integrative work. Sometimes people simply can’t see outside their own training and expertise. Sometimes they are victims of tyranny #1 (the Tyranny of the Inbox), too busy dealing with immediate challenges within their sector to think across sectors – lord knows, that defined the last six months of my life at USAID.

And Geoff’s fourth tyranny speaks right to my post from the other day – the Tyranny of the Unidimensional Measurement of Success. Read Geoff, and then read my post, and you will see why he and I get along so well.

Now, Geoff does not stop with a diagnosis – he suggests that integrative thinking in development will require some changes to how we do our jobs, and, to bolster his argument, offers illustrations of integrative projects that have produced better results. While I like all of his suggestions, what concerns me is that they are easier said than done. For example, Geoff is dead right when he says that:

“We must reward, rather than punish, cross-disciplinary or cross-sectoral approaches; define success in a way that encourages, rather than discourages, positive outcomes in multiple arenas; and foster monitoring and evaluation plans that embrace, rather than ignore, different timescales and multiple indicators.”

But how, exactly, are we to do this? What HR levers exist that we can use to make this happen? How much leeway do appointees and other executive-level donor staff have with regard to changing rewards and evaluations? And are the right people in charge to make such changes possible? A lot of people rise through donor organizations by being very good at sectoral work. Why would they reward people for doing things differently?

Similarly, I wonder how we can actually get more long-term thinking built into the practice and implementation of development. How do we really overcome the Tyranny of the Inbox and the Tyranny of Immediate Results? This is not merely a mindset problem; it is a problem of budget justifications to an often-hostile Congress that wants to know what you have done for it lately. Where are our congressional champions to make this sort of change possible?

Asking Geoff to fix all our problems in a single bit of writing is completely unfair. That is the Tyranny of What Do We Do Now? In the best tradition of academic/policy writing, his piece got me thinking (constructively) about what needs to happen if we are to do a better job of achieving something that looks like sustainable development going forward. For that reason alone it is well worth your time. Go read.

Well, the response to part one was great – really good comments, and a few thoughtful response posts.  I appreciate the efforts of some of my economist colleagues/friends to clarify the terminology and purpose behind RCTs.  All of this has been very productive for me – and hopefully for others engaged in this conversation.

First, a caveat: On the blog I tend to write quickly and with minimal editing – so I get a bit fast and loose at times – well, faster and looser than I intend.  So, to this end, I did not mean to suggest that nobody was doing rigorous work in development research – in fact, the rest of my post clearly set out to refute that idea, at least in the qualitative sphere.  But I see how Marc Bellemare might have read me that way.  What I should have said was that there has always been work, both in research and implementation, where rigorous data collection and analysis were lacking.  In fact, there is quite a lot of this work.  I think we can all agree this is true . . . and I should have been clearer.

I have also learned that what qualitative social scientists/social theorists mean by theory, and what economists mean by theory, seem to be two different things.  Lee defined theory as “formal mathematical modeling” in a comment on part 1 of this series of posts, which is emphatically not what a social theorist might mean.  When I say theory, I am talking about a conjectural framing of a social totality such that complex causality can at least be contained, if not fully explained.  This framing should have reference to some sort of empirical evidence, and therefore should be testable and refinable over time – perhaps through various sorts of ethnographic work, perhaps through formal mathematical modeling of the propositions at hand (I do a bit of both, actually).  In other words, what I mean by theory (and what I focus on in my work) is the establishment of a causal architecture for observed social outcomes.  I am all about the “why it worked” part of research, and far less about the “if it worked” questions – perhaps mostly because I have researched unintended “development interventions” (i.e. unplanned road construction, the establishment of a forest reserve that alters livelihoods and resource access, etc.) that did not have a clear goal, a clear “it worked!” moment to identify.  All I have been doing is looking at the outcomes of particular events and trying to establish the causes of those outcomes.  Obviously, this approach can be translated to an RCT environment: we could control for the intervention and expected outcome, and then use my approaches to get at the “why did it work/not work” issues.

It has been very interesting to see the economists weigh in on what RCTs really do – they establish, as Marc puts it, “whether something works, not in how it works.”  (See also Grant’s great comment on the first post.)  I don’t think that I would get a lot of argument from people if I noted that without causal mechanisms, we can’t be sure why “what worked” actually worked, or whether the causes of “what worked” are in any way generalizable or transportable.  We might have some idea, but I would have low confidence in any research that ended at this point.  This, of course, is why Marc, Lee, Ruth, Grant and any number of other folks see a need for collaboration between quant and qual – so that we can get the right people, with the right tools, looking at different aspects of a development intervention to rigorously establish the existence of an impact, and to establish an equally rigorous understanding of the causal processes by which that impact came to pass.  Nothing terribly new here, I think.  Except, of course, for my continued claim that the qualitative work I do see associated with RCT work is mostly awful, tending toward bad journalism (see my discussion of bad journalism and bad qualitative work in the first post).
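To make the “whether vs. why” distinction concrete, here is a minimal, purely illustrative sketch – the data and effect size are invented, not drawn from any real evaluation – of the kind of answer an RCT delivers: a difference in mean outcomes between treatment and control groups, with nothing in the calculation that speaks to the causal process behind that difference.

```python
# Purely illustrative: simulated outcomes for a hypothetical intervention.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented data: some outcome measure for control and treated households
control = rng.normal(loc=100.0, scale=15.0, size=500)
treated = rng.normal(loc=106.0, scale=15.0, size=500)

# What the RCT establishes: a difference in mean outcomes and its significance
ate = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"estimated average treatment effect: {ate:.2f}")
print(f"p-value: {p_value:.4g}")

# Note what is absent: nothing in this calculation identifies the mechanism
# by which the intervention produced (or failed to produce) the difference.
```

The numbers tell you the intervention “worked” in the narrow sense; everything about why it worked has to come from somewhere else – which is exactly where the qualitative framing belongs.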

But this discussion misses a much larger point about epistemology – the point I intended to write about in this second part of the series all along.  I do not see the dichotomy between measuring “if something works” and establishing “why something worked” as analytically valid.  Simply put, without some (at least hypothetical) framing of causality, we cannot rigorously frame research questions around either question.  How can you know if something worked if you are not sure how it was supposed to work in the first place?  Qualitative research provides the interpretive framework for the data collected via RCT4D efforts – a necessary framework if we want RCT4D work to be rigorous.  By separating qualitative work from quant-oriented RCT work, we assume that we can somehow pull data collection apart from the framing of the research question.  We cannot – nobody is completely inductive, which means we all work from some sort of framing of causality.  The danger comes when we don’t acknowledge this simple point – in most RCT4D work, those framings are implicit and go completely uninterrogated by the practitioners.  Even where they come to the fore (Duflo’s “three I’s” – ideology, ignorance, and inertia), they are not interrogated – they are assumed as framings for the rest of the analysis.

If we don’t have causal mechanisms, we cannot rigorously frame research questions to see if something is working – we are, as Marc says, “like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, because the only place he can actually see is under the street lamp.”  Only I would argue we are more like a drunk looking for his keys under a streetlamp with no idea whether they are there at all.

In short, I’m not beating up on RCT4D, nor am I merely advocating for more conversation – I am arguing that we need integration: teams with quant and qual skills that frame the research questions together, develop tests together, and interpret the data together.  This is the only way we will come to really understand the impact of our interventions, and how to frame future efforts more productively.  Of course, I can say this because I already work in a mixed-methods world where my projects integrate the skills of GIScientists, land-use modelers, climate modelers, biogeographers, and qualitative social scientists – that is, I have a degree of comfort with this sort of collaboration.  So, who wants to start putting together some seriously collaborative, integrated evaluations?