From my recent post over on HURDLblog, my lab’s group blog, on the challenges of thinking productively about gender and adaptation:

My closing point caused a bit of consternation (I can’t help it – it’s what I do). Basically, I asked the room if the point of paying attention to gender in climate services was to identify the particular needs of men and women, or to identify and address the needs of the most vulnerable. I argued that approaches to gender that treat the categories “man” and “woman” as homogenous and essentially linked to particular vulnerabilities might achieve the former, but would do very little to achieve the latter. Mary Thompson and I have produced a study for USAID that illustrates this point empirically. But there were a number of people in the room who got a bit worked up about this point. They felt that I was arguing that gender no longer mattered, and that my presentation marked a retreat from years of work that they and others had put in to get gender to the table in discussions of adaptation and climate services. Nothing could be further from the truth.

Read the full post here.

Those of you who’ve read this blog before know that I have a lot of issues with “technology-will-fix-it” approaches to development program and project design (what Evgeny Morozov calls “solutionism”). My main issue is that such approaches generally don’t work. Despite a very, very long history of such interventions and their outcomes demonstrating this point, the solutionist camp in development seems to grow stronger all the time. If I hear one more person tell me that mobile phones are going to fix [insert development challenge here], I am going to scream. And don’t even get me started about “apps for development,” which is really just a modified incarnation of “mobile phones will fix it” predicated on the proliferation of smartphones around the world. Both arguments, by the way, were on full display at the Conference on the Gender Dimensions of Weather and Climate Services I attended at the WMO last month. Then again, so were really outdated framings of gender. Perhaps this convergence of solutionism and reductionist framings of social difference means something about both sets of ideas, no?

At the moment I’m particularly concerned about the solutionist tendency in weather and climate services for development. At this point, I don’t think there is anything controversial in arguing that the bulk of services in play today were designed by climate scientists/information providers who operated with the assumption that information – any information – is at least somewhat useful to whoever gets it, and must be better than leaving people without any information. With this sort of an assumption guiding service development, it is understandable that nobody would have thought to engage the presumptive users of the service. First, it’s easy to see how some might have argued that the science of the climate is the science of the climate – so citizen engagement cannot contribute much to that. Second, while few people might want to admit this openly, the fact is that climate-related work in the Global South, like much development work, carries with it an implicit bias against the capabilities and intelligence of the (often rural and poor) populations it is meant to serve. The good news is that I have seen a major turn in this field over the past four years, as more and more people working in this area have come to realize that the simple creation and provision of information is not enough to ensure any sort of impact on the lives of the presumptive end-users of that information – the report I edited on the Mali Meteorological Service’s Agrometeorological Advisory Program is Exhibit A at the moment.

So, for the first time, I see climate service providers trying to pay serious attention to the needs of the populations they are targeting with their programs. One of the potentially important ideas I see emerging in this vein is that of “co-production”: the design and implementation of climate services that involves the engagement of both providers and a wide range of users, including the presumptive end users of the services. The idea is simple: if a meteorological service wants to provide information that might meet the needs of some/all of the citizens it serves, that service should engage those citizens – both as individuals and via the various civil society organizations to which they might belong – in the process of identifying what information is needed, and how it might best be delivered.

So what’s the problem? Simple: While I think that most people calling for the co-production of climate services recognize that this will be a complex, fraught process, there is a serious risk that co-production could be picked up by less-informed actors and used as a means of pushing aside the need for serious social scientific work on the presumptive users of these services. It’s pretty easy to argue that if we are incorporating their views and ideas into the design of climate services, there is really no need for serious social scientific engagement with these populations, as co-production cuts out the social-science middleman and gets us the unmediated, unfiltered voice of the user.

If this sounds insanely naïve to you, it is*. But it is also going to be very, very attractive to at least some in the climate services world. Good social science takes time and money (though nowhere near as much time or money as most people think). And cutting time and cost out of project design, including M&E design, speeds implementation. The pressure to cut out serious field research is, and will remain, strong. Further, the bulk of the climate services community is on the provider side. They’ve not spent much, if any, time engaging with end users, and generally have no training at all in social science. All of those lessons that the social sciences have learned about participatory development and its pitfalls (for a fantastic overview, read this) have not yet become common conversation in climate services. Instead, co-production sounds like a wonderful tweak to the solutionist mentality that dominates climate services, a change that does not challenge the current framings of the use and utility of information, or the ways in which most providers do business. You keep doing what you do, but you talk to the end users while you do it, and this will supposedly result in better project outcomes.

But for co-production to replace the need for deep social scientific engagement with the users of climate services, certain conditions must be met. First of all, you have to figure out how, exactly, you are going to incorporate user information, knowledge, and needs into the design and delivery of a climate service. This isn’t just a matter of a few workshops – how, exactly, are those operating in a nomothetic scientific paradigm supposed to engage and meaningfully incorporate knowledge from very different epistemological framings of the world? This issue, by itself, is generating a significant literature…which mostly suggests this sort of engagement is really hard. So, until we’ve worked out that issue, co-production looks a bit like this:

Climate science + end user input => Then a miracle happens => successful project

That, folks, is no way to design a project. Oh, but it gets better. You see, the equation above presumes there is a “generic user” out there who can be engaged in a straightforward manner, and for whom information works in the same way. Of course, there is no such thing – even within a household, there are often many potential users who might bring climate information into their decision-making. They may undertake different livelihoods activities that are differently vulnerable to particular impacts of climate variability and change. They may have very different capacities to act on information – after all, when you don’t own a plow or have the right to use the family plow, it is very difficult to act on a seasonal agricultural advisory that tells you to plant right away. Climate services need serious social science, and social scientists, to figure out who the end users are – to move past presumption to empirical analysis – and what their different needs might be. Without such work, the above equation really looks more like:

Climate science => Then a miracle happens => you identify appropriate end users => end user input => Then another miracle happens => successful project

Yep, two miracles have to happen if you want to use co-production to replace serious social scientific engagement with the intended users of climate services. So, who wants to take a flyer with some funding and see how that goes? Feel free to read the Mali report referenced above if you’d like to find out**.

Co-production is a great idea – and one I strongly support. But it will be very hard, and it will not speed up the process of climate service design or implementation, nor will it allow for the cutting of corners in other parts of the design process. Co-production will only work in the context of a deep understanding of the targeted users of a given service – of who we should be co-producing with, and for what purpose. HURDL continues to work on this issue in Mali, Senegal, and Zambia – watch this space in the months ahead.

 

 

*Actually, it doesn’t matter how it sounds: this is a very naïve assumption regardless.

** Spoiler: not so well. To be fair to the folks in Mali, their program was designed as an emergency measure, not a research or development program, and so they rushed things out to the field making a lot of assumptions under pressure.

So, DfID paid London’s School of Oriental and African Studies (SOAS) more than $1 million to answer a pretty important question: whether or not Fairtrade certification improves growers’ lives. As has shown up in the media (see here and here) and around the development blogosphere (here), the headline finding of the report was unexpected: wage workers on Fairtrade-certified sites made less than those working on regular farms. Admittedly, this is a pretty shocking finding, as it undermines the basic premise of Fairtrade.

Edit 12 June: As Matt Collin notes in a comment below, this reading of the study is flawed, as it was not set up to capture the wage effects of Fairtrade. There were no baselines, and without baselines it is impossible to tell if there were improvements in Fairtrade sites – in short, the differences seen in the report could just be pre-existing differences, not a failure of Fairtrade. See the CGDev blog post on this here. So the press’ reading of this report is pretty problematic.

At the same time, this whole discussion completely misses the point. Fairtrade doesn’t work as a development tool because, in the end, Fairtrade does absolutely nothing to address the structural inequalities faced by those in the primary sector of the global economy relative to basically everyone else. Paying an African farmer a higher wage/better price means they are now a slightly wealthier farmer. They are still exposed to environmental shocks like drought and flooding, still tied to shocks and trends in global commodities markets over which they have almost no leverage at all, often still producing commodities (like coffee and cocoa) for which demand is very, very elastic, and in the end still living in states without safety nets to help them weather these economic and environmental shocks. Yes, I think African farmers are stunningly resilient, intelligent people (I write about this a lot). But the convergence of the challenges I just listed means that most farmers in the Global South are addressing one or more of them almost all the time, and the cost of managing these challenges is high (both in terms of hedging and coping). Incremental changes in agricultural incomes will be absorbed, by and large, by these costs – this is not a transformative development pathway.

So why is everyone freaking out over the $1 million finding – even if that finding misrepresents the actual findings of the report? Because it brutally rips the Fairtrade band-aid off the global economy, and strips away any feeling of “doing our part” from those who purchase Fairtrade products. But of course, those of us who purchase Fairtrade products were never doing our part. If anything, we were allowing the shiny idea of better incomes and prices to obscure the structural problems that would always limit the impact of Fairtrade in the lives of the poor.

Bill Gates has a Project Syndicate piece up that, in the context of discussing Nina Munk’s book The Idealist, argues in favor of Jeffrey Sachs’ importance and relevance to contemporary development.

I’m going to leave aside the overarching argument of the piece. Instead, I want to focus on a small passage that, while perhaps a secondary point to Gates, strikes me as a very important lesson that he fails to apply to his own foundation (though to be fair, this is true of most people working in development).

Gates begins by noting that Sachs came to the Gates Foundation to ask for MVP funding, and lays out the fundamental pitch Sachs was selling – a “big push” of integrated interventions across the health, agriculture, and education sectors:

[Sachs’] hypothesis was that these interventions would be so synergistic that they would start a virtuous upward cycle and lift the villages out of poverty for good. He felt that if you focus just on fertilizer without also addressing health, or if you just go in and provide vaccinations without doing anything to help improve education, then progress won’t be sustained without an endless supply of aid.

This is nothing more than integrated development, and it makes sense. But, as was predicted, and as some are now demonstrating, it did not work. In reviewing what happened in the Millennium Villages that led them to come up short of expectations, Gates notes:

MVP leaders encouraged farmers to switch to a series of new crops that were in demand in richer countries – and experts on the ground did a good job of helping farmers to produce good crop yields by using fertilizer, irrigation, and better seeds. But the MVP didn’t simultaneously invest in developing markets for these crops. According to Munk, “Pineapple couldn’t be exported after all, because the cost of transport was far too high. There was no market for ginger, apparently. And despite some early interest from buyers in Japan, no one wanted banana flour.” The farmers grew the crops, but the buyers didn’t come.

But then Gates seems to glide over a really key question: how could a smart, well-intentioned man miss the mark like this? Worse, how could a leading economist’s project blow market engagement so badly? Gates’ throwaway argument is “Of course, Sachs knows that it’s critical to understand market dynamics; he’s one of the world’s smartest economists. But in the villages Munk profiled, Sachs seems to be wearing blinders.” This is not an explanation for what happened, as telling us Sachs suffered from blinders is simply restating the obvious. The real issue is the source of these blinders.

The answer is, to me, blindingly obvious. The MVP, like most development interventions, really never understood what was going on in the villages targeted for intervention. Sure, they catalogued behaviors, activities, and outcomes…but there was never any serious investigation into the logic of observed behaviors. Instead, the MVP, like most development interventions, was rife with assumptions about the motivations of those living in Millennium Villages that produced these observed activities and outcomes, assumptions that had little to do with the actual logic of behavior. The result was interventions that infantilized the Millennium villagers by implicitly assuming, for example, that the villagers had not considered the potential markets for new and different crops/products. Such interventions assume ignorance as the driver of observed behaviors, instead of the enormously complex decision-making that underlies everyday lives and livelihoods in even the smallest village.

To give you an idea of what I mean, take a look at the following illustrations of the complexity of livelihoods decision-making (these are from my forthcoming article on applying the Livelihoods as Intimate Government approach in Applied Geography – a preprint is here).

First, we have Figure 1, which illustrates the causes behind observed decisions captured by most livelihoods frameworks. In short, this is what most contemporary development planning gets to, at best.

Figure 1

However, this is a very incomplete version of any individual’s decision-making reality. Figure 2 illustrates the wider range of factors shaping observed decisions that become visible through multiscalar analysis that nests particular places in wider networks of economy, environment, and politics. Relatively few applications of livelihoods frameworks approach this level of complexity, and those that do tend to consider only the impacts of markets on particular livelihoods and places.

Figure 2

While this is better than the overly simplistic framing of decisions in Figure 1, it is still incomplete because motivations are not, themselves, discrete. Figure 3 illustrates the complex web of factors, local and extralocal, and the ways in which these factors play off of one another at multiple scales, at different times, and in different situations.

Figure 3

When we seek to understand why people do what they do (and do not do other things), this is the complexity with which we must engage.

This is important, because were Gates to realize that this was the relevant point of both Munk’s book and his own op-ed, he might better understand why his own foundation has

many projects…that have come up short. It’s hard to deliver effective solutions, even when you plan for every potential contingency and unintended consequence. There is a natural tendency in almost any kind of investment – business, philanthropic, or otherwise – to double down in the face of difficulty. I’ve done it, and I think most other people have too.

So, what do you do? Well, we have an answer: The Livelihoods as Intimate Government approach we use at HURDL (publications here and here, with guidance documents coming later in the summer) charts an analytic path through this level of complexity. Before the usual objections start:

1) We can train people to do it (we are doing so in Mali as I write this). You don’t need a Ph.D. in anthropology to use our approach.

2) It does not take too much time. We can implement at least as fast as any survey process and, depending on spatial focus and resources, can move on a timeframe ranging from a few weeks to two months.

3) It is not too expensive – qualitative researchers are not expensive, and we do not require high-end equipment to do our work.

The proof is in the reactions we are getting from our colleagues. Here in Mali, I now have colleagues from IER and agricultural extension getting fired up about our approach as they watch the data coming in during our pilot phase. They are stunned by how much data we can collect in a short period of time, and how relevant the data is to the questions at hand because we understand what people are already doing, and why they are doing it. By using this approach, and starting from the assumption that we must understand what people are doing and why before we move to interventions, we are going to lay the foundation for more productive interventions that minimize the sorts of “surprise” outcomes that Gates references as an explanation for project failure.

There are no more excuses for program and project design processes that employ the same limited methods and work from the same problematic assumptions – there are ways to do it differently. But until people like Gates and Sachs reframe their understanding of how development should work, development will continue to be plagued by surprises that aren’t all that surprising.

While development – thought of broadly as social/economic/political change that somehow brings about a change in people’s quality of life – generally entails changes in behavior, conversations about “behavior change” in development obscure important political and ethical issues around this subject, putting development programs and projects, and, worse, the people those programs and projects are meant to help, at risk.

We need to return to a long-standing conversation about who gets to decide what behaviors need changing. Most contemporary conversations about behavior change invoke simple public health examples that obscure the politics of behavior change (such as this recent New York Times Opinionator piece). This piece appears to address the community and household politics of change (via peer pressure), but completely ignores the fact that every intervention mentioned was introduced by someone outside these communities. This is easy to ignore because handwashing or the use of chlorine in drinking water clearly reduces morbidity, nobody benefits from such morbidity, and addressing the causes of that morbidity requires interventions that engage knowledge and technology that, while well-established, were created someplace else.

But if we open up this conversation to other sorts of examples, the picture gets much more complicated. Take, for example, agricultural behaviors. An awful lot of food security/agricultural development programming these days discusses behavior change, ranging from what crops are grown to how farmers engage with markets. Here, the benefits of this behavior change are less clear, and less evenly distributed through the population. Who decides what should be grown, and on what basis? Are improved yields or increased incomes enough justification to “change behaviors”? Such arguments presume shockingly simple rationales for observed behaviors, such as yield maximization, and often implicitly assume that peasant farmers in the Global South lack information and understandings that would produce such yields, thus requiring “education” to make better decisions. As I’ve argued time and again, and demonstrated empirically several times, most livelihoods decisions are a complex mix of politics, local environment, economy, and social issues that these farmers weigh in the context of surprisingly detailed information (see this post or my book for a discussion of farm allocation in Ghanaian households that illustrates this point). In short, when we start to talk about changing people’s behaviors, we often have no idea what it is that we are changing.

The fact that we have little to no understanding of the basis on which observed decisions are made is a big, totally undiscussed problem for anyone interested in behavior change. In development, we design programs and projects based on presumptions about people’s motivations, but those presumptions are usually anchored in our own experiences and perceptions – which are quite often different from those of the people with whom we work in the Global South (see the discussion of WEIRD data in psychology, for example here). When we don’t know why people are doing the things they do, we cannot understand the opportunities and challenges that come with those activities/behaviors. This allows an unexamined bias against the intelligence and experience of the global poor to enter and lurk behind this conversation.

Such bias isn’t just politically/ethically problematic – it risks real programmatic disasters. For example, when we perceive “inefficiency” on many African farms, we are often misinterpreting hedging behaviors necessary to manage worst-case scenarios in a setting where there are no safety nets. Erasing such behaviors in the name of efficiency (which will increase yields or incomes) can produce better outcomes…until the situation against which the farmers were hedged arises. Then, without the hedge, all hell can break loose. Among the rural agricultural communities in which I have been working for more than 15 years, such hedges typically address climate and/or market variability, which produce extremes at frequent, if irregular, intervals. Stripping the hedges from these systems presumes that the good years will at least compensate for the bad…a dangerous assumption based far more on hope or optimism than evidence in most places where these projects are designed and implemented. James Scott’s book The Art of Not Being Governed provides examples of agrarian populations that fled the state in the face of “modernization” efforts not because they were foolish or backward, but because they saw such programs as introducing unacceptable risks into their lives (see also this post for a similar discussion in the context of food security).

This is why my lab uses an approach (on a number of projects ranging from climate services evaluation and design to disaster risk reduction) that starts from the other direction – we begin by identifying and explaining particular behaviors relevant to the challenge, issue, or intervention at hand, and then start thinking about what kinds of behavioral change are possible and acceptable to the people with whom we work. We believe that this is both more effective (as we actually identify the rationales for observed behaviors before intervening) and safer (as we are less likely to design/condone interventions that increase vulnerability) than development programming based on presumption.

This is not to say that we should simply valorize all existing behaviors in the Global South. There are inefficiencies out there that could be reduced. There are things like handwashing that are simple and important. Sometimes farmers can change their practices in small ways that do not entail big shifts in risk or vulnerability. Our approach to project design and assessment helps to identify just such situations. But on the whole, we need to think much more critically about what we are assuming when we insist on a particular behavior change, and then replace those assumptions with information. Until we do, behavior change discussions will run the risk of uncritically imposing external values and assumptions on otherwise coherent systems, producing greater risk and vulnerability than existed before. Nobody could call that development.

I’m getting a bit better at updating my website…probably because I have more to update. Specifically, I’ve put up some new work on the publications page. There, you will find:

On the preprints page, I have two new pieces up:

Also be sure to check out the HURDL website. We’ve got new pubs up, and the last member of the lab (Bob Greeley) finally has a bio up!

Andy Sumner was kind enough to invite me to provide a blog entry/chapter for his forthcoming e-book The Donors’ Dilemma: Emergence, Convergence and the Future of Aid. I decided to use the platform as an opportunity to expand on some of my thoughts on the future of food aid and food security in the context of a changing climate.

My central point:

By failing to understand existing agricultural practices as time-tested parts of complex structures of risk management that include concerns for climate variability, we overestimate the current vulnerability of many agricultural systems to the impacts of climate change, and underestimate the risks we create when we wipe these systems away in favor of “more efficient”, more productive systems meant to address this looming global food crisis.

Why does this matter?

In ignoring existing systems and their logic in the name of addressing a crisis that has not yet arrived, development aid runs a significant risk of undermining the nascent turn toward addressing vulnerability, and building resilience, in the policy and implementation world by unnecessarily increasing the vulnerability of the poorest populations.

The whole post is here, and a number of other really interesting posts on the future of aid can be found here as well. Head over and offer your thoughts…

Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – throwing a light on these challenges would actually serve to highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department in a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. If those pubs were excluded, you can imagine that basically all reports in all contexts were excluded. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever more challenging, with expectations for the number of publications produced rising so steeply that many who recently got tenure might have published more articles than their very senior colleagues published to become full professors even two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as it would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there) that attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from the free labor of authors and reviewers, the nearly-free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems to be a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very much second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any research-looking dollars have started looking good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants.]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com. I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

I’m a big fan of accountability when it comes to aid and development. We should be asking if our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?

Nobody.

More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, where the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes discussion of really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.

When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about are tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.

Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset.  Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:

1)   The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the target population.

This is a pretty weak assumption. I filled up a book arguing that a lot of the things that we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address the needs of the global poor are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects which achieve their intended outcomes are actually doing relatively little for their target populations.

Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.

Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically-sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area was finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water.  Thus, the project achieved its goal of making agriculture more “climate smart,” but failed to actually address the main problem in the area. Project indicators will likely capture the first half of the previous scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).

2)   The intended impact was the only impact of the intervention.

If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.

Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they now spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.

Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges that we should address and 2) we know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.

What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems and toward catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Village Project, just the latest incarnation of Mitchell’s Rule of Experts or Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingam might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.

Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result: a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…
