Entries tagged with “development”.


So, DFID paid London’s School of Oriental and African Studies (SOAS) more than $1 million to answer a pretty important question: whether or not Fairtrade certification improves growers’ lives. As has shown up in the media (see here and here) and around the development blogosphere (here), the headline finding of the report was unexpected: wage workers on Fairtrade-certified sites made less than those working on regular farms. On its face, this is a pretty shocking finding, as it undermines the basic premise of Fairtrade.

Edit 12 June: As Matt Collin notes in a comment below, this reading of the study is flawed, as it was not set up to capture the wage effects of Fairtrade. There were no baselines, and without baselines it is impossible to tell if there were improvements in Fairtrade sites – in short, the differences seen in the report could just be pre-existing differences, not a failure of Fairtrade. See the CGDev blog post on this here. So the press’ reading of this report is pretty problematic.

At the same time, this whole discussion completely misses the point. Fairtrade doesn’t work as a development tool because, in the end, Fairtrade does absolutely nothing to address the structural inequalities faced by those in the primary sector of the global economy relative to basically everyone else. Paying an African farmer a higher wage/better price means they are now a slightly wealthier farmer. They are still exposed to environmental shocks like drought and flooding, still tied to shocks and trends in global commodities markets over which they have almost no leverage at all, often still producing commodities (like coffee and cocoa) for which demand is very, very elastic, and in the end still living in states without safety nets to help them weather these economic and environmental shocks. Yes, I think African farmers are stunningly resilient, intelligent people (I write about this a lot). But the convergence of the challenges I just listed means that most farmers in the Global South are addressing one or more of them almost all the time, and the cost of managing these challenges is high (both in terms of hedging and coping). Incremental changes in agricultural incomes will be absorbed, by and large, by these costs – this is not a transformative development pathway.

So why is everyone freaking out over the $1 million finding – even if that finding misrepresents the actual findings of the report? Because it brutally rips the Fairtrade band-aid off the global economy, and strips away any feeling of “doing our part” from those who purchase Fairtrade products. But of course, those of us who purchase Fairtrade products were never doing our part. If anything, we were allowing the shiny idea of better incomes and prices to obscure the structural problems that would always limit the impact of Fairtrade in the lives of the poor.

Bill Gates has a Project Syndicate piece up that, in the context of discussing Nina Munk’s book The Idealist, argues in favor of Jeffrey Sachs’ importance and relevance to contemporary development.

I’m going to leave aside the overarching argument of the piece. Instead, I want to focus on a small passage that, while perhaps a secondary point to Gates, strikes me as a very important lesson that he fails to apply to his own foundation (though to be fair, this is true of most people working in development).

Gates begins by noting that Sachs came to the Gates Foundation to ask for funding for the Millennium Villages Project (MVP), and lays out the pitch Sachs was selling for a “big push” of integrated interventions across the health, agriculture, and education sectors:

[Sachs’] hypothesis was that these interventions would be so synergistic that they would start a virtuous upward cycle and lift the villages out of poverty for good. He felt that if you focus just on fertilizer without also addressing health, or if you just go in and provide vaccinations without doing anything to help improve education, then progress won’t be sustained without an endless supply of aid.

This is nothing more than integrated development, and it makes sense. But, as was predicted, and as some are now demonstrating, it did not work. In reviewing what happened in the Millennium Villages that caused them to come up short of expectations, Gates notes:

MVP leaders encouraged farmers to switch to a series of new crops that were in demand in richer countries – and experts on the ground did a good job of helping farmers to produce good crop yields by using fertilizer, irrigation, and better seeds. But the MVP didn’t simultaneously invest in developing markets for these crops. According to Munk, “Pineapple couldn’t be exported after all, because the cost of transport was far too high. There was no market for ginger, apparently. And despite some early interest from buyers in Japan, no one wanted banana flour.” The farmers grew the crops, but the buyers didn’t come.

But then Gates seems to glide over a really key question: how could a smart, well-intentioned man miss the mark like this? Worse, how could a leading economist’s project blow market engagement so badly? Gates’ throwaway argument is “Of course, Sachs knows that it’s critical to understand market dynamics; he’s one of the world’s smartest economists. But in the villages Munk profiled, Sachs seems to be wearing blinders.” This is not an explanation for what happened, as telling us Sachs suffered from blinders is simply restating the obvious. The real issue is the source of these blinders.

The answer is, to me, blindingly obvious. The MVP, like most development interventions, never really understood what was going on in the villages targeted for intervention. Sure, they catalogued behaviors, activities, and outcomes…but there was never any serious investigation into the logic of observed behaviors. Instead, the MVP, like most development interventions, was rife with assumptions about the motivations that produced the observed activities and outcomes among those living in the Millennium Villages – assumptions that had little to do with the actual logic of behavior. The result was a set of interventions that implicitly infantilized the Millennium villagers by assuming, for example, that they had not already considered the potential markets for new and different crops/products. Such interventions assume ignorance as the driver of observed behaviors, instead of the enormously complex decision-making that underlies everyday lives and livelihoods in even the smallest village.

To give you an idea of what I mean, take a look at the following illustrations of the complexity of livelihoods decision-making (these are from my forthcoming article on applying the Livelihoods as Intimate Government approach in Applied Geography – a preprint is here).

First, we have Figure 1, which illustrates the causes behind observed decisions captured by most livelihoods frameworks. In short, this is what most contemporary development planning gets to, at best.

Figure 1

However, this is a very incomplete version of any individual’s decision-making reality. Figure 2 illustrates the wider range of factors shaping observed decisions that become visible through multiscalar analysis that nests particular places in wider networks of economy, environment, and politics. Relatively few applications of livelihoods frameworks approach this level of complexity, and those that do tend to consider the impacts of markets on particular livelihoods and places.

Figure 2

While this is better than the overly-simplistic framing of decisions in Figure 1, it is still incomplete because motivations are not, themselves, discrete. Figure 3 illustrates the complex web of factors, local and extralocal, and the ways in which these factors play off of one another at multiple scales, at different times, and in different situations.

Figure 3

When we seek to understand why people do what they do (and do not do other things), this is the complexity with which we must engage.

This is important, because were Gates to realize that this was the relevant point of both Munk’s book and his own op-ed, he might better understand why his own foundation has

many projects…that have come up short. It’s hard to deliver effective solutions, even when you plan for every potential contingency and unintended consequence. There is a natural tendency in almost any kind of investment – business, philanthropic, or otherwise – to double down in the face of difficulty. I’ve done it, and I think most other people have too.

So, what do you do? Well, we have an answer: the Livelihoods as Intimate Government approach we use at HURDL (publications here and here, with guidance documents coming later in the summer) charts an analytic path through this level of complexity. And before the usual objections start:

1) We can train people to do it (we are doing so in Mali as I write this). You don’t need a Ph.D. in anthropology to use our approach.

2) It does not take too much time. We can implement at least as fast as any survey process and, depending on spatial focus and resources, can move on a timeframe of a few weeks to two months.

3) It is not too expensive – qualitative researchers are not expensive, and we do not require high-end equipment to do our work.

The proof is in the reactions we are getting from our colleagues. Here in Mali, I now have colleagues from IER and agricultural extension getting fired up about our approach as they watch the data coming in during our pilot phase. They are stunned by how much data we can collect in a short period of time, and how relevant the data is to the questions at hand because we understand what people are already doing, and why they are doing it. By using this approach, and starting from the assumption that we must understand what people are doing and why before we move to interventions, we are going to lay the foundation for more productive interventions that minimize the sorts of “surprise” outcomes that Gates references as an explanation for project failure.

There are no more excuses for program and project design processes that employ the same limited methods and work from the same problematic assumptions – there are ways to do it differently. But until people like Gates and Sachs reframe their understanding of how development should work, development will continue to be plagued by surprises that aren’t all that surprising.

While development – thought of broadly as social/economic/political change that somehow brings about a change in peoples’ quality of life – generally entails changes in behavior, conversations about “behavior change” in development obscure important political and ethical issues around this subject, putting development programs and projects, and, worse, the people those programs and projects are meant to help, at risk.

We need to return to a long-standing conversation about who gets to decide what behaviors need changing. Most contemporary conversations about behavior change invoke simple public health examples that obscure the politics of behavior change (such as this recent New York Times Opinionator piece). This piece appears to address the community and household politics of change (via peer pressure), but completely ignores the fact that every intervention mentioned was introduced by someone outside these communities. This is easy to ignore because handwashing or the use of chlorine in drinking water clearly reduces morbidity, nobody benefits from such morbidity, and addressing the causes of that morbidity requires interventions that engage knowledge and technology that, while well-established, were created someplace else.

But if we open up this conversation to other sorts of examples, the picture gets much more complicated. Take, for example, agricultural behaviors. An awful lot of food security/agricultural development programming these days discusses behavior change, ranging from what crops are grown to how farmers engage with markets. Here, the benefits of this behavior change are less clear, and less evenly-distributed through the population. Who decides what should be grown, and on what basis? Are improved yields or increased incomes enough justification to “change behaviors”? Such arguments presume shockingly simple rationales for observed behaviors, such as yield maximization, and often implicitly assume that peasant farmers in the Global South lack information and understandings that would produce such yields, thus requiring “education” to make better decisions. As I’ve argued time and again, and demonstrated empirically several times, most livelihoods decisions are a complex mix of politics, local environment, economy, and social issues that these farmers weigh in the context of surprisingly detailed information (see this post or my book for a discussion of farm allocation in Ghanaian households that illustrates this point). In short, when we start to talk about changing peoples’ behaviors, we often have no idea what it is that we are changing.

The fact that we have little to no understanding of the basis on which observed decisions are made is a big, totally undiscussed problem for anyone interested in behavior change. In development, we design programs and projects based on presumptions about people’s motivations, but those presumptions are usually anchored in our own experiences and perceptions – which are quite often different from those of the people with whom we work in the Global South (see the discussion of WEIRD data in psychology, for example here). When we don’t know why people are doing the things they do, we cannot understand the opportunities and challenges that come with those activities/behaviors. This allows an unexamined bias against the intelligence and experience of the global poor to enter and lurk behind this conversation.

Such bias isn’t just politically/ethically problematic – it risks real programmatic disasters. For example, when we perceive “inefficiency” on many African farms, we are often misinterpreting hedging behaviors necessary to manage worst-case scenarios in a setting where there are no safety nets. Erasing such behaviors in the name of efficiency (which will increase yields or incomes) can produce better outcomes…until the situation against which the farmers were hedged arises. Then, without the hedge, all hell can break loose. Among the rural agricultural communities in which I have been working for more than 15 years, such hedges typically address climate and/or market variability, which produce extremes at frequent, if irregular, intervals. Stripping the hedges from these systems presumes that the good years will at least compensate for the bad…a dangerous assumption based far more on hope or optimism than evidence in most places where these projects are designed and implemented. James Scott’s book The Art of Not Being Governed provides examples of agrarian populations that fled the state in the face of “modernization” efforts not because they were foolish or backward, but because they saw such programs as introducing unacceptable risks into their lives (see also this post for a similar discussion in the context of food security).

This is why my lab uses an approach (on a number of projects ranging from climate services evaluation and design to disaster risk reduction) that starts from the other direction – we begin by identifying and explaining particular behaviors relevant to the challenge, issue, or intervention at hand, and then start thinking about what kinds of behavioral change are possible and acceptable to the people with whom we work. We believe that this is both more effective (as we actually identify the rationales for observed behaviors before intervening) and safer (as we are less likely to design/condone interventions that increase vulnerability) than development programming based on presumption.

This is not to say that we should simply valorize all existing behaviors in the Global South. There are inefficiencies out there that could be reduced. There are things like handwashing that are simple and important. Sometimes farmers can change their practices in small ways that do not entail big shifts in risk or vulnerability. Our approach to project design and assessment helps to identify just such situations. But on the whole, we need to think much more critically about what we are assuming when we insist on a particular behavior change, and then replace those assumptions with information. Until we do, behavior change discussions will run the risk of uncritically imposing external values and assumptions on otherwise coherent systems, producing greater risk and vulnerability than existed before. Nobody could call that development.

I just finished reading Geoff Dabelko’s “The Periphery isn’t Peripheral” on Ensia. In this piece, Geoff diagnoses the problems that beset efforts to address linked environmental and development challenges, and offers some thoughts on how to overcome them. I love his typology of the tyrannies that hamper efforts to build and implement good, integrative (i.e. cross-sectoral) programs. I agreed with his suggestions on how to make integrative work more acceptable/mainstream in development. And by the end, I was worried about how to make his suggestions a reality within the donors and implementers that really need to take on this message.

Geoff’s four tyrannies (Tyranny of the Inbox; Tyranny of Immediate Results; Tyranny of the Single Sector; Tyranny of the Unidimensional Measurement of Success) that he sees crippling environment-and-development programming are dead on. Those of us working in climate change are especially sensitive to tyranny #2, the Tyranny of Immediate Results. How the hell are we supposed to demonstrate results on an adaptation program that is meant to address challenges that are not just happening now, but will intensify over a 30-year horizon? Does our inability to see the future mean that this programming is inherently useless or inefficient? No. But because it is impossible to measure future impact now, adaptation programs are easy to attack…

As a geographer, I love Geoff’s “Tyranny of the Single Sector” – geographers generally cannot help but start integrating things across sectors (that’s what our discipline does, really). In my experiences in the classroom and the donor world, integrative thinking eludes a lot more people than I ever thought possible. Our absurd system of performance measurement in public education is not helping – trust me. But even when you find an integrative thinker, they may not be doing much integrative work. Sometimes people simply can’t see outside their own training and expertise. Sometimes they are victims of tyranny #1 (Tyranny of the Inbox), where they are too busy dealing with immediate challenges within their sector to think across sectors – lord knows, that defined the last 6 months of my life at USAID.

And Geoff’s fourth tyranny speaks right to my post from the other day – the Tyranny of the Unidimensional Measurement of Success. Read Geoff, and then read my post, and you will see why he and I get along so well.

Now, Geoff does not stop with a diagnosis – he suggests that integrative thinking in development will require some changes to how we do our jobs, and provides some illustrations of integrative projects that have produced better results to bolster his argument. While I like all of his suggestions, what concerns me is that these suggestions are easier said than done. For example, Geoff is dead right when he says that:

We must reward, rather than punish, cross-disciplinary or cross-sectoral approaches; define success in a way that encourages, rather than discourages, positive outcomes in multiple arenas; and foster monitoring and evaluation plans that embrace, rather than ignore, different timescales and multiple indicators.

But how, exactly, are we to do this? What HR levers exist that we can use to make this happen? How much leeway do appointees and other executive-level donor staff have with regard to changing rewards and evaluations? And are the right people in charge to make such changes possible? A lot of people rise through donor organizations by being very good at sectoral work. Why would they reward people for doing things differently?

Similarly, I wonder how we can actually get more long-term thinking built into the practice and implementation of development. How do we really overcome the Tyranny of the Inbox, and the Tyranny of Immediate Results? This is not merely a mindset problem; it is a problem of budget justifications to an often-hostile Congress that wants to know what you have done for it lately. Where are our congressional champions to make this sort of change possible?

Asking Geoff to fix all our problems in a single bit of writing is completely unfair. That is the Tyranny of “What Do We Do Now?” In the best tradition of academic/policy writing, his piece got me thinking (constructively) about what needs to happen if we are to do a better job of achieving something that looks like sustainable development going forward. For that reason alone it is well worth your time. Go read.

I’m a big fan of accountability when it comes to aid and development. We should be asking if our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?

Nobody.

More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, in which the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes discussion of really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.

When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about are tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.

Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset.  Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:

1)   The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the target population.

This is a pretty weak assumption. I filled up a book arguing that a lot of the things that we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address the needs of the global poor are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects which achieve their intended outcomes are actually doing relatively little for their target populations.

Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.

Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically-sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area was finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water.  Thus, the project achieved its goal of making agriculture more “climate smart,” but failed to actually address the main problem in the area. Project indicators will likely capture the first half of the previous scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).

2)   The intended impact was the only impact of the intervention.

If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.

Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they now spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.

Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges that we should address and 2) we know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.

What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems to catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Villages Project, just the latest incarnation of Mitchell’s Rule of Experts or Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingam might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.

Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result – a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…

I’ve always been a bit skeptical of development programs that claim to work on issues of environmental governance. Most donor-funded environmental governance work stems from concerns about issues like sustainability and climate change at the national to global scale. These are legitimate challenges that require attention. However, such programs often strike me as instances of thinking globally, but implementing locally (and ideally someplace else). You see, there are things that we in the wealthiest countries should be doing to mitigate climate change and make the world a more sustainable place. But they are inconvenient. They might cost us a bit of money. They might make us do a few things differently. So we complain about them, and they get implemented slowly, if ever.

Yet somehow we fail to see how this works in exactly the same manner when we implement programs that are, for example, aimed at the mitigation of climate change in the Global South. These programs tend to take away particular livelihoods activities and resources (such as cutting trees, burning charcoal, or fishing and hunting particular species), which is inconvenient, tends to reduce household access to food and income, and forces changes upon people – all of which they don’t really like. So it is sort of boggling to me that we are surprised when populations resist these programs and projects.

I’m on this topic because, while conducting preliminary fieldwork in Zambia’s Kazungula District last week, I had yet another experience of this problem. In the course of a broad conversation on livelihoods, vulnerabilities, and opportunities in his community, a senior man raised charcoal production as an alternative livelihood in the area (especially in the dry season, when there is little water for gardening/farming and no nearby source of fishing). Noting that charcoal production was strictly controlled in order to limit the impacts of climate change*, a rationale whose legitimacy he did not challenge, he complained that the effort to address charcoal production is not well understood or accepted by the local population. He argued that much of the governance associated with this effort consisted of agents of the state telling people “it’s an offense” and demanding they stop cutting trees and burning charcoal without explaining why it is an offense. He then pointed to one of his sons and said “how can you tell him ‘don’t cut this tree’? And his fields are flooding [thus destroying his crops, a key source of food and income].” But the quote that pulled it all together…

“Don’t make people be rude or be criminals. Give them a policy that will open them.”

The text is clear here: if you are going to take away a portion of our livelihoods for the sake of the environment, please give us an alternative so we can comply. This is obvious – and yet to this point I think the identification and implementation of alternative livelihoods in the context of environmental governance programs is, at best, uneven.

But the subtext might be more important: If you don’t give us an alternative, you make us into criminals because we will be forced to keep practicing these now-banned activities. And when that happens, we will never view the regulations or those that enforce them as legitimate. In other words, the way we tend to implement environmental governance programming undermines the legitimacy of the governance structures we are trying to put in place.

Oops.

The sad part is that there have been innumerable cases of just the phenomenon I encountered last week, at other times and in other places. They’ve been documented in reports and refereed publications. Hell, I’ve heard narratives like this in the course of my work in Ghana and Malawi. But environmental governance efforts continue to inadequately explain their rationales to the populations most affected by their implementation. They continue to take away livelihoods activities from those that need them most in the name of a greater good for which others pay no tangible price. And they continue to be surprised when people ignore the tenets of the program, and begin to question the legitimacy of any governance structure that would bring such rules into effect. Environmental governance is never going to work if it is the implementation of a “think globally, implement locally (ideally someplace else)” mentality. It has to be thought, understood, and legitimized in the place it will be implemented, or it will fail.

 

 

* Yes, he really said that, as did a lot of other people. The uniformity of that answer strikes me as the product of some sort of sensitization campaign that, to be honest, is pretty misplaced. There are good local environmental reasons for controlling deforestation, but the contribution of charcoal production to the global emissions budget is hilariously small.

There is a lot of hue and cry about the issue of loss and damage at the current Conference of the Parties (COP-19). For those unfamiliar with the topic, in a nutshell the loss and damage discussion is one of attributing particular events and their impacts on poorer countries to climate variability and change that has, to this point, been largely driven by activities in the wealthier countries. At a basic level, this question makes sense and is, in the end, inevitable. Those who have contributed the most (and by the most, I mean nearly all) to the anthropogenic component of climate change are not experiencing the same level of impact from that climate change – because they see fewer extreme events, experience more attenuated long-term trends, or simply have substantially greater capacity to manage individual events and adapt to longer-term changes. This is fundamentally unfair. But it is also a development challenge.

The more I work in this field, and the more I think about it, the more I am convinced that the future of development lies in creating the strong, stable foundations upon which individuals can innovate in locally-appropriate ways. These foundations are often tenuous in poorer countries, and the impacts of climate change and variability (mostly variability right now) certainly do not help. Most agrarian livelihoods systems I have worked with in sub-Saharan Africa are massively overbuilt to manage climate extremes (e.g., flood or drought) that, while infrequent, can be catastrophic. The result: in “good” or “normal” years, farmers are hedging away very significant portions of their agricultural production, through such decisions as the siting of farms, the choice of crops, or the choice of varieties. I’ve done a back-of-the-envelope calculation of this cost of hedging in the communities I’ve worked with in Ghana, and the range is between 6% and 22% of total agricultural production each year. That is, some of these farmers are losing 22% of their total production because they are unnecessarily siting their fields in places that will perform poorly in all but the most extreme (dry or wet) years. When you are living on the local equivalent of $1.25/day, this is a massive hit to one’s income, and without question a huge barrier to transformative local innovations. Finding ways to help minimize the cost of hedging, or the need for hedging, is critical to development in many parts of the Global South.
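To make the arithmetic concrete, here is a minimal sketch of how a back-of-the-envelope hedging-cost calculation of this sort might look. The yields, siting strategies, and extreme-year frequency below are hypothetical placeholders invented for illustration, not the Ghana data behind the 6–22% figures.

```python
# A minimal, illustrative sketch of a hedging-cost calculation.
# All numbers are hypothetical placeholders, not field data.

# Expected yield (kg) from the same household's land under two siting strategies,
# in a "normal" rainfall year vs. an extreme (very wet or very dry) year.
yields = {
    "optimal_siting": {"normal": 1000, "extreme": 300},  # maximizes normal-year output
    "hedged_siting":  {"normal": 800,  "extreme": 550},  # gives up normal-year output to survive extremes
}

p_extreme = 0.15  # assumed frequency of an extreme year

def expected_yield(strategy: str) -> float:
    """Frequency-weighted average yield for a siting strategy."""
    y = yields[strategy]
    return (1 - p_extreme) * y["normal"] + p_extreme * y["extreme"]

# Cost of hedging in a normal year: the share of production given up
# relative to the siting that maximizes normal-year output.
normal_year_cost = 1 - yields["hedged_siting"]["normal"] / yields["optimal_siting"]["normal"]

print(f"Normal-year cost of hedging: {normal_year_cost:.0%}")
print(f"Expected yield, optimal siting: {expected_yield('optimal_siting'):.0f} kg")
print(f"Expected yield, hedged siting:  {expected_yield('hedged_siting'):.0f} kg")
```

With these made-up numbers the household gives up 20% of its potential production in a normal year as the price of not being wiped out in an extreme one; the real calculation would rest on observed yields and siting decisions rather than assumed ones.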

Therefore, a stream of finance attached to loss and damage could be a really big deal for those in the Global South, something perhaps as important as debt relief was to the MDRI countries. We need to sort out loss and damage. But NOT NOW.

Why not? Simply put, we don’t have the faintest idea what we are negotiating right now. The attribution of particular events to anthropogenic climate change and variability is inordinately difficult (it is somewhat easier for long-term trends, but this has its own problem – it takes decades to establish the trend). However, for loss and damage to work, we need this attribution, as it assigns responsibility for particular events and their costs to those who caused those events and costs. Also, we need means of measuring the actual costs of such events and trends – and we don’t have that locked down yet, either. This is both a technical and a political question: what we can measure, and how we should measure it, is a technical question that remains unanswered. But what we should measure is a political question – just as certain economic stimuli have multiplier effects through an economy, disasters and long-term degradation have radiating “multipliers” through economies. Where do we stop counting the losses from an event or trend? We don’t have an answer to that, in part because we don’t yet have attribution, nor do we have the tools to measure costs even if we had attribution.

So, negotiating loss and damage now is a terrible idea. Rich countries could find themselves facing very large bills without the empirical evidence to justify the size of the bills or their responsibility for paying them – which will make such bills political nonstarters in rich countries. In short, this process has to deliver a bill that everyone agrees should be paid, and that the rich countries agree can be paid. At the same time, poorer countries need to be careful here – because we don’t have strong attribution or measurements of costs, there is a real risk that they could negotiate for too little – not enough to actually invest in the infrastructure and processes needed to ensure a strong foundation for local innovation. Either outcome would be a disaster. And these are the most likely outcomes of any negotiation conducted blindly.

I’m glad loss and damage is on the table. I hope that more smart people start looking into it in their research and programs, and that we rapidly build an evidence base for attribution and costing. That, however, will take real investment by the richest countries (who can afford it), and that investment has not been forthcoming.  If we should be negotiating for anything right now, it should be for funds to push the frontiers of our knowledge of attribution and costing so that we can get to the table with evidence as soon as humanly possible.

So, this just popped up in my twitter feed from @USAID:

 

[Screenshot of the @USAID tweet, 22 November 2013]

 

What’s fun about this tweet is the fact that USAID, and indeed most development donors, are “other places” where failure = damage to your career and budget. There are a host of reasons for this (just start reading this Natsios piece and work out from there), but anyone who denies this problem is just delusional (there are pockets of high-risk operations at USAID, but they are the exceptions, not the rule).

Also note, this tweet is related to a discussion of innovation policy…which, it seems to me, missed the point of innovation entirely. Let me summarize a workable innovation policy for everyone: If donors want innovation, they need to foster a culture of reasonable risk for big rewards among their staff and implementers. They need to stop being the “other places” in this tweet. How they do this depends on the institution, but this should be the goal.

See, you don’t even need an executive summary for that.

I’ve been off the blog for a while now. OK, about two months, which is too long. The new semester, and a really large number of projects, have landed on me like an avalanche. I have a small lab that I now manage (the Humanitarian Response and Development Lab, HURDL), and while I am fortunate to have a bunch of really good students in that lab, I’ve never run a lab before (nor have I ever worked in someone else’s lab before). So figuring out how best to manage projects and personnel is a new challenge that eats up time. As I told my students, this is not a fully operational, efficient program that they have joined. It’s more like a car that has stalled, and every day I am pushing it along screaming “pop the clutch” at whoever is in the driver’s seat. To follow the metaphor, there are a lot of fits and starts right now, but things are coming together. Among them:

  • A report on gender and adaptation in agrarian settings for USAID’s Office of Gender Equality and Women’s Empowerment and the Office of Global Climate Change which, through both literature review and empirical example, is a first step toward thinking about and implementing much more complex ideas about gender in project design and evaluation. This report will spawn several related journal articles. Watch this space for both activities and publications.
  • A long-awaited report offering a detailed, if preliminary, assessment of the Mali Meteorological Service’s Agrometeorological Advisory Program. I started this project before I left USAID, but it is finally coming together. Again, a set of journal articles will come from this – our empirical basis alone is absurd (720 interviews, 144 focus groups, 36 villages covering most of Southern Mali).  There are going to be a lot of interesting lessons for those interested in providing weather and climate information to farmers in this report…
  • A white paper/refereed article laying out how to implement the Livelihoods as Governmentality (LAG) approach that I presented in this article earlier this year. It is one thing to present a reframing of livelihoods decision-making and the livelihoods approach, and another to make it implementable. One of my students and I piloted this approach over the summer in Senegal, and we are pulling it together for publication now.  This will become the core of some trainings that we are likely to be doing in 2014 as we start building capacity in various countries to conduct detailed livelihoods analyses that might inform project design.

Then there is work in Zambia with the Red Cross on anticipatory humanitarian assistance (focused on hydrometeorological hazards), and a new project as part of a rather huge consortium looking at migration as an adaptation strategy in deltas in several parts of the world.

Did I mention that it’s a small lab – me and three other students working on all of this? Yeah, we’re a little short-staffed. I’m supposed to have a postdoc/research associate on board to help as well, but there have been some contract challenges that have prevented me from advertising the position. I hope to have that out some time in the next month or two, ideally to bring someone on for a year, extendable if the funding comes through.  So if you are interested in gender and some combination of development, climate change adaptation, and disaster risk reduction/humanitarian assistance, and want to join a really outstanding group of people wired in to a lot of donors and partners, and working on projects that bring critical scholarship to the ground, let me know…

So that’s where I’ve been hiding. I am crawling out from under the rock, and hope to rejoin the blogosphere in a more active capacity in coming weeks. Thanks for your patience…