Entries tagged with “development”.
Sun 17 Jan 2016
Posted by Ed under Adaptation, Africa, development, Development Institutions, Food Security, Humanitarian Assistance, Livelihoods, research
Not bugs, but features: Or, adaptation is harder than you’d think
Back in September, HURDL released its final report on our work assessing Mali’s Agrometeorological Advisory program – an effort, conceived and run by the Government of Mali, to deliver weather and climate information to farmers to improve agricultural outcomes in the country. You’d think this would be a straightforwardly good idea – you know, more information (or indeed any information) being better than none. So our findings were a bit stunning:
- As we found in our preliminary report, fewer than 20% of those with access to the advisories are actually using them.
- Nearly everyone using the advisories is a man.
- Nearly everyone using the advisories is already relatively well-off.
- The advisories were most used in the parts of the country where precipitation is most secure (see map below).
This was, to say the least, a set of surprising findings. And, on their surface, they suggest that the program is another example of development failure: a project that only reaches those who least need the help it is providing.
But that conclusion only holds if this program was oriented toward development and adaptation in the first place…and it was not. The program was established in 1981 as an effort to address conditions of acute food insecurity closely linked to severe drought. The goal was simple: use short-term and seasonal advisories to help farmers make better decisions under stress and boost food availability in Mali. This program, in other words, was an effort to address a particular, acute problem (food insecurity linked to extreme drought) through a very specific means (boosting food availability). This was not a development project, it was a humanitarian response to a crisis. And as such, it was brilliant – and each of the findings above demonstrate why.
- The goal was to rapidly boost yields of grains (and cotton), crops over which men hold most of the decision-making authority.
- The goal was to rapidly boost overall yields of grains to improve availability within Mali, and therefore targeting the wealthy farmers who had the access to equipment and animal traction necessary to use the advisories made sense.
- The goal was to rapidly boost grain production…and much more grain is grown in the wetter parts of Mali than in the drier areas in the north.
In short, the project was never intended to address development goals – it was supposed to address a particular aspect of a humanitarian crisis through particular means, and its design targeted exactly the right decision-makers/actors to achieve that goal. Indeed, one could argue that the rather narrow use of advisories speaks to how well designed this humanitarian intervention was. Put simply, the gendered/wealth-dependent character of advisory use, and the fact that the advisories are most used in areas that are already very agriculturally productive, are not bugs in this project: they are features!
The problem, then, is not with the design of the project, but the fact it continued for more than 30 years, and some 25 years after the end of the droughts. As a narrowly-focused effort to address a particular, short-term humanitarian crisis, the gendered/wealth-based outcomes of the project were acceptable trade-offs to achieve higher grain yields. But over 30 years, and without the justification of an acute crisis, it is likely this project has served to unnecessarily exacerbate agricultural inequality in rural southern Mali.
HURDL is now engaged in a project to redesign this program, to shift it from a (now unnecessary) humanitarian assistance effort to a development/adaptation project. With this shift in priorities comes a shift in how we view the outcomes of the program. The very things that made it an effective humanitarian assistance program (gendered and income-based inequality) are now aspects of the project that we must change, to ensure that the greatest number of farmers possible have access to information they can use in their livelihoods decisions as we move into conditions of greater economic and environmental uncertainty. In short, we now have to bridge the divide between DRR (disaster risk reduction)/humanitarian response on one side and development/adaptation on the other, a divide that has so plagued those of us concerned with the situation of those in the Global South. This will be tremendously challenging, but through this process we hope not only to work with Malian colleagues to design and deliver a development and adaptation version of this program to Malian farmers, but also to learn more about how to bridge the particular time/scope emphases of these two assistance arenas.
Wed 13 Jan 2016
Posted by Ed under development, Development Institutions, Humanitarian Assistance, Random Musing
Of Death Stars and Development
Look, I know there have been lots of Star Wars and development posts/tweets (here, here, here), so I won’t belabor things. But forgive me a quick observation after seeing the most recent Star Wars: isn’t the continual construction of bigger and more powerful flying orbs of death by the bad guys (the Empire, then the First Order) a perfect metaphor for the sort of thinking that gave us the Millennium Villages?
Goal: Galactic Domination
Project 1: Star Wars: A New Hope
Logframe: Build giant Death Star space station, blow up a representative planet, watch galaxy cower in fear => Galactic Domination
Evaluation: Failure to address single design flaw results in giant space station destroyed
Outcome: Lack of Domination
Project 2: Star Wars: Return of the Jedi
Logframe: Build bigger, better Death Star space station, everyone will remember the last one blew up a planet, and because this one is even bigger the galaxy will cower in fear => Galactic Domination
Evaluation: Fixed previous design flaw, overconfidence in tactics and shields failed to account for another fatal flaw, giant space station destroyed
Outcome: Catastrophe, Complete collapse of the Empire
Project 3: Star Wars: The Force Awakens
Logframe: F*ck it, we’re making an actual moon/planet into an absolutely massive, sun-powered Starkiller Base (rebranded to avoid stigma of previous Death Stars), blow up the Republic’s entire home system, watch galaxy cower in fear => Galactic Domination
Evaluation: Pretty much the same flaw as with the second Death Star, with pretty much the same result: Starkiller base destroyed
Outcome: Still no domination
So, to summarize: we have a problem, we can’t seem to solve it, so we will keep plowing ahead with the same approach, but bigger and more expensive, because clearly it isn’t the concept that’s flawed, we just haven’t gone big enough!
Yep, sounds like a lot of development.
Sun 28 Dec 2014
Raj Shah has announced his departure from USAID. Honestly, this surprises nobody at the Agency, or anyone in the development world who’s been paying attention. If anything, folks are surprised he is still around – it is well-known (or at least well-gossiped) that he had been looking for the door, exploring any number of opportunities, since at least the spring of 2012. There are plenty of reviews of Shah’s tenure posted around the web, and I will not rehash them. While I have plenty of opinions of the various initiatives that Shah oversaw/claims credit for (and these are not always the same, by the way), gauging what did and did not work under a particular administrator is usually a question for history, and it will take a bit of space and time before anyone should feel comfortable offering a full review of this administrator’s work.
I will say that I hope much of what Shah pushed for under USAID Forward, especially the rebuilding of the technical capacity of USAID staff, the emphasis on local procurement, and the strengthening of evaluation, becomes entrenched at the agency. Technical capacity is critical – not because USAID is ever going to implement its own work. That would require staffing the Agency at something like three or four times current levels, and nobody is ever going to approve that. Instead, it is critical for better monitoring and evaluating the work of the Agency’s implementing partners. In my time at USAID, I saw implementer work and reports that ran the gamut from “truly outstanding” to “dumpster fire”. The problem is that there are many cases where work that falls on the dumpster fire end of the spectrum is accepted because Agency staff lack the technical expertise to recognize the hot mess they’ve been handed. This is going to be less of a problem going forward, as long as the Agency continues to staff up on the technical side.
Local procurement is huge for both the humanitarian assistance and development missions of USAID. For example, there is plenty of evidence supporting the cost/time effectiveness of procuring emergency food aid in or near regions of food crisis. Further, mandates that push more USAID funding to local organizations and implementers will create incentives to truly build local capacity to manage these funds and design/implement projects, as it will be difficult for prime contractors to meet target indicators and other goals without high-capacity local partners.
A strong evaluation policy will be huge for the Agency…if it ever really comes to pass. While I have seen real signs of Agency staff struggling with how to meaningfully evaluate the impact of their programs, the overall state of evaluation at the Agency remains in flux. The Evaluation Policy was never really implementable, in part because it seems nobody actually considered who would do the evaluations. USAID staff generally lack the time and/or expertise to conduct these evaluations, and the usual implementing partners suffer from a material conflict of interest – very often, they would have to evaluate programs and projects implemented by their competitors…even projects where they had lost the bid to a competitor. Further, the organizations I have seen/interacted with that focus on evaluation remain preoccupied with quantitative approaches to evaluation that, while perhaps drawing on Shah’s interest in the now-fading RCT craze in development, really cannot identify or measure the sorts of causal processes that connect development interventions and outcomes. Finally, despite the nice words to the contrary, the culture at USAID remains intolerant of project failure. The leadership of the Agency never mounted the strong defense of this culture change to the White House or Congress needed to create the space for a new understanding of evaluation, nor did it ever really convey a message of culture change that the staff of USAID found convincing across the board. There are some groups/offices at USAID (for example, in the ever-growing Global Development Lab) where this culture is fully in bloom, but these are small offices with small budgets. Most everyone else remains mired in very old thinking on evaluation.
At least from an incrementalist perspective, entrenching and building on these aspects of USAID Forward would be a major accomplishment for Shah’s successor. Whoever comes next will not simply run out the clock of the Obama Administration – there are two years left. I therefore expect the administration to appoint an administrator (rather than promote a career USAID staff caretaker with no political mandate) to the position. In a perfect world, this would be a person who understands development as a discipline, but also has the government and implementing experience to understand how development thought intersects with development practice in the real world. Someone with a real understanding of development and humanitarian assistance as a body of thought and practice with a long history that can be learned from and built upon would be able to parse the critical parts of USAID Forward from the fluff, could prevent the design and implementation of projects that merely repeat the efforts (and often failures) of decades ago, and could perhaps reverse the disturbing trend at USAID to view development challenges as technical challenges akin to those informed by X-Prizes – a trend that has shoved the social aspects of development to the back seat at the Agency. At the same time, someone with implementing and government experience would understand what is possible within the current structure, thus understanding where incremental victories might push the Agency in important and productive directions that move toward the achievement of more ideal, long-term goals.
There are very, very few people out there who meet these criteria. Steve Radelet does, and he served as the Chief Economist at USAID while I was there, but I have no idea if he is interested or, more importantly, if anyone is interested in him. More’s the pity if not. More likely, the administration is going to go with the relatively new Deputy Administrator Alfonso Lenhardt. Looking at his background, he’s already been vetted by the Senate for his current position, has foreign service experience, and has spent time in various implementer-oriented positions. He is also well-positioned to avoid a long confirmation process: his time as a lobbyist and as Senate Sergeant at Arms likely gives him deep networks on both sides of the aisle. In his background, however, I see no evidence of a long engagement with development as a discipline, and I wonder how reform-minded a former Senior Vice President for Government Relations at an implementer can be. I do not know Deputy Administrator Lenhardt at all, and so I cannot speak to where he might fall on any or all of the issues above. According to Devex, he says his goal is to “improve management processes and institutionalize the reforms and initiatives that Shah’s administration has put in place.” I have no objection to either of these goals – they are both important. But what this means in practice, should Lenhardt be promoted, is an open question that will have great impact on the future direction of the Agency.
Sun 21 Dec 2014
Five and a half years ago, at the end of the spring semester of 2009, I sat down and over the course of 30 days drafted my book Delivering Development. The book was, for me, many things: an effort to impose a sort of narrative on the work I’d been doing for 12 years in Ghana and other parts of Africa; an effort to escape the increasingly claustrophobic confines of academic writing and debates; and an effort to exorcise the growing frustration and isolation I felt as an academic working on international development in a changing climate, but without a meaningful network into any development donors. Most importantly, however, it was a 90,000-word scream at the field that could be summarized in three sentences:
- Most of the time, we have no idea what the global poor are doing or why they are doing it.
- Because of this, most of our projects are designed for what we think is going on, which rarely aligns with reality.
- This is why so many development projects fail, and if we keep doing this, the consequences will get dire.
The book had a generous reception, received very fair (if sometimes a bit harsh) reviews, and actually sold a decent number of copies (at least by the standards of the modern publishing industry, which was in full collapse by the time the book appeared in January 2011). Maybe most gratifying, I heard from a lot of people who read the book and who heard the message, or for whom the book articulated concerns they had felt in their jobs.
This is not to say the book is without flaws. For example, the second half of the book, the part addressing the implications of being wrong about the global poor, was weaker than the first – and this is very clear to me now, as the former employee of a development donor. Were I writing the book now, I would do practically nothing to the first half, but I would revise several parts of the second half (and the very dated scenarios chapter really needs revision at this point, anyway). But, five and a half years after I drafted it, I can still say one thing clearly.
I WAS RIGHT.
Well, I was right about point #1 above, anyway. The newest World Development Report from the World Bank has empirically demonstrated what was so clear to me and many others, and what I think I did a very nice job of illustrating in Delivering Development: most people engaged in the modern development industry have very little understanding of the lives and thought processes of the global poor, the very people that industry is meant to serve. Chapter 10 is perfectly titled: “The biases of development professionals.” All credit to the authors of the report for finally turning the analytic lens on development itself, as it would have been all too easy to simply talk about the global poor through the lens of perception and bias. And when the report turns to development professionals’ perceptions…for the love of God. Just look at the findings on page 188. No, wait, let me show you some here:
For those who are chart-challenged, let me walk you through this. In three settings, the survey asked development professionals what percentage of their beneficiaries thought “what happens in the future depends on me.” For the bottom third, the professionals assumed very few people would say this. Except that a huge number of very poor people said this, in all settings. In short, the development professionals were totally wrong about what these people thought, which means they don’t understand their mindsets, motivations, etc. Holy crap, folks. This isn’t a near miss. This is I-have-no-idea-what-I-am-talking-about stuff here. These are the error bars on the initial ideas that lead to projects and programs at development donors.
The WDR frames these findings in pretty stark terms (page 180):
Perhaps the most pressing concern is whether development professionals understand the circumstances in which the beneficiaries of their policies actually live and the beliefs and attitudes that shape their lives.
And their proposed solution is equally pointed (page 190):
For project and program design, development professionals should “eat their own dog food”: that is, they should try to experience firsthand the programs and projects they design.
Yes. Or failing that, they should really start either reading the work of people who can provide that experience for them, or start funding the people who can generate the data that allows for this experience (metaphorically).
On one hand, I am thrilled to see this point in mainstream development conversation. On the other…I said this five years ago, and not that many people cared. Now the World Bank says it…or maybe more to the point, the World Bank says it in terms of behavioral economics, and everyone gets excited. Well, my feelings on this are pretty clear:
- Just putting this in terms of behavioral economics is actually putting the argument out there in the least threatening manner possible, as it is still an argument from economics that preserves that disciplinary perspective’s position of superiority in development
- The things that behavioral economics has been “discovering” about the global poor are things that anthropology, geography, sociology, and social history have been saying for decades. Further, its analyses generally lack explanatory rigor or anything resembling external validity – see my posts here, here, and here.
Also, the WDR never makes a case for why we should care that we are probably misunderstanding/misrepresenting the global poor. As a result, this just reads as an extended “oopsie!” piece that need not be seriously addressed as long as we look a little sheepish – then we can get back to work. But getting this stuff wrong is really, really important – this was the central point of the second half of Delivering Development (a point that Duncan Green unfortunately missed in his review). We can design projects that not only fail to make things better, we can actually make things much worse: we can kill people by accident. We can gum up the global environment, which is not going to hurt only some distant, abstract global poor person – it will hit those in the richest countries, too. We can screw up the global economy, another entity that knows few borders and over which nobody has complete control. This is not “oopsie!” This is a disaster that requires serious attention and redress.
So, good first step World Bank, but not far enough. Delivering Development still goes a lot further than you are willing to now. Delivering Development goes much further than behavioral development economics has gone, or really can go. Time to catch up to the real nature of this problem, and the real challenges it presents. Time to catch up to things I was writing five years ago, before it’s too late.
Thu 27 Nov 2014
Posted by Ed under development, Humanitarian Assistance, Random Musing
Letters Left Unsent: Prospective Aidworkers Must-Read
A very long time ago, J asked me to review his book Letters Left Unsent. I’ve long been a fan of J’s writing on his blog Tales from the Hood, and have had the fortune to meet him, hang out, and develop what passes for a friendship in an era where people living on different coasts, and constantly on the move, can stay in touch through various electronic means. All this by way of saying that this will hardly be an impartial review.
So, here is my one sentence review: If you are interested in going into development/humanitarian work, or know someone who is, you need to get a copy of this book and read it/give it to them.
This is not to say that you will enjoy every message in the book – actually, you or your prospective aidworker will likely hate whole chunks of it. The reason for this is simple: the book is hard – really hard. It’s not the prose, which is actually quite fluid. It is the content. The book contains some of J’s most unvarnished stories and writing, work that strips away the romance of the job, exposing it as just that: a job. In chapter after chapter, J demonstrates that development and relief work is a very important, rewarding job, but sometimes a job where the biggest impacts come not from handing some poor soul food, but in getting a spreadsheet right or from attending the right meeting. Further, these lessons are not delivered in a detached, objective manner that can be easily forgotten, but through personal stories that emerge as J points the keyboard at himself and his own experiences. This is no casting of stones at unnamed, straw-man others (something the world could use much less of). It is, at times, a brutal first-person account of the compromises, decisions, crises, frustrations, and rewards that this career brings.
To be fair, there are personal reasons why this book challenged me. First, I know J personally. This means that I know how seriously he takes this job, how hard he works, and how much he believes in what he does. This means I cannot dismiss this book as the work of a cynic or an anti-aid crank, and therefore when the stories and their lessons hurt, there is no easy escape route. Second, some of these stories hit pretty close to home. J and I live in pretty different parts of the aid world. I’ve spent the bulk of my career as an academic, with a brief stint as the employee of a donor. I don’t live for or between deployments, and I never really have. But I’ve been in donor coordination meetings for a major crisis (the 2011 Horn of Africa famine), and in reading this book, I was transported to days of watching terribly difficult decisions get made, measuring the toll the crisis took on people around me – and I still consider those experiences to be some of the tougher ones in my career. At the same time, I’ve spent an awful lot of time conducting fieldwork. In my early days as an academic, I would disappear into villages for months on end. In the pre-cellphone era, this tended to have a deleterious effect on my personal life. Some of the collateral damage from such travel that J describes marks my own personal history. In this book, I heard the echoes of some of my own decisions, and my own consequences…
So, I am not J. But I know J, both in the sense that I know the author, and I know many of those in this field for whom he writes. From my perspective, his stories ring true, and the lessons they present are real. And I have my own reasons for feeling challenged by this book, but I suspect most aidworkers would experience similar feelings as they recognize themselves in this book. In the end, my personal biases and feelings don’t change what I think is the value of this book. It is an important illustration of the development/aid worker’s life that does not resort to pieties or broad brushes. Instead, it wrestles with the ambiguities of life in this career. Development work is hard. Humanitarian assistance is hard. It is thrilling and appallingly mundane. It’s malaria and spreadsheets. Mostly spreadsheets. We succeed. We fail. We keep going, trying to learn from both. But if you are headed into this field, into this career, you are headed where J has been. Only fools ignore history, even if it is not their own. Only a very foolish prospective aidworker will ignore this book.
Wed 11 Jun 2014
So, DfID paid London’s School of Oriental and African Studies (SOAS) more than $1 million to answer a pretty important question: whether or not Fairtrade certification improves growers’ lives. As has shown up in the media (see here and here) and around the development blogosphere (here), the headline finding of the report was unexpected: wage workers on Fairtrade-certified sites made less than those working on regular farms. Admittedly, this is a pretty shocking finding, as it undermines the basic premise of Fairtrade.
Edit 12 June: As Matt Collin notes in a comment below, this reading of the study is flawed, as it was not set up to capture the wage effects of Fairtrade. There were no baselines, and without baselines it is impossible to tell if there were improvements in Fairtrade sites – in short, the differences seen in the report could just be pre-existing differences, not a failure of Fairtrade. See the CGDev blog post on this here. So the press’ reading of this report is pretty problematic.
At the same time, this whole discussion completely misses the point. Fairtrade doesn’t work as a development tool because, in the end, Fairtrade does absolutely nothing to address the structural inequalities faced by those in the primary sector of the global economy relative to basically everyone else. Paying an African farmer a higher wage/better price means they are now a slightly wealthier farmer. They are still exposed to environmental shocks like drought and flooding, still tied to shocks and trends in global commodities markets over which they have almost no leverage at all, often still producing commodities (like coffee and cocoa) for which demand is very, very elastic, and in the end still living in states without safety nets to help them weather these economic and environmental shocks. Yes, I think African farmers are stunningly resilient, intelligent people (I write about this a lot). But the convergence of the challenges I just listed means that most farmers in the Global South are addressing one or more of them almost all the time, and the cost of managing these challenges is high (both in terms of hedging and coping). Incremental changes in agricultural incomes will be absorbed, by and large, by these costs – this is not a transformative development pathway.
So why is everyone freaking out at the $1 million finding – even if that finding misrepresents the actual findings of the report? Because it brutally rips the Fairtrade band-aid off the global economy, and strips away any feeling of “doing our part” from those who purchase Fairtrade products. But of course, those of us who purchase Fairtrade products were never doing our part. If anything, we were allowing the shiny idea of better incomes and prices to obscure the structural problems that would always limit the impact of Fairtrade in the lives of the poor.
Thu 22 May 2014
Posted by Ed under development, Development Institutions, Livelihoods, policy, research
Is Bill Gates missing his own point?
Bill Gates has a Project Syndicate piece up that, in the context of discussing Nina Munk’s book The Idealist, argues in favor of Jeffrey Sachs’ importance and relevance to contemporary development.
I’m going to leave aside the overarching argument of the piece. Instead, I want to focus on a small passage that, while perhaps a secondary point to Gates, strikes me as a very important lesson that he fails to apply to his own foundation (though to be fair, this is true of most people working in development).
Gates begins by noting that Sachs came to the Gates Foundation to ask for MVP funding, and lays out the fundamental MVP pitch for a “big push” of integrated interventions that crossed health, agriculture, and education sectors that Sachs was selling:
[Sachs’] hypothesis was that these interventions would be so synergistic that they would start a virtuous upward cycle and lift the villages out of poverty for good. He felt that if you focus just on fertilizer without also addressing health, or if you just go in and provide vaccinations without doing anything to help improve education, then progress won’t be sustained without an endless supply of aid.
This is nothing more than integrated development, and it makes sense. But, as was predicted, and as some are now demonstrating, it did not work. In reviewing what happened in the Millennium Villages that led them to come up short of expectations, Gates notes
MVP leaders encouraged farmers to switch to a series of new crops that were in demand in richer countries – and experts on the ground did a good job of helping farmers to produce good crop yields by using fertilizer, irrigation, and better seeds. But the MVP didn’t simultaneously invest in developing markets for these crops. According to Munk, “Pineapple couldn’t be exported after all, because the cost of transport was far too high. There was no market for ginger, apparently. And despite some early interest from buyers in Japan, no one wanted banana flour.” The farmers grew the crops, but the buyers didn’t come.
But then Gates seems to glide over a really key question: how could a smart, well-intentioned man miss the mark like this? Worse, how could a leading economist’s project blow market engagement so badly? Gates’ throwaway argument is “Of course, Sachs knows that it’s critical to understand market dynamics; he’s one of the world’s smartest economists. But in the villages Munk profiled, Sachs seems to be wearing blinders.” This is not an explanation for what happened, as telling us Sachs suffered from blinders is simply restating the obvious. The real issue is the source of these blinders.
The answer is, to me, blindingly obvious. The MVP, like most development interventions, never really understood what was going on in the villages targeted for intervention. Sure, they catalogued behaviors, activities, and outcomes…but there was never any serious investigation into the logic of observed behaviors. Instead, the MVP, like most development interventions, was rife with assumptions about the motivations of those living in Millennium Villages that produced these observed activities and outcomes, assumptions that had little to do with the actual logic of behavior. The result was a set of interventions that infantilized the Millennium villagers by implicitly assuming, for example, that the villagers had not considered the potential markets for new and different crops/products. Such interventions assume ignorance as the driver of observed behaviors, instead of the enormously complex decision-making that underlies everyday lives and livelihoods in even the smallest village.
To give you an idea of what I mean, take a look at the following illustrations of the complexity of livelihoods decision-making (these are from my forthcoming article on applying the Livelihoods as Intimate Government approach in Applied Geography – a preprint is here).
First, we have #1, which illustrates the causes behind observed decisions captured by most livelihoods frameworks. In short, this is what most contemporary development planning gets to, at best.
However, this is a very incomplete version of any individual’s decision-making reality. #2 illustrates the wider range of factors shaping observed decisions that become visible through multiscalar analysis that nests particular places in wider networks of economics, environment, and politics. Relatively few applications of livelihoods frameworks approach this level of complexity, and those that do tend to consider only the impacts of markets on particular livelihoods and places.
While this is better than the overly-simplistic framing of decisions in #1, it is still incomplete because motivations are not, themselves, discrete. #3 illustrates the complex web of factors, local and extralocal, and the ways in which these factors play off of one another at multiple scales, different times, and in different situations.
When we seek to understand why people do what they do (and do not do other things), this is the complexity with which we must engage.
This is important, because were Gates to realize that this was the relevant point of both Munk’s book and his own op-ed, he might better understand why his own foundation has
many projects…that have come up short. It’s hard to deliver effective solutions, even when you plan for every potential contingency and unintended consequence. There is a natural tendency in almost any kind of investment – business, philanthropic, or otherwise – to double down in the face of difficulty. I’ve done it, and I think most other people have too.
So, what do you do? Well, we have an answer: the Livelihoods as Intimate Government approach we use at HURDL (publications here and here, with guidance documents coming later in the summer) charts an analytic path through this level of complexity. Before the usual objections start:
1) We can train people to do it (we are doing so in Mali as I write this). You don’t need a Ph.D. in anthropology to use our approach.
2) It does not take too much time. We can implement at least as fast as any survey process and, depending on spatial focus and resources, can move on a timeframe of weeks to two months.
3) It is not too expensive – qualitative researchers are not expensive, and we do not require high-end equipment to do our work.
The proof is in the reactions we are getting from our colleagues. Here in Mali, I now have colleagues from IER and agricultural extension getting fired up about our approach as they watch the data coming in during our pilot phase. They are stunned by how much data we can collect in a short period of time, and how relevant the data is to the questions at hand because we understand what people are already doing, and why they are doing it. By using this approach, and starting from the assumption that we must understand what people are doing and why before we move to interventions, we are going to lay the foundation for more productive interventions that minimize the sorts of “surprise” outcomes that Gates references as an explanation for project failure.
There are no more excuses for program and project design processes that employ the same limited methods and work from the same problematic assumptions – there are ways to do it differently. But until people like Gates and Sachs reframe their understanding of how development should work, development will continue to be plagued by surprises that aren’t all that surprising.
Fri 16 May 2014
While development – thought of broadly as social/economic/political change that somehow improves peoples’ quality of life – generally entails changes in behavior, conversations about “behavior change” in development obscure important political and ethical issues, putting development programs and projects, and worse the people those programs and projects are meant to help, at risk.
We need to return to a long-standing conversation about who gets to decide which behaviors need changing. Most contemporary conversations about behavior change invoke simple public health examples that obscure the politics of behavior change (such as this recent New York Times Opinionator piece). That piece appears to address the community and household politics of change (via peer pressure), but completely ignores the fact that every intervention mentioned was introduced by someone outside these communities. This is easy to ignore because handwashing or the use of chlorine in drinking water clearly reduces morbidity, nobody benefits from such morbidity, and addressing the causes of that morbidity requires interventions that engage knowledge and technology that, while well-established, were created someplace else.
But if we open up this conversation to other sorts of examples, the picture gets much more complicated. Take, for example, agricultural behaviors. An awful lot of food security/agricultural development programming these days discusses behavior change, ranging from what crops are grown to how farmers engage with markets. Here, the benefits of this behavior change are less clear, and less evenly-distributed through the population. Who decides what should be grown, and on what basis? Is improved yield or increased incomes enough justification to “change behaviors”? Such arguments presume shockingly simple rationales for observed behaviors, such as yield maximization, and often implicitly assume that peasant farmers in the Global South lack information and understandings that would produce such yields, thus requiring “education” to make better decisions. As I’ve argued time and again, and demonstrated empirically several times, most livelihoods decisions are a complex mix of politics, local environment, economy, and social issues that these farmers weigh in the context of surprisingly detailed information (see this post or my book for a discussion of farm allocation in Ghanaian households that illustrates this point). In short, when we start to talk about changing peoples’ behaviors, we often have no idea what it is that we are changing.
The fact that we have little to no understanding of the basis on which observed decisions are made is a big, totally undiscussed problem for anyone interested in behavior change. In development, we design programs and projects based on presumptions about people’s motivations, but those presumptions are usually anchored in our own experiences and perceptions – which are quite often different from those of the people with whom we work in the Global South (see the discussion of WEIRD data in psychology, for example here). When we don’t know why people are doing the things they do, we cannot understand the opportunities and challenges that come with those activities/behaviors. This allows an unexamined bias against the intelligence and experience of the global poor to enter and lurk behind this conversation.
Such bias isn’t just politically/ethically problematic – it risks real programmatic disasters. For example, when we perceive “inefficiency” on many African farms, we are often misinterpreting hedging behaviors necessary to manage worst-case scenarios in a setting where there are no safety nets. Erasing such behaviors in the name of efficiency (which will increase yields or incomes) can produce better outcomes…until the situation against which the farmers were hedged arises. Then, without the hedge, all hell can break loose. Among the rural agricultural communities in which I have been working for more than 15 years, such hedges typically address climate and/or market variability, which produce extremes at frequent, if irregular, intervals. Stripping the hedges from these systems presumes that the good years will at least compensate for the bad…a dangerous assumption based far more on hope or optimism than evidence in most places where these projects are designed and implemented. James Scott’s book The Art of Not Being Governed provides examples of agrarian populations that fled the state in the face of “modernization” efforts not because they were foolish or backward, but because they saw such programs as introducing unacceptable risks into their lives (see also this post for a similar discussion in the context of food security).
This is why my lab uses an approach (on a number of projects ranging from climate services evaluation and design to disaster risk reduction) that starts from the other direction – we begin by identifying and explaining particular behaviors relevant to the challenge, issue, or intervention at hand, and then start thinking about what kinds of behavioral change are possible and acceptable to the people with whom we work. We believe that this is both more effective (as we actually identify the rationales for observed behaviors before intervening) and safer (as we are less likely to design/condone interventions that increase vulnerability) than development programming based on presumption.
This is not to say that we should simply valorize all existing behaviors in the Global South. There are inefficiencies out there that could be reduced. There are things like handwashing that are simple and important. Sometimes farmers can change their practices in small ways that do not entail big shifts in risk or vulnerability. Our approach to project design and assessment helps to identify just such situations. But on the whole, we need to think much more critically about what we are assuming when we insist on a particular behavior change, and then replace those assumptions with information. Until we do, behavior change discussions will run the risk of uncritically imposing external values and assumptions on otherwise coherent systems, producing greater risk and vulnerability than existed before. Nobody could call that development.
Wed 12 Feb 2014
I just finished reading Geoff Dabelko’s “The Periphery isn’t Peripheral” on Ensia. In this piece, Geoff diagnoses the problems that beset efforts to address linked environmental and development problems, and offers some thoughts on how to address them. I love his typology of tyrannies that beset efforts to build and implement good, integrative (i.e. cross-sectoral) programs. I agreed with his suggestions on how to make integrative work more acceptable/mainstream in development. And by the end, I was worried about how to make his suggestions reality within the donors and implementers that really need to take on this message.
Geoff’s four tyrannies (Tyranny of the Inbox; Tyranny of Immediate Results; Tyranny of the Single Sector; Tyranny of the Unidimensional Measurement of Success) that he sees crippling environment-and-development programming are dead on. Those of us working in climate change are especially sensitive to tyranny #2, the Tyranny of Immediate Results. How the hell are we supposed to demonstrate results on an adaptation program that is meant to address challenges that are not just happening now, but will intensify over a 30-year horizon? Does our inability to see the future mean that this programming is inherently useless or inefficient? No. But because it is impossible to measure future impact now, adaptation programs are easy to attack…
As a geographer, I love Geoff’s “Tyranny of the Single Sector” – geographers generally cannot help but start integrating things across sectors (that’s what our discipline does, really). In my experiences in the classroom and the donor world, integrative thinking eludes a lot more people than I ever thought possible. Our absurd system of performance measurement in public education is not helping – trust me. But even when you find an integrative thinker, they may not be doing much integrative work. Sometimes people simply can’t see outside their own training and expertise. Sometimes they are victims of tyranny #1 (Tyranny of the Inbox), where they are too busy dealing with immediate challenges within their sector to think across sectors – lord knows, that defined the last 6 months of my life at USAID.
And Geoff’s fourth tyranny speaks right to my post from the other day – the Tyranny of the Unidimensional Measurement of Success. Read Geoff, and then read my post, and you will see why he and I get along so well.
Now, Geoff does not stop with a diagnosis – he suggests that integrative thinking in development will require some changes to how we do our jobs, and provides some illustrations of integrative projects that have produced better results to bolster his argument. While I like all of his suggestions, what concerns me is that these suggestions are easier said than done. For example, Geoff is dead right when he says that:
“We must reward, rather than punish, cross-disciplinary or cross-sectoral approaches; define success in a way that encourages, rather than discourages, positive outcomes in multiple arenas; and foster monitoring and evaluation plans that embrace, rather than ignore, different timescales and multiple indicators.”
But how, exactly, are we to do this? What HR levers exist that we can use to make this happen? How much leeway do appointees and other executive-level donor staff have with regard to changing rewards and evaluations? And are the right people in charge to make such changes possible? A lot of people rise through donor organizations by being very good at sectoral work. Why would they reward people for doing things differently?
Similarly, I wonder how we can actually get more long-term thinking built into the practice and implementation of development. How do we really overcome the Tyranny of the Inbox and the Tyranny of Immediate Results? This is not merely a mindset problem; this is a problem of budget justifications to an often-hostile Congress that wants to know what you have done for it lately. Where are our congressional champions to make this sort of change possible?
Asking Geoff to fix all our problems in a single bit of writing is completely unfair – call that the Tyranny of What Do We Do Now? In the best tradition of academic/policy writing, his piece got me thinking (constructively) about what needs to happen if we are to do a better job of achieving something that looks like sustainable development going forward. For that reason alone it is well worth your time. Go read.
Mon 10 Feb 2014
I’m a big fan of accountability when it comes to aid and development. We should be asking whether our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?
More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, where the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.
When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about are tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.
Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset. Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:
1) The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the target population.
This is a pretty weak assumption. I filled up a book arguing that a lot of the things that we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address the needs of the global poor are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects which achieve their intended outcomes are actually doing relatively little for their target populations.
Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.
Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically-sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area was finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water. Thus, the project achieved its goal of making agriculture more “climate smart,” but failed to actually address the main problem in the area. Project indicators will likely capture the first half of the previous scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).
2) The intended impact was the only impact of the intervention.
If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.
Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they now spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.
Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges that we should address and 2) we know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.
What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems to catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Village Project, just the latest incarnation of Mitchell’s Rule of Experts or Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingan might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.