I’m back…and cranky about resilience, development, etc.

So, I’m finally back in academia, with some time to start writing again…and able to do so without worrying about who I might annoy.  Ah, the joys of tenure.  Actually, I shouldn’t make that sound so glib – the fact is, this is what tenure is for: it allows people like me to argue about important ideas and take politically challenging positions without having to worry about our incomes.

Quickly, then, I would like to make a point about a Nick Kristof column that appeared back in early July (and that I had to shut up about at the time).  In it, he talks about some USAID-funded food security programs that work “with local farmers to promote new crops and methods so that farmers don’t have to worry about starving in the first place.”  Nothing wrong with that – this just makes good sense, really, given the dramatic economic and environmental changes that so many folks must address in their everyday lives and livelihoods.  But then Kristof describes the program via an anecdote:

Jonas Kabudula is a local farmer whose corn crop completely failed, and he said that normally he and his family would now be starving. But, with the help of a U.S.A.I.D. program, he and other farmers also planted chilies, a nontraditional crop that doesn’t need much rain.

“Other crops wither, and the chilies survive,” Kabudula told me. What’s more, each bag of chilies is worth about five bags of corn, so he and other villagers have been able to sell the chilies and buy all the food they need.

“If it weren’t for the chilies,” said another farmer, Staford Phereni, “we would have no food.”

Er, this is not resilience.  Sure, it is a different crop, with different biophysical needs than maize…but the farmers still have to sell it to get the money to eat.  Chilies are, in the end, seasoning – in economic terms, demand for them is highly elastic, as people can simply choose not to season their food when money runs short.  So, when all hell breaks loose in a country, such as when a drought compromises the principal food crop, a large percentage of the people who would buy chilies (other farmers) cannot do so, depressing the price and lowering the relative value of chilies versus needed food items (the prices of which are likely rising as demand for alternatives to maize kicks in) – in short, your cash crop buys you less food than it did under good conditions.  You end up just as screwed as everyone else, albeit a few weeks later.  Further, this all presumes that markets are functioning at anything like regular levels, which is a bad bet when things really get stressed.  Basically, this program gets resilience wrong because it fails to capture all of the things that people are vulnerable to: it isn’t just the climate, it is also the market.  Yes, you’ve addressed at least some of the climate vulnerability…by pushing people onto a precarious market likely to be upset by the very climate conditions you are trying to address in the first place.  Oops.  More income is not necessarily more resilience if that income can be destabilized by the very thing it was meant to help address.

Given all of this, it seems to me that Kristof has missed the really important issue here: if this actually worked for this farmer, we need to know why it worked given all that could go wrong, and build on that.  However, he doesn’t dive into that, at least in part because I think he sees the project’s success in this case as an expected outcome, the sort of thing that “should happen” because more income means more resilience and therefore less vulnerability to climate change and food insecurity.  And that has everything to do with how we in development talk and think about vulnerability and resilience.  While Kristof does not use these terms, they are implicit in his thinking about how this program helped this farmer – the farmer had other options that allowed him to address a climate-related challenge by increasing his income (or at least holding the line in bad situations), making him more resilient/less vulnerable than his neighbors in the face of this challenge.  However, the example in this column is an argument for why we should be worried about the ways in which development has started slinging these terms around of late.  It is unclear to me how this program can address vulnerability or build resilience, because it does not address some significant factors shaping local vulnerability, nor has it identified why those who display resilience in the face of a climate/food security challenge actually end up with better outcomes.

Granted, I am griping about one part of the program (the others actually sound quite interesting and reasonable), but this part just trips my vulnerability/resilience switch…

It comes down to this bit of bad news: until we come to grips with how we understand and define vulnerability and resilience, and do a better job of grounding these concepts in place, we will continue to design programs and projects that just trade one risk/vulnerability for another.  That’s no way to get our job done.

Development Studies: A Disciplinary Home for Interdisciplinary Work?

I continue my musings on the recent emergence of development studies in the American academy . . .

The rise of development studies presents two interesting opportunities for development in general – a chance to start treating development as a discipline, and the chance to bring interdisciplinary (or, in the parlance of the donor and implementation world, integrated) thinking to the fore in development.

What do I mean by treating development as a discipline?  Various social scientists have demonstrated that development is not just a set of activities, it is a body of thought.  This is what I meant in Delivering Development when I said that

“contemporary development is not the product of a single organizational mission, a single theory, or a particular set of practices. It is the congealed outcome of more than six decades of often-uncoordinated administrative decisions, monitoring reports, economic theories, academic studies, and local responses. These ideas, such as the value of free trade and global markets for the global poor, are repeated so often and in so many venues that they seem to lack a single author or source. For the contemporary development practitioner, they seem to come from nowhere and everywhere at the same time. The same assumption is repeated over and over in development documents until, for example, it is impossible to talk about development in the absence of markets. The results are practices and ideas that seem both universal and eternal.” (p. 7-8)

If people come into development from narrow, technical backgrounds, they are unlikely to know the history of ideas into which they have waded.  They may not know the history of interventions that have been tried in the past.  Understanding the ideas to which one is responding, or on which one is building, with a particular program or project, and knowing the previous history of similar efforts, seems to me to be critical to achieving any development goals.  For such a knowledge base to become common in the field, development cannot just be an object of study for other academic disciplines – it has to be recognized as its own discipline to which new students must be introduced.

Academia has, for essentially my entire academic life since I entered undergrad, argued for greater interdisciplinary collaboration.  As best I can tell, very little of academia has actually shifted academic incentives such that interdisciplinary work might emerge and flourish.  The emergence of development studies presents an opportunity to create such incentives within an academic discipline*.  Any program of development studies that considers not only theory and thought, but also the history of development interventions, will necessarily engage the fact that development is an inherently interdisciplinary undertaking.  While economists have long held sway over the (informal) discipline of development, they are hardly the final answer for most questions that anyone engaged in development might face on a day-to-day basis (market failure around the environment, anyone?).  At the same time, the climate scientist is probably not going to have a lot of answers for how we might foster the emergence of local markets better able to address the predicted/modeled challenges of future climate change.  Technical expertise is critical to achieving development goals, but narrow disciplinary expertise is likely to reproduce stovepipes of information, funding and programming that make it difficult to address the suite of issues arising around most development challenges.  In the rise of development studies, we have the chance to break down these stovepipes under the rubric of a single discipline, thus creating a home for interdisciplinary work within a discipline (yes, that is contradictory), as it were.  At the same time, graduates of such programs would already think “integratively,” perhaps one of the biggest challenges I have seen for implementation.

Much of this opportunity could be realized even in the course of a Masters degree – which is critical to most programs, as they are Masters-terminal.  However, if development studies is to realize this potential, it will require Ph.D.-level engagement by students and faculty to build the literature, journals, and approaches requisite for an academic discipline.  This, however, must take shape in the context of an extended and varied engagement with donors and implementers that can only really be had if we move more people between academia and the donor/implementer world.  Creating the incentives for such movement is an entirely different question . . .


*Note: as a geographer, I have to point out that my discipline displays all of the characteristics of an interdisciplinary endeavor – most departments contain everything from qualitative social scientists to soil or atmospheric scientists to experts in the GISciences, and we are rewarded for collaborating with one another.  Of course, we are collaborating within geography, and publishing in journals accepted by geography, which makes things much easier.  But working across the various academic divides (quant/qual, human/environment, etc.) has already been modeled . . .

Is blogging an “extreme sport” for academics?

Earlier this week, Linda Raftree pointed me to this article, which references another article that calls blogging without tenure “an extreme sport” because of the risks involved.  It is a little hard for me to comment on this specifically, as I did not start blogging until after I had tenure – not because I was afraid of blogging, but because it never occurred to me to blog before then (basically, my agent and my publisher pushed me to blog to promote my book).  I did plenty of “public sphere” writing, such as op-eds in The State (Columbia, SC).  Hell, right before I went up for tenure I published one titled “Governor’s energy report has no clothes.”  I walked into my chair’s office the day it was published, and he shook his head and said “not exactly keeping your head down, are you?”  The op-ed had no impact on my tenure at all.  In most cases, neither will blogging.

I think most academics are far too timid when it comes to public expression.  They fear reprisals against their careers, but rarely seem to be able to articulate where such reprisals might come from or how they might actually create harm.  I am sure there are indeed cases of highly dysfunctional situations where individuals’ careers might be harmed by the public expression of their views on a given subject within their expertise, but such situations are volatile for many reasons, and blogging is unlikely to ever be the cause of career problems.  In fact, I am convinced that there is far more upside to blogging than there might ever be a downside.  On the upside:

1) As I recently noted, my blog and twitter accounts appear to have done a great deal to spread my work around, and to get that work used (at least by other writers).  Find me a department that will complain about your rapidly rising citation counts.

2) You will develop a whole new community of colleagues, and they will bring new ideas and perspectives that you simply cannot get talking to people in your department, or even in your discipline.  These ideas and perspectives can be challenging, but if you can harness them, they can carry your thinking to new and innovative places.

3) When you develop a public persona, you can build a degree of freedom from problematic situations in your home institution.  You can cultivate a community in which there might be several people interested in giving you a job.  Further, universities love publicly-visible faculty, because they are easy to point to when someone asks what the faculty contribute to the larger society (and yes, this does get asked often).

4) You practice speaking in multiple registers: we all write academic articles, and if you are on the tenure track I hope you’ve figured that process out.  But do you know how to engage the person on the street?  Taxpayers fund a lot of research, and explaining to them why they should be happy they are funding yours is a worthwhile skill.  You can’t do that through a journal article, or in the language of your discipline.

On the downside:

1) Bill Easterly said it best: the blog is a hungry mouth.  It can be hard to keep up with posting, especially when you have a bunch of other stuff going on during the semester.

2) You will be exposed to griefers – the internet is a harsh place.  People will say nasty things about you and your ideas.  If you are fragile, do not try this at home.

Anyway, these are just my quick thoughts on blogging and academia, and I am sure my thoughts are incomplete and others will have something to add.  Indeed, you should check out Marc Bellemare’s recent post on things he has learned as an untenured blogger.  Speaking for myself, though, I have not regretted blogging at all, and aside from sometimes being exhausted after finishing a post, I have yet to see a serious drawback from doing so – but the benefits have been remarkable.

Does the journal really matter anymore?

Following on my previous post, another thought that springs from personal experience and its convergence with someone’s research.  If you look at my Google Scholar profile, you will note that in 2011 my citation counts exploded (by social science standards, mind you – in the qualitative social sciences an article with 50 citations or more is pretty huge).  Now, part of this is probably a product of my academic maturation – a number of articles now getting attention have been around for 3-4 years, which is about how long it takes for things to work their way into the literature.  However, I’ve also seen a surge in a few older pieces that had previously plateaued in terms of citations.  This can’t be attributed to a new surge in interest in a particular topic, as these articles cross a range of development issues.  However, they all seem to be surging since I got on Twitter and joined the blogosphere.  Basically, it seems a new circle (circles?) of interested folks now has access to my work and ideas, and the result is that my work is finding its way into a new set of venues/disciplines that it might otherwise not have reached.  It is hard to be sure about this, as my 18 months on the blog and 1 year on Twitter are just at the edges of how long it takes to get an article written, submitted, accepted and published, but clearly something is happening here . . .

This seems to be borne out by some work done by Gunther Eysenbach examining the relationship between tweets (references to a paper on Twitter) and the level of citation that paper eventually enjoyed.  Eysenbach found that “highly tweeted” papers tended to become highly cited papers, though the study was quite preliminary (h/t to Martin Fenner at Gobbledygook; you can find links to Eysenbach’s paper and Martin’s thoughts on it here).  This makes sense to me – but it requires a bit more study.  I like what Fenner and his colleagues are trying to do now, capturing the type of reference made in the tweet (supporting/agreeing, discussing, disagreeing, etc.).  Frankly, references in general should be subject to such scrutiny.  As one of my colleagues once said, if citation counts are all that matter, we should write the worst paper ever on a subject, jam it into some journal that did not know better, publicize it, and wait for the angry negative citations to pile in . . . only we just have to count the citations, not admit that we are being cited because people hate us!

The altmetrics movement is starting to take off in academia (see, for example, this very cool discussion).  I have not yet seen any discussion, though, of what social media might do to journal prestige.  While there will always be flagship journals to which disciplines full of tenure-track faculty will bow, once tenure is achieved this sort of homage becomes less important.  Given what I am seeing with regard to my citations right now, my desire to have my work have impact beyond my discipline and the academy, and my concerns about the policing effect of peer review (which emerges most acutely in flagship journals – see my posts here and here), why should I struggle to get my work into a flagship journal when I can get a quick turnaround and publication in a smaller journal, still have the stamp of peer review on the piece, and then promote it via social media to a crowd more than willing to have a look?  If I (or anyone else) can drive citations through mild self-promotion via social media, does the journal a piece is published in really matter that much?  I wonder what sort of effect this might have on the structure of publishing – will flagship journals have to become more nimble and responsive, or will they soldier on without changes?  Will smaller journals sense this opportunity and move into the gap?  Will my colleagues embrace the rising influence of social media on academic practice?

Does any of this matter?  Not really.  If the emerging studies on social media and citation are correct, and my trends are sustainable, then one day I will be one of the “important” folks with a lot of citations . . . and I will be training my students to engage in conventional and non-conventional ways.  I will not be the only one.  Those of us who engage with social media, and train our students to do so, will eventually win this race.  Change is coming to academia, but the nature and importance of that change remain up in the air . . .

Only the senior faculty can save us…

Been a while . . . been busy.  And yes, I stole that post title from Ralph Nader . . .

As those who follow this blog know, one of my big concerns is with the walls that academia is building around itself through practices like the current incarnation of peer review in specialist journals. It’s not that I have a problem with peer review at all – I think it is an important tool through which we improve and vet academic work. Anything that survives peer review is by and large more reliable than an unvetted website (like this one, for example).

But the practice of peer review in contemporary academia has become really problematic.  Most respected journals are more expensive than ever, making them nearly the sole province of academics whose libraries are willing to purchase such journals.  The pressure to publish increases all the time, both in rising demands on individual researchers (my requirements for tenure were much tougher than most requirements from a generation before) and in terms of an ever-expanding academic community.  The proliferation of published work that has emerged from these two trends has not really improved the quality of information or the pace of advances – there is still a lot of good work out there, but it is harder and harder to find in an ever-growing pile of average and even not-so-good work.  And I have found that peer review often functions as a means of policing new ideas, slowing the flow of innovative ideas into academia not because those ideas are unsupported, but because they run contrary to previously accepted ideas upon which many reviewers have built their own work.  This byzantine politics of peer review is not well understood by those outside the academic tent, and does little to improve our public image.

So I am wondering where the tipping point is that might bring about something new.  Social media is nice, but it is not peer-reviewed.  I tend to think about it as advertising that points me to useful content, but not as content itself (I have a post on this coming next).  I still want peer review, or something like it.  So, a modest proposal for my senior colleagues in Geography – yes, those of you who are full professors at the top of the profession, who have nothing to lose from a change in the status quo at this point: get together, identify a couple of open-access, very low-cost journals, and more or less pronounce them valid (probably in part by blessing them with a few of your own papers to start).  Don’t pick the ones that want to charge $1500 in publishing fees – those are absurd.  But pick something different . . .

This, I think, is all it would take to start a real movement in my discipline – admittedly, a small discipline, so maybe easier to move. Just making our publications open to all is a tiny first step, but an important one – once a wider community has access to our ideas, they can respond and prompt us for new ones. Collaborations can emerge that should have emerged long ago. Colleagues (and research subjects) in the Global South will be able to read what is written about their environments, economies and homes, improving our responsiveness to those with whom, and hopefully for whom, we work. First steps can be catalytic . . .

The $1 Billion Question

So, it seems I have been challenged/called out/what-have-you by the folks at Imagine There Is No . . . over what I would do (as opposed to critique) about development.  At least I think that is what is going on, given that I received this tweet from them:

@edwardrcarr what would You do with 1 Billion $ for #development bit.ly/rQrUOd #The.1.Bill.$.Question

In general, I think this is a fair question.  Critique is nice, but at the end of the day I strive to build something from my critiques.  As I tell my grad students, I can train a monkey to take something apart – there isn’t much talent to that.  On the other hand, rebuilding something from whatever you just dismantled actually requires talent.  I admit to being a bit concerned about calling what I build “better”, mostly because such judgments gloss over the fact that any development intervention produces winners and losers, and therefore even a “better” intervention will probably not be better for someone.  I prefer to think about doing things differently, with an eye toward resolving some of the issues that I critique.

So, I will endeavor to answer – but first I must point out that asking someone what s/he would do for development with $1 billion is a very naive question.  I appreciate its spirit, but there isn’t much point to laying down a challenge that has little alignment with how the world works.  I think this is worth pointing out in light of the post on Imagine There Is No . . ., as they seem to be tweaking Bill Easterly for not having a good answer to their question.  However, for anyone who has ever worked for a development agency, the question “on what would you spend a billion dollars” comes off as a gotcha question because it is sort of nonsensical.  While the question might be phrased to make us think about an ideal world, those of us engaged in the doing of development who take its critique and rethinking seriously immediately start thinking about the sorts of things that would have to happen to make spending $1 billion possible and practical.  Those problems are legion . . . and pretty much any answer you give to the question is open to a lot of critique, either from a practical standpoint (a great idea that is totally impractical) or from the critique side (an idea that is just replicating existing problems).  When caught in a no-win situation, the best option is not to answer at all.  Sure, we should imagine a perfect world (after all, according to A World Of Difference, I am “something of a radical thinker”), but we do not work in that world – and people live in the Global South right now, so anything we do necessarily must engage with the imperfections of the now even as we try to transcend them.

Given all of this, I offer the following important caveats to my answer:

1) I am presuming that I will receive this money as an individual and not as part of any existing organization, as organizations have structures, mandates and histories that greatly shape what they can do.

2) I am presuming that I have my own organization, and that it already has sufficient staff to program $1 billion – so a lot of contracting officers and lawyers are in place.  Spending money is a lot harder than you’d think.

3) I am presuming that I answer only to myself and the folks in the Global South.  Monitoring and evaluation are some of the biggest constraints on how we do development today.  As I said in my talk at SAIS a little while ago, it is all well and good to argue that development merely catalyzes change in complex systems, which makes its outcomes inherently unpredictable.  It is entirely another to program against that understanding – if the possible outcomes of a given intervention are hard to predict, how do you know which indicators to choose?  How can you build an evaluation system that allows you to capture unintended positive and negative outcomes as the project matures without looking like you are fudging the numbers?  This sounds like constrained thinking, but it is reality for anyone working in a big donor agency, and for all of the folks who implement the work of those agencies.

4) I am presuming there are enough qualified staff out there willing to quit what they are doing and come work for this project . . . and I am going to need a hell of a lot of staff.

5) I am presuming that I am expected to accomplish something in the relatively short term – i.e. 3-5 years – as well as to trigger transformative changes in the Global South over the long haul.  If you don’t produce some results relatively soon, people will bail out on you.

All of these, except for 5), are giant caveats that basically divorce the question and its answer from reality.  I just need to point that out.  Because of these caveats, my answer here cannot be interpreted as a critique of my current employer, or indeed any other development organization – an answer that would also serve as a critique of those institutions would have to engage with their realities, blowing out a lot of my caveats above . . . sorry, but that’s reality, and it is really important to acknowledge the limits of any answer to such a loaded question.

So, here goes.  If I had $1 billion, I would spend it 1) figuring out what people really do to manage the challenges they face day-to-day, 2) identifying which of these activities are most effective at addressing those challenges and why, 3) evaluating whether any of these activities can be brought to scale or introduced to new places, and 4) bringing these ideas to scale.

Basically, I would spend $1 billion on the argument “the new big idea is no more big ideas.”

Why would I do this, and do it this way?  Well, I believe that in a general way those of us working in development have very poor information about what is actually happening in the Global South, in the places where the challenges to human well-being are most acute.  We have a lot of assumptions about what is happening and why, but these are very often wrong.  I wrote a whole book making this point – rather convincingly, if some of the reviews are to be believed.  Because we don’t know what is happening, and our assumptions are wide of the mark, a lot of the interventions we design and implement are irrelevant (at best) or inappropriate (at worst) to the intended beneficiaries.  Basically, the claim (a la Sachs and the Millennium Villages Project) that there are proven development interventions is crap.  If we had known, proven interventions, WE WOULD BE USING THEM.  To assume otherwise is either to slander the bulk of people working on development as insufficiently motivated (if we weren’t so damn lazy, and we really cared about poor people, we could fix all of the problems in the world with these proven interventions) or to argue that there simply needs to be more money spent on these interventions to fix everything (except in many cases there is little evidence that funding is the principal cause of project failure).  Of course, this is exactly what Sachs argues when asking for more support for the MVP, or when he is attacking anyone who dares critique the project.

The only way to really know what is happening is to get out there and talk to people.  When you do, what you find is that the folks we classify as the “global poor” are hardly helpless.  They are remarkably capable people who make livings under very difficult circumstances with very few resources and limited fallback options.  They know their environments, their economy, and their society far better than anyone from the outside ever will.  They are, in short, remarkable resources that should be treated as treasured repositories of human knowledge, not as a bunch of children who can’t work things out for themselves.  $1 billion would get us a lot of people in a lot of places doing a lot of learning . . . and this sort of thing can be programmed over 6 months to a year: run fieldwork, do some data analysis, and start producing tailored understandings of what works and why in different places . . . which then makes it relatively easy to start identifying opportunities for scale-up.  Actually, the scale-up could be done really easily, and could be very responsive to local needs, if we would just set up a means of letting communities speak to one another in a free and open manner – a network that lets people in the Global South ask one another questions and offer their answers and solutions.  Members of this project from the Global North, from the universities and from development organizations, could work with communities to convey the lessons the project has gleaned from various activities in various places, helping transfer ideas and technology in a manner that facilitates their productive introduction in new contexts.  So I suppose I would have to carve part of the $1 billion off for that network, but it would come in under the scale-up component of my project.  Eventually, I suspect this sort of network would also become a means of learning about what is happening in the Global South . . .

With any luck at all, by year 3 we would see the cross-fertilization of all kinds of locally-appropriate ideas and technology happening around the world and the establishment of a nascent network that could build on this momentum to yield even more information about what people are already doing, and what challenges they really face.  We would have started a process that has immediate impacts, but can work in tandem with the generational timescales of social change that are necessary to bring about major changes in any place.  We would have started a process that likely could not be stopped.  How it would play out is anyone’s guess . . . but it would sure look different than whatever we are doing now.

Ed Fail

Yep, no sooner do I post on failure and how we account for it and learn from it than I come upon a big fail of my own.  One that I can learn from.  Irony, anyone?

As many of you know, I have been working in Ghana since 1997.  I’ve spent some 20 months there, though it has been a while since I was last on the ground (I need to change that) – basically, the last meaningful research trip I took was in the summer of 2006.  That work, along with the fieldwork that came before it, was so rich that I am still working through what it all means – and it has led me down the path of a book about why development doesn’t work as we expect, and now a (much more academic) complete rethinking of the livelihoods framework that many in development use to assess how people make a living.

One of my big findings (at least according to some of my more senior colleagues) is that inequality and (depending on how you look at it) injustice are not accidental products of “bad information” or “false consciousness” in livelihoods strategies, but integral parts of how people make a living (article to this effect here, with related work here and here, as well as a long discussion in Delivering Development).  One constraint specific to livelihoods in the villages in which I have been working is the need to balance the material needs of the household with the social requirement that men make more money than their wives.  I have rich empirical data demonstrating this to be true, and illustrating how it plays out in agricultural practice (which makes up about 65% of most household incomes).

In other words, I know damn well that men get very itchy about anything that allows women to become more productive, as this calls one of the two goals of existing livelihoods strategies into question.  Granted, I figured this out for the first time around 2007, and have only very recently (i.e. articles in review) been able to get at this systematically, but still, I knew this.

And I completely overlooked it when trying to implement the one village improvement project with which I have been involved.  Yep, I totally failed to apply my own lessons to myself.

What happened?  Well, to put it simply, I had some money available after the 2006 fieldwork for a village improvement project, which I wanted the residents of Dominase and Ponkrum to identify and, to the extent possible, design for themselves.  We had several community meetings that meandered (as they do) and generally seemed to reflect the dominant voices of men.  However, at the end of one of these meetings, one of my extraordinarily talented Ghanaian colleagues from the University of Cape Coast had the experience and the awareness to quietly wander off to a group of women and chat with them.  I noticed this but did not say anything.  A few minutes later, he strolled by, and as he did he said to me “we need to build a nursery.”  Kofi had managed to elicit the women’s childcare needs, which were much more practical and actionable than any other plans we had heard.  At the next community meeting we raised this, and nobody objected – we just got into wrangling over details.  I left at the end of the field season, confident we could get this nursery built and staffed.

Five years later, nothing has happened.  They formed the earth blocks, but nobody cleared the agreed-upon area for the nursery.  It was never a question of money, and my colleagues at the University of Cape Coast checked in regularly.  Each time, they left with promises that something would get going, and nothing ever did.  I don’t fault the UCC team – the community needed to mobilize some labor so they would have buy-in for the project, and would take responsibility for the long-term maintenance of the structure. This is on the community – they just never built it.

And it wasn’t until yesterday, when talking about this with a colleague, that I suddenly realized why – childcare would lessen one demand on women that limits their agricultural productivity and incomes.  Thus, with a nursery in place, women’s incomes would surely rise . . . and men have no interest in that, as this is not the sort of intervention that would drive a parallel increase in their own incomes.  I have very robust data demonstrating that men move to control any increase in their wives’ incomes that might threaten the social order of the household, even if that decreases overall household income and access to food.

So why, oh why, did I ever think that men would allow this nursery to be built?  Of course they wouldn’t.

I can excuse myself for missing this between 2006 and 2008, as I was still working through what was going on in these livelihoods.  But for the last three years I have known about this fundamental component of livelihoods, and how robust this aspect of livelihoods decision-making really is, even under conditions of change such as road construction.  I have been looking at how others misinterpret livelihoods and design/implement bad interventions for years, all the while doing that very thing myself.

Healer, heal thyself.

At what scale can we fail?

I am a big fan of the idea of admitting failure and trying to learn from it.  I like ambitious projects with potentially huge payoffs, but a lot of risk of failure – they’re just much more interesting than going at things incrementally.  Besides, if you are going to fail, why not fail spectacularly?  As I tell my grad students, if you are going to ride it all the way to the ground, you might as well dig a big hole when you get there.  At least people will notice the hole, and try to figure out what the hell you were up to . . . of course, I am an academic (with tenure), so I have a pretty big cushion to land on these days.

All that said, I wonder about the utility of these admitting-failure efforts that I see coming from groups like Engineers Without Borders.  I had the good fortune to catch up with Tom Murphy (or, as the twitterati know him, @viewfromthecave) the other day while he was here in DC, and we started talking about learning from failure.  In the course of our conversation, we came around to two key problems.  First, really admitting failure requires reframing the public image of development, which currently casts it as an inherently do-no-harm effort in which just doing something is better than doing nothing.  Second, given this first problem, when we really start talking about what failure means, even in the most constructive of settings, we will call the entire development enterprise into question.  How do we avoid throwing the baby out with the bathwater?

We have long allowed ourselves and our donor constituencies to believe that development work should never have bad outcomes – there is a pervasive belief (under challenge right now, at least by some) that, at worst, a failed project simply changes nothing, and that this is all development failure can mean.  Of course, this is simply untrue – development efforts can make things much, much worse for people if they are poorly framed, designed, and implemented – a point I try to make in Delivering Development.  This has a lot to do with the very imagery of a helpless and oppressed global poor that the aid world relies upon to raise funds.  When people see someone in a situation that difficult, they assume things could not get worse.  There is no discussion of what is working in the lives of the poor, and therefore the public has little sense that there are fragile things in people’s lives and livelihoods that should be protected as we bring new programs and projects to ground.  As a result, development takes on the image of a low-risk enterprise in which social protection and “do no harm” safeguards are superfluous, as the worst we could do is leave people as they were.

Up against that worldview, admitting failure seems just fine – “hey, we didn’t really move the needle with that project, but we’ll figure out what we did wrong and try again” sounds much better than “we are incredibly sorry for utterly devastating the physical basis of your livelihoods and forcing many of you to abandon your farms because we ignored your existing land management practices.”  Unfortunately, admitting failure means a lot of the latter, and I am not at all convinced that anyone has the stomach to really wade into that.

This issue has to be combined with a concern for the scale of failure.  It is all well and good to admit failure, even ugly failures, at the project level – stuff happens.  A failed project can usually be traced to concrete causes that can then be addressed and remedied.  But how can a bilateral aid agency, or even a multilateral agency, do the same for its programs?  It is one thing for such huge organizations to talk about the failure of individual projects, and learn from them, but how can we talk about learning from entire programs that don’t live up to expectations without attracting serious challenges to the aid budget that end up wrecking even successful programs, or preventing the scale-up of things that we know work? Put another way, how can we create an environment where learning from our activities is truly possible, and balance that environment with the political reality of aid agencies and NGOs that answer to (different) constituencies that expect only good things to happen?

This framing of global poverty, and the persistent need to justify aid budgets, put everyone involved with development on a terrible tightrope – at least those of us interested in evidence-based programming and policy.  Just saying that admitting failure is good does not begin to get us to a world in which we can see it as more than a slogan.  We will have to unwind decades of public relations and fundraising practice, and back out of some very long-standing and pervasive views of global poverty, before we have any hope of bringing real learning to the fore of development practice.

Or, we could just give everyone tenure . . .

On agricultural productivity and food security

I was on a panel at the Organic Trade Association’s research series at the Natural Products Expo East in Baltimore last Friday, discussing the issue of organic farming and the need to feed the world.  As I heard over and over from proponents of organic agriculture, the argument “you can’t feed the world on organic” is something thrown at them all the time.  As I argued, though, this is a production-based argument: that is, organic farming often has somewhat lower levels of productivity than industrial farming (though there are several cases where this does not seem to hold, and a number of confounding factors that make it entirely possible that the productivity difference is actually quite small).  Well, that would be a relevant argument if we were already using our food resources carefully.  Except we aren’t.  Consider:

  • We still produce more than enough food globally to feed everyone a very healthy number of calories, and probably enough that those calories could be accompanied by adequate nutrients.  The current problems of food insecurity are primarily about distribution, not production.
  • Anywhere between 20% and 40% of all food grown globally spoils before it reaches market.  The figures are lower for grains (which tend to travel well) and much higher for vegetables.
  • In the US, we throw away roughly 30% of all food we purchase.
  • Consider those two numbers together: In the US, we probably lose a lot less of the crop between farm and market, but then throw 30% of what we buy away.  In other places, the food that reaches the table is nearly completely eaten, but up to 40% of the crop can be lost before it ever reaches market.  In other words, no matter where you go on Earth, there is a hell of a lot of waste in the food system.
  • Finally, consider that 33% of all farmland is used for animal feed, one of the less efficient ways of getting calories out of the environment.  It is unclear to me if this 33% includes biofuel crops, but in any case biofuels would only add a few percentage points to this at most.
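
To see how these losses compound, it helps to do the arithmetic: pre-market spoilage and post-purchase waste multiply rather than add.  A minimal sketch – the specific loss figures below are illustrative choices drawn from the ranges quoted above, not measured values:

```python
# How pre-market spoilage and post-purchase waste compound.
# Treating the two losses as independent multiplicative stages is a
# simplifying assumption; the percentages are illustrative only.

def food_reaching_plates(spoilage, consumer_waste):
    """Fraction of production actually eaten, assuming losses compound."""
    return round((1 - spoilage) * (1 - consumer_waste), 3)

# US-style system: low farm-to-market loss (assume ~10%), ~30% thrown away
print(food_reaching_plates(0.10, 0.30))  # → 0.63

# System with 40% spoilage before market, little waste at the table (~5%)
print(food_reaching_plates(0.40, 0.05))  # → 0.57
```

Either way, something like 35–45% of what is grown never gets eaten – which is the point: very different food systems converge on comparably large total losses.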

In short, we have distribution problems and an astonishing amount of waste in our food systems, but it seems that a lot of the food security debate in policy circles is driven by production arguments.  Enhancing production is not a low hanging fruit.  Enhancing production is often used as an excuse for ignoring local knowledge and capacity in favor of reworking entire agroecological systems (which usually ends badly).  Those of us working in development would be well-served to consider all the ways we might address hunger, including waste and distribution, rather than focus myopically on one cause for what might be a phantom problem.  Welcome to another central theme of Delivering Development: misunderstanding/misidentifying the development challenge, and then trying to solve the wrong thing.

One caveat: there are places in the world in absolute production crises – that is, they lack market access to facilitate the movement of needed food, and their agricultural systems are no longer resilient in the face of current challenges.  In these places, waste may be less of an issue, and distribution solutions may be years in the future (good infrastructure and markets require good governance, which is no easy fix), and therefore the application of new agricultural technologies might become the low hanging fruit solution for the time being, until the other challenges can be met. It’s about finding the right tool for the job (and knowing exactly what the job is, too).

On data and its caveats

Marc Bellemare at Duke has been using Delivering Development in his development seminar this semester.  On Friday, he was kind enough to blog a bit about one of the things he found interesting in the book: the finding that women were more productive than men on a per-hectare basis.  As Marc notes, this runs contrary to most assumptions in the agricultural/development economics literature, especially some rather famous work by Chris Udry:

Whereas one would expect men and women to be equally productive on their respective plots within the household, Udry finds that in Burkina Faso, men are more productive than women at the margin when controlling for a host of confounding factors.

This is an important finding, as it speaks to our understanding of inefficiency in household production . . . which, as you might imagine given Udry’s findings, is often assumed to be a problem of men farming too little and women farming a bit too much land.  So Marc was a bit taken aback to read that in coastal Ghana the situation is actually reversed – women are more productive than men per unit area of land, and therefore to achieve optimal distributions of agricultural resources (read: land) in these households we would actually have to shift land out of men’s production and into women’s production.

I knew that this finding ran contrary to Udry and some other folks, but I did not think it was that big a deal: Udry worked in the Sahel, which is quite a different environment and agroecology than coastal Ghana.  Further, he worked with folks of a totally different ethnicity engaged with different markets.  In short, I chalked his findings up to the convergence of any number of factors that had played out somewhat differently in my research context.  I certainly don’t see my findings as generalizable much beyond Akan-speaking peoples living in rural parts of Ghana . . .

All of that said, Marc points out that with regard to my findings:

Of course, this would need to be subjected to the proper empirical specification and to a battery of statistical tests . . .

Well, that is an interesting question.  So, a bit of transparency on my data (it is pretty transparent in my refereed pubs, but the book didn’t wade into all of that):

Weaknesses:

  • The data was gathered during the main rainy season, typically as the harvest was just starting to come in.  This required folks to make some degree of projection about the productivity of their fields at least a month into the future, and often several months into the future.
  • The income figures for each crop, and therefore for total agricultural productivity, were self-reported. I was not able to cross-check these reported figures by counting the actual amount of crop coming off each farm.
    • I also gathered information on expenses, and when I totaled up expenses and subtracted them from reported income, every household in the village was running in the red.  I know that is not true, having lived there for some 18 months of my life.
    • There is no doubt in my mind that production figures were underestimated, and expenses overestimated, in my data – this fits into patterns of income reporting among the Akan that are seen elsewhere in the literature.
    • Therefore, you cannot trust the reported figures as accurate absolute measures of farm productivity.

Strengths:

  • The data was replicated across three field seasons.  The first two field seasons, I conducted all data collection with my research assistant.  However, in the final year of data collection, I led a team of four interviewers from the University of Cape Coast, who worked with local guides to identify farms and farmers to interview – in the last year, we interviewed every willing farmer in the village (nearly 100% of the population).
    • It turns out that my snowball sample of households in the first two years of data collection actually covered the entire universe of households operating under non-exceptional household circumstances (i.e. they are not samples, they are reports on the activities of the population).
      • In other words, you don’t have to ask about my sampling – there was no sampling.  I just described the activities of the entire relevant population in all three years.
      • This removes a lot of concerns people have about the size of my samples – some household strategies only had 7 or 8 households working with them in a given year, which makes statistical work a little tricky :)  Well, turns out there is no real need for stats, as this is everyone!
      • The only exception to this: female-headed households.  I grossly underinterviewed them in years 1 and 2 (inadvertently), and the women I did interview do not appear to be representative of all female-headed households.  I therefore can only make very limited claims about trends in these households.
    • Even with completely new interviewers who had no preconceived notions about the data, the income findings came in roughly the same as when I gathered the data. That’s replicability, folks! Well, at least as far as qualitative social science gets in a dynamic situation.
    • Though the data was gathered at only one point in the season, at that point farmers were already seeing how the first wave of the harvest was doing and could make reasonable projections about the rest of the harvest.
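
One point worth making explicit about the weaknesses above: systematic misreporting does not necessarily sink a relative comparison.  If households under-report income by a roughly similar multiplicative factor – an assumption, not something my data establishes – that factor cancels when comparing per-hectare productivity across groups.  A toy sketch with invented numbers:

```python
# Toy illustration: a uniform multiplicative under-report leaves the
# women:men per-hectare productivity ratio unchanged.  All figures here
# are invented for illustration only.

def per_hectare(income, hectares):
    """Income per unit of farmed land."""
    return income / hectares

# Hypothetical "true" figures: women 500 on 1 ha, men 600 on 2 ha
true_ratio = per_hectare(500, 1.0) / per_hectare(600, 2.0)

# Everyone reports only 60% of true income (assumed uniform bias)
bias = 0.6
reported_ratio = per_hectare(500 * bias, 1.0) / per_hectare(600 * bias, 2.0)

# Absolute levels are wrong, but the comparison survives
print(true_ratio, reported_ratio)  # both ≈ 1.67
```

Of course, if men and women misreport at different rates, this cancellation fails – so the assumption of a shared bias is doing real work here.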

I’m probably forgetting other problems and answers . . . Marc will remind me, I’m sure!  In any case, though, Marc asks a really interesting question at the end of his post:

Assuming the finding holds, it would be interesting to compare the two countries given that Burkina Faso and Ghana share a border. Is the change in gender differences due to different institutions? Different crops?

The short answer, for now, has to be a really unsatisfying “I don’t know.”  Delivering Development lays out in relatively simple terms a really complex argument I have been building for some time about livelihoods: that they are motivated by and optimized with reference to a lot more than material outcomes.  The book builds a fairly simple explanation for how men balanced the need to remain in charge of their households with the need to feed and shelter those households . . . but I have elaborated on this in a piece in review at Development and Change.  I will send them an email to figure out where this is in review – they have been struggling mightily with reviewers (last I heard, they had gone through 13!?!) – and will put up a preprint as soon as I am able.  This is relevant here because I would need a lot more information about the Burkina setting to work through my new livelihoods framework before I could answer Marc’s question.

Stay tuned!