Entries tagged with “qualitative methods”.


Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my Twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result: a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and it manages about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…

First up in my week of update posts is a re-introduction to my reworked livelihoods approach. As some of you might remember, the formal academic publication laying out the theoretical basis for this approach came out in early 2013. The approach presented in the article is the conceptual foundation for much of the work we are doing in my lab. This pub is now up on my home page, via the link above or through a link on the publications page.

The premise behind this approach, and why I developed it in the first place, is simple. Most livelihoods approaches implicitly assume that the primary motivation for livelihoods decisions is the maximization of some sort of material return on that activity. Unfortunately, in almost all cases this is a massive oversimplification of livelihoods decision-making processes, and in many cases is fundamentally incorrect. Think about the number of livelihoods studies where there are many decisions or behaviors that seem illogical when held up to the logic of material maximization (which would be any good livelihoods study, really). We spend a lot of time trying to explain these decisions away (idiosyncrasy, incomplete information, etc.). But this makes no sense – if you are living on $1.25 a day, and you are illogical or otherwise making decisions against interest, you are likely dead. So there must be a logic behind these decisions, one that we must engage if we are to understand why people do what they do, and if we are to design and implement development interventions that are relevant to the needs of the global poor. My livelihoods approach provides a means of engaging with and explaining these behaviors built on explicit, testable framings of decision-making, locally-appropriate divisions of the population into relevant groupings (i.e. gender, age, class), and the consideration of factors from the local to the global scale.

The article is a straight-ahead academic piece – to be frank, the first half of the article is not that accessible to those without backgrounds in social theory and livelihoods studies. However, the second half of the article is a case study that lays out what the approach allows the user to see and explain, which should be of interest to most everyone who works with livelihoods approaches.

For those who would like a short primer on the approach and what it means in relatively plain English, I’ve put up a “top-line messages” document on the preprints page of my website.

Coming soon is an implementation piece that guides the user through the actual use of the approach. I field-tested the approach in Kaffrine, Senegal, with one of my graduate students from May to July 2013. I am about to put the approach to work in a project with the Red Cross in the Zambezi Basin in Zambia next month. In short, this is not just a theoretical pipe dream – it is a real approach that works. In fact, the reason we are working with the Red Cross is that Pablo Suarez of Boston University and the Red Cross Climate Centre read the academic piece and immediately grasped what it could do, and then reached out to me to bring me into one of their projects. The implementation piece is already fully drafted, but I am circulating it to a few people in the field to get feedback before I submit it for review or post it to the preprints page. I am hoping to have this up by the end of January.  Once that is out the door, I will look into building a toolkit for those who might be interested in using the approach.

I’m really excited by this approach, and the things that are emerging from it in different places (Mali, Zambia, and Senegal, at the moment). I would love feedback on the concept or its use – I’m not a defensive or possessive person when it comes to ideas, as I think debate and critique tend to make things stronger. The reason I am developing a new livelihoods approach is that the ones we have simply don’t explain the things we need to know, and the other tools of development research that dominate the field at the moment (i.e. RCTs) cannot address the complex, integrative questions that drive outcomes at the community level. So consider all of this a first draft, one that you can help bring to final polished form!

Ok, so that title was meant to goad my fellow anthropologists, but before everyone freaks out, let me explain what I mean. The best anthropology, to quote Marshall Sahlins, “consists of making the apparently wild thought of others logically compelling in their own cultural settings and intellectually revealing of the human condition.” This is, of course, not bound by time. Understanding the thought of others, wherever and whenever it occurs, helps to illuminate the human condition. In that sense, ethnographies are forever.

However, in the context of development and climate change, ethnography has potential value beyond this very broad goal. The understandings of human behavior produced through ethnographic research are critical to the achievement of the most noble and progressive goals of development*. As I have argued time and again, we understand far less about what those in the Global South are doing than we think, and I wrote a book highlighting how our assumptions about life in such places are a) mostly incorrect and b) potentially very dangerous to the long-term well-being of everyone on Earth.  To correct this problem, development research, design, and monitoring and evaluation all need much, much more engagement with qualitative research, including ethnographic work. Such work brings a richness to our understanding of other people, and lives in other places, that is invaluable to the design of progressive programs and projects that meet the actual (as opposed to assumed) needs of the global poor now and in the future.

As I see it, the need for ethnographic work in development presents two significant problems. The first, which I have discussed before, is the dearth of such work in the world. Everyone seems to think the world is crawling with anthropologists and human geographers who do this sort of work, but how many books and dissertations are completed each year? A thousand? Less?  Compare that to the two billion (or more) poor people living in low-income countries (and that leaves aside the billion or so very poor that Andy Sumner has identified as living in middle-income countries).  A thousand books for at least two billion people? No problem, it just means that each book or dissertation has to cover the detailed experiences, motivations, and emotions of two million people. I mean, sure, the typical ethnography addresses an N that ranges from a half dozen to communities of a few hundred, but surely we can just adjust the scale…

Er…

Crap.

OK, so there is a huge shortage of this work, and we need much, much more of it. Well, the good news is that people have been doing this sort of work for a long time. Granted, the underlying assumptions about other people have shifted over time (“scientific racism” was pretty much the norm back in the first half of the 20th Century), but surely the observations of human behavior and thought might serve to fill the gaps from which we currently suffer, right?  After all, if a thousand people a year knocked out a book or dissertation over the past hundred years, surely our coverage will improve.  Right?

Well, maybe not. Ethnographies describe a place and a time, and most of the Global South is changing very, very rapidly. Indeed, it has been changing for a while, but of late the pace of change seems to be accelerating (again, see Sumner’s work on the New Bottom Billion). Things change so quickly, and can change so pervasively, that I wonder how long it takes for many of the fundamental observations about life and thought that populate ethnographies to become historical relics that tell us a great deal about a bygone era, but do not reflect present realities.  For example, in my work in Ghana, I drew upon some of the very few ethnographies of the Akan, written during the colonial era. These were useful for the archaeological component of my work, as they helped me to contextualize artifacts I was recovering from the time of those ethnographies. But their descriptions of economic practice, local politics, social roles, and livelihoods really had very little to do with life in Ghana’s Central Region in the late 1990s.  In terms of their utility for interpreting contemporary life among the Akan, they had, for all intents and purposes, expired.

So, the questions I pose here:

1)    How do we know when an ethnography has expired?  Is it expired when any aspect of the ethnography is no longer true, or when a majority of its observations no longer hold?

2)    Whatever standard we might hold them to, how long does it take to reach that standard? Five years? Ten years? Thus far, my work from 2001 in Ghana seems to be holding, but things are wobbling a bit.  It is possible that a permanent shift in livelihoods took place in 2006 (I need to examine this), which would invalidate the utility of my earlier work for project design in this area.

These are questions worth debating. If we are to bring more qualitative, ethnographic work to the table in development, we have to find ways to improve our coverage of the world and our ability to assess the resources from which we might draw.

*I know some people think that “noble” and “progressive” are terms that cannot be applied to development. I’m not going to take up that debate here.

I have a confession. For a long time now I have found myself befuddled by those who claim to have identified the causes behind observed outcomes in social research via the quantitative analysis of (relatively) large datasets (see posts here, here, and here).  For a while, I thought I was seeing the all-too-common confusion of correlation and causation…except that a lot of smart, talented people seemed to be confusing correlation with causation.  This struck me as unlikely.

Then, the other day in seminar (I was covering for a colleague in our department’s “Contemporary Approaches to Geography” graduate seminar, discussing the long history of environmental determinism within and beyond the discipline), I found myself in a similar discussion related to explanation…and I think I figured out what has been going on.  The remote sensing and GIS students in the course, all of whom are extraordinarily well-trained in quantitative methods, got to thinking about how to determine if, in fact, the environment was “causing” a particular behavior*. In the course of this discussion, I realized that what they meant by “cause” was simple (I will now oversimplify): when you can rule out/control for the influence of all other possible factors, you can say that factor X caused event Y to happen.  Indeed, this does establish a causal link.  So, I finally get what everyone was saying when they said that, via well-constructed regressions, etc., one can establish causality.

So it turns out I was wrong…sort of. You see, I wasn’t really worried about causality…I was worried about explanation. My point was that the information you would get from a quantitative exercise designed to establish causal relationships isn’t enough to support rigorous project and program design. Just because you know that the construction of a borehole in a village caused girl-child school attendance to increase in that village doesn’t mean you know HOW the borehole caused this change in school attendance to happen.  If you cannot rigorously explain this relationship, you don’t understand the mechanism by which the borehole caused the change in attendance, and therefore you don’t really understand the relationship. In the “more pure” biophysical sciences**, this isn’t that much of a problem because there are known rules that particles, molecules, compounds, and energy obey, and therefore under controlled conditions one can often infer from the set of possible actors and actions defined by these rules what the causal mechanism is.

But when we study people it is never that simple.  The very act of observing people’s behaviors causes shifts in that behavior, making observation at best a partial account of events. Interview data are limited by the willingness of the interviewee to talk, and the appropriateness of the questions being asked – many times I’ve had to return to an interviewee to ask a question that became evident later, and said “why didn’t you tell me this before?”  (to which they answer, quite rightly, with something to the effect of “you didn’t ask”).  The causes of observed human behavior are staggeringly complex when we get down to the real scales at which decisions are made – the community, household/family, and individual. Decisions may vary by time of the year, or time of day, and by the combination of gender, age, ethnicity, religion, and any other social markers that the group/individual chooses to mobilize at that time.  In short, just because we see borehole construction cause increases in girl-child school attendance over and over in several places, or even the same place, doesn’t mean that the explanatory mechanism between the borehole and attendance is the same at all times.

Understanding that X caused Y is lovely, but in development it is only a small fraction of the battle.  Without understanding how access to a new borehole resulted in increased girl-child school attendance, we cannot scale up borehole construction in the context of education programming and expect to see the same results.  Further, if we do such a scale-up, and don’t get the same results, we won’t have any idea why.  So there is causality (X caused Y to happen) and there are causal mechanisms (X caused Y to happen via Z – where Z is likely a complex, locally/temporally specific alignment of factors).
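The distinction between causality and causal mechanisms can be sketched in a few lines of code. In the toy simulation below, every number and mechanism is hypothetical, invented purely for illustration: two entirely different mechanisms Z (time freed from hauling water versus fewer sick days) link borehole construction (X) to girl-child school attendance (Y), yet both produce essentially the same measurable association.

```python
import random

random.seed(42)

# Toy illustration (all numbers hypothetical): two different causal
# mechanisms (Z) connect borehole construction (X) to girl-child school
# attendance (Y), yet both yield nearly identical observable correlations.

def simulate(mechanism, n_villages=200):
    """Return (borehole, attendance) pairs for a set of simulated villages."""
    pairs = []
    for _ in range(n_villages):
        borehole = 1 if random.random() < 0.5 else 0
        attendance = random.gauss(50.0, 5.0)  # baseline attendance rate (%)
        if borehole:
            if mechanism == "time":
                attendance += 10.0  # Z1: girls freed from hauling water
            elif mechanism == "health":
                attendance += 10.0  # Z2: cleaner water, fewer sick days
        pairs.append((borehole, attendance))
    return pairs

def pearson_r(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

for mech in ("time", "health"):
    r = pearson_r(simulate(mech))
    print(f"mechanism = {mech}: r = {r:.2f}")  # both strong and positive
```

Both runs return essentially the same strong, positive correlation, and that is precisely the problem: the statistic confirms that X is associated with Y, but it cannot tell us whether the operative Z was time freed from hauling water, improved health, or something else entirely – and a scale-up designed around the wrong mechanism is unlikely to replicate the result.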

Unfortunately, when I look at much quantitative development research, especially in development economics, I see a lot of causality, but very little work on causal mechanisms that get us to explanation.  There is a lot of story time, “that pivot from the quantitative finding to the speculative explanation.”  In short, we might be programming development and aid dollars based upon evidence, but much of the time that evidence only gets us part of the way to what we really need to know to really inform program and project design.

This problem is avoidable – it does not represent the limits of our ability to understand the world. There is one obvious way to get at those mechanisms – serious, qualitative fieldwork.  We need to be building research and policy teams where ethnographers and other qualitative social scientists learn to respect the methods and findings of their quantitative brethren such that they can target qualitative methods at illuminating the mechanisms driving robust causal relationships. At the same time, the quantitative researchers on these teams will have to accept that they have only partially explained what we need to know when they have established causality through their methods, and that qualitative research can carry their findings into the realm of implementation.

The bad news for everyone…for this to happen, you are going to have to pick your heads up out of your (sub)disciplinary foxholes and start reading across disciplines in your area of interest.  Everyone talks a good game about this, but when you read what keeps getting published, it is clear that cross-reading is not happening.  Seriously, the number of times I have seen people in one field touting their “new discoveries” about human behavior that are already common conversation in other disciplines is embarrassing…or at least it should be to the authors. But right now there is no shame in this sort of thing, because most folks (including peer reviewers) don’t read outside their disciplines, and therefore have no idea how absurd these claims of discovery really are. As a result, development studies gives away its natural interdisciplinary advantage and returns to the problematic structure of academic knowledge and incentives, which not only enable, but indeed promote narrowly disciplinary reading and writing.

Development donors, I need a favor. I need you to put a little research money on the table to learn about whatever it is you want to learn about. But when you do, I want you to demand it be published in a multidisciplinary development-focused journal.  In fact, please start doing this for all of your research-related money. People will still pursue your money, as the shrinking pool of research dollars is driving academia into your arms. Administrators like grant and contract money, and so many academics are now being rewarded for bringing in grants and contracts from non-traditional sources (this is your carrot). Because you hold the carrot, you can draw people in and then use “the stick” inherent in the terms of the grant/contract to demand cross-disciplinary publishing that might start to leverage change in academia. You all hold the purse, so you can call the tune…

*Spoiler alert: you can’t.  Well, you probably can if 1) you pin the behavior you want to explain down to something extraordinarily narrow, 2) can limit the environmental effect in question to a single independent biophysical process (good luck with that), and 3) limit your effort to a few people in a single place. But at that point, the whole reason for understanding the environmental determinant of that behavior starts to go out the window, as it would clearly not be generalizable beyond the study. Trust me, geography has been beating its head against this particular wall for a century or more, and we’ve buried the idea.  Learn from our mistakes.

**by “more pure” I am thinking about those branches of physics, chemistry, and biology in which lab conditions can control for many factors. As soon as you get into field sciences, or start asking bigger questions, complexity sets in and things like causality get muddied in the manner I discuss above…just ask an ecologist.

Alright, last post I laid out an institutional problem with M&E in development – the conflict of interest between achieving results to protect one’s budget and staff, and the need to learn why things do/do not work to improve our effectiveness.  This post takes on a problem in the second part of that equation – assuming we all agree that we need to know why things do/do not work, how do we go about doing it?

As long-time readers of this blog (a small, but dedicated, fanbase) know, I have some issues with over-focusing on quantitative data and approaches for M&E.  I’ve made this clear in various reactions to the RCT craze (see here, here, here and here). Because I framed my reactions in terms of RCTs, I think some folks think I have an “RCT issue.”  In fact, I have a wider concern – the emerging aggressive push for quantifiable data above all else as new, more rigorous implementation policies come into effect.  The RCT is a manifestation of this push, but really is a reflection of a current fad in the wider field.  My concern is that the quantification of results, while valuable in certain ways, cannot get us to causation – it gets us to really, really rigorously established correlations between intervention and effect in a particular place and time (thoughtful users of RCTs know this).  This alone is not generalizable – we need to know how and why that result occurred in that place, to understand the underlying processes that might make that result replicable (or not) in the future, or under different conditions.

As of right now, the M&E world is not doing a very good job of identifying how and why things happen.  What tends to happen after rigorous correlation is established is what a number of economists call “story time”, where explanation (as opposed to analysis) suddenly goes completely non-rigorous, with researchers “supposing” that the measured result was caused by social/political/cultural factor X or Y, without any follow on research to figure out if in fact X or Y even makes sense in that context, let alone whether or not X or Y actually was causal.  This is where I fear various institutional pushes for rigorous evaluation might fall down.  Simply put, you can measure impact quantitatively – no doubt about it.  But you will not be able to rigorously say why that impact occurred unless someone gets in there and gets seriously qualitative and experiential, working with the community/household/what have you to understand the processes by which the measured outcome occurred.  Without understanding these processes, we won’t have learned what makes these projects and programs scalable (or what prevents them from being scaled) – all we will know is that it worked/did not work in a particular place at a particular time.

So, we don’t need to get rid of quantitative evaluation.  We just need to build a strong complementary set of qualitative tools to help interpret that quantitative data.  So the next question to you, my readers: how are we going to build in the space, time, and funding for this sort of complementary work? I find most development institutions to be very skeptical as soon as you say the word “qualitative”…mostly because it sounds “too much like research” and not enough like implementation. Any ideas on how to overcome this perception gap?

(One interesting opportunity exists in climate change – a number of projects are currently piloting new M&E approaches, as evaluating the impacts of climate change programming requires very long time horizons.  In at least one M&E effort I know of, there is talk of running both quantitative and qualitative project evaluations to see what each method can and cannot answer, and how they might fit together.  Such a demonstration might catalyze further efforts…but this outcome is years away.)

I will be speaking about my book and research at the University of Florida on Friday as part of the Glen R. Anderson Visiting Lectureship.  Poster here:

Hope to see folks there!

I’ll be running my mouth about the book again at Chatham University on December 2nd.  Chatham has some very cool stuff going on in sustainability and the environment (a new school!), including a new Eden Hall Campus in Richland Township, PA.  My talk will actually be out on that campus, and not at the Shadyside campus . . . directions are here.

The flyer (they’ve done a nice job on it):

Hope to see some of you there . . .

I’ve made a few changes to my personal homepage (www.edwardrcarr.com).  This included cleaning up a few things, adding a few book reviews for Delivering Development, and updating my CVs.  However, today, for the first time since I set my homepage up, I have added a page . . . there is now a page for pre-prints.  I have become thoroughly fed up with the gatekeeping and slow pace of academic publishing – I was annoyed to start with, but after more than a year in an agency, and about 18 months engaged with a much wider environment/development community via the blog and twitter, I have come to realize that academic publishing, for all its rigor and legitimacy, is something of a liability.  There is no way anyone is going to wait around for my work, or anyone else’s work, to wend its way through peer review and the inevitable publication delays before it appears in print.

To address this, I am now posting work that I have submitted for review – it is polished, and sometimes it has seen a round of peer review already (those will be marked revised and resubmitted).  However, they are not fully finished, peer-approved work – which means they will likely change a little before they come out in final form.  My goal is to make this stuff available more or less as soon as I submit it.  I am open to comments and suggestions – I can still work them in before the final version goes out!

Some of you might wonder how this could affect the idea of double-blind peer review.  Well, in my experience, double-blind peer review in development studies – or indeed in any of the qualitative social sciences – is largely a joke.  In my field, we tend to invest a lot of time and effort working in a particular place, and so it is very, very easy to figure out who is writing about what.  I often know who the author of a piece is as soon as I read the abstract – and there are always enough details in any manuscript to facilitate a quick Google search that will identify the author.  Both pieces that I currently have on my website work from material for which I am well-known within my field.  For example, just mentioning the villages of Dominase and Ponkrum in Ghana in the livelihoods piece pretty much tells everyone who it is.  And the piece on academic engagement with development practice comes directly from a panel at last year’s Association of American Geographers Annual Meeting which was attended by more than 100 people, as well as an extended listserv exchange in the fall of 2010 that was sent out to several thousand subscribers of various lists.  Again, pretty much everyone will be able to figure out who wrote it.

So, the work is now up there for your perusal.  Have a look, and let me know what you think . . .

So, it seems I have been challenged/called out/what-have-you by the folks at Imagine There Is No . . . over what I would do (as opposed to critique) about development.  At least I think that is what is going on, given that I received this tweet from them:

@edwardrcarr what would You do with 1 Billion $ for #development bit.ly/rQrUOd #The.1.Bill.$.Question

In general, I think this is a fair question.  Critique is nice, but at the end of the day I strive to build something from my critiques.  As I tell my grad students, I can train a monkey to take something apart – there isn’t much talent to that.  On the other hand, rebuilding something from whatever you just dismantled actually requires talent.  I admit to being a bit concerned about calling what I build “better”, mostly because such judgments gloss over the fact that any development intervention produces winners and losers, and therefore even a “better” intervention will probably not be better for someone.  I prefer to think about doing things differently, with an eye toward resolving some of the issues that I critique.

So, I will endeavor to answer – but first I must point out that asking someone what s/he would do for development with $1 billion is a very naive question.  I appreciate its spirit, but there isn’t much point to laying down a challenge that has little alignment with how the world works.  I think this is worth pointing out in light of the post on Imagine There Is No . . ., as they seem to be tweaking Bill Easterly for not having a good answer to their question.  However, for anyone who has ever worked for a development agency, the question “on what would you spend a billion dollars” comes off as a gotcha question because it is sort of nonsensical.  While the question might be phrased to make us think about an ideal world, those of us engaged in the doing of development who take its critique and rethinking seriously immediately start thinking about the sorts of things that would have to happen to make spending $1 billion possible and practical.  Those problems are legion . . . and pretty much any answer you give to the question is open to a lot of critique, either from a practical standpoint (a great idea that is totally impractical) or from the critique side (an idea that is just replicating existing problems).  When caught in a no-win situation, the best option is not to answer at all.  Sure, we should imagine a perfect world (after all, according to A World Of Difference, I am “something of a radical thinker”), but we do not work in that world – and people live in the Global South right now, so anything we do necessarily must engage with the imperfections of the now even as we try to transcend them.

Given all of this, I offer the following important caveats to my answer:

1) I am presuming that I will receive this money as an individual and not as part of any existing organization, as organizations have structures, mandates and histories that greatly shape what they can do.

2) I am presuming that I have my own organization, and that it already has sufficient staff to program $1 billion – so a lot of contracting officers and lawyers are in place.  Spending money is a lot harder than you’d think.

3) I am presuming that I answer only to myself and the folks in the Global South.  Monitoring and evaluation are some of the biggest constraints on how we do development today.  As I said in my talk at SAIS a little while ago, it is all well and good to argue that development merely catalyzes change in complex systems, which makes its outcomes inherently unpredictable.  It is entirely another to program against that understanding – if the possible outcomes of a given intervention are hard to predict, how do you know which indicators to choose?  How can you build an evaluation system that allows you to capture unintended positive and negative outcomes as the project matures without looking like you are fudging the numbers?  This sounds like constrained thinking, but it is reality for anyone working in a big donor agency, and for all of the folks who implement the work of those agencies.

4) I am presuming there are enough qualified staff out there willing to quit what they are doing and come work for this project . . . and I am going to need a hell of a lot of staff.

5) I am presuming that I am expected to accomplish something in the relatively short term – i.e., 3-5 years – while also triggering transformative changes in the Global South over the long haul.  If you don’t produce some results relatively soon, people will bail out on you.

All of these, except for 5), are giant caveats that basically divorce the question and its answer from reality.  I just need to point that out.  Because of these caveats, my answer here cannot be interpreted as a critique of my current employer, or indeed any other development organization – an answer that would also serve as a critique of those institutions would have to engage with their realities, blowing out a lot of my caveats above . . . sorry, but that’s reality, and it is really important to acknowledge the limits of any answer to such a loaded question.

So, here goes.  If I had $1 billion, I would spend it 1) figuring out what people really do to manage the challenges they face day-to-day, 2) identifying which of these activities are most effective at addressing those challenges and why, 3) evaluating whether any of these activities can be brought to scale or introduced to new places, and 4) bringing these ideas to scale.

Basically, I would spend $1 billion on the argument “the new big idea is no more big ideas.”

Why would I do this, and do it this way?  Well, I believe that in a general way those of us working in development have very poor information about what is actually happening in the Global South, in the places where the challenges to human well-being are most acute.  We have a lot of assumptions about what is happening and why, but these are very often wrong.  I wrote a whole book making this point – rather convincingly, if some of the reviews are to be believed.  Because we don’t know what is happening, and our assumptions are wide of the mark, a lot of the interventions we design and implement are irrelevant (at best) or inappropriate (at worst) to the intended beneficiaries.  Basically, the claim (a la Sachs and the Millennium Villages Project) that there are proven development interventions is crap.  If we had known, proven interventions WE WOULD BE USING THEM.  To assume otherwise is to basically slander the bulk of people working on development as insufficiently motivated (if we weren’t so damn lazy, and we really cared about poor people, we could fix all of the problems in the world with these proven interventions), or to argue that there simply needs to be more money spent on these interventions to fix everything (except in many cases there is little evidence that funding is the principal cause of project failure).  Of course, this is exactly what Sachs argues when asking for more support for the MVP, or when he is attacking anyone who dares critique the project.

The only way to really know what is happening is to get out there and talk to people.  When you do, what you find is that the folks we classify as the “global poor” are hardly helpless.  They are remarkably capable people who make livings under very difficult circumstances with very few resources and limited fallback options.  They know their environments, their economy, and their society far better than anyone from the outside ever will.  They are, in short, remarkable resources that should be treated as treasured repositories of human knowledge, not as a bunch of children who can’t work things out for themselves.  $1 billion would get us a lot of people in a lot of places doing a lot of learning . . . and this sort of thing can be programmed over 6 months to a year: run fieldwork, do some data analysis, and start producing tailored understandings of what works and why in different places . . . which then makes it relatively easy to start identifying opportunities for scale-up.  Actually, the scale-up could be done really easily, and could be very responsive to local needs, if we would just set up a means of letting communities speak to one another in a free and open manner – a network that let people in the Global South ask each other questions, and offer their answers and solutions, to one another.  Members of this project from the Global North, from the universities and from development organizations, could work with communities to convey the lessons the project has gleaned from various activities in various places to help transfer ideas and technology in a manner that facilitates their productive introduction in new contexts.  So I suppose I would have to carve part of the $1 billion off for that network, but it would come in under the scale-up component of my project.  Eventually, I suspect this sort of network would also become a means of learning about what is happening in the Global South as well . . .

With any luck at all, by year 3 we would see the cross-fertilization of all kinds of locally-appropriate ideas and technology happening around the world and the establishment of a nascent network that could build on this momentum to yield even more information about what people are already doing, and what challenges they really face.  We would have started a process that has immediate impacts, but can work in tandem with the generational timescales of social change that are necessary to bring about major changes in any place.  We would have started a process that likely could not be stopped.  How it would play out is anyone’s guess . . . but it would sure look different than whatever we are doing now.

Whenever you write something, you hope that other people will like it . . . or perhaps hate it so much it spurs them to do something useful in response.  In any case, you want feedback.  A vast, echoey silence just sucks.  I have a weird version of this with my own academic work.  More often than not, I write things that land in the literature with a huge thud.  One or two people notice, read and cite it in the first two or so years it is out . . . and then all of a sudden lots of people start citing it in all kinds of places, ranging from academic journals to UN Reports.  This has become a pretty regular pattern for me, which to some extent reflects the fact that I have a habit of writing stuff on the edges of my discipline(s), and also reflects how long it takes new ideas to get into people’s work and show up in print (generally speaking, it takes between 9 months and a year, at least, from the acceptance of an article to its appearance in print – so any new idea has to be read, processed and incorporated into a new article, which takes a few months.  Then the article has to be accepted, and review typically takes 3-6 months.  Finally, after it is accepted, another 9-12 month wait.  Add it up, and you realize that it takes anywhere from 14-24 months for the first people who read a new idea to start responding in print).

Delivering Development has been a little different, as it is being reviewed in different kinds of venues – a lot of blog attention, for example.  I also had the good fortune of having two people review the piece for the back cover, so I got some feedback before the book even came out.  In any case, the reviews are now starting to flow in, and overall they are really kind.  Best of all, they seem to get what I was trying to do with the book – which is the best kind of review one can get as an author.  The reviews (with links to full reviews):

Back Cover

Carr’s concern is that development and globalization, as currently pursued, are creating more poverty than they solve, needlessly producing economic and environmental challenges that put everyone on Earth at risk. Confronting this paradoxical outcome head-on, Carr questions the “wisdom” of the traditional development-via-globalization strategy, a sort of connect-the-development-dots, by arguing that in order to connect the dots one must first see the dots. By failing to do so, agencies do not understand what they are connecting and why. This fundamental questioning of Post WWII development strategies, grounded in life along “Globalization’s Shoreline,” sets his approach to development in the age of globalization apart from much of the contemporary development literature.

— Michael H. Glantz, Director, CCB (Consortium for Capacity Building), INSTAAR, University of Colorado

Over the fifty years since the end of the colonial era, rich nations have granted Africa billions of dollars in development aid—the equivalent of six Marshall Plans—and yet, today, much of the continent is as desperate as ever for help. In Delivering Development, Edward Carr delves into the question of why the aid system has failed to deliver on its promises, and offers a provocative thesis: that economic development, at least as international donors define it, is not necessarily equal to advancement. Unlike many combatants in the debate over the causes of global poverty, who jet in and out of these countries and offer the view from 10,000 feet, Carr takes a novel approach to the problem. He examines the aid system as it is actually experienced by poor Africans. Delivering Development focuses on a pair of Ghanaian villages, which despite their poverty by statistical measures have nonetheless managed to construct sophisticated systems of agricultural cultivation and risk management. Carr doesn’t argue that these places hold the secret to ending poverty. On the contrary, his point is that there are no overarching solutions, that each community holds a unique set of keys to its own future. By delving into development at the grassroots, Carr reveals the rich and bedeviling complexity of a problem that, all too often, is reduced to simplistic ideological platitudes.

— Andrew Rice, author of The Teeth May Smile but the Heart Does Not Forget: Murder and Memory in Uganda

Summaries of Recent Reviews (with links to full reviews)

The book is a riveting read, horizon broadening and . . . takes a somewhat unusual path towards challenging the dominant paradigm that complements other, parallel efforts . . . All-in-all, a must read for aid wonks everywhere.

— Andy Sumner, Global Dashboard

Development often fails. This is not a new premise. Many have written about it. But Edward Carr offers a fascinating perspective on why he believes this is true in Delivering Development.

— Robin Pendoley, Thinking Beyond Borders

This book makes an important contribution to critical literatures on globalization and development . . . [providing] an often overlooked perspective within critical development literature: the real possibility for positive change and for a more active role of development’s target population to participate and shape the direction of change in their communities.

— Kelsey Hanrahan, Africa Today