Those following this blog (or my twitter feed) know that I have some issues with RCT4D work.  I’m actually working on a serious treatment of the issues I see in this work (i.e. journal article), but I am not above crowdsourcing some of my ideas to see how people respond.  Also, as many of my readers know, I have a propensity for really long posts.  I’m going to try to avoid that here by breaking this topic into two parts.  So, this is part 1 of 2.

To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, this has long been a problem in development research, and I have no doubt that the data they are gathering is valid.  However, part of the reason I feel confident in this data is because, as I raised in an earlier post,  it is replicating findings from the qualitative literature . . . findings that are, in many cases, long-established with rigorously-gathered, verifiable data.  More on that in part 2 of this series.

One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.  However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data.  Certainly, these are different means of establishing rigor and validity, but they are still there.

Without rigor and validity, qualitative research devolves into bad journalism.  As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples.  These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader.  When journalists attempt to move to explanation via these same few examples (as columnists like Kristof and Friedman do far too often), they start making unsubstantiated claims that generally fall apart under scrutiny.  People mistake this sort of work for qualitative social science all the time, but it is not.  Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature.  Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.

This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement.  For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data.  In short, RCT4D often reverts to bad journalism when it comes time for explanation.  Patterns gleaned from meticulously gathered data are explained in an offhand manner.  For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of “the three I’s”: ideology, ignorance and inertia.  This is a boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here).  Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism.  Sorry, but that’s the truth.

Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – a horrible irony for an approach that is trying to move development research past the anecdotal.  This is not merely external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation.  In short, the bulk of the explanation for the interesting patterns of behavior that emerge from these studies rests on uninterrogated assumptions about human behavior that do not hold up to empirical reality.  What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.

Coming soon: Part 2 – Qualitative research and the interpretation of empirical data

I was at a talk today where folks from Michigan State were presenting research and policy recommendations to guide the Feed the Future initiative.  I greatly appreciate this sort of presentation – it is good to get real research in the building, and to see USAID staff that have so little time turn out in large numbers to engage.  Once again, folks, it’s not that people in the agencies aren’t interested or don’t care, it’s a question of time and access.

In the course of one of the presentations, however, I saw a moment of “explanation” for observed behavior that nicely captures a larger issue that has been eating at me as the randomized control trials for development (RCT4D) movement gains speed . . . there isn’t a lot of explanation there.  There is really interesting data, rigorously collected, but explanation is another thing entirely.

In the course of the presentation, the presenter put up a slide that showed a wide dispersion of prices around the average price received by farmers for their maize crops around a single market area (near where I happen to do work in Malawi).  Nothing too shocking there, as this happens in Malawi, and indeed in many places.  However, from a policy and programming perspective, it’s important to know that the average price is NOT the same thing as what a given household is taking home.  But then the presenter explained this dispersion by noting (in passing) that some farmers were more price-savvy than others.

Two problems with this:

1) there is no evidence at all to support this claim, either in his data or in the data I have from an independent research project nearby; and

2) this offhand explanation has serious policy ramifications.

This explanation is a gross oversimplification of what is actually going on here – in Mulanje (near the Luchenza market area analyzed in the presentation), price information is very well communicated in villages.  Thus, while some farmers might indeed be more savvy than others, the prices they are able to get are communicated throughout the village, thus distributing that information.  So the dispersion of prices is the product of other factors.  Certainly desperation selling is probably part of the issue (another offhand explanation offered later in the presentation).  However, what we really need, if we want a rigorous understanding of the causes of this dispersion and how to address it, is a serious effort to grasp the social component of agriculture in this area – how gender roles, for example, shape household power dynamics, farm roles, and the prices people will sell at (this is a social consideration that exceeds explanation via markets), or how social networks connect particular farmers to particular purchasers in a manner that facilitates or inhibits price maximization at market.  These considerations are both causes of the phenomena the presenter described and the points of leverage on which policy might act to actually change outcomes.  If farmers aren’t “price savvy”, this suggests the need for a very different sort of intervention than what would be needed to address gendered patterns of agricultural strategy tied to long-standing gender roles and expectations.

This is a microcosm of what I am seeing in the RCT4D world right now – really rigorous data collection, followed by really thin interpretations of the data.  It is not enough to just point out interesting patterns, and then start throwing explanations out there – we must turn from rigorous quantitative identification of significant patterns of behavior to the qualitative exploration of the causes of those patterns and their endurance over time.  I’ve been wrestling with these issues in Ghana for more than a decade now, an effort that has most recently led me to a complete reconceptualization of livelihoods (shifting from understanding livelihoods as a means of addressing material conditions to a means of governing behaviors through particular ways of addressing material conditions – the article is in review at Development and Change).  However, the empirical tests of this approach (with admittedly tiny-n samples in Ghana, and very preliminary looks at the Malawi data) suggest that I have better explanatory resolution for observed behaviors than is possible through existing livelihoods approaches (which would end up dismissing a lot of choices as illogical or the products of incomplete information) – and therefore a better foundation for policy recommendations than would be available without this careful consideration of the social.

See, for example, this article I wrote on how we approach gender in development (also a good overview of the current state of gender and development, if I do say so myself).  I empirically demonstrate that a serious consideration of how gender is constructed in particular places has large material outcomes on whose experiences we can understand, and therefore the sorts of interventions we might program to address particular challenges.  We need more rigorous wrestling with “the social” if we are going to learn anything meaningful from our data.  Period.

In summary, explanation is hard.  Harder, in many ways, than rigorous data collection.  Until we start spending at least as much effort on the explanation side as we do on the collection side, we will not really change much of anything in development.

There is a great post up at Good on “Pretending to be Poor” experiments, where participants try to live on tiny sums of money (e.g. $1.50/day) to better understand the plight of the global poor.  Cord Jefferson refers to this sort of thing as “playing poor”, at least in part because participants don’t really live on $1.50 a day . . . after all, they are probably not abandoning their secure homes, and probably not working the sort of dangerous, difficult job that pays such a tiny amount.  Consuming $1.50/day is one thing.  Living on it is entirely another.  (h/t to Michael Kirkpatrick at Independent Global Citizen for pointing out the post).

This, for me, brings up another issue – the “authenticity” of the experiences many of us have had while doing fieldwork (or working in field programs), an issue that has been amplified by what seems to be the recent discovery of fieldwork by the RCT4D crowd (I still can’t get over the idea that they think living among the poor is a revolutionary idea).  The whole point of participant observation is to better understand what people do and why they do it by experiencing, to some extent, their context – I find it inordinately difficult to understand how people even begin to meaningfully parse social data without this sort of grounding.  In a concrete way, having malaria while in a village does help one come to grips with the challenges this might pose to making a living via agriculture in a rather visceral way.  So too, living in a village during a drought that decimated a portion of the harvest, by putting me in a position where I had to go a couple of (intermittent) days without food, and with inadequate food for quite a few more, helped me come to grips with both the capacity and the limitations of the livelihoods strategies in the villages I write about in Delivering Development, and gave me at least a limited understanding of the feelings of frustration and inadequacy that can arise when things go wrong in rural Africa, even as livelihoods strategies work to prevent the worst outcomes.

But the key part of that last sentence was “at least a limited understanding.”  Being there is not the same thing as sharing the experience of poverty, development, or disaster.  When I had malaria, I knew what clinics to go to, and I knew that I could afford the best care available in Cape Coast (and that care was very good) – I was not a happy guy on the morning I woke up with my first case, but I also knew where to go, and that the doctor there would treat me comprehensively and I would be fine.  So too with the drought – the villages I was living in were, at most, about 5 miles (8km) from a service station with a food mart attached.  Even as I went without food for a day, and went a bit hungry for many more, I knew in the back of my mind that if things turned dire, I could walk that distance and purchase all of the food I needed.  In other words, I was not really experiencing life in these villages because I couldn’t, unless I was willing to throw away my credit card, empty my bank account, and renounce all of my upper-class and government colleagues and friends.  Only then would I have been thrown back on only what I could earn in a day in the villages and the (mostly appalling) care available in the rural clinic north of Eguafo.  I was always critically aware of this fact, both in the moment and when writing and speaking about it since.  Without that critical awareness, and a willingness to downplay our own (or others’) desire to frame our work as a heroic narrative, there is a real risk of creating our own versions of “playing poor” as we conduct fieldwork.

A letter to the editor in today’s Washington Post (scroll down to the second letter on this page) infuriated me beyond words.  People have a right to offer their opinion, however ill-informed, in a democracy.  Nobody, however, has a right to basically lie outright.  Yet James L. Henry, chairman of USA Maritime managed to get a letter to the editor published that did just that.

Henry was responding to the column “5 Myths About Foreign Aid,” in which the author, John Norris, the executive director of the sustainable security program at the Center for American Progress, quite rightly noted:

Congress mandates that 75 percent of all U.S. international food aid be shipped aboard U.S. flagged vessels — ships registered in the United States. A study by several researchers at Cornell University concluded that this subsidy of elite U.S. shipping companies cost American taxpayers $140 million in unnecessary transportation costs during 2006 alone.

The Government Accountability Office noted that between 2006 and 2008, U.S. food aid funding increased by nearly 53 percent, but the amount of food delivered actually decreased by 5 percent. Why? Because our food aid policies are swayed by an agribusiness lobby that stresses buying American, not buying cheaply.

Both of these points are well-documented.  But Henry, chair of the interest group that protects the US shipping industry from competition in the delivery of food aid, really doesn’t want you to know this.  Instead, he argues:

The reality is that cargo preference adds no additional cost to foreign aid programs and should be credited with sustaining an essential national defense sealift capability.

Cargo preference does not divert one dollar away from food aid programs. To the extent that cargo preference increases costs, the difference has been reimbursed by the Transportation Department. For example, reimbursements resulted in a $128 million net increase in available food aid funding in 2006. The Transportation Department reimburses these costs because a reliable U.S.-flag commercial fleet provides essential sealift capacity in times of war or national emergencies.

The language here is very careful – technically, he is not lying.  But by no means is his explanation meant to help the reader understand what is going on.  In arguing that “cargo preference does not divert one dollar away from food aid programs,” he fails to point out that the cost of cargo preference is built into existing budgets . . . it is part of existing food aid programs, and therefore technically does not divert money from them.  But this, of course, is not what Norris meant in his “5 myths” piece, nor is it what most people care about.  The simple fact of the matter is that more of the food aid budget could go to procuring food if the cargo preference requirement was dropped.  Period.

Second, if we read these two paragraphs carefully, we find that Henry is engaged in one of the more carefully phrased but entertainingly contradictory bits of writing I have ever seen.  Pay attention, now: in the first paragraph, he argues that “cargo preference adds no additional cost to foreign aid programs.”  In the second, he notes “To the extent that cargo preference increases costs, the difference has been reimbursed by the Transportation Department.” OK, first, let’s note that he just admitted that cargo preference does increase costs.  Second, he is technically correct – the burden of those costs is shifted outside foreign aid programs . . . to the Transportation Department.  Which is funded by the same tax dollars as foreign aid.  Basically, he is arguing that their taxpayer-funded subsidy/reimbursement should not be seen as having any impact on taxpayer-funded foreign relief operations.  Even though these are the same tax dollars, in the end.

Technically all true.  Clearly intended to deceive.  So, WaPo, how do you feel about publishing letters from poverty/disaster profiteers?

You know, qualitative social scientists of various stripes have long complained of their marginalization in development.  Examples abound of anthropologists, geographers, and sociologists complaining about the influence of the quantitatively-driven economists (and to a lesser extent, some political scientists) over development theory and policy.  While I am not much for whining, these complaints are often on the mark – quantitative data (of the sort employed by economists, and currently all the rage in political science) tends to carry the day over qualitative data, and the nuanced lessons of ethnographic research are dismissed as unimplementable, idiosyncratic/place-specific, without general value, etc.  This is not to say that I have an issue with quantitative data – I believe we should employ the right tool for the job at hand.  Sadly, most people only have either qualitative or quantitative skills, making the selection of appropriate tools pretty difficult . . .

But what is interesting, of late, is what appears to be a turn toward the lessons of the qualitative social sciences in development . . . only without actually referencing or reading those qualitative literatures.  Indeed, the former quantitative masters of the development universe are now starting to figure out and explore . . . the very things that the qualitative community has known for decades.  What is really frustrating and galling is that these “new” studies are being lauded as groundbreaking and getting great play in the development world, despite the fact that they are reinventing the qualitative wheel, without much of the nuance the qualitative literature has developed over several decades.

What brings me to today’s post is the new piece on hunger in Foreign Policy by Abhijit Banerjee and Esther Duflo.  On one hand, this is great news – good to see development rising to the fore in an outlet like Foreign Policy.  I also largely agree with their conclusions – that the poverty trap/governance debate in development is oversimplified, that food security outcomes are not explicable through a single theory, etc.  On the other hand, from the perspective of a qualitative researcher looking at development, there is nothing new in this article.  Indeed, the implicit premise of the article is galling: When they argue that to address poverty, “In practical terms, that meant we’d have to start understanding how the poor really live their lives,” the implication is that nobody has been doing this.  But what of the tens of thousands of anthropologists, geographers and sociologists (as well as representatives of other cool, hybridized fields like new cultural historians and ethnoarchaeologists)?  Hell, what of the Peace Corps?

Whether intentional or not, this article wipes the qualitative research slate clean, allowing the authors to present their work in a methodological and intellectual vacuum.  This is the first of my problems with this article – not so much with its findings, but with its appearance of method.  While I am sure that there is more to their research than presented in the article, the way their piece is structured, the case studies look like evidence/data for a new framing of food security.  They are not – they are illustrations of the larger conceptual points that Banerjee and Duflo are making.  I am sure that Banerjee and Duflo know this, but the reader does not – instead, most readers will think this represents some sort of qualitative research, or a mixed method approach that takes “hard numbers” and mixes them in with the loose suppositions that Banerjee and Duflo offer by way of explanation for the “surprising” outcomes they present.  But loose supposition is not qualitative research – at best, it is journalism.  Bad journalism.  My work, and the work of many, many colleagues, is based on rigorous methods of observation and analysis that produce verifiable data on social phenomena.  The work that led to Delivering Development and many of my refereed publications took nearly two years of on-the-ground observation and interviewing, including follow-ups, focus groups and even the use of archaeology and remotely-sensed data on land use to cross-check and validate both my data and my analyses.

The result of all that work was a deep humility in the face of the challenges that those living in places like Coastal Ghana or Southern Malawi manage on a day-to-day basis . . . and deep humility when addressing the idea of explanation.  This is an experience I share with countless colleagues who have spent a lot of time on the ground in communities, ministries and aid organizations, a coming to grips with the fact that massively generalizable solutions simply don’t exist in the way we want them to, and that singular interventions will never address the challenges facing those living in the Global South.

So, I find it frustrating when Banerjee and Duflo present this observation as in any way unique:

What we’ve found is that the story of hunger, and of poverty more broadly, is far more complex than any one statistic or grand theory; it is a world where those without enough to eat may save up to buy a TV instead, where more money doesn’t necessarily translate into more food, and where making rice cheaper can sometimes even lead people to buy less rice.

For anyone working in food security – that is, anyone who has been reading the literature coming out of anthropology, geography, sociology, and even some areas of ag econ, this is not a revelation – this is standard knowledge.  A few years ago I spent a lot of time and ink on an article in Food Policy that tried to loosely frame a schematic of local decision-making that leads to food security outcomes – an effort to systematize an approach to the highly complex sets of processes and decisions that produce hunger in particular places because there is really no way to get a single, generalized statistic or finding that will explain hunger outcomes everywhere.

In other words: We know.  So what do you have to tell us?

The answer, unfortunately, is not very much . . . because in the end they don’t really dive into the social processes that lead to the sorts of decisions that they see as interesting or counterintuitive.  This is where the heat is in development research – there are a few of us working down at this level, trying to come up with new framings of social process that move us past a reliance solely on the blunt tool of economistic rationality (which can help explain some behaviors and decisions) toward a more nuanced framing of how those rationalities are constructed by, and mobilize, much larger social processes like gender identification.  The theories in which we are dealing are very complex, but they do work (at least I think my work with governmentality is working – but the reviewers at Development and Change might not agree).

And maybe, just maybe, there is an opening to get this sort of work out into the mainstream, to get it applied – we’re going to try to do this at work, pulling together resources and interests across two Bureaus and three offices to see if a reframing of livelihoods around Foucault’s idea of governmentality can, in fact, get us better resolution on livelihoods and food security outcomes than current livelihoods models (which mostly assume that decisionmaking is driven by an effort to maximize material returns on investment and effort).  Perhaps I place too much faith in the idea of evidence, but if we can implement this idea and demonstrate that it works better, perhaps we will have a lever with which to push oversimplified economistic assumptions out of the way, while still doing justice to the complexity of social process and explanation in development.

what she said!

RealClimate had an interesting post the other day about adaptation – specifically, how we bring together models that operate at the global-to-regional scales with an understanding of current and future impacts of climate change, which we feel at the local scale.  The post was written from a climate science perspective, and so focuses on modeling capabilities and needs as related to the biophysical world.  In doing so, though, it leaves aside one huge uncertainty in our use of downscaled models for adaptation planning: the likely pathways of human response to changes in the climate over the next several decades.  In places like sub-Saharan Africa, how people respond to climate change will have impacts on land use decisions, and therefore land cover . . . and land cover is a key component of local climate.  In other words, as we downscale climate models, we need to start adding new types of data to them – social data on adaptation decision-making, so that we might project plausible future pathways and build them into these downscaled models.

For example, many modeling exercises currently suggest that a combination of temperature increases and changes in the amount and pattern of rainfall in parts of southern Africa will make it very difficult to raise maize there over the next few decades.  This is a major problem, as maize is a staple of the region.  So, what will people do?  Will they continue to grow maize that is less hardy and takes up less CO2 and water as it grows, will they switch to a crop that takes up more CO2 than maize ever did, or will they begin to abandon the land and migrate to cities, creating pockets of fallow land and/or opening a frontier for mechanized agriculture (both outcomes likely to have significant impacts on greenhouse gas emissions and water cycling, among other things)?  Simply put, we don’t really know.  But we need to know, and we need to know with reasonably high resolution.  That is, it is not enough to simply say “they will stop planting maize and plant X.”  We need to know when this transition will take place.  We need to know if it will happen suddenly or gradually.  We need to know if that transition will itself be sustainable going forward, or if other changes will be needed in the near future.  All of this information needs to be part of iterative model runs that capture land cover changes and biogeochemical cycling changes associated with these decisions to better understand future local pathways of climate change impacts and the associated likely adaptation pathways that these populations will occupy.

The good news* is that I am on this – along with my colleague Brent McCusker at West Virginia University (see pubs here and here).  Between the two of us, we’ve developed a pretty solid understanding of adaptation and livelihoods decision-making, and have spent a good bit of time theorizing the link between land use change and livelihoods change to enable the examination of the issues I have raised above.  We have a bit of money from NSF to run a pilot this summer (Brent will manage this while I am a government employee), and I plan to spend next year working on how to integrate this research program into the global climate change programming of my current employer.

Long and short: climate modelers, you need us social scientists, now more than ever.  We’re here to work with you . . .

*Calling this good news presumes that you see me as competent, or at least that you see Brent as competent enough to make up for my incompetence.

Athletic aid?  Do universities really subsidize their athletic departments in these hard times, and can they really make those subsidies back?  Phil Miller at Environmental Economics provided a link to data on athletic department revenues and expenses for most D-1 schools.  It is an interesting dataset, especially at a time when ever-contracting budgets make subsidies for athletic departments less and less attractive.  At the same time, I am a former college athlete (track and field, one of those sports that costs a lot more than it will ever make) and I don’t want to see athletic departments tossed entirely.  I wonder if public disclosure of these costs in such an easily accessible form will do anything in terms of public awareness and attitudes in a tough economy.  My guess is that the answer will depend on the school in question.  I assure you that no matter what the figures, even Tea Party central (or as you might call it, the state of South Carolina) is going to just keep supporting the athletic departments of Clemson and the University of South Carolina (where, in full disclosure, I should note that I am currently employed).

A quick glance at the data suggests that The University of South Carolina’s athletic department does quite well year-to-year.  When you subtract expenses from revenues, you get a profit of more than $2 million in 2008, a little over half a million in 2009, and nearly $1.6 million in 2010.  This looks great, until you take the data apart a little.  In the good news column, the university has not paid any direct subsidy to the athletic department over the past three years.  However, they have been charging student fees to support the department.  A lot of student fees: in 2008, $1,987,931.  In 2009, $2,098,087.  And in 2010, $2,146,293.  That trend is probably going in the wrong direction, though not all that much.  But if we take the student fee subsidy (and let’s be honest, that’s what it is) out of the revenue column, the figures don’t look that rosy:

2008: a tiny profit, less than $200k

2009: a loss of $1.6 million

2010: a loss of more than half a million.

Yep, over the past three years, the athletic department has lost around $2 million.  Now, to be fair, that does not include revenues from branded merchandise that are not attributed to the athletic department (i.e. bookstore sales, which are huge) but instead go to the university general fund, which likely pushed this figure back toward revenue neutrality.

Now, let’s look up the road to our in-state rival, Clemson (also a state school, though a lot of people seem to be unaware of this).  Again, a quick look at the numbers suggests that Clemson mostly ran in the black, making more than $870,000 in 2008, losing $500,000 in 2009, and rebounding to make $780,000 in 2010.  But if you take apart the numbers, it gets ugly quick:

In 2008, the school subsidized the athletic department to the tune of $2,435,268, and charged students an additional $1,501,216 in fees to support the department ($3.9 million total).  In 2009, it was a subsidy of $2,924,005 and fees of $1,535,940 (nearly $4.5 million).  Finally, in 2010, the numbers were $3,233,520 in subsidy and $1,585,556 in fees ($4.8 million).  This is a lot of money in a state that is more or less broke, and in which tuition and fees continue to rise.  The net?

2008: A loss of over $3 million

2009: A loss of $5 million

2010: Another loss of $4 million

Yeah, there is no way they are covering that with merchandising, and given the relatively poor quality of the teams coming out of their program in recent years, I doubt this is spurring serious alumni donations.
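As a back-of-the-envelope check, the subsidy-adjusted arithmetic above can be sketched in a few lines of Python.  The reported nets are the rounded figures quoted in this post (the 2008 USC number in particular is approximate), while the fee and subsidy totals are the exact figures given above:

```python
# Recompute each school's net once student fees (and, for Clemson, the
# direct institutional subsidy) are treated as subsidies rather than revenue.

usc = {  # year: (reported net, student fees)
    2008: (2_100_000, 1_987_931),   # reported net is approximate
    2009: (550_000, 2_098_087),
    2010: (1_600_000, 2_146_293),
}

clemson = {  # year: (reported net, direct subsidy, student fees)
    2008: (870_000, 2_435_268, 1_501_216),
    2009: (-500_000, 2_924_005, 1_535_940),
    2010: (780_000, 3_233_520, 1_585_556),
}

def adjusted_net(reported_net, *subsidies):
    """Net result once subsidy-like revenue is stripped out."""
    return reported_net - sum(subsidies)

for year, (net, fees) in usc.items():
    print(f"USC {year}: {adjusted_net(net, fees):+,}")

for year, (net, subsidy, fees) in clemson.items():
    print(f"Clemson {year}: {adjusted_net(net, subsidy, fees):+,}")
```

Run it and you get the figures above: a tiny USC profit in 2008 and losses every other year, and Clemson losses in the $3–5 million range each year.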

Oh, and WTF is going on with Virginia’s (my old athletic department) numbers?  In 2010, they ran an $11 million profit, but charged the students $12 million in fees?  This makes absolutely no sense . . . I have to assume the data here is screwed up somehow, as those fees would work out to around $870 per student (including grads and professional students)!  There is no way that flies there.  I can only assume we are looking at a lot of misreported fees – I mean, looking at dollars in and dollars out, why not just cut the fees down to $1 million and go for revenue neutrality?

In my guest post on Aid Watch yesterday, I argued that a basic familiarity with the history and philosophy of development, and some training in critical approaches to development, might have averted at least one of the problems currently associated with the Millennium Village Project (a conflict of interest for project workers when the stated goals and interventions of the project and the needs of MVP communities do not align) before it happened.

A failure of background knowledge also lies at the heart of the MVP’s enduring popularity, even in the face of mounting empirical evidence that it is not working.  It is one thing to ignore the predictions of a lone academic (or a few academics).  It is another to overlook evidence of problems trickling in from around the world. If the MVP is so flawed, why do so many continue to support it?

I argue that the MVP drew its popularity from two sources: its theoretical eclecticism, and the ways in which it resonated with conventional understandings of development and development practice in the major agencies.  If one goes through the literature on the MVP, one will find echoes of many different bodies of development theory (I say “echoes” purposefully: the MVP has never overtly referenced any bodies of development theory in its publications, forcing critics to read between the lines).  For example, various authors (e.g. here and here) have found in the MVP the influence of “big push” theories with their foundations in the 1950s, while others hear reverberations of Reagan-era privatization and deregulation.

While drawing upon many bodies of theory to build something new is not a problem in and of itself, doing so productively requires an understanding of each theory from which one is drawing.  The framing of the MVP shows no sign of such familiarity.  Instead, it appears to pluck “useful” bits and pieces of these theories that support the project’s larger political agenda and justifications for its technical interventions.  It adopts the language of “big push” theories when it argues for a concentrated injection of capital across sectors of a village economy to get them all moving simultaneously.  At the same time, it adopts a governance focus that echoes modernization theory.  As I argued in my article on the MVP:

This focus, insofar as it does not consider the ways in which existing processes do function and places a priori weight on Western modes of administration and governance, echoes earlier, often ethnocentric, tenets of modernization theory, such as the need to convince societies to embrace new, Western forms of administration on their path to ‘development’. (338)

The problem is that these “useful bits” were parts of larger theories that, on the whole, often contradicted one another.

For example, as Cabral et al. (2006) have observed, ‘big push’ theories of development that see a coordinated injection of capital across all sectors of an economy as a productive means of driving economic ‘take off’ and development (for example, Rostow 1959) run contrary to the claims of modernization theorists like Lewis (1954), who saw unbalanced growth in different sectors of the economy as a key to stimulating the overall economy. (338)

The result was a project that had something for every development perspective.  However, this eclecticism came at the cost of internal coherence, and of the ability to reflect upon or address the well-known historical problems encountered by those who employed the larger theories from which these bits were taken. A reasonable familiarity with the history and philosophy of development would have made these issues apparent long before there was a need to gather empirical evidence on the performance of the MVP.

But this sort of eclecticism only goes so far in explaining the popularity of a project – after all, most people do not worry much about the underlying assumptions of a given project or program.  What policymakers certainly do notice are the ways in which the MVP nicely aligns itself with conventional understandings of development policy and practice.  For example, there are broad similarities in approach and assumptions between the MVP and Poverty Reduction Strategy Papers (PRSPs) which suggest that the MVP is not only nothing new, but also nothing revolutionary (or, in fact, even all that different from what is already being done by the mainstream development community):

Like the MVP, PRSPs tend to deal with development issues sectorally, without addressing either the tradeoffs or the synergies between different sectors – this is particularly true in the context of sustainable development planning. PRSPs also tend to conceive of solutions to sectoral problems without reference to local conditions. For example, lagging agricultural production is often addressed through the introduction of more inputs, which on its surface might seem like the ‘common sense’ application of ‘tested and true methods’. Such a set of solutions and rhetoric is nearly identical to that seen in the MVP. Finally, PRSPs, like the MVP, do not consider the social context and processes through which problems are identified and solutions shaped at the national or local level. Yet, national politics may influence the identification of a particular harvest as ‘insufficient’ or ‘sufficient’, a label that shapes how people view that harvest and the needs of those who are dependent on it for their livelihoods. In short, the MVP and the PRSPs are mutually reinforcing – there is no challenge to the development status quo in the MVP, except perhaps in the form of a call for more money to fund the ‘big push’ (Cabral et al. 2006) needed to ‘kick-start’ development in these villages. (338-339)

Again, a familiarity with the conceptual literature in development studies would have allowed those who touted this project as something new to recognize its fundamentally conservative approach to development.

All of this goes to deepen an underlying point in the Aid Watch post: more practitioner training in the history and philosophy of development, and a wider exposure to critical approaches to development, are critical first steps toward the creation of (or simply the recognition of) truly revolutionary, coherent and ultimately successful projects.

UPDATE: Marc Bellemare pointed out some issues with this post, which I have addressed here.  These issues, though, strengthen the argument about strategic deglobalization . . .


There has been an interesting series of blog posts going around about the issue of price speculation in food markets, and the impact of that speculation on food security and people’s welfare.  Going back through some of these exchanges, it seems to me that a number of folks are arguing past one another.

The most recent discussion was spurred by a post on the Guardian’s Global Development blog by John Vidal that took on the issue of speculation in food markets.  In the post, Vidal argues that food speculation is a key driver of price instability on global food markets, which results in serious impacts for the poorest people in the world – a sort of famine profiteering, as it were.

The weaknesses of this post, as I see it, are twofold.  First, it doesn’t take the issue of price arbitrage seriously – that is, how speculation is supposed to function.  Aid Thoughts, via one of the comments on Vidal’s post, takes Vidal to task for this.  As Aid Thoughts and the commenter point out, the idea behind speculation is to pull the future price impacts of shortage into the present, stimulating responses to future shortages before they occur.  Thus, a blanket condemnation of speculation makes very little sense from the perspective of someone who wants to see food security enhanced around the world – without speculation, there will be no market signal of future shortage, leaving a world that addresses shortages in a reactive instead of proactive manner. This is a completely fair critique of Vidal, I think.

Second, however, neither Vidal nor those responding to him actually addresses the evidence for significant market manipulation, and for the intentional generation of instability for the purposes of profiteering.  This evidence first emerged in a somewhat anecdotal manner in Fredrick Kaufman’s “The Food Bubble: How Wall Street starved millions and got away with it.”  In this article, Kaufman uses a fairly limited number of informants to lay out a case for the intentional manipulation of wheat markets in 2008.  It is an interesting read, though I argued in an earlier post that it suffers from trying to be a parable for the pervasive presence of complex investment vehicles in the modern world.  And in the end, its findings can hardly be called robust.

Though Kaufman’s argument might, by itself, be less than robust, it received a serious empirical boost from the International Food Policy Research Institute (IFPRI) in the fall of 2010.  In a discussion paper that remains underreported and under-considered in food security circles (trust me, it is difficult to get anyone to even talk about speculation in program settings), Bryce Cooke and Miguel Robles demonstrate quantitatively that the dramatic rise in food prices in 2008 is best explained by various proxies for speculation and activity on futures markets.  Now, we can argue about how large an impact that activity had on actual prices, but it seems to me that Cooke and Robles, when taken in concert with the Kaufman piece, have demonstrated that the speculation we see in the markets right now is not merely a normal market response to potential future shortage – indeed, the Food and Agriculture Organization (FAO) of the United Nations has been arguing for months that there are no likely supply issues that should be triggering the price increases we see.  In other words, while it is foolish to simply blame price arbitrage for food insecurity, it is equally blind to assume that all of those practicing such arbitrage are doing so in the manner prescribed in the textbooks.  Someone will always try to game the system, and in tightly connected markets, a few efforts to game a market can have radiating impacts that draw in honest arbitrage efforts.  There is need for regulatory oversight.  But regulation will not solve all our food problems.

But this all leaves one last question unanswered: what is the impact of price instability, whether caused by actual likely future shortages or by efforts to game markets for short-term profits, on the welfare of the poor?  Vidal, Kaufman and many others assume that the impacts are severe.  Well, maybe.  You see, where matters (again – yep, I’m a geographer).  In a very interesting paper, Marc Bellemare (along with Chris Barrett and David Just) demonstrates that, at least in Ethiopia:

contrary to conventional wisdom, the welfare gains from eliminating price volatility would be concentrated in the upper 40 percent of the income distribution, making food price stabilization a distributionally regressive policy in this context.

This finding may come as a shock to those working in aid, but it is actually intuitive.  In fact, in my book (out tomorrow!) I lay out a qualitative picture of livelihoods in rural Ghana that aligns perfectly with it.  In Bellemare et al., I would bet my house that the upper 40% of the population is the segment living in urban areas and/or wealthy enough to be purchasing large amounts of processed food.  Why does this matter?  This is the segment of the population that typically has the most limited options when food prices become unstable.  On the other hand, the bottom 60% of the population, especially those in this cohort living in rural areas (it is unclear from the study how much overlap there is between poor and rural in the sample, but I am betting it is pretty high), has a much more limited engagement with global food markets.  As a result, when food prices begin to spike, they have the ability to effect a temporary partial, or even complete, disengagement from the global market.  In other words, much as I saw in Ghana, this study seems to suggest that temporary deglobalization is a coping strategy that at least some people in Ethiopia use to guard against the vagaries of markets.  Ironically, those best positioned to effect such a strategy are the poorest, and they are therefore better able to manage the impact of price instability on food markets.

In short, I would argue that Marc’s (and his co-authors’) work is a quantitative empirical demonstration of one of my core arguments in Delivering Development:

2. At globalization’s shoreline the experience of “development” is often negative. The integration of local economies, politics, and society into global networks is not the unmitigated boon to human well-being presented by many authors. Those living along the shores of globalization deal with significant challenges in their lives, such as degrading environments, social inequality that limits opportunity for significant portions of society, and inadequate medical care. The integration of these places into a global economy does not necessarily solve these problems. In the best cases such integration provides new sources of income that might be used to address some of these challenges. In nearly all cases, however, such integration also brings new challenges and uncertainties that come at a cost to people’s incomes and well-being. (pp.14-15)

I’m not suggesting Marc endorses this claim – hell, for all I know he’ll start throwing things when he sees it.  But there is an interesting convergence happening here.  I’m glad I met Marc at a tweet-up in DC a few weeks ago.  We’re going to have to talk some more . . . I see the beginning of a beautiful friendship.

In summary, while efforts to game global food markets do exist, and have very serious impacts on at least some people, they do not crush everyone in the Global South.  Instead, this instability will be most felt by those in urban areas – in the form of a disaffected middle and upper class, and a large cohort of the urban poor who, lacking alternative food sources, might be pushed over the brink by price increases.  The policy implications are clear:

  • We need to be watching the impact of price increases on urban food insecurity more than rural insecurity
  • Demanding that rural producers orient themselves toward greater and greater integration with global markets in the absence of robust fallback measures (such as established, transparent microinsurance and microsavings initiatives) will likely extend the impact of future price instability further into the poorest populations.
  • We need to better understand the scope of artificially-generated instability and uncertainty in global food markets, and establish means of identifying and regulating this activity without closing price arbitrage down entirely.
