Are we really that bad?

So, the Center for Global Development, a non-partisan think tank focused on reducing poverty and making globalization work for the poor (a paraphrase of their mission statement, which can be found here), has issued a report that more or less says that the quality and effectiveness of USAID’s aid are very low compared to those of other agencies.
Well, I’m not all that freaked out by this assessment, principally because it fails to ask important questions relevant to understanding development needs and development outcomes.  In fact, the entire report is rigged – not intentionally, mind you, but I suspect out of a basic ignorance of the differences among the agencies being evaluated, and an odd (mis)understanding of what development is.
For me, the most telling point in the report came right away, on pages 3 and 4:

Given these difficulties in relating aid to development impact on the ground, the scholarly literature on aid effectiveness has failed to convince or impress those who might otherwise spend more because aid works (as in Sachs 2005) or less because aid doesn’t work often enough (Easterly 2003).

Why did this set me off?  Well, in my book I argue that the “poles” of Sachs and Easterly in the development literature are not poles at all – they operate from the same assumptions about how development and globalization work, and I just spent 90,000 words’ worth of a book laying out those assumptions and why they are often wrong.  In short, this whole report is operating from within the development echo chamber from which this blog takes its name.  But then they really set me off:

In donor countries especially, faced with daunting fiscal and debt problems, there is new and healthy emphasis on value for money and on maximizing the impact of their aid spending.

Folks, yesterday I posted about how the desire to get “value for our money” in development was putting all the wrong pressures on agencies . . . not because value is bad, but because it pushes development agencies to avoid risk (and the costs that come with it), which in turn chokes off innovation in their programs and policies.  And here we have a report evaluating the quality of aid (their words) in terms of its cost-effectiveness.  One of its four pillar analyses is the ability of agencies to maximize aid efficiency.  This is nuts.
Again, it’s not that there should be no oversight of the funds or their uses, or that there should be no accountability for those uses.  But to demand efficiency is largely to rule out high-risk efforts that could have huge returns but carry a significant risk of failure.  Put another way, if this metric were applied to the Chilean mine rescue, it would score low for efficiency because the rescuers tried three methods at once and two failed.  Of course, that overlooks the fact that they GOT THE MINERS OUT ALIVE.  Same thing for development – give me an “inefficient” agency that can make transformative leaps forward in our understandings of how development works and how to improve the situation of the global poor over the “efficient” agency that never programs anything of risk, and never makes those big leaps.
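To make the arithmetic behind this point concrete, here is a minimal sketch – the 50% per-method success rate and the three parallel attempts are my own illustrative assumptions, not figures from the report or from the actual rescue.  The point is simply that running several risky approaches at once drives the odds of overall success way up, even as a per-attempt “efficiency” score looks terrible:

```python
# Hypothetical illustration only -- numbers are assumed, not from the CGD report
# or the Chilean rescue.

p_success_each = 0.5   # assumed chance that any single method works
n_methods = 3          # methods attempted in parallel

# Probability that at least one of the parallel methods succeeds
p_at_least_one = 1 - (1 - p_success_each) ** n_methods   # = 0.875

# A naive "efficiency" score: successful attempts divided by total attempts,
# assuming exactly one method ends up working
naive_efficiency = 1 / n_methods                          # ~ 0.33

print(f"Chance the overall effort succeeds: {p_at_least_one:.3f}")
print(f"Naive per-attempt efficiency score: {naive_efficiency:.2f}")
```

In other words, the metric punishes exactly the redundancy that makes the outcome likely in the first place.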
Now, let’s look at the indicators – because they tell the same story.  One of the indicators under efficiency is “Share of allocation to well-governed countries.”  Think about the pressure that places on an agency that has to think about where to set up its programming.  What about all of the poor, suffering people in poorly-governed countries?  Is USAID not supposed to send massive relief to Haiti after an earthquake because its government is not all we might hope?  This indicator either misses the whole point of development as a holistic, collaborative process of social transformation, or it is a thinly-veiled excuse to start triaging countries now.
They should know better – Andrew Natsios is one of their fellows, and he has explained how these sorts of evaluation pressures choke an agency to death.  Amusingly, they cite this work here . . . almost completely at random on page 31, for a point that has no real bearing on that section of the text.  I wonder what he thinks of this report . . .
In the end, USAID comes out 126th of 130 agencies evaluated for “maximizing efficiency.”  Thank heavens.  It probably means that we still have some space left to experiment and fail.  Note that among the top 20% of donors, the highest scores went to the World Bank and UN agencies, arguably the groups that do the least direct programming on the ground – in other words, the “inefficiencies” of their work are captured elsewhere, when the policies and programs they set up for others to run begin to come apart.  The same could be said of the Millennium Challenge Corporation here in the US, which also scored high.  In short, the report rewards the agencies that don’t actually do all that much on the ground for their efficiency, while the agencies that actually have to deal with the uncertainties of real life get dinged for it.
And the Germans ended up ranking high, but hey, nothing goes together like Germans and efficiency.  That one’s for you, Daniel Esser.
What a mess of a report . . . and what a mess this will cause in the press, in Congress, etc.  For no good reason.

Required reading . . .

I’ve worked in the field of development studies for more than a decade now, mostly from the academic side.  In academia, we are very good at looking at the nuances of language and practice to try and detect why people do the things that they do.  As a result, in development studies we spend a lot of time thinking about discourses of development – the ways that we think about, speak about and act in the world – and how those shape the ways in which we “do development”.  Mostly, academics do this to explain why it is that development agencies and practitioners keep doing the same things over and over, hoping for a different result (which, you might remember, is how Einstein defined insanity).  There are some wonderful studies based in this approach that everyone should be reading, including Ferguson’s The Anti-Politics Machine, Scott’s Seeing Like A State, and Mitchell’s Rule of Experts (links in the sidebar to the right).  All help us to better understand why development doesn’t seem to work as well as we hope.  I suppose my forthcoming book (link also to the right) falls into this category as well, though I do not wade explicitly into social theory there (if you know the theory, you will see it in there – if you don’t, no worries, the argument is still perfectly intelligible).
What we academic types are not so good at is understanding the reality of life and work in a development organization.  Many of us have never worked in one, or did so a long time ago as a relatively low-ranking person.  However, when you rise in rank in an agency, you start to see the various organizational and political impediments to good work . . . and these impediments are at least as important for explaining development’s many failures as the (often-flawed) discursive framings these agencies employ to understand the world.
With that in mind, I now strongly recommend you read The Clash of the Counter-bureaucracy and Development by former USAID Administrator Andrew Natsios.  Now, I don’t agree with a lot of the things that Natsios says about development in general – indeed, I think some of his logic with regard to economic growth as the core of development is very flawed – but I cannot argue at all with his gloves-off reading of how accountability measures, like monitoring and evaluation, are slowly choking USAID to death.  And it is gloves off – the man names names.  I was not at AID under his leadership, but my colleagues all agree that he was a great administrator to work for, even if they did not agree with him all the time.  The man knows development . . . which is more than I can say about some previous administrators here.
By the way, even if you don’t work in development, you should read this – it is a wider lesson about how the best intentions related to accountability can go all wrong.  Those of you working for larger organizations will likely recognize parts of this storyline from where you sit.  And it is a pretty entertaining read, if for no other reason than to watch Natsios just lay it out there on a few people.  Must be nice to be retired . . .

The new job looms . . .

and I know it, because news stories like this one about the flooding in Niger hit me in a completely different way now – previously, I would have thought about how this could be teachable, and even how it might relate to some research ideas . . . now, I recall interviews from April with people in my new Bureau at USAID where we discussed the looming food crisis in Niger.  In mid-September, this won’t be a teachable moment – it will be a fire drill for which I have some degree of responsibility.  Sobering.
Incidentally, this is another example of the challenges that face those of us working at the intersection of environment and development.  The long-term (last four to five decades) precipitation signal is in steady decline.  It is hard to say whether this is a visible outcome of climate change, mostly because we have a lot of trouble understanding the mechanics of the West African climate (for those so inclined, there are some issues with the teleconnections from ENSO and the influence of the NAO).

Dunkwa (Ghana) weather station precipitation figures 1963-2000 (source: Ghana Meteorological Service)

This figure (from my upcoming book) illustrates the real problem, though – the long-term decline is clear at this weather station (the closest one to my research area that is not parked right on the beach), but more striking is the variability around the centerline.  While this station is not showing any real trend toward greater variability, many other places in West Africa are – hence the massive, surprising flooding we are seeing in Niger, despite a long-term trend toward less precipitation in the region.  People forget that there are two key variables that shape precipitation outcomes – amount and timing.
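For readers who like to see that distinction in numbers, here is a minimal sketch of what I mean by separating the trend from the variability.  The data below are synthetic and purely illustrative – they are not the Dunkwa record, which comes from the Ghana Meteorological Service – but the decomposition is the same: fit the long-term centerline, then look at the size of the swings around it.

```python
import numpy as np

# Synthetic annual precipitation totals (mm), 1963-2000 -- illustrative only,
# not the actual Dunkwa station data.
rng = np.random.default_rng(0)
years = np.arange(1963, 2001)
centerline = 1600 - 5.0 * (years - 1963)                 # assumed gradual decline in the mean
rainfall = centerline + rng.normal(0, 200, years.size)   # assumed year-to-year swings

# Long-term trend: slope of a least-squares fit (mm per year)
slope, intercept = np.polyfit(years, rainfall, 1)

# Variability: spread of the detrended series, i.e. the swings around the centerline (mm)
residuals = rainfall - (slope * years + intercept)
variability = residuals.std()

print(f"Long-term trend: {slope:.1f} mm/year")
print(f"Detrended variability (std dev): {variability:.0f} mm")
```

Two places can share the same downward slope while one of them swings far more wildly from year to year – and it is the swings, not the slope, that deliver a flood in the middle of a drying region.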
This is probably the hardest part of the job – thinking about how to plan for increasing unpredictability and variability.  Trends are easy, assuming their mechanics are understood and therefore somewhat predictable.  If I know there will be 10% less rainfall in a particular place by a particular year, I can go about figuring out what the biophysical, economic and social impacts of that change might be.  However, it is a hell of a lot harder to plan for 10% more variability by a given year (assuming we could even quantify rising variability in such a manner).  Well, if it were easy, it wouldn’t be interesting . . . and someone else would have solved it already.