So, the Center for Global Development, a non-partisan think tank focused on reducing poverty and making globalization work for the poor (a paraphrase of their mission statement, which can be found here), has issued a report that more or less says the quality and effectiveness of USAID’s aid are very low compared to those of other agencies.
Well, I’m not all that freaked out by this assessment, principally because it fails to ask important questions relevant to understanding development needs and development outcomes. In fact, the entire report is rigged – not intentionally, mind you, but I suspect out of a basic ignorance of the differences among the agencies being evaluated, and an odd (mis)understanding of what development is.
For me, the most telling point in the report came right away, on pages 3 and 4:
Given these difficulties in relating aid to development impact on the ground, the scholarly literature on aid effectiveness has failed to convince or impress those who might otherwise spend more because aid works (as in Sachs 2005) or less because aid doesn’t work often enough (Easterly 2003).
Why did this set me off? Well, in my book I argue that the “poles” of Sachs and Easterly in the development literature are not poles at all – they operate from the same assumptions about how development and globalization work, and I just spent 90,000 words’ worth of a book laying out those assumptions and why they are often wrong. In short, this whole report is operating from within the development echo chamber from which this blog takes its name. But then they really set me off:
In donor countries especially, faced with daunting fiscal and debt problems, there is new and healthy emphasis on value for money and on maximizing the impact of their aid spending.
Folks, yesterday I posted about how the desire to get “value for our money” in development was putting all the wrong pressures on agencies . . . not because value is bad, but because it puts huge pressure on development agencies to avoid risk (and the associated costs), which in turn chokes off innovation in their programs and policies. And here we have a report evaluating the quality of aid (their words) in terms of its cost-effectiveness. One of its four pillar analyses is agencies’ ability to maximize aid efficiency. This is nuts.
Again, it’s not that there should be no oversight of the funds or their uses, or no accountability for those uses. But to demand efficiency is largely to rule out high-risk efforts that could have huge returns but carry a significant risk of failure. Put another way, if this metric were applied to the Chilean mine rescue, the rescue would score low on efficiency because responders tried three methods at once and two failed. Of course, that overlooks the fact that they GOT THE MINERS OUT ALIVE. Same thing for development – give me an “inefficient” agency that can make transformative leaps forward in our understanding of how development works and how to improve the situation of the global poor over the “efficient” agency that never programs anything risky, and never makes those big leaps.
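To put some (entirely invented) numbers on that tradeoff, here is a toy sketch in Python – not anything from the report, just the logic of the argument. An agency that funds several risky attempts in parallel can come out ahead on expected impact even though most of those attempts fail, exactly as in the mine rescue, where only one shaft had to reach the miners:

```python
# Toy expected-impact comparison. All numbers are invented for
# illustration; this is not data or methodology from the report.

# "Efficient" agency: one safe program with a guaranteed modest payoff.
safe_cost, safe_impact = 3.0, 4.0  # spend 3 units, reliably get 4

# "Inefficient" agency: three parallel high-risk attempts, 1 unit each.
# Each attempt succeeds with probability 1/3; one success is enough
# (like the mine rescue, where a single working shaft saved everyone).
risky_cost = 3.0
p_success = 1 / 3
attempts = 3
payoff_if_any_succeeds = 30.0

# P(at least one success) = 1 - (2/3)^3, roughly 0.70
p_any = 1 - (1 - p_success) ** attempts
expected_impact = p_any * payoff_if_any_succeeds  # roughly 21.1

print(f"Safe agency:  cost {safe_cost}, impact {safe_impact}")
print(f"Risky agency: cost {risky_cost}, expected impact {expected_impact:.1f}")
# Two of three attempts fail on average ("inefficient!"), yet the
# expected impact is roughly five times the safe agency's.
```

An efficiency metric that counts the two failed shafts as waste would score the rescue badly; the same accounting dings a development agency for exactly the portfolio behavior that produces the big wins.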
Now, let’s look at the indicators – because they tell the same story. One of the indicators under efficiency is “Share of allocation to well-governed countries.” Think about the pressure that places on an agency deciding where to set up its programming. What about all of the poor, suffering people in poorly-governed countries? Is USAID not supposed to send massive relief to Haiti after an earthquake because its government is not all we might hope? This indicator either misses the whole point of development as a holistic, collaborative process of social transformation, or it is a thinly-veiled excuse to start triaging countries now.
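To make the perversity concrete, here is a minimal sketch (again in Python, with a made-up portfolio and an assumed governance cutoff – the report’s actual formula and data are not reproduced here) of how a share-based indicator like this behaves:

```python
# Hypothetical illustration of a "share of allocation to well-governed
# countries" indicator. Countries, scores, and the 0.5 cutoff are all
# invented; this is not the report's actual methodology.

def share_well_governed(portfolio, threshold=0.5):
    """Fraction of total aid going to countries at or above the threshold."""
    total = sum(aid for _, _, aid in portfolio)
    good = sum(aid for _, gov, aid in portfolio if gov >= threshold)
    return good / total

# (country, governance score 0-1, aid in $M)
with_relief = [
    ("Country A", 0.8, 100),
    ("Country B", 0.7, 50),
    ("Country C", 0.3, 150),  # poorly governed, but just hit by a disaster
]
without_relief = [row for row in with_relief if row[0] != "Country C"]

print(share_well_governed(with_relief))     # 0.5 -> dinged for the relief effort
print(share_well_governed(without_relief))  # 1.0 -> "higher quality" aid?
```

On this kind of measure, the fastest way to improve your score is to walk away from the places where governance – and usually everything else – has collapsed.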
They should know better – Andrew Natsios is one of their fellows, and he has explained how these sorts of evaluation pressures choke an agency to death. Amusingly, they cite that very work in the report . . . almost completely at random, on page 31, for a point that has no real bearing on that section of the text. I wonder what he thinks of this report . . .
In the end, USAID comes out 126th of 130 agencies evaluated for “maximizing efficiency.” Thank heavens. It probably means we still have some space left to experiment and fail. Note that among the top 20% of donors, the highest scores went to the World Bank and the UN agencies, arguably the groups that do the least direct programming on the ground – the “inefficiencies” of their work are captured elsewhere, when the policies and programs they set up for others to run begin to come apart. The same could be said of the Millennium Challenge Corporation here in the US, which also scored high. In other words, the report rewards the agencies that don’t actually do all that much on the ground for their efficiency, while the agencies that have to deal with the uncertainties of real life get dinged for it.
And the Germans ended up ranking high, but hey, nothing goes together like Germans and efficiency. That one’s for you, Daniel Esser.
What a mess of a report . . . and what a mess this will cause in the press, in Congress, etc. For no good reason.