Bill Gates, in his annual letter, makes a compelling argument for the need to better measure the effectiveness of aid. There is a nice one-minute summary video here. This is becoming a louder and louder message in development and aid, pushed now by folks ranging from Raj Shah, the Administrator of USAID, to most everyone at the Center for Global Development. There are interesting debates going on about how to shift from a focus on outputs (we bought this much stuff for this many dollars) to a focus on impacts (the stuff we bought did the following good things in the world). Most of these discussions are technical, focused on indicators and methods. What is not discussed is the massively failure-averse institutional culture of development donors, and how this culture is driving most of these debates. As a result, I think that Gates squanders his bully pulpit by arguing that we should be working harder on evaluation. We all know that better evaluation would improve aid and development. Suggesting that this is even a serious debate in development requires a nearly-nonexistent straw man who somehow thinks that learning from our programs and projects is bad.
Like most everyone else in the field, I agree with the premise that better measurement (thought of very broadly, to include methods and data across the quantitative-to-qualitative spectrum) can create a learning environment from which we might make better decisions about aid and development. But none of this matters if all of the institutional pressures run against hearing bad news. Right now, donors simply cannot tolerate bad news, even in the name of learning. Certainly, there are lots of people within the donor agencies who are working hard on finding ways to better evaluate and learn from existing and past programs, but these folks are going to be limited in their impact as long as agencies such as USAID answer to legislators who seem ready to declare any misstep a waste of taxpayer money, and therefore a reason to cut the aid budget…so how can they talk about failure?
So, a modest proposal for Bill Gates. Bill (may I call you Bill?), please round up a bunch of venture capitalists. Not the nice socially-responsible ones (who could be dismissed as bleeding-heart lefties or something of the sort), the real red-in-tooth-and-claw types. Bring them over to DC, and parade out these enormously wealthy, successful (by economic standards, at least) people, and have them explain to Congress how they make their money. Have them explain how they got rich failing on eight investments out of ten, because the last two investments more than paid for the cost of the eight failures. Have them explain how failure is a key part of learning, of success, and how sometimes failure isn’t the fault of the investor or donor – sometimes it is just bad luck. Finally, see if anyone is interested in taking a back-of-the-envelope shot at calculating how much impact is lost due to risk-averse programming at USAID (or any other donor, really). You can shame Congress, who might feel comfortable beating up on bureaucrats, but not so much on economically successful businesspeople. You could start to bring about the culture change needed to make serious evaluation a reality. The problem is not that people don’t understand the need for serious evaluation – I honestly don’t know anyone making that argument. The problem is creating a space in which that can happen. This is what you should be doing with your annual letter, and with the clout that your foundation carries.
Failing that (or perhaps alongside that), lead by demonstration – create an environment in your foundation in which failure becomes a tag attached to anything from which we do not learn, instead of a tag attached to a project that does not meet preconceived targets or outcomes. Forget charter cities (no, really, forget them), become the “charter donor” that shows what can be done when this culture is instituted.
The evaluation agenda is getting stale, running aground on the rocky shores of institutional incentives. We need someone to pull it off the rocks. Now.
So, the Center for Global Development, a non-partisan think tank focused on reducing poverty and making globalization work for the poor (a paraphrase of their mission statement, which can be found here), has issued a report that more or less says that the quality and effectiveness of USAID's aid are very low when compared to those of other agencies.
Well, I’m not all that freaked out by this assessment, principally because it fails to ask important questions relevant to understanding development needs and development outcomes. In fact, the entire report is rigged – not intentionally, mind you, but, I suspect, out of a basic ignorance of the differences between the agencies being evaluated, and an odd (mis)understanding of what development is.
For me, the most telling point in the report came right away, on pages 3 and 4:
Given these difficulties in relating aid to development impact on the ground, the scholarly literature on aid effectiveness has failed to convince or impress those who might otherwise spend more because aid works (as in Sachs 2005) or less because aid doesn’t work often enough (Easterly 2003).
Why did this set me off? Well, in my book I argue that the “poles” of Sachs and Easterly in the development literature are not poles at all – they operate from the same assumptions about how development and globalization work, and I just spent 90,000 words worth of a book laying out those assumptions and why they are often wrong. In short, this whole report is operating from within the development echo chamber from which this blog takes its name. But then they really set me off:
In donor countries especially, faced with daunting fiscal and debt problems, there is new and healthy emphasis on value for money and on maximizing the impact of their aid spending.
Folks, yesterday I posted about how the desire to get “value for our money” in development was putting all the wrong pressures on agencies . . . not because value is bad, but because it puts huge pressures on the development agencies to avoid risk (and associated costs), which in turn chokes off innovation in their programs and policies. And here we have a report, evaluating the quality of aid (their words) in terms of its cost-effectiveness. One of their four pillar analyses is the ability of agencies to maximize aid efficiency. This is nuts.
Again, it’s not that there should be no oversight of the funds or their uses, or that there should be no accountability for those uses. But to demand efficiency is to largely rule out high-risk efforts that could have huge returns but carry a significant risk of failure. Put another way, if this metric were applied to the Chilean mine rescue, it would score low for efficiency because they tried three methods at once and two failed. Of course, that overlooks the fact that they GOT THE MINERS OUT ALIVE. Same thing for development – give me an “inefficient” agency that can make transformative leaps forward in our understanding of how development works and how to improve the situation of the global poor over the “efficient” agency that never programs anything of risk, and never makes those big leaps.
Now, let’s look at the indicators – because they tell the same story. One of the indicators under efficiency is “Share of allocation to well-governed countries.” Think about the pressure that places on an agency that has to think about where to set up its programming. What about all of the poor, suffering people in poorly-governed countries? Is USAID not supposed to send massive relief to Haiti after an earthquake because its government is not all we might hope? This indicator either misses the whole point of development as a holistic, collaborative process of social transformation, or it is a thinly-veiled excuse to start triaging countries now.
They should know better – Andrew Natsios is one of their fellows, and he has explained how these sorts of evaluation pressures choke an agency to death. Amusingly, they cite this work here . . . almost completely at random on page 31, for a point that has no real bearing on that section of the text. I wonder what he thinks of this report . . .
In the end, USAID comes out 126th of 130 agencies evaluated for “maximizing efficiency.” Thank heavens. It probably means that we still have some space left to experiment and fail. Note that among the top 20% of donors, the highest scores went to the World Bank and UN agencies, arguably the groups that do the least direct programming on the ground – in other words, the “inefficiencies” of their work are captured elsewhere, when the policies and programs they set up for others to run begin to come apart. The same could be said of the Millennium Challenge Corporation here in the US, which also scored high. In other words, the report rewards for their efficiency the agencies that don’t actually do all that much on the ground, while the agencies that actually have to deal with the uncertainties of real life get dinged for it.
And the Germans ended up ranking high, but hey, nothing goes together like Germans and efficiency. That one’s for you, Daniel Esser.
What a mess of a report . . . and what a mess this will cause in the press, in Congress, etc. For no good reason.