Measurement matters . . .

Todd Moss at the Center for Global Development has a post about Ghana and the Millennium Challenge Corporation (MCC).  He makes some good points about the purpose of MCC compacts and about whether it makes sense to re-up with Ghana in 2012 for a second compact (including the fact that Ghana has a lot of capital incoming from oil and a ready market for its debt, both of which seem to negate the need for continued grants).  But I was brought up short by one stunning statement:

Ghana is (suddenly) just barely “low income”.  A recent rebasing of its GDP found the country was 63% richer than everyone thought.  Ghana might still technically qualify for the MCC but the rationale for another huge compact drops pretty significantly.

Now, to be fair to Moss, he has an excellent post here on the implications of such rebasing.  Importantly, the second lesson he takes away from this sudden revaluation of Ghana’s economy is:

Boy, we really don’t know anything. Over the past thirty years Ghana has been one of the most scrutinized, measured, studied, picked-over economies in Africa. (yes, I too did my PhD on Ghana…) Yet, we were all taking as gospel a number that was off by a tremendous margin. If we are nearly two-thirds wrong on Ghana’s GDP, what hope can we possibly have in stats for Chad? Everyone knows that data is dubious, but this seems to add a whole new level of doubt.

His fourth point is closely related:

I’m still confused… but it probably doesn’t matter. The Reuters article quotes the government statistician as estimating GDP per capita at $1318 instead of $753. This doesn’t add up to the total GDP figures also given since this implies a 75% increase. If the $1318 is correct, then that either implies that the government thinks there are only 19.4 million people instead of the normal estimates of about 24 million. Or, if the total GDP number of $25.6 billion is right, then per capita GDP is really $1067 per capita. (I think I’m already violating my lesson from #2.)
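To make the mismatch Moss flags concrete, here is a quick back-of-envelope check, written as a minimal Python sketch.  It uses only the figures quoted above (the $25.6 billion total, the $753 and $1,318 per-capita figures, and the roughly 24 million population estimate); the arithmetic is mine, the numbers are Moss’s and Reuters’.

```python
# Back-of-envelope consistency check of the figures quoted above.
total_gdp = 25.6e9       # reported total GDP, in USD
old_per_capita = 753     # pre-rebasing GDP per capita, USD
new_per_capita = 1318    # per-capita figure attributed to the government statistician, USD
population = 24e6        # the "normal" population estimate Moss cites

# The quoted per-capita jump implies roughly a 75% increase, not 63%.
print(new_per_capita / old_per_capita - 1)   # ~0.75

# If $1,318 per capita is right, the implied population is only ~19.4 million.
print(total_gdp / new_per_capita / 1e6)      # ~19.4 (millions of people)

# If $25.6 billion and 24 million people are both right, GDP per capita is ~$1,067.
print(total_gdp / population)                # ~1066.67
```

However you slice it, at least one of the published numbers has to give.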

I have a chapter in my book dedicated to understanding why our measurements of the economy and environment in the Global South are mostly crap, and why, even when the data is firm, it often does not capture the dynamics we think it does.  I then spend a few chapters suggesting what to do about it (including respatializing data and data collection so that they can be organized into spatial units that have social, economic, and ecological meaning, and using basic crowdsourcing techniques both to collect data and to ground-truth existing statistics).  Even better, all of this is rooted in a discussion of Ghana’s economy.  I give Moss credit for being willing to point out the confusing numbers, and to acknowledge that they confuse him.  They should.
But Moss gets it totally wrong here:

Ghana has long aspired to be a middle-income country by 2020, and this now seems like it will happen many years early. Accra certainly feels like a middle-income city.

This statement explains how he can label Ghana “barely low-income” even after he has called into question the very statistics that make such a claim possible: he’s focused on Accra.  Accra has very little to do with how the bulk of the Ghanaian population lives – and most of that population is very, very poor.  Ghana is not barely low income – it is still quite low income, with some pockets of extreme wealth starting to distort the national statistics.  It doesn’t matter how Accra feels – that city is home to at best 10% of the population.  Kumasi is home to perhaps another 5-8%.  Generously including Tamale and Takoradi in the middle-income city category (and this is very generous) nets you probably 25% of the population – nobody else is living in a middle-income country.
Like Moss, I did my dissertation work in Ghana.  I still work there.  The difference is that I did my work in rural villages, and still do.  $1 a day beyond subsistence is a common income in the rural areas of the Central Region, even now – and the Central Region has a lot more infrastructure than most of the Northern, Upper East, and Upper West Regions.  This population remains poorly educated, failed by poor rural schools.  It cannot support a transformation of the Ghanaian economy.  Most of Ghana is still very low income, not ready for any sort of sustained economic growth.  The country has seen enormous success in recent years – I am stunned by what I have seen in the past 13 years – but the fruits of that success are not distributed evenly.  While the cities have boomed, the villages are nearly unchanged.  This is Ghana’s new challenge: to spread this new wealth out and foster a diverse, resilient economy.
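For what it’s worth, the arithmetic behind that 25% figure is simple enough to sketch.  The shares below are the generous upper bounds argued above, not census data, and the 24 million population estimate is the one from Moss’s quote:

```python
# Illustrative only: generous upper-bound shares of Ghana's population living in
# the cities that "feel" middle income, as argued above (not census figures).
population = 24e6              # national population estimate quoted by Moss
accra_share = 0.10             # "at best 10%"
kumasi_share = 0.08            # upper end of the 5-8% range
tamale_takoradi_share = 0.07   # generous allowance for Tamale and Takoradi

urban_share = accra_share + kumasi_share + tamale_takoradi_share
print(urban_share)                            # ~0.25
print((1 - urban_share) * population / 1e6)   # ~18 million people outside these cities
```

That still leaves roughly three-quarters of the country, on the order of 18 million people, living somewhere that does not remotely feel middle income.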
This is not to say that an MCC compact is the right tool to foster that transformation, or that Ghana is the best place to be putting MCC money.  However, declaring “success” too soon creates its own set of risks – let’s use some nuance when considering how a country is doing, so we can identify the real challenges to overcome and the real successes to build on moving forward.

Are we really that bad?

So, the Center for Global Development, a non-partisan think tank focused on reducing poverty and making globalization work for the poor (a paraphrase of their mission statement, which can be found here), has issued a report that more or less says the quality and effectiveness of USAID’s aid are very low compared to those of other agencies.
Well, I’m not all that freaked out by this assessment, principally because it fails to ask important questions about development needs and development outcomes.  In fact, the entire report is rigged – not intentionally, mind you, but, I suspect, out of a basic ignorance of the differences between the agencies being evaluated and an odd (mis)understanding of what development is.
For me, the most telling point in the report came right away, on pages 3 and 4:

Given these difficulties in relating aid to development impact on the ground, the scholarly literature on aid effectiveness has failed to convince or impress those who might otherwise spend more because aid works (as in Sachs 2005) or less because aid doesn’t work often enough (Easterly 2003).

Why did this set me off?  Well, in my book I argue that the “poles” of Sachs and Easterly in the development literature are not poles at all – they operate from the same assumptions about how development and globalization work, and I just spent a 90,000-word book laying out those assumptions and why they are so often wrong.  In short, this whole report operates from within the development echo chamber from which this blog takes its name.  But then they really set me off:

In donor countries especially, faced with daunting fiscal and debt problems, there is new and healthy emphasis on value for money and on maximizing the impact of their aid spending.

Folks, yesterday I posted about how the desire to get “value for our money” in development puts all the wrong pressures on agencies . . . not because value is bad, but because the demand for it pushes development agencies to avoid risk (and the associated costs), which in turn chokes off innovation in their programs and policies.  And here we have a report evaluating the quality of aid (their words) in terms of its cost-effectiveness.  One of its four pillar analyses is the ability of agencies to maximize aid efficiency.  This is nuts.
Again, it’s not that there should be no oversight of the funds or their uses, or no accountability for those uses.  But to demand efficiency is to largely rule out high-risk efforts that could have huge returns but carry a significant chance of failure.  Put another way, if this metric were applied to the Chilean mine rescue, the rescue would score low for efficiency because the team tried three methods at once and two failed.  Of course, that overlooks the fact that they GOT THE MINERS OUT ALIVE.  Same thing for development – give me an “inefficient” agency that can make transformative leaps forward in our understanding of how development works and how to improve the situation of the global poor over the “efficient” agency that never programs anything risky, and never makes those big leaps.
Now, let’s look at the indicators – because they tell the same story.  One of the indicators under efficiency is “Share of allocation to well-governed countries.”  Think about the pressure that places on an agency that has to think about where to set up its programming.  What about all of the poor, suffering people in poorly-governed countries?  Is USAID not supposed to send massive relief to Haiti after an earthquake because its government is not all we might hope?  This indicator either misses the whole point of development as a holistic, collaborative process of social transformation, or it is a thinly-veiled excuse to start triaging countries now.
They should know better – Andrew Natsios is one of their fellows, and he has explained how these sorts of evaluation pressures choke an agency to death.  Amusingly, they cite this work here . . . almost completely at random, on page 31, for a point that has no real bearing on that section of the text.  I wonder what he thinks of this report . . .
In the end, USAID comes out 126th of the 130 agencies evaluated for “maximizing efficiency.”  Thank heavens.  It probably means that we still have some space left to experiment and fail.  Note that among the top 20% of donors, the highest scores went to the World Bank and UN agencies, arguably the groups that do the least direct programming on the ground – in other words, the “inefficiencies” of their work are captured elsewhere, when the policies and programs they set up for others to run begin to come apart.  The same could be said of the Millennium Challenge Corporation here in the US, which also scored high.  In other words, the report rewards the agencies that don’t actually do all that much on the ground for their efficiency, while the agencies that actually have to deal with the uncertainties of real life get dinged for it.
And the Germans ended up ranking high, but hey, nothing goes together like Germans and efficiency.  That one’s for you, Daniel Esser.
What a mess of a report . . . and what a mess this will cause in the press, in Congress, etc.  For no good reason.