Entries tagged with “Center for Global Development”.


CGD has an interesting short essay up, written by Matthew Darling, Saugato Datta, and Sendhil Mullainathan, entitled "The Nature of the BEast: What Behavioral Economics Is Not." The piece aims to dispel a few myths about behavioral economics while offering a quick summary of what the field is and what its goals are. I've been looking around for a good short primer on BE, and so I had high hopes for this piece…unfortunately, it did not live up to those expectations, for two reasons.

First, the authors tie themselves in a strange knot as they try to argue that behavioral economics is not about controlling behavior. While they note that BE studies and tools could be used to nudge human behavior in particular directions, they argue that "What distinguishes the behavioral toolset [from those of marketers, for example], however, is that so many of the tools are about helping people to make the choices that they themselves want to make." This claim sidesteps a very important question: how do we know what choices people want to make? What we see as problematic livelihood outcomes might not, in fact, be all that problematic to those living them, and indeed might have local rationales that are quite reasonable. While this might seem an obvious point, most BE work I have seen rests on a near-total lack of understanding of why those under investigation engage in the behaviors that "require explanation." The claim that BE helps people make the choices they want to make is therefore rather paternalistic, in that the determination of what choices people want to make rests not with those people, but with the behavioral economist. Sadly, this is a fairly accurate representation of much work done under the heading of BE. It would have been better if the authors had simply pointed out that BE is no more obsessed with incentives than any other part of economics, and that if people are worried about behavioral control, they'd best have a look at the US (or their own national) tax code and focus their anxiety there.

Second, the authors argue that "Behavioral economics differs from standard economics in that it uses a more realistic (and more complicated) model for people [and their decisions]." Honestly, I have seen no evidence of a coherent model of humans or their behavior in BE. What I have seen is a lot of rigorous data collection, the results of which are then shoehorned into some sort of implicit explanatory framework laden with unexamined assumptions that generally do not hold in the real world. Rigorously identifying when particular stimuli result in different behaviors is not the same thing as explaining how those stimuli bring about those behaviors. BE is rather good at the former, and not very good at all at the latter. The authors are right – we need more realistic and complicated models of human decision-making, and there are some out there (for example, see here and here – email me if you need a copy of either .pdf). Behavioral economists would do well to read outside their discipline if they are serious about this goal. Several disciplines (for example, anthropology, geography, some aspects of sociology and social history) have long operated with complex framings of human behavior, and have already derived many of the lessons that BE is just now (re)discovering. In this light, then, this short paper does show us what BE isn't: it isn't anthropology, geography, or any other social science that has already engaged the same questions, but with more complex framings of human behavior and more rigorous interpretations of observed outcomes. And if it isn't that, what exactly is the point of this field of inquiry?

Bill Gates, in his annual letter, makes a compelling argument for the need to better measure the effectiveness of aid.  There is a nice one-minute summary video here.  This is becoming a louder and louder message in development and aid, pushed now by folks ranging from Raj Shah, the Administrator of USAID, to most everyone at the Center for Global Development.  There are interesting debates going on about how to shift from a focus on outputs (we bought this much stuff for this many dollars) to a focus on impacts (the stuff we bought did the following good things in the world).  Most of these discussions are technical, focused on indicators and methods.  What is not discussed is the massively failure-averse institutional culture of development donors, and how this culture is driving most of these debates.  As a result, I think that Gates squanders his bully pulpit by arguing that we should be working harder on evaluation. We all know that better evaluation would improve aid and development. Suggesting that this is even a serious debate in development requires inventing a straw man who somehow thinks that learning from our programs and projects is bad.

Like most everyone else in the field, I agree with the premise that better measurement (understood very broadly, to include methods and data across the quantitative-to-qualitative spectrum) can create a learning environment from which we might make better decisions about aid and development. But none of this matters if all of the institutional pressures run against hearing bad news. Right now, donors simply cannot tolerate bad news, even in the name of learning. Certainly, there are lots of people within the donor agencies who are working hard on finding ways to better evaluate and learn from existing and past programs, but these folks are going to be limited in their impact as long as agencies such as USAID answer to legislators who seem ready to declare any misstep a waste of taxpayer money, and therefore a reason to cut the aid budget…so how can they talk about failure?

So, a modest proposal for Bill Gates. Bill (may I call you Bill?), please round up a bunch of venture capitalists. Not the nice socially-responsible ones (who could be dismissed as bleeding-heart lefties or something of the sort), but the real red-in-tooth-and-claw types.  Bring them over to DC, parade out these enormously wealthy, successful (by economic standards, at least) people, and have them explain to Congress how they make their money. Have them explain how they got rich failing on eight investments out of ten, because the last two investments more than paid for the cost of the eight failures. Have them explain how failure is a key part of learning, of success, and how sometimes failure isn't the fault of the investor or donor – sometimes it is just bad luck. Finally, see if anyone is interested in taking a back-of-the-envelope shot at calculating how much impact is lost to risk-averse programming at USAID (or any other donor, really).  You can shame Congress, which might feel comfortable beating up on bureaucrats, but not so much on economically successful businesspeople.  You could start to bring about the culture change needed to make serious evaluation a reality. The problem is not that people don't understand the need for serious evaluation – I honestly don't know anyone making that argument.  The problem is creating a space in which it can happen. This is what you should be doing with your annual letter, and with the clout that your foundation carries.
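
To make the venture math concrete, here is a minimal sketch with invented figures (nothing below comes from an actual portfolio); it simply shows how eight failures out of ten can still add up to a very good business:

```python
# Hypothetical 10-investment portfolio: 8 total losses, 2 big exits.
# All figures are invented for illustration, not real VC data.

investments = [1_000_000] * 10                 # $1M staked in each venture
returns = [0] * 8 + [8_000_000, 12_000_000]    # 8 failures, 2 large exits

total_staked = sum(investments)
total_returned = sum(returns)

print(f"Staked:   ${total_staked:,}")
print(f"Returned: ${total_returned:,}")
print(f"Net gain: ${total_returned - total_staked:,}")
# Net gain of $10M on $10M staked, despite an 80% failure rate.
```

An agency that "wastes" money on eight failed projects but learns enough to produce two transformative successes is playing exactly this game.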

Failing that (or perhaps alongside that), lead by demonstration – create an environment in your foundation in which failure becomes a tag attached to anything from which we do not learn, instead of a tag attached to a project that does not meet preconceived targets or outcomes.  Forget charter cities (no, really, forget them), and become the "charter donor" that shows what can be done when this culture is instituted.

The evaluation agenda is getting stale, running aground on the rocky shores of institutional incentives. We need someone to pull it off the rocks.  Now.

Ben Leo at ONE.org (formerly of CGD) put forth an intriguing proposal recently on Huffington Post Impact: It's Time to Ask the World's Poor What They Really Want.  In short, Ben argues that the current top-down definition of development goals, no matter how well-intentioned, is unlikely to reflect the views of the people those goals are meant to benefit.

Hear, hear.  I made a similar point in Delivering Development. Actually, that was more or less one of the main points of the book.  See also my articles here and here.

But I am concerned that Leo is representing this effort a little too idealistically.  Just because we decide to ask people what they want doesn't mean that we will really find out what they want.  Getting to this sort of information has everything to do with asking the right questions in the right way – there is no silver bullet for participation that will ensure everyone's voice is heard.  To that end, what worries me is that Ben does not explain how ONE plans to develop the standardized survey it will put out there, or how exactly it will administer that survey.  So, here are a few preliminary questions for Ben and the ONE team:

1)   Does a standardized survey make sense? Given the very different challenges that people face around the world, and the highly variable capacity of people to deal with those challenges, it seems to me that going standardized is going to result in one of two outcomes: either you ask focused questions that only partially capture the challenges facing most people, or you ask really general questions that basically capture the suite of challenges we see globally, but do so in a manner that is so vague as to be unactionable.  How will ONE thread this needle?

2)   Who is designing the survey? To my point above, the questions asked determine who will answer them, and therefore what you will learn.  While the information gleaned from this sort of survey is likely to be very interesting, it is not the same thing as an open participatory process – full participation includes defining the questions, not just the answers.  Indeed, I would suggest that ONE needs to ditch the term participatory here, as in the end I fear it will be misleading.

3)   How will you administer the survey? Going out with enumerators takes a lot of time and money, and is subject to "investigator bias" – that is, the simple problem that some enumerators will do their job in a different manner than others, thus getting you different kinds/qualities of answers to the same questions.  On the other hand, if you are reliant on mobile technology, how will you incentivize those rural populations with mobile handsets to participate?  If you can't, you will end up with a highly unrepresentative sample (those without handsets will be excluded entirely), making the results far less useful.
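
To illustrate that last concern, here is a toy simulation (all numbers invented) of what happens when only the relatively wealthy, handset-owning part of a population can answer a survey:

```python
import random

random.seed(42)

# Hypothetical village of 1,000 people with lognormally distributed incomes.
# Crude assumption for illustration: only the wealthier half own handsets.
population = [random.lognormvariate(0, 1) for _ in range(1000)]
mobile_owners = [income for income in population if income > 1.0]

true_mean = sum(population) / len(population)
sample_mean = sum(mobile_owners) / len(mobile_owners)

print(f"True mean income:            {true_mean:.2f}")
print(f"Mobile-only sample estimate: {sample_mean:.2f}")
# The mobile-only estimate runs well above the true mean because the
# poorest, phoneless residents never appear in the sample at all.
```

The same logic applies to the questions themselves: whatever matters most to the excluded group simply never shows up in the results.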

This is not to dismiss the effort Ben is spearheading – indeed, it is fantastic to see a visible organization make this argument and take concrete steps to actually get the voices of the global poor into the agenda-setting exercises.  However, this is not a participatory process – it is, instead, an information-driven process (which is good) that is largely shaped by the folks at ONE in the name of the global poor.  If ONE wants this to be more than information-driven, it needs to think about how it is going to let a representative sample of the global poor define the questions as well as the answers.  That is no easy task.

In all sincerity, I am happy to talk this through with anyone who is interested – I do think it is a good idea in principle, but execution is everything if you want it to be more than a publicity stunt…

Marc Bellemare at Duke has been using Delivering Development in his development seminar this semester.  On Friday, he was kind enough to blog a bit about one of the things he found interesting in the book: the finding that women were more productive than men on a per-hectare basis.  As Marc notes, this runs contrary to most assumptions in the agricultural/development economics literature, especially some rather famous work by Chris Udry:

Whereas one would expect men and women to be equally productive on their respective plots within the household, Udry finds that in Burkina Faso, men are more productive than women at the margin when controlling for a host of confounding factors.

This is an important finding, as it speaks to our understanding of inefficiency in household production . . . which, as you might imagine given Udry's findings, is often assumed to be a problem of men farming too little land and women farming a bit too much.  So Marc was a bit taken aback to read that in coastal Ghana the situation is actually reversed – women are more productive than men per unit area of land, and therefore to achieve optimal distributions of agricultural resources (read: land) in these households we would actually have to shift land out of men's production into women's production.

I knew that this finding ran contrary to Udry and some other folks, but I did not think it was that big a deal: Udry worked in the Sahel, which is quite a different environment and agroecology from coastal Ghana.  Further, he worked with folks of a totally different ethnicity, engaged with different markets.  In short, I chalked the difference between our findings up to the convergence of any number of factors that had played out somewhat differently in my research context.  I certainly don't see my findings as generalizable much beyond Akan-speaking peoples living in rural parts of Ghana . . .

All of that said, Marc points out that with regard to my findings:

Of course, this would need to be subjected to the proper empirical specification and to a battery of statistical tests . . .

Well, that is an interesting point.  So, a bit of transparency on my data (it is pretty transparent in my refereed pubs, but the book didn't wade into all of that), with a sketch of what such a specification might look like after the lists below:

Weaknesses:

  • The data was gathered during the main rainy season, typically as the harvest was just starting to come in.  This required folks to project the productivity of their fields at least a month, and often several months, into the future
  • The income figures for each crop, and therefore for total agricultural productivity, were self-reported. I was not able to cross-check these reported figures by counting the actual amount of crop coming off each farm.
    • I also gathered information on expenses, and when I totaled up expenses and subtracted them from reported income, every household in the village was running in the red.  I know that is not true, having lived there for some 18 months of my life.
    • There is no doubt in my mind that production figures were underestimated, and expenses overestimated, in my data – this fits into patterns of income reporting among the Akan that are seen elsewhere in the literature.
    • Therefore, you cannot trust the reported figures as accurate absolute measures of farm productivity.

Strengths:

  • The data was replicated across three field seasons.  In the first two field seasons, I conducted all data collection with my research assistant.  However, in the final year of data collection, I led a team of four interviewers from the University of Cape Coast, who worked with local guides to identify farms and farmers to interview – in that last year, we interviewed every willing farmer in the village (nearly 100% of the population).
    • It turns out that my snowball sample of households in the first two years of data collection actually covered the entire universe of households operating under non-exceptional household circumstances (i.e. they are not samples, they are reports on the activities of the population).
      • In other words, you don’t have to ask about my sampling – there was no sampling.  I just described the activities of the entire relevant population in all three years.
      • This removes a lot of concerns people have about the size of my samples – some household strategies only had 7 or 8 households working with them in a given year, which makes statistical work a little tricky :)  Well, turns out there is no real need for stats, as this is everyone!
      • The only exception to this: female-headed households.  I grossly under-interviewed them in years 1 and 2 (inadvertently), and the women I did interview do not appear to be representative of all female-headed households.  I therefore can only make very limited claims about trends in these households.
    • Even with completely new interviewers who had no preconceived notions about the data, the income findings came in roughly the same as when I gathered the data. That’s replicability, folks! Well, at least as far as qualitative social science gets in a dynamic situation.
    • Though the data was gathered at only one point in the season, at that point farmers were already seeing how the first wave of the harvest was doing and could make reasonable projections about the rest of the harvest.
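
As for the "proper empirical specification" Marc mentions, here is a minimal sketch of the shape such a test might take: a regression of log per-hectare income on a female-manager dummy with household fixed effects, roughly analogous to Udry's within-household comparison.  The file and variable names are hypothetical placeholders, not my actual data or analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder file and column names, standing in for the village data
# described above; one row per farm per year.
df = pd.read_csv("village_farms.csv")

# Log income per hectare on a female-manager dummy, controlling for plot
# size and year, with household fixed effects and errors clustered by
# household (a rough analogue to Udry's within-household comparison).
model = smf.ols(
    "np.log(income_per_ha) ~ female_manager + plot_ha + C(year) + C(household_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["household_id"]})

print(model.summary())  # a positive female_manager coefficient would match my finding
```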

I’m probably forgetting other problems and answers . . . Marc will remind me, I’m sure!  In any case, though, Marc asks a really interesting question at the end of his post:

Assuming the finding holds, it would be interesting to compare the two countries given that Burkina Faso and Ghana share a border. Is the change in gender differences due to different institutions? Different crops?

The short answer, for now, has to be a really unsatisfying "I don't know."  Delivering Development lays out in relatively simple terms a really complex argument I have been building for some time about livelihoods: that they are motivated by, and optimized with reference to, a lot more than material outcomes.  The book builds a fairly simple explanation for how men balanced the need to remain in charge of their households with the need to feed and shelter those households . . . but I have elaborated on this in a piece in review at Development and Change.  I will send the journal an email to figure out where the piece is in review – last I heard, they had been struggling mightily to find reviewers (they had gone through 13!?!) – and will put up a preprint as soon as I am able.  This is relevant here because I would need a lot more information about the Burkina setting to work through my new livelihoods framework before I could answer Marc's question.

Stay tuned!

 

Charles Kenny’s* book Getting Better has received quite a bit of attention in recent months, at least in part because Bill Gates decided to review it in the Wall Street Journal (up until that point, I thought I had a chance of outranking Charles on Amazon, but Gates’ positive review buried that hope).  The reviews that I have seen (for example here, here and here) cast the book as a counterweight to the literature of failure that surrounds development, and indeed Getting Better is just that.  It’s hard to write an optimistic book about a project as difficult as development without coming off as glib, especially when it is all too easy to write another treatise that critiques development in a less than constructive way.  It’s a challenge akin to that facing the popular musician – it’s really, really hard to convey joy in a way that moves the listener (I’m convinced this ability is the basis of Bjork’s career), but fairly easy to go hide in the basement for a few weeks, pick up a nice pallor, tune everything a step down, put on a t-shirt one size too small and whine about the girlfriend/boyfriend that left you.

Much of the critical literature on development raises important challenges to development practice and thought, but does so in a manner that makes addressing those challenges very difficult (if not intentionally impossible).  For example, deep (and important) criticisms of development anchored in poststructural understandings of discourse, meaning and power (for example, Escobar’s Encountering Development and Ferguson’s The Anti-Politics Machine) emerged in the early and mid-1990s, but their critical power was not tied in any way to a next step . . . which eventually undermined the critical project.  It also served to isolate academic development studies from the world of development practice in many ways, as even those working in development who were open to these criticisms could find no way forward from them.  Tearing something down is a lot easier than building something new from the rubble.

While Getting Better does not reconstruct development, its realistically grounded optimism provides what I see as a potential foundation for a productive rethinking of efforts to help the global poor.  Kenny chooses to begin from a realistic grounding, where Chapters 2 and 3 of the book present us with the bad news (global incomes are diverging) and the worse news (nobody is really sure how to raise growth rates).  But, Kenny answers these challenges in three chapters that illustrate ways in which things have been improving over the past several decades, from sticking a fork in the often-overused idea of poverty traps to the recognition that quality of life measures appear to be converging globally.  This is more than a counterweight to the literature of failure – this book is a counterweight to the literature of development that all-too-blindly worships growth as its engine.  In this book, Kenny clearly argues that growth-centric approaches to development don’t seem to be having the intended results, and growth itself is extraordinarily difficult to stimulate . . . and despite these facts, things are improving in many, many places around the world.   This opens the door to question the directionality of causality in the development and growth relationship: is growth the cause of development, or its effect?

Here, I am pushing Kenny's argument beyond its overtly stated purpose in the book. Kenny doesn't directly take on a core issue at the heart of development-as-growth: can we really guarantee 3% growth per year for everyone forever?  But at the same time, he illustrates that development is occurring in contexts where there is little or no growth, suggesting that we can delink the goal of development from the impossibility of endless growth.  If ever there were a reason to be an optimist about the potential for development, this delinking is it.

I feel a great kinship with this book, in its realistic optimism.  I also like the lurking sense of development as a catalyst for change, as opposed to a tool or process by which we obtain predictable results from known interventions.  I did find Getting Better's explanations for social change to rest a bit too heavily on a simplistic diffusion of ideas, a rather exogenous explanation of change that anthropology and geography largely abandoned back in the structure-functionalist era of the 1940s and 50s.  More generally, the book does not really dig into "the social."  For example, Kenny's discussion of randomized control trials for development (RCT4D), like the RCT4D literature itself, is preoccupied with "what works" without really diving into an exploration of why the things that worked played out so well.  To be fair to Kenny, his discussion was not focused on explanation, but on illustrating that some things we do in development do indeed make things better in some measurable way.  I also know that he understands that "what works" is context-specific . . . as indeed is the very definition of "works."  However, why these things work and how people define success is critical to understanding whether they are just anecdotes of success in a sea of failure, or replicable findings that can help us to better address the needs of the global poor.  In short, without an exploration of social process, it is not clear from these examples and this discussion that things are really getting better.

An analogy to illustrate my point – while we have very good data on rainfall over the past several decades in many parts of West Africa that illustrate a clear downward trend in overall precipitation, and some worrying shifts in the rainy seasons (at least in Ghana), we do not yet have a strong handle on the particular climate dynamics that are producing these trends.  As a result, we cannot say for certain that the trend of the past few decades will continue into the future – because we do not understand the underlying mechanics, all we can do is say that it seems likely, given the past few decades, that this trend will continue into the future.  This problem suggests a need to dig into such areas as atmospheric physics, ocean circulation, and land cover change to try to identify the underlying drivers of these observed changes to better understand the future pathways of this trend.  In Getting Better (and indeed in the larger RCT4D literature), we have a lot of trends (things that work), but little by way of underlying causes that might help us to understand why these things worked, whether they will work elsewhere, or if they will work in the same places in the future.

In the end, I think Getting Better is an important counterweight to both the literature of failure and a narrowly framed idea of development-as-growth.  My minor grumbles amount to a wish that this counterweight was heavier.  It is most certainly worth reading, and it is my hope that its readers will take the book as a hopeful launching point for further explorations of how we might actually achieve an end to global poverty.

 

*Full disclosure: I know Charles, and have had coffee with him in his office discussing his book and mine.  If you think that somehow that has swayed my reading of Getting Better, well, factor that into your interpretation of my review.


I had the good fortune to be invited to a presentation by Andy Sumner at the Center for Global Development on Thursday – a senior staff lunch presentation, actually.  So CGD was very kind in having me along.  I really enjoyed the atmosphere – it was nice to be back around a room full of very smart people who spend a lot of time thinking about the issue of development, and who clearly enjoy pushing each other and the ideas in the room.  Andy had a small novel’s worth of comments to consider by the end, but it was a really constructive pile of ideas.

Andy has come to a bit of fame recently for pointing out that what Collier called The Bottom Billion, really poor people more or less trapped in a few dozen very poor countries, no longer really works as a description of the world (his paper is here).  If that bottom billion existed in the late 1990s when Collier was writing, today there is a new bottom billion living in middle income countries (MICs) – indeed, the majority of the world's very poor are found in MICs.  The discussion around the presentation ranged from the issues of data and method that led to this conclusion to wider policy concerns about whether this shift signals the end of grant-based aid – whether it will be politically infeasible to give (as opposed to lend) money to middle income countries (some of which have large cash reserves) for poverty alleviation, such that aid to the very poor will have to shift to market-based lending.

I walked away from the presentation and discussion struck by something else: the term Middle Income Country is pointless.  If Angola is a middle income country, and Ghana is about to be reclassified as such because of its new oil revenues, we might as well just chuck the typology.  While Gini coefficient data (a measure of income inequality within a country) is tough to come by right now, it seems to me that a lot of the countries that have recently made the jump to middle income, yet still house a tremendous number of the "bottom billion" (e.g., India, China, Nigeria, and Indonesia), are clearly making that jump by enhancing inequality within their borders.  This means that the basis for this shift in classification is not widespread through the country or its population – which opens up another question that is analytically crucial to understanding the likely future for aid to the poorest of the poor: on what basis did these countries make the jump to middle income status, what is the current structure of their economies, and to what are that jump, and those economies, vulnerable?  The impetus for aid grants disappears only if we assume that the gains made by these countries are widespread through the population and robust enough to withstand pressures and shocks that might push them back to low income status.  I have serious doubts that many places making the jump to MIC status can claim either with confidence – climate change and a tightly interlinked global economy will challenge many of these economies in significant ways that will compromise their abilities to address the needs of the poorest within their borders.  However, without addressing the needs of this portion of the population, these countries will put their social, economic and environmental futures at risk.  Now, perhaps more than ever, we need to be focused on fostering safety and certainty for the world's most vulnerable, to ensure that a country making the jump to MIC status has achieved something meaningful and durable.
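
To put the inequality point in concrete terms (with invented numbers): a handful of newly wealthy households can drag average income across the middle-income threshold while the vast majority of the population barely moves, and the Gini coefficient records exactly that divergence:

```python
def gini(incomes):
    """Gini coefficient via mean absolute difference (0 = perfect equality)."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

# Invented economy of 100 households.
before = [500] * 100                    # everyone equally poor
after = [550] * 90 + [20_000] * 10      # 90% gain 10%; a few get very rich

for label, economy in (("before", before), ("after", after)):
    mean = sum(economy) / len(economy)
    print(f"{label}: mean income = {mean:,.0f}, Gini = {gini(economy):.2f}")
# Mean income roughly quintuples (enough to "graduate" the country),
# while 90% of households see only a 10% gain and the Gini jumps
# from 0.00 to about 0.70.
```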

On his blog, Shanta Devarajan, the World Bank Chief Economist for Africa, has a post discussing the debate about the performance and results of the Millennium Villages Project (MVP).  The debate, which takes shape principally in papers by Michael Clemens and Gabriel Demombynes of the Center for Global Development and by Paul Pronyk, John McArthur, Prabhjot Singh, and Jeffrey Sachs of the Millennium Villages Project, questions how the MVP is capturing the impacts of its interventions in the Millennium Villages.  As Devarajan notes, the paper by Clemens and Demombynes rightly points out that the MVP's claims about its performance are not clearly framed in evidence, which makes it hard to tell how much of the change in the villages can be attributed to the project's work, and how much is driven by other factors.  Clemens and Demombynes are NOT arguing that the MVP has had no impact, but that there are ways to rigorously evaluate that impact – and when impact is rigorously evaluated, it turns out that the impact of MVP interventions is not quite as large as the project would like to claim.
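
For those who have not read the papers, the heart of Clemens and Demombynes' point is about comparison: a before/after change in a Millennium Village means little unless you net out the change happening anyway in similar places.  A stylized sketch of that logic, with invented numbers (this is not their data or their code):

```python
# Stylized difference-in-differences logic with invented numbers.
# Suppose an indicator (say, mobile phone ownership) rises in an MVP
# village, but is also rising rapidly in comparable non-MVP areas.

mvp_before, mvp_after = 0.10, 0.40          # MVP village, before/after
comp_before, comp_after = 0.08, 0.30        # similar non-MVP areas

naive_effect = mvp_after - mvp_before        # simple before/after comparison
background_trend = comp_after - comp_before  # change that happened anyway
adjusted_effect = naive_effect - background_trend

print(f"Naive before/after 'impact': {naive_effect:.2f}")
print(f"Background trend elsewhere:  {background_trend:.2f}")
print(f"Trend-adjusted impact:       {adjusted_effect:.2f}")
# Still a real effect (0.08), just far smaller than the naive 0.30,
# which is the general shape of the Clemens and Demombynes finding.
```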

This is not all that shocking, really – it happens all the time, and it is NOT evidence of malfeasance on the part of the MVP.  It just has to do with a simple debate about how to rigorously capture results of development projects.  But this simple debate will, I think, have long-term ramifications for the MVP.  As Devarajan points out:

In short, Clemens and Demombynes have undertaken the first evaluation of the MVP.  They have shown that the MVP has delivered sizeable improvements on some important development indicators in many of the villages, albeit with effects that are smaller than those described in the Harvests of Development paper.  Of course, neither study answers the question of whether these gains are sustainable, or whether they could have been obtained at lower cost.  These should be the subject of the next evaluation.

I do not, however, think that this debate is quite as minor as Devarajan makes it sound – and he is clearly trying to downplay the conflict here.  Put simply, the last two sentences in the quote above are, I think, what has the MVP concerned – because the real question about MVP impacts is not in the here and now, but in the future.  While I have been highly critical of the MVP in the past, I am not at all surprised to hear that their interventions have had some measurable impact on life in these villages.  The project arrived in these villages with piles of money, equipment and technical expertise, and went to work.  Hell, they could have simply dumped the money (the MVP is estimated to cost about $150 per person per year) into the villages and you would have seen significant movement in many target areas of the MVP.  I don't think that anyone doubts that the project has had a measurable impact on life in all of the Millennium Villages.

Instead, the whole point here is to figure out if what has been done is sustainable – that is the measure of performance here.  Anyone can move the needle in a community temporarily – hell, the history of aid (and development) is littered with such projects.  The hard part is moving the needle in a permanent way, or doing so in a manner that creates the processes by which lasting change can occur.  As I have argued elsewhere (and much earlier than this debate), and as appears to be playing out on the ground now, the MVP was never conceptually framed in a way that would bring about such lasting changes.  Clemens and Demombynes' work is important because it provides an external critique of the MVP's claims about its own performance – and it is terrifying to at least some in the MVP, as external evaluations are going to empirically demonstrate that the MVP is not, and never was, a sustainable model for rural development.

While I would not suggest that Clemens and Demombynes' approach to evaluation is perfect (indeed, they make no such claim), I think it is important because it is trying to move past assumptions to evidence.  This is a central call of my book – the MVP is exhibit A of a project founded on deeply problematic assumptions about how development and globalization work, and framed and implemented in a manner where data collection and evaluation cannot really question those assumptions . . . thus missing what is actually happening (or not happening) on the ground.  This might also explain the somewhat non-responsive response to Clemens and Demombynes in the Pronyk et al. article – the MVP team is having difficulty dealing with suggestions that their assumptions about how things work are not supported by evidence from their own project, and instead of addressing those assumptions, is trying to undermine the critique at all costs.  This is not a productive way forward; this is dogma.  Development is many things, but if it is to be successful by any definition, it cannot be dogmatic.

Todd Moss at the Center for Global Development has a post about Ghana and the Millennium Challenge Corporation (MCC).  Overall, he makes some good points about the purpose of MCC compacts, and about whether or not it makes sense to re-up with Ghana in 2012 for a second compact – including the fact that Ghana has a lot of capital incoming from oil, and a ready market for its debt, both of which seem to negate the need for continued grants.  But I was brought up short by one stunning statement:

Ghana is (suddenly) just barely “low income”.  A recent rebasing of its GDP found the country was 63% richer than everyone thought.  Ghana might still technically qualify for the MCC but the rationale for another huge compact drops pretty significantly.

Now, to be fair to Moss, he has an excellent post here on the implications of such rebasing.  Importantly, the second lesson he takes away from this sudden revaluation of Ghana’s economy is:

Boy, we really don’t know anything. Over the past thirty years Ghana has been one of the most scrutinized, measured, studied, picked-over economies in Africa. (yes, I too did my PhD on Ghana…) Yet, we were all taking as gospel a number that was off by a tremendous margin. If we are nearly two-thirds wrong on Ghana’s GDP, what hope can we possibly have in stats for Chad? Everyone knows that data is dubious, but this seems to add a whole new level of doubt.

His fourth point is closely related:

I’m still confused… but it probably doesn’t matter. The Reuters article quotes the government statistician as estimating GDP per capita at $1318 instead of $753. This doesn’t add up to the total GDP figures also given since this implies a 75% increase. If the $1318 is correct, then that either implies that the government thinks there are only 19.4 million people instead of the normal estimates of about 24 million. Or, if the total GDP number of $25.6 billion is right, then per capita GDP is really $1067 per capita. (I think I’m already violating my lesson from #2.)

I have a chapter in my book dedicated to understanding why our measurements of the economy and environment in the Global South are mostly crap, and why even when the data is firm it often does not capture the dynamics we think it does.  I then spend a few chapters suggesting what to do about it (including respatializing data/data collection so that it can be organized into spatial units that have social, economic, and ecological meaning, and using basic crowdsourcing techniques to both collect data and ground-truth existing statistics).  Even better, this is rooted in a discussion of Ghana's economy.  I give Moss credit for being willing to point out the confusing numbers, and acknowledge that they confuse him.  They should.
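
For what it's worth, the inconsistency Moss flags is easy to reproduce with the figures quoted above:

```python
# All figures below are the ones quoted in Moss's post.
total_gdp = 25.6e9           # reported total GDP, USD
per_capita_claim = 1318      # government statistician's per-capita figure, USD
population_estimate = 24e6   # conventional population estimate

implied_population = total_gdp / per_capita_claim
implied_per_capita = total_gdp / population_estimate

print(f"Population implied by $1,318 per capita: {implied_population / 1e6:.1f} million")
print(f"Per-capita GDP implied by 24m people:    ${implied_per_capita:,.0f}")
# ~19.4 million people, or ~$1,067 per capita: the quoted numbers
# cannot all be right at the same time.
```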

But Moss gets it totally wrong here:

Ghana has long aspired to be a middle-income country by 2020, and this now seems like it will happen many years early. Accra certainly feels like a middle-income city.

This statement explains how he can label Ghana "barely low-income," even after he has called into question the very statistics that make such a claim possible: he's focused on Accra.  Accra has very little to do with how the bulk of the Ghanaian population lives – and most of that population is very, very poor.  Ghana is not barely low income – it is still quite low income, with some pockets of extreme wealth starting to distort the national statistics.  It doesn't matter how Accra feels – that city is home to at best 10% of the population.  Kumasi is home to another 5–8%.  Generously including Tamale and Takoradi in the middle-income city category (and this is very generous) nets you probably 25% of the population – nobody else is living in a middle income country.  Like Moss, I did my dissertation work in Ghana.  I still work there.  The difference is that I did my work in rural villages, and still do.  $1 a day beyond subsistence is a common income in the rural areas of the Central Region, even now – and the Central Region has a lot more infrastructure than most of the Northern, Upper East and Upper West Regions.  This population remains poorly educated – failed by poor rural schools.  They cannot support a transformation of the Ghanaian economy.  Most of Ghana is still a very low income country, not ready for any sort of sustained economic growth.  The country has seen enormous success in recent years – I am stunned by the changes I have seen in the past 13 years – but the fruits of that success are not distributed evenly.  While the cities have boomed, the villages are nearly unchanged.  This is Ghana's new challenge: to spread this new wealth out and foster a diverse, resilient economy.

This is not to say that an MCC compact is the right tool to foster this, or that Ghana is the best place to be putting MCC money.  However, declaring “success” too soon creates its own set of risks – let’s use some nuance when considering how a country is doing, so we can identify the real challenges to overcome and successes to build on moving forward.

So, the Center for Global Development, a non-partisan think tank focused on reducing poverty and making globalization work for the poor (a paraphrase of their mission statement, which can be found here), has issued a report that more or less says the quality and effectiveness of USAID's aid is very low compared to that of other agencies.

Well, I’m not all that freaked out by this assessment, principally because it fails to ask important questions relevant to understanding development needs and development outcomes.  In fact, the entire report is rigged – not intentionally, mind you, but I suspect out of a basic ignorance of the difference between the agencies being evaluated, and an odd (mis)understanding of what development is.

For me, the most telling point in the report came right away, on pages 3 and 4:

Given these difficulties in relating aid to development impact on the ground, the scholarly literature on aid effectiveness has failed to convince or impress those who might otherwise spend more because aid works (as in Sachs 2005) or less because aid doesn’t work often enough (Easterly 2003).

Why did this set me off?  Well, in my book I argue that the "poles" of Sachs and Easterly in the development literature are not poles at all – they operate from the same assumptions about how development and globalization work, and I just spent 90,000 words' worth of a book laying out those assumptions and why they are often wrong.  In short, this whole report operates from within the development echo chamber from which this blog takes its name.  But then they really set me off:

In donor countries especially, faced with daunting fiscal and debt problems, there is new and healthy emphasis on value for money and on maximizing the impact of their aid spending.

Folks, yesterday I posted about how the desire to get "value for our money" in development was putting all the wrong pressures on agencies . . . not because value is bad, but because the demand for it pushes development agencies to avoid risk (and its associated costs), which in turn chokes off innovation in their programs and policies.  And here we have a report evaluating the quality of aid (their words) in terms of its cost-effectiveness.  One of their four pillar analyses is the ability of agencies to maximize aid efficiency.  This is nuts.

Again, it's not that there should be no oversight of the funds or their uses, or that there should be no accountability for those uses.  But to demand efficiency is to largely rule out high-risk efforts that could have huge returns but carry a significant risk of failure.  Put another way, if this metric were applied to the Chilean mine rescue, it would score low for efficiency because they tried three methods at once and two failed.  Of course, that overlooks the fact that they GOT THE MINERS OUT ALIVE.  Same thing for development – give me an "inefficient" agency that can make transformative leaps forward in our understandings of how development works and how to improve the situation of the global poor over the "efficient" agency that never programs anything of risk, and never makes those big leaps.
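
To put the mine rescue point in numbers (the probabilities here are invented for illustration): running several methods in parallel guarantees "waste," since at most one can succeed, but it dramatically improves the odds of the only outcome that matters:

```python
# Hypothetical: each rescue method independently has a 50% chance of working.
p_single = 0.5

p_all_three_fail = (1 - p_single) ** 3
p_parallel_success = 1 - p_all_three_fail

print(f"One method at a time:  {p_single:.0%} chance of success")
print(f"Three in parallel:     {p_parallel_success:.1%} chance of success")
# 87.5% vs. 50%: the 'wasted' spending on the two failed methods is
# precisely what bought the higher probability of a rescue.
```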

Now, let’s look at the indicators – because they tell the same story.  One of the indicators under efficiency is “Share of allocation to well-governed countries.”  Think about the pressure that places on an agency that has to think about where to set up its programming.  What about all of the poor, suffering people in poorly-governed countries?  Is USAID not supposed to send massive relief to Haiti after an earthquake because its government is not all we might hope?  This indicator either misses the whole point of development as a holistic, collaborative process of social transformation, or it is a thinly-veiled excuse to start triaging countries now.

They should know better – Andrew Natsios is one of their fellows, and he has explained how these sorts of evaluation pressures choke an agency to death.  Amusingly, they cite this work in the report . . . almost completely at random, on page 31, for a point that has no real bearing on that section of the text.  I wonder what he thinks of this report . . .

In the end, USAID comes out 126th of the 130 agencies evaluated for "maximizing efficiency."  Thank heavens.  It probably means that we still have some space left to experiment and fail.  Note that among the top 20% of donors, the highest scores went to the World Bank and UN agencies, arguably the groups that do the least direct programming on the ground – in other words, the "inefficiencies" of their work are captured elsewhere, when the policies and programs they set up for others to run begin to come apart.  The same could be said of the Millennium Challenge Corporation here in the US, which also scored high.  In other words, the report rewards for their efficiency the agencies that don't actually do all that much on the ground, while the agencies that actually have to deal with the uncertainties of real life get dinged for it.

And the Germans ended up ranking high, but hey, nothing goes together like Germans and efficiency.  That one’s for you, Daniel Esser.

What a mess of a report . . . and what a mess this will cause in the press, in Congress, etc.  For no good reason.