UNDP has launched the 20th anniversary edition of its Human Development Report. In it, the authors argue that development is working better than we realize, and use this to claim that aid is therefore working better than people think. However, there is an important caveat in the report that calls this general claim into question. As the BBC reports, "There has been most progress in the areas of health and education, sectors which have received most focus in development assistance."
This is a huge caveat. These are the sectors that are easiest to measure, at least through traditional indicators. Development agencies have been designing programs around clear indicators and pumping money into achieving those indicators for some time, and these are the same indicators used by the Human Development Report. Of course literacy rates are up. Of course life expectancy is up. These are low-hanging fruit. But what does this really mean for the quality of life of people living in the Global South? Are they living better, happier lives? Or are they living longer, in greater misery than ever before? Are any of these gains sustainable, or are they predicated on continual flows of aid? The report offers no answer, and it is an answer we need to obtain not through indicators but by getting out there and talking to those we intend to help with development. Get on your boots, and get out of the SUV/Mission Office!
I do, however, like that this report is trying to make an evidence-based case for the persistence of market failures around public goods. We have seen, time and again, that when governments fail to provide security, access to healthcare, and education for their populations, the markets DO NOT step in to fill the gap. A lot of poor, vulnerable people get left behind. (Given recent trends and this week’s election results, it is entirely likely that South Carolina will empirically demonstrate this can happen even here in the US, at least in the area of education, over the next four years).
Availability isn't validity . . .
So, to clarify one of my points from my previous post, let me use an example to show why building an index of development (or an index of anything, really) on data chosen for its availability can lead to tremendous problems, and can result in an index so misleading that it is worse than having no index at all.
A few years ago, Nate Kettle, Andrew Hoskins and I wrote a piece examining poverty-environment indicators (link here, or check out chapter 9 of Delivering Development when it comes out in January) in which we pointed out that the data used by one study to evaluate the relationship between poverty and the environment in Nigeria did not bear much relationship to the meaningful patterns of environment and livelihood in that country. For example, one indicator of this relationship was ‘percentage of irrigated area in the total agricultural area’, an indicator whose interpretation rested on the assumption that a greater percentage of irrigated area would maximize the environment’s agricultural potential and lead to greater income and opportunity for those living in the area. While this seems like a reasonable interpretation, we argued that there were other, equally plausible readings:
“While this may be a relatively safe assumption in places where the irrigated area is a very large percentage of total agricultural area, it may not be as applicable in places where the irrigated area is relatively small and where the benefits of irrigation are not likely to reach the entire population. Indeed, in such settings those with access to irrigation might not only experience greater opportunities in an average year, but also have incomes that are much more resistant to environmental shocks that might drive other farmers to adopt severe measures to preserve their livelihoods, such as selling off household stocks or land to those whose incomes are secured by irrigation. In such situations, a small but rising percentage of area under irrigation is as likely to reflect a consolidation of wealth (and therefore declining incomes and opportunities for many) in a particular area as it does greater income and opportunity for the whole population.” (p.90)
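To see how the same indicator can move in the "right" direction for opposite reasons, here is a minimal sketch in Python. The villages, holdings, and numbers are all invented for illustration; they are not data from the Nigeria study or from our paper.

```python
# Illustrative only: invented villages and numbers, not data from the
# Nigeria study. Each holding is (area_in_hectares, is_irrigated).

def irrigated_share(holdings):
    """Fraction of total farmed area that is under irrigation."""
    total = sum(area for area, _ in holdings)
    wet = sum(area for area, irrigated in holdings if irrigated)
    return wet / total

# Village A: a scheme extends irrigation from two smallholders to five,
# leaving the distribution of land untouched (broad-based gains).
village_a_before = [(2, True)] * 2 + [(2, False)] * 8
village_a_after  = [(2, True)] * 5 + [(2, False)] * 5

# Village B: drought forces rainfed farmers to sell land to the one
# irrigated household (consolidation; most households are worse off).
village_b_before = [(2, True)] + [(2, False)] * 9
village_b_after  = [(8, True)] + [(1, False)] * 9

for name, before, after in [("A", village_a_before, village_a_after),
                            ("B", village_b_before, village_b_after)]:
    print(f"Village {name}: {irrigated_share(before):.0%} -> "
          f"{irrigated_share(after):.0%} irrigated")
```

Both villages show a sharply rising irrigated share, and they end up with similar indicator values. But only in the first does the rise track broad-based gains; in the second it tracks distress sales and consolidation.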
The report we were critiquing made no effort to control for these alternative interpretations, at least in part because it had gathered data at the national scale for Nigeria. The problem here is that Nigeria contains seven broad agroecological zones (and many more subzones) in which different crops and combinations of crops are favored; averaging across the country homogenizes important differences in particular places into a general but meaningless indicator. When we combined this environmental variability with broad patterns of land tenure (people’s access to land), we found that the country really had to be divided into at least 13 different zones. Within each zone, the interpretation of this poverty-environment indicator was likely to be consistent, but there was no guarantee that it would be consistent from zone to zone. In some zones, a rising area under irrigation would reflect a positive shift in poverty and environmental quality, while in others it might reflect declining human well-being.
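The aggregation problem can be sketched the same way. The zone names below loosely follow Nigeria's agroecological zones, but the farmland weights, irrigated shares, and per-zone readings are assumptions invented for this example:

```python
# Made-up weights and shares attached to loosely real zone names; the
# per-zone "meaning of a rise" is exactly the interpretation we argued
# has to be established zone by zone, not assumed nationally.

zones = [
    # (zone, share of national farmland, irrigated share, meaning of a rise)
    ("humid forest",    0.25, 0.05, "consolidation (losses for many)"),
    ("derived savanna", 0.30, 0.12, "broad-based gains"),
    ("Sudan savanna",   0.30, 0.40, "broad-based gains"),
    ("Sahel fringe",    0.15, 0.08, "consolidation (losses for many)"),
]

# The national aggregate collapses four different stories into one number.
national = sum(weight * share for _, weight, share, _ in zones)
print(f"national irrigated share: {national:.1%}")

for name, _, share, reading in zones:
    print(f"  {name:<16} {share:.0%} irrigated; a rise here means {reading}")
```

The single national figure carries no interpretation at all: depending on which zones drive a change, the same national rise could mean broad gains or deepening marginalization.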
To add to this complexity, we then mapped these zones against the smallest administrative units (states) of Nigeria at which meaningful data on poverty and the environment are most likely to be available. The result was the following map:
[Map: the 13 interpretation zones overlaid on Nigeria's state boundaries]
As you can see, there are several states with multiple zones inside their borders, which means a single indicator cannot be assumed to have the same interpretation across the state (let alone the entire country). So, while data on poverty and environmental quality might be available at the state level, allowing us to identify indicators and build indexes with it, the interpretation of that data will in many cases be incorrect, leading to problematic policies (like promoting irrigation in areas where it produces land consolidation and the marginalization of the poor). In other words, it makes things much worse than if there were no index or indicator at all.
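The practical upshot is that any state-level analysis needs, at minimum, a flag for this kind of internal heterogeneity. A hypothetical sketch, with invented states and zone readings:

```python
# Hypothetical states and zone readings, invented for illustration.
# If the zones inside a state disagree about what a rising indicator
# means, no single state-level interpretation is safe.

state_zones = {
    "State X": ["broad-based gains"],
    "State Y": ["broad-based gains", "consolidation"],
    "State Z": ["consolidation"],
}

for state, readings in state_zones.items():
    if len(set(readings)) == 1:
        print(f"{state}: interpretable; a rise means {readings[0]}")
    else:
        print(f"{state}: AMBIGUOUS; zones conflict, do not interpret "
              "a state-level value")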
Just because the data is available doesn’t mean that it is useful, or that it should be used.