

The BBC has posted an interesting map of Nigeria that captures the spatiality of politics, ethnicity, wealth, health, literacy and oil.  There are significant problems with this map.  The underlying data has fairly large error bars that are not acknowledged, and the presentation of the data is somewhat problematic; for example, the ethnic “areas” in the country are represented only by the majority group, hiding the heterogeneity of these areas, and other data is aggregated at the state level, blurring heterogeneous voting patterns, incomes, literacy rates and health situations. I really wish that those who create this sort of thing would do a better job of addressing some of these issues, and of pointing out the issues they cannot address, to help the reader better evaluate the data.

But even with all of these caveats, this map is a striking illustration of the problems with using national-level statistics to guide development policy and programs.  Look at the distributions of wealth, health and literacy in the country – error bars or no, this data clearly demonstrates that national measures of wealth cannot guide useful economic policy, national measures of literacy might obscure regional or ethnic patterns of educational neglect, and national vaccination statistics tell us nothing about the regional variations in disease ecology and healthcare delivery that shape health outcomes in this country.

This is not to say that states don’t matter – they matter a lot.  However, when we use national-scale data for just about anything, we are making very bad assumptions about the heterogeneity of the situation in that country . . . and we are probably missing key opportunities and challenges we should be addressing in our work.

So, to clarify one of my points from my previous post, let me use an example to show why building an index of development (or an index of anything, really) on data selected simply because it is available can lead to tremendous problems – and result in a situation where the index is actually so misleading as to be worse than having no index at all.

A few years ago, Nate Kettle, Andrew Hoskins and I wrote a piece examining poverty-environment indicators (link here, or check out chapter 9 of Delivering Development when it comes out in January) in which we pointed out that the data used by one study to evaluate the relationship between poverty and the environment in Nigeria did not bear much relationship to the meaningful patterns of environment and livelihood in Nigeria.  For example, one indicator of this relationship was ‘percentage of irrigated area in the total agricultural area’, an index whose interpretation rested on the assumption that a greater percentage of irrigated area would maximize the environment’s agricultural potential and lead to greater income and opportunity for those living in the area.  While this seems like a reasonable interpretation, we argued that there were other, equally plausible interpretations:

“While this may be a relatively safe assumption in places where the irrigated area is a very large percentage of total agricultural area, it may not be as applicable in places where the irrigated area is relatively small and where the benefits of irrigation are not likely to reach the entire population. Indeed, in such settings those with access to irrigation might not only experience greater opportunities in an average year, but also have incomes that are much more resistant to environmental shocks that might drive other farmers to adopt severe measures to preserve their livelihoods, such as selling off household stocks or land to those whose incomes are secured by irrigation. In such situations, a small but rising percentage of area under irrigation is as likely to reflect a consolidation of wealth (and therefore declining incomes and opportunities for many) in a particular area as it does greater income and opportunity for the whole population.” (p.90)

The report we were critiquing made no effort to control for these alternative interpretations, at least in part because it had gathered data at the national scale for Nigeria.  The problem here is that Nigeria contains seven broad agroecological zones (and really many more subzones) in which different crops and combinations of crops will be favored – averaging this across the country just homogenizes important differences in particular places into a general but meaningless indicator.  When we combined this environmental variability with broad patterns of land tenure (people’s access to land), we found that the country really had to be divided up into at least 13 different zones – within each zone, the interpretation of this poverty-environment indicator was likely to be consistent, but there was no guarantee that it would be consistent from zone to zone.  In some zones, a rising area under irrigation would reflect a positive shift in poverty and environmental quality, while in others it might reflect declining human well-being.
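The aggregation problem can be sketched with a toy calculation. All of the numbers below are invented for illustration – they are not Nigeria’s actual figures – but they show how a single aggregate share of irrigated area fails to describe either of the zones it averages over:

```python
# Hypothetical illustration (all numbers invented): one aggregate unit
# containing two agroecological zones where irrigation plays very
# different roles in local livelihoods.
zones = {
    "zone_A": {"agric_area": 1000, "irrigated": 300},  # irrigation widespread
    "zone_B": {"agric_area": 4000, "irrigated": 200},  # irrigation concentrated
}

def irrigated_share(areas):
    """Share of total agricultural area under irrigation, aggregated."""
    total = sum(z["agric_area"] for z in areas)
    irrigated = sum(z["irrigated"] for z in areas)
    return irrigated / total

aggregate = irrigated_share(zones.values())  # 500 / 5000 = 0.10
per_zone = {name: z["irrigated"] / z["agric_area"] for name, z in zones.items()}
# zone_A: 0.30, zone_B: 0.05 — the aggregate figure of 0.10 describes
# neither zone, and says nothing about which interpretation (broad benefit
# vs. consolidation of wealth) applies where.
```

The aggregate of 0.10 is arithmetically correct and substantively empty: it sits between the two zone values without describing the situation in either one.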

To add to this complexity, we then mapped these zones against the smallest administrative units (states) of Nigeria at which meaningful data on poverty and the environment are most likely to be available.  What resulted was this:

A map contrasting the 13 agroecological zones in which poverty-environment indicators might be consistently interpreted and the boundaries of the smallest administrative units (states) in Nigeria that might have meaningful poverty and environmental data

As you can see, there are several states with multiple zones inside their borders – which means a single indicator cannot be assumed to have the same interpretation across the state (let alone the entire country).  So, while there might be data on poverty and environmental quality available at the state level such that we can identify indicators and build indexes with it, the likelihood is that the interpretation of that data will be, in many cases, incorrect, leading to problematic policies (like promoting irrigation in areas where it leads to land consolidation and the marginalization of the poor) – in other words, making things much worse than if there were no index or indicator at all.
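One way to make this check concrete: before interpreting a state-level indicator, flag every state that straddles more than one zone. The state names and zone assignments below are invented placeholders, not the actual mapping from our paper:

```python
# Hypothetical sketch (state/zone assignments invented): flag states where
# a single poverty-environment indicator cannot be given one consistent
# interpretation because the state contains multiple agroecological/tenure
# zones.
state_zones = {
    "State 1": ["zone_3"],
    "State 2": ["zone_3", "zone_7"],
    "State 3": ["zone_1", "zone_5", "zone_9"],
}

ambiguous = [state for state, zs in state_zones.items() if len(zs) > 1]
# "State 2" and "State 3" are flagged: state-level data for them cannot
# safely be read through a single interpretation of the indicator.
```

A screening step like this does not fix the underlying data problem, but it at least tells the analyst where a state-level number should not be trusted at face value.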

Just because the data is available doesn’t mean that it is useful, or that it should be used.