
Things often go wrong.  Take, for example, the climate change vulnerability index produced by Maplecroft.  At first glance, this looks interesting – a scale of risk that can be mapped to visually represent the levels of challenge presented by climate change to any particular place.

© Maplecroft

However, look more closely and it becomes clear that the product isn’t really useful at all.  Anybody who takes 42 variables and aggregates them into a single category (vulnerability) has created something close to useless.  OK, so the vulnerability is high.  But vulnerability to what?  Flood, drought, crop failure due to temperature, coastal fisheries collapse?  All of these are problems related to climate change, but they are not present in all places at all times, and they all have different impacts on people (and, Maplecroft should probably note, different impacts on investments) that require different interventions.  So the index tells you nothing diagnostic about this vulnerability.  It is, at best, a first step toward thinking about vulnerability and how to address it.
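The aggregation problem is easy to demonstrate. Here is a minimal sketch, with entirely invented numbers and hazard categories (none of this reflects Maplecroft’s actual variables or weights): two regions with opposite risk profiles can receive identical composite scores, so the single number cannot tell you which intervention either place needs.

```python
# Hypothetical illustration of why a composite vulnerability index
# is non-diagnostic.  All scores below are invented for the example.

def composite_index(scores):
    """Aggregate component risk scores (0-10) into a single mean score."""
    return sum(scores.values()) / len(scores)

# A coastal region: flooding and fisheries are the problems.
coastal = {"flood": 9, "drought": 1, "crop_heat_stress": 2, "fishery_collapse": 8}
# An inland region: drought and heat stress are the problems.
inland = {"flood": 1, "drought": 9, "crop_heat_stress": 8, "fishery_collapse": 2}

# Both collapse to the same "vulnerability" score of 5.0,
# despite requiring completely different interventions.
print(composite_index(coastal))  # 5.0
print(composite_index(inland))   # 5.0
```

The same score, two very different problems – and nothing in the index itself lets you tell them apart.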

On top of overselling the product and its value, their underlying data is problematic – if you download the map, you can see the size of the grid they used for the data – it is huge.  This suggests that they have used general circulation models (GCMs) for their climate projection variables.  The use of global-scale data in local cases is highly problematic: downscaling these models to regional or even local levels has proven very difficult, because the factors that most influence the global climate are not necessarily the most important factors at regional or local scales.  For example, local deforestation can have a huge impact on local precipitation patterns over time without having much impact on global circulation as a whole – so the downscaled model (focused on global circulation) will not capture the importance of this local factor in determining local climate outcomes.  Just looking at Ghana on their free map (you can download a copy from the page above), I can tell you that they have missed a really distressing trend toward the loss of the minor rainy season in the forest (southern) areas of Ghana . . . which is going to have a massive impact on both cocoa production (a national economic impact) and rain-fed agriculture.  If they got this wrong, I am guessing they have missed a hell of a lot of other things.

This is what happens when the business community starts jonesing for climate change, but won’t go to the scientific community to get solid advice on how to get the information they need.  Look at Maplecroft’s core team – only one of the six has really engaged with climate change or global environmental change more broadly in any meaningful way – and he is trained in Business Studies, not climatology, biogeography, ecology, anthropology, political ecology or any other number of fields that produce the people who develop basic knowledge on climate change, environmental change and their related human impacts.  In short, they really don’t know what they are talking about, but they have made a nice looking product that might mislead people into thinking that they do.

What drives my concern here is not some sort of academic/governmental territoriality.  When people approach the issue of climate change and its human impacts without a serious consideration of the science behind these broad issues, there is the potential for very serious problems.  You should see the REDD+-related business proposals circulating out there . . . I’ve seen crazy stuff, like people wanting to plant genetically-modified super-fast-growing eucalypts in the swamps around the Amazon to enhance carbon uptake in otherwise not-so-forested areas, without the slightest consideration for the ecological impact of such a species (which would, according to my biogeography colleagues, surely go invasive immediately).  Without meaning to, people might end up doing a hell of a lot more damage than good if they just run off willy-nilly.

There are a lot of us out here who would love to work with you – we want to help, and we’ve already made a lot of these mistakes.  Let us save you time, and save the folks suffering these vulnerabilities a lot of unnecessary pain.

So, to clarify one of my points from my previous post, let me use an example to show why building an index of development (or an index of anything, really) on data chosen for its availability can lead to tremendous problems – and result in a situation where the index is actually so misleading as to be worse than having no index at all.

A few years ago, Nate Kettle, Andrew Hoskins and I wrote a piece examining poverty-environment indicators (link here, or check out chapter 9 of Delivering Development when it comes out in January) in which we pointed out that the data used by one study to evaluate the relationship between poverty and the environment in Nigeria did not bear much relationship to the meaningful patterns of environment and livelihood in Nigeria.  For example, one indicator of this relationship was ‘percentage of irrigated area in the total agricultural area’, an indicator whose interpretation rested on the assumption that a greater percentage of irrigated area will maximize the environment’s agricultural potential and lead to greater income and opportunity for those living in the area.  While this seems like a reasonable interpretation, we argued that there were other, equally plausible interpretations:

“While this may be a relatively safe assumption in places where the irrigated area is a very large percentage of total agricultural area, it may not be as applicable in places where the irrigated area is relatively small and where the benefits of irrigation are not likely to reach the entire population. Indeed, in such settings those with access to irrigation might not only experience greater opportunities in an average year, but also have incomes that are much more resistant to environmental shocks that might drive other farmers to adopt severe measures to preserve their livelihoods, such as selling off household stocks or land to those whose incomes are secured by irrigation. In such situations, a small but rising percentage of area under irrigation is as likely to reflect a consolidation of wealth (and therefore declining incomes and opportunities for many) in a particular area as it does greater income and opportunity for the whole population.” (p.90)

The report we were critiquing made no effort to control for these alternative interpretations, at least in part because it had gathered data at the national scale for Nigeria.  The problem here is that Nigeria contains seven broad agroecological zones (and really many more subzones) in which different crops and combinations of crops will be favored – averaging this across the country just homogenizes important differences in particular places into a general, but meaningless indicator.  When we combined this environmental variability with broad patterns of land tenure (people’s access to land), we found that the country really had to be divided up into at least 13 different zones – in each zone, the interpretation of this poverty-environment indicator was likely to be consistent, but there was no guarantee that it would be consistent from zone to zone.  In some zones, a rising area under irrigation would reflect a positive shift in poverty and environmental quality, while in others it might reflect declining human well-being.

To add to this complexity, we then mapped these zones against the smallest administrative units (states) of Nigeria at which meaningful data on poverty and the environment are most likely to be available.  What resulted was this:

A map contrasting the 13 agroecological zones in which poverty-environment indicators might be consistently interpreted and the boundaries of the smallest administrative units (states) in Nigeria that might have meaningful poverty and environmental data

As you can see, there are several states with multiple zones inside their borders – which means a single indicator cannot be assumed to have the same interpretation across the state (let alone the entire country).  So, while there might be data on poverty and environmental quality available at the state level such that we can identify indicators and build indexes with it, the likelihood is that the interpretation of that data will be, in many cases, incorrect, leading to problematic policies (like promoting irrigation in areas where it leads to land consolidation and the marginalization of the poor) – in other words, making things much worse than if there was no index or indicator at all.
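The state-level averaging problem can also be sketched in a few lines. Everything below is hypothetical – the zone names, irrigation shares, and sign of the effect in each zone are invented for illustration, not drawn from our Nigeria analysis: a state spanning two zones where rising irrigation means opposite things reports one number that supports neither interpretation.

```python
# Hypothetical sketch: one state spanning two agroecological zones in which
# the same indicator (share of irrigated area) has opposite interpretations.
# All values and zone names are invented for the example.

# Per-zone data: the indicator value, and the (assumed) direction of the
# effect of rising irrigation on well-being in that zone.
zones_in_state = {
    "humid_zone":   {"irrigated_share": 0.05, "effect": "wealth consolidation (-)"},
    "semiarid_zone": {"irrigated_share": 0.35, "effect": "broad-based gains (+)"},
}

# State-level reporting averages away the zone-specific interpretations:
state_share = sum(z["irrigated_share"] for z in zones_in_state.values()) / len(zones_in_state)

# One number for the whole state, carrying two contradictory stories.
print(round(state_share, 2))  # 0.2
```

A policy built on the state-level figure (say, promoting irrigation everywhere) would be right in one zone and actively harmful in the other – which is the sense in which the indicator is worse than no indicator at all.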

Just because the data is available doesn’t mean that it is useful, or that it should be used.