Watching Mitt Romney get hammered for daring to suggest that anthropogenic climate change (ACC) is a real problem has, yet again, got me thinking about how to explain to people the generally-held view of the scientific community on this topic. I think we make something of a mistake when we argue there is a scientific consensus – if we accept the Merriam-Webster definition of consensus, “general agreement: unanimity”, what we have doesn’t quite rise to this standard. There are a few folks out there who insist that the huge majority of people working on this issue are wrong, and there is really no way to resolve or mitigate the issues of concern that animate many skeptics. So every time we say consensus, we open ourselves up to the criticism that “Person X disagrees,” thus invalidating (for many) the claim of consensus, which is then (illogically) extended to mean that all arguments for anthropogenic climate change are invalid.
However, thanks to Grist, I stumbled across another means of communicating the state of the science of climate change: a very cool visualization of the evolution of the literature on the subject, from 1824 to the present. The folks at Skeptical Science have divided the climate change literature into three camps: skeptic, neutral and pro-anthropogenic climate change. They then classified each of the 4811 papers they could find on climate change into one of these three categories. Now, their classification system is unique and, in my opinion, somewhat problematic in that they have stretched a bit in placing some pieces into the skeptical or pro categories (see their explanation under the animation). That said, by 2011 their visualization is striking:
Yeah, no matter how you classify things, unless you completely and utterly pervert the literature, this picture is striking. The vast bulk of the literature either tests a climate change issue without directly addressing the causes of climate change (neutral) or comes down supporting a human cause for (at least some of) observed climate change. Only 187 papers since 1900 have argued against the idea. Go to the visualization, and you can drag the slider across the bottom and watch the literature emerge over time. It has never been a close race between those who deny the human causes of climate change and those of us who see clear human causes – the pro-anthropogenic climate change crowd wins by a mile.
Is this consensus? No. But does it help people see that the dissent in the scientific literature is diminishingly small and always has been? Less than 4% of all articles published on climate change argued against human causes. If unanimity is the standard, then we need to start questioning a lot more than climate change . . . like gravity, for example. We still have a few unresolved issues with that particular force (really), but I don’t see anyone grabbing onto those tiny knowledge gaps to suggest that we shouldn’t pay any attention to it, and that exiting through a second-floor window is perfectly acceptable. This slider shows you one representation of the literature, a literature that represents a clear plurality view in favor of ACC (and given that many of the “neutral” papers report on work done because the authors accept the fundamental premise of ACC, the real figure is likely a significant majority in this camp). Enough with the irrational doubt – let’s focus on the real challenges: better understanding the mechanisms of change, the total human contribution to observed change, and the likely resilience of ecology and society in the face of the challenges that now loom . . .
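For what it’s worth, that “less than 4%” is just the ratio of the two counts above. A quick back-of-the-envelope check (the counts and the three-way classification are Skeptical Science’s, not mine):

```python
# Quick check of the literature shares discussed above.
# Counts come from the Skeptical Science tally cited in this post;
# the skeptic / neutral / pro-ACC classification is theirs.
total_papers = 4811
skeptic_papers = 187

skeptic_share = skeptic_papers / total_papers
print(f"skeptic share: {skeptic_share:.1%}")        # ~3.9%
print(f"everything else: {1 - skeptic_share:.1%}")  # ~96.1%
```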
3.36 Billion Africans in 2100?
Schuyler Null has a post up on The New Security Beat on the 2010 revision of the United Nations (UNDESA) World Population Prospects, noting that this new revision suggests that by 2100 roughly 1 in every 3 people in the world will live in sub-Saharan Africa – a total of 3.36 billion people. It is far too early to pick apart these projections, especially as the underlying assumptions used to guide their construction are not yet available to the public. Null is quite right to note:
the UN’s numbers are based on projections that can and do change. The range of uncertainty for the sub-Saharan African region, in particular, is quite large. The medium-variant projection for the region’s total population in 2100 is 3.36 billion people, but the high variant projection is 4.85 billion and the low variant is 2.25 billion.
A few preliminary thoughts, though. I pulled up the data for a country I know reasonably well – Ghana. Under this new revised projection, Ghana’s population is expected to reach more than 67 million by 2100. Peak population growth is supposed to take place between 2035 and 2040, with steady declines in population growth after that. With life expectancies projected to rise to 79 years by 2100, certainly a lot more Ghanaians will be around for a lot longer than they are today (current life expectancy is just shy of 60 years). That said, these numbers trouble me. First, I don’t quite see how Ghana will be able to sustain a population of this size at any point in the future – the number is just too massive. Second, it seems to me that the life expectancy estimates and the population size estimates contradict one another – as Charles Kenny quite ably demonstrates in Getting Better, as life expectancies rise and more children reach adulthood, the general trend is to lower total fertility. The only way Ghana’s projection can be made to work is to assume massive demographic momentum that I am not sure will play out in the face of expected declines in infant mortality and the increased cost burden for prospective parents supporting older family members for much longer than they do today. In other words, this seems to me to be a rather dire overestimation of where Ghana is going to be in the future.
Now, this is just a quick cut at what appear to be the assumptions for one country, but I worry that this potential overestimation has a certain political utility. The Malthusian specter, however inaccurate it may be, remains a great motivator for aid and development spending. Further, presuming massive demographic momentum requires we assume that adequate reproductive health options are not in place in places like Ghana. Given that the monitoring of reproductive health, presumably to better direct development interventions, seems to be a large focus of UNDESA’s and other UN organizations’ mandate, they might have a bit of a built-in bias against a lower population number because such a number would presume significant progress on the reproductive health front, thus challenging the need for this particular service. In a wider sense, it seems to revive fears of a population bomb, albeit in this case limited to Africa. While I have no doubt that demography will be an important challenge to address in the future, I think the current numbers, even the low estimates, seem overstated.
Besides, any projection of any social process 90 years into the future probably has gigantic error bars that could encompass anything from negative growth to massive overgrowth . . . the problem here is that policy makers often fail to grasp this uncertainty, see the 100-year projection, freak out entirely and reorient the next 5 years’ worth of aid programming to address a problem that may not exist.
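To put some rough numbers on those error bars: this is emphatically not how UNDESA builds its projections (they use cohort-component methods with explicit fertility and mortality assumptions), but even a toy constant-growth sketch shows how modest differences in an assumed rate compound over 90 years. The starting population (roughly sub-Saharan Africa c. 2010) and the three rates below are my own illustrative choices, back-solved to land near the UN’s variants:

```python
# Toy sketch of compounding uncertainty in long-run projections.
# NOT the UN's method; the starting population (~880 million for
# sub-Saharan Africa, c. 2010) and the growth rates are illustrative
# assumptions, picked to land near the UN's low/medium/high variants.
start_pop_millions = 880

for label, annual_growth in [("low", 0.0105), ("medium", 0.0150), ("high", 0.0190)]:
    pop_2100 = start_pop_millions * (1 + annual_growth) ** 90
    print(f"{label:>6} ({annual_growth:.2%}/yr): {pop_2100 / 1000:.2f} billion in 2100")
```

A swing of less than one percentage point in the assumed rate more than doubles the 2100 total – which is exactly why a 90-year headline number deserves far less confidence than the 5-year programming decisions it tends to drive.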
Fisheries . . . this is a development challenge
A while back, I had a blog post on a report for ActionAid, written by Alex Evans, on critical uncertainties for development between the present and 2020. One of the big uncertainties Alex identified was environmental shocks, though in that version of the report he limited these to climate-driven shocks. In my post, I suggested to Alex that he widen his scope, for environmental shocks might also include ecosystem collapse, such as in major global fisheries – shocks that are not really related to climate change, but are still of great importance. The collapse of the Gulf of Guinea large marine ecosystem (largely due to commercial overfishing from places other than Africa) has devastated local fish hauls, lowering the availability of protein in the diets of coastal areas and driving enormous pressure on terrestrial fauna as these populations seek to make up for the lost protein. Alex was quite generous with my comments, and agreed with this observation wholeheartedly.
And then today, I stumbled on this – a simple visualization of Atlantic Fisheries in 1900 and 2000, by fish haul. The image is striking (click to expand):
Now, I have no access to the datasets used to construct this visualization, and therefore I can make no comments on its accuracy (the blog post on the Guardian site is not very illuminating). However, this map could be off by quite a bit in terms of how good hauls were in 1900, and how bad they are now, and the picture would still be very, very chilling. As I keep telling my students, all those new, “exotic” fish showing up in restaurants are not delicacies – they are just all that is left in these fisheries.
This is obviously a development problem, as it compromises livelihoods and food supplies. Yet I don’t see anyone addressing it directly, even aid organizations engaged with countries on the coast of the Gulf of Guinea, where this impact is most pronounced. And how long until even the rich really start to feel the pinch?
Go here to see more visualizations – including one of the reach of the Spanish fishing fleet that makes clear where the pressure on the Gulf of Guinea is coming from.
Academic Adaptation and "The New Communications Climate"
Andrew Revkin has a post up on Dot Earth that suggests some ways of rethinking scientific engagement with the press and the public. The post is something of a distillation of a more detailed piece in the WMO Bulletin. Revkin was kind enough to solicit my comments on the piece, as I have appeared in Dot Earth before in an effort to deal with this issue as it applies to the IPCC, and this post draws on my initial rapid response to him.
First, I liked the message of these two pieces a lot, especially the push for a more holistic engagement with the public through different forms of media, including the press. As Revkin rightly states, we need to “recognize that the old model of drafting a press release and waiting for the phone to ring is not the path to efficacy and impact.” Someone please tell my university communications office.
A lot of the problem stems from our lack of engagement with professionals in the messaging and marketing world. As I said to the very gracious Rajendra Pachauri in an email exchange back when we had the whole “don’t talk to the media” controversy:
I am in no way denigrating your [PR] efforts. I am merely suggesting that there are people out there who spend their lives thinking about how to get messages out there, and control that message once it is out there. Just as we employ experts in our research and in these assessment reports precisely because they bring skills and training to the table that we lack, so too we must consider bringing in those with expertise in marketing and outreach.
I assume that a decent PR team would be thinking about multiple platforms of engagement, much as Revkin is suggesting. However, despite the release of a new IPCC communications strategy, I’m not convinced that the IPCC (or much of the global change community more broadly) yet understands how desperately we need to engage with professionals on this front. In some ways, there are probably good reasons for the lack of engagement with pros, or with the “new media.” For example, I’m not sure Twitter will help with managing climate change rumors/misinformation as it is released, if only because we are now too far behind the curve – things are so politicized that it is too late for “rapid response” to misinformation. I wish we’d been on this twenty years ago, though . . .
But this “behind the curve” mentality does not explain our lack of engagement. Instead, I think there are a few other things lurking here. For example, there is the issue of institutional politics. I love the idea of using new media/information and communication technologies for development (ICT4D) to gather and communicate information, but perhaps not in the ways Revkin suggests. I have a section late in Delivering Development that outlines how, using existing mobile tech in the developing world, we could both get better information about what is happening to the global poor (the point of my book is that, as I think I demonstrate in great detail, we actually have a very weak handle on what is going on in most parts of the developing world) and empower the poor to take charge of efforts to address the various challenges – environmental, economic, political and social – that they face every day.

It seems to me, though, that the latter outcome is a terrifying prospect for some in development organizations, as it would create a much more even playing field of information that might force these organizations to negotiate with and take seriously the demands of the people with whom they are working. Thus, I think we get a sort of ambivalence about ICT4D in development practice, where we seem thrilled by its potential, yet continue to ignore it in our actual programming. This is not a technical problem – after all, we have the tech, and if we want to do this, we can – it is a problem of institutional politics. I did not wade into a detailed description of the network I envision in the book because I meant to present it as a political challenge to the continued reticence of many development organizations and practitioners to really engage the global poor (as opposed to telling them what they need and dumping it on them). But my colleagues and I have a detailed proposal for just such a network . . . and I think we will make it real one day.
Another, perhaps more significant barrier to major institutional shifts with regard to outreach is a chicken-and-egg situation of limited budgets and a dominant academic culture that does not understand media/public engagement or politics very well and sees no incentive for engagement. Revkin nicely hits on the funding problem as he moves past simply beating up on old-school models of public engagement:
As the IPCC prepares its Fifth Assessment Report, it does so with what, to my eye, appears to be an utterly inadequate budget for communicating its findings and responding in an agile way to nonstop public scrutiny facilitated by the Internet.
However, as much as I agree with this point (and I really, really agree), the problem here is not funding in and of itself – it is the way in which a lack of funding erases an opportunity for cultural change that could have a positive feedback effect on the IPCC, global assessments, and academia more generally, radically altering all three. The bulk of climate science, as well as social impact studies, comes from academia – which has a very particular culture of rewards. Virtually nobody in academia is trained to understand that they can get rewarded for being a public intellectual, for making their work accessible to a wide community – and if I am really honest, there are many places that actively discourage this engagement. But there is a culture change afoot in academia, at least among some of us, that could be leveraged right now – and this is where funding could trigger a positive feedback loop.
Funding matters because once you get a real outreach program going, productive public engagement would yield significant personal, intellectual and financial benefits for the participants that I believe could result in very rapid culture change. My Twitter account has done more for the readership of my blog, and for my awareness of the concerns and conversations of the non-academic development world, than anything I have ever done before – this has been a remarkable personal and intellectual benefit of public engagement for me. As universities continue to retrench, faculty find themselves ever more vulnerable to downsizing, temporary appointments, and a staggering increase in administrative workload (lots of tasks distributed among fewer and fewer full-time faculty). I fully expect that without some sort of serious reversal soon, I will retire thirty-odd years hence as an interesting and very rare historical artifact – a professor with tenure. Given these pressures, I have been arguing to my colleagues that we must engage with the public and with the media to build constituencies for what we do beyond our academic communities. My book and my blog are efforts to do just this – to become known beyond the academy such that I, as a public intellectual, have leverage over my university, and not the other way around. And I say this as someone who has been very successful in the traditional academic model. I recognize that my life will need to be lived on two tracks now – public and academic – if I really want to help create some of the changes in the world that I see as necessary.
But this is a path I started down on my own, for my own idiosyncratic reasons – to trigger a wider change, we cannot assume that my academic colleagues will easily shed the value systems in which they were intellectually raised, and to which they have been held for many, many years. Without funding to get outreach going, and demonstrate to this community that changing our model is not only worthwhile, but enormously valuable, I fear that such change will come far more slowly than the financial bulldozers knocking on the doors of universities and colleges across the country. If the IPCC could get such an effort going, demonstrate how public outreach improved the reach of its results, enhanced the visibility and engagement of its participants, and created a path toward the progressive politics necessary to address the challenge of climate change, it would be a powerful example for other assessments. Further, the participants in these assessments would return to their campuses with evidence for the efficacy and importance of such engagement . . . and many of these participants are senior members of their faculties, in a position to midwife major cultural changes in their institutions.
All this said, this culture change will not be birthed without significant pains. Some faculty and members of these assessments want nothing to do with the murky world of politics, and prefer to continue operating under the illusion that they just produce data and have no responsibility for how it is used. And certainly the assessments will fear “politicization” . . . to which I respond “too late.” The question is not if the findings of an assessment will be politicized, but whether or not those who best understand those findings will engage in these very consequential debates and argue for what they feel is the most rigorous interpretation of the data at hand. Failure to do so strikes me as dereliction of duty. On the other hand, just as faculty might come to see why public engagement is important for their careers and the work they do, universities will be gripped with contradictory impulses – a publicly-engaged faculty will serve as a great justification for faculty salaries, increased state appropriations, new facilities, etc. Then again, nobody likes to empower the labor, as it were . . .
In short, in thinking about public engagement and the IPCC, Revkin is dredging up a major issue related to all global assessments, and indeed the practices of academia. I think there is opportunity here – and I feel like we must seize this opportunity. We can either guide a process of change to a productive end, or ride change driven by others wherever it might take us. I prefer the former.
The Qualitative Research Challenge to RCT4D: Part 2
Well, the response to part one was great – really good comments, and a few great response posts. I appreciate the efforts of some of my economist colleagues/friends to clarify the terminology and purpose behind RCTs. All of this has been very productive for me – and hopefully for others engaged in this conversation.
First, a caveat: On the blog I tend to write quickly and with minimal editing – so I get a bit fast and loose at times – well, faster and looser than I intend. So, to this end, I did not mean to suggest that nobody was doing rigorous work in development research – in fact, the rest of my post clearly set out to refute that idea, at least in the qualitative sphere. But I see how Marc Bellemare might have read me that way. What I should have said was that there has always been work, both in research and implementation, where rigorous data collection and analysis were lacking. In fact, there is quite a lot of this work. I think we can all agree this is true . . . and I should have been clearer.
I have also learned that what qualitative social scientists/social theorists mean by theory and what economists mean by theory seem to be two different things. Lee defined theory as “formal mathematical modeling” in a comment on part 1 of this series of posts, which is emphatically not what a social theorist might mean. When I say theory, I am talking about a conjectural framing of a social totality such that complex causality can at least be contained, if not fully explained. This framing should have reference to some sort of empirical evidence, and therefore should be testable and refinable over time – perhaps through various sorts of ethnographic work, perhaps through formal mathematical modeling of the propositions at hand (I do a bit of both, actually). In other words, what I mean by theory (and what I focus on in my work) is the establishment of a causal architecture for observed social outcomes. I am all about the “why it worked” part of research, and far less about the “if it worked” questions – perhaps mostly because I have researched unintended “development interventions” (i.e. unplanned road construction, the establishment of a forest reserve that alters livelihoods and resource access, etc.) that did not have a clear goal, a clear “it worked!” moment to identify. All I have been looking at are outcomes of particular events, and trying to establish the causes of those outcomes. Obviously, this can be translated to an RCT environment, because we could control for the intervention and expected outcome, and then use my approaches to get at the “why did it work/not work” issues.
It has been very interesting to see the economists weigh in on what RCTs really do – they establish, as Marc puts it, “whether something works, not in how it works.” (See also Grant’s great comment on the first post.) I don’t think that I would get a lot of argument from people if I noted that without causal mechanisms, we can’t be sure why “what worked” actually worked, and whether the causes of “what worked” are in any way generalizable or transportable. We might have some idea, but I would have low confidence in any research that ended at this point. This, of course, is why Marc, Lee, Ruth, Grant and any number of other folks see a need for collaboration between quant and qual – so that we can get the right people, with the right tools, looking at different aspects of a development intervention to rigorously establish the existence of an impact, and then establish an equally rigorous understanding of the causal processes by which that impact came to pass. Nothing terribly new here, I think. Except, of course, for my continued claim that the qualitative work I do see associated with RCT work is mostly awful, tending toward bad journalism (see my discussion of bad journalism and bad qualitative work in the first post).
But this discussion misses a much larger point about epistemology – what I intended to write in this second part of the series all along. I do not see the dichotomy between measuring “if something works” and establishing “why something worked” as analytically valid. Simply put, without some (at least hypothetical) framing of causality, we cannot rigorously frame research questions around either question. How can you know if something worked, if you are not sure how it was supposed to work in the first place? Qualitative research provides the interpretive framework for the data collected via RCT4D efforts – a necessary framework if we want RCT4D work to be rigorous. By separating qualitative work from the quant-oriented RCT work, we are assuming that we can somehow pull data collection apart from the framing of the research question. We cannot – nobody is completely inductive, which means we all work from some sort of framing of causality. The danger comes when we don’t acknowledge this simple point – in most RCT4D work, those framings are implicit and completely uninterrogated by the practitioners. Even where they come to the fore (Duflo’s “three I’s”), they are not interrogated – they are assumed as framings for the rest of the analysis.
If we don’t have causal mechanisms, we cannot rigorously frame research questions to see if something is working – we are, as Marc says, “like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, because the only place he can actually see is under the street lamp.” Only I would argue that we are the drunk looking for his keys under a streetlamp with no idea whether they are there or not.
In short, I’m not beating up on RCT4D, nor am I advocating for more conversation – no, I am arguing that we need integration, teams with quant and qual skills that frame the research questions together, that develop tests together, that interpret the data together. This is the only way we will come to really understand the impact of our interventions, and how to more productively frame future efforts. Of course, I can say this because I already work in a mixed-methods world where my projects integrate the skills of GIScientists, land use modelers, climate modelers, biogeographers and qualitative social scientists – in short, I have a degree of comfort with this sort of collaboration. So, who wants to start putting together some seriously collaborative, integrated evaluations?
The Qualitative Research Challenge to RCT4D: Part 1
Those following this blog (or my twitter feed) know that I have some issues with RCT4D work. I’m actually working on a serious treatment of the issues I see in this work (i.e. journal article), but I am not above crowdsourcing some of my ideas to see how people respond. Also, as many of my readers know, I have a propensity for really long posts. I’m going to try to avoid that here by breaking this topic into two parts. So, this is part 1 of 2.
To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, the lack of such rigor has long been a problem in development research, and I have no doubt that the data they are gathering is valid. However, part of the reason I feel confident in this data is because, as I raised in an earlier post, it is replicating findings from the qualitative literature . . . findings that are, in many cases, long-established with rigorously-gathered, verifiable data. More on that in part 2 of this series.
One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity. However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data. Certainly, these are different means of establishing rigor and validity, but they are still there.
Without rigor and validity, qualitative research falls into bad journalism. As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples. These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader. When journalists attempt to move to explanation via these same few examples (as far too often columnists like Kristof and Friedman do), they start making unsubstantiated claims that generally fall apart under scrutiny. People mistake this sort of work for qualitative social science all the time, but it is not. Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature. Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.
This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement. For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data. In short, RCT4D often reverts to bad journalism when it comes time for explanation. Patterns gleaned from meticulously gathered data are explained in an offhand manner. For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of the “three I’s”: ideology, ignorance and inertia. This is a boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here). Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism. Sorry, but that’s the truth.
Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – horribly ironic for an approach that is trying to move development research past the anecdotal. This is not merely external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation. In short, the bulk of the explanation for the interesting patterns of behavior that emerge from these studies resorts to uninterrogated assumptions about human behavior that do not hold up to empirical reality. What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.
Coming soon: Part 2 – Qualitative research and the interpretation of empirical data
On explanation in development research
I was at a talk today where folks from Michigan State were presenting research and policy recommendations to guide the Feed the Future initiative. I greatly appreciate this sort of presentation – it is good to get real research in the building, and to see USAID staff who have so little time turn out in large numbers to engage. Once again, folks, it’s not that people in the agencies aren’t interested or don’t care; it’s a question of time and access.
In the course of one of the presentations, however, I saw a moment of “explanation” for observed behavior that nicely captures a larger issue that has been eating at me as the randomized control trials for development (RCT4D) movement gains speed . . . there isn’t a lot of explanation there. There is really interesting data, rigorously collected, but explanation is another thing entirely.
In the course of the presentation, the presenter put up a slide that showed a wide dispersion of prices around the average price received by farmers for their maize crops around a single market area (near where I happen to do work in Malawi). Nothing too shocking there, as this happens in Malawi, and indeed in many places. However, from a policy and programming perspective, it’s important to know that the average price is NOT the same thing as what a given household is taking home. But then the presenter explained this dispersion by noting (in passing) that some farmers were more price-savvy than others. This bothered me for two reasons:
1) there is no evidence at all to support this claim, either in his data or in the data I have from an independent research project nearby; and
2) this offhand explanation has serious policy ramifications.
This explanation is a gross oversimplification of what is actually going on here – in Mulanje (near the Luchenza market area analyzed in the presentation), price information is very well communicated in villages. Thus, while some farmers might indeed be more savvy than others, the prices they are able to get are communicated throughout the village, thus distributing that information. So the dispersion of prices is the product of other factors. Certainly desperation selling is probably part of the issue (another offhand explanation offered later in the presentation). However, what we really need, if we want a rigorous understanding of the causes of this dispersion and how to address it, is a serious effort to grasp the social component of agriculture in this area – how gender roles, for example, shape household power dynamics, farm roles, and the prices people will sell at (a social consideration that exceeds explanation via markets), or how social networks connect particular farmers to particular purchasers in a manner that facilitates or inhibits price maximization at market. These considerations are both causes of the phenomena the presenter described and the points of leverage on which policy might act to actually change outcomes. If farmers aren’t “price savvy,” this suggests the need for a very different sort of intervention than what would be needed to address gendered patterns of agricultural strategy tied to long-standing gender roles and expectations.
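As an aside, the statistical point about averages is easy to make concrete. The numbers below are entirely invented (they are not the MSU data or my Malawi data); the sketch just shows how one reported mean can hide the fact that most households take home far less:

```python
# Toy illustration (invented numbers, NOT the MSU or Malawi data):
# a single average farm-gate price can mask wide household-level dispersion.
import statistics

# Hypothetical prices (per kg) received by ten households at one market
prices = [14, 15, 15, 16, 22, 23, 24, 25, 38, 38]

mean_price = statistics.mean(prices)
spread = statistics.pstdev(prices)
below_mean = sum(p < mean_price for p in prices)

print(f"mean price: {mean_price:.1f}")        # the number a slide reports
print(f"std deviation: {spread:.1f}")         # the dispersion it hides
print(f"households below the mean: {below_mean}/{len(prices)}")
```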
This is a microcosm of what I am seeing in the RCT4D world right now – really rigorous data collection, followed by really thin interpretations of the data. It is not enough to just point out interesting patterns and then start throwing explanations out there – we must turn from the rigorous quantitative identification of significant patterns of behavior to the qualitative exploration of the causes of those patterns and their endurance over time. I’ve been wrestling with these issues in Ghana for more than a decade now, an effort that has most recently led me to a complete reconceptualization of livelihoods (shifting from understanding livelihoods as a means of addressing material conditions to a means of governing behaviors through particular ways of addressing material conditions – the article is in review at Development and Change). However, the empirical tests of this approach (with admittedly tiny-n samples in Ghana, and very preliminary looks at the Malawi data) suggest that I have better explanatory resolution for observed behaviors than is possible through existing livelihoods approaches (which would end up dismissing a lot of choices as illogical or the products of incomplete information) – and therefore a better foundation for policy recommendations than is available without this careful consideration of the social.
See, for example, this article I wrote on how we approach gender in development (also a good overview of the current state of gender and development, if I do say so myself). I empirically demonstrate that a serious consideration of how gender is constructed in particular places has large material outcomes on whose experiences we can understand, and therefore the sorts of interventions we might program to address particular challenges. We need more rigorous wrestling with “the social” if we are going to learn anything meaningful from our data. Period.
In summary, explanation is hard. Harder, in many ways, than rigorous data collection. Until we start spending at least as much effort on the explanation side as we do on the collection side, we will not really change much of anything in development.
On field experience and playing poor
There is a great post up at Good on “Pretending to be Poor” experiments, where participants try to live on tiny sums of money (i.e. $1.50/day) to better understand the plight of the global poor. Cord Jefferson refers to this sort of thing as “playing poor”, at least in part because participants don’t really live on $1.50 a day . . . after all, they are probably not abandoning their secure homes, and probably not working the sort of dangerous, difficult job that pays such a tiny amount. Consuming $1.50/day is one thing. Living on it is entirely another. (h/t to Michael Kirkpatrick at Independent Global Citizen for pointing out the post).
This, for me, brings up another issue – the “authenticity” of the experiences many of us have had while doing fieldwork (or working in field programs), an issue that has been amplified by what seems to be the recent discovery of fieldwork by the RCT4D crowd (I still can’t get over the idea that they think living among the poor is a revolutionary idea). The whole point of participant observation is to better understand what people do and why they do it by experiencing, to some extent, their context – I find it inordinately difficult to understand how people even begin to meaningfully parse social data without this sort of grounding. Having malaria while living in a village does help one come to grips, in a rather visceral way, with the challenges the disease might pose to making a living via agriculture. So too, living in a village during a drought that decimated a portion of the harvest, by putting me in a position where I had to go a couple of (intermittent) days without food, and with inadequate food for quite a few more, helped me to come to grips with both the capacity and the limitations of the livelihoods strategies in the villages I write about in Delivering Development, and gave me at least a limited understanding of the feelings of frustration and inadequacy that can arise when things go wrong in rural Africa, even as livelihoods strategies work to prevent the worst outcomes.
But the key part of that last sentence was “at least a limited understanding.” Being there is not the same thing as sharing the experience of poverty, development, or disaster. When I had malaria, I knew what clinics to go to, and I knew that I could afford the best care available in Cape Coast (and that care was very good) – I was not a happy guy on the morning I woke up with my first case, but I also knew where to go, and that the doctor there would treat me comprehensively and I would be fine. So too with the drought – the villages I was living in were, at most, about 5 miles (8km) from a service station with a food mart attached. Even as I went without food for a day, and went a bit hungry for many more, I knew in the back of my mind that if things turned dire, I could walk that distance and purchase all of the food I needed. In other words, I was not really experiencing life in these villages because I couldn’t, unless I was willing to throw away my credit card, empty my bank account, and renounce all of my upper-class and government colleagues and friends. Only then would I have been thrown back on what I could earn in a day in the villages and the (mostly appalling) care available in the rural clinic north of Eguafo. I was always critically aware of this fact, both in the moment and when writing and speaking about it since. Without that critical awareness, and a willingness to downplay our own (or others’) desire to frame our work as a heroic narrative, there is a real risk of creating our own versions of “playing poor” as we conduct fieldwork.
I'm a talking head . . .
Geoff Dabelko, Sean Peoples, Schuyler Null and the rest of the good folks at the Environmental Change and Security Program at the Woodrow Wilson Center for Scholars were kind enough to interview me about some of the themes in Delivering Development. They’ve posted the video on the ECSP’s blog, The New Security Beat (you really should be checking them out regularly). So, if you want to see/hear me (as opposed to read me), you can go over to their blog, or just click below.
Well, this is interesting . . .
It’s been a while since I focused on the environment side of the whole “global change” thing that this blog is supposed to be covering . . . at least directly. Pretty much everything we do in development is connected to the environment – indeed, of late I have been referring to climate change as development’s Kevin Bacon while at work: I can get you from climate change to a development challenge, or vice versa, in three steps or less. But I have not been writing much on the subject directly.
However, thanks to Garry over at Resilience Science, I’ve just read a really interesting article in Science (and a nice counterpoint to the recent bin Laden ambulance chasing in that journal) by Steve Carpenter and a bunch of others on Early Warnings of Regime Shifts in ecosystems. For years, I have been teaching my students about the challenges of global environmental change, and trying to impress upon them that the parts of these changes I find most worrying are those that are hardest to predict – the thresholds at which particular biophysical systems might make sudden, discontinuous transitions to new states. What has worried me, and I think much of the global change community, the most is the fact that we are not sure where these thresholds are, nor are we sure what it looks like when we approach one. Thus, there is a pervasive concern within the community that we won’t know we’ve crossed a threshold or done something irreversible.
Carpenter and his co-authors, however, tested the hypothesis that “catastrophic ecological regime shifts may be announced in advance by statistical early-warning signals such as slowing return rates from perturbation and rising variance” by artificially inducing a regime shift in an aquatic food web (Carpenter is a limnologist – he does lakes, as it were) while monitoring a nearby similar lake as a control. Their finding: they could see statistical warnings of an impending regime shift for more than a year before it occurred, validating their chosen early warning indicators (selected on the basis of previously constructed understandings of the food web in question, and a bit much to synthesize here).
That there might be early warning indicators, or that the variables chosen by Carpenter et al. served as useful early warning indicators for regime change in this particular system, is not terribly surprising. What is interesting, though, is that the authors were able to demonstrate in a real-world (experimental) context (as opposed to desk theorization) that the early warning signals of regime shift are in fact detectable and measurable. Granted, this is for a small, bounded food web – but the demonstration is important in a much wider way. If we can find early warning indicators for regime shift in a small food web, there is no reason why we cannot find indicators for other complex systems – we could find many more early warning indicators of the discontinuous changes we fear, and in enough time to possibly address those changes before they occur.
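For readers who want intuition for how “slowing return rates and rising variance” work as warning signals, here is a minimal toy sketch – emphatically not Carpenter et al.’s lake analysis, just a generic noisy system whose restoring force I weaken over time to mimic an approach to a threshold. Every parameter in it is an illustrative assumption:

```python
# Toy illustration of "critical slowing down" early-warning statistics.
# NOT Carpenter et al.'s analysis: just a noisy variable x whose restoring
# force k decays toward zero (the threshold). As k shrinks, perturbations
# die away more slowly, so variance and lag-1 autocorrelation both rise.
import random
import statistics

random.seed(42)
steps, dt, sigma = 2000, 0.1, 0.5
x, xs = 0.0, []
for t in range(steps):
    k = 2.0 * (1 - t / steps)        # restoring force weakens over time
    x += -k * x * dt + sigma * random.gauss(0.0, dt ** 0.5)
    xs.append(x)

window = 400
for start in (0, 800, 1600):         # early, middle, late windows
    w = xs[start:start + window]
    mean, var = statistics.fmean(w), statistics.pvariance(w)
    ac1 = sum((w[i] - mean) * (w[i + 1] - mean) for i in range(window - 1))
    ac1 /= (window - 1) * var
    print(f"steps {start}-{start + window}: variance={var:.3f}, lag-1 autocorr={ac1:.2f}")
```

The mechanics here are generic, which is both the promise (transferable indicators for other complex systems) and, as the caveat below notes, the peril: the statistics can rise without telling you anything about the mechanism driving the shift.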
But one big caveat here: this study did not reveal the actual mechanisms of regime shift. As the authors note:
The precise mechanism of the nonlinear transitions is not known for our experiment; it could be one of the processes proposed in the literature, or something else. These early warning signals are expected to occur for a wide class of nonlinear transitions (7). Even though the mechanism is not known, manipulation of an apex predator triggered a nonlinear food web transition that was signaled by early warning indicators more than a year before the food web transition was complete. Thus the early warning indicators appear to be useful even in cases where the form of the potential regime shift is not known.
It seems to me that there is a serious risk of conflating correlation and causation here – that the authors got a bit lucky in this experiment, but that in other systems without an adequate understanding of the mechanisms of change, false correlations could cause us to lose the signal of regime shift in the noise of inappropriate data points. I’m not sure how, or if, they intend to address this . . . but I think they will have to, if we are to usefully apply this to our food-producing ecosystems in a manner that allows us to think about sustainable development and food security in a meaningful way . . .