Food security and conflict done badly . . .

Over at the Guardian, Damian Carrington has a blog post arguing that “Food is the ultimate security need.”  He bases this argument on a map produced by risk analysts Maplecroft, which sounds quite rigorous:

The Maplecroft index [represented on the map], reviewed last year by the World Food Programme, uses 12 types of data to derive a measure of food risk that is based on the UN FAO’s concept. That covers the availability, access and stability of food supplies, as well as the nutritional and health status of populations.

I’m going to leave aside the question of whether we can or should be linking food security to conflict – Marc Bellemare is covering this issue in his research and has a nice short post up that you should be reading.  He also has a link to a longer technical paper where he interrogates this relationship . . . I am still wading through it, as it involves a somewhat frightening amount of math, but if you are statistically inclined, check it out.

Instead, I would like to quickly raise some questions about this index and the map that results. First, the construction of the index itself is opaque (I assume because it is seen as a proprietary product), so I have no idea what is actually in there.  Given the character of the map, though, it looks like it was constructed from national-level data.  If it was, it is not particularly useful – food insecurity is not only about the amount of food available, but about access and entitlement to that food, and these are things that tend to be determined locally.  You cannot aggregate entitlement at the national level and get a meaningful understanding of food insecurity – and certainly not actionable information.

Further, you can’t aggregate food markets or prices at the national level and get anything meaningful with regard to food security – let’s compare Maplecroft’s map with FEWS-NET’s maps for the immediate future (August-September 2011):

First Maplecroft:

Now FEWS-NET:

While FEWS-NET does not have global coverage, compare their maps to those of Maplecroft and you see two things: One, FEWS is clearly working at a much finer geographic scale, because they have on-the-ground information about actual markets and access, as well as a deep understanding of climate and livelihoods through which to contextualize their grounded data.  This is what it takes to represent variable vulnerability within a country.  The variability you see on their map illustrates my point about the problems of national-level statistics – clearly food insecurity is a regional-to-local problem in every country, even Somalia.  Two, FEWS is not projecting major risk in the same places as Maplecroft, whose map has painted most of equatorial and dryland Africa as problematic at best.  Now, FEWS-NET’s medium-term projections (October-December 2011):

Again, no real resemblance to the Maplecroft map.

Now, you can argue that the Maplecroft map is aimed at a different goal than the FEWS-NET maps, as Maplecroft is trying to create a risk-assessment picture of food security in the region.  However, Maplecroft’s timescale is unclear (does it cover the next 6 months? 1 year? 5 years?), and its data is so over-aggregated as to be non-actionable.  You can’t build policy or programs from this, and I would argue that you can’t really assess the risk of food insecurity from the map or the underlying index either.  FEWS-NET’s maps are what actionable information looks like . . .

I appreciate the point Carrington is trying to make on his blog – food security is a really important issue.  But if we are to address the challenges of hunger and conflict, we need to build our understanding of the connection between them from meaningful data . . . and probably work from the outstanding material already available via FEWS-NET and others.

Relief vs. Fundraising

David Rieff has a great piece on ForeignPolicy.com called “Millions May Die . . . Or Not.”  It is hard to read, in some ways, because nobody really wants to criticize folks whose hearts are in the right place.  At the same time, couching pleas for aid in ever-escalating “worst disaster ever” claims risks the long-term viability of charitable contributions:

By continually upping the rhetorical ante, relief agencies, whatever their intentions, are sowing the seeds of future cynicism, raising the bar of compassion to the point where any disaster in which the death toll cannot be counted in the hundreds of thousands, that cannot be described as the worst since World War II or as being of biblical proportions, is almost certainly condemned to seem not all that bad by comparison.

I see this as akin to blizzard predictions – what one of my friends long ago started calling the “Storm of the Century of the Week” problem.  I cannot take an apocalyptic blizzard prediction seriously anymore, because they are all apocalyptic.  One day this will bite me in the ass, I know . . . well, unless I stay in DC and/or South Carolina.
But there was one thing left unexamined in the article that I wonder about – Rieff notes, quite rightly, that:

All relief agencies know that, where disasters are concerned, not only the media but the public as a whole practices a species of serial monogamy, focusing on one crisis to the exclusion of all others until what is sometimes called “compassion fatigue” sets in. Then, attention shifts to the next emergency.

Rieff does not tell us the origins of this syndrome – the article seems to suggest that it “just exists,” a cause of the ever-escalating claims about the scale and scope of a given disaster.  I wonder, however, if he has overlooked something important here – that perhaps the escalating claims are the very thing that has created this “serial charity/aid monogamy” by overwhelming our capacity to address the wide range of needs that exist in the world.
In short, has the competition for relief dollars created a cycle in which claims about the magnitude of the crisis will continue to inflate, further focusing the attention of the public and media into shorter and shorter cycles until that attention completely evaporates?  Are we looking at a midpoint in the creative destruction of the relief industry?  And what have the policy implications of this narrowing been – is there space to back up and think more holistically, and with greater perspective, so we can do a better job of assessing need and our capacity to meet it?



The Qualitative Research Challenge to RCT4D: Part 1

Those following this blog (or my Twitter feed) know that I have some issues with RCT4D work.  I’m actually working on a serious treatment of the issues I see in this work (i.e. a journal article), but I am not above crowdsourcing some of my ideas to see how people respond.  Also, as many of my readers know, I have a propensity for really long posts.  I’m going to try to avoid that here by breaking this topic into two parts.  So, this is part 1 of 2.
To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, this has long been a problem in development research, and I have no doubt that the data they are gathering is valid.  However, part of the reason I feel confident in this data is because, as I raised in an earlier post,  it is replicating findings from the qualitative literature . . . findings that are, in many cases, long-established with rigorously-gathered, verifiable data.  More on that in part 2 of this series.
One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.  However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data.  Certainly, these are different means of establishing rigor and validity, but they are still there.
Without rigor and validity, qualitative research falls into bad journalism.  As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples.  These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader.  When journalists attempt to move to explanation via these same few examples (as columnists like Kristof and Friedman far too often do), they start making unsubstantiated claims that generally fall apart under scrutiny.  People mistake this sort of work for qualitative social science all the time, but it is not.  Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature.  Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.
This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement.  For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data.  In short, RCT4D often reverts to bad journalism when it comes time for explanation.  Patterns gleaned from meticulously gathered data are explained in an offhand manner.  For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of the “three I’s”: ideology, ignorance and inertia.  This is a boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here).  Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism.  Sorry, but that’s the truth.
Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – horribly ironic for an approach that is trying to move development research past the anecdotal.  This is not merely external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation.  In short, the bulk of the explanation for the interesting patterns of behavior that emerge from these studies resorts to uninterrogated assumptions about human behavior that do not hold up to empirical reality.  What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.
Coming soon: Part 2 – Qualitative research and the interpretation of empirical data

On explanation in development research

I was at a talk today where folks from Michigan State were presenting research and policy recommendations to guide the Feed the Future initiative.  I greatly appreciate this sort of presentation – it is good to get real research in the building, and to see USAID staff who have so little time turn out in large numbers to engage.  Once again, folks, it’s not that people in the agencies aren’t interested or don’t care, it’s a question of time and access.
In the course of one of the presentations, however, I saw a moment of “explanation” for observed behavior that nicely captures a larger issue that has been eating at me as the randomized control trials for development (RCT4D) movement gains speed . . . there isn’t a lot of explanation there.  There is really interesting data, rigorously collected, but explanation is another thing entirely.
In the course of the presentation, the presenter put up a slide that showed a wide dispersion of prices around the average price received by farmers for their maize crops around a single market area (near where I happen to do work in Malawi).  Nothing too shocking there, as this happens in Malawi, and indeed in many places.  However, from a policy and programming perspective, it’s important to know that the average price is NOT the same thing as what a given household is taking home.  But then the presenter explained this dispersion by noting (in passing) that some farmers were more price-savvy than others.
Two problems with this explanation:
1) There is no evidence at all to support this claim, either in his data or in the data I have from an independent research project nearby.
2) This offhand explanation has serious policy ramifications.
This explanation is a gross oversimplification of what is actually going on here – in Mulanje (near the Luchenza market area analyzed in the presentation), price information is very well communicated in villages.  Thus, while some farmers might indeed be more savvy than others, the prices they are able to get are communicated throughout the village, thus distributing that information.  So the dispersion of prices is the product of other factors.  Certainly desperation selling is probably part of the issue (another offhand explanation offered later in the presentation).  However, what we really need, if we want a rigorous understanding of the causes of this dispersion and how to address it, is a serious effort to grasp the social component of agriculture in this area – how gender roles, for example, shape household power dynamics, farm roles, and the prices people will sell at (a social consideration that exceeds explanation via markets), or how social networks connect particular farmers to particular purchasers in a manner that facilitates or inhibits price maximization at market.  These considerations are both causes of the phenomena the presenter described and the points of leverage on which policy might act to actually change outcomes.  If farmers aren’t “price savvy”, this suggests the need for a very different sort of intervention than what would be needed to address gendered patterns of agricultural strategy tied to long-standing gender roles and expectations.
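Since the average-versus-take-home point is easy to gloss over, here is a minimal illustration (with entirely made-up prices – this is not the presenter’s data) of how an unremarkable average can conceal exactly the dispersion at issue:

    # Hypothetical farmgate maize prices for one market area (made-up numbers).
    import statistics

    prices = [55, 60, 70, 95, 120, 150]  # what individual households actually received
    mean_price = statistics.mean(prices)

    print(f"average price: {mean_price:.0f}")      # the statistic a report would cite
    print(f"range: {min(prices)}-{max(prices)}")   # what households actually take home
    print(f"below average: {sum(p < mean_price for p in prices)} of {len(prices)} households")

Half of these hypothetical households receive well under the “average” price – which is why programming built around that average will misread what is happening at the household level.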
This is a microcosm of what I am seeing in the RCT4D world right now – really rigorous data collection, followed by really thin interpretations of the data.  It is not enough to just point out interesting patterns, and then start throwing explanations out there – we must turn from rigorous quantitative identification of significant patterns of behavior to the qualitative exploration of the causes of those patterns and their endurance over time.  I’ve been wrestling with these issues in Ghana for more than a decade now, an effort that has most recently led me to a complete reconceptualization of livelihoods (shifting from understanding livelihoods as a means of addressing material conditions to a means of governing behaviors through particular ways of addressing material conditions – the article is in review at Development and Change).  However, the empirical tests of this approach (with admittedly tiny-n samples in Ghana, and very preliminary looks at the Malawi data) suggest that it offers better explanatory resolution for observed behaviors than existing livelihoods approaches (which would end up dismissing a lot of choices as illogical or the products of incomplete information) – and therefore a better foundation for policy recommendations than is available without this careful consideration of the social.
See, for example, this article I wrote on how we approach gender in development (also a good overview of the current state of gender and development, if I do say so myself).  I empirically demonstrate that a serious consideration of how gender is constructed in particular places has large material outcomes on whose experiences we can understand, and therefore the sorts of interventions we might program to address particular challenges.  We need more rigorous wrestling with “the social” if we are going to learn anything meaningful from our data.  Period.
In summary, explanation is hard.  Harder, in many ways, than rigorous data collection.  Until we start spending at least as much effort on the explanation side as we do on the collection side, we will not really change much of anything in development.

On field experience and playing poor

There is a great post up at Good on “Pretending to be Poor” experiments, where participants try to live on tiny sums of money (i.e. $1.50/day) to better understand the plight of the global poor.  Cord Jefferson refers to this sort of thing as “playing poor”, at least in part because participants don’t really live on $1.50 a day . . . after all, they are probably not abandoning their secure homes, and probably not working the sort of dangerous, difficult job that pays such a tiny amount.  Consuming $1.50/day is one thing.  Living on it is entirely another.  (h/t to Michael Kirkpatrick at Independent Global Citizen for pointing out the post).
This, for me, brings up another issue – the “authenticity” of the experiences many of us have had while doing fieldwork (or working in field programs), an issue that has been amplified by what seems to be the recent discovery of fieldwork by the RCT4D crowd (I still can’t get over the idea that they think living among the poor is a revolutionary idea).  The whole point of participant observation is to better understand what people do and why they do it by experiencing, to some extent, their context – I find it inordinately difficult to understand how people even begin to meaningfully parse social data without this sort of grounding.  In a concrete way, having malaria while in a village does help one come to grips, in a rather visceral way, with the challenges it poses to making a living via agriculture.  So too, living in a village during a drought that decimated a portion of the harvest – which put me in a position where I had to go a couple of (intermittent) days without food, and with inadequate food for quite a few more – helped me come to grips with both the capacity and the limitations of the livelihoods strategies in the villages I write about in Delivering Development, and gave me at least a limited understanding of the feelings of frustration and inadequacy that can arise when things go wrong in rural Africa, even as livelihoods strategies work to prevent the worst outcomes.
But the key part of that last sentence was “at least a limited understanding.”  Being there is not the same thing as sharing the experience of poverty, development, or disaster.  When I had malaria, I knew what clinics to go to, and I knew that I could afford the best care available in Cape Coast (and that care was very good) – I was not a happy guy on the morning I woke up with my first case, but I also knew where to go, and that the doctor there would treat me comprehensively and I would be fine.  So too with the drought – the villages I was living in were, at most, about 5 miles (8km) from a service station with a food mart attached.  Even as I went without food for a day, and went a bit hungry for many more, I knew in the back of my mind that if things turned dire, I could walk that distance and purchase all of the food I needed.  In other words, I was not really experiencing life in these villages because I couldn’t, unless I was willing to throw away my credit card, empty my bank account, and renounce all of my upper-class and government colleagues and friends.  Only then would I have been thrown back on only what I could earn in a day in the villages and the (mostly appalling) care available in the rural clinic north of Eguafo.  I was always critically aware of this fact, both in the moment and when writing and speaking about it since.  Without that critical awareness, and a willingness to resist our own (or others’) desire to frame our work as a heroic narrative, there is a real risk of creating our own versions of “playing poor” as we conduct fieldwork.

Mendacious crap . . .

A letter to the editor in today’s Washington Post (scroll down to the second letter on this page) infuriated me beyond words.  People have a right to offer their opinion, however ill-informed, in a democracy.  Nobody, however, has a right to basically lie outright.  Yet James L. Henry, chairman of USA Maritime, managed to get a letter to the editor published that did just that.
Henry was responding to the column “5 Myths About Foreign Aid,” in which the author, John Norris, the executive director of the sustainable security program at the Center for American Progress, quite rightly noted:

Congress mandates that 75 percent of all U.S. international food aid be shipped aboard U.S. flagged vessels — ships registered in the United States. A study by several researchers at Cornell University concluded that this subsidy of elite U.S. shipping companies cost American taxpayers $140 million in unnecessary transportation costs during 2006 alone.

The Government Accountability Office noted that between 2006 and 2008, U.S. food aid funding increased by nearly 53 percent, but the amount of food delivered actually decreased by 5 percent. Why? Because our food aid policies are swayed by an agribusiness lobby that stresses buying American, not buying cheaply.

Both of these points are well-documented.  But Henry, chair of the interest group that protects the US shipping industry from competition in the delivery of food aid, really doesn’t want you to know this.  Instead, he argues:

The reality is that cargo preference adds no additional cost to foreign aid programs and should be credited with sustaining an essential national defense sealift capability.

Cargo preference does not divert one dollar away from food aid programs. To the extent that cargo preference increases costs, the difference has been reimbursed by the Transportation Department. For example, reimbursements resulted in a $128 million net increase in available food aid funding in 2006. The Transportation Department reimburses these costs because a reliable U.S.-flag commercial fleet provides essential sealift capacity in times of war or national emergencies.

The language here is very careful – technically, he is not lying.  But by no means is his explanation meant to help the reader understand what is going on.  In arguing that “cargo preference does not divert one dollar away from food aid programs,” he fails to point out that the cost of cargo preference is built into existing budgets . . . it is part of existing food aid programs, and therefore technically does not divert money from them.  But this, of course, is not what Norris meant in his “5 myths” piece, nor is it what most people care about.  The simple fact of the matter is that more of the food aid budget could go to procuring food if the cargo preference requirement were dropped.  Period.
Second, if we read these two paragraphs carefully, we find that Henry is engaged in one of the more carefully phrased but entertainingly contradictory bits of writing I have ever seen.  Pay attention, now: in the first paragraph, he argues that “cargo preference adds no additional cost to foreign aid programs.”  In the second, he notes “To the extent that cargo preference increases costs, the difference has been reimbursed by the Transportation Department.” OK, first, let’s note that he just admitted that cargo preference does increase costs.  Second, he is technically correct – the burden of those costs is shifted outside foreign aid programs . . . to the Transportation Department.  Which is funded by the same tax dollars as foreign aid.  Basically, he is arguing that their taxpayer-funded subsidy/reimbursement should not be seen as having any impact on taxpayer-funded foreign relief operations.  Even though these are the same tax dollars, in the end.
Technically all true.  Clearly intended to deceive.  So, WaPo, how do you feel about publishing letters from poverty/disaster profiteers?

Qualitative research was (already) here . . .

You know, qualitative social scientists of various stripes have long complained of their marginalization in development.  Examples abound of anthropologists, geographers, and sociologists complaining about the influence of the quantitatively-driven economists (and to a lesser extent, some political scientists) over development theory and policy.  While I am not much for whining, these complaints are often on the mark – quantitative data (of the sort employed by economists, and currently all the rage in political science) tends to carry the day over qualitative data, and the nuanced lessons of ethnographic research are dismissed as unimplementable, idiosyncratic/place-specific, without general value, etc.  This is not to say that I have an issue with quantitative data – I believe we should employ the right tool for the job at hand.  Sadly, most people have only qualitative or only quantitative skills, making the selection of appropriate tools pretty difficult . . .
But what is interesting, of late, is what appears to be a turn toward the lessons of the qualitative social sciences in development . . . only without actually referencing or reading those qualitative literatures.  Indeed, the former quantitative masters of the development universe are now starting to figure out and explore . . . the very things that the qualitative community has known for decades.  What is really frustrating and galling is that these “new” studies are being lauded as groundbreaking and getting great play in the development world, despite the fact that they are reinventing the qualitative wheel, and without much of the nuance the qualitative literature has developed over several decades.
What brings me to today’s post is the new piece on hunger in Foreign Policy by Abhijit Banerjee and Esther Duflo.  On one hand, this is great news – good to see development rising to the fore in an outlet like Foreign Policy.  I also largely agree with their conclusions – that the poverty trap/governance debate in development is oversimplified, that food security outcomes are not explicable through a single theory, etc.  On the other hand, from the perspective of a qualitative researcher looking at development, there is nothing new in this article.  Indeed, the implicit premise of the article is galling: when they argue that to address poverty, “In practical terms, that meant we’d have to start understanding how the poor really live their lives,” the implication is that nobody has been doing this.  But what of the tens of thousands of anthropologists, geographers and sociologists (as well as representatives of other cool, hybridized fields like new cultural historians and ethnoarchaeologists)?  Hell, what of the Peace Corps?
Whether intentional or not, this article wipes the qualitative research slate clean, allowing the authors to present their work in a methodological and intellectual vacuum.  This is the first of my problems with this article – not so much with its findings, but with its appearance of method.  While I am sure that there is more to their research than presented in the article, the way their piece is structured, the case studies look like evidence/data for a new framing of food security.  They are not – they are illustrations of the larger conceptual points that Banerjee and Duflo are making.  I am sure that Banerjee and Duflo know this, but the reader does not – instead, most readers will think this represents some sort of qualitative research, or a mixed-method approach that takes “hard numbers” and mixes them in with the loose suppositions that Banerjee and Duflo offer by way of explanation for the “surprising” outcomes they present.  But loose supposition is not qualitative research – at best, it is journalism.  Bad journalism.  My work, and the work of many, many colleagues, is based on rigorous methods of observation and analysis that produce verifiable data on social phenomena.  The work that led to Delivering Development and many of my refereed publications took nearly two years of on-the-ground observation and interviewing, including follow-ups, focus groups and even the use of archaeology and remotely-sensed data on land use to cross-check and validate both my data and my analyses.
The result of all that work was a deep humility in the face of the challenges that those living in places like Coastal Ghana or Southern Malawi manage on a day-to-day basis . . . and deep humility when addressing the idea of explanation.  This is an experience I share with countless colleagues who have spent a lot of time on the ground in communities, ministries and aid organizations, a coming to grips with the fact that massively generalizable solutions simply don’t exist in the way we want them to, and that singular interventions will never address the challenges facing those living in the Global South.
So, I find it frustrating when Banerjee and Duflo present this observation as in any way unique:

What we’ve found is that the story of hunger, and of poverty more broadly, is far more complex than any one statistic or grand theory; it is a world where those without enough to eat may save up to buy a TV instead, where more money doesn’t necessarily translate into more food, and where making rice cheaper can sometimes even lead people to buy less rice.

For anyone working in food security – that is, anyone who has been reading the literature coming out of anthropology, geography, sociology, and even some areas of ag econ – this is not a revelation; it is standard knowledge.  A few years ago I spent a lot of time and ink on an article in Food Policy that tried to loosely frame a schematic of local decision-making that leads to food security outcomes – an effort to systematize an approach to the highly complex sets of processes and decisions that produce hunger in particular places, because there is really no way to get a single, generalized statistic or finding that will explain hunger outcomes everywhere.
In other words: We know.  So what do you have to tell us?
The answer, unfortunately, is not very much . . . because in the end they don’t really dive into the social processes that lead to the sorts of decisions they find interesting or counterintuitive.  This is where the heat is in development research – there are a few of us working down at this level, trying to come up with new framings of social process that move us past a reliance solely on the blunt tool of economistic rationality (which can help explain some behaviors and decisions) toward a more nuanced framing of how those rationalities are constructed by, and mobilize, much larger social processes like gender identification.  The theories we are dealing with are very complex, but they do work (at least I think my work with governmentality is working – but the reviewers at Development and Change might not agree).
And maybe, just maybe, there is an opening to get this sort of work out into the mainstream, to get it applied – we’re going to try to do this at work, pulling together resources and interests across two Bureaus and three offices to see if a reframing of livelihoods around Foucault’s idea of governmentality can, in fact, get us better resolution on livelihoods and food security outcomes than current livelihoods models (which mostly assume that decision-making is driven by an effort to maximize material returns on investment and effort).  Perhaps I place too much faith in the idea of evidence, but if we can implement this idea and demonstrate that it works better, perhaps we will have a lever with which to push oversimplified economistic assumptions out of the way, while still doing justice to the complexity of social process and explanation in development.

What else we don't know about adaptation

RealClimate had an interesting post the other day about adaptation – specifically, how we bring together models that operate at global-to-regional scales with an understanding of current and future impacts of climate change, which we feel at the local scale.  The post was written from a climate science perspective, and so focuses on modeling capabilities and needs as related to the biophysical world.  In doing so, though, it passes over what I see as a huge uncertainty in our use of downscaled models for adaptation planning: the likely pathways of human response to changes in the climate over the next several decades.  In places like sub-Saharan Africa, how people respond to climate change will have impacts on land use decisions, and therefore land cover . . . and land cover is a key component of local climate.  In other words, as we downscale climate models, we need to start adding new types of data to them – social data on adaptation decision-making – so that we might project plausible future pathways and build them into these downscaled models.
For example, many modeling exercises currently suggest that a combination of temperature increases and changes in the amount and pattern of rainfall in parts of southern Africa will make it very difficult to raise maize there over the next few decades.  This is a major problem, as maize is a staple of the region.  So, what will people do?  Will they continue to grow maize that is less hardy and takes up less CO2 and water as it grows, will they switch to a crop that takes up more CO2 than maize ever did, or will they begin to abandon the land and migrate to cities, creating pockets of fallow land and/or opening a frontier for mechanized agriculture (both outcomes likely to have significant impacts on greenhouse gas emissions and water cycling, among other things)?  Simply put, we don’t really know.  But we need to know, and we need to know with reasonably high resolution.  That is, it is not enough to simply say “they will stop planting maize and plant X.”  We need to know when this transition will take place.  We need to know if it will happen suddenly or gradually.  We need to know if that transition will itself be sustainable going forward, or if other changes will be needed in the near future.  All of this information needs to be part of iterative model runs that capture land cover changes and biogeochemical cycling changes associated with these decisions to better understand future local pathways of climate change impacts and the associated likely adaptation pathways that these populations will occupy.
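To make the kind of iterative, coupled run I have in mind concrete, here is a deliberately toy sketch.  Every number and rule in it is invented – the rainfall trend, the threshold at which households abandon maize, the rate at which they do so, the land cover feedback – and supplying defensible, empirically grounded versions of those rules is precisely the research problem:

    # A purely illustrative coupling of a downscaled climate signal to an
    # adaptation decision rule and a land cover feedback (all numbers made up).

    def rainfall(year, maize_fraction):
        base = 800 - 5 * year                  # hypothetical drying trend, mm/yr
        feedback = -50 * (1 - maize_fraction)  # hypothetical land cover feedback
        return base + feedback

    def choose_crop(rain, maize_fraction):
        # Hypothetical adaptation rule: below a rainfall threshold, some
        # households switch out of maize each year.  The threshold and the
        # rate of switching are exactly what grounded social data would supply.
        if rain < 650:
            maize_fraction = max(0.0, maize_fraction - 0.05)
        return maize_fraction

    maize_fraction = 1.0
    for year in range(40):
        rain = rainfall(year, maize_fraction)
        maize_fraction = choose_crop(rain, maize_fraction)
        print(f"year {year:2d}: rain {rain:4.0f} mm, maize cover {maize_fraction:.2f}")

The point of the sketch is the loop, not the numbers: decision rules change land cover, land cover changes the local climate signal, and the changed signal feeds the next round of decisions.  Whether the transition is sudden or gradual – and when it starts – depends entirely on the social data we feed into rules like choose_crop.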
The good news* is that I am on this – along with my colleague Brent McCusker at West Virginia University (see pubs here and here).  Between the two of us, we’ve developed a pretty solid understanding of adaptation and livelihoods decision-making, and have spent a good bit of time theorizing the link between land use change and livelihoods change to enable the examination of the issues I have raised above.  We have a bit of money from NSF to run a pilot this summer (Brent will manage this while I am a government employee), and I plan to spend next year working on how to integrate this research program into the global climate change programming of my current employer.
Long and short: climate modelers, you need us social scientists, now more than ever.  We’re here to work with you . . .
*Calling this good news presumes that you see me as competent, or at least that you see Brent as competent enough to make up for my incompetence.

Athletic aid? The cost of college athletics

Athletic aid?  Do universities really subsidize their athletic departments in these hard times, and can they really make those subsidies back?  Phil Miller at Environmental Economics provided a link to data on athletic department revenues and expenses for most D-1 schools.  It is an interesting dataset, especially at a time when ever-contracting budgets make subsidies for athletic departments less and less attractive.  At the same time, I am a former college athlete (track and field, one of those sports that costs a lot more than it will ever make) and I don’t want to see athletic departments tossed entirely.  I wonder if public disclosure of these costs in such an easily accessible form will do anything in terms of public awareness and attitudes in a tough economy.  My guess is that the answer will depend on the school in question.  I assure you that no matter what the figures, even Tea Party central (or as you might call it, the state of South Carolina) is going to just keep supporting the athletic departments of Clemson and the University of South Carolina (where, in full disclosure, I should note that I am currently employed).
A quick glance at the data suggests that the University of South Carolina’s athletic department does quite well year-to-year.  When you subtract expenses from revenues, you get a profit of more than $2 million in 2008, a little over half a million in 2009, and nearly $1.6 million in 2010.  This looks great, until you take the data apart a little.  In the good news column, the university has not paid any direct subsidy to the athletic department over the past three years.  However, it has been charging student fees to support the department.  A lot of student fees: in 2008, $1,987,931.  In 2009, $2,098,087.  And in 2010, $2,146,293.  That trend is probably going in the wrong direction, though not all that much.  But if we take the student fee subsidy (and let’s be honest, that’s what it is) out of the revenue column, the figures don’t look that rosy:
2008: a tiny profit, less than $200k
2009: a loss of $1.6 million
2010: a loss of more than half a million.
Yep, over the past three years, the athletic department has lost around $2 million.  Now, to be fair, that does not include the revenue from branded merchandise not attributed to the athletic department (i.e. bookstore sales, which are huge), which went to the university general fund and likely pushed this figure back toward revenue neutrality.
Super.
Now, let’s look up the road to our in-state rival, Clemson (also a state school, though a lot of people seem to be unaware of this).  Again, a quick look at the numbers suggests that Clemson more or less ran in the black, making more than $870,000 in 2008, losing $500,000 in 2009, and rebounding to make $780,000 in 2010.  But if you take apart the numbers, it gets ugly quick:
In 2008, the school subsidized the athletic department to the tune of $2,435,268, and charged students an additional $1,501,216 in fees to support the department ($3.9 million total).  In 2009, it was a subsidy of $2,924,005 and fees of $1,535,940 (nearly $4.5 million).  Finally, in 2010, the numbers were $3,233,520 in subsidy and $1,585,556 in fees ($4.8 million).  This is a lot of money in a state that is more or less broke, and in which tuition and fees continue to rise.  The net?
2008: A loss of over $3 million
2009: A loss of $5 million
2010: Another loss of $4 million
Yeah, there is no way they are covering that with merchandising, and given the relatively poor quality of the teams coming out of their program in recent years, I doubt this is spurring serious alumni donations.
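For anyone who wants to check the arithmetic, here is a quick sketch using the figures above (the reported profits are rounded from the prose, so the nets are approximate):

    # Net position for each athletic department once subsidies are stripped out.
    def net(profit, student_fees, direct_subsidy=0):
        return profit - student_fees - direct_subsidy

    # University of South Carolina: (approximate reported profit, student fees)
    usc = {2008: (2_100_000, 1_987_931),
           2009: (550_000, 2_098_087),
           2010: (1_600_000, 2_146_293)}

    # Clemson: (approximate reported profit, student fees, direct subsidy)
    clemson = {2008: (870_000, 1_501_216, 2_435_268),
               2009: (-500_000, 1_535_940, 2_924_005),
               2010: (780_000, 1_585_556, 3_233_520)}

    for year, (profit, fees) in usc.items():
        print(f"USC {year}: {net(profit, fees):+,.0f}")
    for year, (profit, fees, subsidy) in clemson.items():
        print(f"Clemson {year}: {net(profit, fees, subsidy):+,.0f}")

Run it and the totals above fall out: roughly $2 million in losses over three years in Columbia, and roughly $12 million up the road at Clemson.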
Oh, and WTF is going on with Virginia’s (my old athletic department) numbers?  In 2010, they ran an $11 million profit, but charged the students $12 million in fees?  This makes absolutely no sense . . . I have to assume the data here is screwed up somehow, as that would work out to around $870 per student (including grads and professional students)!  There is no way that flies there.  I can only assume we are looking at a lot of misreported fees – I mean, looking at dollars in and dollars out, why not just cut the fees down to $1 million and go for revenue neutrality?