Entries tagged with “research”.


So, DfID paid London’s School of Oriental and African Studies (SOAS) more than $1 million to answer a pretty important question: whether or not Fairtrade certification improves growers’ lives. As has shown up in the media (see here and here) and around the development blogosphere (here), the headline finding of the report was unexpected: wage workers on Fairtrade-certified sites made less than those working on regular farms. Admittedly, this is a pretty shocking finding, as it undermines the basic premise of Fairtrade.

Edit 12 June: As Matt Collin notes in a comment below, this reading of the study is flawed, as it was not set up to capture the wage effects of Fairtrade. There were no baselines, and without baselines it is impossible to tell if there were improvements in Fairtrade sites – in short, the differences seen in the report could just be pre-existing differences, not a failure of Fairtrade. See the CGDev blog post on this here. So the press’ reading of this report is pretty problematic.

At the same time, this whole discussion completely misses the point. Fairtrade doesn’t work as a development tool because, in the end, Fairtrade does absolutely nothing to address the structural inequalities faced by those in the primary sector of the global economy relative to basically everyone else. Paying an African farmer a higher wage/better price means they are now a slightly wealthier farmer. They are still exposed to environmental shocks like drought and flooding, still tied to shocks and trends in global commodities markets over which they have almost no leverage at all, often still producing commodities (like coffee and cocoa) for which demand is very, very elastic, and in the end still living in states without safety nets to help them weather these economic and environmental shocks. Yes, I think African farmers are stunningly resilient, intelligent people (I write about this a lot). But the convergence of the challenges I just listed means that most farmers in the Global South are addressing one or more of them almost all the time, and the cost of managing these challenges is high (both in terms of hedging and coping). Incremental changes in agricultural incomes will be absorbed, by and large, by these costs – this is not a transformative development pathway.

So why is everyone freaking out at the $1 million finding – even if that finding misrepresents the actual findings of the report? Because it brutally rips the Fairtrade band-aid off the global economy, and strips away any feeling of “doing our part” from those who purchase Fairtrade products. But of course, those of us who purchase Fairtrade products were never doing our part. If anything, we were allowing the shiny idea of better incomes and prices to obscure the structural problems that would always limit the impact of Fairtrade in the lives of the poor.

Andy Sumner was kind enough to invite me to provide a blog entry/chapter for his forthcoming e-book The Donors’ Dilemma: Emergence, Convergence and the Future of Aid. I decided to use the platform as an opportunity to expand on some of my thoughts on the future of food aid and food security in the context of a changing climate.

My central point:

By failing to understand existing agricultural practices as time-tested parts of complex structures of risk management that include concerns for climate variability, we overestimate the current vulnerability of many agricultural systems to the impacts of climate change, and underestimate the risks we create when we wipe these systems away in favor of “more efficient”, more productive systems meant to address this looming global food crisis.

Why does this matter?

In ignoring existing systems and their logic in the name of addressing a crisis that has not yet arrived, development aid runs a significant risk of undermining the nascent turn toward addressing vulnerability, and building resilience, in the policy and implementation world by unnecessarily increasing the vulnerability of the poorest populations.

The whole post is here; a number of other really interesting posts on the future of aid are here. Head over and offer your thoughts…

Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – shining a light on these challenges would actually serve to highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department in a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. If those pubs were excluded, you can imagine that basically all reports in all contexts were excluded. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever more challenging, with expectations for the number of publications rising so steeply that many who recently earned tenure may have published more articles than their very senior colleagues did to become full professors even two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as they would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there) that attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from free labor on the part of authors, reviewers, and the nearly-free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems to be a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any research-looking dollars have started looking good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com   I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic rewards system, which currently gives us job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture, this is a problem of university management – administrations decide who gets tenure, and on what standard. If university administrations decided to halve the number of articles required for tenure, and replaced that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The real problems are that the vast majority of academics aren’t trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff that professional writers (and columnists) rely on to clean up their own turgid prose, and that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support, say, that the New York Times or the New Yorker provide for their writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription. Which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee. I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Policy prescriptions preclude ownership of the conclusion and needed responses by the policymaker. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US in response to various needs at different times that were captured (for better or for worse) by economics. I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about 4 years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

I’m a big fan of accountability when it comes to aid and development. We should be asking if our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?

Nobody.

More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, where the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes discussion of really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.

When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about are tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.

Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset.  Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:

1)   The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the target population.

This is a pretty weak assumption. I filled up a book arguing that a lot of the things that we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address the needs of the global poor are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects which achieve their intended outcomes are actually doing relatively little for their target populations.

Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.

Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area was finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water. Thus, the project achieved its goal of making agriculture more “climate smart,” but failed to actually address the main problem in the area. Project indicators will likely capture the first half of the previous scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).

2)   The intended impact was the only impact of the intervention.

If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.

Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they now spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.

Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges that we should address and 2) we know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.

What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems and toward catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Village Project, just the latest incarnation of the expert-driven development critiqued in Mitchell’s Rule of Experts and Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingam might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.

Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result – a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…

I’ve always been a bit skeptical of development programs that claim to work on issues of environmental governance. Most donor-funded environmental governance work stems from concerns about issues like sustainability and climate change at the national to global scale. These are legitimate challenges that require attention. However, such programs often strike me as instances of thinking globally, but implementing locally (and ideally someplace else). You see, there are things that we in the wealthiest countries should be doing to mitigate climate change and make the world a more sustainable place. But they are inconvenient. They might cost us a bit of money. They might make us do a few things differently. So we complain about them, and they get implemented slowly, if ever.

Yet somehow we fail to see how this works in exactly the same manner when we implement programs that are, for example, aimed at the mitigation of climate change in the Global South. These programs tend to take away particular livelihoods activities and resources (such as cutting trees, burning charcoal, or fishing and hunting particular species), which is inconvenient, tends to reduce household access to food and income, and forces changes upon people – all of which they don’t really like. So it is sort of boggling to me that we are surprised when populations resist these programs and projects.

I’m on this topic because, while conducting preliminary fieldwork in Zambia’s Kazungula District last week, I had yet another experience of this problem. In the course of a broad conversation on livelihoods, vulnerabilities, and opportunities in his community, a senior man raised charcoal production as an alternative livelihood in the area (especially in the dry season, when there is little water for gardening/farming and no nearby source of fishing). Noting that charcoal production was strictly controlled for purposes of limiting the impacts of climate change*, a rationale whose legitimacy he did not challenge, he complained that the restrictions on charcoal production are not well understood or accepted by the local population. He argued that much of the governance associated with this effort consisted of agents of the state telling people “it’s an offense” and demanding they stop cutting trees and burning charcoal without explaining why it is an offense. He then pointed to one of his sons and said “how can you tell him ‘don’t cut this tree’? And his fields are flooding [thus destroying his crops, a key source of food and income].” But the quote that pulled it all together…

“Don’t make people be rude or be criminals. Give them a policy that will open them.”

The text is clear here: if you are going to take away a portion of our livelihoods for the sake of the environment, please give us an alternative so we can comply. This is obvious – and yet to this point I think the identification and implementation of alternative livelihoods in the context of environmental governance programs is, at best, uneven.

But the subtext might be more important: If you don’t give us an alternative, you make us into criminals because we will be forced to keep practicing these now-banned activities. And when that happens, we will never view the regulations or those that enforce them as legitimate. In other words, the way we tend to implement environmental governance programming undermines the legitimacy of the governance structures we are trying to put in place.

Oops.

The sad part is that there have been innumerable cases of just the phenomenon I encountered last week, at other times and in other places. They’ve been documented in reports and refereed publications. Hell, I’ve heard narratives like this in the course of my work in Ghana and Malawi. But environmental governance efforts continue to inadequately explain their rationales to the populations most affected by their implementation. They continue to take away livelihoods activities from those who need them most in the name of a greater good for which others pay no tangible price. And they continue to be surprised when people ignore the tenets of the program, and begin to question the legitimacy of any governance structure that would bring such rules into effect. Environmental governance is never going to work if it is the implementation of a “think globally, implement locally (ideally someplace else)” mentality. It has to be thought, understood, and legitimized in the place it will be implemented, or it will fail.

* Yes, he really said that, as did a lot of other people. The uniformity of that answer strikes me as the product of some sort of sensitization campaign that, to be honest, is pretty misplaced. There are good local environmental reasons for controlling deforestation, but the contribution of charcoal production to the global emissions budget is hilariously small.

So, some of you might have wondered where the guy who ground out a lot of longish (too-longish?), wonky blog posts has gone over the past year and a half or so. Well, the transition back to academia was much bumpier than I had anticipated. Funding for research takes time to arrive, as does the support (i.e. skilled labor) necessary to make that research happen. And then there is the fact that I teach two classes a semester – and they are not small classes. I just finished my annual reporting for 2013, and because of this exercise I know that I taught 261 students last year. In four courses – one of which was a 12-person graduate seminar, so you do the math on my average undergraduate class size. It’s…not ideal.

I’m also now dealing with a complete reversal of my situation back in 2009-10, when I decided to leave academia for a while and go work at USAID. Back then, I felt completely disconnected from development policy and implementation. I was frustrated and bored. Now, I have a small lab running five different projects, only one of which is “pure” research. But we are not fully staffed yet – we’re about to search for a research associate to take up some of the load – and the result has been a lot of nights with less than six hours of sleep. This is hard, but as I remind people, it beats being ignored.

So, until about a week ago, I simply could not get my head above water long enough to blog. I think that is going to change over the next few months, as we get things under control in the lab. So, for that small but dedicated fanbase of the longish, wonky development blog posts, soon you will have more to read.

In the meantime, I’ve finally updated my personal homepage. There are new publications up, new preprints up, and a new mission statement on the home page. This week, I will walk you through these new pubs and ideas. I’m also at work on a new lab page. This will introduce you to a new cast of characters, and a new set of projects, that should keep things interesting around here for a while. I’m not yet sure about the relationship between the lab and this blog – I have to work that out. But the lab will have a twitter account, likely an Instagram account (we’re going to be going a lot of places), and the web page will have project-related videos. It should be pretty cool.

Thanks for bearing with me over the past year and a half. Watch this space – it should get interesting.

There is a lot of hue and cry about the issue of loss and damage at the current Conference of the Parties (COP-19). For those unfamiliar with the topic, in a nutshell the loss and damage discussion is one of attributing particular events and their impacts on poorer countries to climate variability and change that has, to this point, been largely driven by activities in the wealthier countries. At a basic level, this question makes sense and is, in the end, inevitable. Those who have contributed the most (and by the most, I mean nearly all) to the anthropogenic component of climate change are not experiencing the same level of impact from that climate change – either because they see fewer extreme events, experience more attenuated long-term trends, or simply have substantially greater capacity to manage individual events and adapt to longer-term changes. This is fundamentally unfair. But it is also a development challenge.

The more I work in this field, and the more I think about it, the more I am convinced that the future of development lies in creating the strong, stable foundations upon which individuals can innovate in locally-appropriate ways. These foundations are often tenuous in poorer countries, and the impacts of climate change and variability (mostly variability right now) certainly do not help. Most agrarian livelihoods systems I have worked with in sub-Saharan Africa are massively overbuilt to manage climate extremes (i.e. flood or drought) that, while infrequent, can be catastrophic. The result: in “good” or “normal” years, farmers are hedging away very significant portions of their agricultural production, through such decisions as the siting of farms, the choice of crops, or the choice of varieties. I’ve done a back-of-the-envelope calculation of this cost of hedging in the communities I’ve worked with in Ghana, and the range is between 6% and 22% of total agricultural production each year. That is, some of these farmers are losing 22% of their total production because they are unnecessarily siting their fields in places that will perform poorly in all but the most extreme (dry or wet) years. When you are living on the local equivalent of $1.25/day, this is a massive hit to one’s income, and without question a huge barrier to transformative local innovations. Finding ways to help minimize the cost of hedging, or the need for hedging, is critical to development in many parts of the Global South.
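To make that back-of-the-envelope logic concrete, here is a toy sketch of the calculation. Every number in it is hypothetical, chosen only to illustrate the structure of the trade-off – the 6% to 22% range above comes from fieldwork, not from this sketch.

```python
# Toy back-of-the-envelope hedging cost calculation.
# All numbers are hypothetical illustrations, not field data.

p_extreme = 0.1              # probability of a catastrophic (very wet or dry) year
yield_optimal_normal = 1000  # kg per household: best-sited fields, normal year
yield_optimal_extreme = 100  # kg per household: best-sited fields, extreme year
yield_hedged_normal = 800    # kg per household: hedged (safer) siting, normal year
yield_hedged_extreme = 500   # kg per household: hedged siting, extreme year

# Cost of hedging in a "good" or "normal" year: production foregone by
# siting fields somewhere safer but less productive.
normal_year_cost = (yield_optimal_normal - yield_hedged_normal) / yield_optimal_normal
print(f"Normal-year cost of hedging: {normal_year_cost:.0%}")  # 20%

# What the household buys with that foregone production: a higher floor.
expected_optimal = (1 - p_extreme) * yield_optimal_normal + p_extreme * yield_optimal_extreme
expected_hedged = (1 - p_extreme) * yield_hedged_normal + p_extreme * yield_hedged_extreme
print(f"Expected yield, optimal siting: {expected_optimal:.0f} kg (floor: {yield_optimal_extreme} kg)")
print(f"Expected yield, hedged siting:  {expected_hedged:.0f} kg (floor: {yield_hedged_extreme} kg)")
# The hedged strategy gives up yield in most years to raise the worst-case
# floor from 100 kg to 500 kg – a rational choice when a single
# catastrophic year can be ruinous.
```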

Therefore, a stream of finance attached to loss and damage could be a really big deal for those in the Global South, something perhaps as important as debt relief was to the MDRI countries. We need to sort out loss and damage. But NOT NOW.

Why not? Simply put, we don’t have the faintest idea what we are negotiating right now. The attribution of particular events to anthropogenic climate change and variability is inordinately difficult (it is somewhat easier for long-term trends, but this has its own problem – it takes decades to establish the trend). However, for loss and damage to work, we need this attribution, as it assigns responsibility for particular events and their costs to those who caused those events and costs. Also, we need means of measuring the actual costs of such events and trends – and we don’t have that locked down yet, either. This is both a technical and a political question: what we can measure, and how we should measure it, is a technical question that remains unanswered. But what we should measure is a political question – just as certain economic stimuli have multiplier effects through an economy, disasters and long-term degradation have radiating “multipliers” through economies. Where do we stop counting the losses from an event or trend? We don’t have an answer to that, in part because we don’t yet have attribution, nor do we have the tools to measure costs even if we had attribution.
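As a toy illustration of why “where do we stop counting” matters so much (all numbers hypothetical): if each round of knock-on losses is some fraction of the round before, the size of the bill depends heavily on an essentially political choice about how many rounds to count.

```python
# Toy illustration of the "where do we stop counting?" problem.
# A direct loss radiates through an economy in rounds of knock-on
# losses, each a fraction of the round before. All numbers hypothetical.

direct_loss = 100.0  # direct cost of an event, in $ millions
knock_on = 0.5       # each round of indirect losses is 50% of the previous

def total_loss(rounds: int) -> float:
    """Direct loss plus `rounds` rounds of radiating indirect losses."""
    return sum(direct_loss * knock_on**k for k in range(rounds + 1))

for rounds in (0, 1, 3, 10):
    print(f"Counting {rounds:2d} rounds of knock-on losses: ${total_loss(rounds):6.1f}m")

# Counting 0 rounds gives a $100m bill; counting "everything" approaches
# the geometric-series limit direct_loss / (1 - knock_on) = $200m. The
# same event produces a bill anywhere in that range depending on where
# the count stops.
```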

So, negotiating loss and damage now is a terrible idea. Rich countries could find themselves facing very large bills without the empirical evidence to justify the size of the bills or their responsibility for paying them – which will make such bills political nonstarters in rich countries. In short, this process has to deliver a bill that everyone agrees should be paid, and that the rich countries agree can be paid. At the same time, poorer countries need to be careful here – because we don’t have strong attribution or measurements of costs, there is a real risk that they could negotiate for too little – not enough to actually invest in the infrastructure and processes needed to ensure a strong foundation for local innovation. Either outcome would be a disaster. And these are the most likely outcomes of any negotiation conducted blindly.

I’m glad loss and damage is on the table. I hope that more smart people start looking into it in their research and programs, and that we rapidly build an evidence base for attribution and costing. That, however, will take real investment by the richest countries (who can afford it), and that investment has not been forthcoming.  If we should be negotiating for anything right now, it should be for funds to push the frontiers of our knowledge of attribution and costing so that we can get to the table with evidence as soon as humanly possible.

CGD has an interesting short essay up, written by Matthew Darling, Saugato Datta, and Sendhil Mullainathan, entitled “The Nature of the BEast: What Behavioral Economics Is Not.” The piece aims to dispel a few myths about behavioral economics, while offering a quick summary of what this field is, and what its goals are. I’ve been looking around for a good short primer on BE, and so I had high hopes for this piece… Unfortunately, for two reasons, the piece did not live up to expectations.

First, the authors tie themselves in a strange knot as they try to argue that behavioral economics is not about controlling behavior. While they note that BE studies and tools could be used to nudge human behavior in particular directions, they argue that “What distinguishes the behavioral toolset [from those of marketers, for example], however, is that so many of the tools are about helping people to make the choices that they themselves want to make.” This claim sidesteps a very important question: how do we know what choices they want to make? What we see as problematic livelihoods outcomes might not, in fact, be all that problematic to those living those outcomes, and indeed might have local rationales that are quite reasonable. While this might seem an obvious point, most BE work that I have seen seems to rest on a near-total lack of understanding of why those under investigation engage in the behaviors that “require explanation”. Therefore, the claim that BE helps people make the choices they want to make is, in fact, rather paternalistic in that the determination of what choices people want to make does not rest with those people, but with the behavioral economist. Sadly, this is a fairly accurate representation of much work done under the heading of BE. It would have been better if the authors had simply pointed out that BE is no more obsessed with incentives than any other part of economics, and if people are worried about behavioral control, they’d best have a look at the US (or their own national) tax code and focus their anxiety there.

Second, the authors argue “Behavioral economics differs from standard economics in that it uses a more realistic (and more complicated) model for people [and their decisions].” Honestly, I have seen no evidence for a coherent model of humans or their behavior in BE. What I have seen is a lot of rigorous data collection, the results of which are then shoehorned into some sort of implicit explanatory framework laden with unexamined assumptions that generally do not hold in the real world. Rigorously identifying when particular stimuli result in different behaviors is not the same thing as explaining how those stimuli bring about those behaviors. BE is rather good at the former, and not very good at all at the latter. The authors are right – we need more realistic and complicated models of human decision-making, and there are some out there (for example, see here and here – email me if you need a copy of either .pdf). BE would do well to actually read something outside of economics if it is serious about this goal. There are a couple of disciplines out there (for example, anthropology, geography, some aspects of sociology and social history) that have long operated with complex framings of human behavior, and have already derived many of the lessons that BE is just now (re)discovering. In this light, then, this short paper does show us what BE isn’t: it isn’t anthropology, geography, or any other social science that has already engaged the same questions as BE, but with more complex framings of human behavior and more rigorous interpretations of observed outcomes. And if it isn’t that, what exactly is the point of this field of inquiry?