Academia


I’m getting a bit better at updating my website…probably because I have more to update. Specifically, I’ve put up some new work on the publications page. There, you will find:

On the preprints page, I have two new pieces up:

Also be sure to check out the HURDL website. We’ve got new pubs up, and the last member of the lab (Bob Greeley) finally has a bio up!

Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – shining a light on these challenges would expose the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department in a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. And if the NRC excluded its own reports, you can imagine that reports in basically all other contexts were excluded as well. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever more challenging, with expectations rising so steeply that many who recently earned tenure have published more articles than their very senior colleagues needed to become full professors two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as they would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there) that attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from the free labor of authors and reviewers, the nearly free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before, we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems to be a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any research-looking dollars have started looking good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants.]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com. I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic rewards system, which currently gives us job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture, this is a problem of university management – administrations decide who gets tenure, and on what standard. If university administrations decided to halve the number of articles required for tenure, and replaced that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The problem is that the vast majority of academics aren’t trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff professional writers (and columnists) rely on to clean up their own turgid prose, and that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support that, say, the New York Times or the New Yorker provides for its writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription, which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee. I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Policy prescriptions preclude the policymaker’s ownership of the conclusion and the needed responses. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th-century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US in response to various needs at different times that the discipline captured (for better or for worse). I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about 4 years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

I just finished reading Geoff Dabelko’s “The Periphery isn’t Peripheral” on Ensia. In this piece, Geoff diagnoses the problems that beset efforts to address linked environmental and development challenges, and offers some thoughts on how to address them. I love his typology of the tyrannies that hamper efforts to build and implement good, integrative (i.e. cross-sectoral) programs. I agreed with his suggestions on how to make integrative work more acceptable/mainstream in development. And by the end, I was worried about how to make his suggestions a reality within the donors and implementers that really need to take on this message.

Geoff’s four tyrannies (Tyranny of the Inbox; Tyranny of Immediate Results; Tyranny of the Single Sector; Tyranny of the Unidimensional Measurement of Success) that he sees crippling environment-and-development programming are dead on. Those of us working in climate change are especially sensitive to tyranny #2, the Tyranny of Immediate Results. How the hell are we supposed to demonstrate results on an adaptation program that is meant to address challenges that are not just happening now, but will intensify over a 30-year horizon? Does our inability to see the future mean that this programming is inherently useless or inefficient? No. But because it is impossible to measure future impact now, adaptation programs are easy to attack…

As a geographer, I love Geoff’s “Tyranny of the Single Sector” – geographers generally cannot help but integrate things across sectors (that’s what our discipline does, really). In my experience in the classroom and the donor world, integrative thinking eludes a lot more people than I ever thought possible. Our absurd system of performance measurement in public education is not helping – trust me. But even when you find an integrative thinker, they may not be doing much integrative work. Sometimes people simply can’t see outside their own training and expertise. Sometimes they are victims of tyranny #1 (Tyranny of the Inbox), where they are too busy dealing with immediate challenges within their sector to think across sectors – lord knows, that defined the last six months of my life at USAID.

And Geoff’s fourth tyranny speaks right to my post from the other day – the Tyranny of the Unidimensional Measurement of Success. Read Geoff, and then read my post, and you will see why he and I get along so well.

Now, Geoff does not stop with a diagnosis – he suggests that integrative thinking in development will require some changes to how we do our jobs, and, to bolster his argument, provides some illustrations of integrative projects that have produced better results. While I like all of his suggestions, what concerns me is that they are easier said than done. For example, Geoff is dead right when he says that:

We must reward, rather than punish, cross-disciplinary or cross-sectoral approaches; define success in a way that encourages, rather than discourages, positive outcomes in multiple arenas; and foster monitoring and evaluation plans that embrace, rather than ignore, different timescales and multiple indicators.

But how, exactly, are we to do this? What HR levers exist that we can use to make this happen? How much leeway do appointees and other executive-level donor staff have with regard to changing rewards and evaluations? And are the right people in charge to make such changes possible? A lot of people rise through donor organizations by being very good at sectoral work. Why would they reward people for doing things differently?

Similarly, I wonder how we can actually get more long-term thinking built into the practice and implementation of development. How do we really overcome the Tyranny of the Inbox, and the Tyranny of Immediate Results? This is not merely a mindset problem; it is a problem of budget justifications to an often-hostile Congress that wants to know what you have done for them lately. Where are our congressional champions to make this sort of change possible?

Asking Geoff to fix all our problems in a single bit of writing is completely unfair. That is the Tyranny of What Do We Do Now? In the best tradition of academic/policy writing, his piece got me thinking (constructively) about what needs to happen if we are to do a better job of achieving something that looks like sustainable development going forward. For that reason alone it is well worth your time. Go read.

I’m a big fan of accountability when it comes to aid and development. We should be asking if our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?

Nobody.

More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, where the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.

When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about are tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.

Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset.  Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:

1)   The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the people it targeted.

This is a pretty weak assumption. I filled a book arguing that a lot of the things we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address their needs are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects that achieve their intended outcomes are actually doing relatively little for their target populations.

Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.

Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically-sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area was finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water.  Thus, the project achieved its goal of making agriculture more “climate smart,” but failed to actually address the main problem in the area. Project indicators will likely capture the first half of the previous scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).

2)   The intended impact was the only impact of the intervention.

If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.

Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they now spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.

Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges we should address and 2) know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.

What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems, and toward catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Village Project, just the latest incarnation of Mitchell’s Rule of Experts or Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingam might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.

Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my Twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result: a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…

I’ve raised the issue of celebrity humanitarianism several times on this blog (here, here, and here). It is a fraught territory that generally raises strong feelings among the development community. Done wrong, it can be disastrous. Done well…well, honestly, there is a debate about whether or not it can be done well.

I had the opportunity to move beyond the blog and spend some time in academic thought on this topic – birthed from Twitter, no less! When some colleagues approached me about writing a chapter about Bono and celebrity humanitarianism for a book they were putting together, I really wanted to take up the offer but lacked the time necessary to write a good chapter. So I put out a call for co-authors on Twitter, and Ami Shah took me up on it. She roped a colleague, historian Bruce Hall, into the project, which proved to be a boon to us. The product, “Bono, Band Aid, and Before: Celebrity Humanitarianism, Music and the Objects of its Action,” was a really interesting chapter (I learned things from my co-authors in writing it) on the history of celebrity humanitarianism, and how the “new” celebrity wonkery of Bono and others is, in fact, nothing new at all.

The chapter will appear in 2014 in the book Soundscapes of Wellbeing in Popular Music. (Andrews, Gavin J., Paul Kingsbury, and Robin A. Kearns, eds. Burlington, VT: Ashgate). I have posted a preprint version of our chapter on my preprints page (link here). If you are into this topic, I think you will find it an interesting read…

Over the past year, I’ve been working with Mary Thompson (one of my now-former students – well done, Dr. Thompson) on a report for USAID that explores how the Agency, and indeed development more broadly, approaches the issue of gender and adaptation in agrarian settings. The report was an idea that was hatched back when I was still at USAID. Basically, I noticed that most gender assessments seemed to start with a general “there are men, and there are women, and they are different, so we should assess that” approach. This binary approach is really problematic for several reasons.

  • First, not all women (or men) are the same – a wealthy woman is likely to have different experiences and opportunities than a poor woman, for example. Lumping all women together obscures these important differences.
  • Second, different aspects of one’s identity matter more or less, depending on the situation. To understand the decisions I make in my daily life, you would have to account for the fact that sometimes my decisions are shaped by my being a professor (such as when I am in the classroom), and at other times by my role as a father. In both cases, I am still a man – but I occupy two different identity spaces, where my gender might not be as important as my profession or my status as a (somewhat) responsible adult in the house.
  • Third, this approach assumes that there are gendered differences in the context of adaptation to climate change and variability in all situations. While there are often important gendered differences in exposure, sensitivity, and adaptive capacity in relation to the impacts of climate change and variability, this is not always the case.

My colleagues in both the Office of Gender Equality and Women’s Empowerment (GENDEV) and the Office of Global Climate Change agreed that these issues were problematic. They enthusiastically supported an effort to assess the current state of knowledge on gender and adaptation, and to illustrate the importance of doing gender differently through case studies.

Mary and I reviewed the existing literature on gender and adaptation in agrarian settings, exploring how the issue has been addressed in the past. We also focused on a small emerging literature in adaptation that takes a more productive approach to gender that acknowledges and wrestles with the fact that gender roles really take much of their meaning, responsibilities, and expectations from the intersection of gender and other social categories (especially age, ethnicity, and livelihood/class). You can find a first version of this review in the annex of the report. However, Mary and I substantially revised and expanded this literature review for an article now in press at Geography Compass. A preprint version is available on the preprints page of my website.

The bulk of the report – and the part probably of greatest interest to most of my readers – consists of three case studies that empirically illustrate how taking a binary approach to gender makes it very difficult to identify some of the most vulnerable people in a given place or community, and therefore very difficult to understand their particular challenges and opportunities. These cases are drawn from my research in Ghana and Mali, and Mary’s dissertation work in Malawi. They make a powerful case for doing gender assessments differently.

This report is not the end of the story – my lab and I are still working with GENDEV and the Office of Global Climate Change at USAID, now identifying missions with adaptation projects that will allow us to implement parallel gender assessments taking a more complex approach to the issue. We hope to demonstrate to these missions the amount of important information generated by this more complex approach, show that greater complexity does not have to result in huge delays in project design or implementation, and ideally influence their project design and implementation such that these projects result in better outcomes.

More to come…

First up in my week of update posts is a re-introduction to my reworked livelihoods approach. As some of you might remember, the formal academic publication laying out the theoretical basis for this approach came out in early 2013. The approach presented in the article is the conceptual foundation for much of the work we are doing in my lab. This pub is now up on my home page, via the link above or through a link on the publications page.

The premise behind this approach, and why I developed it in the first place, is simple. Most livelihoods approaches implicitly assume that the primary motivation for livelihoods decisions is the maximization of some sort of material return on that activity. Unfortunately, in almost all cases this is a massive oversimplification of livelihoods decision-making, and in many cases it is fundamentally incorrect. Think about the number of livelihoods studies where many decisions or behaviors seem illogical when held up to the logic of material maximization (which would be any good livelihoods study, really). We spend a lot of time trying to explain these decisions away (idiosyncrasy, incomplete information, etc.). But this makes no sense – if you are living on $1.25 a day and you are illogical or otherwise making decisions against interest, you are likely dead. So there must be a logic behind these decisions, one that we must engage if we are to understand why people do what they do, and if we are to design and implement development interventions that are relevant to the needs of the global poor. My livelihoods approach provides a means of engaging with and explaining these behaviors, built on explicit, testable framings of decision-making, locally appropriate divisions of the population into relevant groupings (e.g., gender, age, class), and the consideration of factors from the local to the global scale.

The article is a straight-ahead academic piece – to be frank, the first half of the article is not that accessible to those without backgrounds in social theory and livelihoods studies. However, the second half of the article is a case study that lays out what the approach allows the user to see and explain, which should be of interest to most everyone who works with livelihoods approaches.

For those who would like a short primer on the approach and what it means in relatively plain English, I’ve put up a “top-line messages” document on the preprints page of my website.

Coming soon is an implementation piece that guides the user through the actual use of the approach. I field-tested the approach in Kaffrine, Senegal, with one of my graduate students from May to July 2013. I am about to put the approach to work in a project with the Red Cross in the Zambezi Basin in Zambia next month. In short, this is not just a theoretical pipe dream – it is a real approach that works. In fact, the reason we are working with the Red Cross is that Pablo Suarez of Boston University and the Red Cross Climate Centre read the academic piece, immediately grasped what it could do, and reached out to bring me into one of their projects. The implementation piece is already fully drafted, but I am circulating it to a few people in the field for feedback before I submit it for review or post it to the preprints page. I am hoping to have this up by the end of January. Once that is out the door, I will look into building a toolkit for those who might be interested in using the approach.

I’m really excited by this approach, and the things that are emerging from it in different places (Mali, Zambia, and Senegal, at the moment). I would love feedback on the concept or its use – I’m not a defensive or possessive person when it comes to ideas, as I think debate and critique tend to make things stronger. The reason I am developing a new livelihoods approach is that the ones we have simply don’t explain the things we need to know, and the other tools of development research that dominate the field at the moment (e.g., RCTs) cannot address the complex, integrative questions that drive outcomes at the community level. So consider all of this a first draft, one that you can help bring to final polished form!

So, some of you might have wondered where the guy who ground out a lot of longish (too-longish?), wonky blog posts has gone over the past year and a half or so. Well, the transition back to academia was much bumpier than I had anticipated. Funding for research takes time to arrive, as does the support (i.e. skilled labor) necessary to make that research happen. And then there is the fact that I teach two classes a semester – and they are not small classes. I just finished my annual reporting for 2013, and because of this exercise I know that I taught 261 students last year. In four courses – one of which was a 12-person graduate seminar, so you do the math on my average undergraduate class size. It’s…not ideal.

I’m also now dealing with a complete reversal of my situation back in 2009-10, when I decided to leave academia for a while and go work at USAID. Back then, I felt completely disconnected from development policy and implementation. I was frustrated and bored. Now, I have a small lab running five different projects, only one of which is “pure” research. But we are not fully staffed yet – we’re about to search for a research associate to take up some of the load – and the result has been a lot of nights with less than six hours of sleep. This is hard, but as I remind people, it beats being ignored.

So, until about a week ago, I simply could not get my head above water long enough to blog. I think that is going to change over the next few months, as we get things under control in the lab. So, for that small but dedicated fanbase of the longish, wonky development blog posts, soon you will have more to read.

In the meantime, I’ve finally updated my personal homepage. There are new publications up, new preprints up, and a new mission statement on the home page. This week, I will walk you through these new pubs and ideas. I’m also at work on a new lab page. This will introduce you to a new cast of characters, and a new set of projects, that should keep things interesting around here for a while. I’m not yet sure about the relationship between the lab and this blog – I have to work that out. But the lab will have a twitter account, likely an Instagram account (we’re going to be going a lot of places), and the web page will have project-related videos. It should be pretty cool.

Thanks for bearing with me over the past year and a half. Watch this space – it should get interesting.
