So, I have news. In August, I will become a Full Professor and Director of the Department of International Development, Community, and Environment at Clark University. It is an honor to be asked to lead a program with such a rich history, at such an exciting time for both it and the larger Clark community. The program uniquely links the various aspects of my research identity within a single department, and further supports those interests through the work of a fantastic Graduate School of Geography, the George Perkins Marsh Institute, and the Graduate School of Management. At a deeply personal level, this also marks a homecoming for me – I grew up in New Hampshire, in a town an hour’s drive from Worcester. My mother is still there, and many friends are still in the region. In short, this was a unique convergence of factors, and in the end I simply could not pass up this opportunity.

This, of course, means that after twelve years, I will be leaving the University of South Carolina. This was a very difficult decision – there was no push factor that led me to consider the Clark opportunity. Indeed, I was not looking for another job – this one found me. I owe a great deal to USC, the Department of Geography, and the Walker Institute for International and Area Studies. They gave me resources, mentoring, space, networks, and support, all of which were integral to building my career. Without two Walker Institute small grants, the fieldwork in 2004 and 2005 that led to so many publications, including Delivering Development, would never have happened. The department facilitated my time at USAID, and the subsequent creation of HURDL. I will always owe a debt to South Carolina and my colleagues here, and I leave a robust institution that is headed in exciting directions.

As I move, so moves HURDL. The lab will take up residence in the Marsh Institute at Clark sometime in late summer, assuming my fantastic research associate Sheila Onzere does not finally lose her mind dealing with all of the things I throw at her. But if Sheila stays sane, we’ll be open for business and looking for more opportunities and partners very soon!

I’ve been writing here on Open the Echo Chamber since July of 2010. Good lord, that is a long time. I’ve cranked out well over 250,000 words on the site – the equivalent of roughly 30 articles, or about three books, worth of writing. And for all of that effort, I have received exactly no credit in my academic job. In my annual reviews and promotion packets, I can shove this work under “service”, but 1) most of my colleagues probably wouldn’t agree with that categorization and 2) nobody in academia gets much of anything for their service contributions unless they are a full-on administrator. I don’t blog for my academic career; I blog as a means of getting ideas outside the rigidity of the peer-review publishing world, the ways it gates off knowledge from those who might use it, and the ways it can police away innovative new thought that challenges existing powers. So, when I recently stumbled across The Winnower, I got excited. “Publish my posts with review and a DOI?” I thought. “Make my posts citable in major journals and technical reports?” I chortled. “Further blur the lines between my academic publishing and the stuff I do on this blog?” I fairly giggled. Yeah, I need to give this a try.

Let me explain:

According to the lovely people at Google Analytics, in that time nearly 50,000 users have racked up over 100,000 pageviews. For a blog that is home to some long, wonky posts, that is pretty amazing. Readership comes from all over the world, with the top 10 countries looking like this:

  1. United States – 37,530 (52.18%)
  2. United Kingdom – 7,990 (11.11%)
  3. Canada – 4,186 (5.82%)
  4. Australia – 1,949 (2.71%)
  5. India – 1,339 (1.86%)
  6. Germany – 1,019 (1.42%)
  7. Netherlands – 834 (1.16%)
  8. Kenya – 773 (1.07%)
  9. Philippines – 722 (1.00%)
  10. France – 695 (0.97%)

It is remarkable that Google lists visitors from 192 different countries and territories. And when you drill down to cities, it gets pretty cool as well:

  1. Washington – 5,023 (6.98%)
  2. London – 3,419 (4.75%)
  3. New York – 2,965 (4.12%)
  4. Columbia – 2,326 (3.23%)
  5. Irmo – 1,273 (1.77%)
  6. Toronto – 769 (1.07%)
  7. Seattle – 713 (0.99%)
  8. Fonthill – 676 (0.94%)
  9. Melbourne – 642 (0.89%)
  10. Nairobi – 604 (0.84%)
  11. Sydney – 532 (0.74%)
  12. Cambridge, MA – 517 (0.72%)
  13. Oxford – 500 (0.70%)
  14. Ottawa – 466 (0.65%)
  15. San Francisco – 459 (0.64%)
  16. Arlington – 457 (0.64%)
  17. Chicago – 429 (0.60%)
  18. Durham – 393 (0.55%)
  19. Boston – 367 (0.51%)
  20. Montreal – 364 (0.51%)

I’ve known who I was reaching for a while – I get informal notes and phone calls from people at various institutions letting me know they liked (mostly) or disliked/had issues with (sometimes) things I have written. Compared to many blogs, I don’t get that many readers. But my readers are my target audience – they are the folks who work in development and climate change. Well, that, and my students here at the University of South Carolina (hence the Columbia and Irmo numbers).
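For the data-minded: those country numbers hang together. Here is a minimal sanity check in Python – assuming (my assumption, not anything Google documents here) that each count is a session total and each percentage is that country’s share of all sessions:

```python
# Back-of-the-envelope check on the Google Analytics country list above.
# Assumption (mine): each count is a number of sessions, and each
# percentage is that country's share of the overall session total.

top_countries = {
    "United States": (37_530, 52.18),
    "United Kingdom": (7_990, 11.11),
    "Canada": (4_186, 5.82),
}

for country, (count, pct) in top_countries.items():
    implied_total = count / (pct / 100)
    print(f"{country}: implies ~{implied_total:,.0f} total sessions")
```

All three entries back out the same overall figure (roughly 72,000), which sits plausibly between the user and pageview totals above – so the shares are internally consistent.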

The one big problem for me, and for this blog, has been the level of effort it requires, and the ways in which it could (and could not) be used in my primary sectors of employment: academia and development consulting. Though the world is changing fast, the fact is most people still will not take a blog post as seriously as an academic article. That is probably a good thing – there is a lot of crap out on blogs. At the same time, there are really good blogs out there, some of which produce better work/scholarship than you find in the peer-reviewed literature. Finding ways to help people sort out what is good and what is crap, and finding ways to make social media/blog posts viable sources for academic and consulting work, is important to me.

So, starting today, I have linked this blog to The Winnower. When I produce a post with enough intellectual content, I will cross-post it to The Winnower, where it will be subject to a review process, after which it will receive a DOI, making it a real publication in the eyes of many journals and other sources (hell, it will fit under “other academic contributions” on my CV, so there). I’m excited about what The Winnower is trying to do (as you might already know, I find academic publishing structures deeply frustrating: just look here, here, here, and here), and if my work serves to further their mission, and their efforts serve to further blur the lines between the ways in which I disseminate my work, I’m happy to give it a go.

Here is my author page at The Winnower. I’ve currently got six old posts up for review – six posts that were viewed by an average of well over a thousand readers each. So I know you all care about these posts and topics. Go ahead and review them, comment on them, help me make them better…and help The Winnower succeed.

Welcome to the future. Maybe.

Five and a half years ago, at the end of the spring semester of 2009, I sat down and over the course of 30 days drafted my book Delivering Development. The book was, for me, many things: an effort to impose a sort of narrative on the work I’d been doing for 12 years in Ghana and other parts of Africa; an effort to escape the increasingly claustrophobic confines of academic writing and debates; and an effort to exorcise the growing frustration and isolation I felt as an academic working on international development in a changing climate, but without a meaningful network into any development donors. Most importantly, however, it was a 90,000-word scream at the field that could be summarized in three sentences:

  1. Most of the time, we have no idea what the global poor are doing or why they are doing it.
  2. Because of this, most of our projects are designed for what we think is going on, which rarely aligns with reality.
  3. This is why so many development projects fail, and if we keep doing this, the consequences will be dire.

The book had a generous reception, received very fair (if sometimes a bit harsh) reviews, and actually sold a decent number of copies (at least by the standards of the modern publishing industry, which was in full collapse by the time the book appeared in January 2011). Maybe most gratifying, I heard from a lot of people who read the book and who heard the message, or for whom the book articulated concerns they had felt in their jobs.

This is not to say the book is without flaws. For example, the second half of the book, the part addressing the implications of being wrong about the global poor, was weaker than the first – and this is very clear to me now, as a former employee of a development donor. Were I writing the book now, I would do practically nothing to the first half, but I would revise several parts of the second half (and the very dated scenarios chapter really needs revision at this point, anyway). But, five and a half years after I drafted it, I can still say one thing clearly.


Well, I was right about point #1 above, anyway. The newest World Development Report from the World Bank has empirically demonstrated what was so clear to me and many others, and what I think I did a very nice job of illustrating in Delivering Development: most people engaged in the modern development industry have very little understanding of the lives and thought processes of the global poor, the very people that industry is meant to serve. Chapter 10 is perfectly titled: “The biases of development professionals.” All credit to the authors of the report for finally turning the analytic lens on development itself, as it would have been all too easy to simply talk about the global poor through the lens of perception and bias. And when the report turns to development professionals’ perceptions…for the love of God. Just look at the findings on page 188. No, wait, let me show you some here:

[Figure: chart from the World Development Report (p. 188) comparing development professionals’ predictions of how many poor respondents would agree that “what happens in the future depends on me” with those respondents’ actual answers]


For those who are chart-challenged, let me walk you through this. In three settings, the survey asked development professionals what percentage of their beneficiaries thought “what happens in the future depends on me.” For the poorest third of beneficiaries, the professionals assumed very few people would say this. Except that, in all three settings, a huge number of very poor people said exactly that. In short, the development professionals were totally wrong about what these people thought, which means they don’t understand their mindsets, motivations, etc. Holy crap, folks. This isn’t a near miss. This is I-have-no-idea-what-I-am-talking-about stuff here. These are the error bars on the initial ideas that lead to projects and programs at development donors.
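To make the size of that miss concrete, here is a toy calculation. The numbers below are hypothetical stand-ins, not the WDR’s actual figures – the point is the shape of the error, not the precise values:

```python
# Hypothetical numbers, NOT the WDR's actual figures: a sketch of the gap
# between what professionals predicted and what poor respondents reported.

settings = {
    "Setting A": {"predicted": 20, "actual": 80},
    "Setting B": {"predicted": 25, "actual": 75},
    "Setting C": {"predicted": 15, "actual": 70},
}

for name, s in settings.items():
    gap = s["actual"] - s["predicted"]
    print(f"{name}: predicted {s['predicted']}% agreement, "
          f"reported {s['actual']}% – a miss of {gap} percentage points")
```

Misses on that order are not noise around a decent estimate; they are evidence that the underlying model of the beneficiary is wrong.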

The WDR frames these findings in pretty stark terms (page 180):

Perhaps the most pressing concern is whether development professionals understand the circumstances in which the beneficiaries of their policies actually live and the beliefs and attitudes that shape their lives.

And their proposed solution is equally pointed (page 190):

For project and program design, development professionals should “eat their own dog food”: that is, they should try to experience firsthand the programs and projects they design.

Yes. Or, failing that, they should start either reading the work of people who can provide that experience for them, or funding the people who can generate the data that allows for this experience (metaphorically).

On one hand, I am thrilled to see this point in mainstream development conversation. On the other…I said this five years ago, and not that many people cared. Now the World Bank says it…or maybe more to the point, the World Bank says it in terms of behavioral economics, and everyone gets excited. Well, my feelings on this are pretty clear:

  1. Just putting this in terms of behavioral economics is putting the argument out there in the least threatening manner possible, as it is still an argument from economics that preserves that disciplinary perspective’s position of superiority in development.
  2. The things that behavioral economics has been “discovering” about the global poor are things that anthropology, geography, sociology, and social history have been saying for decades. Further, its analyses generally lack explanatory rigor or anything resembling external validity – see my posts here, here, and here.

Also, the WDR never makes a case for why we should care that we are probably misunderstanding/misrepresenting the global poor. As a result, it reads as an extended “oopsie!” piece that need not be seriously addressed as long as we look a little sheepish – then we can get back to work. But getting this stuff wrong is really, really important – this was the central point of the second half of Delivering Development (a point that Duncan Green unfortunately missed in his review). We can design projects that not only fail to make things better, but actually make things much worse: we can kill people by accident. We can gum up the global environment, which is not only going to hurt some distant, abstract global poor person – it will hit those in the richest countries, too. We can screw up the global economy, another entity that knows few borders and over which nobody has complete control. This is not “oopsie!” This is a disaster that requires serious attention and redress.

So, good first step World Bank, but not far enough. Delivering Development still goes a lot further than you are willing to now. Delivering Development goes much further than behavioral development economics has gone, or really can go. Time to catch up to the real nature of this problem, and the real challenges it presents. Time to catch up to things I was writing five years ago, before it’s too late.

From my recent post over on HURDLblog, my lab’s group blog, on the challenges of thinking productively about gender and adaptation:

My closing point caused a bit of consternation (I can’t help it – it’s what I do). Basically, I asked the room if the point of paying attention to gender in climate services was to identify the particular needs of men and women, or to identify and address the needs of the most vulnerable. I argued that approaches to gender that treat the categories “man” and “woman” as homogeneous and essentially linked to particular vulnerabilities might achieve the former, but would do very little to achieve the latter. Mary Thompson and I have produced a study for USAID that illustrates this point empirically. But there were a number of people in the room who got a bit worked up over this point. They felt that I was arguing that gender no longer mattered, and that my presentation marked a retreat from years of work that they and others had put in to get gender to the table in discussions of adaptation and climate services. Nothing could be further from the truth.

Read the full post here.

I’m getting a bit better at updating my website…probably because I have more to update. Specifically, I’ve put up some new work on the publications page. There, you will find:

On the preprints page, I have two new pieces up:

Also be sure to check out the HURDL website. We’ve got new pubs up, and the last member of the lab (Bob Greeley) finally has a bio up!

Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – shining a light on these challenges would actually serve to highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department in a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. If those pubs were excluded, you can imagine that basically all reports in all contexts were excluded. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever more challenging, with expectations for the number of publications rising so steeply that many who recently got tenure may well have published more articles than their very senior colleagues had when they became full professors two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as they would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there), which attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from the free labor of authors and reviewers, the nearly-free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before, we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any research-looking dollars have started looking good to university administrations, and contracts are more and more being evaluated alongside traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants.]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at   I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic rewards system, which currently gives us job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture, this is a problem of university management – administrations decide who gets tenure, and on what standard. If university administrations decided to halve the number of articles required for tenure, and replaced that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The problem is that the vast majority of academics aren’t trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff that professional writers (and columnists) rely on to clean up their own turgid prose, and that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support, say, that the New York Times or the New Yorker provide for their writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription. Which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee. I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Policy prescriptions preclude the policymaker’s ownership of the conclusion and the needed responses. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th-century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US in response to various needs at different times that were captured (for better or for worse) by the discipline. I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about four years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

I just finished reading Geoff Dabelko’s “The Periphery isn’t Peripheral” on Ensia. In this piece, Geoff diagnoses the problems that beset efforts to address linked environmental and development problems, and offers some thoughts on how to address them. I love his typology of tyrannies that beset efforts to build and implement good, integrative (i.e. cross-sectoral) programs. I agreed with his suggestions on how to make integrative work more acceptable/mainstream in development. And by the end, I was worried about how to make his suggestions reality within the donors and implementers that really need to take on this message.

Geoff’s four tyrannies (Tyranny of the Inbox; Tyranny of Immediate Results; Tyranny of the Single Sector; Tyranny of the Unidimensional Measurement of Success), which he sees crippling environment-and-development programming, are dead on. Those of us working in climate change are especially sensitive to tyranny #2, the Tyranny of Immediate Results. How the hell are we supposed to demonstrate results on an adaptation program that is meant to address challenges that are not just happening now, but will intensify over a 30-year horizon? Does our inability to see the future mean that this programming is inherently useless or inefficient? No. But because it is impossible to measure future impact now, adaptation programs are easy to attack…

As a geographer, I love Geoff’s “Tyranny of the Single Sector” – geographers generally cannot help but start integrating things across sectors (that’s what our discipline does, really). In my experience in the classroom and the donor world, integrative thinking eludes a lot more people than I ever thought possible. Our absurd system of performance measurement in public education is not helping – trust me. But even when you find an integrative thinker, they may not be doing much integrative work. Sometimes people simply can’t see outside their own training and expertise. Sometimes they are victims of tyranny #1 (Tyranny of the Inbox), where they are too busy dealing with immediate challenges within their sector to think across sectors – lord knows, that defined the last six months of my life at USAID.

And Geoff’s fourth tyranny speaks right to my post from the other day – the Tyranny of the Unidimensional Measurement of Success. Read Geoff, and then read my post, and you will see why he and I get along so well.

Now, Geoff does not stop with a diagnosis – he suggests that integrative thinking in development will require some changes to how we do our jobs, and provides some illustrations of integrative projects that have produced better results to bolster his argument. While I like all of his suggestions, what concerns me is that these suggestions are easier said than done. For example, Geoff is dead right when he says that:

We must reward, rather than punish, cross-disciplinary or cross-sectoral approaches; define success in a way that encourages, rather than discourages, positive outcomes in multiple arenas; and foster monitoring and evaluation plans that embrace, rather than ignore, different timescales and multiple indicators.

But how, exactly, are we to do this? What HR levers exist that we can use to make this happen? How much leeway do appointees and other executive-level donor staff have with regard to changing rewards and evaluations? And are the right people in charge to make such changes possible? A lot of people rise through donor organizations by being very good at sectoral work. Why would they reward people for doing things differently?

Similarly, I wonder how we can actually get more long-term thinking built into the practice and implementation of development. How do we really overcome the Tyranny of the Inbox, and the Tyranny of Immediate Results? This is not merely a mindset problem; this is a problem of budget justifications to an often-hostile Congress that wants to know what you have done for it lately. Where are our congressional champions to make this sort of change possible?

Asking Geoff to fix all our problems in a single bit of writing is completely unfair. That is the Tyranny of What Do We Do Now? In the best tradition of academic/policy writing, his piece got me thinking (constructively) about what needs to happen if we are to do a better job of achieving something that looks like sustainable development going forward. For that reason alone it is well worth your time. Go read.

I’m a big fan of accountability when it comes to aid and development. We should be asking if our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?


More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, where the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.

When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about are tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.

Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset.  Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:

1)   The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the people it targeted.

This is a pretty weak assumption. I filled up a book arguing that a lot of the things that we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address the needs of the global poor are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects which achieve their intended outcomes are actually doing relatively little for their target populations.

Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.

Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically-sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area was finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water.  Thus, the project achieved its goal of making agriculture more “climate smart,” but failed to actually address the main problem in the area. Project indicators will likely capture the first half of the previous scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).

2)   The intended impact was the only impact of the intervention.

If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.

Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they now spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.

Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges that we should address and 2) we know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.

What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems, and toward catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Village Project, just the latest incarnation of Mitchell’s Rule of Experts or Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingam might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.

Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my Twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result: a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…
