Academic Adaptation and "The New Communications Climate"

Andrew Revkin has a post up on Dot Earth that suggests some ways of rethinking scientific engagement with the press and the public.  The post is something of a distillation of a more detailed piece in the WMO Bulletin.  Revkin was kind enough to solicit my comments on the piece, as I have appeared in Dot Earth before in an effort to deal with this issue as it applies to the IPCC, and this post distills my initial rapid response.
First, I liked the message of these two pieces a lot, especially the push for a more holistic engagement with the public through different forms of media, including the press.  As Revkin rightly states, we need to “recognize that the old model of drafting a press release and waiting for the phone to ring is not the path to efficacy and impact.” Someone please tell my university communications office.
A lot of the problem stems from our lack of engagement with professionals in the messaging and marketing world.  As I said to the very gracious Rajendra Pachauri in an email exchange back when we had the whole “don’t talk to the media” controversy:

I am in no way denigrating your [PR] efforts. I am merely suggesting that there are people out there who spend their lives thinking about how to get messages out there, and control that message once it is out there. Just as we employ experts in our research and in these assessment reports precisely because they bring skills and training to the table that we lack, so too we must consider bringing in those with expertise in marketing and outreach.

I assume that a decent PR team would be thinking about multiple platforms of engagement, much as Revkin is suggesting.  However, despite the release of a new IPCC communications strategy, I’m not convinced that the IPCC (or much of the global change community more broadly) yet understands how desperately we need to engage with professionals on this front.  In some ways, there are probably good reasons for the lack of engagement with pros, or with the “new media.” For example, I’m not sure Twitter will help with managing climate change rumors and misinformation as they emerge, if only because we are now too far behind the curve – things are so politicized that it is too late for “rapid response” to misinformation.  I wish we’d been on this twenty years ago, though . . .
But this “behind the curve” mentality does not explain our lack of engagement.  Instead, I think there are a few other things lurking here.  For example, there is the issue of institutional politics.  I love the idea of using new media/information and communication technologies for development (ICT4D) to gather and communicate information, but perhaps not in the ways Revkin suggests.  I have a section later in Delivering Development that outlines how, using existing mobile tech in the developing world, we could both get better information about what is happening to the global poor (the point of my book is that, as I think I demonstrate in great detail, we actually have a very weak handle on what is going on in most parts of the developing world) and empower the poor to take charge of efforts to address the various challenges – environmental, economic, political and social – that they face every day.  It seems to me, though, that the latter outcome is a terrifying prospect for some in development organizations, as it would create a much more even playing field of information that might force these organizations to negotiate with and take seriously the demands of the people with whom they are working.  Thus, I think we get a sort of ambivalence about ICT4D in development practice, where we seem thrilled by its potential, yet continue to ignore it in our actual programming.  This is not a technical problem – after all, we have the tech, and if we want to do this, we can – it is a problem of institutional politics.  I did not wade into a detailed description of the network I envision in the book because I meant to present it as a political challenge to a continued reticence on the part of many development organizations and practitioners to really engage the global poor (as opposed to telling them what they need and dumping it on them).  But my colleagues and I have a detailed proposal for just such a network . . . and I think we will make it real one day.
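To make the shape of such a network a bit more concrete – and this is purely an illustrative sketch, not the detailed proposal itself – imagine community members reporting conditions via SMS from the mobile phones already in their pockets, with those reports parsed and aggregated into a running picture of what is happening where.  A minimal version of the aggregation logic, in Python, might look like the following; the message format, village names and alert threshold are all invented for the sake of the example:

from collections import Counter
from dataclasses import dataclass

# Hypothetical message format: "<village> <topic> <free-text detail>"
# e.g. "Eguafo CROP maize wilting" - invented purely for illustration.

@dataclass
class Report:
    village: str
    topic: str
    detail: str

def parse_sms(text):
    """Parse a raw SMS into a structured report; skip malformed messages."""
    parts = text.strip().split(maxsplit=2)
    if len(parts) < 2:
        return None
    return Report(parts[0].upper(), parts[1].upper(), parts[2] if len(parts) == 3 else "")

def aggregate(messages, alert_threshold=3):
    """Count reports by (village, topic) and flag clusters at/above a threshold."""
    counts = Counter()
    for msg in messages:
        report = parse_sms(msg)
        if report is not None:
            counts[(report.village, report.topic)] += 1
    alerts = {key: n for key, n in counts.items() if n >= alert_threshold}
    return {"counts": dict(counts), "alerts": alerts}

inbox = [
    "Eguafo CROP maize wilting",
    "Eguafo CROP no rain for two weeks",
    "Dominase WATER pump broken",
    "Eguafo CROP harvest failing",
]
print(aggregate(inbox))  # flags ('EGUAFO', 'CROP') - three independent reports agree

The point of the sketch is that nothing here demands new technology – which is exactly why the barrier is not technical, but a matter of whether institutions will treat what people report as information worth acting on.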
Another, perhaps more significant barrier to major institutional shifts with regard to outreach is a chicken-and-egg situation of limited budgets and a dominant academic culture that does not understand media/public engagement or politics very well and sees no incentive for engagement.  Revkin nicely hits on the funding problem as he moves past simply beating up on old-school models of public engagement:

As the IPCC prepares its Fifth Assessment Report, it does so with what, to my eye, appears to be an utterly inadequate budget for communicating its findings and responding in an agile way to nonstop public scrutiny facilitated by the Internet.

However, as much as I agree with this point (and I really, really agree), the problem here is not funding in itself – it is the way in which a lack of funding erases an opportunity for cultural change that could feed back positively on the IPCC, global assessments, and academia more generally, radically altering all three.  The bulk of climate science, as well as social impact studies, comes from academia – which has a very particular culture of rewards.  Virtually nobody in academia is trained to understand that they can be rewarded for being a public intellectual, for making their work accessible to a wide community – and if I am really honest, there are many places that actively discourage this engagement.  But there is a culture change afoot in academia, at least among some of us, that could be leveraged right now – and this is where funding could trigger a positive feedback loop.
Funding matters because once a real outreach program gets going, productive public engagement will deliver significant personal, intellectual and financial benefits to the participants – benefits that I believe could drive very rapid culture change.  My Twitter account has done more for the readership of my blog, and for my awareness of the concerns and conversations of the non-academic development world, than anything I have ever done before – this has been a remarkable personal and intellectual benefit of public engagement for me.  As universities continue to retrench, faculty find themselves ever more vulnerable to downsizing, temporary appointments, and a staggering increase in administrative workload (lots of tasks distributed among fewer and fewer full-time faculty).  I fully expect that without some sort of serious reversal soon, I will retire thirty-odd years hence as an interesting and very rare historical artifact – a professor with tenure.  Given these pressures, I have been arguing to my colleagues that we must engage with the public and with the media to build constituencies for what we do beyond our academic communities.  My book and my blog are efforts to do just this – to become known beyond the academy such that I, as a public intellectual, have leverage over my university, and not the other way around.  And I say this as someone who has been very successful in the traditional academic model.  I recognize that my life will need to be lived on two tracks now – public and academic – if I really want to help create some of the changes in the world that I see as necessary.
But this is a path I started down on my own, for my own idiosyncratic reasons – to trigger a wider change, we cannot assume that my academic colleagues will easily shed the value systems in which they were intellectually raised, and to which they have been held for many, many years.  Without funding to get outreach going, and to demonstrate to this community that changing our model is not only worthwhile but enormously valuable, I fear that such change will come far more slowly than the financial bulldozers knocking on the doors of universities and colleges across the country.  If the IPCC could get such an effort going – demonstrate how public outreach improved the reach of its results, enhanced the visibility and engagement of its participants, and created a path toward the progressive politics necessary to address the challenge of climate change – it would be a powerful example for other assessments.  Further, the participants in these assessments would return to their campuses with evidence for the efficacy and importance of such engagement . . . and many of these participants are senior members of their faculties, in a position to midwife major cultural changes in their institutions.
All this said, this culture change will not be birthed without significant pain.  Some faculty and members of these assessments want nothing to do with the murky world of politics, and prefer to continue operating under the illusion that they just produce data and have no responsibility for how it is used.  And certainly the assessments will fear “politicization” . . . to which I respond “too late.”  The question is not whether the findings of an assessment will be politicized, but whether those who best understand those findings will engage in these very consequential debates and argue for what they feel is the most rigorous interpretation of the data at hand.  Failure to do so strikes me as a dereliction of duty.  On the other hand, just as faculty might come to see why public engagement is important for their careers and the work they do, universities will be gripped by contradictory impulses – a publicly-engaged faculty will serve as a great justification for faculty salaries, increased state appropriations, new facilities, etc.  Then again, nobody likes to empower the labor, as it were . . .
In short, in thinking about public engagement and the IPCC, Revkin is dredging up a major issue related to all global assessments, and indeed the practices of academia.  I think there is opportunity here – and I feel like we must seize this opportunity.  We can either guide a process of change to a productive end, or ride change driven by others wherever it might take us.  I prefer the former.

The Qualitative Research Challenge to RCT4D: Part 2

Well, the response to part one was great – really good comments, and a few great response posts.  I appreciate the efforts of some of my economist colleagues/friends to clarify the terminology and purpose behind RCTs.  All of this has been very productive for me – and hopefully for others engaged in this conversation.
First, a caveat: on the blog I tend to write quickly and with minimal editing – so I get a bit fast and loose at times – well, faster and looser than I intend.  To be clear, I did not mean to suggest that nobody was doing rigorous work in development research – in fact, the rest of my post clearly set out to refute that idea, at least in the qualitative sphere.  But I see how Marc Bellemare might have read me that way.  What I should have said was that there has always been work, both in research and implementation, where rigorous data collection and analysis were lacking.  In fact, there is quite a lot of this work.  I think we can all agree this is true . . . and I should have been clearer.
I have also learned that what qualitative social scientists/social theorists mean by theory and what economists mean by theory seem to be two different things.  Lee defined theory as “formal mathematical modeling” in a comment on part 1 of this series of posts, which is emphatically not what a social theorist might mean.  When I say theory, I am talking about a conjectural framing of a social totality such that complex causality can at least be contained, if not fully explained.  This framing should have reference to some sort of empirical evidence, and therefore should be testable and refinable over time – perhaps through various sorts of ethnographic work, perhaps through formal mathematical modeling of the propositions at hand (I do a bit of both, actually).  In other words, what I mean by theory (and what I focus on in my work) is the establishment of a causal architecture for observed social outcomes.  I am all about the “why it worked” part of research, and far less about the “if it worked” questions – perhaps mostly because I have researched unintended “development interventions” (i.e. unplanned road construction, the establishment of a forest reserve that alters access to livelihoods resources, etc.) that did not have a clear goal, a clear “it worked!” moment to identify.  All I have been looking at are outcomes of particular events, and trying to establish the causes of those outcomes.  Obviously, this can be translated to an RCT environment, because we could control for the intervention and expected outcome, and then use my approaches to get at the “why did it work/not work” issues.
It has been very interesting to see the economists weigh in on what RCTs really do – they establish, as Marc puts it, “whether something works, not in how it works.”  (See also Grant’s great comment on the first post.)  I don’t think that I would get a lot of argument from people if I noted that without causal mechanisms, we can’t be sure why “what worked” actually worked, and whether the causes of “what worked” are in any way generalizable or transportable.  We might have some idea, but I would have low confidence in any research that ended at this point.  This, of course, is why Marc, Lee, Ruth, Grant and any number of other folks see a need for collaboration between quant and qual – so that we can get the right people, with the right tools, looking at different aspects of a development intervention to rigorously establish the existence of an impact, and then establish an equally rigorous understanding of the causal processes by which that impact came to pass.  Nothing terribly new here, I think.  Except, of course, for my continued claim that the qualitative work I do see associated with RCT work is mostly awful, tending toward bad journalism (see my discussion of bad journalism and bad qualitative work in the first post).
But this discussion misses a much larger point about epistemology – what I intended to write about in this second part of the series all along.  I do not see the dichotomy between measuring “if something works” and establishing “why something worked” as analytically valid.  Simply put, without some (at least hypothetical) framing of causality, we cannot rigorously frame research questions around either one.  How can you know if something worked, if you are not sure how it was supposed to work in the first place?  Qualitative research provides the interpretive framework for the data collected via RCT4D efforts – a necessary framework if we want RCT4D work to be rigorous.  By separating qualitative work from the quant-oriented RCT work, we are assuming that somehow we can pull data collection apart from the framing of the research question.  We cannot – nobody is completely inductive, which means we all work from some sort of framing of causality.  The danger is when we don’t acknowledge this simple point – under most RCT4D work, those framings are implicit and completely uninterrogated by the practitioners.  Even where they come to the fore (Duflo’s “three I’s”), they are not interrogated – they are assumed as framings for the rest of the analysis.
If we don’t have causal mechanisms, we cannot rigorously frame research questions to see if something is working – we are, as Marc says, “like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, because the only place he can actually see is under the street lamp.”  Only I would argue we are the drunk looking for his keys under the streetlamp with no idea whether they are there at all.
In short, I’m not beating up on RCT4D, nor am I advocating for more conversation – no, I am arguing that we need integration, teams with quant and qual skills that frame the research questions together, that develop tests together, that interpret the data together.  This is the only way we will come to really understand the impact of our interventions, and how to more productively frame future efforts.  Of course, I can say this because I already work in a mixed-methods world where my projects integrate the skills of GIScientists, land use modelers, climate modelers, biogeographers and qualitative social scientists – in short, I have a degree of comfort with this sort of collaboration.  So, who wants to start putting together some seriously collaborative, integrated evaluations?

The Qualitative Research Challenge to RCT4D: Part 1

Those following this blog (or my twitter feed) know that I have some issues with RCT4D work.  I’m actually working on a serious treatment of the issues I see in this work (i.e. journal article), but I am not above crowdsourcing some of my ideas to see how people respond.  Also, as many of my readers know, I have a propensity for really long posts.  I’m going to try to avoid that here by breaking this topic into two parts.  So, this is part 1 of 2.
To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, this has long been a problem in development research, and I have no doubt that the data they are gathering is valid.  However, part of the reason I feel confident in this data is that, as I raised in an earlier post, it is replicating findings from the qualitative literature . . . findings that are, in many cases, long established on the basis of rigorously gathered, verifiable data.  More on that in part 2 of this series.
One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.  However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data.  Certainly, these are different means of establishing rigor and validity, but they are still there.
Without rigor and validity, qualitative research devolves into bad journalism.  As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples.  These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader.  When journalists attempt to move to explanation via these same few examples (as columnists like Kristof and Friedman far too often do), they start making unsubstantiated claims that generally fall apart under scrutiny.  People mistake this sort of work for qualitative social science all the time, but it is not.  Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature.  Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.
This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement.  For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data.  In short, RCT4D often reverts to bad journalism when it comes time for explanation.  Patterns gleaned from meticulously gathered data are explained in an offhand manner.  For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of “the three I’s”: ideology, ignorance and inertia.  This is a boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here).  Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism.  Sorry, but that’s the truth.
Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – horribly ironic for an approach that is trying to move development research past the anecdotal.  This is not merely external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation.  In short, the bulk of the explanation for the interesting patterns of behavior that emerge from these studies resorts to uninterrogated assumptions about human behavior that do not hold up to empirical reality.  What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.
Coming soon: Part 2 – Qualitative research and the interpretation of empirical data

On field experience and playing poor

There is a great post up at Good on “Pretending to be Poor” experiments, where participants try to live on tiny sums of money (e.g. $1.50/day) to better understand the plight of the global poor.  Cord Jefferson refers to this sort of thing as “playing poor”, at least in part because participants don’t really live on $1.50 a day . . . after all, they are probably not abandoning their secure homes, and probably not working the sort of dangerous, difficult job that pays such a tiny amount.  Consuming $1.50/day is one thing.  Living on it is entirely another.  (h/t to Michael Kirkpatrick at Independent Global Citizen for pointing out the post)
This, for me, brings up another issue – the “authenticity” of the experiences many of us have had while doing fieldwork (or working in field programs), an issue that has been amplified by what seems to be the recent discovery of fieldwork by the RCT4D crowd (I still can’t get over the notion that they think living among the poor is a revolutionary idea).  The whole point of participant observation is to better understand what people do and why they do it by experiencing, to some extent, their context – I find it inordinately difficult to understand how people even begin to meaningfully parse social data without this sort of grounding.  In a concrete way, having malaria while in a village does help one come to grips, in a rather visceral way, with the challenges this might pose to making a living via agriculture.  So too, living in a village during a drought that decimated a portion of the harvest – going a couple of (intermittent) days without food, and getting by with inadequate food for quite a few more – helped me come to grips with both the capacity and the limitations of the livelihoods strategies in the villages I write about in Delivering Development, and gave me at least a limited understanding of the feelings of frustration and inadequacy that can arise when things go wrong in rural Africa, even as livelihoods strategies work to prevent the worst outcomes.
But the key part of that last sentence was “at least a limited understanding.”  Being there is not the same thing as sharing the experience of poverty, development, or disaster.  When I had malaria, I knew what clinics to go to, and I knew that I could afford the best care available in Cape Coast (and that care was very good) – I was not a happy guy on the morning I woke up with my first case, but I also knew where to go, and that the doctor there would treat me comprehensively and I would be fine.  So too with the drought – the villages I was living in were, at most, about 5 miles (8km) from a service station with a food mart attached.  Even as I went without food for a day, and went a bit hungry for many more, I knew in the back of my mind that if things turned dire, I could walk that distance and purchase all of the food I needed.  In other words, I was not really experiencing life in these villages because I couldn’t, unless I was willing to throw away my credit card, empty my bank account, and renounce all of my upper-class and government colleagues and friends.  Only then would I have been thrown back on what I could earn in a day in the villages and the (mostly appalling) care available in the rural clinic north of Eguafo.  I was always critically aware of this fact, both in the moment and when writing and speaking about it since.  Without that critical awareness, and a willingness to downplay our own (or others’) desire to frame our work as a heroic narrative, there is a real risk of creating our own versions of “playing poor” as we conduct fieldwork.

I'm a talking head . . .

Geoff Dabelko, Sean Peoples, Schuyler Null and the rest of the good folks at the Environmental Change and Security Program at the Woodrow Wilson International Center for Scholars were kind enough to interview me about some of the themes in Delivering Development.  They’ve posted the video on the ECSP’s blog, The New Security Beat (you really should be checking them out regularly).  So, if you want to see/hear me (as opposed to read me), you can go over to their blog, or just click below.

Little milestones

A quick thank you to everyone who has stopped by this blog over the past 10 months.  Google Analytics tells me that my 10,000th individual visitor arrived at some point earlier today . . . which sort of blows my mind.  Yeah, some of you all seem to crap 10,000 visitors on a Monday – I know.  But hey, I started the blog largely at the behest of my publisher, as a means of getting myself and Delivering Development out there.  It has become a lot more than that for me – it lets me vent to a really interesting readership, and helps me control my lecturing withdrawals while I am on leave from academia.  I appreciate all the comments, emails, tweets and retweets – I’ve learned a lot from this effort, and from the community it seems to have brought me.  I will attempt to remain suitably entertaining/intelligent going forward . . .
Now, I want every single one of you to go out and buy a copy of my book.  Pronto.

Right tool for the job

Sasha Dichter has an interesting post about marketing and the poor.  My initial reaction was annoyance, as I grow weary of the gratuitous academia-bashing that takes place in some corners of the aid world, and the post is sullied by a few needless kicks at an academic straw man that I found off-putting.  But, digging past that, I found myself largely in agreement with two big points.
First, Dichter raises and then dismisses an all-too-common, frustrating assumption (one that ties into one of my posts yesterday about the appropriation of qualitative research and findings by economists):

Ivory tower development practitioners don’t respect the poor, think of them as inanimate beneficiaries, and so practitioners don’t take real needs and aspirations into account.

As he implies, this attitude is neither useful nor really accurate – it doesn’t get us down the road toward explaining why things go wrong.  I made the same point about development agencies and workers in Delivering Development:

The vast majority of people working for development organizations are intelligent and good-hearted. They care deeply about the plight of the global poor and labor each day on projects and policies that might, finally, reverse the trends of inequality and unsustainability that mark life in much of the world . . . If these agencies and individuals are, by and large, trying their hardest to do good and have billions of dollars to work with, why are they failing?

So, moving forward with that sense of kinship, I found his next point spot on:

Ivory tower development practitioners are crappy marketers.

Enough with the “ivory tower” bashing, Sasha – you are obscuring a really good point here.  Way back last summer, when I got myself embroiled in a brouhaha over how members of the IPCC were supposed to communicate with the press – one that eventually made its way into the New York Times via Dot Earth – I found myself having email conversations with Rajendra Pachauri (who was actually very gracious and engaged).  In the course of our exchanges, I argued exactly the same point Sasha is making, but in the context of how we message information about climate change.

I am merely suggesting that there are people out there who spend their lives thinking about how to get messages out there, and control that message once it is out there. Just as we employ experts in our research and in these assessment reports precisely because they bring skills and training to the table that we lack, so too we must consider bringing in those with expertise in marketing and outreach.

I’m not sure how well I was heard on this, though they do have a head of outreach in the secretariat now . . .
In short, good point Sasha.  Now, could you go easy on the ivory tower bashing while making it?  Believe it or not, many of us know about this problem and would love to work with people with the expertise to fix it.

And another thing . . .

Would folks who know precious little about development please stop telling everyone what the discipline of development looks like?  Seriously.  Francis Fukuyama has a piece in the American Interest in which he decries the lack of what he calls “large perspective” work in the social sciences.  Admittedly, I have some sympathy for his position here – like all academic disciplines, the social sciences generally reward narrow specialization, or at least that is what most of us are trained to believe.  I think there is another way to succeed in academia, a path I am taking – to write not only high-quality, refereed research in one’s field(s), but also general-audience works that gain a wider profile (that was the point of writing Delivering Development).  When you reach audiences beyond academia, you develop other lines of influence, other sources of funding . . . and generally give yourself some space in your home institution, as nobody wants to fire/lose a visible public intellectual.  Sadly, few of us choose to buck the system in this manner, and therefore we become slaves to our journals and their relatively narrow audiences.
I also like Fukuyama’s clear argument about the goals of social science:

“The aspiration of social science to replicate the predictability and formality of certain natural sciences is, in the end, a hopeless endeavor. Human societies, as Friedrich Hayek, Karl Popper and others understood, are far too complex to model at an aggregate level.”

Yes, yes, a thousand times yes.  When we refuse to admit this, we empower the people who are willing to take problematic data and jam it through dicey quantitative tools to produce semi-coherent, super-shallow analyses that appear to offer simple framings of the world and solutions to our problems while in fact obscuring any real understanding of what is going on, and of what might be done.
But in between these two points, made at the beginning and end of the article, respectively, Fukuyama populates his piece with a number of statements about development that range from the problematic to the factually incorrect.  In the end, I am forced to conclude that he has little, if any, understanding of contemporary development in theory or practice – though sadly, that did not keep him from making sweeping, highly erroneous statements.  For example, at one point he claims:

Few scholars have sought to understand development as an inter-connected process with political, economic and social parts.

This claim exists to further his argument that development is plagued by siloed thinking that has led to intellectual incoherence and failed policy.  While I might agree that development has problems with its intellectual coherence, he is simply wrong here.  The claim only holds up if one chooses to NOT use something as ubiquitous as Google Scholar (let alone Web of Science) to examine the literature of the past 20 years.  Anthropologists, geographers and sociologists have been doing just this sort of work, mostly at the community level, all along.  Often the lessons of this work are not aimed beyond the communities in which the work was undertaken, but there is a giant volume of work out there that has long taken this interconnection seriously.
Further, Fukuyama’s ignorance of the current state of the discipline and practice of development shows in his claim:

While paying lip service to the importance of institutions, most economists and field practitioners still see politics as at best an obstacle to the real work of development, which is improvement in incomes, health, education and the like, and not as an independent objective of development strategy. (Amartya Sen is an important exception to this generalization.) The democracy promotion agencies, for their part, spend relatively little time worrying about economic growth, social policy or public health, which in their view are goods often used by authoritarian regimes to buy off populations and prevent democratization.

While some economists still treat “the social” as maximizing behavior warped by a bunch of externalities, those who are any good concern themselves with politics (at scales from the state to the household).  Practitioners, perhaps more than anyone else, know that politics are hugely important to the work of development.  Sen has a wide purchase and following throughout development, including at my current employer.  And how does one then account for the Democracy and Governance Office in my Bureau?  They are, without question, a democracy promotion office . . . but their whole lives revolve around linking democracy promotion to various other development efforts like economic growth and public health.  When he claims that those who work for USAID “do not seek an understanding of the political context within which aid is used and abused,” he is simply factually incorrect.  Basically, Fukuyama is just throwing out huge claims that have little or no anchor in the reality of contemporary development agencies or practice.
Fukuyama’s article was not really about development – it was about understanding social change.  However, in using development as his foil in this piece, Fukuyama has done a great disservice to the contemporary discipline – both in its good and bad aspects.  Like those who would give us useless universalizing generalizations and predictions from their social inquiries, Fukuyama’s (mis)reading of development makes it harder to see where the real problems are, and how we might address them.

Qualitative research was (already) here . . .

You know, qualitative social scientists of various stripes have long complained of their marginalization in development.  Examples abound of anthropologists, geographers, and sociologists complaining about the influence of the quantitatively-driven economists (and to a lesser extent, some political scientists) over development theory and policy.  While I am not much for whining, these complaints are often on the mark – quantitative data (of the sort employed by economists, and currently all the rage in political science) tends to carry the day over qualitative data, and the nuanced lessons of ethnographic research are dismissed as unimplementable, idiosyncratic/place-specific, without general value, etc.  This is not to say that I have an issue with quantitative data – I believe we should employ the right tool for the job at hand.  Sadly, most people have only either qualitative or quantitative skills, making the selection of appropriate tools pretty difficult . . .
But what is interesting, of late, is what appears to be a turn toward the lessons of the qualitative social sciences in development . . . only without actually referencing or reading those qualitative literatures.  Indeed, the former quantitative masters of the development universe are now starting to figure out and explore . . . the very things that the qualitative community has known for decades.  What is really frustrating and galling is that these “new” studies are being lauded as groundbreaking and getting great play in the development world, despite the fact that they are reinventing the qualitative wheel – and doing so without much of the nuance the current qualitative literature has built up over several decades.
What brings me to today’s post is the new piece on hunger in Foreign Policy by Abhijit Banerjee and Esther Duflo.  On one hand, this is great news – good to see development rising to the fore in an outlet like Foreign Policy.  I also largely agree with their conclusions – that the poverty trap/governance debate in development is oversimplified, that food security outcomes are not explicable through a single theory, etc.  On the other hand, from the perspective of a qualitative researcher looking at development, there is nothing new in this article.  Indeed, the implicit premise of the article is galling: when they argue that to address poverty, “In practical terms, that meant we’d have to start understanding how the poor really live their lives,” the implication is that nobody has been doing this.  But what of the tens of thousands of anthropologists, geographers and sociologists (as well as representatives of other cool, hybridized fields like new cultural historians and ethnoarchaeologists)?  Hell, what of the Peace Corps?
Whether intentional or not, this article wipes the qualitative research slate clean, allowing the authors to present their work in a methodological and intellectual vacuum.  This is the first of my problems with this article – not so much with its findings as with its appearance of method.  While I am sure that there is more to their research than presented in the article, the way their piece is structured, the case studies look like evidence/data for a new framing of food security.  They are not – they are illustrations of the larger conceptual points that Banerjee and Duflo are making.  I am sure that Banerjee and Duflo know this, but the reader does not – instead, most readers will think this represents some sort of qualitative research, or a mixed-methods approach that takes “hard numbers” and mixes them in with the loose suppositions that Banerjee and Duflo offer by way of explanation for the “surprising” outcomes they present.  But loose supposition is not qualitative research – at best, it is journalism.  Bad journalism.  My work, and the work of many, many colleagues, is based on rigorous methods of observation and analysis that produce verifiable data on social phenomena.  The work that led to Delivering Development and many of my refereed publications took nearly two years of on-the-ground observation and interviewing, including follow-ups, focus groups and even the use of archaeology and remotely-sensed data on land use to cross-check and validate both my data and my analyses.
The result of all that work was a deep humility in the face of the challenges that those living in places like Coastal Ghana or Southern Malawi manage on a day-to-day basis . . . and deep humility when addressing the idea of explanation.  This is an experience I share with countless colleagues who have spent a lot of time on the ground in communities, ministries and aid organizations, a coming to grips with the fact that massively generalizable solutions simply don’t exist in the way we want them to, and that singular interventions will never address the challenges facing those living in the Global South.
So, I find it frustrating when Banerjee and Duflo present this observation as in any way unique:

What we’ve found is that the story of hunger, and of poverty more broadly, is far more complex than any one statistic or grand theory; it is a world where those without enough to eat may save up to buy a TV instead, where more money doesn’t necessarily translate into more food, and where making rice cheaper can sometimes even lead people to buy less rice.

For anyone working in food security – that is, anyone who has been reading the literature coming out of anthropology, geography, sociology, and even some areas of ag econ – this is not a revelation; it is standard knowledge.  A few years ago I spent a lot of time and ink on an article in Food Policy that tried to loosely frame a schematic of the local decision-making that leads to food security outcomes – an effort to systematize an approach to the highly complex sets of processes and decisions that produce hunger in particular places, because there is really no way to get a single, generalized statistic or finding that will explain hunger outcomes everywhere.
In other words: We know.  So what do you have to tell us?
The answer, unfortunately, is not very much . . . because in the end they don’t really dive into the social processes that lead to the sorts of decisions that they see as interesting or counterintuitive.  This is where the heat is in development research – there are a few of us working down at this level, trying to come up with new framings of social process that move us past a reliance solely on the blunt tool of economistic rationality (which can help explain some behaviors and decisions) toward a more nuanced framing of how those rationalities are constructed by, and mobilize, much larger social processes like gender identification.  The theories with which we are dealing are very complex, but they do work (at least I think my work with governmentality is working – but the reviewers at Development and Change might not agree).
And maybe, just maybe, there is an opening to get this sort of work out into the mainstream, to get it applied – we’re going to try to do this at work, pulling together resources and interests across two Bureaus and three offices to see if a reframing of livelihoods around Foucault’s idea of governmentality can, in fact, get us better resolution on livelihoods and food security outcomes than current livelihoods models (which mostly assume that decision-making is driven by an effort to maximize material returns on investment and effort).  Perhaps I place too much faith in the idea of evidence, but if we can implement this idea and demonstrate that it works better, perhaps we will have a lever with which to push oversimplified economistic assumptions out of the way, while still doing justice to the complexity of social process and explanation in development.
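To illustrate the difference between these framings – and this is purely my own toy example, not the model we are building at work – consider two decision rules written out in a few lines of Python.  The first is the standard maximization assumption; the second discounts each option by the social cost of violating locally held norms.  Every activity, income figure and penalty weight below is invented:

# Toy contrast between two decision rules for choosing a livelihood activity.
activities = {
    "grow_maize":   {"income": 100, "norm_violation": 0.0},
    "market_trade": {"income": 160, "norm_violation": 0.8},  # high social cost
    "fish_smoking": {"income": 120, "norm_violation": 0.1},
}

def max_income_choice(opts):
    """Standard livelihoods framing: pick the highest expected income."""
    return max(opts, key=lambda a: opts[a]["income"])

def socially_embedded_choice(opts, penalty=100.0):
    """Alternative framing: income is discounted by the social cost of
    violating locally held norms (e.g. gendered divisions of labor)."""
    return max(opts, key=lambda a: opts[a]["income"] - penalty * opts[a]["norm_violation"])

print(max_income_choice(activities))         # -> market_trade
print(socially_embedded_choice(activities))  # -> fish_smoking

With the penalty set to zero the two rules agree; the interesting empirical question – exactly the kind of question qualitative research answers – is what sets that penalty in a given place, for a given person.  That is why choices that limit household incomes can be entirely sensible, rather than evidence of ignorance or inertia.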

Optimism in numbers

Tom over at A View from the Cave has a really interesting observation at the end of his post the other day on the Mortenson scandal:

I have been conducting interviews with the Knowledge Management team with UNICEF and the one today got to discussing the access of information. I was struck when the gentleman I was interviewing said, “There are hundreds of offices and thousands of people in UNICEF. Any idea that I come with has likely been already done by 50 people and better than what I had imagined.” We need to access this information and share it with each other so that a story like this will not go the same route.

I know that this is not a new observation – hell, it is practically the mantra of the ICT for development crowd – but I want to point out something that gets lost in its common repetition: optimism.  The interviewee above was not disparaging the idea of access to information, but instead showing tremendous humility in the face of a vast, talented organization.  Tom’s point was to move from this humble observation to (quite rightly) point out that while great ideas may exist within the organization, until they are accessed or shared they are just potential energy.
This is the same thing I tried to leave readers with as one of the takeaways from Delivering Development.  As I argue:

We probably overlook significant problems every day, as our measurements fail to capture them, and we are likely mismeasuring many of those we can see. However, this is not failure; this is hope. If we acknowledge that these are, indeed, significant problems that must be addressed if we wish to build a sustainable future, then we can abandon the baggage of decades of failure. We can open ourselves up to innovation that might be unimaginable from within the echo chamber of contemporary globalization and development . . .

This uncertainty, for me, is hope. There are more than 6.5 billion people on this planet. Surely at least several of them have innovative and exciting ideas about how to address the challenges facing their lives, ideas that might be applicable in other places or be philosophically innovative. We will not know unless we ask, unless we actively go looking for these ideas and empower those who have them to express them to the world.

In short, Tom’s interviewee sees 50,000 people as a hopeful resource.  I see the nearly 7 billion people on this planet in the same way.  I am optimistic about the “potential energy” for addressing global challenges that exists out there in the world.  That said, it will be nothing but potential until we empower people to convert it into kinetic action.  Delivering Development provides only the loosest schematic of one way of thinking about doing this (there is a much, much more detailed project/workplan behind that loose schematic), presented to raise a political challenge to the status quo focus on experts and “developed country” institutions in development – if we know that people living in the Global South have good ideas, and we can empower these people to share their ideas and solutions, why don’t we?
Sometimes optimism requires a lead blocker.  I’m happy to play that role . . . hopefully someone is following me through the line.