Entries tagged with “Nick Kristof”.


Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – shining a light on these challenges would highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department in a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. And if even the NRC’s own reports were excluded, you can imagine that reports of all kinds, in all contexts, were excluded as well. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever-more challenging, with expectations for the number of publications produced rising so steeply that many who recently got tenure might have published more articles than their very senior colleagues published to become full professors even two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as it would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there), which attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from the free labor of authors and reviewers, the nearly-free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before, we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems to be a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any dollars that look like research funding have started looking good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com. I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic rewards system, which currently gives us job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture, this is a problem of university management – administrations decide who gets tenure, and on what standard. If university administrations decided to halve the number of articles required for tenure, and replaced that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The problem is that the vast majority of academics aren’t trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff that professional writers (and columnists) rely on to clean up their own turgid prose, and that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support that, say, the New York Times or the New Yorker provides for its writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription. Which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee. I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Policy prescriptions deny the policymaker ownership of the conclusion and of the needed response. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th-century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US because it captured (for better or for worse) various national needs at different times. I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about 4 years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

So, I’m finally back in academia, with some time to start writing again…and able to do so without worrying about who I might annoy.  Ah, the joys of tenure.  Actually, I shouldn’t make that sound so glib – the fact is, this is what tenure is for: it allows people like me to argue about important ideas and take politically challenging positions without having to worry about our incomes.

Quickly, then, I would like to make a point about a Nick Kristof column that appeared back in early July (but I had to shut up about at the time).  In it, he talks about some USAID-funded food security programs that work “with local farmers to promote new crops and methods so that farmers don’t have to worry about starving in the first place.”  Nothing wrong with that – this just makes good sense, really, given the dramatic economic and environmental changes that so many folks must address in their everyday lives and livelihoods.  But then Kristof describes the program via an anecdote:

Jonas Kabudula is a local farmer whose corn crop completely failed, and he said that normally he and his family would now be starving. But, with the help of a U.S.A.I.D. program, he and other farmers also planted chilies, a nontraditional crop that doesn’t need much rain.

“Other crops wither, and the chilies survive,” Kabudula told me. What’s more, each bag of chilies is worth about five bags of corn, so he and other villagers have been able to sell the chilies and buy all the food they need.

“If it weren’t for the chilies,” said another farmer, Staford Phereni, “we would have no food.”

Er, this is not resilience.  Sure, it is a different crop, with different biophysical needs than maize…but they still have to sell it to get the money to eat.  Chilies are, in the end, a seasoning – in economic terms, demand for them is highly elastic, as people can simply choose not to season their food if they run out of money.  So, when all hell breaks loose in a country, such as when a drought compromises the principal food crop, a large percentage of the people who would buy chilies (other farmers) cannot do so, depressing the price and lowering the relative value of the chilies versus needed food items (the prices of which are likely rising as demand for alternatives to maize kicks in) – in short, your cash crop buys you less food than it did under good conditions.  You end up just as screwed as everyone else, albeit a few weeks later.  Further, this all presumes that markets are functioning at anything like regular levels, which is a bad bet when things really get stressed.  Basically, this program gets resilience wrong because it fails to capture all of the things that people are vulnerable to: it isn’t just the climate, it is also the market.  Yes, you’ve addressed at least some of the climate vulnerability…by pushing people onto a precarious market likely to be upset by the very climate conditions you are trying to address in the first place.  Oops.  More income is not necessarily more resilience if that income can be destabilized by the very thing it was meant to help address.
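
To make that terms-of-trade squeeze concrete, here is a minimal, purely illustrative sketch. Only the “one bag of chilies is worth about five bags of corn” baseline comes from Kristof’s anecdote; the harvest size and the drought-year price shifts are hypothetical numbers chosen for illustration, not data from the program.

```python
# Illustrative only: hypothetical prices showing how a drought can squeeze
# the terms of trade between a cash crop (chilies) and a staple (maize).
# The 5:1 baseline echoes the "each bag of chilies is worth about five bags
# of corn" figure in Kristof's anecdote; everything else is assumed.

def maize_bags_purchasable(chili_bags, chili_price, maize_price):
    """Bags of maize a farmer can buy by selling a chili harvest."""
    return chili_bags * chili_price / maize_price

harvest = 10  # bags of chilies sold (hypothetical)

# Normal year: chilies fetch five times the price of maize.
normal = maize_bags_purchasable(harvest, chili_price=50, maize_price=10)

# Drought year: neighbors stop buying seasoning (demand is elastic), so the
# chili price sags, while scarcity pushes the maize price up.
drought = maize_bags_purchasable(harvest, chili_price=30, maize_price=25)

print(f"Normal year:  {normal:.0f} bags of maize")   # -> 50 bags
print(f"Drought year: {drought:.0f} bags of maize")  # -> 12 bags
```

Even if the chili harvest itself comes in fine, the same physical harvest buys a fraction of the food it did in a good year – which is the sense in which this kind of program trades a climate risk for a market risk.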

Given all of this, it seems to me that Kristof has missed the really important issue here: if this actually worked for this farmer, we need to know why it worked given all that could go wrong, and build on that.  However, he doesn’t dive into that, at least in part because I think he sees the project success in this case as an expected outcome, the sort of thing that “should happen” because more income means more resilience and therefore less vulnerability to climate change and food insecurity.  And that has everything to do with how we in development talk and think about vulnerability and resilience.  While Kristof does not use these terms, they are implicit in his thinking about how this program helped this farmer – the farmer had other options that allowed him to address a climate-related challenge by increasing his income (or at least holding the line in bad situations), making him more resilient/less vulnerable than his neighbors in the face of this challenge.  However, this example in this column is an argument for why we should be worried about the ways in which development has started slinging these terms around of late.  It is unclear to me how this program can address vulnerability or build resilience, because it does not seem to address some significant factors shaping local vulnerability, nor has it identified why those who display resilience in the face of a climate/food security challenge are actually having better outcomes.

Granted, I am griping about one part of the program (the others actually sound quite interesting and reasonable), but this part just trips my vulnerability/resilience switch…

It comes down to this bit of bad news: until we come to grips with how we understand and define vulnerability and resilience, and do a better job of grounding these concepts in place, we will continue to design programs and projects that just trade one risk/vulnerability for another.  That’s no way to get our job done.

Well, the response to part one was great – really good comments, and a few great response posts.  I appreciate the efforts of some of my economist colleagues/friends to clarify the terminology and purpose behind RCTs.  All of this has been very productive for me – and hopefully for others engaged in this conversation.

First, a caveat: On the blog I tend to write quickly and with minimal editing – so I get a bit fast and loose at times – well, faster and looser than I intend.  So, to be clear: I did not mean to suggest that nobody was doing rigorous work in development research – in fact, the rest of my post clearly set out to refute that idea, at least in the qualitative sphere.  But I see how Marc Bellemare might have read me that way.  What I should have said was that there has always been work, both in research and implementation, where rigorous data collection and analysis were lacking.  In fact, there is quite a lot of this work.  I think we can all agree this is true . . . and I should have been clearer.

I have also learned that what qualitative social scientists/social theorists mean by theory, and what economists mean by theory, seem to be two different things.  Lee defined theory as “formal mathematical modeling” in a comment on part 1 of this series of posts, which is emphatically not what a social theorist might mean.  When I say theory, I am talking about a conjectural framing of a social totality such that complex causality can at least be contained, if not fully explained.  This framing should have reference to some sort of empirical evidence, and therefore should be testable and refinable over time – perhaps through various sorts of ethnographic work, perhaps through formal mathematical modeling of the propositions at hand (I do a bit of both, actually).  In other words, what I mean by theory (and what I focus on in my work) is the establishment of a causal architecture for observed social outcomes.  I am all about the “why it worked” part of research, and far less about the “if it worked” questions – perhaps mostly because I have researched unintended “development interventions” (e.g. unplanned road construction, the establishment of a forest reserve that alters access to livelihoods resources, etc.) that did not have a clear goal, a clear “it worked!” moment to identify.  All I have been looking at are outcomes of particular events, and trying to establish the causes of those outcomes.  Obviously, this can be translated to an RCT environment because we could control for the intervention and expected outcome, and then use my approaches to get at the “why did it work/not work” issues.

It has been very interesting to see the economists weigh in on what RCTs really do – they are interested, as Marc puts it, in “whether something works, not in how it works.”  (See also Grant’s great comment on the first post).  I don’t think that I would get a lot of argument from people if I noted that without causal mechanisms, we can’t be sure why “what worked” actually worked, and whether the causes of “what worked” are in any way generalizable or transportable.  We might have some idea, but I would have low confidence in any research that ended at this point.  This, of course, is why Marc, Lee, Ruth, Grant and any number of other folks see a need for collaboration between quant and qual – so that we can get the right people, with the right tools, looking at different aspects of a development intervention to rigorously establish the existence of an impact, and to establish an equally rigorous understanding of the causal processes by which that impact came to pass.  Nothing terribly new here, I think.  Except, of course, for my continued claim that the qualitative work I do see associated with RCT work is mostly awful, tending toward bad journalism (see my discussion of bad journalism and bad qualitative work in the first post).

But this discussion misses a much larger point about epistemology – what I intended to write in this second part of the series all along.  I do not see the dichotomy between measuring “if something works” and establishing “why something worked” as analytically valid.  Simply put, without some (at least hypothetical) framing of causality, we cannot rigorously frame research questions around either question.  How can you know if something worked, if you are not sure how it was supposed to work in the first place?  Qualitative research provides the interpretive framework for the data collected via RCT4D efforts – a necessary framework if we want RCT4D work to be rigorous.  By separating qualitative work from the quant-oriented RCT work, we are assuming that somehow we can pull data collection apart from the framing of the research question.  We cannot – nobody is completely inductive, which means we all work from some sort of framing of causality.  The danger is when we don’t acknowledge this simple point – under most RCT4D work, those framings are implicit and completely uninterrogated by the practitioners.  Even where they come to the fore (Duflo’s three I’s), they are not interrogated – they are assumed as framings for the rest of the analysis.

If we don’t have causal mechanisms, we cannot rigorously frame research questions to see if something is working – we are, as Marc says, “like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, because the only place he can actually see is under the street lamp.”  Only I would argue we are the drunk looking for his keys under a streetlamp, but he has no idea if they are there or not.

In short, I’m not beating up on RCT4D, nor am I advocating for more conversation – no, I am arguing that we need integration, teams with quant and qual skills that frame the research questions together, that develop tests together, that interpret the data together.  This is the only way we will come to really understand the impact of our interventions, and how to more productively frame future efforts.  Of course, I can say this because I already work in a mixed-methods world where my projects integrate the skills of GIScientists, land use modelers, climate modelers, biogeographers and qualitative social scientists – in short, I have a degree of comfort with this sort of collaboration.  So, who wants to start putting together some seriously collaborative, integrated evaluations?

Those following this blog (or my twitter feed) know that I have some issues with RCT4D work.  I’m actually working on a serious treatment of the issues I see in this work (i.e. journal article), but I am not above crowdsourcing some of my ideas to see how people respond.  Also, as many of my readers know, I have a propensity for really long posts.  I’m going to try to avoid that here by breaking this topic into two parts.  So, this is part 1 of 2.

To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, this has long been a problem in development research, and I have no doubt that the data they are gathering is valid.  However, part of the reason I feel confident in this data is because, as I raised in an earlier post,  it is replicating findings from the qualitative literature . . . findings that are, in many cases, long-established with rigorously-gathered, verifiable data.  More on that in part 2 of this series.

One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.  However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data.  Certainly, these are different means of establishing rigor and validity, but they are still there.

Without rigor and validity, qualitative research falls into bad journalism.  As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples.  These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader.  When journalists attempt to move to explanation via these same few examples (as columnists like Kristof and Friedman far too often do), they start making unsubstantiated claims that generally fall apart under scrutiny.  People mistake this sort of work for qualitative social science all the time, but it is not.  Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature.  Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.

This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement.  For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data.  In short, RCT4D often reverts to bad journalism when it comes time for explanation.  Patterns gleaned from meticulously gathered data are explained in an offhand manner.  For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of “the three I’s”: ideology, ignorance and inertia.  This is a boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here).  Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism.  Sorry, but that’s the truth.

Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – this is horribly ironic for an approach that is trying to move development research past the anecdotal.  This is not merely an external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation.  In short, the bulk of the explanation for these interesting patterns of behavior that emerge from these studies resorts to uninterrogated assumptions about human behavior that do not hold up to empirical reality.  What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.

Coming soon: Part 2 – Qualitative research and the interpretation of empirical data