Entries tagged with “policy”.


Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – shedding light on these challenges would actually serve to highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department at a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore to justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. And if the NRC’s own reports were excluded, you can imagine that reports of basically every other kind, in every other context, were excluded as well. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever more challenging, with expectations for the number of publications rising so steeply that many recently tenured faculty may well have published more articles than their very senior colleagues needed to become full professors even two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as they would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there), which attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are the last profitable properties for a number of publishing houses. These publishers benefit from the free labor of authors and reviewers, the nearly free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before, we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems to be a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were decidedly second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any dollars that look like research funding have started to look good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID in the form of contracts or grants.]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com. I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors).

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic reward system, which currently grants job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture; it is a problem of university management – administrations decide who gets tenure, and by what standard. If university administrations decided to halve the number of articles required for tenure and to replace that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The real problems are that the vast majority of academics are not trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff that professional writers (and columnists) rely on to clean up their own turgid prose, and the really simple fact that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support that, say, the New York Times or the New Yorker provides for its writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription – which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee, and I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Prescriptions deny policymakers ownership of the conclusions and of the responses those conclusions call for. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th-century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US in response to various needs at different times that the discipline captured (for better or for worse). I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about 4 years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

One of the things I have had the privilege to witness over the past two years is the movement of a large donor toward a very serious monitoring and evaluation effort aimed at its own programs.  While I know some in the development community, especially in academia, are skeptical of any new initiative that claims to want to do a better job of understanding the impact of programs, and learning from existing programs, what I saw in practice leads me to believe that this is a completely sincere effort with a lot of philosophical buy-in.

That said, there are significant barriers coming for monitoring and evaluation in development.  I’m not sure that those making evaluation policy fully grasp these barriers, and as a result I don’t see evidence that they are being effectively addressed by anyone.  Until they are, this sincere effort is likely to underperform, if not run aground.

In this post, I want to point out a huge institutional/structural problem for M&E: the conflict of interest that is created on the implementation side of things.  On one hand, donors are telling people that we need to learn about what works, and that monitoring and evaluation is not meant to be punitive, but part of a learning process to help all of us do our jobs better.  On the other hand, at most donors, budgets are under pressure, and the message from the top is that development must focus on “what works.”  Think about what this means to a mission director or a chief of party.  On one hand, they are told that M&E is about learning, and that failure is to be expected and can be tolerated as long as we learn why it occurred, remedy the problem, and prevent it from recurring elsewhere.  On the other, they are told that budgets will focus on what works.  So if they set up rigorous M&E, they are likely to identify programs that are underperforming (and perhaps learn why)…but there is no guarantee that this learning won’t result in that program being cut, with a commensurate loss of staff and budget.  I have yet to see anyone meaningfully address this conflict of interest, and until someone figures out how to do so, there will be significant and creative resistance to the implementation of rigorous M&E.

Any ideas, folks?  Surely some of you have seen this at work…

Simply put, the donors are going to have to decide what is more important – learning what works, and improving on development’s 60+ year track record of spotty results with often limited correlation to programs and projects, or maintaining the appearance of efficiency and efficacy by cutting anything that does not seem to work, and likely throwing out a lot of babies with the bathwater. I know which one I would choose.  It remains unclear where the donors’ choices will fall.  In a politically challenging environment, the pressure to go with the latter approach is high, and the protection of a learning agenda that will really change how development works will require substantial political courage.  That courage exists…but whether or not it comes to the fore is a different question.

One of the things I am (not so) fond of saying is that when it comes to climate, I am not really worried about what I do know – it’s the things that I don’t know, and cannot predict, that worry me the most. The climate displays many characteristics of a nonlinear complex system, which means that we cannot assume that any changes in this system will come in a steady manner – even a fast but steady manner. Instead, the geologic record suggests that this system changes gradually (i.e. slowly warms up, with related shifts in sea level, precipitation, wind patterns and ocean circulation) only up to a certain point before changing state – that is, shifting all of these patterns rather dramatically into a new state that conveys the extra energy in the atmosphere through the Earth system in a different manner. These state changes are frightening to me because they are highly unpredictable (we are not sure where the thresholds for these changes are) and, at their worst, they could introduce biophysical changes like increased temperature and rates of evaporation and decreased rainfall with such speed (i.e. in a decade or two, as opposed to over centuries) that the rate of change outpaces the capacity of biomes to adapt, and of the constituent species in those biomes to evolve. This is not some random concern about biodiversity – people seem to forget that agricultural systems are ecosystems; radically simplified ecosystems, to be sure, but still ecosystems. They are actually terribly unstable ecosystems because they are so simple (they have little resilience to change, as there are so few components that shifting any one of them can introduce huge changes to the whole system), and so the sort of nonlinear changes I am describing have particular salience for our food supply. I am not a doomsday-scenario kind of guy – I like to think of myself as a hopelessly realistic optimist – but I admit that this sort of thing worries me a lot.

So, to put this another way: we are running like hell down a long hallway toward an open door into a darkened room. We can’t see what’s in the room, and it is coming up fast. Most normal people would probably slow down and enter the room cautiously so as to avoid a nasty collision with something in the dark. When it comes to climate change, though, our current behavior is akin to running right into that room at full speed and hoping with all our might that there is nothing in the way.

This is a really, really stupid way of addressing the challenge of climate change.

The good news on this front is that we are starting to see the emergence of a literature on early warning signals for these tipping points. I had a post on this recently, and now the July issue of Nature Climate Change has a review article by Timothy Lenton on early warning of tipping points. It is a really excellent piece – it lays out what we are currently doing, shows the limitations of what we can do, points to significant challenges both in the science and in the policy realm, and tries to chart a path forward. I think Lenton comes in a bit science-heavy in this piece, though. While he raises the issues of false alarms and missed alarms, he spends nearly all of his time looking at methods for reducing the occurrence of these events. This is all well and good, but false and missed alarms are inevitable when trying to predict the behavior of complex systems. Yes, we need more and better science, but we also need to be thinking about how we address the loss of policymaker confidence in the wake of false or missed alarms.
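
To make the idea concrete, here is a minimal, hypothetical sketch (in Python, on synthetic data – not Lenton’s methods or any real climate record) of the kind of indicator this literature leans on: “critical slowing down,” which shows up as rising lag-1 autocorrelation and variance in a rolling window as a system loses resilience ahead of a threshold.

```python
# Illustrative only: synthetic data and simple statistics, not a validated
# early-warning method. Rising lag-1 autocorrelation and variance ("critical
# slowing down") are among the indicators reviewed in the tipping-point literature.
import numpy as np

def rolling_indicators(series, window):
    """Lag-1 autocorrelation and variance over a sliding window."""
    autocorr, variance = [], []
    for start in range(len(series) - window + 1):
        chunk = series[start:start + window]
        chunk = chunk - chunk.mean()  # crude local detrending
        autocorr.append(np.corrcoef(chunk[:-1], chunk[1:])[0, 1])
        variance.append(chunk.var())
    return np.array(autocorr), np.array(variance)

# Synthetic AR(1) process whose "memory" slowly increases, mimicking a system
# losing resilience as it approaches a state change.
rng = np.random.default_rng(0)
n = 1000
phi = np.linspace(0.2, 0.97, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal(scale=0.5)

ac, var = rolling_indicators(x, window=200)
print(f"lag-1 autocorrelation, early window: {ac[0]:.2f}, late window: {ac[-1]:.2f}")
print(f"variance, early window: {var[0]:.2f}, late window: {var[-1]:.2f}")
# A sustained upward trend in both indicators is the warning sign; false and
# missed alarms remain possible, which is exactly the policy problem above.
```

The point of the sketch is not the statistics – it is that even a well-behaved indicator like this will sometimes cry wolf or stay silent, which is why the communication problem matters as much as the science.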

To get to this point, I think we need to be looking to an arena where people have a lot of experience communicating levels of risk and the importance of addressing that risk – the insurance industry. Most readers of this blog will have some form of insurance – be it health insurance, life insurance, car insurance, etc. I have all three. Every month, I pay a premium for a product I sincerely hope I never have to use. I’d rather hang on to that money (with a family the size of mine, it gets steep), but the cost of a catastrophic event in any of these areas would be so high that I gladly continue to pay. We need to encourage the insurance industry (they are already working on this issue, as they stand to lose a hell of a lot of money unless they can get their actuarial tables adjusted) to start communicating their sense of the likely future costs of climate change, and the costs associated with potential state changes – and to do so in the same way that they sell us insurance policies. Why do we have scientists working on the marketing of our ideas? We are not trained for this, and most of my colleagues lack the salesman’s charisma that the climate change issue so desperately needs.
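
As a back-of-the-envelope illustration of why the insurance framing communicates so well, here is a tiny, hypothetical calculation (the probabilities and dollar figures are invented) of how a shift in event probability translates into expected annual cost – the quantity a premium is ultimately priced on.

```python
# Hypothetical numbers only: this is the expected-value arithmetic behind a
# premium, not an actuarial model.

def expected_annual_loss(event_probability, loss_if_event_occurs):
    """Expected yearly cost of a hazard: probability of the event times its cost."""
    return event_probability * loss_if_event_occurs

home_value_at_risk = 250_000  # assumed loss from, say, a severe flood

baseline = expected_annual_loss(0.01, home_value_at_risk)  # a 1-in-100-year event
shifted = expected_annual_loss(0.04, home_value_at_risk)   # the same event at 1-in-25-year odds

print(f"baseline expected annual loss: ${baseline:,.0f}")  # $2,500
print(f"shifted expected annual loss:  ${shifted:,.0f}")   # $10,000
# A fourfold change in probability is abstract; a fourfold change in the number
# a premium is built on is something everyone can read.
```

The arithmetic is trivial on purpose – the value insurers bring is that they already have the language, the pricing machinery, and the public’s trust to deliver numbers like these.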

It’s time for a serious conversation about how science and the for-profit risk management world can start working together to better translate likely future climate impacts into likely future costs that everyone can understand. Science simply does not carry the weight we need in policy circles – the good data and rigorous analysis that are central to scientific legitimacy are, in the policy realm, seen simply as means of advancing a particular viewpoint, not as an ever-improving approximation of how the world works. Until the climate science (and social science) community grasps this, I fear we will continue to talk past far too many people – and if we allow this to happen, we become part of the problem.