Entries tagged with “peer review”.


Nick Kristof’s piece decrying the distance between academia and the rest of society has, predictably, triggered a screaming firestorm in academia. That’s what you get when you poke the (over)educated, seriously literate beast. A lot of the criticism is very well written and thought out (outstanding examples here and here). But I fear that Kristof’s central message, that society needs a more engaged academia, is getting lost here. My main problem was not that Kristof was arguing for a more engaged academy, but that his prescriptions for how to bring about that engagement did not address the real incentives and barriers that academics negotiate when they try to engage with public debate.

So, in the interest of constructive criticism, I have some suggestions for things that Mr. Kristof might consider looking into – shining a light on these challenges would highlight the real, and often absurdly unnecessary, barriers between the academy and society. This is obviously just a tiny sample of potential topics, drawn from my own experiences in a top-tier department at a large, Research-1 state institution.

  1. Examine the system by which departments are “ranked” in the United States: The National Research Council (NRC) ranks departments at (not so) regular intervals, creating a sort of BCS ranking of departments, with about the same amount of accuracy and certainty. By and large, academics know these rankings are garbage, but administrations love to trot them out to demonstrate the excellence of their institution, and therefore to justify the institutional budget/tuition/etc. But here’s a fun fact: if you dig into what counts in the rankings, you can quickly see why university administrations don’t necessarily care for academic outreach. For example, did you know that authoring an NRC report (which is seriously prestigious) DOES NOT COUNT AS A MEASURABLE PUBLICATION IN THE NRC RANKINGS? I know this because my department ran into this problem the last time around, with at least three members of our faculty losing multiple publications because the NRC did not count ITS OWN PUBLICATIONS. And if the NRC would not count its own reports, you can be sure that reports of all other kinds, in all other contexts, were excluded as well. So if administrations love rankings, and rankings hate outreach, you’re not going to get much outreach.
  2. Consider how academic evaluation’s over-focus on the number of articles produced creates less interesting, more arcane academic outputs: The production of knowledge in academia has, for some time, been driven by expectations of ever-greater output (as measured in research dollars and publications) with less input (fewer faculty members). These expectations govern everything from the evaluation of departments to individual tenure decisions. As a result, the publication requirements for tenure have become ever more challenging, with expectations for the number of publications rising so steeply that many who recently earned tenure may have published more articles than their very senior colleagues needed to become full professors even two decades ago. This is driven by everything from departmental-level politics to the NRC rankings themselves, though I suspect a strong trickle-down effect here. In any case, this has created a crisis of knowledge production in which professors are incentivized to produce what my colleague Carl Dahlman once called the minimum publishable unit (MPU). Because expectations of performance are more and more heavily based on quantitative output (thanks, NRC!), as opposed to the quality of that output, it makes sense for faculty to shy away from “big question” articles that might chew up a lot of their data and interesting ideas, and instead package that same set of ideas as two or three smaller, much more arcane publications. This is a very real pressure: when I put out my retheorization of livelihoods approaches a year ago, more than one colleague suggested that I would have been better off cutting its 15,000 words into two 8,500-word pieces, as they would have counted for more in my annual evaluation. Nothing has driven us toward a proliferation of small, specialized journals carrying tiny, arcane articles quite like this drive for quantification and greater production. Undoing this really awful trend would help a lot, as academics would be freed up to think big thoughts again, both in journals and in other fora. One way to help: publicize the alt-metrics movement (start at the LSE Impact Blog and work from there), which attempts to move beyond a system of academic assessment that reflects a long-dead era of publication and communication.
  3. Focus on how for-profit academic publishers wall off knowledge from the public: Academics must publish to survive professionally, and the best journals in nearly every field are among the last profitable properties for a number of publishing houses. These publishers benefit from free labor on the part of authors and reviewers, the nearly-free labor of editors, and often the subsidy of taxpayer-funded research, yet charge exorbitant amounts for subscriptions to their journals – in the case of public universities, bleeding the taxpayer once again. Academics are absolutely responsible for this situation – after all, we collectively define what the good journals are, and as I’ve argued before, we could change our minds if we wanted to. But academia takes time to change, and could use a push. Where is the push from the federal government to demand that the results of taxpayer-funded research be made available to the taxpayers immediately? What happened to the initial push from the Obama White House on this issue? It seems a topic ripe for a good investigative journalist.

And, for good measure, an interesting trend that will likely lead to a more engaged academia:

  1. The shift in acceptable academic funding: Until very recently, academic grants from traditional agencies like the National Science Foundation or the National Institutes of Health were given exalted status, with all other forms of funding occupying lesser rungs on the great chain of funding. Thus, to get tenure, many (biophysical science/social science) academics really had to land one of these grants. The programs associated with these grants very often rewarded pure research and actively discouraged “applied” work, and even today the NSF’s requirements for “impact” are fairly superficial. Contracts were very second-tier, and often not taken seriously in one’s academic review. Now, thanks to funding crunches in both universities and the funding agencies, any research-looking dollars have started looking good to university administrations, and contracts are more and more being evaluated alongside more traditional academic grants. There is a tremendous opportunity here to engage academia through this mechanism. [Full disclosure: I’ve been funded in the past by NSF and by the National Geographic Society, but today roughly 90% of my funding comes directly or indirectly from development donors like USAID, in the form of contracts or grants.]

This is hardly a comprehensive list of things on which a serious journalist could shed light, and perhaps help leverage change. I’m just typing quickly here. If you have other ideas for things that journalists should be examining, please leave them in the comments or email them to me: ed at edwardrcarr.com. I will append them to this post as they come in, attributing them (or not, depending on the wishes of contributors) in the post.

Edit 17 February: If you want to move beyond criticism (and snark), join me in thinking about things that Mr. Kristof should look into/write about if he really wants a more engaged academia here.

In his Saturday column, Nick Kristof joins a long line of people, academics and otherwise, who decry the distance between academia and society. While I greatly appreciate his call to engage more with society and its questions (something I think I embody in my own career), I found his column to be riddled with so many misunderstandings/misrepresentations of academia that, in the end, he contributes nothing to the conversation.

What issues, you ask?

1) He misdiagnoses the problem

If you read the column quickly, it seems that Kristof blames academic culture for the lack of public engagement he decries. This, of course, ignores the real problem, which is more accurately diagnosed by Will McCants’s (oddly marginalized) quotes in the column. Sure, there are academics out there with no interest in public engagement. And that is fine, by the way – people can make their own choices about what they do and why. But to suggest that all of academia is governed by a culture that rejects public engagement deeply misrepresents the problem. The problem is the academic rewards system, which currently gives us job security and rewards for publishing in academic journals, and nearly nothing for public outreach. To quote McCants:

If the sine qua non for academic success is peer-reviewed publications, then academics who ‘waste their time’ writing for the masses will be penalized.

This is not a problem of academic culture, this is a problem of university management – administrations decide who gets tenure, and by what standard. If university administrations decided to halve the number of articles required for tenure, and replaced that academic production with a demand that professors write a certain number of op-eds, run blogs with a certain number of monthly visitors, or participate in policy development processes, I assure you the world would be overrun with academic engagement. So if you want more engagement, go holler at some university presidents and provosts, and lay off the assistant professors.

2) Kristof takes aim at academic prose – but not really:

 …academics seeking tenure must encode their insights into turgid prose.

Well, yes. There is a lot of horrific prose in academia – but Kristof seems to suggest that crap writing is a requirement of academic work. It is not – I guarantee you that the best writers are generally cited a lot more than the worst. So Kristof has unfairly demonized academia as willfully holding the public at bay with its crappy writing, which completely misdiagnoses the problem. The problems are that the vast majority of academics aren’t trained in writing (beyond a freshman composition course), that there is no money in academia for the editorial staff professional writers (and columnists) rely on to clean up their own turgid prose, and the really simple fact that we all tend to write like what we read. Because academic prose is mostly terrible, people who read it tend to write terrible prose. This is why I am always reading short fiction (Pushcart Prize, Best American Short Stories, etc.) alongside my work reading…

If you want better academic prose, budget for the same editorial support that, say, the New York Times or the New Yorker provides for its writers. I assure you, academic writing would be fantastic almost immediately.

Side note: Kristof implicitly sets academic writing against all other sources of writing, which leads me to wonder if he’s ever read a policy document. I helped author one, and I read many, while at USAID. The prose was generally horrific…

3) His implicit prescription for more engaged writing is a disaster

Kristof notes that “In the late 1930s and early 1940s, one-fifth of articles in The American Political Science Review focused on policy prescriptions; at last count, the share was down to 0.3 percent.” In short, he sees engagement as prescription. Which is exactly the wrong way to go about it. I have served as a policy advisor to a political appointee. I can assure you that handing a political appointee a prescription is no guarantee they will adopt it. Indeed, I think they are probably less likely to adopt it because it isn’t their idea. Policy prescriptions preclude the policymaker’s ownership of the conclusions and the needed responses. Better to lay out clear evidence for the causes of particular challenges, or the impacts of different decisions. Does academia do enough of this? Probably not. But for heaven’s sake, don’t start writing prescriptive pieces. All that will do is perpetuate our marginality through other means.

4) He confuses causes and effects in his argument that political diversity produces greater societal impact.

Arguing that the greater public engagement of economists is about their political diversity requires ignoring most of the 20th-century history of thought within which disciplines took shape. Just as geography became a massive discipline in England and other countries with large colonial holdings because of the ways that discipline fit into national needs, so economics became massive here in the US in response to various needs at different times, needs that the discipline captured (for better or for worse). I would argue that the political diversity in economics is a product of its engagement with the political sphere, as people realized that economic thought could shift/drive political agendas…not the other way around.

5) There is a large movement underway in academia to rethink “impact”.

There is too much under this heading to cover in a single post. But go visit the LSE Impact Blog to see the diversity of efforts to measure academic impact currently in play – everything from rethinking traditional journal metrics to looking at professors’ reach on Twitter. Mr. Kristof is about 4 years late to this argument.

In short, Kristof has recognized a problem that has been discussed…forever, by an awful lot of people. But he clearly has no idea where the problem comes from, and therefore offers nothing of use when it comes to solutions. All this column does is perpetuate several misunderstandings of academia that have contributed to its marginalization – which seems to be the opposite of the column’s intent.

A recent article in the Chronicle of Higher Education notes that Elsevier, the Dutch academic publishing giant, has started issuing takedown notices to Academia.edu, a social-networking website for academics where many members post .pdf versions of their work for sharing. In fact, I received a notification from Academia.edu yesterday that one of my posted articles had received a takedown notice from Elsevier – it is a piece I am the fourth author on, but I still like the piece and find myself greatly annoyed that this happened. On the other hand, it was sort of inevitable – I’ve published a good bit, and a lot of my stuff is available in various forms in various locations, so sooner or later one of those repositories was going to receive a takedown notice.

The Chronicle article is fine – basically, a rehash of the ongoing debate about academic publishing, profit models, and the rights of researchers to disseminate their research findings. But the comments section of the piece is a microcosm of why this debate persists – basically, the commenters sit on two sides: “information should be free and accessible” versus “if you don’t like it, stop signing contracts/publishing with journals that restrict your rights as an author.” This is not helpful – most academics want their work to be free, and we are not idiots when it comes to the contracts we sign when we publish. We sign them BECAUSE WE HAVE TO.

For those who are not academics, let me walk you through the problem. For academics in research-focused universities (and increasingly in teaching-focused institutions), a record of publication is our legitimacy, our standing in our discipline, our leverage for higher salaries or new jobs. And while the pervasiveness of electronic resources and networks has started to change the publishing landscape, as of now there still exists a hierarchy of journals in each discipline. And for most of us, that hierarchy matters – you simply must publish at least some pieces in the top tier of journals if you are to be tenured and promoted, and if you are to be taken seriously within your discipline. This is institutional reality. And guess who controls nearly all of those journals? For-profit academic publishers like Elsevier.

Let me lay this out in a simple scenario: You are a tenure-track assistant professor, and after a few years of research, data analysis, and writing, you’ve finally gotten a manuscript accepted by one of the very top journals in your field. You NEED this publication to ensure that your tenure file, which will go into review in the coming year, will be reviewed positively. Soon after your notification of the article’s acceptance, you receive the publishing contract from Elsevier/Springer/whoever and it says the usual restrictive things about not posting your own work. You hate this, as it means that those without access to academic libraries and interlibrary loan will likely have to pay $30 or more to access your article – in other words, nobody outside of academia will access or read your work. But if you refuse to sign, the publisher will not publish your manuscript. Here is your dilemma: at this point, do you withdraw the manuscript and send it to a new journal with more liberal author rights? If you do, you are certainly sending it to a lower-ranked journal, and you will have to go through peer review all over again, ensuring that the manuscript will not be accepted or published by the time your tenure file is submitted…which will really hurt your tenure case. Or do you sign the stupid contract because you absolutely must have this publication?

I think everyone reading this knows which way this decision is going to go. So do the publishers. This is why the model persists, people – not because academics are stupid, but because we are trapped in an institutional model that gives us very few degrees of freedom on this issue. It’s also not because academics are greedy. Note that I never talked about money, because academics DO NOT GET PAID FOR THESE ARTICLES. At all.

This is why I argued that a real change in this model will require disciplinary reorientations/reorganizations that recognize a whole new set of publishers/journals as legitimate/important outlets. It is the only way academia can really undermine the for-profit academic publishers and end the practice of restrictive publishing and dissemination contracts, as it would make the boycott/avoidance of such publishers a real possibility within the institutional realities of academia today.

Until disciplines, or at the very least particular institutions that are seen as academic leaders, start to recognize alternative journals or means of publication as legitimate outputs that will facilitate a path to tenure and promotion, we will be having the same conversations about academic publishing. Of course, there is one other possible lever that I have raised before – the White House could issue an executive directive on federally-funded research such that copyright does not attach to its published results. Federal employees currently publish in academic journals without transferring copyright (in these situations, there is no copyright to transfer to a journal), so there is a model in place for this. In the end, this makes a great deal of sense no matter how you feel about academic publishing, as these publications represent findings that were obtained via the expenditure of public money, so allowing private profit from such “public goods” is pretty perverse.

The White House appears to have considered this, but there has been little recent noise on this front – perhaps because of major exertions by the publishers to neuter this effort. My guess is that the decision-makers in the White House don’t really understand academic publishing and the institutional structures that maintain it (as opposed to OSTP, which is staffed with people who do, but serve in a mostly advisory capacity to the decision-makers). If they did, they would realize that most arguments for the persistence of exclusive publishing rights with for-profit academic publishers in the era of the Internet make no sense at all. It’s harder for an industry lobby to win an argument when those they are lobbying actually understand the rules of the game…

So, climate change and conflict is back in the media, seemingly with the strength of science behind it.  I’ve been a rather direct, harsh critic of some work on this connection before, at least in part because I am deeply concerned that work on this subject (which remains preliminary) might disproportionately influence policy decisions in unproductive or even problematic directions (i.e. by contributing to the unnecessary militarization of development aid and humanitarian assistance).  So, when CNN, the Guardian, and other media outlets jumped on a new paper in Science (sorry, paywalled) last week, and one of the authors was responsible for the paper I critiqued so harshly before, I felt compelled to read it – especially after seeing Keith Kloor’s great post on the issue. After reading it, I feel compelled to comment on it.

My response is lengthy, so for those on a time budget, I offer some takeaway points. The main post, with details, follows.

Takeaway points

  • The Hsiang et al. paper in Science makes claims that are much more nuanced than what is represented either in the press releases from Princeton and Berkeley, or in many of the media stories (especially those from the big outlets) about it.
    • The actual findings of the paper simply reiterate long-held understandings of the connection between climate change and conflict
    • These findings are, in summary:
      • The climate affects many arenas, including food supplies, markets, and employment. The climate affects each of these in different ways in different places.
      • Climate-related changes in one or more of those arenas can (but do not always) affect rates of conflict
      • Even when climate-related changes to these arenas do provoke conflict, the provocation can occur in any number of locally-specific ways
      • Therefore, all we can really say is that climate change might affect rates of conflict in different ways in different places in the future
    • We already knew all of this
      • The authors’ claim (as stated in this press release from Princeton) that this study was necessary to establish a causal relationship between changing climate conditions and conflict is based on a straw man of “people” who have been skeptical of “an individual study here or there.”
      • Much of the literature, and those working on this issue, have long accepted the idea of a complex link between changing climate/weather conditions and conflict. The real question is that of how climate variability and change contribute to rates of conflict.
      • The paper does not answer this question
  • The quantification of increased risk of conflict in the paper is problematic, as the authors appear to assume a constant relationship, year-to-year or season-to-season, between climate conditions and their influence on various drivers of conflict.
    • This assumption has long been discarded in studies of food security and famine
    • This assumption likely introduces significant margins of error to the findings of this paper regarding increased risk of conflict associated with climate change
  • The paper does not address the real research frontier in the study of conflict and climate change because it does not further our understanding of how climate variability and change result in increased risk of conflict
    • To the authors’ credit, the paper does not purport to explain how observed climate variability and change are translated into conflict
    • The paper merely summarizes existing literature exploring this issue
    • The findings of the paper do not present an opportunity to adjust policy, programs, or diplomacy to avoid future conflicts, as they do not identify specific issues that should be addressed by such efforts.
    • To some extent, this makes the critique under #2 above irrelevant – the “risk of conflict” figures were never actionable anyway
  • Media coverage of this paper amounts to much ado about nothing new

Main Post

The Hsiang et al. paper bears little resemblance to the media stories written about it. It makes very measured, fairly contained claims about climate change and conflict that, if represented accurately in the media, probably would not have made for interesting stories. That said, the article deserves critical attention on its own terms so we can understand what, if any, new information is here.

First, I want to start with the good in this paper. This is a substantially more careful paper than the one I critiqued before, both with regard to its attention to existing work on the subject and to the claims it makes about the connections between climate change and conflict. The authors deserve credit for noting the long history of qualitative work on conflict and the environment, a literature often ignored by those conducting large, more quantitative studies. They also should be commended for their caution in identifying causal relationships, instead of basic correlations.

In my opinion, this much more measured approach to thinking about climate change and conflict has resulted in more nuanced claims. First, as the authors note:

“Social conflicts at all scales and levels of organization appear susceptible to climatic influence, and multiple dimensions of the climate system are capable of influencing these various outcomes.”

But later in the paper, the authors temper this point:

“However, it is not true that all types of climatic events influence all forms of human conflict or that climatic conditions are the sole determinant of human conflict. The influence of climate is detectable across contexts, but we strongly emphasize that it is only one of many factors that contribute to conflict.”

And in the end, the big summary (my emphasis):

“The above evidence makes a prima facie case that future anthropogenic climate change could worsen conflict outcomes across the globe in comparison to a future with no climatic changes, given the large expected increase in global surface temperatures and the likely increase in variability of precipitation across many regions over coming decades”

Every bit of this is fine with me. Indeed, had the reporting on this paper been as nuanced as the claims it actually makes…there probably wouldn’t have been any reporting on the paper. The hook “the climate affects a lot of things, and some of those things could affect rates of conflict, so climate change might affect rates of conflict in different ways in different places in the future” isn’t exactly exciting.

And this is where I have to critique the article. My critique has two sides, one intellectual and one from a policy perspective. They are closely linked and blend into one another, and so I present them both below.

Intellectually, I fundamentally question the contribution of this paper. In a nutshell, there is almost nothing new here. Yes, there appear to be some new quantifications of the risk of conflict under different climate situations, and I will return to those in a minute. But overall, the claims made in this paper are exactly the claims that have been made by many others, in many other venues, for a while. For example, the Office of Conflict Management and Mitigation at USAID put out a report back in 2009 (yes, four years ago) that reviewed the existing literature on the subject and came to more or less the same conclusions as this “new” article. So I was a little bothered by the Princeton press release for this paper, which quoted lead author Solomon Hsiang several times, because I think his justification for the paper is based on a straw man:

“We think that by collecting all the research together now, we’re pretty clearly establishing that there is a causal relationship between the climate and human conflict,” Hsiang said. “People have been skeptical up to now of an individual study here or there. But considering the body of work together, we can now show that these patterns are extremely general. It’s more of the rule than the exception.”

I’d love to know who the “people” are who think there is no relationship between climate conditions and human conflict. Critiques of the study of this connection (at least credible critiques) have argued not that there is no connection, but that the connections are very complex and not well-captured in large-scale studies using quantitative tools. So, when Hsiang goes on to say:

“Whether there is a relationship between climate and conflict is not the question anymore. We now want to understand what’s causing it,” Hsiang said. “Once we understand what causes this correlation we can think about designing effective policies or institutions to manage or interrupt the link between climate and conflict.”

…he’s really making a rather grand claim for an article that just tells us what we already knew – that there is a connection between climate conditions and human conflict. And he is burying the real lede here…that the contribution we need, now, is to understand how these causal relationships come to be. This argument for “where we should go next” is also a bit grand, seeing as everyone from academics to USAID’s Office of Conflict Management and Mitigation has been conducting detailed, qualitative studies of these relationships for some time now, because we already knew a) that there were relationships between climate and conflict and b) that we needed to establish what caused those relationships.

Second, I feel this article suffers from a critical methodological flaw: the authors never address the variable coupling of climate outcomes and changes in even those drivers of conflict identified in the literature. For example, it is not at all uncommon to have market shifts take place seasonally, in a manner that can be either coupled or uncoupled with shifts in climate: sometimes a bad rainy season damages local harvests and drives market prices for food up, while other times a great rainy season produces a very productive harvest, yet factors on regional or global markets still generate price spikes that end up limiting people’s access to food. In the first situation, the people in question would experience a food stress closely linked to climate variability; in the second, a food stress uncoupled from climate. This is why, as I argued back during the Horn of Africa famine, drought does not equal famine. Famines are far more highly correlated with market conditions than with climate conditions. Sometimes climate events like a failed rainy season can trigger a famine by pushing markets and other factors over key thresholds. However, we’ve also had famines in times of normal or even favorable climatic conditions for agriculture.

Simply put, the authors appear to assume a constant relationship between a conflict driver like access to food and the local/regional/global climate. To be fair, this seems to be a pretty prevalent assumption in the literature. But to the point, this is a bad bet. As best I can tell, the authors have not managed to address the intermittent coupling of conflict drivers like access to food and markets with climatic conditions in their analysis. This, to me, casts significant doubt on their finding that the risk of intergroup conflict will rise 14% at one standard deviation of temperature rise – in short, this is far too precise a claim for a study with such large margins of error built into its design. My suspicion here is that the margin of error introduced by this problem is probably larger than their analytical findings, rendering them somewhere between weak and meaningless. And this, to be honest, was the only really original contribution in the paper.
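To make this concern concrete, here is a minimal Monte Carlo sketch – my own illustration, not anything from the paper or its supplementary materials. It assumes a fixed effect of food prices on a conflict-risk index (the 0.14 coefficient is a made-up nod to the 14% figure), but lets prices track the climate only in some years; every coupling probability and noise level below is likewise an illustrative number, not an estimate from the literature.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimated_climate_effect(coupling_prob, n_years=60, n_runs=2000):
    """Regress a conflict-risk index on a climate anomaly when the
    intermediate driver (food prices) tracks climate only in some years.
    All coefficients and noise levels are illustrative, not from the paper."""
    slopes = []
    for _ in range(n_runs):
        climate = rng.normal(0.0, 1.0, n_years)  # standardized temperature anomaly
        # In "coupled" years, prices follow the local climate; in the others,
        # they are set by market forces unrelated to local climate.
        coupled = rng.random(n_years) < coupling_prob
        prices = np.where(coupled,
                          climate + rng.normal(0.0, 0.5, n_years),
                          rng.normal(0.0, 1.0, n_years))
        conflict = 0.14 * prices + rng.normal(0.0, 0.5, n_years)
        slopes.append(np.polyfit(climate, conflict, 1)[0])  # estimated climate effect
    return np.mean(slopes), np.std(slopes)

for p in (1.0, 0.5, 0.25):
    mean, spread = estimated_climate_effect(p)
    print(f"coupling {p:.2f}: estimated effect = {mean:.3f} +/- {spread:.3f}")
```

As the coupling probability falls, the estimated “effect of climate” shrinks while its run-to-run spread stays roughly the same size, so the margin of error quickly rivals the estimate itself. That is the sense in which a headline figure like “14% at one standard deviation” can be stated with far too much confidence if the coupling is implicitly assumed to be constant.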

Third (as I begin to pivot from intellectual to policy critique), while the authors claim to have focused on causal relationships (a claim I think should be tempered by my methodological concerns above), they cannot explain those relationships. I’ve made this point before: in the social sciences, causality is not explanation. Even if we accept that the authors have indeed established causal relationships between climate variability and change and the risk of conflict/rates of conflict, they do not know exactly how these changes in climate actually create these outcomes. This is clear in the section of the paper titled “Plausible Mechanisms”, in which the authors conduct a review of the existing literature (much of which is qualitative) to lay out a set of potential pathways by which their observed relationships might be explained. But nothing in this study allows the authors to choose between any of these explanations…which means that all the authors have really accomplished here is to establish, by different means, exactly what the qualitative literature has known for a long time. To repeat:

  1. The climate affects many arenas, including food supplies, markets, and employment. The climate affects each of these in different ways in different places.
  2. Climate-related changes in one or more of those arenas can (but do not always) affect rates of conflict
  3. Even when climate-related changes to these arenas do provoke conflict, the provocation can occur in any number of locally-specific ways
  4. Therefore, all we can really say is that climate change might affect rates of conflict in different ways in different places in the future

We already knew all of this.

At this point, allow me to pivot fully to my fourth critique, which comes from a policy perspective. People tend to see me as an academic, and forget that I served as the first climate change coordinator for the Bureau for Democracy, Conflict, and Humanitarian Assistance (DCHA) at USAID. I was Nancy Lindborg’s first climate advisor – indeed, it was in this role that I found myself first dealing with issues of conflict and climate change, as I was responsible both for briefing my Bureau’s leadership on these issues and for guiding the programming of the Bureau’s dedicated climate change budget (some of which I directed into more research on this topic). In short, I do know something about policymaking and the policy environment. And what I know is this: this paper gives us nothing actionable to address. Even if I accept the finding of 14% greater risk of intergroup conflict at one standard deviation of temperature increase, what am I supposed to do about it? Without an explanation for how this temperature rise produces this greater risk, I have no means of targeting programs, diplomacy, or other resources to address the things that create this greater risk. In short, this paper tells me what I already knew (that climate variability and change can contribute to conflict risk) without giving me anything concrete I can work on. If I were still briefing Nancy, my summary of this paper would be:

  1. There is nothing new in this paper. Its key findings are those of CMM’s (four-year-old) report, and are already well-established in the literature
  2. The paper does not provide any new information about how climate change and variability might contribute to increased conflict risk, and therefore presents nothing new that might serve to guide future policy, programs, or diplomacy
  3. I have methodological concerns with the paper that lead me to believe that the rates of increased risk of conflict reported in this paper are likely stated with too much confidence. These rates of heightened risk should not be cited until put under significant scrutiny by the academic and policy community*.

In summary, the supportable parts of this paper are nothing new – it is a reasonable summary of the issues with establishing a connection between climate change and conflict, and a decent (if truncated) review of the existing literature on the subject (I’d suggest that a real review article of this subject would have to go wider and look at the conflict and environment literature more broadly). But it doesn’t say anything new that really bears up to scrutiny, and even if the “risk of conflict” figures are correct, the paper provides no information that might guide policy, programs, or diplomacy in a manner that could avoid such conflicts. For that information, we have to return to the qualitative research community, which has long espoused the same general findings as those in this paper.

The press releases from Princeton and Berkeley, and the more hyped of the media coverage we’ve seen around this paper (likely driven by those press releases), are much ado about nothing new.

*In my third point I am indeed taking issue with the peer review process that brought this paper to publication. I believe that Science wanted this paper for the same reason Nature wanted the last one: headlines. Let’s see how the findings here stand up to serious scrutiny.

A great deal has been written about the tragic death of Aaron Swartz, so much that I considered remaining a reader and observer without offering comment.  But the Swartz case has me thinking again about access to academic research. Not one academic author of those articles was negatively impacted by Swartz’s act (downloading millions of scholarly articles from JSTOR with the intent of posting them online for free) – the more easily accessible the article, the more likely it is to be read and cited…and that is why we write articles.  It seems to me that most people don’t understand the fundamental absurdity of copyright in academic publishing.

I quote from one transfer-of-copyright document I recently had to sign:

In order to ensure both the widest dissemination and protection of material published in our journal, we ask Authors to transfer to [Journal Name] the rights of copyright in the articles they contribute. This enables our publisher, on behalf of [Journal Name] to ensure protection against infringement.

The whole point of publication is to get people to read and use my ideas – the very idea of infringement is pretty vague here. I do not receive a cent for any academic article I publish, so infringement won’t affect my income. Anyone who plagiarizes me and gets caught will lose his or her career – I don’t need copyright for that. So there is no reason for me to sign this document. But what the document leaves vague is the fact that this is not a voluntary transfer – the journal will not publish an article without such an agreement, and without publications the typical academic will have a pretty short career. In short, the average academic is forced to sign away their rights to their work if they want to have a career (no publications means no tenure). I don’t care about my rights, honestly, except that my work then ends up behind a paywall, downloadable at $30 a pop, where nobody who needs to access it (i.e. colleagues in the Global South, or even colleagues at most development donors) can do so. Somebody is making a lot of money off my work and the work of my colleagues (see this article too), but it isn’t me.

However, there does seem to be an out here, at least for employees of state institutions, or those whose research is funded under a federal contract. From the same agreement I just quoted:

I hereby assign to [Journal Name] the copyright in the above specified manuscript (government authors not transferring copyright hereby assign a non-exclusive license to publish)… [my emphasis]

While I am sure this is not how it was intended when written (it is a clause to allow federal employees to publish publicly-funded research), I wonder if those of us employed by a public entity, whether directly or under a contract, can invoke that status to shift our copyright transfers into “non-exclusive licenses to publish.” This would remove the copyright infringement argument used against Swartz, thus making it easier to pull articles from behind paywalls into the public sphere. In short, we need to stop transferring copyright to for-profit entities any way we can…but this needs to happen in a manner that doesn’t blow up everyone’s careers. Until the senior faculty in each discipline decide to intervene and shift emphasis to low-cost, open-access journals, this could be a useful first step. And low cost can be done – see Simon Batterbury’s comment about the Journal of Political Ecology on the post in the last hyperlink.

In short, academics need to step up and start resisting an academic publishing machine that makes serious money off of our job requirements, but provides little in return.  If we do so, perhaps we won’t need folks like Aaron Swartz to liberate our work – we can do it ourselves.

Vincent Calcagno has a fascinating piece up at the LSE Impact Blog, in which he looks at the review and publication histories of an absolute pile of articles. There is a whole set of interesting findings there that is well worth the read. For example:

But, surprisingly, we found that about 75 per cent of all articles were declared to have been submitted to the publishing journal on first intention. Even assuming that, for some reason, authors were less likely to respond in the case of a resubmission, we still find that a majority of published articles are first-intent submissions. This suggests that authors are, overall, quite apt at targeting a proper journal and, conversely, that journals make sure they have a sufficient public: no journal was found to be entirely dependent on resubmissions from others.

However, the finding I found most interesting was this:

in a given journal and a given year, an article that had been resubmitted from another journal was on average more cited than a first-intent submission. Resubmissions were less likely to receive zero or one citation (about 15 per cent less, controlling for publication year and journal) and more likely to receive several (e.g. 10 and 50) citations, shifting the mean to higher values. This intriguing result suggests a “benefit of rejection”. The simplest explanation would be that the review process and the greater amount of time spent working on resubmitted manuscripts does improve them and makes them more cited, although other mechanisms could be invoked.

I wonder, though, if there is another factor that should be considered. Peer review is inherently conservative – there is a lot of thought policing that goes on through this process (I’ve gone on about this before, here and here). I wonder how many of the “resubmissions” were rejected not because of insufficient quality, but because they were doing interesting work that threatened one or more reviewers. This makes sense, as new and edgier work will eventually get cited more than middle-of-the-road replication of old results – at least, that has been my experience. So perhaps Calcagno has given us empirical evidence for the intellectual policing function of peer review.

So, a while back I decided to talk about how I negotiate peer review, semi-liveblogging my response to a revise and resubmit request from a pretty big development journal (see part 1, part 2 and part 3).  Well, I now have a response to my resubmission . . .

No.  To quote: “after much deliberation, the editors have reached a rather difficult decision. [The editors] feel that they cannot accept your revised paper.”

Yep, I have gone from revise and resubmit to outright reject.  This is . . . unusual, to be honest. More unusual, however, is the rationale for the rejection.  To quote from the decision:

What makes this difficult is that [the editors] recognize that you have in fact taken account of what the referees said, and have tried to accommodate their comments, but the editors feel that what has emerged from the revision process is not an appropriate paper for Development and Change.

Translation: you did what we asked, and addressed the referee comments, but in doing so you ended up with a paper that we think belongs at another journal.  Well, fair enough, this happens.  But why it is not appropriate is a little odd:

While they still believe that there is an interesting idea at the core of your paper, they don’t feel that the revisions have solved the initial problems, and they are not convinced that further rounds of revision would be any more successful. The intended contribution of the paper appears to be theoretical, but the paper hasn’t managed to work out that contribution in a way that will be accessible / comprehensible to our readers.

Soooo . . . I have an interesting paper, but the editors more or less think their readership can’t deal with the complexity of the argument. [Note: I am disregarding the assessment that my revisions have not solved the initial problems, since they already have said that I took account of the referees’ issues – this is a contradiction I am just going to leave aside. That, and they did not show me any reviewer comments, so I have no idea what I did not resolve.] One of my colleagues has called this the oddest rejection he has ever seen.

Now, I want to be clear – the folks at the journal with whom I interacted throughout this process were very responsive and polite, and were kind even in their rejection (they were quite apologetic, actually).  I would submit to this journal again, though I admit to wondering exactly what aspect of my work might fit here, as I am confused by what they believe the capacity of their readers to be.

This, folks, is the nature of peer review – sometimes, you just have no idea what happened.  I am not privy to the internal conversations of the editorial board, and will not pretend to know exactly what happened here.  What makes this hard is that I did not receive any substantive comments on this second round of review, so I have no guidance at all on edits.  I am rereading the paper, adding a citation I had missed earlier, and making minor tweaks to the argument (the article I missed before actually strengthens the case for what I am doing in the manuscript).  I’ve sent it off to a trusted senior colleague to have a look, and to see where he thinks it might go next.  I will probably sound out the next editor in advance, just to make sure that s/he thinks the paper is appropriate before starting a long review process again . . .

Two years and counting, folks, since my initial submission.

Any editors out there interested?  Anybody?

Following on my previous post, another thought that springs from personal experience and its convergence with someone’s research. If you look at my Google Scholar profile, you will note that in 2011 my citation counts exploded (by social science standards, mind you – in the qualitative social sciences an article with 50 citations or more is pretty huge). Now, part of this is probably a product of my academic maturation – a number of articles now getting attention have been around for 3-4 years, which is about how long it takes for things to work their way into the literature. However, I’ve also seen a surge in a few older pieces that had previously plateaued in terms of citations. This can’t be attributed to a new surge in interest in a particular topic, as these articles cross a range of development issues. However, they all seem to be surging since I got on Twitter and joined the blogosphere. Basically, it seems a new circle (circles?) of interested folks now has access to my work and ideas, and the result is that my work is finding its way into a new set of venues/disciplines that it might otherwise not have reached. It is hard to be sure about this, as my 18 months on the blog and year on Twitter are just at the edges of how long it takes to get an article written, submitted, accepted and published, but clearly something is happening here . . .

This seems to be borne out by some work done by Gunther Eysenbach examining the relationship between tweets (references to a paper on Twitter) and the level of citation that paper eventually enjoyed. Eysenbach found that “highly tweeted” papers tended to become highly cited papers, though the study was quite preliminary (h/t to Martin Fenner at Gobbledygook. You can find links to Eysenbach’s paper and Martin’s thoughts on it here). This makes sense to me – but it requires a bit more study. I like what Fenner and his colleagues are trying to do now, capturing the type of reference made in the tweet (supporting/agreeing, discussing, disagreeing, etc.). Frankly, references in general should be subject to such scrutiny. As one of my colleagues once said, if citation counts are all that matter, we should write the worst paper ever on a subject, jam it into some journal that did not know better, publicize it, and wait for the angry negative citations to pile in . . . after all, we just have to count the citations, not admit that we are being cited because people hate us!

The altmetrics movement is starting to take off in academia (see, for example, this very cool discussion). I have not yet seen any discussion, though, of what social media might do to journal prestige. While there will always be flagship journals to which disciplines full of tenure-track faculty will bow, once tenure is achieved this sort of homage becomes less important. Given what I am seeing with regard to my citations right now, my desire to have my work have impact beyond my discipline and the academy, and my concerns about the policing effect of peer review (which emerges most acutely in flagship journals – see my posts here and here), why should I struggle to get my work into a flagship journal when I can get a quick turnaround and publication in a smaller journal, still have the stamp of peer review on the piece, and then promote it via social media to a crowd more than willing to have a look? If I (or anyone else) can drive citations through mild self-promotion via social media, does the journal a piece is published in really matter that much? I wonder what sort of effect this might have on the structure of publishing – will flagship journals have to become more nimble and responsive, or will they soldier on without changes? Will smaller journals sense this opportunity and move into this gap? Will my colleagues embrace the rising influence of social media on academic practice?

Does any of this matter?  Not really.  If the emerging studies on social media and citation are correct, and my trends are sustainable, then one day I will be one of the “important” folks with a lot of citations . . . and I will be training my students to engage in conventional and non-conventional ways.  I will not be the only one.  Those of us who engage with social media, and train our students to do so, will eventually win this race.  Change is coming to academia, but the nature and importance of that change remain up in the air . . .

Been a while . . . been busy.  And yes, I stole that post title from Ralph Nader . . .

As those who follow this blog know, one of my big concerns is with the walls that academia is building around itself through practices like the current incarnation of peer review in specialist journals. It’s not that I have a problem with peer review at all – I think it is an important tool through which we improve and vet academic work. Anything that survives peer review is by and large more reliable than an unvetted website (like this one, for example).

But the practice of peer review in contemporary academia has turned really problematic. Most respected journals are more expensive than ever, making them the near-sole province of academics whose libraries are willing to purchase them. The pressure to publish increases all the time, both in rising demands on individual researchers (my requirements for tenure were much tougher than most requirements from a generation before) and in terms of an ever-expanding academic community. The proliferation of published work that has emerged from these two trends has not really improved the quality of information or the pace of advances – there is still a lot of good work out there, but it is harder and harder to find in an ever-growing pile of average and even not-so-good work. And I have found that peer review often functions as a means of policing new ideas, slowing the flow of innovative work into academia not because that work is unsupported, but because its ideas and findings run contrary to the previously-accepted ideas upon which many reviewers built their own work. This byzantine politics of peer review is not well understood by those outside the academic tent, and does little to improve our public image.

So I am wondering where the tipping point is that might bring about something new. Social media is nice, but it is not peer-reviewed. I tend to think about it as advertising that points me to useful content, but not as content itself (I have a post on this coming next). I still want peer review, or something like it. So, a modest proposal for my senior colleagues in Geography – yes, those of you who are full professors at the top of the profession, who have nothing to lose from a change in the status quo at this point: get together, identify a couple of open-access, very low-cost journals, and more or less pronounce them valid (probably in part by blessing them with a few of your own papers to start). Don’t pick the ones that want to charge $1500 in publishing fees – those are absurd. But pick something different . . .

This, I think, is all it would take to start a real movement in my discipline – admittedly, a small discipline, so maybe easier to move. Just making our publications open to all is a tiny first step, but an important one – once a wider community has access to our ideas, they can respond and prompt us for new ones. Collaborations can emerge that should have emerged long ago. Colleagues (and research subjects) in the Global South will be able to read what is written about their environments, economies and homes, improving our responsiveness to those with whom, and hopefully for whom, we work. First steps can be catalytic . . .

I’ve made a few changes to my personal homepage (www.edwardrcarr.com).  This included cleaning up a few things, adding a few book reviews for Delivering Development, and updating my CVs.  However, today, for the first time since I set my homepage up, I have added a page . . . there is now a page for pre-prints.  I have become thoroughly fed up with the gatekeeping and slow pace of academic publishing – I was annoyed to start with, but after more than a year in an agency, and about 18 months engaged with a much wider environment/development community via the blog and twitter, I have come to realize that academic publishing, for all its rigor and legitimacy, is something of a liability.  There is no way anyone is going to wait around for my work, or anyone else’s work, to wend its way through peer review and the inevitable publication delays before it appears in print.

To address this, I am now posting work that I have submitted for review – it is polished, and sometimes it has seen a round of peer review already (those pieces will be marked revised and resubmitted). However, these are not fully finished, peer-approved works – which means they will likely change a little before they come out in final form. My goal is to make this material available more or less as soon as I submit it. I am open to comments and suggestions – I can still work them in before the final version goes out!

Some of you might wonder how this could affect the idea of double-blind peer review. Well, in my experience, double-blind peer review in development studies – or indeed in any of the qualitative social sciences – is largely a joke. In my field, we tend to invest a lot of time and effort working in a particular place, and so it is very, very easy to figure out who is writing about what. I often know who the author of a piece is as soon as I read the abstract – and there are always enough details in any manuscript to facilitate a quick Google search that will identify the author. Both pieces that I currently have on my website work from material for which I am well-known within my field. For example, just mentioning the villages of Dominase and Ponkrum in Ghana in the livelihoods piece pretty much tells everyone who it is. And the piece on academic engagement with development practice comes directly from a panel at last year’s Association of American Geographers Annual Meeting, which was attended by more than 100 people, as well as an extended listserv exchange in the fall of 2010 that was sent out to several thousand subscribers of various lists. Again, pretty much everyone will be able to figure out who wrote it.

So, the work is now up there for your perusal.  Have a look, and let me know what you think . . .