Entries tagged with “publishing”.

Unsolicited publishing advice/reviewing rant to follow. Brace yourselves.

When writing an article based on the quantitative analysis of a phenomenon, whatever it may be and however novel your analysis, you are not absolved from reading/understanding the conceptual literature (however qualitative) addressing that phenomenon. Sure, you might be using a larger dataset than has ever been used before. Certainly, the previous literature might have been case-study based, and therefore difficult to generalize. But that doesn’t give you a pass to just ignore that existing literature.

  • That literature establishes the meanings of the concepts you are measuring/testing
  • That literature captures the current state of knowledge on those concepts
  • Often, that literature (if qualitative, especially if ethnographic) can get at explanations for the phenomenon that cannot be had through quantitative methods alone

If you ignore this literature:

  • You’ll just ask questions that have already been answered. Everybody hates that, especially time-constrained reviewers who already know the answers to your questions because they actually have read/contributed to the literature you ignored.
  • You’ll likely end up with results that don’t make sense, and with no means of explaining or even addressing them. Editors and reviewers hate that, too.
  • Your results, even if they appear to be statistically significant, will be crap. I don’t care how sophisticated your quantitative analysis is, or how innovative your tools might be, you are shoving crap into a very innovative, sophisticated tool, which means that all you’ll get out the other end is crap. Reviewers hate crap. Editors hate crap. And your crap is probably not actionable (and really shouldn’t be), so nobody outside academia will like your crap.

Please don’t generate more crap. There is plenty around.

Finally, a note on professionalism and your career: citing around the people who have worked on the phenomenon you are investigating because you are trying to capture a particular field of knowledge is awful intellectual practice that, beyond needlessly slowing the pace of innovation in the field in question, will never work…because editors will send your article for review to the very people you are not citing. And they will wreck you.

I’ve been writing here on Open the Echo Chamber since July of 2010. Good lord, that is a long time. I’ve cranked out well over 250,000 words on the site (roughly 30 articles’, or about three books’, worth of writing). And for all of that effort, I have received exactly no credit at all for this in my academic job. In my annual reviews and promotion packets, I can shove this work under “service”, but 1) most of my colleagues probably wouldn’t agree with that categorization and 2) nobody in academia gets much of anything for their service contributions unless they are a full-on administrator. I don’t blog for my academic career; I blog as a means of getting ideas outside the rigidity of the peer-review publishing world, the ways it gates off knowledge from those who might use it, and the ways it can police away innovative new thought that challenges existing powers. So, when I recently stumbled across The Winnower, I got excited. “Publish my posts with review and a DOI?” I thought. “Make my posts citable in major journals and technical reports?” I chortled. “Further blur the lines between my academic publishing and the stuff I do on this blog?” I fairly giggled. Yeah, I need to give this a try.

Let me explain:

According to the lovely people at Google Analytics, in that time nearly 50,000 users have generated over 100,000 pageviews. For a blog that is home to some long, wonky posts, that is pretty amazing. Readership comes from all over the world, with the top 10 countries looking like this:

  1. United States 37,530 (52.18%)
  2. United Kingdom 7,990 (11.11%)
  3. Canada 4,186 (5.82%)
  4. Australia 1,949 (2.71%)
  5. India 1,339 (1.86%)
  6. Germany 1,019 (1.42%)
  7. Netherlands 834 (1.16%)
  8. Kenya 773 (1.07%)
  9. Philippines 722 (1.00%)
  10. France 695 (0.97%)

It is remarkable that Google lists visitors from 192 different countries and territories. And when you drill down to cities, it gets pretty cool as well:

  1. Washington 5,023 (6.98%)
  2. London 3,419 (4.75%)
  3. New York 2,965 (4.12%)
  4. Columbia 2,326 (3.23%)
  5. Irmo 1,273 (1.77%)
  6. Toronto 769 (1.07%)
  7. Seattle 713 (0.99%)
  8. Fonthill 676 (0.94%)
  9. Melbourne 642 (0.89%)
  10. Nairobi 604 (0.84%)
  11. Sydney 532 (0.74%)
  12. Cambridge, MA 517 (0.72%)
  13. Oxford 500 (0.70%)
  14. Ottawa 466 (0.65%)
  15. San Francisco 459 (0.64%)
  16. Arlington 457 (0.64%)
  17. Chicago 429 (0.60%)
  18. Durham 393 (0.55%)
  19. Boston 367 (0.51%)
  20. Montreal 364 (0.51%)

I’ve known who I was reaching for a while – I get informal notes and phone calls from people at various institutions letting me know they liked (mostly) or disliked/had issues with (sometimes) things I have written. Compared to many blogs, I don’t get that many readers. But my readers are my target audience – they are the folks who work in development and climate change. Well, that, and my students here at the University of South Carolina (hence the Columbia and Irmo numbers).

The one big problem for me, and this blog, has been the level of effort it requires, and the ways in which it could (and could not) be used in my primary sectors of employment, academia and development consulting. Though the world is changing fast, the fact is most people still will not take a blog post as seriously as an academic article. That is probably a good thing – there is a lot of crap out on blogs. At the same time, there are really good blogs out there, some of which produce better work/scholarship than you find in the peer-reviewed literature. Finding ways to help people sort out what is good and what is crap, and finding ways to make social media/blog posts viable sources for academic and consulting work, is important to me.

So, starting today, I have linked this blog to The Winnower. When I produce a post with enough intellectual content, I will cross-post it to The Winnower, where it will be subject to a review process, after which it will receive a DOI, making it a real publication in the eyes of many journals and other sources (hell, it will fit under “other academic contributions” on my CV, so there). I’m excited about what The Winnower is trying to do (as you might already know, I find academic publishing structures deeply frustrating: just look here, here, here, and here), and if my work serves to further their mission, and their efforts serve to further blur the lines between the ways in which I disseminate my work, I’m happy to give it a go.

Here is my author page at The Winnower. I’ve currently got six old posts up for review – six posts that were viewed by an average of well over a thousand readers each. So I know you all care about these posts and topics. Go ahead and review them, comment on them, help me make them better…and help The Winnower succeed.

Welcome to the future. Maybe.

A recent article in the Chronicle of Higher Education notes that Elsevier, the Dutch academic publishing giant, has started issuing takedown orders to Academia.edu, a social-networking website for academics where many members post .pdf versions of their work for sharing. In fact, I received a notification from Academia.edu yesterday that one of my posted articles had received a takedown notice from Elsevier – it is a piece I am the fourth author on, but I still like the piece and find myself greatly annoyed this happened. On the other hand, it was sort of inevitable – I’ve published a good bit, and a lot of my stuff is available in various forms in various locations, so sooner or later one of those repositories was going to receive a takedown notice.

The Chronicle article is fine – basically, a rehash of the ongoing debate about academic publishing, profit models, and the rights of researchers to disseminate their research findings. But the comments section of the piece is a microcosm of why this debate persists: the commenters sit on two sides, “information should be free and accessible” versus “if you don’t like it, stop signing contracts/publishing with journals that restrict your rights as an author.” This is not helpful – most academics want their work to be free, and we are not idiots when it comes to the contracts we sign when we publish. We sign them BECAUSE WE HAVE TO.

For those who are not academics, let me walk you through the problem. For academics in research-focused universities (and increasingly in teaching-focused institutions), a record of publication is our legitimacy, our standing in our discipline, our leverage for higher salaries or new jobs. And while the pervasiveness of electronic resources and networks has started to change the publishing landscape, as of now there still exists a hierarchy of journals in each discipline. And for most of us, that hierarchy matters – you simply must publish at least some pieces in the top tier of journals if you are to be tenured and promoted, and if you are to be taken seriously within your discipline. This is institutional reality. And guess who controls nearly all of those journals? For-profit academic publishers like Elsevier.

Let me lay this out in a simple scenario: You are a tenure-track assistant professor, and after a few years of research, data analysis, and writing, you’ve finally gotten a manuscript accepted by one of the very top journals in your field. You NEED this publication to ensure that your tenure file, which will go into review in the coming year, will be reviewed positively. Soon after your notification of the article’s acceptance, you receive the publishing contract from Elsevier/Springer/whoever and it says the usual restrictive things about not posting your own work. You hate this, as it means that those without access to academic libraries and interlibrary loan will likely have to pay $30 or more to access your article – in other words, nobody outside of academia will access or read your work. But if you refuse to sign, the publisher will not publish your manuscript. Here is your dilemma: at this point, do you withdraw the manuscript and send it to a new journal with more liberal author rights? If you do, you are certainly sending it to a lower-ranked journal, and you will have to go through peer review all over again, ensuring that the manuscript will not be accepted or published by the time your tenure file is submitted…which will really hurt your tenure case. Or do you sign the stupid contract because you absolutely must have this publication?

I think everyone reading this knows which way this decision is going to go. So do the publishers. This is why the model persists, people – not because academics are stupid, but because we are trapped in an institutional model that gives us very few degrees of freedom on this issue. It’s also not because academics are greedy. Note that I never talked about money, because academics DO NOT GET PAID FOR THESE ARTICLES. At all.

This is why I argued that a real change in this model will require disciplinary reorientations/reorganizations that recognize a whole new set of publishers/journals as legitimate/important outlets. It is the only way academia can really undermine the for-profit academic publishers and end the practice of restrictive publishing and dissemination contracts, as it would make the boycott/avoidance of such publishers a real possibility within the institutional realities of academia today.

Until disciplines, or at the very least particular institutions that are seen as academic leaders, start to recognize alternative journals or means of publication as legitimate outputs that will facilitate a path to tenure and promotion, we will be having the same conversations about academic publishing. Of course, there is one other possible lever that I have raised before – the White House could issue an executive directive reorganizing federally-funded research such that copyright does not attach. Federal employees currently publish in academic journals without transferring copyright (in these situations, there is no copyright to transfer to a journal), so there is a model in place for this. In the end, this makes a great deal of sense no matter how you feel about academic publishing, as these publications represent findings that were obtained via the expenditure of public money, so allowing private profit from such “public goods” is pretty perverse.

The White House appears to have considered this, but there has been little recent noise on this front – perhaps because of major exertions by the publishers to neuter this effort. My guess is that the decision-makers in the White House don’t really understand academic publishing and the institutional structures that maintain it (as opposed to OSTP, which is staffed with people who do, but serve in a mostly advisory capacity to the decision-makers). If they did, they would realize that most arguments for the persistence of exclusive publishing rights with for-profit academic publishers in the era of the Internet make no sense at all. It’s harder for an industry lobby to win an argument when those they are lobbying actually understand the rules of the game…

A great deal has been written about the tragic death of Aaron Swartz, so much that I considered remaining a reader and observer without offering comment.  But the Swartz case has me thinking again about access to academic research. Not one academic author of those articles was negatively impacted by Swartz’s act (downloading millions of scholarly articles from JSTOR with the intent of posting them online for free) – the more easily accessible the article, the more likely it is to be read and cited…and that is why we write articles.  It seems to me that most people don’t understand the fundamental absurdity of copyright in academic publishing.

I quote from one transfer-of-copyright document I recently had to sign:

In order to ensure both the widest dissemination and protection of material published in our journal, we ask Authors to transfer to [Journal Name] the rights of copyright in the articles they contribute. This enables our publisher, on behalf of [Journal Name] to ensure protection against infringement.

The whole point of publication is to get people to read and use my ideas – the very idea of infringement is pretty vague here.  I do not receive a cent for any academic article I publish, so infringement won’t affect my income. Anyone who plagiarizes me and gets caught will lose his or her career – I don’t need copyright for that. So there is no reason for me to sign this document. But what the document leaves vague is the fact that this is not a voluntary transfer – the journal will not publish an article without such an agreement, and without publications the typical academic will have a pretty short career.  In short, the average academic is forced to sign away their rights to their work if they want to have a career (no publications means no tenure).  I don’t care about my rights, honestly, except that when my work ends up behind a paywall, downloadable at $30 a pop, nobody who needs to access it (i.e. colleagues in the Global South, or even colleagues at most development donors) can access it. Somebody is making a lot of money off of my work and the work of my colleagues (see this article too), but it isn’t me.

However, there does seem to be an out here, at least for employees of state institutions, or those whose research is funded under a federal contract.  From the same agreement I just quoted:

I hereby assign to [Journal Name] the copyright in the above specified manuscript (government authors not transferring copyright hereby assign a non-exclusive license to publish)… [my emphasis]

While I am sure this is not how it was intended when written (it is a clause to allow federal employees to publish publicly-funded research), I wonder if those of us either employed by a public entity, either directly or under a contract, can invoke that status to shift our copyright transfers into “non-exclusive licenses to publish.”  This would remove the copyright infringement argument used against Swartz, thus making it easier to pull articles from behind paywalls into the public sphere.  In short, we need to stop transferring copyright to for-profit entities any way we can…but this needs to happen in a manner that doesn’t blow up everyone’s careers.  Until the senior faculty in each discipline decide to intervene and shift emphasis to low cost, open-access journals, this could be a useful first step.  And low cost can be done – see Simon Batterbury’s comment about the Journal of Political Ecology on the post in the last hyperlink.

In short, academics need to step up and start resisting an academic publishing machine that makes serious money off of our job requirements, but provides little in return.  If we do so, perhaps we won’t need folks like Aaron Swartz to liberate our work – we can do it ourselves.

Vincent Calcagno has a fascinating piece up at the LSE impact blog, in which he looks at the review and publication histories of an absolute pile of articles.  There is a whole set of interesting findings there that is well worth the read.  For example:

But, surprisingly, we found that about 75 per cent of all articles were declared to have been submitted to the publishing journal on first intention. Even assuming that, for some reason, authors were less likely to respond in the case of a resubmission, we still find that a majority of published articles are first-intent submissions. This suggests that authors are, overall, quite apt at targeting a proper journal and, conversely, that journals make sure they have a sufficient public: no journal was found to be entirely dependent on resubmissions from others.

However, the finding I found most interesting was this:

in a given journal and a given year, an article that had been resubmitted from another journal was on average more cited than a first-intent submission. Resubmissions were less likely to receive zero or one citation (about 15 per cent less, controlling for publication year and journal) and more likely to receive several (e.g. 10 and 50) citations, shifting the mean to higher values. This intriguing result suggests a “benefit of rejection”. The simplest explanation would be that the review process and the greater amount of time spent working on resubmitted manuscripts does improve them and makes them more cited, although other mechanisms could be invoked.

I wonder, though, if there is another factor that should be considered.  Peer review is inherently conservative – there is a lot of thought policing that goes on through this process (I’ve gone on about this before, here and here).  I wonder how many of the “resubmissions” were rejected not because of insufficient quality, but because they were doing interesting work that threatened one or more reviewers.  This makes sense, as new and edgier work will eventually get cited more than middle-of-the-road replication of old results – at least, that has been my experience.  So perhaps Calcagno has given us empirical evidence for the intellectual policing function of peer review.

Earlier this week, Linda Raftree pointed me to this article, which references another article that calls blogging without tenure “an extreme sport” because of the risks involved.  It is a little hard for me to comment on this specifically, as I did not start blogging until after I had tenure – not because I was afraid of blogging, but because it never occurred to me to blog before then (basically, my agent and my publisher pushed me to blog to promote my book).  I did plenty of “public sphere” writing, such as op-eds in The State (Columbia, SC).  Hell, right before I went up for tenure I published one titled “Governor’s energy report has no clothes.”  I walked into my chair’s office the day it was published, and he shook his head and said “not exactly keeping your head down, are you?”  The op-ed had no impact on my tenure at all.  In most cases, neither will blogging.

I think most academics are far too timid when it comes to public expression.  They fear reprisals against their careers, but rarely seem able to articulate where such reprisals might come from or how they might actually create harm.  I am sure there are indeed cases of highly dysfunctional situations where individuals’ careers might be harmed by the public expression of their views on a given subject within their expertise, but such situations are volatile for many reasons, and blogging is unlikely to ever be the cause of career problems.  In fact, I am convinced that there is far more upside to blogging than there might ever be downside.  On the upside:

1) As I recently noted, my blog and twitter accounts appear to have done a great deal to spread my work around, and to get that work used (at least by other writers).  Find me a department that will complain about your rapidly rising citation counts.

2) You will develop a whole new community of colleagues, and they will bring new ideas and perspectives that you simply cannot get talking to people in your department, or even in your discipline.  These ideas and perspectives can be challenging, but if you can harness them, they can carry your thinking to new and innovative places.

3) When you develop a public persona, you can build a degree of freedom from problematic situations in your home institution.  You can cultivate a community in which there might be several people interested in giving you a job.  Further, universities love publicly-visible faculty, because they are easy to point to when someone asks what the faculty contribute to the larger society (and yes, this does get asked often).

4) You practice speaking in multiple registers: we all write academic articles, and if you are on the tenure track I hope you’ve figured that process out.  But do you know how to engage the person on the street?  Taxpayers fund a lot of research, and explaining to them why they should be happy they are funding yours is a worthwhile skill.  You can’t do that through a journal article, or in the language of your discipline.

On the downside:

1) Bill Easterly said it best: the blog is a hungry mouth.  It can be hard to keep up with posting, especially when you have a bunch of other stuff going on during the semester.

2) You will be exposed to griefers – the internet is a harsh place.  People will say nasty things about you and your ideas.  If you are fragile, do not try this at home.

Anyway, these are just my quick thoughts on blogging and academia, and I am sure my thoughts are incomplete and others will have something to add.  Indeed, you should check out Marc Bellemare’s recent post on things he has learned as an untenured blogger.  Speaking for myself, though, I have not regretted blogging at all, and aside from sometimes being exhausted after finishing a post, I have yet to see a serious drawback from doing so – but the benefits have been remarkable.


Following on my previous post, another thought that springs from personal experience and its convergence with someone’s research.  If you look at my Google Scholar profile, you will note that in 2011 my citation counts exploded (by social science standards, mind you – in the qualitative social sciences, an article with 50 citations or more is pretty huge).  Now, part of this is probably a product of my academic maturation – a number of articles now getting attention have been around for 3-4 years, which is about how long it takes for things to work their way into the literature.  However, I’ve also seen a surge in a few older pieces that had previously plateaued in terms of citations.  This can’t be attributed to a new surge in interest in a particular topic, as these articles cross a range of development issues.  However, they all seem to be surging since I got on Twitter and joined the blogosphere.  Basically, it seems a new circle (circles?) of interested folks now has access to my work and ideas, and the result is that my work is finding its way into a new set of venues/disciplines that it might otherwise not have reached.  It is hard to be sure about this, as my 18 months on the blog and 1 year on Twitter are just at the edges of how long it takes to get an article written, submitted, accepted and published, but clearly something is happening here . . .

This seems to be borne out by some work done by Gunther Eysenbach examining the relationship between tweets (references to a paper on Twitter) and the level of citation that paper eventually enjoyed.  Eysenbach found that “highly tweeted” papers tended to become highly cited papers, though the study was quite preliminary (h/t to Martin Fenner at Gobbledygook.  You can find links to Eysenbach’s paper and Martin’s thoughts on it here).  This makes sense to me – but it requires a bit more study.  I like what Fenner and his colleagues are trying to do now, capturing the type of reference made in the tweet (supporting/agreeing, discussing, disagreeing, etc.).  Frankly, references in general should be subject to such scrutiny.  As one of my colleagues once said, if citation counts are all that matter, we should write the worst paper ever on a subject, jam it into some journal that did not know better, publicize it, and wait for the angry negative citations to pile in . . . after all, we only have to count the citations, not admit that we are being cited because people hate us!

The altmetrics movement is starting to take off in academia (see, for example, this very cool discussion).  I have not yet seen any discussion, though, of what social media might do to journal prestige.  While there will always be flagship journals to which disciplines full of tenure-track faculty will bow, once tenure is achieved this sort of homage becomes less important.  Given what I am seeing with regard to my citations right now, my desire to have my work have impact beyond my discipline and the academy, and my concerns about the policing effect of peer review (which emerges most acutely in flagship journals – see my posts here and here), why should I struggle to get my work into a flagship journal when I can get a quick turnaround and publication in a smaller journal, still have the stamp of peer review on the piece, and then promote it via social media to a crowd more than willing to have a look?  If I (or anyone else) can drive citations through mild self-promotion via social media, does the journal a piece is published in really matter that much?  I wonder what sort of effect this might have on the structure of publishing now – will flagship journals have to become more nimble and responsive, or will they soldier on without changes?  Will smaller journals sense this opportunity and move into this gap?  Will my colleagues embrace the rising influence of social media on academic practice?

Does any of this matter?  Not really.  If the emerging studies on social media and citation are correct, and my trends are sustainable, then one day I will be one of the “important” folks with a lot of citations . . . and I will be training my students to engage in conventional and non-conventional ways.  I will not be the only one.  Those of us who engage with social media, and train our students to do so, will eventually win this race.  Change is coming to academia, but the nature and importance of that change remain up in the air . . .

Been a while . . . been busy.  And yes, I stole that post title from Ralph Nader . . .

As those who follow this blog know, one of my big concerns is with the walls that academia is building around itself through practices like the current incarnation of peer review in specialist journals. It’s not that I have a problem with peer review at all – I think it is an important tool through which we improve and vet academic work. Anything that survives peer review is by and large more reliable than an unvetted website (like this one, for example).

But the practice of peer review in contemporary academia has become really problematic. Most respected journals are more expensive than ever, making access to them the near-sole province of academics with access to libraries willing to purchase such journals. The pressure to publish increases all the time, both in rising demands on individual researchers (my requirements for tenure were much tougher than most requirements from a generation before) and in terms of an ever-expanding academic community. The proliferation of published work that has emerged from these two trends has not really improved the quality of information or the pace of advances – there is still a lot of good work out there, but it is harder and harder to find in an ever-growing pile of average and even not-so-good work. And I have found that peer review often functions as a means of policing new ideas, slowing the flow of innovative ideas into academia not because the ideas are unsupported, but because these ideas and findings run contrary to previously-accepted ideas upon which many reviewers might have built their work. This byzantine politics of peer review is not well understood by those outside the academic tent, and does little to improve our public image.

So I am wondering where the tipping point is that might bring about something new. Social media is nice, but it is not peer-reviewed. I tend to think about it as advertising that points me to useful content, but not as content itself (I have a post on this coming next). I still want peer review, or something like it. So, a modest proposal: senior colleagues of mine in Geography – yes, those of you who are full professors at the top of the profession, who have nothing to lose from a change in the status quo at this point – should get together and identify a couple of open-access, very low-cost journals and more or less pronounce them valid (probably in part by blessing them with a few of your own papers to start). Don’t pick the ones that want to charge $1500 in publishing fees – those are absurd. But pick something different . . .

This, I think, is all it would take to start a real movement in my discipline – admittedly, a small discipline, so maybe easier to move. Just making our publications open to all is a tiny first step, but an important one – once a wider community has access to our ideas, they can respond and prompt us for new ones. Collaborations can emerge that should have emerged long ago. Colleagues (and research subjects) in the Global South will be able to read what is written about their environments, economies and homes, improving our responsiveness to those with whom, and hopefully for whom, we work. First steps can be catalytic . . .

I’ve made a few changes to my personal homepage (www.edwardrcarr.com).  This included cleaning up a few things, adding a few book reviews for Delivering Development, and updating my CVs.  However, today, for the first time since I set my homepage up, I have added a page . . . there is now a page for pre-prints.  I have become thoroughly fed up with the gatekeeping and slow pace of academic publishing – I was annoyed to start with, but after more than a year in an agency, and about 18 months engaged with a much wider environment/development community via the blog and twitter, I have come to realize that academic publishing, for all its rigor and legitimacy, is something of a liability.  There is no way anyone is going to wait around for my work, or anyone else’s work, to wend its way through peer review and the inevitable publication delays before it appears in print.

To address this, I am now posting work that I have submitted for review – it is polished, and sometimes it has seen a round of peer review already (those pieces will be marked revised and resubmitted).  However, these are not fully finished, peer-approved works – which means they will likely change a little before they come out in final form.  My goal is to make this stuff available more or less as soon as I submit it.  I am open to comments and suggestions – I can still work them in before the final version goes out!

Some of you might wonder how this could affect the idea of double-blind peer review.  Well, in my experience, double-blind peer review in development studies – or indeed in any of the qualitative social sciences – is largely a joke.  In my field, we tend to invest a lot of time and effort working in a particular place, and so it is very, very easy to figure out who is writing about what.  I often know who the author of a piece is as soon as I read the abstract – and there are always enough details in any manuscript to facilitate a quick Google search that will identify the author.  Both pieces that I currently have on my website work from material for which I am well-known within my field.  For example, just mentioning the villages of Dominase and Ponkrum in Ghana in the livelihoods piece pretty much tells everyone who it is.  And the piece on academic engagement with development practice comes directly from a panel at last year’s Association of American Geographers Annual Meeting which was attended by more than 100 people, as well as an extended listserv exchange in the fall of 2010 that was sent out to several thousand subscribers of various lists.  Again, pretty much everyone will be able to figure out who wrote it.

So, the work is now up there for your perusal.  Have a look, and let me know what you think . . .

Colleague Ben Neimark at ODU recently asked me a tough question: “What makes for good (helpful to get published, strengthened, intellectually creative, etc.) peer review?”  I figured this might be of wider interest to academic colleagues, as well as those who see the entire academic publishing world as somewhat opaque.  So . . .

I think the challenge in producing a good peer review is to balance its dual imperatives.  There is the part of peer review that ensures quality and offers constructive criticism (I have received some in the case of my current livelihoods work – see here, here and here – and have had some reviewers offer great stuff in the past).  Then there is the disciplinary policing that goes on through peer review, where reviewers don’t examine the quality of the data or argument, but simply argue against it because it challenges a convention (to which the reviewer likely belongs, or which the reviewer established) – see my comments about reviewer 1 at the bottom of this post.  This second function makes innovation very challenging unless you are very, very hardheaded (which I am).

In a nutshell, though, I think good peer review is that which looks at a paper for its stated aims and evaluates:

  1. whether those stated aims are actually new and interesting, and
  2. whether the paper achieved those stated aims.

If standard 1) is not met, a good peer reviewer should be able to suggest where the real contribution of the paper lies – i.e. by suggesting literatures into which the author should place the manuscript.  If standard 2) is not met, the reviewer should explain exactly how and why this happened, and what sorts of remedial steps might solve the problem(s).  That is my minimum take . . .

I’m happy to hear the opinions of others . . .