Entries tagged with “qualitative methods”.
Sun 24 Jul 2016
Unsolicited publishing advice/reviewing rant to follow. Brace yourselves.
When writing an article based on the quantitative analysis of a phenomenon, whatever it may be and however novel your analysis, you are not absolved from reading/understanding the conceptual literature (however qualitative) addressing that phenomenon. Sure, you might be using a larger dataset than ever used before. Certainly, the previous literature might have been case-study based, and therefore difficult to generalize. But that doesn’t give you a pass to just ignore that existing literature.
- That literature establishes the meanings of the concepts you are measuring/testing
- That literature captures the current state of knowledge on those concepts
- Often, that literature (if qualitative, especially if ethnographic) can get at explanations for the phenomenon that cannot be had through quantitative methods alone
If you ignore this literature:
- You’ll just ask questions that have already been answered. Everybody hates that, especially time-constrained reviewers who already know the answers to your questions because they actually have read/contributed to the literature you ignored.
- You’ll likely end up with results that don’t make sense, and with no means of explaining or even addressing them. Editors and reviewers hate that, too.
- Your results, even if they appear to be statistically significant, will be crap. I don’t care how sophisticated your quantitative analysis is, or how innovative your tools might be, you are shoving crap into a very innovative, sophisticated tool, which means that all you’ll get out the other end is crap. Reviewers hate crap. Editors hate crap. And your crap is probably not actionable (and really shouldn’t be), so nobody outside academia will like your crap.
Please don’t generate more crap. There is plenty around.
Finally, a note on professionalism and your career: Citing around people who have worked on the phenomenon you are investigating because you are trying to capture a particular field of knowledge is awful intellectual practice that, beyond needlessly slowing the pace of innovation in the field in question, will never work…because editors will send your article for review to exactly the people you are not citing. And they will wreck you.
Tue 2 Dec 2014
Posted by Ed under Adaptation, Climate Change, Livelihoods, research
Co-production: The next “participation”?
Those of you who’ve read this blog before know that I have a lot of issues with “technology-will-fix-it” approaches to development program and project design (what Evgeny Morozov calls “solutionism”). My main issue is that such approaches generally don’t work. Despite a very, very long history of such interventions and their outcomes demonstrating this point, the solutionist camp in development seems to grow stronger all the time. If I hear one more person tell me that mobile phones are going to fix [insert development challenge here], I am going to scream. And don’t even get me started about “apps for development,” which is really just a modified incarnation of “mobile phones will fix it” predicated on the proliferation of smartphones around the world. Both arguments, by the way, were on full display at the Conference on the Gender Dimensions of Weather and Climate Services I attended at the WMO last month. Then again, so were really outdated framings of gender. Perhaps this convergence of solutionism and reductionist framings of social difference means something about both sets of ideas, no?
At the moment I’m particularly concerned about the solutionist tendency in weather and climate services for development. At this point, I don’t think there is anything controversial in arguing that the bulk of services in play today were designed by climate scientists/information providers who operated with the assumption that information – any information – is at least somewhat useful to whoever gets it, and must be better than leaving people without any information. With this sort of an assumption guiding service development, it is understandable that nobody would have thought to engage the presumptive users of the service. First, it’s easy to see how some might have argued that the science of the climate is the science of the climate – so citizen engagement cannot contribute much to that. Second, while few people might want to admit this openly, the fact is that climate-related work in the Global South, like much development work, carries with it an implicit bias against the capabilities and intelligence of the (often rural and poor) populations it is meant to serve. The good news is that I have seen a major turn in this field over the past four years, as more and more people working in this area have come to realize that the simple creation and provision of information is not enough to ensure any sort of impact on the lives of presumptive end-users of the information – the report I edited on the Mali Meteorological Service’s Agrometeorological Advisory Program is Exhibit A at the moment.
So, for the first time, I see climate service providers trying to pay serious attention to the needs of the populations they are targeting with their programs. One of the potentially important ideas I see emerging in this vein is that of “co-production”: the design and implementation of climate services that involves the engagement of both providers and a wide range of users, including the presumptive end users of the services. The idea is simple: if a meteorological service wants to provide information that might meet the needs of some/all of the citizens it serves, that service should engage those citizens – both as individuals and via the various civil society organizations to which they might belong – in the process of identifying what information is needed, and how it might best be delivered.
So what’s the problem? Simple: While I think that most people calling for the co-production of climate services recognize that this will be a complex, fraught process, there is a serious risk that co-production could be picked up by less-informed actors and used as a means of pushing aside the need for serious social scientific work on the presumptive users of these services. It’s pretty easy to argue that if we are incorporating their views and ideas into the design of climate services, there is really no need for serious social scientific engagement with these populations, as co-production cuts out the social-science middleman and gets us the unmitigated, unfiltered voice of the user.
If this sounds insanely naïve to you, it is*. But it is also going to be very, very attractive to at least some in the climate services world. Good social science takes time and money (though nowhere near as much time or money as most people think). And cutting time and cost out of project design, including M&E design, speeds implementation. The pressure to cut out serious field research is, and will remain, strong. Further, the bulk of the climate services community is on the provider side. They’ve not spent much, if any, time engaging with end users, and generally have no training at all in social science. All of those lessons that the social sciences have learned about participatory development and its pitfalls (for a fantastic overview, read this) have not yet become common conversation in climate services. Instead, co-production sounds like a wonderful tweak to the solutionist mentality that dominates climate services, a change that does not challenge the current framings of the use and utility of information, or the ways in which most providers do business. Instead, you keep doing what you do, but you talk to the end users while you do it, which will result in better project outcomes.
But for co-production to replace the need for deep social scientific engagement with the users of climate services, certain conditions must be met. First of all, you have to figure out how, exactly, you are going to incorporate user information, knowledge, and needs into the design and delivery of a climate service. This isn’t just a matter of a few workshops – how, exactly, are those operating in a nomothetic scientific paradigm supposed to engage and meaningfully incorporate knowledge from very different epistemological framings of the world? This issue, by itself, is generating significant literature…which mostly suggests this sort of engagement is really hard. So, until we’ve worked out that issue, co-production looks a bit like this:
Climate science + end user input => Then a miracle happens => successful project
That, folks, is no way to design a project. Oh, but it gets better. You see, the equation above presumes there is a “generic user” out there who can be engaged in a straightforward manner, and for whom information works in the same way. Of course, there is no such thing – even within a household, there are often many potential users who bring climate information into their decision-making in different ways. They may undertake different livelihoods activities that are differently vulnerable to particular impacts of climate variability and change. They may have very different capacities to act on information – after all, when you don’t own a plow or have the right to use the family plow, it is very difficult to act on a seasonal agricultural advisory that tells you to plant right away. Climate services need serious social science, and social scientists, to figure out who the end users are – to move past presumption to empirical analysis – and what their different needs might be. Without such work, the above equation really looks more like:
Climate science => Then a miracle happens => you identify appropriate end users => end user input => Then another miracle happens => successful project
Yep, two miracles have to happen if you want to use co-production to replace serious social scientific engagement with the intended users of climate services. So, who wants to take a flyer with some funding and see how that goes? Feel free to read the Mali report referenced above if you’d like to find out**.
Co-production is a great idea – and one I strongly support. But it will be very hard, and it will not speed up the process of climate service design or implementation, nor will it allow for the cutting of corners in other parts of the design process. Co-production will only work in the context of deep understandings of the targeted users of a given service, so that we know who we should be co-producing with, and for what purpose. HURDL continues to work on this issue in Mali, Senegal, and Zambia – watch this space in the months ahead.
*Actually, it doesn’t matter how it sounds: this is a very naïve assumption regardless.
** Spoiler: not so well. To be fair to the folks in Mali, their program was designed as an emergency measure, not a research or development program, and so they rushed things out to the field making a lot of assumptions under pressure.
Tue 4 Feb 2014
Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result – a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.
One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…
HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers. Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.
Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!
Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…
Tue 31 Dec 2013
First up in my week of update posts is a re-introduction to my reworked livelihoods approach. As some of you might remember, the formal academic publication laying out the theoretical basis for this approach came out in early 2013. The approach presented in the article is the conceptual foundation for much of the work we are doing in my lab. This pub is now up on my home page, via the link above or through a link on the publications page.
The premise behind this approach, and why I developed it in the first place, is simple. Most livelihoods approaches implicitly assume that the primary motivation for livelihoods decisions is the maximization of some sort of material return on that activity. Unfortunately, in almost all cases this is a massive oversimplification of livelihoods decision-making processes, and in many cases is fundamentally incorrect. Think about the number of livelihoods studies where there are many decisions or behaviors that seem illogical when held up to the logic of material maximization (which would be any good livelihoods study, really). We spend a lot of time trying to explain these decisions away (idiosyncrasy, incomplete information, etc.). But this makes no sense – if you are living on $1.25 a day, and you are illogical or otherwise making decisions against interest, you are likely dead. So there must be a logic behind these decisions, one that we must engage if we are to understand why people do what they do, and if we are to design and implement development interventions that are relevant to the needs of the global poor. My livelihoods approach provides a means of engaging with and explaining these behaviors built on explicit, testable framings of decision-making, locally-appropriate divisions of the population into relevant groupings (i.e. gender, age, class), and the consideration of factors from the local to the global scale.
The article is a straight-ahead academic piece – to be frank, the first half of the article is not that accessible to those without backgrounds in social theory and livelihoods studies. However, the second half of the article is a case study that lays out what the approach allows the user to see and explain, which should be of interest to most everyone who works with livelihoods approaches.
For those who would like a short primer on the approach and what it means in relatively plain English, I’ve put up a “top-line messages” document on the preprints page of my website.
Coming soon is an implementation piece that guides the user through the actual use of the approach. I field-tested the approach in Kaffrine, Senegal with one of my graduate students from May to July 2013. I am about to put the approach to work in a project with the Red Cross in the Zambezi Basin in Zambia next month. In short, this is not just a theoretical pipe dream – it is a real approach that works. In fact, the reason we are working with Red Cross is that Pablo Suarez of Boston University and the Red Cross Climate Centre read the academic piece and immediately grasped what it could do, and then reached out to me to bring me into one of their projects. The implementation piece is already fully drafted, but I am circulating it to a few people in the field to get feedback before I submit it for review or post it to the preprints page. I am hoping to have this up by the end of January. Once that is out the door, I will look into building a toolkit for those who might be interested in using the approach.
I’m really excited by this approach, and the things that are emerging from it in different places (Mali, Zambia, and Senegal, at the moment). I would love feedback on the concept or its use – I’m not a defensive or possessive person when it comes to ideas, as I think debate and critique tend to make things stronger. The reason I am developing a new livelihoods approach is because the ones we have simply don’t explain the things we need to know, and the other tools of development research that dominate the field at the moment (i.e. RCTs) cannot address the complex, integrative questions that drive outcomes at the community level. So consider all of this a first draft, one that you can help bring to final polished form!
Mon 8 Jul 2013
Ok, so that title was meant to goad my fellow anthropologists, but before everyone freaks out, let me explain what I mean. The best anthropology, to quote Marshall Sahlins, “consists of making the apparently wild thought of others logically compelling in their own cultural settings and intellectually revealing of the human condition.” This is, of course, not bound by time. Understanding the thought of others, wherever and whenever it occurs, helps to illuminate the human condition. In that sense, ethnographies are forever.
However, in the context of development and climate change, ethnography has potential value beyond this very broad goal. The understandings of human behavior produced through ethnographic research are critical to the achievement of the most noble and progressive goals of development*. As I have argued time and again, we understand far less about what those in the Global South are doing than we think, and I wrote a book highlighting how our assumptions about life in such places are a) mostly incorrect and b) potentially very dangerous to the long-term well-being of everyone on Earth. To correct this problem, development research, design, and monitoring and evaluation all need much, much more engagement with qualitative research, including ethnographic work. Such work brings a richness to our understanding of other people, and lives in other places, that is invaluable to the design of progressive programs and projects that meet the actual (as opposed to assumed) needs of the global poor now and in the future.
As I see it, the need for ethnographic work in development presents two significant problems. The first, which I have discussed before, is the dearth of such work in the world. Everyone seems to think the world is crawling with anthropologists and human geographers who do this sort of work, but how many books and dissertations are completed each year? A thousand? Less? Compare that to the two billion (or more) poor people living in low-income countries (and that leaves aside the billion or so very poor that Andy Sumner has identified as living in middle-income countries). A thousand books for at least two billion people? No problem, it just means that each book or dissertation has to cover the detailed experiences, motivations, and emotions of two million people. I mean, sure, the typical ethnography addresses an N that ranges from a half dozen to communities of a few hundred, but surely we can just adjust the scale…
OK, so there is a huge shortage of this work, and we need much, much more of it. Well, the good news is that people have been doing this sort of work for a long time. Granted, the underlying assumptions about other people have shifted over time (“scientific racism” was pretty much the norm back in the first half of the 20th Century), but surely the observations of human behavior and thought might serve to fill the gaps from which we currently suffer, right? After all, if a thousand people a year knocked out a book or dissertation over the past hundred years, surely our coverage will improve. Right?
Well, maybe not. Ethnographies describe a place and a time, and most of the Global South is changing very, very rapidly. Indeed, it has been changing for a while, but of late the pace of change seems to be accelerating (again, see Sumner’s work on the New Bottom Billion). Things change so quickly, and can change so pervasively, that I wonder how long it takes for many of the fundamental observations about life and thought that populate ethnographies to become historical relics that tell us a great deal about a bygone era, but do not reflect present realities. For example, in my work in Ghana, I drew upon some of the very few ethnographies of the Akan, written during the colonial era. These were useful for the archaeological component of my work, as they helped me to contextualize artifacts I was recovering from the time of those ethnographies. But their descriptions of economic practice, local politics, social roles, and livelihoods really had very little to do with life in Ghana’s Central Region in the late 1990s. In terms of their utility for interpreting contemporary life among the Akan, they had, for all intents and purposes, expired.
So, the questions I pose here:
1) How do we know when an ethnography has expired? Is it expired when any aspect of the ethnography is no longer true, or when a majority of its observations no longer hold?
2) Whatever standard we might hold them to, how long does it take to reach that standard? Five years? Ten years? Thus far, my work from 2001 in Ghana seems to be holding, but things are wobbling a bit. It is possible that a permanent shift in livelihoods took place in 2006 (I need to examine this), which would invalidate the utility of my earlier work for project design in this area.
These are questions worth debating. If we are to bring more qualitative, ethnographic work to the table in development, we have to find ways to improve our coverage of the world and our ability to assess the resources from which we might draw.
*I know some people think that “noble” and “progressive” are terms that cannot be applied to development. I’m not going to take up that debate here.
Mon 4 Feb 2013
I have a confession. For a long time now I have found myself befuddled by those who claim to have identified the causes behind observed outcomes in social research via the quantitative analysis of (relatively) large datasets (see posts here, here, and here). For a while, I thought I was seeing the all-too-common confusion of correlation and causation…except that a lot of smart, talented people seemed to be confusing correlation with causation. This struck me as unlikely.
Then, the other day in seminar (I was covering for a colleague in our department’s “Contemporary Approaches to Geography” graduate seminar, discussing the long history of environmental determinism within and beyond the discipline), I found myself in a similar discussion related to explanation…and I think I figured out what has been going on. The remote sensing and GIS students in the course, all of whom are extraordinarily well-trained in quantitative methods, got to thinking about how to determine if, in fact, the environment was “causing” a particular behavior*. In the course of this discussion, I realized that what they meant by “cause” was simple (I will now oversimplify): when you can rule out/control for the influence of all other possible factors, you can say that factor X caused event Y to happen. Indeed, this does establish a causal link. So, I finally get what everyone was saying when they said that, via well-constructed regressions, etc., one can establish causality.
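To illustrate what “ruling out/controlling for the influence of all other possible factors” buys you, here is a minimal sketch in Python. Everything in it is invented for illustration (the variables, the effect sizes, even the borehole example carried over from later in this post): a confounder, village wealth, drives both borehole construction and school attendance, so the naive bivariate slope overstates the borehole’s effect, while a regression that controls for wealth recovers something close to the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical confounder: wealthier villages both get more boreholes
# and send more girls to school, independent of any borehole effect.
wealth = rng.normal(0, 1, n)
borehole = (wealth + rng.normal(0, 1, n) > 0).astype(float)
attendance = 0.3 * borehole + 0.4 * wealth + rng.normal(0, 1, n)

def ols(columns, y):
    """OLS coefficients via least squares; a column of ones is the intercept."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols([borehole], attendance)[1]               # omits the confounder
controlled = ols([borehole, wealth], attendance)[1]  # "rules out" wealth

# The naive estimate is inflated by the wealth pathway; the controlled
# estimate sits near the true effect of 0.3 used to generate the data.
print(f"naive slope: {naive:.2f}, controlled slope: {controlled:.2f}")
```

This is the logic of a well-constructed regression in miniature: once the plausible competing factors are controlled for (or randomized away, as in an RCT), the remaining association can be read as causal.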
So it turns out I was wrong…sort of. You see, I wasn’t really worried about causality…I was worried about explanation. My point was that the information you would get from a quantitative exercise designed to establish causal relationships isn’t enough to support rigorous project and program design. Just because you know that the construction of a borehole in a village caused girl-child school attendance to increase in that village doesn’t mean you know HOW the borehole caused this change in school attendance to happen. If you cannot rigorously explain this relationship, you don’t understand the mechanism by which the borehole caused the change in attendance, and therefore you don’t really understand the relationship. In the “more pure” biophysical sciences**, this isn’t that much of a problem because there are known rules that particles, molecules, compounds, and energy obey, and therefore under controlled conditions one can often infer from the set of possible actors and actions defined by these rules what the causal mechanism is.
But when we study people it is never that simple. The very act of observing people’s behaviors causes shifts in that behavior, making observation at best a partial account of events. Interview data are limited by the willingness of the interviewee to talk, and the appropriateness of the questions being asked – many times I’ve had to return to an interviewee to ask a question that became evident later, and said “why didn’t you tell me this before?” (to which they answer, quite rightly, with something to the effect of “you didn’t ask”). The causes of observed human behavior are staggeringly complex when we get down to the real scales at which decisions are made – the community, household/family, and individual. Decisions may vary by time of the year, or time of day, and by the combination of gender, age, ethnicity, religion, and any other social markers that the group/individual chooses to mobilize at that time. In short, just because we see borehole construction cause increases in girl-child school attendance over and over in several places, or even the same place, doesn’t mean that the explanatory mechanism between the borehole and attendance is the same at all times.
Understanding that X caused Y is lovely, but in development it is only a small fraction of the battle. Without understanding how access to a new borehole resulted in increased girl-child school attendance, we cannot scale up borehole construction in the context of education programming and expect to see the same results. Further, if we do such a scale-up, and don’t get the same results, we won’t have any idea why. So there is causality (X caused Y to happen) and there are causal mechanisms (X caused Y to happen via Z – where Z is likely a complex, locally/temporally specific alignment of factors).
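The gap between causality and causal mechanism can also be made concrete in a toy simulation (again, all variable names and effect sizes here are invented for illustration): two villages where a borehole raises girl-child school attendance by the same amount, but through entirely different mechanisms (Z).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Village A: the borehole works through time freed from hauling water.
borehole_a = rng.integers(0, 2, n).astype(float)
time_freed = 2.0 * borehole_a + rng.normal(0, 0.5, n)
attend_a = 0.25 * time_freed + rng.normal(0, 0.5, n)

# Village B: the borehole works through reduced waterborne illness.
borehole_b = rng.integers(0, 2, n).astype(float)
illness_drop = 2.0 * borehole_b + rng.normal(0, 0.5, n)
attend_b = 0.25 * illness_drop + rng.normal(0, 0.5, n)

def ols_slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Both regressions recover the same overall effect of X on Y (about 0.5)...
print(f"village A: {ols_slope(borehole_a, attend_a):.2f}")
print(f"village B: {ols_slope(borehole_b, attend_b):.2f}")
# ...but nothing in either slope identifies the mechanism (the Z).
```

The two slopes are statistically indistinguishable, yet a scale-up designed around the wrong mechanism (say, water-carrying infrastructure in a village where illness was the pathway) would fail in ways the regression cannot anticipate.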
Unfortunately, when I look at much quantitative development research, especially in development economics, I see a lot of causality, but very little work on causal mechanisms that get us to explanation. There is a lot of story time, “that pivot from the quantitative finding to the speculative explanation.” In short, we might be programming development and aid dollars based upon evidence, but much of the time that evidence only gets us part of the way to what we really need to know to really inform program and project design.
This problem is avoidable – it does not represent the limits of our ability to understand the world. There is one obvious way to get at those mechanisms – serious, qualitative fieldwork. We need to be building research and policy teams where ethnographers and other qualitative social scientists learn to respect the methods and findings of their quantitative brethren such that they can target qualitative methods at illuminating the mechanisms driving robust causal relationships. At the same time, the quantitative researchers on these teams will have to accept that they have only partially explained what we need to know when they have established causality through their methods, and that qualitative research can carry their findings into the realm of implementation.
The bad news for everyone…for this to happen, you are going to have to pick your heads up out of your (sub)disciplinary foxholes and start reading across disciplines in your area of interest. Everyone talks a good game about this, but when you read what keeps getting published, it is clear that cross-reading is not happening. Seriously, the number of times I have seen people in one field touting their “new discoveries” about human behavior that are already common conversation in other disciplines is embarrassing…or at least it should be to the authors. But right now there is no shame in this sort of thing, because most folks (including peer reviewers) don’t read outside their disciplines, and therefore have no idea how absurd these claims of discovery really are. As a result, development studies gives away its natural interdisciplinary advantage and returns to the problematic structure of academic knowledge and incentives, which not only enable, but indeed promote narrowly disciplinary reading and writing.
Development donors, I need a favor. I need you to put a little research money on the table to learn about whatever it is you want to learn about. But when you do, I want you to demand it be published in a multidisciplinary development-focused journal. In fact, please start doing this for all of your research-related money. People will still pursue your money, as the shrinking pool of research dollars is driving academia into your arms. Administrators like grant and contract money, and so many academics are now being rewarded for bringing in grants and contracts from non-traditional sources (this is your carrot). Because you hold the carrot, you can draw people in and then use “the stick” inherent in the terms of the grant/contract to demand cross-disciplinary publishing that might start to leverage change in academia. You all hold the purse, so you can call the tune…
*Spoiler alert: you can’t. Well, you probably can if 1) you pin the behavior you want to explain down to something extraordinarily narrow, 2) can limit the environmental effect in question to a single independent biophysical process (good luck with that), and 3) limit your effort to a few people in a single place. But at that point, the whole reason for understanding the environmental determinant of that behavior starts to go out the window, as it would clearly not be generalizable beyond the study. Trust me, geography has been beating its head against this particular wall for a century or more, and we’ve buried the idea. Learn from our mistakes.
**by “more pure” I am thinking about those branches of physics, chemistry, and biology in which lab conditions can control for many factors. As soon as you get into field sciences, or start asking bigger questions, complexity sets in and things like causality get muddied in the manner I discuss above…just ask an ecologist.
This paper is currently available for review at The Winnower.
Wed 15 Aug 2012
Alright, last post I laid out an institutional problem with M&E in development – the conflict of interest between achieving results to protect one’s budget and staff, and the need to learn why things do/do not work to improve our effectiveness. This post takes on a problem in the second part of that equation – assuming we all agree that we need to know why things do/do not work, how do we go about doing it?
As long-time readers of this blog (a small, but dedicated, fanbase) know, I have some issues with over-focusing on quantitative data and approaches for M&E. I’ve made this clear in various reactions to the RCT craze (see here, here, here and here). Because I framed my reactions in terms of RCTs, I think some folks think I have an “RCT issue.” In fact, I have a wider concern – the emerging aggressive push for quantifiable data above all else as new, more rigorous implementation policies come into effect. The RCT is a manifestation of this push, but really is a reflection of a current fad in the wider field. My concern is that the quantification of results, while valuable in certain ways, cannot get us to causation – it gets us to really, really rigorously established correlations between intervention and effect in a particular place and time (thoughtful users of RCTs know this). This alone is not generalizable – we need to know how and why that result occurred in that place, to understand the underlying processes that might make that result replicable (or not) in the future, or under different conditions.
As of right now, the M&E world is not doing a very good job of identifying how and why things happen. What tends to happen after rigorous correlation is established is what a number of economists call “story time”, where explanation (as opposed to analysis) suddenly goes completely non-rigorous, with researchers “supposing” that the measured result was caused by social/political/cultural factor X or Y, without any follow-on research to figure out if in fact X or Y even makes sense in that context, let alone whether or not X or Y actually was causal. This is where I fear various institutional pushes for rigorous evaluation might fall down. Simply put, you can measure impact quantitatively – no doubt about it. But you will not be able to rigorously say why that impact occurred unless someone gets in there and gets seriously qualitative and experiential, working with the community/household/what have you to understand the processes by which the measured outcome occurred. Without understanding these processes, we won’t have learned what makes these projects and programs scalable (or what prevents them from being scaled) – all we will know is that it worked/did not work in a particular place at a particular time.
So, we don’t need to get rid of quantitative evaluation. We just need to build a strong complementary set of qualitative tools to help interpret that quantitative data. So the next question to you, my readers: how are we going to build in the space, time, and funding for this sort of complementary work? I find most development institutions to be very skeptical as soon as you say the word “qualitative”…mostly because it sounds “too much like research” and not enough like implementation. Any ideas on how to overcome this perception gap?
(One interesting opportunity exists in climate change – a lot of projects are currently piloting new M&E approaches, as evaluating the impacts of climate change programming requires very long time horizons. In at least one M&E effort I know of, there is talk of running both quantitative and qualitative project evaluations to see what each method can and cannot answer, and how they might fit together. Such a demonstration might catalyze further efforts…but this outcome is years away)
Mon 16 Apr 2012
Posted by Ed under Academia, Adaptation, Africa, Climate Change, Delivering Development, development, environment, globalization, Livelihoods, policy, research
Comments Off on Another talk – Gainesville, anyone?
I will be speaking about my book and research at the University of Florida on Friday as part of the Glen R. Anderson Visiting Lectureship. Poster here:
Hope to see folks there!
Wed 9 Nov 2011
Posted by Ed under Adaptation, Africa, Climate Change, Delivering Development, development, environment, globalization, Livelihoods, migration, research, sustainable development
Comments Off on Upcoming Talk
I’ll be running my mouth about the book again at Chatham University on December 2nd. Chatham has some very cool stuff going on in sustainability and the environment (a new school!), including a new Eden Hall Campus in Richland Township, PA. My talk will actually be out on that campus, and not at the Shadyside campus . . . directions are here.
The flyer (they’ve done a nice job on it):
Hope to see some of you there . . .
Tue 8 Nov 2011
Posted by Ed under development, Development Institutions, environment, Livelihoods, research
Comments Off on A Personal Publication Fix
I’ve made a few changes to my personal homepage (www.edwardrcarr.com). This included cleaning up a few things, adding a few book reviews for Delivering Development, and updating my CVs. However, today, for the first time since I set my homepage up, I have added a page . . . there is now a page for pre-prints. I have become thoroughly fed up with the gatekeeping and slow pace of academic publishing – I was annoyed to start with, but after more than a year in an agency, and about 18 months engaged with a much wider environment/development community via the blog and twitter, I have come to realize that academic publishing, for all its rigor and legitimacy, is something of a liability. There is no way anyone is going to wait around for my work, or anyone else’s work, to wend its way through peer review and the inevitable publication delays before it appears in print.
To address this, I am now posting work that I have submitted for review – it is polished, and sometimes it has seen a round of peer review already (those pieces will be marked “revised and resubmitted”). However, these are not fully finished, peer-approved works – which means they will likely change a little before they come out in final form. My goal is to make this material available more or less as soon as I submit it. I am open to comments and suggestions – I can still work them in before the final version goes out!
Some of you might wonder how this could affect the idea of double-blind peer review. Well, in my experience, double-blind peer review in development studies – or indeed in any of the qualitative social sciences – is largely a joke. In my field, we tend to invest a lot of time and effort working in a particular place, and so it is very, very easy to figure out who is writing about what. I often know who the author of a piece is as soon as I read the abstract – and there are always enough details in any manuscript to facilitate a quick Google search that will identify the author. Both pieces that I currently have on my website work from material for which I am well-known within my field. For example, just mentioning the villages of Dominase and Ponkrum in Ghana in the livelihoods piece pretty much tells everyone who it is. And the piece on academic engagement with development practice comes directly from a panel at last year’s Association of American Geographers Annual Meeting which was attended by more than 100 people, as well as an extended listserv exchange in the fall of 2010 that was sent out to several thousand subscribers of various lists. Again, pretty much everyone will be able to figure out who wrote it.
So, the work is now up there for your perusal. Have a look, and let me know what you think . . .