Archive for February, 2013

Eric Cantor’s recent call to shift funding from the social sciences to the hard sciences (“Funds currently spent by the government on social science — including on politics of all things — would be better spent helping find cures to diseases”) reflects a profound misunderstanding of the complementary roles these two epistemological arenas play.  John Sides has covered a range of reasons why the social sciences should not be seen as superfluous to needs, all centering on the fact that social phenomena are central to human well-being and happiness.  As he notes:

My problem with this laser focus on the hard sciences and on medicine is that it pretends that people’s quality of life simply depends on physical phenomena—how fast computers are or how much their knee hurts and so on.   That’s simply not true.  Much of people’s happiness—indeed, including whether they have access to computers or can endure a physical malady—depends on social phenomena.

Even more compelling is Mark Slouka’s 2009 article in Harper’s, which offers one of the clearest defenses of the humanities I have ever read: simply put, without the humanities it is very difficult to be a functional citizen in a democracy (but in their absence it is very easy to produce a docile population of workers).

Let me take Slouka’s argument past what reads like something of an either/or tradeoff between the humanities and what he called “mathandscience” and toward a point of complementarity: simply put, science is a way of seeing the world that enables particular understandings of that world. Science has facilitated spectacular changes in the way we live, from household technologies to medical advances. But science is only one way of seeing the world, and it cannot tell us what we should do, or what else we might do. Those questions are the province of ethics, justice, and empathy, and science is poorly equipped to address any of them.

This is why science and technology require the social sciences and humanities. They help us separate what is possible in the world from what should be done in the world. Remember, history is littered with examples of highly rational, scientific projects that killed huge numbers of people in the name of a greater good or a logical goal (anyone remember the Soviet collectivization of agriculture under Stalin? How about the far less brutal, but still problematic ujamaa collectivization in Tanzania?). Without the arts, humanities, and social sciences, we are left with a tool (science) and no guidance about how to use it.  Further, the growing field of science and technology studies shows that the capacities of particular technologies, in and of themselves, tell us little about who will adopt them and why. Trevor Birkenholtz’s work in India, for example, demonstrates that farmers continue to use tubewells, even though they know that this practice contributes to groundwater depletion, because the use of tubewells is closely bound up in one’s identity as a good and prosperous farmer.  Without such insights, how can we work with farmers in this region to identify locally-appropriate alternative water-supply technologies?

Cantor, and those like him, live in an odd world where technologies and commodities are social goods unto themselves, with universal and obvious value. Existing social scientific work already demonstrates this to be untrue. Defunding such work will not make his beliefs any more true; it will just make it harder to make the world a better place with the scientific tools we have now and those we will develop in the future.

I have a confession. For a long time now I have found myself befuddled by those who claim to have identified the causes behind observed outcomes in social research via the quantitative analysis of (relatively) large datasets (see posts here, here, and here). For a while, I thought I was seeing the all-too-common confusion of correlation and causation…except that this would mean a lot of smart, talented people were confusing correlation with causation, which struck me as unlikely.

Then, the other day in seminar (I was covering for a colleague in our department’s “Contemporary Approaches to Geography” graduate seminar, discussing the long history of environmental determinism within and beyond the discipline), I found myself in a similar discussion related to explanation…and I think I figured out what has been going on.  The remote sensing and GIS students in the course, all of whom are extraordinarily well-trained in quantitative methods, got to thinking about how to determine if, in fact, the environment was “causing” a particular behavior*. In the course of this discussion, I realized that what they meant by “cause” was simple (I will now oversimplify): when you can rule out/control for the influence of all other possible factors, you can say that factor X caused event Y to happen.  Indeed, this does establish a causal link.  So, I finally get what everyone was saying when they said that, via well-constructed regressions, etc., one can establish causality.
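To make concrete the kind of causal claim the students were describing, here is a minimal sketch of that “control for everything else” regression logic, using the borehole-and-attendance example I return to below. Everything in it is hypothetical (simulated data, made-up variable names, an invented effect size); the point is simply that what comes out the other end is a coefficient, not an explanation.

```python
# A toy illustration (hypothetical data and variable names) of establishing
# causality by controlling for other observed factors in a regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # hypothetical villages

# Simulated village-level data: whether a borehole was built, plus two
# observed controls (village wealth, distance to the nearest school).
borehole = rng.integers(0, 2, n)
wealth = rng.normal(0, 1, n)
distance_km = rng.uniform(0.5, 10, n)

# Girl-child school attendance rate, with a built-in borehole "effect" of
# 8 percentage points -- the quantity the regression is meant to recover.
attendance = 60 + 8 * borehole + 5 * wealth - 1.5 * distance_km + rng.normal(0, 5, n)

df = pd.DataFrame({
    "attendance": attendance,
    "borehole": borehole,
    "wealth": wealth,
    "distance_km": distance_km,
})

# OLS with controls: if these controls really did capture every confounder,
# the coefficient on borehole would be an estimate of its causal effect...
model = smf.ols("attendance ~ borehole + wealth + distance_km", data=df).fit()
print(model.params["borehole"])  # roughly 8
# ...but nothing in this output says HOW the borehole changed attendance.
```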

So it turns out I was wrong…sort of. You see, I wasn’t really worried about causality…I was worried about explanation. My point was that the information you would get from a quantitative exercise designed to establish causal relationships isn’t enough to support rigorous project and program design. Just because you know that the construction of a borehole in a village caused girl-child school attendance to increase in that village doesn’t mean you know HOW the borehole caused this change in school attendance to happen.  If you cannot rigorously explain this relationship, you don’t understand the mechanism by which the borehole caused the change in attendance, and therefore you don’t really understand the relationship. In the “more pure” biophysical sciences**, this isn’t that much of a problem because there are known rules that particles, molecules, compounds, and energy obey, and therefore under controlled conditions one can often infer from the set of possible actors and actions defined by these rules what the causal mechanism is.

But when we study people it is never that simple. The very act of observing people’s behaviors causes shifts in those behaviors, making observation at best a partial account of events. Interview data are limited by the willingness of the interviewee to talk and by the appropriateness of the questions being asked – many times I’ve had to return to an interviewee to ask a question that only became evident later, saying “why didn’t you tell me this before?” (to which they answer, quite rightly, with something to the effect of “you didn’t ask”). The causes of observed human behavior are staggeringly complex when we get down to the real scales at which decisions are made – the community, the household/family, and the individual. Decisions may vary by time of year, or time of day, and by the combination of gender, age, ethnicity, religion, and any other social markers that the group or individual chooses to mobilize at that time. In short, just because we see borehole construction cause increases in girl-child school attendance over and over in several places, or even in the same place, doesn’t mean that the explanatory mechanism linking the borehole and attendance is the same at all times.

Understanding that X caused Y is lovely, but in development it is only a small fraction of the battle.  Without understanding how access to a new borehole resulted in increased girl-child school attendance, we cannot scale up borehole construction in the context of education programming and expect to see the same results.  Further, if we do such a scale-up, and don’t get the same results, we won’t have any idea why.  So there is causality (X caused Y to happen) and there are causal mechanisms (X caused Y to happen via Z – where Z is likely a complex, locally/temporally specific alignment of factors).

Unfortunately, when I look at much quantitative development research, especially in development economics, I see a lot of causality but very little work on the causal mechanisms that get us to explanation. There is a lot of story time, “that pivot from the quantitative finding to the speculative explanation.” In short, we might be programming development and aid dollars based upon evidence, but much of the time that evidence only gets us part of the way to what we need to know to really inform program and project design.

This problem is avoidable – it does not represent the limits of our ability to understand the world. There is one obvious way to get at those mechanisms – serious, qualitative fieldwork. We need to be building research and policy teams where ethnographers and other qualitative social scientists learn to respect the methods and findings of their quantitative brethren such that they can target qualitative methods at illuminating the mechanisms driving robust causal relationships. At the same time, the quantitative researchers on these teams will have to accept that they have only partially explained what we need to know when they have established causality through their methods, and that qualitative research can carry their findings into the realm of implementation.

The bad news for everyone…for this to happen, you are going to have to pick your heads up out of your (sub)disciplinary foxholes and start reading across disciplines in your area of interest. Everyone talks a good game about this, but when you read what keeps getting published, it is clear that cross-reading is not happening. Seriously, the number of times I have seen people in one field touting their “new discoveries” about human behavior that are already common conversation in other disciplines is embarrassing…or at least it should be to the authors. But right now there is no shame in this sort of thing, because most folks (including peer reviewers) don’t read outside their disciplines, and therefore have no idea how absurd these claims of discovery really are. As a result, development studies gives away its natural interdisciplinary advantage and returns to the problematic structure of academic knowledge and incentives, which not only enables, but indeed promotes, narrowly disciplinary reading and writing.

Development donors, I need a favor. I need you to put a little research money on the table to learn about whatever it is you want to learn about. But when you do, I want you to demand it be published in a multidisciplinary development-focused journal.  In fact, please start doing this for all of your research-related money. People will still pursue your money, as the shrinking pool of research dollars is driving academia into your arms. Administrators like grant and contract money, and so many academics are now being rewarded for bringing in grants and contracts from non-traditional sources (this is your carrot). Because you hold the carrot, you can draw people in and then use “the stick” inherent in the terms of the grant/contract to demand cross-disciplinary publishing that might start to leverage change in academia. You all hold the purse, so you can call the tune…

*Spoiler alert: you can’t.  Well, you probably can if 1) you pin the behavior you want to explain down to something extraordinarily narrow, 2) you can limit the environmental effect in question to a single independent biophysical process (good luck with that), and 3) you limit your effort to a few people in a single place. But at that point, the whole reason for understanding the environmental determinant of that behavior starts to go out the window, as it would clearly not be generalizable beyond the study. Trust me, geography has been beating its head against this particular wall for a century or more, and we’ve buried the idea.  Learn from our mistakes.

**By “more pure” I am thinking of those branches of physics, chemistry, and biology in which lab conditions can control for many factors. As soon as you get into the field sciences, or start asking bigger questions, complexity sets in and things like causality get muddied in the manner I discuss above…just ask an ecologist.