As many of you know, I tend to post when provoked to rage by something in the press/literature/world. These days, I am massively overtasked, which means I need special levels of rage to post. So hooray to Tom Friedman, who in his utterly frustrating column yesterday actually managed to get me there.

I’m going to set aside my issues with the Friedman-standard reductionist crap in the column. Ken Opalo killed it anyway, so just read his post. Instead, I want to spend a few words excoriating Friedman for his lazy, stereotypical portrayal of my friend and colleague Ousmane Ndiaye in that column. First, as has been noted a few times, Ousmane is a climatologist with a Ph.D. This is NOT THE SAME THING AS A WEATHERMAN. Just Google the two, for heaven’s sake. What Ousmane is trained in is high-end physical science, and he is good at it. Really good at it.

But what is really remarkable about Ousmane, and totally elided in Friedman’s lazy, lazy writing, is that he is no office-bound monotone weatherman. First, Ousmane is really, really funny. I’ve never seen him not funny, ever – even in serious meetings. Which makes me wonder how hard Friedman, who writes “His voice is a monotone,” is working to fit Ousmane into the box of “scientist” as Friedman understands it.

Second, Ousmane does remarkable work engaging farmers across Senegal. I have seen him in farmer meetings, talking about seasonal forecasts. He cares deeply about these farmers, and about how well he is able to communicate forecasts to them. I’ve also seen him at Columbia University, in scientific meetings, moving between professors and development donors, talking about new ideas and new challenges that need to be addressed. He moves between these worlds easily, a skill far too rare in the climate change community.

What I am saying here is simple: Friedman missed the fact that he had the star right in front of him, clicking away at the computer. He needed a counterpoint for his rapper, and a sad caricature of Ousmane became that counterpoint. And because of the need to present Ousmane as the boring scientist, Friedman totally missed how unbelievably apocalyptic the figures he was hearing really are, especially for rain-fed agriculturalists in Senegal. A 2°C rise in temperature over the last 60 or so years means that, almost certainly, some varieties of important cereals are no longer germinating, or are having trouble germinating. The fact that Senegal is currently 5°C over normal temperature is unholy – and were this to hold, it would totally crush this year’s harvest (planting starts in about a month, so keep an eye on this), because very little would germinate properly at that temperature.
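If you want to see the shape of that arithmetic, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder – the germination window and the “normal” planting-season temperature are illustrative assumptions, not measured values for any actual Senegalese cultivar – but it shows why a sustained anomaly of a few degrees is not a rounding error for a crop sitting near its thermal limit.

```python
# Illustrative sketch: how a sustained temperature anomaly can push
# planting-season temperatures past a crop's germination window.
# All numbers are hypothetical placeholders, NOT measured values
# for any real Senegalese cereal variety.

GERMINATION_MIN = 12.0   # hypothetical lower bound (degrees C)
GERMINATION_MAX = 42.0   # hypothetical upper bound; germination collapses above this

def germinates(soil_temp_c: float) -> bool:
    """True if the (hypothetical) variety can germinate at this temperature."""
    return GERMINATION_MIN <= soil_temp_c <= GERMINATION_MAX

normal_temp = 38.0  # placeholder "normal" planting-season temperature

for anomaly in (0.0, 2.0, 5.0):
    temp = normal_temp + anomaly
    status = "OK" if germinates(temp) else "FAILS"
    print(f"anomaly +{anomaly:.0f} C -> {temp:.0f} C: germination {status}")

# With these placeholder numbers, a +2 C anomaly sits at the edge of the
# window and a +5 C anomaly falls outside it entirely - the shape of the
# argument above, not a forecast.
```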

Ousmane was describing the apocalypse, and Friedman was fixated on a clicking mouse. Friedman owes Ousmane an apology for this pathetic caricature, and he owes the rest of us an apology for the ways in which his lazy plot and the characters he needed to occupy it resulted in a complete burial of the lede: climate change is already reaching crisis levels in some parts of the world.


P.S., if you want to see some of the work that has started to emerge from working alongside Ousmane, check out this and this.

Those following this blog (or my twitter feed) know that I have some issues with RCT4D (randomized controlled trials for development) work.  I’m actually working on a serious treatment of the issues I see in this work (i.e., a journal article), but I am not above crowdsourcing some of my ideas to see how people respond.  Also, as many of my readers know, I have a propensity for really long posts.  I’m going to try to avoid that here by breaking this topic into two parts.  So, this is part 1 of 2.

To me, RCT4D work is interesting because of its emphasis on rigorous data collection – certainly, poor data collection has long been a problem in development research, and I have no doubt that the data they are gathering is valid.  However, part of the reason I feel confident in this data is that, as I raised in an earlier post, it is replicating findings from the qualitative literature . . . findings that are, in many cases, long-established with rigorously-gathered, verifiable data.  More on that in part 2 of this series.

One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.  However, in the qualitative realm we spend a lot of time thinking about rigor and validity, and how we might achieve both – and there are tools we use to this end, ranging from discursive analysis to cross-checking interviews with focus groups and other forms of data.  Certainly, these are different means of establishing rigor and validity, but they are still there.

Without rigor and validity, qualitative research devolves into bad journalism.  As I see it, good journalism captures a story or an important issue, and illustrates that issue through examples.  These examples are not meant to rigorously explain the issue at hand, but to clarify it or ground it for the reader.  When journalists attempt to move to explanation via these same few examples (as columnists like Kristof and Friedman far too often do), they start making unsubstantiated claims that generally fall apart under scrutiny.  People mistake this sort of work for qualitative social science all the time, but it is not.  Certainly there is some really bad social science out there that slips from illustration to explanation in just the manner I have described, but this is hardly the majority of the work found in the literature.  Instead, rigorous qualitative social science recognizes the need to gather valid data, and therefore requires conducting dozens, if not hundreds, of interviews to establish understandings of the events and processes at hand.

This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement.  For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data.  In short, RCT4D often reverts to bad journalism when it comes time for explanation.  Patterns gleaned from meticulously gathered data are explained in an offhand manner.  For example, in her (otherwise quite well-done) presentation to USAID yesterday, Esther Duflo suggested that some problematic development outcomes could be explained by a combination of “the three I’s”: ideology, ignorance, and inertia.  This is a mind-boggling oversimplification of why people do what they do – ideology is basically nondiagnostic (you need to define and interrogate it before you can do anything about it), and ignorance and inertia are (probably unintentionally) deeply patronizing assumptions about people living in the Global South that have been disproven time and again (my own work in Ghana has demonstrated that people operate with really fine-grained information about incomes and gender roles, and know exactly what they are doing when they act in a manner that limits their household incomes – see here, here and here).  Development has claimed to be overcoming ignorance and inertia since . . . well, since we called it colonialism.  Sorry, but that’s the truth.

Worse, this offhand approach to explanation is often “validated” through reference to a single qualitative case that may or may not be representative of the situation at hand – which is horribly ironic for an approach that is trying to move development research past the anecdotal.  Nor is this merely external observation – I have heard from people working inside J-PAL projects that the overall program puts little effort into serious qualitative work, and has little understanding of what rigor and validity might mean in the context of qualitative methods or explanation.  In short, the bulk of the explanation offered for the interesting patterns of behavior that emerge from these studies rests on uninterrogated assumptions about human behavior that do not hold up to empirical reality.  What RCT4D has identified are patterns, not explanations – explanation requires a contextual understanding of the social.
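To make the pattern/explanation distinction concrete, here is a minimal sketch in Python of what a bare-bones RCT analysis actually delivers: a difference in means between a treatment group and a control group. The data and the intervention are made up for illustration; the point is that nothing in the estimate itself tells you why the difference exists.

```python
import random

random.seed(0)

# Hypothetical trial: a village-level intervention (say, subsidized seed)
# with a made-up outcome (harvest in kg per household). Pure illustration.
control = [random.gauss(100, 15) for _ in range(200)]
treatment = [random.gauss(112, 15) for _ in range(200)]

def mean(xs):
    return sum(xs) / len(xs)

# The RCT's core deliverable: an average treatment effect.
ate = mean(treatment) - mean(control)
print(f"Estimated average treatment effect: {ate:.1f} kg/household")

# Note what this number is NOT: it identifies a pattern (treated households
# harvested more, on average), but it says nothing about mechanism -
# whether the effect ran through seed quality, labor allocation, risk
# perception, or something else. That explanatory step has to come from
# contextual, often qualitative, work.
```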

Coming soon: Part 2 – Qualitative research and the interpretation of empirical data

OK, two posts for today, because I can’t help myself. Yeah, I am a social scientist. Which means that people either think I run controlled experiments on various populations (an idea that freaks me out)*, or they think that I have no method to my research at all – I just sort of run around, talk to a few people until I get bored or run out of money, and then come back and write it up.

Of course, both views are crap.  Good social science is founded on rigorous fieldwork and data whose validity can be verified.  How one collects that data, and verifies that validity, varies – it depends on what you are studying.  For whatever reason, though, people have a hard time understanding this.  Quick story: a former chair of my department, during a debate about field methods, actually once asked me if it was really possible to teach someone to do interviews and participant observation.  My response: “I didn’t pop out of the womb able to do this, you know.”  End of discussion, thankfully.

But now I have found someone who has written this up nicely – Wronging Rights (absolutely hilarious and totally awful at the same time – just go read for a bit and then feel bad about yourself for laughing.  Everyone does) has a great post on the subject that links to a series of even better posts at Texas in Africa that cover it (see the Wronging Rights post for links to the relevant Texas in Africa posts).

Social scientists, get to reading.  Journalists, read this and understand why you are not social scientists.  Especially you, Thomas Friedman.  And the rest of you . . . never, ever ask me if you can teach someone to do social science . . .

*controlled experiment: what, am I supposed to pick two identical villages (no such thing), and then start to work with one village while studiously ignoring the other no matter what happens to that community (e.g., drought, food insecurity, disease, what have you), because I need to preserve the integrity of my control group?  There are other ways to establish the validity of one’s results . . .