Entries tagged with “qualitative research”.
Sun 24 Jul 2016
Unsolicited publishing advice/reviewing rant to follow. Brace yourselves.
When writing an article based on the quantitative analysis of a phenomenon, whatever it may be and however novel your analysis, you are not absolved from reading/understanding the conceptual literature (however qualitative) addressing that phenomenon. Sure, you might be using a larger dataset than ever used before. Certainly, the previous literature might have been case-study based, and therefore difficult to generalize. But that doesn’t give you a pass to just ignore that existing literature.
- That literature establishes the meanings of the concepts you are measuring/testing
- That literature captures the current state of knowledge on those concepts
- Often, that literature (if qualitative, especially if ethnographic) can get at explanations for the phenomenon that cannot be had through quantitative methods alone
If you ignore this literature:
- You’ll just ask questions that have already been answered. Everybody hates that, especially time-constrained reviewers who already know the answers to your questions because they actually have read/contributed to the literature you ignored.
- You’ll likely end up with results that don’t make sense, and with no means of explaining or even addressing them. Editors and reviewers hate that, too.
- Your results, even if they appear to be statistically significant, will be crap. I don’t care how sophisticated your quantitative analysis is, or how innovative your tools might be, you are shoving crap into a very innovative, sophisticated tool, which means that all you’ll get out the other end is crap. Reviewers hate crap. Editors hate crap. And your crap is probably not actionable (and really shouldn’t be), so nobody outside academia will like your crap.
Please don’t generate more crap. There is plenty around.
Finally, a note on professionalism and your career: citing around people who have worked on the phenomenon you are investigating, in order to claim a particular field of knowledge as your own, is awful intellectual practice that, beyond needlessly slowing the pace of innovation in the field in question, will never work…because editors will send your article to the very people you are not citing for review. And they will wreck you.
Mon 8 Jul 2013
Ok, so that title was meant to goad my fellow anthropologists, but before everyone freaks out, let me explain what I mean. The best anthropology, to quote Marshall Sahlins, “consists of making the apparently wild thought of others logically compelling in their own cultural settings and intellectually revealing of the human condition.” This is, of course, not bound by time. Understanding the thought of others, wherever and whenever it occurs, helps to illuminate the human condition. In that sense, ethnographies are forever.
However, in the context of development and climate change, ethnography has potential value beyond this very broad goal. The understandings of human behavior produced through ethnographic research are critical to the achievement of the most noble and progressive goals of development*. As I have argued time and again, we understand far less about what those in the Global South are doing than we think, and I wrote a book highlighting how our assumptions about life in such places are a) mostly incorrect and b) potentially very dangerous to the long-term well-being of everyone on Earth. To correct this problem, development research, design, and monitoring and evaluation all need much, much more engagement with qualitative research, including ethnographic work. Such work brings a richness to our understanding of other people, and lives in other places, that is invaluable to the design of progressive programs and projects that meet the actual (as opposed to assumed) needs of the global poor now and in the future.
As I see it, the need for ethnographic work in development presents two significant problems. The first, which I have discussed before, is the dearth of such work in the world. Everyone seems to think the world is crawling with anthropologists and human geographers who do this sort of work, but how many books and dissertations are completed each year? A thousand? Less? Compare that to the two billion (or more) poor people living in low-income countries (and that leaves aside the billion or so very poor that Andy Sumner has identified as living in middle-income countries). A thousand books for at least two billion people? No problem, it just means that each book or dissertation has to cover the detailed experiences, motivations, and emotions of two million people. I mean, sure, the typical ethnography addresses an N that ranges from a half dozen to communities of a few hundred, but surely we can just adjust the scale…
OK, so there is a huge shortage of this work, and we need much, much more of it. Well, the good news is that people have been doing this sort of work for a long time. Granted, the underlying assumptions about other people have shifted over time (“scientific racism” was pretty much the norm back in the first half of the 20th Century), but surely the observations of human behavior and thought might serve to fill the gaps from which we currently suffer, right? After all, if a thousand people a year knocked out a book or dissertation over the past hundred years, surely our coverage will improve. Right?
Well, maybe not. Ethnographies describe a place and a time, and most of the Global South is changing very, very rapidly. Indeed, it has been changing for a while, but of late the pace of change seems to be accelerating (again, see Sumner’s work on the New Bottom Billion). Things change so quickly, and can change so pervasively, that I wonder how long it takes for many of the fundamental observations about life and thought that populate ethnographies to become historical relics that tell us a great deal about a bygone era, but do not reflect present realities. For example, in my work in Ghana, I drew upon some of the very few ethnographies of the Akan, written during the colonial era. These were useful for the archaeological component of my work, as they helped me to contextualize artifacts I was recovering from the time of those ethnographies. But their descriptions of economic practice, local politics, social roles, and livelihoods really had very little to do with life in Ghana’s Central Region in the late 1990s. In terms of their utility for interpreting contemporary life among the Akan, they had, for all intents and purposes, expired.
So, the questions I pose here:
1) How do we know when an ethnography has expired? Is it expired when any aspect of the ethnography is no longer true, or when a majority of its observations no longer hold?
2) Whatever standard we might hold them to, how long does it take to reach that standard? Five years? Ten years? Thus far, my work from 2001 in Ghana seems to be holding, but things are wobbling a bit. It is possible that a permanent shift in livelihoods took place in 2006 (I need to examine this), which would invalidate the utility of my earlier work for project design in this area.
These are questions worth debating. If we are to bring more qualitative, ethnographic work to the table in development, we have to find ways to improve our coverage of the world and our ability to assess the resources from which we might draw.
*I know some people think that “noble” and “progressive” are terms that cannot be applied to development. I’m not going to take up that debate here.
Mon 4 Feb 2013
I have a confession. For a long time now I have found myself befuddled by those who claim to have identified the causes behind observed outcomes in social research via the quantitative analysis of (relatively) large datasets (see posts here, here, and here). For a while, I thought I was seeing the all-too-common confusion of correlation and causation…except that a lot of smart, talented people seemed to be confusing correlation with causation. This struck me as unlikely.
Then, the other day in seminar (I was covering for a colleague in our department’s “Contemporary Approaches to Geography” graduate seminar, discussing the long history of environmental determinism within and beyond the discipline), I found myself in a similar discussion related to explanation…and I think I figured out what has been going on. The remote sensing and GIS students in the course, all of whom are extraordinarily well-trained in quantitative methods, got to thinking about how to determine if, in fact, the environment was “causing” a particular behavior*. In the course of this discussion, I realized that what they meant by “cause” was simple (I will now oversimplify): when you can rule out/control for the influence of all other possible factors, you can say that factor X caused event Y to happen. Indeed, this does establish a causal link. So, I finally get what everyone was saying when they said that, via well-constructed regressions, etc., one can establish causality.
So it turns out I was wrong…sort of. You see, I wasn’t really worried about causality…I was worried about explanation. My point was that the information you would get from a quantitative exercise designed to establish causal relationships isn’t enough to support rigorous project and program design. Just because you know that the construction of a borehole in a village caused girl-child school attendance to increase in that village doesn’t mean you know HOW the borehole caused this change in school attendance to happen. If you cannot rigorously explain this relationship, you don’t understand the mechanism by which the borehole caused the change in attendance, and therefore you don’t really understand the relationship. In the “more pure” biophysical sciences**, this isn’t that much of a problem because there are known rules that particles, molecules, compounds, and energy obey, and therefore under controlled conditions one can often infer from the set of possible actors and actions defined by these rules what the causal mechanism is.
But when we study people it is never that simple. The very act of observing people’s behaviors causes shifts in that behavior, making observation at best a partial account of events. Interview data are limited by the willingness of the interviewee to talk, and the appropriateness of the questions being asked – many times I’ve had to return to an interviewee to ask a question that became evident later, and said “why didn’t you tell me this before?” (to which they answer, quite rightly, with something to the effect of “you didn’t ask”). The causes of observed human behavior are staggeringly complex when we get down to the real scales at which decisions are made – the community, household/family, and individual. Decisions may vary by time of the year, or time of day, and by the combination of gender, age, ethnicity, religion, and any other social markers that the group/individual chooses to mobilize at that time. In short, just because we see borehole construction cause increases in girl-child school attendance over and over in several places, or even the same place, doesn’t mean that the explanatory mechanism between the borehole and attendance is the same at all times.
Understanding that X caused Y is lovely, but in development it is only a small fraction of the battle. Without understanding how access to a new borehole resulted in increased girl-child school attendance, we cannot scale up borehole construction in the context of education programming and expect to see the same results. Further, if we do such a scale-up, and don’t get the same results, we won’t have any idea why. So there is causality (X caused Y to happen) and there are causal mechanisms (X caused Y to happen via Z – where Z is likely a complex, locally/temporally specific alignment of factors).
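The causality-versus-mechanism distinction above can be sketched with a toy simulation. Everything here is hypothetical, the village data, the “wealth” confounder, and the effect sizes are invented for illustration and are not drawn from any study discussed in this post. The point is that a well-specified regression can recover *that* a borehole raises attendance, while remaining completely silent on *how*:

```python
import numpy as np

# Hypothetical simulated villages: a borehole raises girl-child school
# attendance, while household wealth confounds the picture by affecting
# both borehole placement and attendance.
rng = np.random.default_rng(0)
n = 5000
wealth = rng.normal(0.0, 1.0, n)                              # unobserved-ish confounder
borehole = (wealth + rng.normal(0.0, 1.0, n) > 0).astype(float)  # wealthier villages more likely to get one
attendance = 0.5 * borehole + 0.8 * wealth + rng.normal(0.0, 1.0, n)  # true borehole effect = 0.5

# Naive regression (intercept + borehole only): the borehole coefficient
# absorbs the wealth effect and overstates the causal impact.
X_naive = np.column_stack([np.ones(n), borehole])
b_naive = np.linalg.lstsq(X_naive, attendance, rcond=None)[0]

# Controlled regression (intercept + borehole + wealth): with the confounder
# ruled out, the coefficient lands near the true effect of 0.5.
X_ctrl = np.column_stack([np.ones(n), borehole, wealth])
b_ctrl = np.linalg.lstsq(X_ctrl, attendance, rcond=None)[0]

# Note what the controlled estimate does NOT tell us: whether attendance
# rose because girls spend less time fetching water, because health
# improved, or via some other locally specific mechanism. The regression
# establishes X-caused-Y; the Z in "X caused Y via Z" is invisible to it.
```

Controlling for the confounder is exactly the “rule out/control for the influence of all other possible factors” move the students described, and it does establish the causal link, but nothing in the output distinguishes between competing mechanisms, which is where the qualitative work comes in.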
Unfortunately, when I look at much quantitative development research, especially in development economics, I see a lot of causality, but very little work on causal mechanisms that get us to explanation. There is a lot of story time, “that pivot from the quantitative finding to the speculative explanation.” In short, we might be programming development and aid dollars based upon evidence, but much of the time that evidence only gets us part of the way to what we really need to know to really inform program and project design.
This problem is avoidable – it does not represent the limits of our ability to understand the world. There is one obvious way to get at those mechanisms – serious, qualitative fieldwork. We need to be building research and policy teams where ethnographers and other qualitative social scientists learn to respect the methods and findings of their quantitative brethren such that they can target qualitative methods at illuminating the mechanisms driving robust causal relationships. At the same time, the quantitative researchers on these teams will have to accept that they have only partially explained what we need to know when they have established causality through their methods, and that qualitative research can carry their findings into the realm of implementation.
The bad news for everyone…for this to happen, you are going to have to pick your heads up out of your (sub)disciplinary foxholes and start reading across disciplines in your area of interest. Everyone talks a good game about this, but when you read what keeps getting published, it is clear that cross-reading is not happening. Seriously, the number of times I have seen people in one field touting their “new discoveries” about human behavior that are already common conversation in other disciplines is embarrassing…or at least it should be to the authors. But right now there is no shame in this sort of thing, because most folks (including peer reviewers) don’t read outside their disciplines, and therefore have no idea how absurd these claims of discovery really are. As a result, development studies gives away its natural interdisciplinary advantage and returns to the problematic structure of academic knowledge and incentives, which not only enable, but indeed promote narrowly disciplinary reading and writing.
Development donors, I need a favor. I need you to put a little research money on the table to learn about whatever it is you want to learn about. But when you do, I want you to demand it be published in a multidisciplinary development-focused journal. In fact, please start doing this for all of your research-related money. People will still pursue your money, as the shrinking pool of research dollars is driving academia into your arms. Administrators like grant and contract money, and so many academics are now being rewarded for bringing in grants and contracts from non-traditional sources (this is your carrot). Because you hold the carrot, you can draw people in and then use “the stick” inherent in the terms of the grant/contract to demand cross-disciplinary publishing that might start to leverage change in academia. You all hold the purse, so you can call the tune…
*Spoiler alert: you can’t. Well, you probably can if 1) you pin the behavior you want to explain down to something extraordinarily narrow, 2) you can limit the environmental effect in question to a single independent biophysical process (good luck with that), and 3) you limit your effort to a few people in a single place. But at that point, the whole reason for understanding the environmental determinant of that behavior starts to go out the window, as it would clearly not be generalizable beyond the study. Trust me, geography has been beating its head against this particular wall for a century or more, and we’ve buried the idea. Learn from our mistakes.
**by “more pure” I am thinking about those branches of physics, chemistry, and biology in which lab conditions can control for many factors. As soon as you get into field sciences, or start asking bigger questions, complexity sets in and things like causality get muddied in the manner I discuss above…just ask an ecologist.
This paper is currently available for review at The Winnower.
Tue 9 Aug 2011
Mike Hulme has an article in the July issue of Nature Climate Change titled “Meet the humanities” [paywalled], in which he argues that “An introduction needs to be made between the rich cultural knowledge of social studies and the natural sciences.” Overall, I like this article – Hulme understands the social science side of things, not least through his own research and his work as editor of Global Environmental Change, one of the most influential journals on the human dimensions of global change*. Critically, he lays out how, even under current efforts to include a wider range of disciplines in major climate assessments, the conversation has been dominated for so long by the biophysical sciences and economics that it is difficult for other voices to break in:
policy discussions have become “improving climate predictions” and “creating new economic policy instruments”; not “learning from the myths of indigenous cultures” or “re-thinking the value of consumption.”
Hulme is not arguing that we are wrong to be trying to improve climate predictions or develop new economic policy instruments – instead, he is subtly asking if these are the right tools for the job of addressing climate change and its impacts. My entire research agenda is one of unearthing a greater understanding of why people do what they do to make a living, how they decide what to do when their circumstances change, and what the outcomes of those decisions are for their long-term well being. Like Hulme, I am persistently surprised at the relative dearth of work on this subject – especially because the longer I work on issues of adaptation and livelihoods, the more impressed I am with the capacity of communities to adjust to new circumstances, and the less impressed I am with anyone’s ability to predictably (and productively) intervene in these adjustments.
This point gets me to my motivation for this post. Hulme could not cover everything in his short commentary, but I felt it important to identify where a qualitative social science perspective can make an immediate impact on how we think about adaptation (which really is about how we think about development, I think). I remain amazed that so many working in development fail to grasp that there is no such thing as a completely apolitical, purely technical intervention. For example, in development we all too often assume that a well is just a well – that it is a technical intervention that delivers water to people. However, a well is highly political – it reshapes some people’s lives, alters labor regimes, could empower women (or be used as an excuse to extract more of their labor on farms, etc.) – all of this is contextual, and has everything to do with social relations and social power. So, we can introduce the technology of a well . . . but the idea and meaning of a well cannot be introduced in the same manner – these are produced locally, through local lenses. It is this basic failure of understanding that lies at the heart of so many failed development projects that passed technical review and various compliance reviews: they were envisioned as neutral and technical, and were probably very well designed in those arenas. However, these project designers gave little concern to the contextual, local social processes that would shape the use and outcomes of the intervention, and the result was lots of “surprise” outcomes.
When we start to approach these issues from a qualitative social scientific standpoint, or even a humanities standpoint (Hulme conflates these in his piece, I have no idea why. They are not the same), the inherent politics of development become inescapable. This was the point behind my article “The place of stories in development: creating spaces for participation through narrative analysis.” In that article, I introduce the story I used to open Delivering Development to illustrate how our lived experience of development often plays out in ways best understood as narratives, “efforts to present information as a sequence of connected events with some sort of structural coherence, transforming ‘the real into an object of desire through a formal coherence and a moral order that the real lacks.’” These narratives emerge in the stories we are told and that we overhear in the course of our fieldwork, but rarely make it into our articles or reports (though they do show up on a few fantastic aid blogs, like Shotgun Shack and Tales from the Hood). They become local color to personal stories, not sources of information that reveal the politics of our development efforts (though read the two aforementioned blogs for serious counterpoints).
In my article, I demonstrated how using the concept of narrative, drawn from the humanities, has allowed me to identify moments in which I am placed into a plot, a story of development and experience not of my making:
In this narrative [“the white man is so clever,” a phrase I heard a lot during fieldwork], I was cast as the expert, one who had knowledge and resources that could improve their lives if only I would share it with them. [The community] cast themselves in the role of recipients of this knowledge, but not participants in its formation. This narrative has been noted time and again in development studies (and post-colonial studies), and in the era of participation we are all trained to subvert it when we see it emerge in the work of development agencies, governments, and NGOs. However, we are less trained to look for its construction by those living in the Global South. In short, we are not trained to look for the ways in which others emplot us.
The idea of narrative is useful not only for identifying when weird neocolonial moments crop up, but also for destabilizing those narratives – what I call co-authoring. For example, when I returned to the site of my dissertation fieldwork a few years later, I found that my new position as a (very junior) professor created a new set of problems:
This new identity greatly hindered my first efforts at fieldwork after taking this job, as several farmers openly expected me to tell them what to plant and how to plant it. I was able to decentre this narrative when, after one farmer suggested that I should be telling him what to plant instead of asking him about his practices, I asked him ‘Do I look like a farmer?’ He paused, admitted that I did not, and then started laughing. This intervention did not completely deconstruct his narrative of white/developed and black/developing, or my emplotment in that narrative. I was still an expert, just not about farming. This created a space for him to speak freely to me about agriculture in the community, while still maintaining a belief in me as the expert.
Certainly, this is not a perfect outcome. But this is a lot better than the relationship that would have developed without an awareness of this emerging narrative, and my efforts to co-author that narrative. Long and short, the humanities have a lot to offer both studies of climate change impacts and development – if we can bring ourselves to start taking things like stories seriously as sources of data. As Hulme notes, this is not going to be an easy thing to do – there is a lot of inertia in both development and climate change studies. But changes are coming, and I for one plan to leverage them to improve our understandings of what is happening in the world as a result of our development efforts, climate change, global markets, and any number of other factors that impact life along globalization’s shoreline – and to help co-author different, and hopefully better, outcomes than what has come before.
*full disclosure: I’ve published in Global Environmental Change, and Hulme was one of the editors in charge of my article.
Thu 4 Aug 2011
Posted by Ed under Africa, Delivering Development, development, research
Savings is a social choice, too . . .
Marc Bellemare’s blog pointed me to an interesting paper by Pascaline Dupas and Jonathan Robinson titled “Why Don’t the Poor Save More? Evidence from Health Savings Experiments.” The paper takes a page from the RCT4D literature to test some different tools for savings in four Kenyan villages. I’m not going to wade into the details of the paper or its findings here (they find some tools to be more effective than others at promoting savings for health expenditures), because they are not what really caught me about this paper. Instead, what struck me was the absence of a serious consideration of “the social” in the framing of the questions asked and the results. Dupas and Robinson expected three features to impact health savings: adequate storage facilities/technology, the ability to earmark funds, and the level of social commitment of the participant. The social context of savings (or, more accurately, barriers to savings) is treated in what I must say is a terribly dismissive way [emphases are mine]:
a secure storage technology can enable individuals to avoid carrying loose cash on their person and thus allow people to keep some physical distance between themselves and their money. This may make it easier to resist temptations, to borrow the terminology in Banerjee and Mullainathan (2010), or unplanned expenditures, as many of our respondents call them. While these unplanned expenditures include luxury items such as treats, another important category among such unplanned expenditures are transfers to others.
A storage technology can increase the mental costs associated with unplanned expenditures, thereby reducing such expenditures. Indeed, if people use the storage technology to save towards a specific goal, such as a health goal in our study, people may consider the money saved as unavailable for purposes other than the specific goal – this is what Thaler (1990) coined mental accounting. By enabling such mental accounting, a designated storage place may give people the strength to resist frivolous expenditures as well as pressure to share with others, including their spouse.
I have seen many cases of unplanned expenditures to others in my fieldwork. Indeed, my village-based field crews in Ghana used to ask for payment on as infrequent a basis as possible to avoid exactly these sorts of expenditures. They would plan for large needed purchases, work until they had earned enough for that purchase, then take payment and immediately make the purchase, making their income illiquid before family members could call upon them and ask for loans or handouts.
However, the phrasing of Dupas and Robinson strikes the anthropologist/geographer in me as dismissive. These expenses are seen as “frivolous”, things that should be “resisted”. The authors never consider the social context of these expenditures – why people agree to make them in the first place. There seems to be an implicit assumption here that people don’t know how to manage their money without the introduction of new tools, and that is not at all what I have seen (albeit in contexts other than Kenya). Instead, I saw these expenditures as part of a much larger web of social relations that implicates everything from social status to gender roles – in this context, the choice to give out money instead of saving it made much more sense.
In short, it seems to me that Dupas and Robinson are treating these savings technologies as apolitical, purely technical interventions. However, introducing new forms of savings also intervenes in social relations at scales ranging from the household to the extended family to the community. Thus, the uptake of these forms of savings will be greatly affected by contextual factors that seem to have been ignored here. Further, the durability of the behavioral changes documented in this study might be much better predicted and understood – from my perspective, the declining use of these technologies over the 33-month scope of the project was completely predictable (the decline, that is, not the size of the decline). Just because a new technology enables savings that might result in a greater standard of living for the individual or household does not mean that the technology will be seen as desirable – instead, that standard of living must also work within existing social roles and relations if these new behaviors are to endure. Absent attention to this social context, we cannot really explain the declining use of these technologies over time . . . yet development is, to me, about catalyzing enduring change. While this study shows that the introduction of these technologies has at least a short-term transformative effect on savings behavior, I’m not convinced this study does much to advance our understanding of how to catalyze changes that will endure.