While development – understood broadly as social/economic/political change that somehow brings about a change in people's quality of life – generally entails changes in behavior, conversations about "behavior change" in development obscure important political and ethical issues around this subject, putting development programs and projects, and, worse, the people those programs and projects are meant to help, at risk.
We need to return to a long-standing conversation about who gets to decide what behaviors need changing. Most contemporary conversations about behavior change invoke simple public health examples that obscure the politics of behavior change (such as this recent New York Times Opinionator piece). This piece appears to address the community and household politics of change (via peer pressure), but completely ignores the fact that every intervention mentioned was introduced by someone outside these communities. This is easy to ignore because handwashing or the use of chlorine in drinking water clearly reduces morbidity, nobody benefits from such morbidity, and addressing the causes of that morbidity requires interventions that engage knowledge and technology that, while well-established, were created someplace else.
But if we open up this conversation to other sorts of examples, the picture gets much more complicated. Take, for example, agricultural behaviors. An awful lot of food security/agricultural development programming these days discusses behavior change, ranging from what crops are grown to how farmers engage with markets. Here, the benefits of behavior change are less clear, and less evenly distributed through the population. Who decides what should be grown, and on what basis? Is improved yield or increased income enough justification to "change behaviors"? Such arguments presume shockingly simple rationales for observed behaviors, such as yield maximization, and often implicitly assume that peasant farmers in the Global South lack the information and understandings that would produce such yields, thus requiring "education" to make better decisions. As I've argued time and again, and demonstrated empirically several times, most livelihoods decisions are a complex mix of politics, local environment, economy, and social issues that these farmers weigh in the context of surprisingly detailed information (see this post or my book for a discussion of farm allocation in Ghanaian households that illustrates this point). In short, when we start to talk about changing people's behaviors, we often have no idea what it is that we are changing.
The fact that we have little to no understanding of the basis on which observed decisions are made is a big, totally undiscussed problem for anyone interested in behavior change. In development, we design programs and projects based on presumptions about people's motivations, but those presumptions are usually anchored in our own experiences and perceptions – which are quite often different from those of the people with whom we work in the Global South (see the discussion of WEIRD data in psychology, for example here). When we don't know why people are doing the things they do, we cannot understand the opportunities and challenges that come with those activities/behaviors. This allows an unexamined bias against the intelligence and experience of the global poor to enter and lurk behind this conversation.
Such bias isn't just politically and ethically problematic – it risks real programmatic disasters. For example, when we perceive "inefficiency" on many African farms, we are often misinterpreting hedging behaviors necessary to manage worst-case scenarios in a setting where there are no safety nets. Erasing such behaviors in the name of efficiency (which will increase yields or incomes) can produce better outcomes…until the situation against which the farmers were hedged arises. Then, without the hedge, all hell can break loose. Among the rural agricultural communities in which I have been working for more than 15 years, such hedges typically address climate and/or market variability, which produce extremes at frequent, if irregular, intervals. Stripping the hedges from these systems presumes that the good years will at least compensate for the bad…a dangerous assumption based far more on hope or optimism than on evidence in most places where these projects are designed and implemented. James Scott's book The Art of Not Being Governed provides examples of agrarian populations that fled the state in the face of "modernization" efforts not because they were foolish or backward, but because they saw such programs as introducing unacceptable risks into their lives (see also this post for a similar discussion in the context of food security).
This is why my lab uses an approach (on a number of projects ranging from climate services evaluation and design to disaster risk reduction) that starts from the other direction – we begin by identifying and explaining particular behaviors relevant to the challenge, issue, or intervention at hand, and then start thinking about what kinds of behavioral change are possible and acceptable to the people with whom we work. We believe that this is both more effective (as we actually identify the rationales for observed behaviors before intervening) and safer (as we are less likely to design/condone interventions that increase vulnerability) than development programming based on presumption.
This is not to say that we should simply valorize all existing behaviors in the Global South. There are inefficiencies out there that could be reduced. There are things like handwashing that are simple and important. Sometimes farmers can change their practices in small ways that do not entail big shifts in risk or vulnerability. Our approach to project design and assessment helps to identify just such situations. But on the whole, we need to think much more critically about what we are assuming when we insist on a particular behavior change, and then replace those assumptions with information. Until we do, discussions of behavior change will run the risk of uncritically imposing external values and assumptions on otherwise coherent systems, producing greater risk and vulnerability than existed before. Nobody could call that development.