Wed 29 May 2013
I’ve just spent nearly three weeks in Senegal, working on the design, monitoring, and evaluation of a CCAFS/ANACIM climate services project in the Kaffrine Region. It was a fantastic trip – I spent a good bit of time out in three villages in Kaffrine implementing my livelihoods as governmentality approach (for now called the LAG approach) to gather data that can inform our understanding of what information will impact which behaviors for different members of these communities.
This work also included a week-long team effort to build an approach to monitoring and evaluation for this project that might also yield broader recommendations for M&E of climate services projects in other contexts. The conversations ranged from fascinating to frustrating, but in the process I learned an enormous amount and, I think, gained some clarity on my own thinking about project design, monitoring, and evaluation. For the purposes of this blog, I want to elaborate on one of my long-standing issues in development – the use of panel surveys, or even broad baseline surveys, to design policies and programs.
At best, people seem to assume that the big survey instrument helps us to identify the interesting things that should be explained through detailed work. At worst, people use these instruments to identify issues to be addressed, without any context through which to interpret the patterns in the data. Neither case is actually all that good. Generally, I find the data from these surveys to be disaggregated/aggregated in inappropriate ways, aimed at the wrong issues, and rife with assumptions about the meaning of the patterns in the data that have little to do with what is going on in the real world (see, for example, my article on gendered crops, which was inspired by a total misreading of Ghanaian panel survey data in the literature). This should be of little surprise: the vast bulk of these tools are designed in the abstract – without any prior reference to what is happening on the ground.
What I am arguing here is simple: panel surveys, and indeed any sort of baseline survey, are not an objective, inductive data-gathering process. They are informed by assumptions we all carry with us about causes and effects, and about the motivations for human behavior. As I have said time and again (and demonstrated in my book Delivering Development), in the world of development these assumptions are more often than not incorrect. As a result, we are designing broad survey instruments that ask the wrong questions of the wrong people. The data from these instruments is then interpreted through often-inappropriate lenses. The outcome is serious misunderstandings and misrepresentations of life on globalization’s shoreline. These misunderstandings, however, carry the hallmarks of (social) scientific rigor even as they produce spectacular misrepresentations of the decisions, events, and processes we must grasp if we are to understand, let alone address, the challenges facing the global poor. And we wonder why so many projects and policies produce “surprise” results contrary to expectations and design? These are only surprising because the assumptions that informed them were spectacularly wrong.
This problem is easily addressed, and we are in the process of demonstrating how to do it in Kaffrine. There are baseline surveys of Kaffrine, as well as ongoing surveys of agricultural production by the Senegalese agricultural staff in the region. But none of these is actually tied to any sort of behavioral model for livelihoods or agricultural decision-making. As a result, we can’t rigorously interpret any patterns we might find in the data. So what we are doing in Kaffrine (following the approach I used in my previous work in Ghana) is spending a few weeks establishing a basic understanding of the decision-making of the target population for this particular intervention. We will then refine this understanding by the end of August through a full application of the LAG approach, which we will use to build a coherent, complex understanding of livelihoods decision-making that will define potential pathways of project impact. This, in turn, will shape the design of this program in future communities as it scales out, make sense of the patterns in the existing baseline data and the various agricultural services surveys taking place in the region, and enable us to build simple monitoring tools to check on/measure these pathways of impact as the project moves forward. In short, by putting in two months of serious fieldwork up front, we will design a rigorous project based on evidence for behavioral and livelihoods outcomes. While this will not rule out surprise outcomes (African farmers are some pretty innovative people who always seem to find a new way to use information or tools), I believe that five years from now any surprises will be minor ones within the framework of the project, as opposed to shocks that result in project failure.
Incidentally, the agricultural staff in Kaffrine agree with my reading of the value of their surveys, and are very excited to see what we can add to the interpretation of their data. They are interested enough to provide in-town housing for my graduate student, Tshibangu Kalala, who will be running the LAG approach in Kaffrine until mid-July. Ideally, he’ll break it at its weak points, so that by late July or early August we’ll have something implementable, and by the end of September we should have a working understanding of farmer decision-making that will help us make sense of existing data while informing the design of project scale-up.