Entries tagged with “HURDL”.


Sheila Navalia Onzere

June 19, 1977 – August 31, 2019

It is with a sense of incredible loss that I report the death of Sheila Onzere, HURDL’s research scientist. Sheila died yesterday in Nairobi, Kenya, after a sudden illness. She had been home, taking care of her mother and working some short-term contracts while HURDL waited for longer-term work to come through. The shock is overwhelming. I was messaging with her last week. Multiple members of the HURDL family were messaging with her yesterday morning. We were all talking about projects and plans in a future that now will not happen. None of us know how to process that.

Sheila came to HURDL in September of 2014. The lab had only existed for a little over 18 months when she joined. To that point, I had been the only non-student member of the team, but the amount of work we were doing had ramped up and it was clear we needed another professional to keep things moving. I put out an ad for a research associate, and narrowed the pool to a few candidates. I still remember the Skype interview with her – all the members of HURDL at the time, Kwame Owusu-Daaku, Tshibangu Kalala, and Daniel Abrahams, piled into my office and subjected this poor Kenyan woman, operating on a weak internet connection, to the full HURDL experience – questions followed by digressions followed by jokes followed by nobody listening to me at all. In retrospect, it was an ideal interview, as it presented the most honest picture of HURDL possible – and Sheila took the job. The lab and all its members were much better for it.

Sheila made us a better organization. She brought a Ph.D. in Sociology (Iowa State) to the lab (though this lab full of geographers will always claim her BA in Human Geography from Moi University was the one that counted), and with it substantial experience working with farmers both in the US and in Africa. But as important as her technical and academic skills were, Sheila also brought a sense of responsibility and enthusiasm to the work of the lab. Her willingness to always step in and cover something not only kept HURDL glued together, it also helped establish the ethic that made the lab such a fun and interesting place to work. She was kind, generous, and very funny. Her laugh was infectious, a reward for anyone who could make it emerge. Even her sigh of exasperation (which I elicited plenty of times) was surprisingly kind and gentle. She was unique, perfect for HURDL and all the people who had the fortune to work in it while she was a part of the team.

A while back, commenting on the structure and organization of HURDL, someone told me that it ran more like a family than a formal organization. This was not meant as a compliment, but as a statement identifying an institutional weakness. I disagreed then, and I disagree now. That observation helped me understand what I loved about the lab and the people in it. We were, and are, a family. When the lab came to my house to eat and hang out, it felt like a family dinner. That feeling is what makes the day-to-day of the lab worth it. We’ve lost a family member, and we are mourning like a family. It hurts intensely, but that is because Sheila meant so much to all of us. I would not have it any other way.

 

So, I have news. In August, I will become a Full Professor and Director of the Department of International Development, Community, and Environment at Clark University. It is an honor to be asked to lead a program with such a rich history, at such an exciting time for both it and the larger Clark community. The program uniquely links the various aspects of my research identity within a single department, and further supports those interests through the work of a fantastic Graduate School of Geography, the George Perkins Marsh Institute, and the Graduate School of Management. At a deeply personal level, this also marks a homecoming for me – I grew up in New Hampshire, in a town an hour’s drive from Worcester. My mother is still there, and many friends are still in the region. In short, this was a convergence of factors that was completely unique, and in the end I simply could not pass on this opportunity.

This, of course, means that after twelve years, I will be leaving the University of South Carolina. This was a very difficult decision – there was no push factor that led me to consider the Clark opportunity. Indeed, I was not looking for another job – this one found me. I owe a great deal to USC, the Department of Geography, and the Walker Institute for International and Area Studies. They gave me resources, mentoring, space, networks, support, etc., all of which were integral in building my career. Without two Walker Institute small grants, the fieldwork in 2004 and 2005 that led to so many publications, including Delivering Development, would never have happened. The department facilitated my time at USAID, and the subsequent creation of HURDL. I will always owe a debt to South Carolina and my colleagues here, and I leave a robust institution that is headed in exciting directions.

As I move, so moves HURDL. The lab will take up residence in the Marsh Institute at Clark some time in late summer, assuming my fantastic research associate Sheila Onzere does not finally lose her mind dealing with all of the things I throw at her. But if Sheila is sane, we’ll be open for business and looking for more opportunities and partners very soon!

From my recent post over on HURDLblog, my lab’s group blog, on the challenges of thinking productively about gender and adaptation:

My closing point caused a bit of consternation (I can’t help it – it’s what I do). Basically, I asked the room if the point of paying attention to gender in climate services was to identify the particular needs of men and women, or to identify and address the needs of the most vulnerable. I argued that approaches to gender that treat the categories “men” and “women” as homogenous and essentially linked to particular vulnerabilities might achieve the former, but would do very little to achieve the latter. Mary Thompson and I have produced a study for USAID that illustrates this point empirically. But a number of people in the room got a bit worked up over this point. They felt that I was arguing that gender no longer mattered, and that my presentation marked a retreat from years of work that they and others had put in to get gender to the table in discussions of adaptation and climate services. Nothing could be further from the truth.

Read the full post here.

I’m a big fan of accountability when it comes to aid and development. We should be asking if our interventions have impact, and identifying interventions that are effective means of addressing particular development challenges. Of course, this is a bit like arguing for clean air and clean water. Seriously, who’s going to argue for dirtier water or air? Who really argues for ineffective aid and development spending?

Nobody.

More often than not, discussions of accountability and impact serve only to inflate narrow differences in approach, emphasis, or opinion into full-on “good guys”/“bad guys” arguments, where the “bad guys” are somehow against evaluation, hostile to the effective use of aid dollars, and indeed actively out to hurt the global poor. This serves nothing but particular cults of personality and, in my opinion, squashes really important problems with the accountability/impact agenda in development. And there are major problems with this agenda as it is currently framed – around the belief that we have proven means of measuring what works and how, if only we would just apply those tools.

When we start from this as a foundation, the accountability discussion is narrowed to a rather tepid debate about the application of the right tools to select the right programs. If all we are really talking about is tools, any skepticism toward efforts to account for the impact of aid projects and dollars is easily labeled an exercise in obfuscation, a refusal to “learn what works,” or an example of organizations and individuals captured by their own intellectual inertia. In narrowing the debate to an argument about the willingness of individuals and organizations to apply these tools to their projects, we are closing off discussion of a critical problem in development: we don’t actually know exactly what we are trying to measure.

Look, you can (fairly easily) measure the intended impact of a given project or program if you set things up for monitoring and evaluation at the outset.  Hell, with enough time and money, we can often piece enough data together to do a decent post-hoc evaluation. But both cases assume two things:

1)   The project correctly identified the challenge at hand, and the intervention was actually foundational/central to the needs of the people in question.

This is a pretty weak assumption. I filled up a book arguing that a lot of the things that we assume about life for the global poor are incorrect, and therefore that many of our fundamental assumptions about how to address the needs of the global poor are incorrect. And when much of what we do in development is based on assumptions about people we’ve never met and places we’ve never visited, it is likely that many projects which achieve their intended outcomes are actually doing relatively little for their target populations.

Bad news: this is pretty consistent with the findings of a really large academic literature on development. This is why HURDL focuses so heavily on the implementation of a research approach that defines the challenges of the population as part of its initial fieldwork, and continually revisits and revises those challenges as it sorts out the distinct and differentiated vulnerabilities (for explanation of those terms, see page one of here or here) experienced by various segments of the population.

Simply evaluating a portfolio of projects in terms of their stated goals serves to close off the project cycle into an ever more hermetically sealed, self-referential world in which the needs of the target population recede ever further from design, monitoring, and evaluation. Sure, by introducing that drought-tolerant strain of millet to the region, you helped create a stable source of household food that guards against the impact of climate variability. This project could record high levels of variety uptake, large numbers of farmers trained on the growth of that variety, and even improved annual yields during slight downturns in rain. By all normal project metrics, it would be a success. But if the biggest problem in the area is finding adequate water for household livestock, that millet crop isn’t much good, and may well fail in the first truly dry season because men cannot tend their fields when they have to migrate with their animals in search of water. Thus, the project achieves its goal of making agriculture more “climate smart,” but fails to actually address the main problem in the area. Project indicators will likely capture the first half of this scenario, and totally miss the second half (especially if that really dry year comes after the project cycle is over).

2)   The intended impact was the only impact of the intervention.

If all that we are evaluating is the achievement of the expected goals of a project, we fail to capture the wider set of impacts that any intervention into a complex system will produce. So, for example, an organization might install a borehole in a village in an effort to introduce safe drinking water and therefore lower rates of morbidity associated with water-borne illness. Because this is the goal of the project, monitoring and evaluation will center on identifying who uses the borehole, and their water-borne illness outcomes. And if this intervention fails to lower rates of water-borne illness among borehole users, perhaps because post-pump sanitation issues remain unresolved by this intervention, monitoring and evaluation efforts will likely grade the intervention a failure.

Sure, that new borehole might not have resulted in lowered morbidity from water-borne illness. But what if it radically reduced the amount of time women spent gathering water, time they could then spend on their own economic activities and education…efforts that, in the long term, produced improved household sanitation practices that ended up achieving the original goal of the borehole in an indirect manner? In this case, is the borehole a failure? Well, in one sense, yes – it did not produce the intended outcome in the intended timeframe. But in another sense, it had a constructive impact on the community that, in the much longer term, produced the desired outcome in a manner that is no longer dependent on infrastructure. Calling that a failure is nonsensical.

Nearly every conversation I see about aid accountability and impact suffers from one or both of these problems. These are easy mistakes to make if we assume that we have 1) correctly identified the challenges we should address and 2) know how best to address those challenges. When these assumptions don’t hold up under scrutiny (which is often), we need to rethink what it means to be accountable with aid dollars, and how we identify the impact we do (or do not) have.

What am I getting at? I think we are at a point where we must reframe development interventions away from known technical or social “fixes” for known problems and toward catalysts for change that populations can build upon in locally appropriate, but often unpredictable, ways. The former framing of development is the technocrats’ dream, beautifully embodied in the (failing) Millennium Village Project, just the latest incarnation of Mitchell’s Rule of Experts or Easterly’s White Man’s Burden. The latter requires a radical embrace of complexity and uncertainty that I suspect Ben Ramalingam might support (I’m not sure how Owen Barder would feel about this). I think the real conversation in aid/development accountability and impact is about how to think about these concepts in the context of chaotic, complex systems.

Since returning to academia in August of 2012, I’ve been pretty swamped. Those who follow this blog, or my twitter feed, know that my rate of posting has been way, way down. It’s not that I got bored with social media, or tired of talking about development, humanitarian assistance, and environmental change. I’ve just been swamped. The transition back to academia took much more out of me than I expected, and I took on far, far too much work. The result – a lot of lost sleep, a lapsed social media profile in the virtual world, and a lapsed social life in the real world.

One of the things I’ve been working on is getting and organizing enough support around here to do everything I’m supposed to be doing – that means getting grad students and (coming soon) a research associate/postdoc to help out. Well, we’re about 75% of the way there, and if I wait for 100% I’ll probably never get to introduce you all to HURDL…

HURDL is the Humanitarian Response and Development Lab here at the Department of Geography at the University of South Carolina. It’s also a less-than-subtle wink at my previous career in track and field. HURDL is the academic home for me and several (very smart) grad students, and the institution managing about five different workflows for different donors and implementers.  Basically, we are the qualitative/social science research team for a series of different projects that range from policy development to project design and implementation. Sometimes we are doing traditional academic research. Mostly, we do hybrid work that combines primary research with policy and/or implementation needs. I’m not going to go into huge detail here, because we finally have a lab website up. The site includes pages for our personnel, our projects, our lab-related publications, and some media (still under development). We’ll need to put up a news feed and likely a listing of the talks we give in different places.

Have a look around. I think you’ll have a sense of why I’ve been in a social media cave for a while. Luckily, I am surrounded by really smart, dedicated people, and am in a position to add at least one more staff position soon, so I might actually be back on the blog (and sleeping more than 6 hours a night) again soon!

Let us know what you think – this is just a first cut at the page. We’d love suggestions, comments, whatever you have – we want this to be an effective page, and a digital ambassador for our work…