One of the things I have had the privilege to witness over the past two years is a large donor's movement toward a very serious monitoring and evaluation effort aimed at its own programs. I know that some in the development community, especially in academia, are skeptical of any new initiative that claims to want a better understanding of program impact and to learn from existing programs. But what I saw in practice leads me to believe that this is a completely sincere effort with a lot of philosophical buy-in.
That said, significant barriers lie ahead for monitoring and evaluation in development. I'm not sure that those making evaluation policy fully grasp these barriers, and as a result I see no evidence that anyone is effectively addressing them. Until they are, this sincere effort is likely to underperform, if not run aground.
In this post, I want to point out a huge institutional and structural problem for M&E: the conflict of interest it creates on the implementation side. On one hand, donors are telling people that we need to learn about what works, and that monitoring and evaluation is not meant to be punitive but part of a learning process that helps all of us do our jobs better. On the other hand, budgets at most donors are under pressure, and the message from the top is that development must focus on "what works."

Think about what this means for a mission director or a chief of party. They are told that M&E is about learning, and that failure is now expected and can be tolerated as long as we learn why it occurred, remedy the problem, and prevent it elsewhere in the future. At the same time, they are told that budgets will focus on what works. So if they set up rigorous M&E, they are likely to identify programs that are underperforming (and perhaps learn why)…but there is no guarantee that this learning won't result in those programs being cut, with a commensurate loss of staff and budget. I have yet to see anyone meaningfully address this conflict of interest, and until someone figures out how to do so, there will be significant and creative resistance to the implementation of rigorous M&E.
Any ideas, folks? Surely some of you have seen this at work…
Simply put, the donors will have to decide which matters more: learning what works, and improving on development's 60+ year track record of spotty results with often limited correlation to programs and projects, or maintaining the appearance of efficiency and efficacy by cutting anything that does not seem to work, likely throwing out a lot of babies with the bathwater. I know which one I would choose. Where the donors will land remains unclear. In a politically challenging environment, the pressure to take the latter approach is high, and protecting a learning agenda that will really change how development works will require substantial political courage. That courage exists…but whether it comes to the fore is a different question.