If evidence-informed decision-making is our religion, we fall prey to this deadly sin

By Danielle Doughman


Monitoring, evaluation, and learning (MEL). Sometimes just thinking about it can make us break out in a cold sweat. Nearly everyone I have worked with knows it is important – like eating your veggies – but few translate that knowledge into action and leadership for better MEL. And still fewer consider it part of their own work responsibilities unless “MEL” appears in their job title.

A solid foundation of monitoring and evaluation serves organizations (and their various projects) well, especially in rapidly shifting political contexts, where monitoring is not just a nice-to-have, it’s a must-have. Well-planned MEL that is in place from the start – not as an add-on, a donor check-box, or a one-off, but as part and parcel of implementation – is crucial for knowing what we’re doing right, where we need to shift gears, and which activities are off track or completely missing the mark.

I am no MEL expert, yet I recognize common problems that crop up. Judging from my own experience and observations, MEL tends to sneak up on us, despite best intentions. We get caught up in the absorbing work of implementing a project or program, and then, before you know it, MEL comes back on the radar because – yikes! – it is nearly time for donor reporting or there is an urgent request for ‘impact’ data from the boss.

And my experience is not an isolated one. Action research from 2016 identified common barriers facing South African non-governmental organizations around MEL. Too often, organizations felt MEL to be imposed upon them by donors, too quantitative, or even potentially jeopardizing the very relationships with government officials or community members they worked so hard to cultivate.

The L in MEL might as well stand for “lost” – a lost opportunity. Even when things go exactly as hoped, learning is important to document, share, and discuss. When projects don’t go according to plan, learning and reflection are even more critical so that we might channel learning to recalibrate for the next time.

Evidence-informed decision-making (EIDM) is the common thread across the diversity of African Evidence Network members. We believe in it with an almost religious fervor. That’s why MEL deficits among us are a cardinal sin: we want government decision-makers to use evidence to inform budgeting, programming, and policy-making, and yet sometimes we are guilty of not doing the same – because we already ‘know’ intuitively what works based on our depth of experience, because we lack the time or money, or both.

My guess is that in many organizations, MEL is under-staffed, under-resourced, and under-prioritised at least some of the time. Sometimes an entire organization relies on a single MEL professional and is all too happy to let that person single-handedly shoulder the responsibility. But such arrangements result in missed opportunities for organizational learning and skills sharing: MEL lives with one person and dies when that person leaves. MEL cannot be kept apart from the day-to-day or left to experts or consultants – ideally, all project staff should contribute to MEL and benefit from learning and reflection. Even when staffing and skills are robust, there may be too little time to reflect on and learn from the rich information gathered. The expectation that staff share MEL responsibilities is a culture shift that has to be led from the highest levels of leadership. It’s that important.

People across the continent are innovating new approaches to age-old MEL challenges. For example, some are thinking creatively about how African values such as ubuntu – which embodies interconnectedness, generosity, and compassion – might underpin approaches to MEL. As Zenda Ofir writes, ‘The fundamental idea is not to “indigenise” or “Africanise”; the idea is to improve development.’ Her Made in Africa Evaluation blog series is a worthwhile read, and the related series on the NICE Framework offers concrete steps on how to re-think evaluation. The African Evaluation Association launched its South 2 South Initiative in 2018 to further develop new, better, and collaborative means of evaluation.

A shared infrastructure may be one way to catalyze ubuntu in MEL, to take just one example. One way to engage in MEL for evidence-informed policy-making might be to develop a shared tool that tracks ‘observable and measurable champion traits’, based on ideas from a 2010 paper from The Aspen Institute and the Center for Evaluation Innovation. Traits may range from simple awareness of the importance of EIDM, to promoting EIDM among peers, to actively using evidence for decision-making. The depth and frequency of such traits could lend themselves to quantitative measurement that could contribute meaningfully to MEL.

Because at least some of the champions will be the same people engaged by multiple organizations as a part of the EIDM process – and because the information can be time-consuming to get, especially in cases where there is unreliable or unavailable public information on government processes – there may be appetite for a collaborative MEL platform that avoids duplication, complements the narrow information that a single organization gathers, and potentially provides a longer time frame for tracking evidence use beyond the life of a single project or grant. Such a tool could also organically lead to learning between organizations and create opportunities for EIDM collaboration even beyond MEL. The Center for Global Development’s Evaluation Gap Working Group proposed a related idea of sharing ‘a common infrastructure to carry out functions that are most effectively accomplished jointly’ in its final report. Because EIDM is part art and part science, such a platform may include case studies or stories to bring the numbers to life.

As non-MEL professionals, our MEL skills may never be as proficient as we would like, and One True Method for measuring evidence use may not exist. And that is OK! We should get over our “angst” and get on with it, as Sarah Lucas writes, by measuring what we can – even if it is imperfect, imprecise, or not as comprehensive as might be ideal – while also developing better, culturally relevant, and responsive ways as we go.

With an increasingly shared understanding among government officials that evidence is an essential, expected component of their decision-making, there’s no better time than now to ramp up MEL and share stories of what works, and what doesn’t, in different contexts. As the proverb goes: The best time to plant a tree was 20 years ago. The second best time is now.