By Louis Gerald Maluwa
Reclining on my couch one Friday night, I went through a stack of completed evaluation questionnaires I had collected from a group of respondents. There were over a hundred questionnaires, each containing 52 mixed questions. Now that the data collection task was finalised, all eyes were on me. Stakeholders were anxiously waiting for answers – they wanted to know how well their programme had performed vis-à-vis its intended outcomes over a five-year period.
I went through the participants’ responses, wondering how I would go about analysing these data, let alone organising the findings into a sensible evaluation report. It seemed an impossible task; there were simply too many evaluation questions for this kind of evaluation. In addition, more than half of the questions, smart as they sounded, did not actually speak to the purpose of the evaluation. I grew irritated at the sudden realisation that I would probably not use most of the participants’ responses during data analysis, a situation that left me muttering, “Why did I collect all these data in the first place?” In other words, why had I formulated so many evaluation questions, most of which were not really important?
Wait a minute! Had I even developed all these evaluation questions? Well, most of them had been drafted by the programme implementation team; I had merely endorsed or refined them. Nevertheless, I could not deny that I had found them all striking, because every one of them was relevant to the programme in question. But were they well suited to the purpose of this particular evaluation? I had overlooked that one critical question. No wonder I was finding it difficult to analyse the data at hand: they were too mixed up to address the specific objectives of the evaluation. The most appropriate move, therefore, was to exclude most of the collected data from my analysis, because they proved less relevant to the evaluation at hand. What a waste of resources!
Shaken by this disturbing realisation, I suddenly remembered two very important ‘companions’ to better evaluation – Theories of Change and Logic Models. But what exactly are they?
A Theory of Change (ToC), to start with, is a strategic picture of the multiple interventions required to produce the early and intermediate outcomes that are pre-conditions of reaching an ultimate goal (Harvard). More simply put, a ToC provides a roadmap to get you from one point to another (Centre for Theory of Change). A Logic Model, on the other hand, is a tactical explanation of the process of producing a given outcome through the employment of relevant intervention(s). Basically, it outlines the intervention inputs and activities, the outputs they will produce, and the connections between these outputs and the desired outcomes. Thus, a ToC summarises work at strategic level, while a logic model unpacks the practical implementation of the ToC (Harvard).
To achieve better evaluation, one must first understand the ToC underlying the programme under evaluation. If the programme does not have a formal ToC, the evaluator should assist the relevant stakeholders – by providing technical advice – in developing one. The ToC helps the evaluator understand the desired programme goal and the interventions required to achieve it. Having developed and/or understood the ToC, the second step to improved evaluation is to examine the Logic Model of the intervention under evaluation; usually this intervention is already highlighted in the ToC. Where the intervention does not have a coherent Logic Model, the evaluator should develop one in collaboration with the relevant stakeholders. This Logic Model must include the following components: inputs (i.e. the resources required to execute the intervention); activities that need to be accomplished; outputs (i.e. direct products of the activities); outcomes to be achieved; and the desired impact (i.e. the long-term goal). The Logic Model must also outline the specific indicators for measuring each component (i.e. input indicators, process indicators, output indicators, outcome indicators and impact indicators). For example, if the desired intervention outcome is ‘reduction in poverty’, one possible indicator could be ‘percentage of population living below poverty line’.
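For readers who like to see the structure laid out, the component-and-indicator pairing described above can be sketched as a simple data structure. Apart from the poverty-line indicator quoted in the text, every entry below is an illustrative placeholder, not drawn from any real programme.

```python
# A minimal sketch of a Logic Model as a plain dictionary mapping each
# component to its measurement indicators. Apart from the poverty-line
# indicator from the text, all entries are illustrative placeholders.
logic_model = {
    "inputs":     ["proportion of budget disbursed"],
    "activities": ["share of planned sessions delivered on schedule"],
    "outputs":    ["number of households reached"],
    "outcomes":   ["percentage of population living below poverty line"],
    "impact":     ["change in poverty headcount over five years"],
}

def indicators_for(component):
    """Look up the indicators attached to one Logic Model component."""
    return logic_model[component]
```

Holding the whole model in one place like this makes it easy to check that every evaluation question one drafts traces back to a named indicator.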
After developing a coherent Logic Model, the third step in the evaluation process is to identify the component that needs to be evaluated. Do you want to evaluate the inputs (input evaluation), the activities (process evaluation), the outputs (output evaluation), the outcomes (outcome evaluation), or the impact (impact evaluation)? This step is very important, as it helps the evaluator develop more targeted evaluation questions that are highly relevant to the aspect(s) being evaluated. If one wants to evaluate the inputs of a programme, for instance, one needs to develop evaluation questions that address only the input indicators outlined in the Logic Model. It is only when one starts to evaluate the upper-level components of the Logic Model (i.e. activities > outputs > outcomes > impact) that the number of evaluation questions begins to grow. This is because one needs to include evaluation questions from the lower-level component(s) when evaluating an upper-level component. In other words, an evaluator may not evaluate an upper-level component independently of the lower-level component(s), as effective evaluation of an upper-level component requires an appreciation of those below it.
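The cumulative rule described above – that evaluating an upper-level component also pulls in the lower-level components – can be expressed in a short sketch. The component names come from the Logic Model itself; the function name is my own illustrative choice.

```python
# Logic Model components ordered from lowest to highest level.
LEVELS = ["inputs", "activities", "outputs", "outcomes", "impact"]

def components_in_scope(target):
    """Return the target component together with every lower-level
    component, reflecting the rule that an upper-level evaluation
    must also cover the levels beneath it."""
    return LEVELS[: LEVELS.index(target) + 1]
```

So an input evaluation covers only the inputs, while an outcome evaluation covers inputs, activities, outputs and outcomes – which is exactly why the number of evaluation questions grows as one moves up the model.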
Using the outcome indicator provided earlier (‘percentage of population living below poverty line’), a relevant evaluation question could be, ‘What is your average monthly household income?’ A far more targeted question, right? One is far more likely to develop accurately targeted evaluation questions by using the indicators outlined in a Logic Model as guidelines. And regardless of how many evaluation questions one ends up developing (likely quite a few for the upper-level components of the Logic Model), the chances are that all of them will speak directly to the purpose of the evaluation at hand, making it easy to analyse all the generated data and to write a coherent evaluation report.
The importance of these two companions to analysing and reporting on data hit home for me that Friday night as I turned restlessly on my couch, my eyes glued to my old leaky fridge. Had I used a ToC and a Logic Model, I would have avoided the mostly irrelevant evaluation questions that generated mostly irrelevant data. I learnt my lesson then and have since vowed always to regard ToCs and Logic Models as indispensable guides to better evaluation.