Quality Programming - The art (and science) of setting program targets

 


Once you have a program or project, we assume everything that needs to be done is clear, and clear to everyone.  However, there is never a dull moment for Monitoring and Evaluation nerds. You have all those figures in the project document, and your job is to make sure everyone knows the best way to get to them (achieve them).  That is target setting. For the sake of this blog, I keep monitoring and evaluation together, mainly because it is usually the same individual or team responsible for both roles.

Indicators are what drive monitoring. But targets decide how monitoring will be done, how results will be interpreted, and how success will be celebrated. It is also worth knowing that wrong targets may point to a limited understanding of the intervention or of the target beneficiaries.


We have all been in a place where the job includes everyone asking you to set targets every year. And they even emphasize: targets that are ambitious and realistic.  Usually it ends up with some nice-looking targets, but getting them right requires more than knowledge of Excel. For example, in HIV programs, targets should follow a cascade: general population – coming for HIV testing – testing positive – being enrolled in care (before test and treat) – being retained in care (on ART) – treatment outcomes.  To set targets, you must understand how the different components link to each other, meaning knowledge of the program logic is key.   In the case of HIV programs, pick any country's national HIV/AIDS strategic plan and look at the cascade. In most cases, it will not add up, or the cascade story will not come out clearly.    Factoring referral success rates, retention rates, survival rates and treatment success rates (viral load suppression) into the targets will lead to questions like “what is going on here?”.  The same applies to other programs.
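To make the cascade logic concrete, here is a minimal sketch in Python of how cascade-linked targets might be derived. Every figure and rate in it (population size, testing coverage, positivity, linkage, retention, suppression) is a hypothetical placeholder rather than programme data; the point is only that each target is the product of the step before it and a conversion rate.

```python
# Minimal sketch of cascade-based target setting for an HIV program.
# All figures and rate names are hypothetical placeholders; replace them
# with programme data (surveillance estimates, referral and retention rates).

adult_population = 500_000   # catchment population (assumed)
testing_coverage = 0.60      # share expected to come for HIV testing
positivity_rate = 0.05       # share of those tested who test positive
linkage_to_care = 0.90       # referral success: positive -> enrolled in care
retention_on_art = 0.85      # enrolled -> retained on ART at 12 months
viral_suppression = 0.92     # retained -> virally suppressed

tested = adult_population * testing_coverage
positive = tested * positivity_rate
enrolled = positive * linkage_to_care
retained = enrolled * retention_on_art
suppressed = retained * viral_suppression

for step, value in [
    ("Tested", tested),
    ("Tested positive", positive),
    ("Enrolled in care", enrolled),
    ("Retained on ART", retained),
    ("Virally suppressed", suppressed),
]:
    print(f"{step:>18}: {value:,.0f}")
```

If the targets written in a strategic plan cannot be reproduced by a chain like this, that is usually the first sign the cascade story does not add up.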

Knowledge of the demographic profile of the target community or targeted beneficiaries matters just as much. The sex ratio of the general population is not the same as the sex ratio among, say, unemployed youth in the target community. Much as the general population gives you a guide, you should know the age and sex composition of the program's target population. All reporting will, at the end of the day, ask you how many children (boys and girls) and adults (males and females) you served.  Usually it is not feasible to set targets at a disaggregated level, but where it is necessary to ensure targeted program delivery, make sure program delivery registers give you that information so you can extrapolate it.
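A small sketch of that extrapolation idea, assuming the age and sex shares come from delivery registers; the overall target and the shares below are made-up numbers.

```python
# Sketch: extrapolating disaggregated targets from register proportions.
# The overall target and the age/sex shares are assumptions; in practice the
# shares come from programme delivery registers or census/DHS profiles.

overall_target = 12_000  # total beneficiaries to be reached (assumed)

# Proportions observed in delivery registers (must sum to 1.0)
register_shares = {
    "girls (<18)": 0.22,
    "boys (<18)": 0.20,
    "women (18+)": 0.33,
    "men (18+)": 0.25,
}

assert abs(sum(register_shares.values()) - 1.0) < 1e-9

disaggregated_targets = {
    group: round(overall_target * share)
    for group, share in register_shares.items()
}

for group, target in disaggregated_targets.items():
    print(f"{group:>12}: {target:,}")
```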

In addition to knowledge of how the program is designed to work and how it actually works, it is always important to factor in maturity and decay. Mature programs are likely to have targets that reflect maintenance and sustainability. Limited scale-up means targets will not be increasing exponentially or dropping sharply.  For some programs, certain indicators need to be kept at a certain level for other contributing factors to be successful. For example, you need to make sure “percentage of households with a functional hand-washing facility”, “percentage of caregivers with sufficient knowledge” and “percentage of households using clean water” are maintained at a certain level for child nutrition and/or health indicators to be achieved. Also, there is a time lag between interventions in sanitation and a percentage-point change in health.
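One way to keep an eye on such maintenance indicators is a simple threshold check like the sketch below. The indicator names echo the examples above, and the thresholds and monitoring values are purely illustrative.

```python
# Sketch: checking that "maintenance" indicators stay at their required levels.
# Indicator names, thresholds and current values are illustrative only.

maintenance_thresholds = {
    "households with functional hand-washing facility (%)": 80,
    "caregivers with sufficient knowledge (%)": 75,
    "households using clean water (%)": 85,
}

latest_values = {  # hypothetical monitoring data for the current period
    "households with functional hand-washing facility (%)": 82,
    "caregivers with sufficient knowledge (%)": 70,
    "households using clean water (%)": 86,
}

for indicator, threshold in maintenance_thresholds.items():
    value = latest_values[indicator]
    status = "OK" if value >= threshold else "BELOW MAINTENANCE LEVEL"
    print(f"{indicator}: {value} (target >= {threshold}) -> {status}")
```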

When a program is scaling up, the story is different. The targets set must show the direction and magnitude of change expected, and be realistic based on evidence. Also related to maturity, you need to know the direction of change.  Indicators on service utilization, such as vaccination (as we have seen with COVID), and those that rely on changes in social norms, tend to change slowly in the first years before picking up, sometimes rapidly, in later years.  Look at the contraceptive prevalence rate for Rwanda: it was almost steady between the 2000 and 2005 Demographic and Health Surveys, but it increased rapidly after that, affecting stock levels of commodities. For such indicators, you need to understand and estimate when the rapid increase is likely to take place. Service utilization registers are a good data source for monitoring such indicators.
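For indicators that follow this slow-then-rapid pattern, an S-shaped (logistic) projection is one way to sketch year-by-year targets. The baseline, ceiling, growth rate and inflection year below are all assumptions that would need to be calibrated against survey data such as successive DHS rounds.

```python
# Sketch: projecting an S-shaped uptake curve for a service-utilisation
# indicator (e.g. contraceptive prevalence). All parameters are assumptions.

import math

baseline = 0.10       # prevalence at the start of the programme (assumed)
ceiling = 0.55        # plausible saturation level (assumed)
growth_rate = 0.9     # steepness of the uptake curve (assumed)
inflection_year = 5   # year when the rapid increase is expected to peak (assumed)

def projected_prevalence(year: int) -> float:
    """Logistic curve rising from the baseline toward the ceiling."""
    return baseline + (ceiling - baseline) / (
        1 + math.exp(-growth_rate * (year - inflection_year))
    )

for year in range(0, 11):
    print(f"Year {year:2d}: target prevalence {projected_prevalence(year):.1%}")
```

The practical payoff is knowing roughly when the steep part of the curve arrives, because that is also when commodity stock levels come under pressure.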

Development programs rarely deal with interventions whose impact decays quickly, but that happens when there are issues with implementation fidelity, sufficiency of intensity and reach, mass population movements, or failure to address key determinants or underlying causes. Monitoring is expected to keep on top of these, as they affect the targets. Mass population movements affect the denominator, and any indicator measured using the population as a denominator will show a downward trend.   If the arriving or departing population does not have the same characteristics, then the program will be affected as a whole.
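A tiny worked example of that denominator effect, with made-up figures: coverage falls even though service delivery is unchanged.

```python
# Sketch: how a population influx depresses a coverage indicator even when
# service delivery holds steady. Figures are illustrative only.

children_reached = 9_000     # numerator: children vaccinated this year
resident_children = 10_000   # original denominator
new_arrivals = 3_000         # children arriving through population movement

coverage_before = children_reached / resident_children
coverage_after = children_reached / (resident_children + new_arrivals)

print(f"Coverage before influx: {coverage_before:.0%}")  # 90%
print(f"Coverage after influx:  {coverage_after:.0%}")   # ~69%
```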

Evaluation comes in to look at what has been done.  For any program that has been implemented and monitored correctly, the behavior of the monitoring data will be a flag for evaluation. For example, if the direction of change of an indicator is expected to be a decrease and you observe an increase, then an evaluation should be conducted. If targets set to be achieved in four years are achieved in six months, then something is wrong and an evaluative exercise must be conducted to understand why.  We have all seen the “waves” charts for new COVID-19 cases; the same applies to malaria incidence. They naturally come in waves, and targets for such indicators should never be set on the number of cases. A gradient of the cumulative curve would be a better target (such as flattening the curve).
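To illustrate the point about targeting the gradient of the cumulative curve rather than raw case counts, here is a short sketch with an invented weekly case series; the week-on-week gradient is what a "flatten the curve" style target would track.

```python
# Sketch: tracking the gradient (slope) of a cumulative case curve instead of
# raw case counts. The weekly series is invented for illustration.

weekly_new_cases = [120, 180, 260, 310, 280, 210, 150, 110]  # hypothetical

cumulative = []
running_total = 0
for cases in weekly_new_cases:
    running_total += cases
    cumulative.append(running_total)

# The gradient of the cumulative curve is the new cases added each week;
# a "flatten the curve" target translates into a declining gradient over time.
gradients = [cumulative[0]] + [
    cumulative[i] - cumulative[i - 1] for i in range(1, len(cumulative))
]

for week, (total, grad) in enumerate(zip(cumulative, gradients), start=1):
    trend = "flattening" if week > 1 and grad < gradients[week - 2] else "rising"
    print(f"Week {week}: cumulative {total:4d}, gradient {grad:3d} ({trend})")
```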

 

To conclude, target setting should not be overlooked in planning. Sometimes it can even be a meaningful contribution from your program to the literature on quality programming. It deserves some real thinking.
