Highlights of our Monitoring and Evaluation Practices

November 1, 2011



Measurement, Evaluation and Learning at Mercy Corps

“Both excellence and innovation can only be achieved if we have strong evidence-based information about our current work so that we can constantly evolve FORWARD instead of simply re-inventing the wheel over and over again.” (Mercy Corps team member in response to the question: Why invest in monitoring and evaluation?)

At Mercy Corps, we believe that measurement of program results is a critical success factor on the path to achieving high-quality results. Without data on our programs, we are unable to know what we should be replicating, where we should make course corrections and what learning we should disseminate. In line with the full commitment and focus the agency has placed on impact, particularly over the last couple of years, we're driving measurement, evaluation and learning throughout the agency. It often isn't easy, and the data frequently raises further questions, but the quest for better results in turn drives more rigorous program design and implementation.

Below you will find examples of some successful strategies we are employing to better understand our results.

We’re using some non-traditional approaches.
The Local Empowerment for Peace program in Kenya worked to strengthen the ability of youth to address root causes of the post-election violence and to promote peace and reconciliation at the community level. When evaluating these programs, in addition to using more traditional data collection methodologies, we incorporated two methodologies particularly suited to working with a youth population: participatory video and Most Significant Change.¹ The videos can be watched on YouTube at: http://www.youtube.com/view_play_list?p=A76473A504A7A5EC

We’re winning research grants to take a rigorous look at programs that span multiple countries.
The USAID-funded Evaluation and Assessment of Poverty and Conflict Interventions (EAPC) research project aimed to strengthen our ability to evaluate the impact of programs at the intersection of peace-building and economic development. Over the 18-month life of the project, the Youth and Conflict Management technical team worked with field teams in Ethiopia, Indonesia, and Uganda to: 1) develop indicators and data collection tools; 2) field test these indicators and tools; and 3) begin to assess several theories of change that inform Mercy Corps' programs.

We’re conducting primary research.
We’ve developed an agency research agenda that defines a set of high-priority issues to study in order to provide direction to our learning initiatives. This research agenda reflects the areas in which the agency has prioritized building its thought leadership.
We have recently researched what makes youth prone to engage in violent movements, and what program strategies show the greatest potential to mitigate this risk. The research provides hard evidence on a number of social, political, and economic factors that influence youth propensity towards violence in Kenya and Liberia. The findings shed light on the conditions under which young people’s economic conditions are a major driver of violence, and by extension, where economic incentives for youth are likely to be effective in mitigating conflict. The research brief can be found at: http://www.mercycorps.org/resources/youthEDconflictstudy.

We’re partnering with leading researchers.
In 2010, Mercy Corps partnered with the Feinstein Institute at Tufts University for a livelihoods and conflict analysis in the Shinile Zone, Somali Region of Ethiopia. The underlying question for the analysis was the extent to which aid actors should integrate peace-building and livelihoods programming as part of long-term development strategies in this zone. The analysis found the two to be mutually supportive approaches. Research findings highlighted factors, such as land tenure, livestock development, marketing, education, health, governance, and service delivery, that are integral to economic development and peace-building in a pastoral context. The report can be found at: http://www.mercycorps.org/resources/movingupormovingout.

But it doesn’t stop there. We already see how these investments in measurement, learning and impact are changing practice and establishing Mercy Corps as a thought leader.

Monitoring and evaluation can effect larger changes in society.
A baseline study in Indonesia for a child nutrition project that was intended to support school lunches actually found that intestinal worms (which cause anemia) were the leading cause of malnutrition. So the project changed course: rather than providing school lunches, it began distributing de-worming tablets to combat the resulting anemia. Based on the results of the mid-term evaluation, which showed this intervention produced dramatic decreases in malnutrition rates, we successfully advocated for the government of Indonesia to adopt aspects of the program, and it has since been rolled out by the government in additional regions.

Consistent monitoring and evaluation can improve our programming.
On a support visit to Kosovo, we looked at monitoring data on the gender of beneficiaries and realized we weren't reaching women and youth at the level we had intended. This led to a discussion of how future program designs could explicitly focus more on these groups, and contributed to the development of a concept note for youth and women's participation in civil society and local government.

In Guatemala, the TIERRAS program charted the number and types of land conflicts resolved per month, and regularly adjusted the program according to this data. From this and survey data, the team learned that it needed to focus increasingly on advocacy and public information campaigns, and it tailored subsequent phases of the program accordingly, including increased funding and additional programming components.

As an agency committed to helping improve the lives of the most vulnerable, we have a moral and ethical responsibility to conduct research and collect data so we can make data-based management decisions and determine which interventions are the most effective in these difficult, transitional environments.

¹ Most Significant Change is an approach that collects a series of stories from program participants, which are analyzed in successive rounds by stakeholder groups to surface the most significant or meaningful examples of changes brought about during the program.
