Outcome Mapping

How Do We Evaluate Programmes?

Data Analysis Methods

 

for further information: outcome mapping - virtual learning community

see IDRC manual

 

 


 

I think what is new about their concept of outcome mapping is that they have recognised that:

·       it is more important to achieve changes in the behaviours and attitudes of people, i.e. changes that organisations can attribute to their own intervention or programme. It is therefore more important to measure change in what they call “boundary partners”, the people you work with directly;

·       you cannot easily attribute or measure long-term impacts, because the work of many players contributes to them.

 

What is important is to identify the “need” for such approaches.

You told me something very relevant about the fact that you are not able to keep track of outcomes because of your system … that you are not able to reward performing teams … that you are able to monitor money being spent but … Please clarify these issues properly: one will never understand the novelties (supposed or real) unless one first clarifies what the problem was in the old system.

 

The International Development Research Centre (IDRC) has developed an innovative approach to evaluation. Its outcome mapping approach does not attempt to replace more traditional forms of evaluation, but to supplement them by focusing on related behavioral change.

In short, outcome mapping focuses on one specific type of result: outcomes as behavioral change. As you probably recall, outcomes are defined as changes in the behavior, relationships, activities, or actions of other people, groups, and organizations with whom a program works directly.

Outcomes can be logically linked to a project, program, or policy’s activities, even if those activities are not necessarily their direct cause. When using outcome mapping, the focus is on outcomes rather than on the achievement of development impacts, because impacts are too "downstream" and are the result of many efforts and interventions. To attempt to accurately assess any one organization’s contribution to impact, IDRC argues, is futile. Instead, outcome mapping looks at behaviors to help improve the performance of projects, programs, and policies, providing new tools, techniques, and resources that contribute to the development process. While recognizing the importance of impact as the ultimate goal, outcome mapping can provide the information that programs require to improve their performance.

Boundary partners are the individuals, groups, and organizations who interact with a project, program, or policy, and who may have the most opportunities for influence. Outcome mapping assumes that the boundary partners control change, and that programs, as external agents, only facilitate the process by providing access to new resources, ideas, or opportunities for a certain period of time. By focusing on these behavior changes, outcome mapping supports what practitioners in development have known for some time: that the most successful programs are those that transfer power and responsibility to people acting within the project or program.

The focus of outcome mapping is people. It represents a shift away from assessing the development impact of a project or program and towards describing changes in the way people behave, through their actions and relationships, whether alone or within groups and organizations.

Many programs, especially those focusing on capacity building, can better plan for and assess their contributions to development by focusing on behavior. For example, a program may have the objective to provide communities with access to cleaner water by installing purification filters. With the traditional method of evaluation, the results might be measured by counting the number of filters installed and measuring the changes in the level of contaminants in the water before and after the filters were installed. An outcome mapping approach would focus on behavior. It would start with the premise that water does not remain clean without people being able to maintain its quality over time. The outcomes of the program would then be evaluated by focusing on the behavior of those responsible for water purity: specifically, changes in their acquisition and use of appropriate tools, skills, and knowledge. Outcome mapping would evaluate how people monitor the contaminant levels, change filters, or bring in experts when required.

Outcome mapping does not attempt to replace the more traditional forms of evaluation. Instead, outcome mapping supplements other forms by focusing on behavioral change.

 

Three Stages of Outcome Mapping

Outcome mapping is divided into three stages:

  1. intentional design
  2. outcome and performance monitoring
  3. evaluation planning.

Intentional Design

This first stage, intentional design, helps a program establish consensus on the macro-level changes it will help bring about and plan the strategies it will use. It helps answer four questions:

  1. Why? What is the vision to which the program wants to contribute?
  2. Who? Who are the program’s boundary partners?
  3. What? What are the changes that are being sought?
  4. How? How will the program contribute to the change process?
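The four design questions can be captured as a simple record for planning purposes. Below is a minimal sketch in Python; the class name, field names, and example values are illustrative assumptions for this note, not part of the IDRC method itself:

```python
from dataclasses import dataclass

@dataclass
class IntentionalDesign:
    """Answers to the four intentional-design questions.

    Field names are illustrative; the IDRC manual refers to the vision,
    boundary partners, outcome challenges, and strategies.
    """
    vision: str                         # Why? the change the program contributes to
    boundary_partners: list[str]        # Who? groups the program works with directly
    outcome_challenges: dict[str, str]  # What? desired behavior change per partner
    strategies: dict[str, list[str]]    # How? strategies aimed at each partner

# Hypothetical example based on the water-purification scenario above.
design = IntentionalDesign(
    vision="Communities sustain access to clean water",
    boundary_partners=["Village water committees"],
    outcome_challenges={
        "Village water committees": "Monitor contaminants and maintain filters",
    },
    strategies={
        "Village water committees": ["Training workshops", "Provision of test kits"],
    },
)
```

Keeping the four answers in one structure makes it easy to check that every boundary partner has both an outcome challenge and at least one strategy.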

Outcome and Performance Monitoring

The second stage, outcome and performance monitoring, provides a framework for ongoing monitoring of the program. It sets out ways to monitor the program’s actions and the boundary partners’ progress toward the achievement of outcomes. This framework is based largely on systematic self-assessment. It provides the following data collection tools for elements identified in the intentional design stage:

  1. an "outcome journal"
  2. a "strategy journal"
  3. a "performance journal."

An outcome journal includes regular entries for each boundary partner that the program has identified as a priority. The outcome journal rates progress markers, which articulate the results that the program has helped to achieve.

The rating scale for the progress markers is categorized as one of the following:

  1. expect to see
  2. like to see
  3. love to see.

In addition, each progress marker under these categories is rated as:

  1. low
  2. medium
  3. high.
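The two-level rating above, a category for each progress marker plus a low/medium/high rating, can be sketched as a small data structure. This is a minimal illustration in Python; the function name and the example markers are assumptions, not IDRC's own notation:

```python
# Categories and levels as given in the text.
CATEGORIES = ("expect to see", "like to see", "love to see")
LEVELS = ("low", "medium", "high")

def rate_marker(category: str, level: str) -> dict:
    """Validate and record one progress-marker rating."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    if level not in LEVELS:
        raise ValueError(f"unknown level: {level}")
    return {"category": category, "level": level}

# Hypothetical outcome-journal entries for one boundary partner.
outcome_journal = {
    "Village water committees": [
        rate_marker("expect to see", "high"),  # e.g. attends training sessions
        rate_marker("like to see", "medium"),  # e.g. tests water independently
        rate_marker("love to see", "low"),     # e.g. trains other communities
    ],
}
```

Validating the category and level at entry time keeps journal records consistent across monitoring meetings.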

A strategy journal records data on the strategies being employed to encourage change in the boundary partners. The evaluators fill it out during the program’s regular monitoring meetings. It is used to help determine whether the program is making optimum contributions to the achievement of outcomes, and whether modifications need to be made.

The following are examples of planning and management questions that might be considered during monitoring meetings:

  1. What are we doing well and what should we continue doing?
  2. What are we doing "okay" or badly, and what can we improve?
  3. What strategies or practices do we need to add?
  4. What strategies or practices do we need to give up?
  5. How are we, and how should we be, responding to changes in boundary partners’ behaviour?
  6. Who is responsible? What are the timelines?
  7. Has any issue come up that we need to evaluate in greater depth? What? When? Why? How?
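These monitoring-meeting questions can double as prompts in a strategy-journal template. The sketch below is hypothetical: the prompts paraphrase the list above, and the entry structure is an assumption for illustration:

```python
from datetime import date

# Prompts paraphrased from the monitoring-meeting questions in the text.
STRATEGY_PROMPTS = [
    "What are we doing well and what should we continue doing?",
    "What can we improve?",
    "What strategies or practices do we need to add?",
    "What strategies or practices do we need to give up?",
    "How should we respond to changes in boundary partners' behaviour?",
    "Who is responsible and what are the timelines?",
    "Has any issue come up that needs deeper evaluation?",
]

def new_strategy_entry(meeting_date: date) -> dict:
    """Create a blank strategy-journal entry for one monitoring meeting."""
    return {
        "date": meeting_date.isoformat(),
        "responses": {prompt: "" for prompt in STRATEGY_PROMPTS},
    }

entry = new_strategy_entry(date(2024, 3, 1))
```

A template like this makes successive meetings comparable: each entry answers the same prompts, so changes in strategy can be traced over time.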

A performance journal records data on how the program is operating as an organization to fulfill its mission. Entries in the performance journal are added during the regular monitoring meetings. The journal includes information on the organizational practices employed by those in the program that help it remain relevant, sustainable, and connected to its environment. The entries should not just ask "How well have we done?" but also "How can we improve?"

Evaluation Planning

The third stage, evaluation planning, helps the program identify evaluation priorities and develop an evaluation plan.

Figure 7.2 shows the three stages of outcome mapping, with a detail of each stage, including its steps.

Fig. 7.2: Detail of Three Stages of Outcome Mapping. Source: Earl, Carden, & Smutylo. 2001. p. 4.

You can see an example of outcome mapping, named "Sustainable Coastal Communities: Tools for Building Sustainable Coastal Communities," at the following website:

http://seagrant.gso.uri.edu/scc/index.html

     

 

Outcome Mapping Principles
At IDRC, evaluation is viewed as an integral part of good project and program management. Corporate and program learning and improvement drives evaluation activities, with collegial participation by stakeholders as a key ingredient. IDRC has chosen to use evaluation first as a corporate learning tool, believing this to be the best way to strengthen its accountability function. The following principles, which guide evaluation at IDRC, are embedded in the process of Outcome Mapping and argue for the relevance of evaluation as an integral component of a program's learning system:

 

·      Evaluation is intended to improve program planning and delivery — It contributes to decision making and strategy formulation at all levels. To increase the likelihood of obtaining useful findings, programs are assessed strategically, based on the client's purpose and information needs.

·      Evaluations are designed to lead to action — To be useful, evaluations need to produce relevant, action-oriented findings. This is fostered by sustained involvement and ownership on the part of the client and stakeholders throughout the process.

·      No single, best, generic evaluation method exists — Each case requires tools and methods appropriate to the data that is to be gathered and analyzed, and appropriate to the client's needs. Credible evaluations interlace quantitative and qualitative data from several sources.

·      Evaluations should enlist the participation of relevant stakeholders — Those affected by the outcome of an evaluation have a right to be involved in the process. Their participation will help them to better understand the evaluation's purpose and process, and will promote stakeholder contribution to, and acceptance of, the evaluation results. This increases the likelihood that evaluation findings will be utilized.

·      Evaluation processes should meet standards for ethical research — Participants in the process should be able to act and share information fully without fear that the information they provide could be used against them at a later time.

·      Monitoring and evaluation planning add value at the design stage of a program — They can make the program more efficient and effective by helping to clarify the results to be achieved. Also, knowing what information will be used will allow people to collect it as it becomes available. This will reduce the amount of financial and human resources required and improve the team's ability to report on, and learn from, its experiences.

·      Evaluation should be an asset for those being evaluated — Evaluation can impose a considerable time and resource burden on recipient institutions. Evaluations should generate information that benefits the recipient institution.

·      Evaluation is both science and art — The art of identifying critical issues to be evaluated, organizing them conceptually, and getting the appropriate people to participate in the collection, interpretation, and utilization of the evaluation information is as important as the systematic collection and analysis of reliable data.

·      Evaluations are a means of negotiating different realities — Evaluations provide opportunities for program stakeholders to reconcile their various perspectives or versions of reality.

·      Evaluations should leave behind an increased capacity to use evaluation findings — Organizations need some level of internal evaluation capacity in order to be able to devise, participate in, or utilize evaluations effectively. Exclusive reliance on external expertise can limit an organization's ability to be clear and specific about its goals and to learn and apply lessons. Specific strategies can be built into evaluations that are aimed explicitly at fostering these organizational characteristics.

 

 

 

 


See also:

Definition of change

Elements of Organizational Culture

Organizational and Individual Change

 

Other resources:

ODI: Drivers of change