Common Myths and Misconceptions About Evaluation

How Do We Evaluate Programs?
Myth: We can’t afford an evaluation.

Fact: Rarely does anyone have access to adequate resources for an ideal health communication program, much less an ideal evaluation. Nevertheless, including evaluation as a part of your work yields the practical benefit of telling you how well your program is working and what needs to be changed. With a little creative thinking, some form of useful evaluation can be included in almost any budget.

Myth: Evaluation is too complicated. No one here knows how to do it.

Fact: Many sources of help are available for designing an evaluation. Several pertinent texts are included in the selected readings at the end of this section. If your organization does not have employees with the necessary skills, find help at a nearby university or from someone related to your program (e.g., a board member, a volunteer, or someone from a partner organization). Also, contact an appropriate clearinghouse or Federal agency and ask for evaluation reports on similar programs to use as models. If the program has enough resources, hire a consultant with experience in health communication evaluation. Contact other communication program managers for recommendations.

Myth: Evaluation takes too long.

Fact: Although large, complicated outcome evaluation studies take time to design and analyze (and may require a sufficient time lapse for changes in attitudes or behavior to become clear), other types of evaluation can be conducted in a few weeks or months, or even in as little as a day. A well-planned evaluation can proceed in tandem with program development and implementation activities. Often, evaluation seems excessively time-consuming only because it is left until the end of the program.

Myth: Program evaluation is too risky. What if it shows our funding source (or boss) that we haven’t succeeded?

Fact: A greater problem is having no results at all. A well-designed evaluation will help you measure and understand the results (e.g., if an attitude or a perception did not change, why not?). This information can direct future initiatives and help the public health community learn more about how to communicate effectively. The report should focus on what you have learned from completing the program evaluation.

Myth: We affected only 30 percent of our intended audience. Our program is a failure.

Fact: Affecting 30 percent of the intended audience is a major accomplishment; it looks like a failure only if your program’s objectives were set unrealistically high. Remember to report your results in the context of what health communication programs can reasonably be expected to accomplish. If you think the program has affected a smaller proportion of the intended audience than you wanted, consult with experts (in program planning, communication, or behavioral science) before setting objectives for future programs.

Myth: If our program is working, we should see results very soon.

Fact: Results will vary depending on the program, the issue, and the intended audience. Don’t expect instant results; creating and sustaining change in attitudes, and particularly in behavior or behavioral intentions, often takes time and commitment. Your program may show shorter-term, activity-related results when you conduct your process evaluation; these changes in knowledge, information seeking, and skills may occur sooner than more complex behavioral changes.