
Chapter 8: Evaluation Concepts & Strategies
Introduction
This section offers an overview of some of the most important evaluation strategies you will want to consider in your engagement initiatives. Evaluation at its heart is simple: did what we tried to do work? Why or why not? And how can we improve both the process and the outcomes?
Taking a deeper dive into evaluation, it provides practitioners and researchers alike with:
- The rationale for, and approaches to, evaluation in engagement projects
- An overview of evaluation, including evaluation types and theory
- Guidance on how to develop and implement evaluation plans
- Tools, strategies, and metrics you may want to consider as you develop the evaluation strategies most appropriate to your specific project
- A description of the use of evaluation findings
- Evaluation in the Water for Agriculture context
Role & Purpose of Evaluation in Engagement[1]
Rather than being considered an ‘event’ or ‘instrument’, such as a survey, evaluation is best thought of as an ongoing process that provides information to improve the engagement process and to assess engagement outcomes. This includes how the engagement is affecting participants, partner agencies, and the community, and how it is being influenced by both internal and external factors.
Thus, evaluation is a systematic process for determining the merit, value, or worth of an engagement effort. It should not be conducted simply to prove that a project worked, but also to understand how it worked and how to improve the way it worked.
More specifically, evaluation is an accountability measure of project effectiveness and management as well as a learning tool for engagement efforts, projects, funders, and practitioners.
Practice Tip: Evaluation Goals
It is critical that your evaluation strategies and metrics are directly tied to the goals of your project.
If your community engagement effort is designed to address a specific issue, then your evaluation goal will be to assess the issue-specific outcomes that have occurred as a result of your program’s efforts. Examples might include increasing the acreage of riparian buffers or the number of farmers implementing conservation practices.
If your goal is to enhance local community participation or involvement, your evaluation goals might consider how well the program reached the intended audience, involved residents in decision-making, empowered them to implement strategies on their own, or simply increased community understanding and knowledge about the issue. In either case, deciding ahead of time what’s important to your project – and to your stakeholders – is a critical first step.
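To make the tip concrete, the following is a minimal sketch, in Python, of comparing baseline and follow-up values for the issue-specific metrics mentioned above. All figures and metric names are hypothetical, invented purely for illustration.

```python
# A minimal, illustrative sketch: comparing baseline and follow-up values
# for the issue-specific metrics named above. All numbers are hypothetical.
metrics = {
    "riparian buffer acreage": {"baseline": 120.0, "follow_up": 185.0},
    "farmers using conservation practices": {"baseline": 14, "follow_up": 23},
}

for name, values in metrics.items():
    change = values["follow_up"] - values["baseline"]
    print(f"{name}: {values['baseline']} -> {values['follow_up']} (change: {change:+g})")
```

The same pattern works for participation-focused goals; simply swap in metrics such as attendance counts or the share of residents involved in decision-making.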
Evaluation can be used, among other things, to:
- Gain insight into, and assess, the needs and wants of stakeholders and participants
- Reinforce the purposes and goals of the program, stimulate dialogue, and raise awareness about community or project-related issues
- Improve how things are done, including refining plans for introducing a new practice, determining the extent to which project plans were successful, and improving educational or communication materials
- Decide where to allocate future resources
- Determine the effects of the program on participants’ skill development and on changes in behavior over time, including after the conclusion of the engagement
[1] This chapter draws extensively from Alter, Driver, Frumento, Howard, Shuffstall and Whitmer (2017) Community engagement for collective action: a handbook for practitioners. Invasive Animals CRC, Australia.
Tools & worksheets
A Checklist to Help Focus Your Evaluation
A checklist, created by the CDC, to support you in crafting and assessing specific evaluation questions within a broader program or project evaluation.
Developing Your Evaluation Plan
A guide and worksheet for developing an effective evaluation plan.
Evaluation Questions Checklist for Program Evaluation
The purpose of this checklist is to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions. It identifies characteristics of good evaluation questions, based on the relevant literature and the checklist authors’ experience with evaluation design, implementation, and use.
Additional resources
Developing a Logic Model: Teaching and Training Guide
This guide, written by Ellen Taylor-Powell, PhD, and Ellen Henert, describes logic models and guides the reader through creating and implementing one. It contains many useful worksheets as appendices.
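As a rough companion to the guide, here is a minimal sketch, in Python, of the inputs, activities, outputs, and outcomes chain that a logic model captures. The riparian-buffer program and all of its elements are hypothetical, not drawn from the guide itself.

```python
# A minimal sketch of a logic model's inputs -> activities -> outputs ->
# outcomes chain, using a hypothetical riparian-buffer program.
logic_model = {
    "inputs": ["extension staff time", "grant funding", "GIS maps of streams"],
    "activities": ["landowner workshops", "one-on-one farm visits"],
    "outputs": ["12 workshops delivered", "150 landowners reached"],
    "outcomes_short_term": ["increased knowledge of buffer benefits"],
    "outcomes_medium_term": ["30 landowners plant riparian buffers"],
    "outcomes_long_term": ["reduced nutrient runoff in the watershed"],
}

# Each evaluation question should map onto one link in this chain, e.g.
# counting outputs versus measuring medium-term behavior change.
for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```

Laying the chain out this way makes it easier to check that every evaluation question targets a specific link rather than the program as an undifferentiated whole.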
Phases of Data Analysis
This brief, written by Glenn D. Israel, covers the phases of data analysis for evaluation of an extension program.
Linking Extension Program Design with Evaluation Design for Improved Evaluation
Radhakrishna, R., Chaudhary, A. K., & Tobin, D. (2019). Linking Extension Program Design with Evaluation Design for Improved Evaluation. Journal of Extension, 57(4).
Abstract: We present a framework to help those working in Extension connect program designs with appropriate evaluation designs to improve evaluation. The framework links four distinct Extension program domains—service, facilitation, content transformation, and transformative education—with three types of evaluation design—preexperimental, quasi-experimental, and true experimental. We use examples from Extension contexts to provide detailed information for aligning program design and evaluation design. The framework can be of value to various audiences, including novice evaluators, graduate students, and non-social scientists, involved in carrying out systematic evaluation of Extension programs.
Collecting Evaluation Data: An Overview of Sources and Methods
This brief, written by Ellen Taylor-Powell and Sara Steele, offers an overview of sources and methods in evaluation-related data collection, with a focus on extension.
Capturing Change: Comparing Pretest-Posttest and Retrospective Evaluation Methods
This brief, written by Jessica L. O’Leary and Glenn D. Israel, compares two methods of extension evaluation: “pretest-posttest” and “retrospective”.
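To see the distinction in miniature, here is a minimal sketch, in Python, that computes mean gains under each design using hypothetical Likert-scale responses; consult the brief itself for the authors’ actual comparison and findings.

```python
# A minimal sketch comparing the two designs on hypothetical 1-5
# Likert-scale responses; all scores are invented for illustration.
from statistics import mean

# Pretest-posttest: participants rate their knowledge before the
# program begins and again after it ends.
pretest = [2, 3, 2, 1, 3, 2]
posttest = [4, 4, 3, 3, 5, 4]

# Retrospective ("post-then-pre"): after the program, participants rate
# their current knowledge and, looking back, their knowledge beforehand.
retro_before = [1, 2, 2, 1, 2, 2]
retro_after = [4, 4, 3, 3, 5, 4]

def mean_gain(before, after):
    """Average per-participant change on the survey scale."""
    return mean(a - b for b, a in zip(before, after))

print(f"Pretest-posttest mean gain: {mean_gain(pretest, posttest):.2f}")
print(f"Retrospective mean gain:    {mean_gain(retro_before, retro_after):.2f}")
# Retrospective gains often look larger because pretests are vulnerable to
# response-shift bias: participants may overrate their knowledge before the
# program reveals what they did not know.
```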
Evaluation Models
In this monograph, Daniel L. Stufflebeam reviews the dominant evaluation models used in the United States between 1960 and 1999 and makes the case for which should be carried forward into the 21st century and which should be left behind.
Evaluability Assessment: Examining the Readiness of a Program for Evaluation
“The purpose of this briefing is to introduce program managers to the concept of Evaluability Assessment.”
Developing a Concept of Extension Program Evaluation
This brief offers an introduction to extension program evaluation and the many dimensions of its design and implementation.
Guiding Principles for Evaluators
This brief guide from the American Evaluation Association “can help you identify the basic ethical behavior to expect of yourself and of any evaluator”.
Developing an Effective Evaluation Plan: A Guidebook
Developing an Effective Evaluation Plan. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health; Division of Nutrition, Physical Activity, and Obesity, 2011.
This guidebook, produced by the CDC, provides a framework laying out a six-step process for the decisions and activities involved in conducting an evaluation.