
Chapter 8: Evaluation Concepts & Strategies

Evaluation Types
By ensuring your evaluation goals are carefully aligned with your engagement project goals, you will be best positioned to develop your approach as strategically and effectively as possible. Regardless of your community engagement strategy or project, you will likely have two primary goals driving your evaluation effort:
- To inform the development of your activities and programs (formative evaluation)
- To assess the outcomes and impacts of your activities and programs (summative evaluation)

Formative Evaluation
Formative evaluation is designed to provide information to guide program improvement during the implementation phase. It includes two types: (1) process evaluation, which focuses on assessing whether the engagement processes and activities are achieving the project goals, and (2) developmental evaluation, which is designed to identify any unexpected events and changes that occur during the project and that may influence its conduct or potential outcomes.
For example, in the Water for Agriculture project, the team conducted regular formative evaluation through surveys of participants and brief feedback activities at the conclusion of events. These evaluation activities were intended to assess the extent to which the team’s activities and events were helping to achieve the goals of building trust and relationships as well as moving the group toward their objectives. Based on this feedback, facilitators adapted their approaches to the team’s events and communication practices so they could respond to concerns raised.
Summative Evaluation
Summative evaluation informs judgments about whether the program worked (i.e., whether the goals and objectives were met). Its purpose is to document the outcomes and long-term impacts of engagement practice. Summative evaluation is built upon, and designed to assess, clearly identified criteria, metrics, and evidence. For example, in the Water for Agriculture project, interviews were conducted at the conclusion of the formal set of activities to assess the types of learning, relationship-building, and organizational changes that the group members felt were achieved.
Summative Strategies
- Outcome evaluation examines the observable conditions of a specific population, organizational attribute, or social condition that a program is expected to have changed. Outcome evaluation tends to focus on conditions or behaviors that the program was expected to affect most directly and immediately.
- Impact evaluation examines the program’s long-term goals. Summative, outcome, and impact evaluation are appropriate to conduct when the program either has been completed or has been ongoing for a substantial period of time. If resources are available, evaluation professionals can assess whether the engagement process is ready for evaluation.
Formative and Summative Metrics
Metrics are best thought of as the specific data points you want to collect to assess formative and summative concepts in your overall evaluation strategy. Examples of each might include:
Formative Evaluation Metrics
- Representativeness
- Inclusivity
- Participation rate
- Identification of common goals
- Fairness
- Satisfaction
- Effectiveness (process and methods)
- Transparency
- Incorporation of diverse values and beliefs into discussion
- Trust
- Communication
- Continuity
- Knowledge gain
Summative Evaluation Metrics
- Policy/decision influence
- Adequate time to develop solutions or regulations
- Reduction of legal challenges
- Agency or organization responsiveness
- Trust
- Social, economic, environmental impact
- Participants’ values/opinions
- Conflict resolution
- Volunteer time and effort
- Effectiveness and cost effectiveness
- Savings or resources generated
- Effect on the planning process
Adapted from Rowe, Gene, and Lynn J. Frewer. “Evaluating Public-Participation Exercises: A Research Agenda.” Science, Technology, & Human Values 29, no. 4 (2004): 512-556.
In addition to standard formative evaluation strategies, you may also want to consider additional evaluation approaches: developmental evaluation, principles-focused evaluation, or utilization-focused evaluation.
Developmental evaluation
Developmental evaluation – also called real-time, emergent, or adaptive evaluation – is best suited to issues that are highly emergent, dynamic, and rapidly changing, such as climate change. Rather than working toward narrowly fixed outcomes, developmental evaluation takes an adaptive approach: observing what emerges and then adjusting to what is needed.


Principles-focused evaluation
Principles-focused evaluation centers on principles, which in turn inform and guide stakeholder choices and decisions. The principles for evaluation are determined from stakeholders’ experiences, values, expertise, and past research. In principles-focused evaluation, the evaluator determines whether the identified principles are actionable, clear, and meaningful; whether stakeholders can follow them; and whether they can lead to desired outcomes.
Utilization-focused evaluation
Utilization-focused evaluation centers on the use of evaluation findings and their intended users, and is geared toward ensuring that program evaluations make an impact.

Tools & worksheets
A Checklist to Help Focus Your Evaluation
A checklist, created by the CDC, to support you in crafting and assessing specific evaluation questions within a broader program or project evaluation.
Developing Your Engagement Plan
Guide and worksheet for developing an effective evaluation plan
Evaluation Questions Checklist for Program Evaluation
The purpose of this checklist is to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions. It identifies characteristics of good evaluation questions, based on the relevant literature and our own experience with evaluation design, implementation, and use.
Additional resources
Developing a Logic Model: Teaching and Training Guide
This guide, written by Ellen Taylor-Powell, PhD and Ellen Henert, describes logic models and guides the reader through how to create and implement one. Contains many useful worksheets as appendices.
Phases of Data Analysis
This brief, written by Glenn D. Israel, covers the phases of data analysis for evaluation of an extension program.
Linking Extension Program Design with Evaluation Design for Improved Evaluation
Radhakrishna R, Chaudhary AK, Tobin D (2019). Linking Extension Program Design with Evaluation Design for Improved Evaluation. Journal of Extension 57(4).
Abstract: We present a framework to help those working in Extension connect program designs with appropriate evaluation designs to improve evaluation. The framework links four distinct Extension program domains—service, facilitation, content transformation, and transformative education—with three types of evaluation design—preexperimental, quasi-experimental, and true experimental. We use examples from Extension contexts to provide detailed information for aligning program design and evaluation design. The framework can be of value to various audiences, including novice evaluators, graduate students, and non-social scientists, involved in carrying out systematic evaluation of Extension programs.
Collecting Evaluation Data: An Overview of Sources and Methods
This brief, written by Ellen Taylor-Powell and Sara Steele, offers an overview of sources and methods in evaluation-related data collection, with a focus on extension.
Capturing Change: Comparing Pretest-Posttest and Retrospective Evaluation Methods
This brief, written by Jessica L. O’Leary and Glenn D. Israel, compares two models of extension evaluation: “pretest-posttest” and “retrospective”.
Evaluation Models
In this monograph by Daniel L. Stufflebeam, the author reviews the dominant evaluation models used in the United States between 1960 and 1999 and argues which should be brought forward into the 21st century and which should be left behind.
Evaluability Assessment: Examining the Readiness of a Program for Evaluation
“The purpose of this briefing is to introduce program managers to the concept of Evaluability Assessment.”
Developing a Concept of Extension Program Evaluation
This brief offers an introduction to extension program evaluation and the many dimensions of its design and implementation.
Guiding Principles for Evaluators
This brief guide from the American Evaluation Association “can help you identify the basic ethical behavior to expect of yourself and of any evaluator”.
Developing an Effective Evaluation Plan: A guidebook
Developing an Effective Evaluation Plan. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health; Division of Nutrition, Physical Activity, and Obesity, 2011.
This guidebook, produced by the CDC, provides a framework laying out a six-step process for the decisions and activities involved in conducting an evaluation.