Why use evaluations
A more in-depth explanation and guide to completing an evaluability assessment can be found in the resource section at the end of this document. The Policy on Evaluation requires that departments prepare annual or multi-year Departmental Evaluation Plans to identify priority evaluations and evaluation-related activities. A specific type of evaluability assessment process used to support departmental risk-based assessment of evaluation priorities is contained in the Guide to Developing a Risk-based Departmental Evaluation Plan.
A Needs Assessment can be useful for determining whether a problem or need exists within a community, organization or target group and then describing that problem.
Recommendations can then be made for ways to reduce that problem. This process typically involves interviews and consultations with stakeholders, as well as document reviews and research of relevant information. A more in-depth explanation and guide to completing a needs assessment can be found in the resource section at the end of this document. The existence of reliable data supporting a needs assessment is an important factor in justifying major policy or program changes in departmental Cabinet submissions.
Every initiative has a strategy or plan that dictates how it is intended to work. The initiative's theory states that if its plan is followed and implemented faithfully, then the intended outcomes will be achieved.
A process evaluation can be conducted at any point in the initiative's lifecycle and is used to assess whether, and to what degree, this plan was followed and the extent to which early outcomes have been achieved. Accurate and detailed information about the initiative, its activities, and its goals is a necessity in order to make the linkages between its various components and the achievement of outcomes.
The results of a process evaluation can be used to make improvements to the initiative. This type of evaluation determines what changes, if any, occurred and whether they are in line with the initiative's theory. An important aspect of this assessment is determining whether those outcomes occurred because of the initiative itself (impact or attribution), whether some of the change would have occurred without the program intervention (deadweight), or whether it was brought about by other external factors.
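To make the attribution arithmetic concrete, here is a minimal Python sketch; every figure in it is invented for illustration and simply expresses the idea that the net impact is the observed change minus deadweight and the effect of other external factors.

```python
# Hypothetical illustration of impact/attribution vs. deadweight.
# All numbers are invented for this sketch.

observed_change = 120   # e.g., additional participants employed after the program
deadweight = 35         # estimated number who would have found work anyway
other_factors = 15      # estimated change driven by unrelated external factors

# Net impact attributable to the initiative itself
net_impact = observed_change - deadweight - other_factors
print(f"Net impact attributable to the initiative: {net_impact}")  # -> 70
```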
The findings from this type of evaluation can be used not only for making improvements to the initiative, but for summative decision making as well: that is, whether the program should continue as is, expand, be reduced, or be eliminated. The document "Developing an Accountability Framework Resource and Reference Guide" contains a section requiring departments to identify if and when formative or summative evaluations of new programs and major new policies will take place.
Such a review often covers an entire department or organization, frequently with an assigned budget reduction target. An efficiency assessment is used to determine the value or benefit of an initiative in relation to its cost.
Whether the evaluation focuses on cost-benefit, cost-effectiveness, or both depends on the evaluation's scope. A cost-benefit analysis compares the total costs of implementing an initiative with its total benefits, while a cost-effectiveness analysis assesses the value for money of an initiative based on the costs required to produce various outcomes. Typically, this type of evaluation is recommended after the initiative has been in place for a period of time, so that actual outcome data are available.
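As a rough sketch of the two calculations (not a method prescribed by this guide), the Python snippet below computes a benefit-cost ratio and a cost-per-outcome figure; the program costs, monetized benefits, and outcome counts are hypothetical assumptions.

```python
# Minimal sketch of cost-benefit vs. cost-effectiveness arithmetic.
# All figures are hypothetical and for illustration only.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Cost-benefit view: total monetized benefits relative to total costs."""
    return total_benefits / total_costs

def cost_per_outcome(total_costs: float, outcomes_achieved: int) -> float:
    """Cost-effectiveness view: cost required to produce one unit of outcome."""
    return total_costs / outcomes_achieved

program_costs = 250_000.00       # hypothetical total implementation cost
monetized_benefits = 400_000.00  # hypothetical value of benefits in dollar terms
outcomes_achieved = 1_250        # hypothetical count of outcomes (e.g., participants helped)

print(f"Benefit-cost ratio: {benefit_cost_ratio(monetized_benefits, program_costs):.2f}")  # 1.60
print(f"Cost per outcome: ${cost_per_outcome(program_costs, outcomes_achieved):,.2f}")     # $200.00
```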
A more in-depth explanation and guide to completing an efficiency assessment can be found in the resource section at the end of this document.
Learn how program evaluation makes it easier for everyone involved in community health and development work to evaluate their efforts.
Why evaluate community health and development programs?
How do you evaluate a specific program?
A framework for program evaluation
What are some standards for "good" program evaluation?
Applying the framework: Conducting optimal evaluations

Around the world, many programs and interventions have been developed to improve conditions in local communities. Examples of different types of programs include: direct service interventions (e.g., …).
For example, evaluation complements program management by:
Helping to clarify program plans
Improving communication among partners
Gathering the feedback needed to improve, and be accountable for, program effectiveness

It's important to remember, too, that evaluation is not a new activity for those of us working to improve our communities.
Before your organization starts a program evaluation, your group should be very clear about the answers to the following questions:
What will be evaluated?
What criteria will be used to judge program performance?
What standards of performance on the criteria must be reached for the program to be considered successful?
What evidence will indicate performance on the criteria relative to the standards?
What conclusions about program performance are justified based on the available evidence?

What will be evaluated?
Drive Smart, a program focused on reducing drunk driving through public education and intervention.

What criteria will be used to judge program performance?
The number of community residents who are familiar with the program and its goals
The number of people who use "Safe Rides" volunteer taxis to get home
The percentage of people who report drinking and driving
The reported number of single-car nighttime crashes (a common way to try to determine whether the number of people who drive drunk is changing)

What standards of performance on the criteria must be reached for the program to be considered successful?

What evidence will indicate performance on the criteria relative to the standards?
A random telephone survey will demonstrate community residents' knowledge of the program and changes in reported behavior
Logs from "Safe Rides" will tell how many people use their services
Information on single-car nighttime crashes will be gathered from police records

What conclusions about program performance are justified based on the available evidence?
Are the changes we have seen in the level of drunk driving due to our efforts, or something else?
Or, if there has been no or insufficient change in behavior or outcomes: should Drive Smart change what it is doing, or have we just not waited long enough to see results?
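To show how such criteria might be checked against pre-set performance standards once the evidence is in hand, here is a small Python sketch; the standards and observed values are invented for illustration and are not part of the Drive Smart example itself.

```python
# Hypothetical check of measured criteria against pre-set performance standards.
# Standards and observed values are invented; higher_is_better records the
# direction in which each criterion counts as an improvement.

criteria = [
    # (criterion, standard, observed, higher_is_better)
    ("Residents familiar with the program (%)", 40.0, 47.5, True),
    ("Drivers reporting drinking and driving (%)", 10.0, 12.3, False),
    ("Single-car nighttime crashes per year", 80, 71, False),
]

for name, standard, observed, higher_is_better in criteria:
    met = observed >= standard if higher_is_better else observed <= standard
    print(f"{name}: standard {standard}, observed {observed} -> {'met' if met else 'not met'}")
```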
The following framework provides an organized approach to answering these questions.

A framework for program evaluation
Program evaluation offers a way to understand and improve community health and development practice using methods that are useful, feasible, proper, and accurate. The framework contains two related dimensions:
Steps in evaluation practice, and
Standards for "good" evaluation.
The steps in evaluation practice are:
Engage stakeholders
Describe the program
Focus the evaluation design
Gather credible evidence
Justify conclusions
Ensure use and share lessons learned

Understanding and adhering to these basic steps will improve most evaluation efforts. There are 30 specific standards, organized into the following four groups:
Utility
Feasibility
Propriety
Accuracy

These standards help answer the question, "Will this evaluation be a 'good' evaluation?"

Engage Stakeholders
Stakeholders are people or organizations that have something to gain or lose from what will be learned from an evaluation, and from what will be done with that knowledge.
Three principal groups of stakeholders are important to involve:
People or organizations involved in program operations may include community members, sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff.
People or organizations served or affected by the program may include clients, family members, neighborhood organizations, academic institutions, elected and appointed officials, advocacy groups, and community residents.
Individuals who are openly skeptical of or antagonistic toward the program may also be important to involve. Opening an evaluation to opposing perspectives and enlisting the help of potential program opponents can strengthen the evaluation's credibility. Stakeholders shouldn't be confused with the primary intended users of the evaluation, although some of them should be part of that group. In fact, primary intended users should be a subset of all of the stakeholders who have been identified.
A successful evaluation will designate primary intended users, such as program staff and funders, early in its development and maintain frequent interaction with them to be sure that the evaluation specifically addresses their values and needs.
Describe the Program
A program description is a summary of the intervention being evaluated. There are several specific aspects that should be included when describing a program.

Statement of need
A statement of need describes the problem, goal, or opportunity that the program addresses; it also begins to imply what the program will do in response.

Expectations
Expectations are the program's intended results.

Activities
Activities are everything the program does to bring about changes.

Resources
Resources include the time, talent, equipment, information, money, and other assets available to conduct program activities.

Stage of development
A program's stage of development reflects its maturity.

Context
A description of the program's context considers the important features of the environment in which the program operates.

Logic model
A logic model synthesizes the main program elements into a picture of how the program is supposed to work.

Focus the Evaluation Design
By focusing the evaluation design, we mean doing advance planning about where the evaluation is headed, and what steps it will take to get there.
Among the issues to consider when focusing an evaluation are:

Purpose
Purpose refers to the general intent of the evaluation. There are at least four general purposes for which a community group might conduct an evaluation:

To gain insight. This happens, for example, when deciding whether to use a new approach. Knowledge from such an evaluation will provide information about its practicality. For a developing program, information from evaluations of similar programs can provide the insight needed to clarify how its activities should be designed.

To improve how things get done. This is appropriate in the implementation stage, when an established program tries to describe what it has done. This information can be used to describe program processes, to improve how the program operates, and to fine-tune the overall strategy. Evaluations done for this purpose include efforts to improve the quality, effectiveness, or efficiency of program activities.

To determine what the effects of the program are.
Evaluations done for this purpose examine the relationship between program activities and observed consequences. For example, are more students finishing high school as a result of the program?
Programs most appropriate for this type of evaluation are mature programs that are able to state clearly what happened and who it happened to.
Such evaluations should provide evidence about what the program's contribution was to reaching longer-term goals such as a decrease in child abuse or crime in the area. This type of evaluation helps establish the accountability, and thus, the credibility, of a program to funders and to the community.
To affect those who participate in it. The logic and reflection required of evaluation participants can itself be a catalyst for self-directed change.
And so, one of the purposes of evaluating a program is for the process and results to have a positive influence. Such influences may:
Empower program participants (for example, being part of an evaluation can increase community members' sense of control over the program);
Supplement the program (for example, using a follow-up questionnaire can reinforce the main messages of the program);
Promote staff development (for example, by teaching staff how to collect, analyze, and interpret evidence); or
Contribute to organizational growth (for example, the evaluation may clarify how the program relates to the organization's mission).
Users
Users are the specific individuals who will receive evaluation findings.

Uses
Uses describe what will be done with what is learned from the evaluation.

Methods
The methods available for an evaluation are drawn from behavioral science and social research and development.

Agreements
Agreements summarize the evaluation procedures and clarify everyone's roles and responsibilities.

Gather Credible Evidence
Credible evidence is the raw material of a good evaluation.
The following features of evidence gathering typically affect how credible it is seen as being:

Indicators
Indicators translate general concepts about the program and its expected effects into specific, measurable parts. Examples of indicators include:
The program's capacity to deliver services
The participation rate
The level of client satisfaction
The amount of intervention exposure (how many people were exposed to the program, and for how long)
Changes in participant behavior
Changes in community conditions or norms
Changes in the environment
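As one hedged illustration of how raw records can be turned into two of the indicators listed above (the participation rate and the amount of intervention exposure), consider the Python sketch below; the record format and all numbers are assumptions, not data from any actual program.

```python
# Hypothetical attendance records: participant id -> sessions attended.
attendance = {"p01": 8, "p02": 5, "p03": 0, "p04": 7, "p05": 2}
sessions_offered = 8
eligible_population = 20  # hypothetical number of people invited to take part

# Anyone who attended at least one session counts as a participant.
participants = [p for p, n in attendance.items() if n > 0]
participation_rate = len(participants) / eligible_population
average_exposure = sum(attendance[p] for p in participants) / len(participants)

print(f"Participation rate: {participation_rate:.0%}")                              # 20%
print(f"Average exposure: {average_exposure:.1f} of {sessions_offered} sessions")   # 5.5 of 8
```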
Sources
Sources of evidence in an evaluation may be people, documents, or observations.

Quality
Quality refers to the appropriateness and integrity of information gathered in an evaluation.

Quantity
Quantity refers to the amount of evidence gathered in an evaluation.

Logistics
By logistics, we mean the methods, timing, and physical infrastructure for gathering and handling evidence.
Justify Conclusions
The process of justifying conclusions recognizes that evidence in an evaluation does not necessarily speak for itself. The principal elements involved in justifying conclusions based on evidence are:

Standards
Standards reflect the values held by stakeholders about the program.

Interpretation
Interpretation is the effort to figure out what the findings mean.

Judgments
Judgments are statements about the merit, worth, or significance of the program.

Recommendations
Recommendations are actions to consider as a result of the evaluation. Three things might increase the chances that recommendations will be relevant and well-received:
Sharing draft recommendations
Soliciting reactions from multiple stakeholders
Presenting options instead of directive advice

Justifying conclusions in an evaluation is a process that involves different possible steps.
Ensure Use and Share Lessons Learned
It is naive to assume that lessons learned in an evaluation will necessarily be used in decision making and subsequent action.
The key elements for ensuring that the recommendations from an evaluation are used are:

Design
Design refers to how the evaluation's questions, methods, and overall processes are constructed.

Preparation
Preparation refers to the steps taken to get ready for the future uses of the evaluation findings.
Feedback
Feedback is the communication that occurs among everyone involved in the evaluation.

Evaluation: What is it and why do it?
Table of Contents
What is evaluation?
Should I evaluate my program?
What type of evaluation should I conduct and when?
What makes a good evaluation?
How do I make evaluation an integral part of my program?
How can I learn more?

What is evaluation?
Experts stress that evaluation can:
Improve program design and implementation.
Demonstrate program impact.

Within the categories of formative and summative, there are different types of evaluation. Which of these evaluations is most appropriate depends on the stage of your program:

Formative
1. Needs Assessment: Determines who needs the program, how great the need is, and what can be done to best meet the need. For more information, Needs Assessment Training uses a practical training module to lead you through a series of interactive pages about needs assessment.
2. Process or Implementation Evaluation: Examines the process of implementing the program and determines whether the program is operating as planned. Can be done continuously or as a one-time assessment. Results are used to improve the program.

Summative
1. Outcome Evaluation: Investigates to what extent the program is achieving its outcomes. These outcomes are the short-term and medium-term changes in program participants that result directly from the program.
2. Impact Evaluation: Determines any broader, longer-term changes that have occurred as a result of the program. These impacts are the net effects, typically on the entire school, community, organization, society, or environment. EE impact evaluations may focus on the educational, environmental quality, or human health impacts of EE programs.

These summative evaluations build on data collected in the earlier stages.
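As a final hedged sketch of the arithmetic behind an outcome evaluation, the Python snippet below compares hypothetical participant scores measured before and after a program; the data are invented, and a real evaluation would also need a comparison group or similar design features before attributing the observed change to the program.

```python
# Invented pre/post scores for the same participants, listed in the same order.
pre_scores = [52, 61, 47, 58, 65, 49, 55]
post_scores = [60, 66, 55, 63, 70, 58, 61]

# Per-participant change and simple summary figures.
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Mean change in score: {mean_change:+.1f}")                  # +6.6
print(f"Participants who improved: {improved} of {len(changes)}")   # 7 of 7
```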