Description
Due 4/7/19 by 3PM EST
To ensure the success of a program evaluation, a social worker must generate a specific detailed plan. That plan should describe the goal of the evaluation, the information needed, and the methods and analysis to be used. In addition, the plan should identify and address the concerns of stakeholders. A social worker should present information about the plan in a manner that the stakeholders can understand. This will help the social worker receive the support necessary for a successful evaluation.
To prepare for this Assignment, identify a program evaluation you would like to conduct for a program with which you are familiar. Consider the details of the evaluation, including the purpose, specific questions to address, and type of information to collect.
Then, consider the stakeholders that would be involved in approving that evaluation. Review the resources for samples of program evaluations.
Submit the following (be very specific, expound on ideas, use APA references, and use bulleted points). Please note: the Kellogg file can be downloaded via the internet; the file is too large to send. You can also use two new peer-reviewed articles as well.
- A 1-page stakeholder analysis that identifies the stakeholders, their role in the agency and any concerns that they might have about the proposed program evaluation
- A 2- to 3-page draft of the program evaluation plan to submit to the stakeholders that:
- Identifies the purpose of the evaluation
- Describes the questions that will be addressed and the type of information that will be collected
- Addresses the concerns of the stakeholders that you identified in your Stakeholder Analysis
References
- Dudley, J. R. (2014). Social work evaluation: Enhancing what we do (2nd ed.). Chicago, IL: Lyceum Books.
- Chapter 1, “Evaluation and Social Work: Making the Connection” (pp. 1–26)
- Chapter 4, “Common Types of Evaluations” (pp. 71–89)
- Chapter 5, “Focusing an Evaluation” (pp. 90–105)
- Logan, T. K., & Royse, D. (2010). Program evaluation studies. In B. Thyer (Ed.), The handbook of social work research methods (2nd ed., pp. 221–240). Thousand Oaks, CA: Sage. (PDF)
- W. K. Kellogg Foundation. (n.d.). W. K. Kellogg Foundation evaluation handbook. Retrieved October 8, 2013, from http://www.wkkf.org/knowledge-center/resources/2010/w-k-kellogg-foundation-evaluation-handbook.aspx
- Chapter 3, “Three Levels of Evaluation” (pp. 14–18)
- Chapter 5, “Planning and Implementing Project-Level Evaluation” (pp. 47–104)
Program Evaluation Studies
TK Logan and David Royse
A variety of programs have been developed to address social problems such as drug addiction, homelessness, child abuse, domestic violence, illiteracy, and poverty. The goals of these programs may include directly addressing the problem origin or moderating the effects of these problems on individuals, families, and communities. Sometimes programs are developed to prevent something from happening, such as drug use, sexual assault, or crime.
These kinds of problems and programs to help people are often what attracts many social workers to the profession; we want to be part of the mechanism through which society provides assistance to those most in need. Despite low wages, bureaucratic red tape, and routinely uncooperative clients, we tirelessly provide services that are invaluable but also at various times may be or become insufficient or inappropriate. But without conducting evaluation, we do not know whether our programs are helping or hurting, that is, whether they only postpone the hunt for real solutions or truly construct new futures for our clients. This chapter provides an overview of program evaluation in general and outlines the primary considerations in designing program evaluations.
Evaluation can be done informally or formally. We are constantly, as consumers, informally evaluating products, services, and information. For example, we may choose not to
return to a store or an agency again if we did not evaluate the experience as pleasant.
Similarly, we may mentally take note of unsolicited comments or anecdotes from clients and
draw conclusions about a program. Anecdotal and informal approaches such as these generally are not regarded as carrying scientific credibility. One reason is that decision biases
play a role in our “informal” evaluation. Specifically, vivid memories or strongly negative or
positive anecdotes will be overrepresented in our summaries of how things are evaluated.
This is why objective data are necessary to truly understand what is or is not working.
By contrast, formal evaluations systematically examine data from and about programs and their outcomes so that better decisions can be made about the interventions designed to address the related social problem. Thus, program evaluation involves the use of social research methodologies to appraise and improve the ways in which human services, policies, and programs are conducted. Formal evaluation, by its very nature, is applied research.
Formal program evaluations attempt to answer the following general question: Does the program work? Program evaluation may also address questions such as the following:
Do our clients get better? How does our success rate compare to those of other programs
or agencies? Can the same level of success be obtained through less expensive means?
PART II • QUANTITATIVE APPROACHES: TYPES OF STUDIES
What is the experience of the typical client? Should this program be terminated and its funds applied elsewhere?
Ideally, a thorough program evaluation would address more complex questions in three main areas: (1) Does the program produce the intended outcomes and avoid unintended negative outcomes? (2) For whom does the program work best and under what conditions? and (3) How well was a program model developed in one setting adapted to another setting?
Evaluation has taken an especially prominent role in practice today because of the focus on evidence-based practice in social programs. Social work, as a profession, has been asked to use evidence-based practice as an ethical obligation (Kessler, Gira, & Poertner, 2005). Evidence-based practice is defined differently, but most definitions include using program evaluation data to help determine best practices in whatever area of social programming is being considered. In other words, evidence-based practice includes using objective indicators of success in addition to practice or more subjective indicators of success.
Formal program evaluations can be found on just about every topic. For instance, Fraser, Nelson, and Rivard (1997) have examined the effectiveness of family preservation services; Kirby, Korpi, Adivi, and Weissman (1997) have evaluated an AIDS and pregnancy prevention middle school program. Morrow-Howell, Becker-Kemppainen, and Judy (1998) evaluated an intervention designed to reduce the risk of suicide in elderly adult clients of a crisis hotline. Richter, Snider, and Gorey (1997) used a quasi-experimental design to study the effects of a group work intervention on female survivors of childhood sexual abuse. Leukefeld and colleagues (1998) examined the effects of an HIV prevention intervention with injecting drug and crack users. Logan and colleagues (2004) examined the effects of a drug court intervention as well as the costs of drug court compared with the economic benefits of the drug court program.
Basic Evaluation Considerations
Before beginning a program evaluation, several issues must be initially considered. These issues are decisions that are critical in determining the evaluation methodology and goals. Although you may not have complete answers to these questions when beginning to plan an evaluation, these questions help in developing the plan and must be answered before an evaluation can be carried out. We can sum up these considerations with the following questions: who, what, where, when, and why.
First, who will do the evaluation? This seems like a simple question at first glance. However, this particular consideration has major implications for the evaluation results. Program evaluators can be categorized as being either internal or external. An internal evaluator is someone who is a program staff member or regular agency employee, whereas an external evaluator is a professional, on contract, hired for the specific purpose of evaluation. There are advantages and disadvantages to using either type of evaluator. For example, the internal evaluator probably will be very familiar with the staff and the program. This may save a lot of planning time. The disadvantage is that evaluations completed by an internal evaluator may be considered less valid by outside agencies, including the funding source. The external evaluator generally is thought to be less biased in terms of evaluation outcomes because he or she has no personal investment in the program. One disadvantage is that an external evaluator frequently is viewed as an “outsider” by the staff within an agency. This may affect the amount of time necessary to conduct the evaluation or cause problems in the overall evaluation if agency staff are reluctant to cooperate.
CHAPTER 13 • PROGRAM EVALUATION STUDIES
Second, what resources are available to conduct the evaluation? Hiring an outside evaluator can be expensive, while having a staff person conduct the evaluation may be less expensive. So, in a sense, you may be trading credibility for less cost. In fact, each methodological decision will have a trade-off in credibility, level of information, and resources (including time and money). Also, the amount and level of information, as well as the research design, will be determined, to some extent, by what resources are available. A comprehensive and rigorous evaluation does take significant resources.
Third, where will the information come from? If an evaluation can be done using existing data, the cost will be lower than if data must be collected from numerous people, such as clients and/or staff across multiple sites. So having some sense of where the data will come from is important.
Fourth, when is the evaluation information needed? In other words, what is the timeframe for the evaluation? The timeframe will affect the costs and the design of the research methods.
Fifth, why is the evaluation being conducted? Is the evaluation being conducted at the request of the funding source? Is it being conducted to improve services? Is it being conducted to document the cost-benefit trade-off of the program? If future program funding decisions will depend on the results of the evaluation, then a lot more importance will be attached to it than if a new manager simply wants to know whether clients were satisfied with services. The more that is riding on an evaluation, the more attention will be given to the methodology and the more threatened staff can be, especially if they think that the purpose of the evaluation is to downsize and trim excess employees. In other words, there are many reasons an evaluation is being considered, and these reasons may have implications for the evaluation methodology and implementation.
Once the issues described above have been considered, more complex questions and
trade-offs will be needed in planning the evaluation. Specifically, six main issues guide
and shape the design of any program evaluation effort and must be given thoughtful and
deliberate consideration.
1. Defining the goal of the program evaluation
2. Understanding the level of information needed for the program evaluation
3. Determining the methods and analysis that need to be used for the program evaluation
4. Considering issues that might arise and strategies to keep the evaluation on course
5. Developing results into a useful format for the program stakeholders
6. Providing practical and useful feedback about the program strengths and weaknesses as well as providing information about next steps
Defining the Goal of the Program Evaluation
It is essential that the evaluator has a firm understanding of the short- and long-term objectives of the evaluation. Imagine being hired for a position but not being given a job description or informed about how the job fits into the overall organization. Without knowing why an evaluation is called for or needed, the evaluator might attempt to answer a different set of questions from those of interest to the agency director or advisory board. The management might want to know why the majority of clients do not return after one or two visits, whereas the evaluator might think that his or her task is to determine
whether clients who received group therapy sessions were better off than clients who received individual counseling.
In defining the goals of the program evaluation, several steps should be taken. First, the program goals should be examined. These can be learned through examining official program documents as well as through talking to key program stakeholders. In clarifying the overall purpose of the evaluation, it is critical to talk with different program “stakeholders.” Scriven (1991) defines a program stakeholder as “one who has a substantial ego, credibility, power, futures, or other capital invested in the program. . . . This includes program staff and many who are not actively involved in the day-to-day operations” (p. 334). Stakeholders include both supporters and opponents of the program as well as program clients or consumers, or even potential consumers or clients. It is essential that the evaluator obtain a variety of different views about the program. By listening to and considering stakeholder perspectives, the evaluator can ascertain the most important aspects of the program to target for the evaluation by looking for overlapping concerns, questions, and comments from the various stakeholders. However, it is important that the stakeholders have some agreement on what program success means. Otherwise, it may be difficult to conduct a satisfactory evaluation.
It is also important to consult the extant literature to understand what similar
programs have used to evaluate their outcomes as well as to understand the theoretical
basis of the program in defining the program evaluation goals. Furthermore, it is critical
that the evaluator works closely with whoever initiated the evaluation to set priorities for
the evaluation. This process should identify the intended outcomes of the program and
which of those outcomes, if not all of them, will be evaluated. Taking the evaluation a step
further, it may be important to include the examination of unintended negative outcomes
that may result from the program. Stakeholders and the literature will also help to determine those kinds of outcomes.
Once the overall purpose and priorities of the evaluation are established, it is a good idea to develop a written agreement, especially if the evaluator is an external one. Misunderstandings can and will occur months later if things are not written in black and white.
Understanding the Level of Information
Needed for the Program Evaluation
The success of the program evaluation revolves around the evaluator’s ability to develop practical, researchable questions. A good rule to follow is to focus the evaluation on one or two key questions. Too many questions can lengthen the process and overwhelm the evaluator with too much data that, instead of facilitating a decision, might produce inconsistent findings. Sometimes, funding sources require only that some vague, undefined type of evaluation is conducted. The funding sources might neither expect nor desire dissertation-quality research; they simply might expect “good faith” efforts when beginning evaluation processes. Other agencies may be quite demanding in the types and forms of data to be provided. Obviously, the choice of methodology, data collection procedures, and reporting formats will be strongly affected by the purpose, objectives, and questions examined in the study.
It is important to note the difference between general research and evaluation. In research, the investigator often focuses on questions based on theoretical considerations or hypotheses generated to build on research in a specific area of study. Although
program evaluations may focus on an intervention derived from a theory, the evaluation questions should, first and foremost, be driven by the program’s objectives. The evaluator is less concerned with building on prior literature or contributing to the development of practice theory than with determining whether a program worked in a specific community or location.
There are actually two main types of evaluation questions. There are questions that focus on client outcomes, such as, “What impact did the program have?” These kinds of questions are addressed by using outcome evaluation methods. Then there are questions that ask, “Did the program achieve its goals?” “Did the program adhere to the specified procedures or standards?” or “What was learned in operating this program?” These kinds of questions are addressed by using process evaluation methods. We will examine both of these types of evaluation approaches in the following sections.
Process Evaluation
Process evaluations offer a “snapshot” of the program at any given time. Process evaluations typically describe the day-to-day program efforts; program modifications and changes; outside events that influenced the program; people and institutions involved; culture, customs, and traditions that evolved; and sociodemographic makeup of the clientele (Scarpitti, Inciardi, & Pottieger, 1993). Process evaluation is concerned with identifying program strengths and weaknesses. This level of program evaluation can be useful in several ways, including providing a context within which to interpret program outcomes and allowing other agencies or localities wishing to start similar programs to benefit without having to make the same mistakes.
As an example, Bentelspacher, DeSilva, Goh, and LaRowe (1996) conducted a process evaluation of the cultural compatibility of psychoeducational family group treatment with ethnic Asian clients. As another example, Logan, Williams, Leukefeld, and Minton (2000) conducted a detailed process evaluation of the drug court programs before undertaking an outcome evaluation of the same programs. The Logan et al. study used multiple methods to conduct the process evaluation, including in-depth interviews with the program administrative personnel, interviews with each of five judges involved in the program, surveys and face-to-face interviews with 22 randomly selected current clients, and surveys of all program staff, 19 community treatment provider representatives, 6 randomly selected defense attorney representatives, 4 prosecuting attorney representatives, 1 representative from the probation and parole office, 1 representative from the local county jail, and 2 police department representatives. In all, 69 different individuals representing 10 different agency perspectives provided information about the drug court program. Also, all agency documents were examined and analyzed, observations of various aspects of the program process were conducted, and client intake data were analyzed as part of the process evaluation. The results were all integrated and compiled into one comprehensive report.
What makes a process evaluation so important is that researchers often have relied only on selected program outcome indicators, such as termination and graduation rates or number of rearrests, to determine effectiveness. However, to better understand how and why a program such as drug court is effective, an analysis of how the program was conceptualized, implemented, and revised is needed. Consider this example: say one outcome evaluation of a drug court program showed a graduation rate of 80% of those who began the program, while another outcome evaluation found that only 40% of those who began the program graduated. Then, the graduates of the second program were more likely to be free from substance use and criminal behaviors at the 12-month follow-up than the graduates
from the first program. A process evaluation could help to explain the specific differences in factors such as selection (how clients get into the programs), treatment plans, monitoring, program length, and other program features that may influence how many people graduate and stay free from drugs and criminal behavior at follow-up. In other words, a process evaluation, in contrast to an examination of program outcome only, can provide a clearer and more comprehensive picture of how drug court affects those involved in the program. More specifically, a process evaluation can provide information about program aspects that need to be improved and those that work well (Scarpitti, Inciardi, & Pottieger, 1993). Finally, a process evaluation may help to facilitate replication of the drug court program in other areas. This often is referred to as technology transfer.
A different but related process evaluation goal might be a description of the failures and departures from the way in which the intervention originally was designed. How were the staff trained and hired? Did the intervention depart from the treatment manual recommendations? Influences that shape and affect the intervention that clients receive need to be identified because they affect the fidelity of the treatment program (e.g., delayed funding or staff hires, changes in policies or procedures). When program implementation
