Description
Due 04/29/2019 by 6 pm EST
Group programs are common in social work. Just as with other types of programs, social workers must understand the options available to them and know how to select the appropriate research design.
For this Discussion, you evaluate group research design methods that can be used for an outcome evaluation of a foster parent training program. You also generate criteria to be measured in the program.
To prepare for this Discussion, review the “Social Work Research: Planning a Program Evaluation” case study in this week’s resources (Plummer, S.-B., Makris, S., & Brocksen, S. (Eds.). (2014b). Social work case studies: Concentration year. Retrieved from http://www.vitalsource.com) and the section of the “Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources” titled “Overview of Methods to Collect Information.”
- Post your explanation of which group research design and data collection method from those outlined in the Resources you selected as appropriate for the “Social Work Research: Planning a Program Evaluation” case study and why.
- Then, generate criteria to be measured using the research design by identifying a specific outcome and a method for measuring that outcome.
- Specify who will collect the data and how the data will be collected.
References
Dudley, J. R. (2014). Social work evaluation: Enhancing what we do (2nd ed.). Chicago, IL: Lyceum Books.
- Chapter 9, “Is the Intervention Effective?” (pp. 213–250)
- Chapter 10, “Analyzing Evaluation Data” (pp. 255–275)
McNamara, C. (2006b). Reasons for priority on implementing outcomes-based evaluation. In Basic guide to outcomes-based evaluation for nonprofit organizations with very limited resources. Retrieved from http://managementhelp.org/evaluation/outcomes-eval…
Plummer, S.-B., Makris, S., & Brocksen, S. (Eds.). (2014b). Social work case studies: Concentration year. Baltimore, MD: Laureate International Universities Publishing. [Vital Source e-reader].
Read the following section:
- “Social Work Research: Planning a Program Evaluation”
Social Work Research: Planning a Program Evaluation
Joan is a social worker who is currently enrolled in a social work PhD program. She is planning to conduct her dissertation research project with a large nonprofit child welfare organization where she has worked as a site coordinator for many years. She has already approached the agency director with her interest, and the leadership team of the agency stated that they would like to collaborate on the research project.
The child welfare organization at the center of the planned study has seven regional centers that operate fairly independently. The primary focus of work is on foster care; that is, recruiting and training foster parents and running a regular foster care program with an emphasis on family foster care. The agency has a residential program as well, but it will not participate in the study. Each of the regional centers serves about 45–50 foster parents and approximately 100 foster children. On average, five to six new foster families are recruited at each center on a quarterly basis. This number has been consistent over the past 2 years.
Recently it was decided that a new training program for incoming foster parents would be used by the organization. The primary goals of this new training program include reducing foster placement disruptions, improving the quality of services delivered, and increasing child well-being through better trained and skilled foster families. Each of the regional centers will participate and implement the new training program. Three of the sites will start the program immediately, while the other four centers will not start until 12 months from now. The new training program consists of six separate 3-hour training sessions that are typically conducted in a biweekly format. It is a fairly proceduralized training program; that is, a very detailed set of manuals and training materials exists. All trainings will be conducted by the same two instructors. The current training program that it will replace differs considerably in its focus, but it also uses a 6-week, 3-hour format. It will be used by those sites not immediately participating until the new program is implemented.
Joan has done a thorough review of the foster care literature and has found that there has been no research on the training program to date, even though it is being used by a growing number of agencies. She also found that there are some standardized instruments that she could use for her study. In addition, she would need to create a set of Likert-type scales for the study. She will be able to use a group design because all seven regional centers are interested in participating and they are starting the training at different times.
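For readers weighing design options: one design the staggered rollout naturally supports is a quasi-experimental pretest-posttest comparison group design, since three centers start the new training now while four continue the current training for roughly 12 months. Below is a minimal, hypothetical sketch of the comparison such a design implies; the outcome measure and all scores are invented placeholders, not data from the case study.

```python
from statistics import mean

# Hypothetical sketch of a pretest-posttest comparison group design
# suggested by the staggered rollout: three centers start the new
# training now (treatment); four keep the current training for about
# 12 months (comparison). All scores are invented placeholders; a real
# study would use the standardized instruments noted in the case.
treatment = {"pre": [62, 58, 65], "post": [74, 70, 78]}
comparison = {"pre": [61, 60, 63, 59], "post": [64, 62, 66, 61]}

def mean_change(group):
    """Average post-minus-pre change across a group's centers."""
    return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

print(f"treatment change:  {mean_change(treatment):+.1f}")
print(f"comparison change: {mean_change(comparison):+.1f}")
# A larger average change in the treatment group would be consistent
# with a training effect, though preexisting center differences remain
# a threat to validity in any nonrandomized comparison.
```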
Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources
© Copyright Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.
(Much of the content of this topic came from the United Way book referenced below.)
Description
This document provides guidance toward basic planning and implementation of an outcomes-based evaluation process (also called outcomes evaluation) in nonprofit organizations, particularly small nonprofits with very limited resources.
NOTE: This free, basic, online guide makes occasional references to certain pages in the United Way of America’s book, Measuring Program Outcomes: A Practical Approach (1996). That United Way book is an excellent resource! However, it can be somewhat overwhelming for nonprofits that have very limited resources. This free online guide (that you are reading now) can help nonprofits carry out their own basic outcomes evaluation planning. This online guide can also help small nonprofits to make the most of that United Way book — however, you do not have to have that United Way book in order to carry out your own basic outcomes evaluation plan by using this online guide. (Still, small nonprofits are encouraged to get the United Way book, for example, to later round out basic evaluation plans developed from this online guide and/or to learn more about outcomes evaluation than is provided in this basic guide. To get the United Way book, call 703-212-6300 and ask about item #0989.)
NOTE: Outcomes-based evaluation is but one type of evaluation — there are many types of evaluations. The reader would gain a deeper understanding of outcomes-based evaluation by reading about the broader topic of evaluation. To do so, read the Basic Guide to Program Evaluation. This online basic guide about outcomes-based evaluation was designed by modifying the Basic Guide to Program Evaluation.
Table of Contents
• Reasons for Priority on Outcomes-Based Evaluation
• Basic Principles for Small Nonprofits to Remember Before Starting Outcomes Planning
• What is Outcomes-Based Evaluation?
• Myths to Get Out of the Way Before You Start Your Outcomes Planning
• Planning Any Type of Evaluation Includes Answers to These Very Basic Questions
• Planning Your Outcomes Evaluation — Step 1: Getting Ready
• Planning Your Outcomes Evaluation — Step 2: Choosing Outcomes
• Planning Your Outcomes Evaluation — Step 3: Selecting Indicators
• Planning Your Outcomes Evaluation — Step 4: Planning Data/Info Collection
• Planning Your Outcomes Evaluation — Step 5: Piloting/Testing
• Planning Your Outcomes Evaluation — Step 6: Analyzing/Reporting Results
• Useful Online Resources
Reasons for Priority on Implementing
Outcomes-Based Evaluation
• There are decreasing funds for nonprofits
• Yet there are increasing community needs
• Thus, there is more focus on whether nonprofit programs are really making a difference — and outcomes evaluation focuses on whether programs are really making a difference for clients
• Previous evaluation measures focused on, for example, how much money was spent, the number of people served, and client satisfaction — these measures don’t really assess impacts on clients
• Outcomes evaluation looks at impacts/benefits to clients during and after participation in your programs
Basic Principles for Small Nonprofits to
Remember Before Starting
Nonprofit personnel do not have to be experts in outcomes-based evaluation in order to
carry out a useful outcomes evaluation plan.
• In most major activities in life and work, there is a “20% of effort that generates 80% of the results”. This basic guide will give you the direction to accomplish that 20% needed to develop an outcomes evaluation plan for your organization. Once you’ve carried out the guidelines in this basic guide, you can probably let experience and funders help you with the rest of your outcomes evaluation planning, particularly as you implement your evaluation plan during its first year.
• In life (particularly for us adults), problems often exist because we’re making things far too complex, not because we’re making things far too simple. Often, people who are new to evaluation get “mindcramp”, that is, they think too hard about evaluation. It’s actually a fairly simple notion — just don’t think so hard about it!
• Start small, start now and grow as you’re able.
• Ready, fire, aim!
What is Outcomes-Based Evaluation?
A Basic Definition
As noted above, outcomes evaluation looks at impacts/benefits/changes to your clients (as a result of your program’s efforts) during and/or after their participation in your programs. Outcomes evaluation can examine these changes in the short term, intermediate term and long term (we’ll talk more about this below).
Basic Components and Key Terms in Outcomes Evaluation
Outcomes evaluation is often described first by looking at its basic components.
Outcomes evaluation looks at programs as systems that have inputs, activities/processes, outputs and outcomes — this systems view is useful in examining any program!
• Inputs – These are materials and resources that the program uses in its activities, or processes, to serve clients, e.g., equipment, staff, volunteers, facilities, money, etc. These are often easy to identify, and many of the inputs are common to many organizations and programs.
• Activities – These are the activities, or processes, that the program undertakes with/to the client in order to meet the clients’ needs, for example, teaching, counseling, sheltering, feeding, clothing, etc. Note that when identifying the activities in a program, the focus is still pretty much on the organization or program itself, and still is not so much on actual changes in the client.
• Outputs – These are the units of service regarding your program, for example, the number of people taught, counseled, sheltered, fed, clothed, etc. The number of clients served, books published, etc., very often indicates nothing at all about the actual impacts/benefits/changes in your clients who went through the program — it merely indicates the number of clients who went through your program.
• Outcomes – These are actual impacts/benefits/changes for participants during or after your program — for example, for a smoking cessation program, an outcome might be “participants quit smoking” (notice that this outcome is quite different from outputs, such as the “number of clients who went through the cessation program”). These changes, or outcomes, are usually expressed in terms of:
— — knowledge and skills (these are often considered to be rather short-term outcomes)
— — behaviors (these are often considered to be rather intermediate-term outcomes)
— — values, conditions and status (these are often considered to be rather long-term outcomes)
• Outcome targets – These are the number and percent of participants that you want to achieve the outcome, for example, an outcome goal of 5,000 teens (10% of teens in Indianapolis) who quit smoking over the next year.
• Outcome indicators – These are observable and measurable “milestones” toward an outcome target. These are what you’d see, hear, read, etc., that would indicate to you whether you’re making any progress toward your outcome target or not, for example, the number and percent of teen participants who quit smoking right after the program and six months after the program — these indicators give you a strong impression as to whether 5,000 teens will quit or not over the next year from completing your program.
NOTE: Take a few minutes and really notice the differences between:
— Outputs (which indicate hardly anything about the changes in clients — they’re
usually just numbers)
— Outcomes (which indicate true changes in your clients)
— Outcome targets (which specify how much of your outcome you hope to achieve)
— Outcome indicators (which you can see, hear, read, etc., and which suggest whether or not you’re making progress toward your outcome target)
Typically, the above concepts are organized into a logic model, which depicts the general order in which the concepts are integrated with each other. For more clarity, see Guidelines and Framework for Developing a Basic Logic Model.
Common Myths to Get Out of the Way Before
You Start Planning
Myth: Evaluation is a complex science. I don’t have time to learn it!
No! It’s a practical activity. If you can run an organization, you can surely implement an
evaluation process!
Myth: It’s an event to get over with and then move on!
No! Outcomes evaluation is an ongoing process. It takes months to develop, test and
polish — however, many of the activities required to carry out outcomes evaluation are
activities that you’re either already doing or you should be doing. Read on …
Myth: Evaluation is a whole new set of activities – we don’t have the resources!
No! Most of these activities in the outcomes evaluation process are normal
management activities that need to be carried out anyway in order to evolve your
organization to the next level.
Myth: There’s a “right” way to do outcomes evaluation. What if I don’t
get it right?
No! Each outcomes evaluation process is somewhat different, depending on the needs and nature of the nonprofit organization and its programs. Consequently, each nonprofit is the “expert” on its own outcomes plan. Therefore, start simple, but start, and learn as you go along in your outcomes planning and implementation.
Myth: Funders will accept or reject my outcomes plan
No! Enlightened funders will (at least, should?) work with you, for example, to polish your outcomes, indicators and outcome targets. Especially if yours is a new nonprofit and/or a new program, you very likely will need some help — and time — to develop and polish your outcomes plan.
Myth: I always know what my clients need – I don’t need outcomes
evaluation to tell me if I’m really meeting the needs of my clients or not
You don’t always know what you don’t know about the needs of your clients – outcomes
evaluation helps ensure that you always know the needs of your clients. Outcomes
evaluation sets up structures in your organization so that you and your organization are
very likely always focused on the current needs of your clients. Also, you won’t always
be around – outcomes help ensure that your organization is always focused on the
most appropriate, current needs of clients even after you’ve left your organization.
Planning Any Type of Evaluation Includes
Answers to These Very Basic Questions
Evaluation often seems like a “heavy”, complex activity to those who are not familiar
with the real nature of evaluation. Actually, planning any kind of evaluation often
requires answers to some very basic questions, including:
• What decisions do you want to be able to make as a result of your evaluation?
• Who are the primary audiences for the results?
• What kinds of info are needed?
• When is the info needed?
• Where do you get that info, and how?
• What resources are available to get the info, analyze it and report it?
• How do you report that info in a useful fashion?
Planning Your Outcomes Evaluation — Step 1:
Getting Ready
• Read Step 1 (Chapter 1) of the UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page).
• You can very likely draft your own version of most of your outcomes evaluation plan and then have others review your drafts of those sections of the plan. (This “short-cut” approach to outcomes evaluation planning might be questioned by some experts on outcomes — but then small nonprofits rarely have the resources to fully carry out the comprehensive and detailed steps often recommended by outcomes evaluation resources.)
• Remember that you don’t have to be an expert to start the planning process — each plan is different — ultimately, you’re the expert at your process and your plan.
• Do consider getting a grant to support development of your plan, e.g., maybe $3,000 to $5,000, particularly to have evaluation expertise to review your plans and your methods of data collection — if you can’t get this grant, you can still proceed with your plan.
• DO tap the many resources available to help you (useful online resources are listed below).
• Now pick one program to evaluate that has a reasonably clear group of clients and clear methods to provide services to them — in other words, make sure that you have a program to evaluate!
• NOTE: Soon, you should train at least one board member and staff member about outcomes — consider using this very basic online guide.
Planning Your Outcomes Evaluation — Step 2:
Choosing Outcomes
Preparation
• Note that a logic model for your program is a depiction of the inputs, activities, outputs and outcomes (short-term, intermediate and long-term) regarding your program. Take a look at the information in Introduction to Program Logic Model.
• Reread the myths listed above – don’t worry about completing the “perfect” logic model – ultimately, you’re the expert here.
Now Identify Your Outcomes (including short-term,
intermediate and long-term)
• Now fill in a logic model for the program to which you want to apply outcomes-based evaluation — see the example logic model and framework — BUT first read the next several bullets in this section:
• To identify outcomes, consider: “enhanced …”, “increased …”, “more …”, “new …”, “altered …”, etc.
• Note that it can be quite a challenge to identify outcomes for some types of programs, including those that are preventative (health programs, etc.), developmental (educational, etc.), or “one-time” or anonymous (food shelves, etc.) in nature. In these cases, it’s fair to give your best shot at outcomes planning and then learn more as you actually apply your outcomes evaluation plan. Also seek help and ideas about outcomes from other nonprofits that provide services similar to yours. Programs that are remedial in nature (that is, geared to address current and observable problems, such as teen delinquency, etc.) are often easier to associate with outcomes.
• Start with short-term outcomes.
• Regarding identifying short-term outcomes, think 0–6 months:
— Imagine your client in the program or a day after leaving the program
— What knowledge and skills would you prefer? What do you actually see?
• Regarding identifying intermediate outcomes, think 3–9 months:
— Imagine your client 3–9 months after leaving the program
— What behaviors would you prefer? What do you actually see?
• Regarding long-term outcomes, think 6–12 months:
— Imagine your client 6–12 months after leaving the program
— What values, attitudes and status would you prefer as the fullest extent of benefit for the client? What do you actually see?
• Now “chain” the short-term, intermediate and long-term outcomes by applying the following sentence to them:
— “If this short-term outcome occurs, then the intermediate outcome occurs, and if this intermediate outcome occurs, then this long-term outcome occurs.” — AGAIN, don’t worry about getting it perfect — trust your intuition.
Planning Your Outcomes Evaluation — Step 3:
Selecting Indicators
Preparation
• Read Step 3 in the UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page) – especially look at the examples on pages 66–67.
• Identify at least one indicator per outcome (note that sometimes indicators are called performance standards).
• When selecting indicators, ask:
— What would I see, hear, or read about clients that means progress toward the outcome?
— Include numbers and percents regarding the clients’ behavior (see the small sketch after this list), e.g., “2,000 (50%) of our participants will quit smoking by the end of the program” and “3,000 (75%) of our participants will quit smoking one month after the program”.
— If this is the first outcomes plan that you’ve ever done or the program is just getting started, then don’t spend a great deal of time trying to find the perfect numbers and percentages for your indicators.
• Fill in your indicators in the Framework for a Basic Outcomes-Based Evaluation Plan. Also, carry over the outcomes you identified from the example logic model to the basic evaluation plan.
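As a quick way to keep indicator numbers and percentages consistent, here is a minimal sketch; the participant total of 4,000 is an assumption implied by the example’s own figures (2,000 is 50% of 4,000, and 3,000 is 75% of 4,000):

```python
# Minimal sketch: convert an indicator's percentage into a head count.
# The 4,000 participant total is an assumption implied by the example
# above (2,000 = 50% and 3,000 = 75% of 4,000); substitute your own.

def indicator_count(total_participants, percent):
    """Participants an indicator expects to reach, given a percentage."""
    return round(total_participants * percent / 100)

total = 4000
print(indicator_count(total, 50))  # 2000 quit by the end of the program
print(indicator_count(total, 75))  # 3000 quit one month after the program
```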
Planning Your Outcomes Evaluation — Step 4:
Planning Data/Information Collection
Preparation
• Read Step 4 in the UW book Measuring Program Outcomes: A Practical Approach (1996) if you have it (otherwise, you’ll still benefit from this section on this web page) — especially look at:
— Page 86 (pros and cons of data sources)
— Page 88 (major data collection methods)
— Pages 90–93
• A useful resource at this point might be Overview of Useful Methods to Collect Information.
• Now might be the best time to get some evaluation expertise, for example, a consultant, or to utilize a local nonprofit service provider to help you review your drafted outcomes and indicators. The expert is also worth their “weight in gold” when reviewing methods to collect data.
Get Your Work Reviewed Now By Others
• If you’ve drafted outcomes and indicators yourself, get them reviewed by:
— Board members
— Staff
— Clients in the program? Clients finished with the program?
— An evaluation consultant?
Identify Data Sources and Methods to Collect Data
• For each indicator, identify what information you will need to collect/measure to assess that indicator. Consider:
— Current program records and data collection
— What you see during the program
— Ask staff for ideas
• Is it practical to get that data?
— What will it cost?
— Who will do it?
— How can you make the time?
• When should you collect data?
— Depends on the indicator
— Consider: before/after the program, 6 months after, 12 months after
• Data collection methods:
— Questionnaires?
— Interviews?
— Surveys?
— Document review?
— Other(s)?
• Get evaluation consultant/expertise?
• Pretest your data collection methods (e.g., have a few staff quickly answer the questionnaires to ensure the questions are understandable).
• Write a brief procedure (a minimal sketch follows this list) to specify:
— What data is collected?
— Who collects it?
— How do they collect it?
— When do they collect it?
— What do they do with it?
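As one way to make that brief procedure concrete, here is a minimal, hypothetical sketch of a procedure written as a structured record; every field value is an invented example, not a prescription:

```python
# Hypothetical data-collection procedure written as a structured
# record. Every field value is an invented example; adapt freely.
procedure = {
    "indicator": "percent of participants who quit smoking",
    "data_collected": "smoking status (yes/no) per participant",
    "collected_by": "program coordinator",
    "collection_method": "follow-up questionnaire",
    "collection_schedule": ["end of program", "6 months after", "12 months after"],
    "disposition": "entered into the program spreadsheet; summarized quarterly",
}

for field, value in procedure.items():
    print(f"{field}: {value}")
```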
Planning Your Outcomes Evaluation — Step 5:
Piloting/Testing
• If yours is a small nonprofit, then it’s very likely that you don’t have nearly the resources to invest in applying your complete outcomes evaluation process in order to test it out.
• In that case, the first year of applying your outcomes process is the same as piloting your process.
• During the first year, notice problems, improvements, etc.
• Document these in your evaluation plan.
• If something happens to you so that you leave the organization, the organization should not have to completely recreate an outcomes plan. Be sure to write down any suggestions to improve the plan.
Planning Your Outcomes Evaluation — Step 6:
Analyzing/Reporting Results
Preparation
• Strongly consider getting evaluation expertise now to review not only your methods of data collection mentioned above, but also how you can analyze the data that you collect and how to report the results of those analyses.
• Before you analyze your data, always make and retain copies of your data.
Analyzing Your Data
• For numerical data (ratings, rankings, etc.) — see the sketch after this list:
— Tabulate the information, i.e., add up the ratings, rankings, yes’s and no’s for each question.
— For ratings and rankings, consider computing a mean, or average, for each question.
— Consider conveying the range of answers, e.g., 20 people ranked “1”, 30 ranked “2”, and 20 people ranked “3”.
• To analyze comments, etc. (that is, data that is not numerical in nature):
— Read through all the data.
— Organize comments into similar categories, e.g., concerns, suggestions, strengths, etc.
— Label the categories or themes, e.g., concerns, suggestions, etc.
— Attempt to identify patterns, or associations and causal relationships, in the themes.
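To make the numerical steps concrete, here is a minimal sketch that tabulates ratings for one question, computes the mean, and conveys the range; the ratings mirror the 20/30/20 example above:

```python
from collections import Counter
from statistics import mean

# Hypothetical ratings for one question, mirroring the example above:
# 20 people ranked "1", 30 ranked "2", and 20 people ranked "3".
ratings = [1] * 20 + [2] * 30 + [3] * 20

tally = Counter(ratings)  # tabulate: how many people chose each answer
for value in sorted(tally):
    print(f'ranked "{value}": {tally[value]} people')

print(f"mean rating: {mean(ratings):.2f}")         # average for the question
print(f"range: {min(ratings)} to {max(ratings)}")  # convey the spread
```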
Reporting Your Evaluation Results
• The level and scope of information in the report depend on for whom the report is intended, e.g., funders, board, staff, clients, etc.
• Be sure employees have a chance to carefully review and discuss the report before it is sent out.
• Funders will likely require a report that includes an executive summary – the summary should highlight key points from the evaluation; it is not a Table of Contents.
Example of Evaluation Report Contents
• Title Page (name of the organization that is being evaluated, or that has a product/service/program that is being evaluated; date)
• Table of Contents
• Executive Summary (one-page, concise overview of findings and recommendations)
• Purpose of the Report (what type of evaluation(s) was conducted, what decisions are being aided by the findings of the evaluation, who is making the decision, etc.)
• Background About the Organization and the Product/Service/Program That Is Being Evaluated
— a) Organization Description/History
— b) Product/Service/Program Description (that is being evaluated)
— — i) Problem Statement (in the case of nonprofits, a description of the community need that is being met by the product/service/program)
— — ii) Overall Goal(s) of the Product/Service/Program
— — iii) Outcomes (or client/customer impacts) and Performance Measures (that can be measured as indicators toward the outcomes)
— — iv) Activities/Technologies of the Product/Service/Program (general description of how the product/service/program is developed and delivered)
— — v) Staffing (description of the number of personnel and roles in the organization that are relevant to developing and delivering the product/service/program)
• Overall Evaluation Goals (e.g., what questions are being answered by the evaluation)
• Methodology
— a) Types of data/information that were collected
— b) How data/information were collected (what instruments were used, etc.)
— c) How data/information were analyzed
— d) Limitations of the evaluation (e.g., cautions about findings/conclusions and how to use the findings/conclusions, etc.)
• Interpretations and Conclusions (from analysis of the data/information)
• Recommendations (regarding the decisions that must be made about the product/service/program)
• Appendices: the content of the appendices depends on the goals of the evaluation report, e.g.:
— a) Instruments used to collect data/information
— b) Data, e.g., in tabular format, etc.
— c) Testimonials, comments made by users of the product/service/program
— d) Case studies of users of the product/service/program
— e) Logic model
— f) Evaluation plan with specified outcomes, sources to collect data, data collection methods, who will collect data, etc.
Useful Online Resources
Note that specific online resources are listed above in the sections in which those
resources are most appropriate.
General Resources
Program Evaluation
What is a Program Logic Model? (logic model captures inputs, activities, outputs,
outcomes)
Program Manager’s Guide to Evaluation
Outcome Indicators Project
Maran Subramain on Communications Challenges in Evaluation
Measuring Outcomes
Developing a Plan for Outcomes Measurement