Introduction

1. What are family support services?

2. What is evaluation?

3. How can we evaluate family support services?

4. Where does measuring outcomes fit?

5. Why do we want to measure outcomes in family support?

6. How, “in theory”, can we measure outcomes in family support?

7. What are some of the paradoxes and dilemmas in practice? How do we respond?

8. What is realistic? Who can do what?

9. What tools are available on this site for family support services? How can they be used?

Endnote 1: Data collation and analysis

Endnote 2: Feedback and ongoing development

Endnote 3: Connections and Links

Endnote 4: Developing this guide

  Measuring Outcomes in Family Support: Practitioners' Guide Version 1.0

2. What is evaluation?

Evaluation is the process of determining the merit, worth or value of something.

Worthwhileness

It is a process of asking and answering questions about worthwhileness.

  • Has something worthwhile been happening?
  • Is something worthwhile happening?
  • Could something more worthwhile be happening?

There are no value-free evaluations. Worthwhileness implies values.

Key question: Whose values will be used to judge the worth of what is happening?

Evaluation questions

We often judge worthwhileness in terms of effectiveness, efficiency, adequacy and appropriateness.

  • Effectiveness, e.g. are we making a difference?
  • Efficiency, e.g. can we achieve more with less?
  • Adequacy, e.g.
    Are we adequately meeting the needs of the client?
    Are we adequately meeting the needs of the community?
  • Appropriateness, e.g. is what we do appropriate in relation to the wider context, e.g. community attitudes, funding guidelines, and staff skills and expertise?

An evaluation process

Evaluation is a process that includes:

  • Having a purpose
  • Asking a question
  • Identifying the information needed to answer the question
  • Designing and testing a method for collecting the information
  • Collecting the information
  • Analysing the information
  • Determining the answer to the question
  • Using the answer.

Key question: Who are the users of the evaluation? What will they find useful?

Evaluation strategies and tools

Many different strategies and tools are used in evaluation processes, e.g.

  • Interviews
  • Questionnaires
  • Focus groups
  • Assessments
  • Case reviews
  • Peer review
  • Statistical analysis

Key question: What is the most appropriate mix of strategies and tools for any particular evaluation process?

Types of processes to be evaluated

There are many different types of processes to be evaluated, e.g.

  • Manufacturing processes (e.g. making light bulbs)
  • Administrative processes (e.g. doing the payroll)
  • Service processes (e.g. banking)
  • Human service processes (i.e. processes where people change during the process, e.g. counselling)
  • Community development processes (e.g. the community identifying community needs and meeting them).

The characteristics of these processes differ. For example, compared with human service and community development processes, manufacturing and administrative processes:

  • are more precisely defined
  • are more standardised
  • have clearer cause-and-effect links
  • are often made up of countable and measurable steps.

Compared with manufacturing and administrative processes, human service and community development processes:

  • are less precisely defined
  • are more individualised
  • have multiple causes and multiple effects - it is hard to show cause-and-effect links
  • involve people making choices to participate.

In community development processes and some human service processes:

  • Each process is unique - it is not a standardised process
  • The specific goals to be achieved may not be known at the beginning of the processes - it is an open-ended process not a pre-determined one
  • Often the goals to be achieved are difficult to precisely define
  • The steps in the process are often not known in advance; nor are they precisely defined
  • Different people in the service process may have different values and so make different judgements about the worth of the service process
  • People in the processes are part of families, friends, neighbourhoods, work teams, communities
  • Service providers are part of service networks
  • There are many causes and many effects and so it is hard to show cause and effect relationships
  • People in the processes make choices about their commitment and participation
  • People in the processes may want different things
  • There are often conflicts between good service practice and good data collection practice
  • There are often conflicting demands between client confidentiality and data collection
  • Clients may speak many different languages
  • Clients may include people with low literacy levels, physical or intellectual disabilities, or mental health issues - many measurement tools are not suited for use with these clients.

These characteristics affect the appropriateness of the strategies and tools. For example, it is not appropriate to use unit costing in community development processes: unit costing requires standardised processes, whereas community development processes are individualised and open-ended.

Key question: Are the evaluation strategies and tools appropriate for the kinds of processes being evaluated?

Uncertainty, self-delusion and rigour

The differences in characteristics of the various types of processes mean there is more uncertainty in evaluating human service and community development processes than there is in evaluating manufacturing or administrative processes.

For the same reasons, there is a greater possibility of staff in human service and community development processes believing they are doing a good job when they are not (self-delusion), compared with staff in manufacturing and administrative processes. It is obvious if the payroll is not completed on time or a television does not work. It is often far less obvious that a human service or community development process is not working well.

More rigour is required in the evaluation of human services and community development processes (compared with manufacturing or administrative processes) to ensure we are reasonably certain of what is happening in the processes being evaluated.

Rules of thumb for rigour

In human services and community development some rules of thumb for rigour are:

  • Use many different kinds of evaluation strategies (any particular evaluation strategy has limitations; a mix of strategies will help make up for the limitations inherent in any one strategy)
  • Always use a mix of:
    • collaborative reflection and dialogue strategies (people need to discuss such questions as: What are our questions? Whose values will we use to judge the worth of the program? What does this data mean?)
    • listening to people's experiences strategies (for example, we need to hear clients describe what the program was like for them, and staff describe what their work is like for them)
    • facts and figures strategies (for example, we need to know how many clients we have, who they are (gender, age, languages spoken, etc.), how much service they receive, and what the service costs per hour to provide)
  • Ask different people the same things (a client, a family member, a staff person and someone from a funding agency may have different answers to the question: is the service of value for this client? We need to hear all their answers).
  • Involve people with contrary views (because there is a high risk of self-delusion in evaluating human services, it is useful to make sure we hear from our critics - those with different views from ours - and really listen to see if they have a worthwhile point to make).

Rules of thumb for numerical data

The uncertainty inherent in the nature of human service and community development processes means that it is not possible to use numerical data as "the judge" of our performance.

As a rule of thumb, use numerical data in human services and community development to help ask good questions rather than as "the judge" of our performance.

Key question: Given the nature of the change processes we are evaluating, have we sufficient evaluation strategies in place to convince a reasonable person about the worth of what we are doing?