no one ever listens to me, anyhow...
post in response to what I would tell an NPO board about outcome evaluation:
If I were invited to do a presentation on this particular subject (to ANY NPO board), here are the points I would raise (in no particular order):
1) NPO's do not have the financial or staff resources to do rigorous outcome evaluations.
2) NPO's, many times, do not have the opportunity to choose the outcomes that they want or need to track.
3) Outcome evaluation, in many ways, is more about fulfilling funding requirements than about collecting relevant data that will assist NPO's in improving service delivery and better serving clients.
4) In short, outcome evaluation is NOT client-driven; it is funder-driven.
5) NPO Boards should serve as advocates to change #4.
6) The primary (ideal?) purpose of NPO outcome evaluation is to discern, as reasonably as one can, the extent to which the NPO is meeting its mission; this is the litmus test for NPO's -- their bottom line, if you will. A for-profit's bottom line is profit; an NPO's bottom line is achieving its mission.
7) Outcomes do not tell the entire story, especially when only quantitative outcomes are collected.
8) Implementation fidelity is a challenge for outcome evaluation, as it is for any program.
9) Outcome evaluation should never be used as a performance measure for staff; that is not its purpose (see #6 above).
10) Outcomes that can actually assist staff in improving service delivery and better meeting the needs of clients should be tailored to that agency; standardized outcomes are more about hoop-jumping than anything else.
11) The notion of setting targets for outcomes is anathema to client-centered service delivery. A host of factors beyond an agency's control influence client outcomes, so setting a target for client change is naive: an agency cannot influence client outcomes the way it could if clients were employees (i.e., through rewards and/or sanctions).
12) NPO outcome evaluation is not an exact science; many clients refuse to provide outcome data, do not return for services after an initial visit, etc. Hence the extent to which one can actually claim generalizable findings for specific programs is tempered much, if not most, of the time.