Maximize Your Evaluation Dollars

Date Published: July 1, 2006
By Edwin Zedlewski and Mary B. Murphy

You are a State program administrator and want to know the impact your programs have. One statewide program provides mentors to both teens and their parents. Should you try to discover whether the mentored teens are less prone to delinquency? If you find that they are, should you dig deeper and determine if it is because of the teen mentor or the parent mentor?

You are a county manager who funds a local program that makes housing and transitional services available to offenders returning to their communities. Could an evaluation decipher which aspects of the program are the most influential in determining whether clients recidivate?

You manage a Federal program supported in part by funds from an Attorney General’s initiative to make troubled families more functional. How can you increase the program’s prospects for success?

One of the most important aspects of managing a criminal justice program is ensuring that the program is meeting its objectives. An evaluation is the best way to accomplish that.

But evaluations can be expensive, particularly evaluations to identify the precise impact a program is having. A rigorous, scientific impact evaluation typically costs NIJ between $500,000 and $1.5 million. A poor choice about which programs are suitable for evaluation is more than just a waste of time—it’s a waste of millions of dollars.

The NIJ Approach: An Evaluability Assessment

NIJ has developed a way to identify programs that are likely to yield evaluations that maximize the agency’s return on its investments. By adopting NIJ’s approach, program administrators at all levels of government may save considerable time and money.[1]

The first step is to assess a program’s “evaluability”—that is, to gauge which programs can sustain a rigorous outcome evaluation. The evaluability assessment takes 1 to 5 days and is guided by some common-sense questions:

  • Are program components stable or still evolving?
  • Can we trace logical and plausible connections between a program’s activities and its intended outcomes?
  • Are there enough cases or observations to permit statistically robust conclusions?
  • Can we isolate the program’s effects from other related forces operating in the community?

Many programs can be summarily rejected after answering these initial questions. For example, a program that has few participants would be unsuitable for a rigorous, scientific evaluation. Similarly, one that would require 10 to 20 years of follow-up is not a practical candidate for a low-cost, 2-year evaluation.
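
To see why small programs fall out at this stage, consider a rough sample-size calculation. The sketch below is illustrative only: it assumes a simple comparison of two proportions (say, recidivism rates in served and unserved groups), and the rates, significance level, and statistical power are hypothetical figures chosen for the example, not drawn from NIJ's work.

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a change
    from proportion p1 to proportion p2 (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_power = norm.ppf(power)           # z-score for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical target: detect a drop in recidivism from 50 to 40 percent.
print(n_per_group(0.50, 0.40))   # about 385 participants per group
```

By this rough arithmetic, a program serving a few dozen clients a year cannot yield samples anywhere near that size within a 2-year window.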

Take a Closer Look

Next, NIJ reads the complete files of potential programs. Programs that are funded through a grant, for example, will have a grant application that explains the program’s goals and activities, developmental history, quality of the data systems, and numbers of clients served. Typically, the initial screening involved in this step reduces the list of candidates to 20 to 25 percent of the original pool.

If additional insight is needed, evaluators can conduct telephone interviews with the program’s management, review progress reports and other grant materials, and gather other information to answer outstanding questions about the programs. They should ask the following questions:

  • What do we already know about programs like these from the research literature?
  • What could an evaluation of this program add?
  • Which audiences would benefit from an evaluation and what could they do with the findings?
  • Are the program managers interested in being evaluated?
  • Is the program director already planning an evaluation? If so, evaluators should further inquire:
    • What data systems exist that would facilitate an evaluation?
    • What key data elements are contained in these systems?
    • Are there data to estimate unit costs of services or activities? (A brief worked example follows this list.)
    • Are there data about possible comparison samples?
    • How useful are the data systems to an impact evaluation?
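
The unit-cost question comes down to simple arithmetic once spending and service counts are in hand. The figures below are hypothetical, chosen purely for illustration:

```python
# Hypothetical figures for one fiscal year of a transitional-services program.
annual_budget = 300_000       # total program spending, in dollars
sessions_delivered = 2_400    # service contacts logged for the year

cost_per_session = annual_budget / sessions_delivered
print(f"${cost_per_session:,.2f} per service contact")   # $125.00 per service contact
```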

Program managers must be able to explain how the program’s primary activities contribute to its eventual goals and identify other local programs serving similar populations that could be used for outcome comparison.

Conduct a Site Visit

If the program seems promising after a rigorous screening, a site visit may be in order. Site visits usually take an entire day and spark rich interactions that reveal operational strengths and flaws that might not otherwise be visible.

During a site visit, evaluators should determine:

  • If the program is being implemented as described in the application.
  • What components of the program would be the most sensible to evaluate.
  • What outcomes could be assessed and by what measures.

Next, evaluators should speak with the following individuals:

  • Key program staff. Do staff members tell consistent stories about the program? Are their backgrounds appropriate for the program’s activities?
  • Program partners. What services do partners provide or receive? How integral are they to the success of the program? What do partners see as the program’s strengths and weaknesses?
  • Program director. Does the director understand the demands that an evaluation will place on staff? Will the director make the changes necessary to support the evaluation?

Assess the Target Population. Evaluators should determine a number of factors about the target population—its size, its characteristics, and the way in which program staff identify it. Is entry into the program voluntary? Who will be excluded from the program? Evaluators also must learn if participants’ characteristics have changed over time, and whether there are shortcomings or gaps in how the program delivers the intervention.

Evaluators then must decide whether to interview members of the target population or program participants. If interviews are conducted, participants should be asked what they think the program does and how they would assess the services received. This information is invaluable in assessing the success of the program, identifying problems in its implementation, and improving the delivery of services in the future.

Examine the Data. Evaluators should then examine data systems to identify what kind of data are available; whether the data are complete; whether routine reports are produced; and what specific input, process, and outcome measures the data support. Do the data systems follow participants over time, and if so, do the records allow evaluators to identify services delivered to each individual?

Evaluators need data systems that are organized, complete, and current—or else be prepared to spend considerable time and resources collecting data and implementing quality control measures.
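
As a concrete illustration, the sketch below shows the kind of quick completeness check an evaluator might run if service records were exported to a flat file. The file name and column names are invented for the example; nothing in this article prescribes a particular data system or tool.

```python
import pandas as pd

# Hypothetical export of a program's service log; the file name and
# column names (participant_id, service_date, service_type) are invented.
records = pd.read_csv("service_log.csv", parse_dates=["service_date"])

# Share of missing values in each field: a first read on completeness.
print(records.isna().mean())

# Do the records follow participants over time? Count contacts per person
# and measure the span between each person's first and last service dates.
dates = records.groupby("participant_id")["service_date"]
print(dates.count().describe())
print((dates.max() - dates.min()).describe())
```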

Select Evaluation Design. Using the information gathered during the screening and site visit, evaluators must then determine the best evaluation design. The answers to a few key questions will aid in that decision:

  • Are there enough participants for evaluators to make random assignments to test and control groups? (See the sketch after this list.)
  • If there are not enough participants, can the evaluator find a highly comparable group (with similar demographics, risk factors, and so forth) that does not receive services?
  • How large would program and comparison samples be after the intended period of observation?
  • What services would a control or comparison sample receive?
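
Where numbers permit, the first question reduces to a simple randomization step. The sketch below is purely illustrative: a bare-bones random split over a hypothetical roster of participant IDs. Real trials typically add refinements such as stratification or blocking, which this article does not cover.

```python
import random

def assign_groups(participant_ids, seed=2006):
    """Randomly split a participant roster into test and control groups."""
    rng = random.Random(seed)   # fixed seed keeps the assignment reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    midpoint = len(ids) // 2
    return ids[:midpoint], ids[midpoint:]

# Hypothetical roster of 40 program applicants, identified by number.
test_group, control_group = assign_groups(range(1, 41))
```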

Finalizing the Assessment Recommendation

At the conclusion of the assessment process, evaluators write a report that recommends whether the program should be evaluated. The reports typically contain all the information collected, including sample data forms and program brochures, and discuss the ramifications of various design options.

Evaluability assessments not only guide decisions about which programs are good candidates for an outcome evaluation, but also help evaluators develop the research design and estimate the cost. Assessments also initiate and foster relationships that will prove helpful when evaluations reach rocky points and negotiations become necessary.

This process has worked well for NIJ. State and local agencies can achieve a similar level of success and minimize evaluation risks by following NIJ’s approach to evaluability assessments.

About the Authors

Edwin Zedlewski is the Acting Deputy Assistant Director for Research and Evaluation at NIJ. Mary B. Murphy is the Managing Editor of the NIJ Journal.

About This Article

This article appeared in NIJ Journal Issue 254, July 2006.
