Bubble Wrap Your Evaluation

When you buy something fragile, shop assistants around the world will often ask if you want it wrapped in protective packaging. Depending on where you are, this might be newspaper, tissue paper, or the much-beloved bubble wrap.

Alas, the same can’t be said for evaluation.

Unsurprisingly, when you decide to begin an evaluation there’s typically no salesperson standing by asking if you’d like to protect your new investment from the bumps and bruises of the outside world.

But what if there was?

In a post a few weeks ago, we described 10 evaluation risk factors: structural and contextual features that tend to occur only in evaluations where things go wrong. These risk factors included things like commissioners who had no idea why they were doing the evaluation (“Our funders told us we had to!”), staff turnover (“Sorry, that person no longer works here”), or complicated reporting and accountability structures.

In the same research study we used to identify this list of risk factors, we also identified seven protective factors—i.e., elements of an evaluation context that seem to play the role of bubble wrap and protect evaluation processes (and products) from some of the ups and downs of doing evaluation in the outside world.

Much like protective factors in health and youth programming help to mitigate or eliminate risks for individuals, families or communities, protective factors in evaluation serve as extra padding against the political, organizational or interpersonal factors that might undermine and complicate an evaluation process.

In our study, evaluations tended to turn out well when:

  1. Commissioners had a clear understanding of why they were doing the evaluation, and how they planned to use its findings.
  2. There were strong, positive connections—both between the evaluation team and their counterparts at the organization being evaluated, and amongst those in the program team itself.
  3. Organizational leaders publicly demonstrated their commitment to this specific evaluation: seeing it through, and learning from its findings.
  4. The evaluation team had easy access to the people, places and data they needed to do the evaluation.
  5. Organizational leaders and program staff were open—explicitly so—to diverse views.
  6. The evaluation team had the appropriate skills and capabilities for this specific evaluation, and
  7. The program being evaluated already had systems in place to collect high-quality data.

Cumulative effects

Like our risk factors, protective factors seemed to have cumulative effects. When evaluation scenarios had 3 or more protective factors (and 1 or fewer risk factors), things tended to go well. In contrast, when evaluation situations had 3 or more risk factors (and 1 or fewer protective factors), things tended to get a little sticky.

Implications for practice?

If you’re starting out on an evaluation journey, here are three ways you might use our list of protective factors to bubble wrap your evaluation.

  1. Situational assessment. Use the list as a checklist to see how many protective factors you already have in place. This will give you a sense of how your evaluation might fare against the bumps and bruises that could emerge throughout an evaluation journey!
  2. A guide for planning. Alternatively, use the list as a guide for planning your next evaluation so that you can prepare the ground—up front—to make sure you have as many protective factors in place as possible.
  3. A conversation starter. Finally, use the list as a discussion guide to inform early conversations between an evaluation commissioner and an evaluation team. Use it as a way to explicitly articulate the principles you want to have in place: a purposeful evaluation design, strong relationships, leadership support, openness to diverse views…so that there is a shared understanding up front about the role these might play in supporting your evaluation’s success.