Practice. Feedback. Repeat.

When I was four, I wanted to be a ballet dancer.

My mother enrolled me in dance classes and for the next 13 years I went to ballet three times a week—spending an hour or two each lesson practicing exercises designed to build my skills and refine my craft.

By the time I was nine, I wanted to be a swimmer. I’d visited the Australian Institute of Sport on a school excursion and decided that was the place for me. I joined our local swimming club and, four or five mornings a week, went to swimming training from 5:30am, repeatedly practicing drills and races and form.

Well, I didn’t become a dancer and I’m not a professional swimmer. Instead, I decided I wanted to become an evaluator.

Unlike many in the field, I’m not an ‘accidental evaluator’—someone who fell into the profession and realized along the way they enjoyed evaluation. Instead, I explicitly set out to become an evaluator after years in development programming left me a little doubtful about the value of my work.

After deciding to become an evaluator, what did I do? I enrolled in a Master's program at a university in Australia, which allowed me to take classes in evaluation.

In those classes, I…

  • Went to lectures where people talked about doing evaluation…
  • Read books and articles about other people doing evaluation…
  • And wrote academic papers about evaluation theories and concepts.

At no point did I actually do an evaluation.

Don’t get me wrong, I learnt a great deal through that program. But, in contrast to my earlier efforts to become a dancer, and then a swimmer, I didn’t have the opportunity to practice—under the supervision of an expert—those critical tasks that make up evaluation work. Nor did I have the opportunity to get feedback on the quality of my performance.

In short, the opportunities for practice and feedback that were so readily available to a would-be dancer and a would-be swimmer were not available to a would-be evaluator.

Developing expertise

Research on expertise shows that deliberate practice—i.e. repeated practice of critical tasks—is one of the best predictors of performance across a wide range of domains. Studies have shown this to be the case for violinists, athletes, chess players, medical practitioners, even educators. The more we engage in deliberate practice—repeatedly practicing things just outside our comfort zone, then receiving quick corrective feedback on our performance—the more we improve.

Yet for evaluators, there are limited opportunities to engage in ongoing practice of critical evaluation tasks like interpreting evaluation contexts, designing evaluations or communicating findings.

A practice-based approach to evaluation training

I’d like to argue for a practice-based approach to evaluation training: one that allows would-be evaluators to engage in repeated practice of critical evaluation tasks, such as meeting with stakeholders to determine the parameters of an evaluation, designing evaluations to align with those parameters, and delivering evaluation findings, all while receiving feedback on their performance from more experienced others. I call this the EvalPractice model.

In a recent research-on-evaluation study, an initial pilot of the EvalPractice model suggested that this approach has the potential to improve foundational evaluation capabilities like situation awareness. This is great, but only the beginning.

So what?  

Well-known evaluator Dan Stufflebeam argued that “the success and failure of evaluation as a profession depends on sound evaluation [training] programs that provide a continuing flow of excellently qualified and motivated evaluators.”

It may be that we need a new way of training our new and novice evaluators if we hope to support the continued advancement of our field.

If you’d like to talk about ways to embed practice and feedback into your evaluation training, contact us at CERE.
