Looking for Gold: Vaulting through Evaluation Challenges
We’re all enjoying the Rio Olympics right now and witnessing incredible athletic feats. Some sporting achievements are very easy to measure. Take track and field, for example: in a race, the runner who crosses the finish line first wins. In soccer or water polo, the team that scores the most goals wins the game.
In other Olympic sports, however, the winner is not nearly so clear-cut. In gymnastics, an athlete performs a routine to the best of her (or his) ability. The judges score her on many elements, and the highest score among the athletes determines the winner. The gymnast will not know immediately after her routine whether she won, because her competitors perform after her.
Similarly, instructional designers have no immediate, clear-cut way of knowing whether their training “stuck” and whether learners are able to apply what they’ve learned to their jobs. With evaluations, however, instructional designers can learn a great deal about a training program’s success.
The Kirkpatrick evaluation levels assess learner reaction (level 1), learning (level 2), application of skills to the job (level 3), and business results (level 4). Challenges can arise in executing each evaluation level. Let’s look at a few of the challenges in levels 1-3 and see how instructional designers might mitigate them.
Measuring learner reaction, level 1, can be done with a course evaluation form, a show of learners’ hands, verbal check-ins, or a “dashboard” that tracks the pace of the course and learners’ energy levels. The trainer notes learner reactions during the course and can use them to amend the course both “on the fly” and afterward. Clients may, however, read too much into level 1 results and want to change the learning solution without further assessment. It is important to explain to clients exactly what level 1 assessments measure, which is solely learner reaction, and to use other evaluation tools as well.
Level 2 evaluations, or measurements of learning, frequently take the form of final written tests or simulations. However, writing assessment questions can be a challenge in itself. If the questions are too difficult for learners to understand, the validity of the test suffers. Similarly, it can be hard to write “distractor” answer choices that are not obviously incorrect. After all, some people can pass a learning assessment with flying colors without ever taking the class or having done the job for which they are training.
Another issue for instructional designers is that level 2 “test” success does not necessarily correlate with on-the-job achievement; extensive knowledge may not translate into professional efficacy. To test for skills, ask assessment questions that explore the best way to execute job tasks or make on-the-job decisions, rather than questions that seek rote knowledge.
Level 3 evaluations measure application of skills to the job, often via learner or manager surveys, interviews with learners or managers, or on-the-job observation. An important consideration when measuring at level 3 is to ensure appropriate resources and tools — coaching, performance support, etc. — are in place to reinforce learning and support behavioral change.
The “success case method,” created by Robert Brinkerhoff, may also reveal successful on-the-job behavioral changes. With this method, stakeholders interview a sample of learners, typically high performers, to determine which factors made them successful, and then corroborate that success with independent evidence. Because these strong learners share the influences that contributed to their success, their interview responses can yield clues to help less successful learners.
The success case method can be very effective, giving stakeholders qualitative feedback beyond data, says Paula Spizziri, an EnVision consultant. “Sometimes you don’t know what’s behind the curtain…this gives you that,” she said. However, an organization using the success case method needs to allocate time and resources for the interviews and independent data collection.
While all evaluation methods come with challenges, many of these hurdles can be mitigated. And because learning success, like a gymnast’s score, may not be apparent immediately, the evaluation process is what reveals it, and it ultimately strengthens your training program.