Building Bridges

A, B, C, or All of the Above?

Have you ever taken a learning assessment (aka a test) that was frustratingly difficult or fabulously easy? Did you get distracted taking the test, wondering, “What were they thinking?” as you tried to regurgitate trivial facts or untangle a convoluted question and its series of responses?

Early in my consulting career, I was asked to review a learning assessment written by a content expert. It was on a topic I knew very little about. I ended up passing the assessment by selecting the answers that seemed correct, without even reading the questions!

That experience set me on a path to understand why so many learning assessments are so easy to pass. My analysis and research resulted in the creation of our Writing Effective Learning (WEL™) Assessments workshop.

In the workshop, we discuss the WHY of creating learning assessments (focusing on the learner rather than simply checking the box that a course was completed) and the WHEN (hint: NOT as an afterthought once the course is ready to launch).

The real not-so-secret key to success in writing learning assessments is correlating learning objectives to questions, and we spend some time on this in the workshop. A guideline we discuss is to create 2-3 test questions (items) for each learning objective; in addition, each item should support at least one learning objective. That eliminates many unnecessary questions that simply ask the learner to regurgitate facts they could easily look up. It also raises the bar, because substantive questions take more effort to write.
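For larger question banks, that mapping is easy to check mechanically. Here is a minimal sketch in Python, assuming hypothetical objective names and item IDs; only the 2-3 guideline comes from the workshop:

```python
# Minimal sketch of an objective-to-item coverage check.
# Objective names and item mappings below are illustrative assumptions.
from collections import Counter

# Each test item lists the learning objective(s) it supports.
items = {
    "Q1": ["Identify common item-writing flaws"],
    "Q2": ["Identify common item-writing flaws", "Rewrite flawed items"],
    "Q3": ["Rewrite flawed items"],
    "Q4": [],  # an item supporting no objective is a candidate for removal
}

# Count how many items support each objective.
coverage = Counter(obj for objectives in items.values() for obj in objectives)

# Guideline: 2-3 items per objective, and every item supports an objective.
for objective, count in coverage.items():
    if not 2 <= count <= 3:
        print(f"Objective '{objective}' has {count} item(s); target is 2-3.")

for item, objectives in items.items():
    if not objectives:
        print(f"Item {item} supports no objective; consider cutting it.")
```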

We spend most of the workshop identifying common issues and rewriting questions. Common issues include using “all of the above” too frequently, writing responses that vary noticeably in length and level of detail (the longest, most detailed option is often a giveaway for the correct answer), and relying on two-option questions (think “true/false”), which invite guessing.

Once the questions are prepared, how do we know they are clear, not overly complex, and not too easy? There are many ways to “test” the test questions, including having target learners, “naïve” learners (those who are unfamiliar with the material), and content experts complete the test and provide feedback to the test designers.

Once the course is launched with its learning assessment, do we simply give the learner a pass or fail score, and on a fail, require retakes until the learner passes? Or do we conduct an item analysis to identify items that are commonly answered incorrectly, then correct the item or the training?
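For teams that track results in a spreadsheet or LMS export, the core of that item analysis is just a proportion-correct calculation per item. Here is a minimal sketch in Python, assuming a small made-up set of responses and an illustrative 50% flag threshold (neither comes from the workshop):

```python
# Minimal item-analysis sketch: flag items that most learners miss.
# The response data and the flag threshold are illustrative assumptions.

# Each row is one learner's results: item ID -> answered correctly?
responses = [
    {"Q1": True,  "Q2": False, "Q3": True},
    {"Q1": True,  "Q2": False, "Q3": False},
    {"Q1": True,  "Q2": True,  "Q3": True},
    {"Q1": False, "Q2": False, "Q3": True},
]

DIFFICULTY_FLAG = 0.5  # assumed threshold: flag items under 50% correct

def item_difficulty(item: str) -> float:
    """Proportion of learners who answered the item correctly."""
    correct = sum(1 for r in responses if r[item])
    return correct / len(responses)

for item in sorted(responses[0]):
    p = item_difficulty(item)
    note = "  <- review this item (or the training)" if p < DIFFICULTY_FLAG else ""
    print(f"{item}: {p:.0%} correct{note}")
```

An item flagged this way is not automatically a bad question; it may instead point to a gap in the training itself, which is exactly the decision the analysis is meant to prompt.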

I’d like to propose that, over the next few weeks, you review your learning assessment strategy, prepare a plan to create more robust tests that are aligned with the learning objectives, and find various ways to “test the test.” I’d be happy to weigh in if you’d like to brainstorm some ideas together.