We would like to thank Dr. Barbara Martin for
contributing this month’s tip on Evaluating Learner Performance.
Dr. Martin will be teaching more about this subject at DSA's
Criterion Referenced Testing Workshop in San Francisco.
Did you know that the savvy test-taker can pass most multiple choice tests written
by novice test-writers by either selecting response choice “c” or
the longest response? It’s true. Novice test-writers overuse response “c,” and
they also tend to write longer correct answers because they pack more detail into
the correct response; incorrect responses are often shorter and less detailed.
Check a few of your own self-made tests to see if you have overused response
choice “c” and if you have longer correct answers.
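If your answer key is in electronic form, this check can be scripted. Below is a minimal Python sketch; the item format and function name are our own invention for illustration, not anything from the article:

```python
from collections import Counter

# Hypothetical item format: each test item records the letter of the
# correct response and a mapping of letters to response text.
test_items = [
    {"correct": "c",
     "choices": {"a": "No", "b": "Maybe",
                 "c": "Yes, because the detailed condition applies here",
                 "d": "Never"}},
    {"correct": "c",
     "choices": {"a": "Red", "b": "Blue",
                 "c": "Green, which matches the full specification",
                 "d": "Gray"}},
]

def audit_answer_key(items):
    """Flag two common cues: an overused response position and
    correct answers that are the longest choice."""
    # How often each response position holds the correct answer.
    positions = Counter(item["correct"] for item in items)
    # How many items have a correct response strictly longer than
    # every incorrect response.
    longest_correct = sum(
        1 for item in items
        if all(len(item["choices"][item["correct"]]) > len(text)
               for letter, text in item["choices"].items()
               if letter != item["correct"])
    )
    return positions, longest_correct

positions, longest_correct = audit_answer_key(test_items)
print(positions)        # every correct answer sits at position "c"
print(longest_correct)  # every correct answer is also the longest choice
```

In this toy key, both cues fire: “c” holds every correct answer, and the correct answer is always the longest response.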
Multiple choice tests are the favorite type of test for many training programs
because they allow course developers to test large amounts of content in a relatively
short timeframe. They can also be machine scored. However, good multiple choice
items are difficult to write. Novice test writers often cue learners to the correct
answer without intending to do so. Some of the most common ways to cue learners are:
- Using grammatical cues in the stem, e.g., singular or plural words, “a” or “an”
- Making the correct response longer or shorter than the others
- Overusing one response location for the correct answer
- Using “all of the above” or “none of the above”
- Writing response choices that are not plausible
The first four bullets
address “format” guidelines for multiple
choice items, while the last bullet addresses a “content” guideline.
The format guidelines are relatively easy to apply once you understand them.
Making all the response choices plausible, however, is much more difficult. It is important
to remember that anytime a test-writer uses a response that is not plausible,
e.g., the Easter Bunny or something that would never be chosen as a correct
response, the test-taker is being cued that this response is incorrect. You are giving
the learner a freebie. Just think: if one of the four responses is not plausible,
the test-taker only has to choose among three possible answers. This gives
the test-taker a 33% chance of guessing the correct answer! This is good if
you are the test-taker; bad if you are trying to get an accurate account of
what the learner knows.
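The arithmetic is easy to verify: the chance of a lucky guess is one divided by the number of choices the test-taker still has to consider. A small sketch (the function is ours, purely for illustration):

```python
def guess_probability(total_choices, eliminated):
    """Chance of guessing correctly after eliminating implausible choices."""
    return 1 / (total_choices - eliminated)

# Four plausible choices: a 25% chance of a lucky guess.
print(round(guess_probability(4, 0), 2))  # 0.25
# One implausible choice given away: the odds rise to about 33%.
print(round(guess_probability(4, 1), 2))  # 0.33
```

Every implausible distractor you write moves the odds further in the guesser's favor; with two giveaways, a pure guess succeeds half the time.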
Multiple choice items and tests are a great way to evaluate learner knowledge
and skills, provided you are not giving away the answer by writing poor test
items. If you want to evaluate learners’ performance rather than knowledge,
a better strategy is to use a checklist. We will address checklists in a future tip.
Until next time,
Barbara Martin is an active DSA associate and teaches The Instructional
Developer Workshop, The Course Developer Workshop, and The Criterion
Referenced Testing Workshop. She has written many articles and an award-winning
book on the Cognitive Learning Domain.
Article © 2005 Darryl
L. Sink & Associates, Inc.