Testing and Evaluation: The Top Ten List
We would like to thank Dr. Barbara Martin for her insights on testing
and evaluation (with a wink and a nod to David Letterman).
Many organizations and companies are jumping on the “let’s
test after training” bandwagon. And they have good reasons for
making the jump. Companies want to know if their training dollars are
reaping the intended benefits. It is certainly important to ask if
participants have acquired the knowledge, skills, and attitudes needed
to help companies meet their business goals. However, testing is a
huge undertaking. Writing tests is hard work and expensive. Likewise,
checking whether transfer of training has occurred is time-consuming
and difficult to assess. It’s easy to throw up one’s
hands and jump off the bandwagon before even getting on. With that
in mind, here is a testing and evaluation Top Ten List to get you thinking
about how to maximize your jump onto the testing bandwagon.
- If testing were simple, everyone would do it!
Pat yourself on the back for deciding to start a testing program.
Begin by deciding what kind of information will give you the most benefits
from testing. Then start small and learn from your experiences.
- To address
objective-based testing, write a test blueprint or specification.
A testing blueprint is an overview of the number and types of questions
that will be written for each objective. Preparing a blueprint gives
you a big picture perspective of how many items you will need to prepare.
You may decide that there is not enough time in this life (and the
next) to write all the items you would like to. The blueprint lets
you make decisions before you ever write the first item.
- Checklists are valuable tools for assessing products (e.g., tangible
objects), and sometimes the best “test” is a checklist.
Because a checklist itemizes tasks and skills, its components are often
the enabling skills for the larger skill. If you have a good checklist,
you may not need to create additional tests.
- You don’t have to test everything you teach!
Use a risk rating diagram to help you determine which objectives are
most important to test. Test those objectives that have the “greatest
chance of error” and the “greatest consequence of error.”
- If you only have
time to test one objective, test the key or terminal objective.
Developers often forget to test the most important objective -- the
key or terminal objective. This objective is the one that most closely
mirrors on-the-job performance and should always be tested even if
you cannot test the enabling objectives.
- Match the test to on-the-job performance.
Test items should reflect as closely as possible what participants
are expected to do on-the-job. Test the application of concepts and
principles and problem solving before you test recall and recognition.
- A well-written objective is the be-all and end-all of writing good
test items.
Since objectives specify what participants will be learning and how
they will be performing, there should be a one-to-one match between the
learning objectives and the test items or checklists. Be sure each
objective includes an action, the conditions under which testing will
take place, and a standard of performance.
- If attitude change is important, give the participants a questionnaire
or survey to see if your training is having an impact.
We want participants to value the content we are teaching because,
if they do, they are more likely to use what they have learned back
on the job. However, we need to know whether we have influenced their
attitudes. Administer a questionnaire or survey to check.
- Good tests and
checklists must be valid and reliable.
It is fairly easy to write test items and checklists, but much harder
to write valid and reliable ones. Use subject matter experts (SMEs) to
check for content accuracy, and then conduct a pilot test to make sure
that your tests and checklists are valid and reliable.
And the Number 1 Tip:
- The only thing
that really matters is whether participants can transfer what they
have learned in training to their jobs.
Good tests administered immediately after training ask learners to
apply what they have learned; that is, they evaluate whether learners
can transfer skills and knowledge back to the job. Once participants
are back on the job, data from follow-up surveys and interviews will
help companies determine whether participants are correctly performing
the skills they learned.
Barbara Martin is an active DSA associate and teaches The Instructional
Developer Workshop, The Course Developer Workshop, and The Criterion
Referenced Testing Workshop. She has written many articles and
an award-winning book on designing instruction for the affective domain.
Article © 2006
Darryl Sink & Associates, Inc.