This is the final post in a series on systematic education design. (Check out previous topics: introduction, needs assessment, objectives, instructional methods and assessment.)
Program evaluation attempts to measure the impact of an educational intervention. The intended, delivered, and received curricula can differ because of numerous learner and environmental factors. Program evaluation should therefore be integrated into the initial design of a curriculum to ensure that its goals are met.
There are numerous curriculum evaluation models; Stufflebeam alone recognizes 22 different models.[1]* However, these models can be broadly categorized as follows:
1. Goal-based
• Compares program inputs with outputs: were the stated goals achieved?
2. Operationally-oriented
• Examines the processes of the program: are they optimal?
3. Expert-oriented
• Relies on external credentialing
– Peer review
– Regulatory bodies
4. Stakeholder-oriented
• Assesses the value of the program to students, society, faculty, etc.
The Kirkpatrick model, first published in 1959 for the evaluation of business training programs, has been the most widely adopted model in health professions education (HPE) curriculum evaluation.
Level 1: Reaction. This level measures participant reaction to and satisfaction with the program and the learning environment.
Level 2: Learning. Changes in knowledge, skills, and/or attitudes constitute learning in the Kirkpatrick model.
Level 3: Behaviour. This level determines whether changes in behaviour have occurred as a result of the program. Kirkpatrick stresses the importance of having information from levels 1 and 2 in order to interpret the results of a level 3 evaluation. Specifically, if no behaviour change occurs, this information helps determine whether the failure reflects participant dissatisfaction with the program (level 1), a failure to accomplish the learning objectives (level 2), or factors beyond the scope of the program (e.g., a lack of desire, opportunity, support, or rewards for changing behaviour).
Level 4: Results. Level 4 examines the final results that occurred because the participants attended the program. Results can be thought of as “the bottom line”: the ultimate impact of the program. For example, in a central-line insertion curriculum, level 1 might be learner satisfaction surveys, level 2 simulation performance, level 3 observed insertion technique at the bedside, and level 4 catheter-related infection rates.
Other educational experts have added a fifth level to Kirkpatrick’s model. Additions include:
- Return on investment[2] (see the formula sketch after this list)
- Societal impact[3,4]
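Phillips’s fifth level expresses program impact as a percentage return on the money invested. As a rough sketch, assuming benefits can be monetized and program costs are fully loaded (this is the standard ROI formulation associated with his work, not a formula quoted from the chapter cited below):

\[
\mathrm{ROI}\,(\%) = \frac{\text{net program benefits}}{\text{program costs}} \times 100 = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

For example, a curriculum costing $80,000 that yields $100,000 in monetized benefits has an ROI of ($100,000 − $80,000) / $80,000 × 100 = 25%.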
The education literature has been criticized for failing to focus program evaluations at the pinnacle level (results). However, David Cook, in his 2013 ICRE plenary address, cautions us about the dangers of this approach. Check out a summary of this work (and a key reference) here.
Please share this series with colleagues about to develop a curriculum. Hopefully, it will prevent the most common CE consultation I receive, “Hey, I’m halfway through building this curriculum, but I have this problem that I didn’t anticipate when I started…”
———————–
* Interestingly, Stufflebeam does not include Kirkpatrick’s stages of training evaluation; apparently, academic program evaluation does not cross-pollinate with corporate training evaluation. Also of note, the term ‘model’ is used in the vernacular rather than a strict sense: many of the ‘models’ Stufflebeam discusses are more precisely ‘approaches.’
References
1. Stufflebeam D. Evaluation models. New Directions for Evaluation. San Francisco: Jossey-Bass; 2001.
2. Phillips J. Measuring the results of training. In: Craig R, ed. The ASTD training and development handbook: A guide to human resource development. 4th ed. New York: McGraw-Hill; 1996:313-341.
3. Kaufman R, Keller J. Levels of evaluation: Beyond Kirkpatrick. Human Resource Development Quarterly. 1994;5(4):371-380.
4. Kaufman R, Keller J, Watkins R. What works and what doesn’t: Evaluation beyond Kirkpatrick. Performance and Instruction. 1996;35(2):8-12.
Image courtesy of Educational Design: A CanMEDS Guide for the Health Professions.