This week the Key Literature in Medical Education podcast tackles one of my favourite papers from 2015. I wish I had thought to write this paper and translate the work of Messick or Kane for Clinician Educators. But who’s kidding who? The authors of the manuscript discussed on KeyLIME are waaay smarter and have done a great job in making scary topics (statistics!) relevant and accessible for educators.
So, if you are afraid of commitment, check out the abstract below before downloading the podcast here.
If you want to see other work by this research group on validity, check out this post from 2013.
– Jonathan (@sherbino)
KeyLIME Session 101 – Article under review:
View/download the abstract here.
Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane's framework. Medical Education. 2015 Jun;49(6):560-75.
Reviewer: Jonathan Sherbino
Background
One of the key components of competency-based medical education (CBME) is programmatic assessment. Programmatic assessment requires longitudinal sampling of representative and authentic performance of trainees. Of course, hidden within this statement is a challenge facing training programs and certification bodies. How do you defend the validity of the high-stakes decisions (i.e. summative assessment) that are the outputs of assessment programs?
Traditional psychometric approaches include:
- Content – how the assessment items are created;
- Criterion – how the scores correlate with gold/reference standards; and
- Construct – how well an abstract concept relates to a concrete trait, where a theoretical link between the abstract and the concrete exists.
This approach to validity struggles both with the number of samples required to determine "competence" (e.g. a competency framework contains a large number of competencies) and with the measurement error of the direct observation assessments typical of workplace-based assessment.
What to do? This paper offers a counterpoint to the traditional understanding of validity, building on the work of Messick and Kane to provide a contemporary approach. In essence: how can we know that our judgments are "true"?
Purpose
“To offer a practical introduction to the key concepts of Kane’s framework that educators will find accessible and applicable to a wide range of assessment tools and activities.”
Type of Paper
Theory Paper
Key Conclusions
The authors conclude “…validation is not an endpoint but a process. Stating that a test has been ‘validated’ merely means that the process has been applied, but does not indicate the intended interpretation, the result of the validation process or the context in which this was done. Secondly, validation ideally begins with a clear statement of the proposed interpretation and use (decision), continues with a carefully planned interpretation/use argument that defines key claims and assumptions, and only then proceeds with the collection and organisation of logical and empirical evidence into a substantiated validity argument. Thirdly, educators should focus on the weakest links (most questionable assumptions) in the chain of inference. Fourthly, in all of the clinical and educational examples cited herein, the Scoring, Generalisation and Extrapolation evidence is fairly strong; only when we attempt to infer actionable Implications, moving from the real world score to specific decisions, do important deficiencies come to light. For this reason, we believe that the Implications and associated decisions are ultimately the most important inferences in the validity argument.”
Spare Keys – other take home points for clinician educators
This manuscript is a great example of translating theoretical concepts into highly relevant, applicable lessons for Clinician Educators. This author group's previous work on simulation for assessment revealed a significant gap in the literature around the validity of tools and programs in practice; this paper bridges that gap by suggesting a workable solution.
Access KeyLIME podcast archives here