Is assessment validity a social imperative? Read on, and check out the podcast here (or on iTunes!)
--------------------
KeyLIME Session 194:
Listen to the podcast.
Reference:
Marceau et al. Validity as a social imperative for assessment in health professions education: a concept analysis. Med Educ. 2018 Jun;52(6):641-653.
Reviewer: Jason Frank (@drjfrank)
Background
KeyLIMErs know that we talk about assessment a lot. Assessment has always been the sexier older sibling to other major aspects of #meded, such as curriculum. Assessment always gets the publications, the grants, the exam committees, the competence committees, the appeals, the lawyers, the Karolinska Prizes, the Hollywood glitz. Curriculum stays home and watches Netflix again. But are we really fooling ourselves into keeping busy with all of this “assessment” activity? Are all these forms and ratings just the meded equivalent of Monty Python’s machine that goes “ping”? (See: https://www.youtube.com/watch?v=arCITMfxvEc). This speaks to the perennial question of validity.
**Warning: this episode of the KeyLIME podcast will be filled with multiple #meded bingo words, like validity, competence, competency, quality, assessment, and professional.**
How do we know that our assessments are “valid” or of high “quality”? On the podcast we often speak admiringly of the Messick approach popularized by David Cook. Here are two classic approaches to thinking about the validity of assessment:
- Validity as a test characteristic (psychometrics)
- Validity as an argument-based chain of evidence (Messick & Kane)
However, perhaps we’ve forgotten that the whole purpose of assessment is to advance learning and to determine levels of ability. After all, society has a stake in assessment too…
Purpose
Enter Marceau et al., a team from Quebec, Canada, that includes prominent meded contributors Meredith Young and Christina St-Onge. In previous work, St-Onge conducted a discourse analysis of meded assessment texts and identified a third major approach: assessment validity as a social imperative. In this study, the authors set out to clarify this emerging concept.
Key Points on Method
The authors chose Rodgers’ evolutionary concept analysis method, borrowed from nursing scholarship, for their task. While this is not a method I have seen before, I found it straightforward and similar to other iterative, qualitative document-analysis approaches.
The authors searched multiple databases in French and English, using search terms derived from St-Onge’s previous discourse paper on assessment validity, and included works published from 1999 (the era of Messick’s publications) to 2016. They applied inclusion and exclusion criteria as well as a data extraction tool. An initial yield of 867 papers was refined to 67 included papers, the majority of which were review articles.
The researchers then looked for text relevant to the concept’s attributes, antecedents, and consequences.
Key Outcomes
The group found some interesting outcomes:
- Antecedents included:
  - Greater calls for social accountability directed at professional bodies
  - Rise of CBME
  - Greater calls to protect patients
  - Ensuring that all graduates meet minimum standards and are prepared for practice
  - Problems with the validity evidence for existing assessment strategies
- Attributes included themes of protecting the public and assessment of learning:
  - Using evidence considered credible to society
  - Validation embedded in assessment processes
  - Validity evidence for score interpretation
  - Justification of assessment via multiple sources of data
- Consequences included:
  - Societal input or engagement supports trust in assessment
In some ways, these findings are an argument for the “why” of good assessment…
Key Conclusions
The authors conclude that there is an emerging, powerful new lens through which to see assessment validity: the social imperative. They clarify its characteristics and its relationship to other theories of validity, and suggest that 21st-century programmatic assessment will need to place greater emphasis on qualitative input and less on psychometrics.
This is a fantastic paper that opens up a new way of thinking about assessment and lends support to the rationale for CBME.
Spare Keys – other take home points for clinician educators
- This paper is a great example of meded as a field: bringing in methods and concepts from other domains of scholarship.
- All of us in meded should consider this paper when contemplating our own assessment systems.
Access KeyLIME podcast archives here
Check us out on iTunes