#AppliedMedEdMethods: Validity – Don’t Worry… It’s an Argument, Not Math


(This is the fifth post in our #AppliedMedEdMethods101 series. View the others here: Beyond the RCT; Pre-Post Simulation; Discourse Analysis; and Retrospective Cohort Studies.)

By Rose Hatala 

You are a Clinician Educator who has recently taken on revamping programmatic assessment in your residency training program. As you examine your overall assessment blueprint, you realize that your program lacks any assessment of residents’ clinical skills in the workplace. You’ve heard about the mini-CEX, an assessment tool that requires a brief, focused observation of a real patient encounter. Following the encounter, the faculty member rates the resident’s performance using global rating scales and provides verbal and narrative feedback on their clinical skills. You know that the mini-CEX is used in other residency programs, and you wonder: would this assessment tool be right for yours?

As Clinician Educators, we are often faced with the task of either implementing or adapting an established assessment tool for use in our educational setting. When doing this, we need to carefully consider the purpose of our assessment, whether the tool is the right one for that purpose and, once implemented, whether the tool is working as intended. In other words, we need to examine the validity evidence supporting our entire assessment approach.

What is validity? In a recent discourse analysis, St-Onge and colleagues highlighted that the term ‘validity’ reflects several different conceptualizations prevalent across the health professions education (HPE) literature: validity as a test characteristic (commonly resulting in the misused statement “we implemented a valid assessment tool”); validity as an argument built upon a chain of (validity) evidence; and validity as a social imperative (looking at the consequences of an assessment on individuals and society).

From my personal perspective, the evidentiary-chain approach developed by Michael Kane is intuitive and logical, and it includes elements of the social imperative when we consider the consequences of assessment. Even if Kane’s framework doesn’t work for you, I stress the importance of using a systematic approach to determine whether an assessment tool is working in your setting as you intended. It’s also important to consider assessment validation as an ongoing process. An assessment doesn’t magically cross a finish line and get granted the stamp “valid assessment”. Rather, we continually examine whether the decisions we are making using the tool are well supported. If so, we continue to use it as implemented; if not, we make changes and re-assess.

Following Kane’s framework and coming back to our practical problem of whether the mini-CEX is the right tool for our purposes, we begin by clarifying what educational decision we intend to make using this tool. This step is akin to a researcher spending a lot of upfront time crafting their research question: if done well, the design of the study becomes clear. Similarly, if we are clear about the decisions we want our assessments to support, then the type of validity evidence we need to gather becomes clear. In our case, we want to use the scores and comments from the mini-CEX to inform the meetings between residents and their longitudinal coaches, where resident and coach examine whether the resident is achieving their learning goals and set new learning plans. Notice that the decision we are making (“What are the relative strengths and weaknesses of my resident’s clinical skills?”) is very different from using the mini-CEX to make a decision such as “Has my resident successfully completed a period of remediation focused on clinical competence?” When we change our intended decision, the focus of our validation efforts changes with it.

For the details of Kane’s framework and the types of validity evidence that can be examined, I refer you to the annotated references. However, I want to highlight the most important and under-studied aspect of assessment validation: consequences. Assessments have consequences, both intended and unintended, for the learner, the learning system and, ultimately, patients and society. Thus, consequences (or, as St-Onge has labelled them, social imperatives) should be paramount in our validation efforts.

Imagine that during the mini-CEX, a resident changes their typical behaviour and performs an OSCE-style physical examination that ignores the patient’s comfort. The faculty member attends to the patient’s comfort, corrects the resident mid-course, and reminds them that the purpose of the assessment is to observe their authentic clinical skills. If this became a consistent pattern across most residents’ mini-CEXs, we would conclude that this unintended consequence of the assessment was unacceptable to our patients and was steering our residents towards behaviours they don’t use in real practice. We could take steps to remedy this (hence the importance of seeing assessment validation as a process): engage in further resident and faculty development to clarify the purpose of the assessment, then re-assess whether the observations were more patient-focused. Or we could decide that the tool is not suitable for our purposes and try implementing a different assessment approach. What this scenario highlights is the importance of examining the consequences of both our assessment approaches and our subsequent decisions as key components of assessment validation.

Take Home Points:

  1. Assessment validation is a process, not an outcome.
  2. Use a systematic approach or framework to develop your validation process.
  3. Actively seek the consequences of the assessment on learners, faculty, programs, patients, and society.

Annotated References:

1. St-Onge, C., Young, M., Eva, K. W., & Hodges, B. (2016). Validity: one word with a plurality of meanings. Advances in Health Sciences Education, 22(4), 853–867.

Discourse analysis of how validity is conceptualized in HPE.  Understanding our conceptualization of validity may help educators and researchers better align their recommendations and practices in assessment validation.

2. Cook, D. A., Brydges, R., Ginsburg, S., & Hatala, R. (2015). A contemporary approach to validity arguments: a practical guide to Kane’s framework. Medical Education, 49(6), 560–575.

For those educators interested in understanding and applying Kane’s framework of assessment validation, this ‘how-to’ primer will be helpful.

3. Hawkins, R. E., Margolis, M. J., Durning, S. J., & Norcini, J. J. (2010). Constructing a validity argument for the mini-Clinical Evaluation Exercise: a review of the research. Academic Medicine, 85(9), 1453–1461.

A lovely example of examining the validity evidence supporting an assessment tool using Kane’s framework. For educators specifically interested in the mini-CEX, this paper provides ‘one-stop shopping’ for the validity evidence available up to 2009.

Featured image via markmags on Pixabay 
