Assessment Literature (from the last 12 months) that every CE should read: Part 1


By Jonathan Sherbino (@sherbino)


The International Conference on Residency Education (ICRE) has just finished.  If you’ve never been, you should definitely think about it.  It’s the largest medical education conference in the world dedicated to residency training (i.e., postgraduate training in Canada, graduate training in most of the rest of the world).

At ICRE, I gave a talk called “The (Disputed) Top Assessment Papers of 2013” with Eric Holmboe from the American Board of Internal Medicine.  From our list of 10, I’ve narrowed the field to three papers from the last 12 months that I think you should read (at least the abstract!).  (BTW: the ‘rigorous’ process used to determine the top papers was an ad hoc polling of friends and colleagues, many of them journal editors.  My sources are anonymous to prevent bribery by authors trying to make next year’s list.  Of course, I’m always available…)

Three papers to add to your library are:

1.  Technology-Enhanced Simulation to Assess Health Professionals: A Systematic Review of Validity Evidence, Research Methods, and Reporting Quality.

Cook DA, Brydges R, Zendejas B, Hamstra SJ, Hatala R. 2013. Academic Medicine. 88(6):872-83.

This is an exemplary systematic review of the use of simulation to assess learners. (Full disclosure: I’ve published two other studies with Dave Cook using a related data set.) A total of 417 studies involving 19,075 learners were included. Unfortunately, nearly all studies were of mediocre quality per the Medical Education Research Study Quality Instrument (MERSQI). Dave Cook and his team do an outstanding job of articulating a contemporary validity framework that moves past classic psychometric theory.  Unfortunately (again), they demonstrate that one-third of studies reported no validity evidence and one-third of studies reported only content, reliability, or relations evidence. Their conclusions are: “validity theory is rich, but the practice of validation is often impoverished. Most of the 417 studies in this sample offered only limited validity evidence, and nearly half reported only one element of new evidence. By far the most commonly reported source of validity evidence—and the sole source for one-third of studies—was the relatively weak design of expert–novice comparison. The average number of validity elements decreased slightly or remained constant in more recent studies, suggesting that conditions are not improving… Only six studies acknowledged the current unified evidence-oriented framework.”

If you want to read more about contemporary validity frameworks, check out the work of Messick or Kane, or what Cook et al. have written as a companion piece to the Academic Medicine paper. (Cook DA, et al. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. 2013. Advances in Health Sciences Education: Theory and Practice. Epub ahead of print.)

My take-home from this study is that all CEs should review this paper before they start to design their own simulation-based assessment instrument.  Examine this existing (comprehensive) list to find a high-quality tool that you can adapt.

Papers #2 and #3 are coming Friday.

———

If you are interested in keeping up to date on key literature in medical education, check out the KeyLIME Podcast!

Image courtesy of Creative Commons
