#KeyLIMEPodcast 109: Diagnosis – The Bias Emperor has no clothes


This week the Key Literature In Medical Education podcast covers a **conflict of interest alert** paper that challenges the foundation of an important topic in #meded: clinical reasoning and cognitive bias.  The ICE blog has discussed this previously here and here, and most recently here.

As always the abstract (below) will get you started.  But for more of the debate, check out the podcast here.

So, how do you approach teaching clinical reasoning / diagnostic error in your program?  Does this publication change your approach?

– Jonathan (@sherbino)


KeyLIME Session 109 – Article under review:

Listen to the podcast

View/download the abstract here.


Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Quality and Safety. 2016 Jan;[ePub ahead of print]

Reviewer: Jason R. Frank (@drjfrank)

Background

[Note: KeyLIME co-host Jon Sherbino is a coauthor on this study, but I chose it without his input. -JRF]

We have discussed the 21st century imperative of patient safety & quality on KeyLIME many times. There is a strong desire in the medical and meded communities to find interventions that make a difference. Our review of a handover bundle in a previous KeyLIME episode is an example.

Teaching about cognitive biases is another example of an educational intervention that has been heavily promoted in meded in an effort to decrease diagnostic error.  Human brains are fallible, we are told, so we must go metacognitive to prevent bad things from happening to our patients and our medicolegal insurance rates. Cognitive science enthusiasts go even further, emphasizing that the two modes of thinking in our brains (unimaginatively called System 1 & System 2) are at fault. We need to teach health professionals, they would say, to slow down and consider their own thinking about a case in order to really manage it well.

The problem is that there is no real evidence supporting this premise that a significant component of diagnostic error is due to cognitive biases. This hypothesis may even conflict with studies of clinical reasoning that suggest expert heuristics allow a clinician to recognize exemplars rapidly and more accurately than if they slow down to contemplate potential biases. What is a clinician educator to do in the face of these two competing schools of thought?

Purpose

Enter Laura Zwaan et al. Zwaan is from the Netherlands, and her colleagues include some Canadians. They set out to determine experimentally whether experts in cognitive biases would agree on the presence of particular biases in a mock clinical workup. They also wondered whether the number of biases identified changed depending on whether the diagnosis was correct.

Type of Paper

Research: Experimental

Key Points on Methods

Following a pretest of an instrument, the authors recruited participants from the Society to Improve Diagnosis in Medicine, whom they presumed would be experts in the field of cognitive biases and diagnosis. Of 113 respondents, 71 met the inclusion criteria (physician, consented, able to read the English materials); 37 of those (52%) agreed to complete the survey. Participants were asked to identify the cognitive biases in 12 general medicine vignettes. Each case was carefully designed to suggest a particular diagnosis and included a patient outcome: in the “consistent” version, the outcome suggested the diagnosis was correct, while in the “inconsistent” version it suggested the diagnosis was incorrect. The authors compared the number of biases identified in each type of case and the agreement about which biases were present.

Key Outcomes

These expert reviewers were more jaded when the patient outcome did not go well… Only 8% of reviewers felt a diagnostic error was present when the patient outcome was consistent with the initial case diagnosis, but this rose to 60% when the outcome was inconsistent. When a diagnostic error was felt to be present, the number of identified biases rose from a mean of 1.27 to 3.22 (p<0.0001). Interestingly, when the case outcome suggested a correct diagnosis, the average number of cognitive biases identified was 1.75; when it suggested an incorrect diagnosis, the average was 3.45 (p<0.00001). There was essentially no agreement among these experts as to when a given bias was present (kappa ranging from 0 to 0.04). Individual biases were 73-125% more likely to be identified if the patient outcome implied an incorrect diagnosis.
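To put those kappa values in context: Cohen's kappa corrects raw agreement for the agreement two raters would reach by chance alone, so a value near zero means the reviewers were essentially labelling biases independently of one another. Here is a minimal sketch of the calculation (the numbers are invented for illustration, not data from the study):

```python
# Illustration only: Cohen's kappa for two hypothetical raters judging
# whether a given bias is "present" in 100 vignettes (made-up counts).

def cohens_kappa(both_present, only_rater1, only_rater2, both_absent):
    """Compute Cohen's kappa from a 2x2 agreement table."""
    n = both_present + only_rater1 + only_rater2 + both_absent
    p_observed = (both_present + both_absent) / n          # raw agreement
    r1_rate = (both_present + only_rater1) / n              # rater 1 base rate
    r2_rate = (both_present + only_rater2) / n              # rater 2 base rate
    p_chance = r1_rate * r2_rate + (1 - r1_rate) * (1 - r2_rate)
    return (p_observed - p_chance) / (1 - p_chance)

# Each rater calls the bias "present" about half the time, but rarely on the same cases:
print(round(cohens_kappa(both_present=26, only_rater1=24,
                         only_rater2=24, both_absent=26), 2))  # ~0.04, barely above chance
```

Even though these two hypothetical raters agree on 52% of cases, almost all of that agreement is what chance alone would produce, which is what a kappa of 0 to 0.04 implies about the expert reviewers in this study.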

Key Conclusions

The authors conclude that physician reviewers, even when experts in clinical reasoning and cognitive biases, do not agree on the presence or absence of a given bias. Whether reviewers judged that a diagnostic error had occurred appeared to depend on whether the case outcome implied the clinician had made the correct diagnosis. Hindsight bias heavily influenced the reviewers, as they identified twice as many biases operating in cases where there was a poor patient outcome.

The authors acknowledged several limitations: small n, unclear definitions of experts in this field, the possibility that the cases lack detail, and unclear definitions of biases in the literature.

Is this the “stake through the heart” of the Cognitive Bias movement? It certainly seems to suggest that, if experts cannot agree on which biases are at work in a given case, then the construct is not truly teachable. Furthermore, is it really a viable construct if the number of biases identified changes based on a retrospective view of patient outcomes? This paper should send clinical reasoning researchers back to their drawing boards.

Spare Keys – other take home points for clinician educators

This is a great example of clever experimental testing relevant to medical education: a simple survey with a randomized feature yielded important insights.

This paper provides a cautionary tale regarding the way many currently teach diagnostic reasoning and cognitive biases.

Shout out

Good paper, Jon!

Listen to the podcast here
