#KeyLIMEPodcast 234: Watch your words!


Clinical reasoning. What comes to mind when you hear this term? Does it line up with what your colleagues think? Lara’s paper selection touches on an interesting topic: how the meaning of the term “clinical reasoning” can change depending on who is using it, and how those different definitions could affect teaching and assessment. Listen in to hear what the hosts had to say.

————————————————————————–

KeyLIME Session 234

Listen to the podcast.

Reference


Young et al. The terminology of clinical reasoning in health professions education: Implications and considerations. Medical Teacher. 2019 Jul 17:1-8.

Reviewer

Lara Varpio (@LaraVarpio)

Background

The foundational argument of the manuscript goes like this:

  • Clinical reasoning is a key element of medical training BUT it is not a single, homogeneous construct. It is used by different professions, it is underpinned by different theoretical frameworks, and many different terms are often used interchangeably to mean clinical reasoning (some of these include diagnostic reasoning, intuitive process, and, my favorite, contextualized reasoning).
  • The problem is that there are different interpretations of the term clinical reasoning. So when any one school or any one PERSON says they teach clinical reasoning, that could mean any of a variety of things.
  • The upshot of that is that there are differences in the focus of our teaching and the focus of our assessments. How can we have sound pedagogical and assessment practices when we all mean something different when we say we’re addressing clinical reasoning?

Purpose

The purpose of this study is (1) to discuss the terms used in reference to clinical reasoning; (2) to describe how the research team categorized those terms in relation to the meanings inferred by each term; and (3) to report where there are disagreements in those interpretations.

Key Points on the Methods

  • This paper builds on a previous publication that reported a BEME review. In that literature review, the researchers identified and analyzed 625 papers that referred to clinical reasoning.
  • In the research reported in the manuscript we’re discussing today, the authors looked at those 625 papers to see whether a term other than clinical reasoning was used. The data presented in this paper come from the analysis of those other terms used to describe clinical reasoning.
  • They found 110 such terms, and those 110 terms appeared 693 times across the papers.
  • In the first phase of coding, 3 members of the research team worked to group like terms with like terms. They generated 6 categories.
  • Then, in phase 2 of the coding, they tested these six categories. They had 2 other members of the team assign all 110 terms to one of the six categories. So here’s where things started to fall apart: they only reached consensus on 43 of the 110 terms.
  • So then in Phase 3, the first thing the team did was make sure that their categories made sense, that the problem with the coding wasn’t the categories themselves. The analysis team verified their interpretations of the categories and decided that the categories made sense; the team held similar frameworks in mind when they made the categorization decisions. So the problem didn’t lie in the categories, NOR did it lie in how the coders understood the categories. The problem was that each individual coder was interpreting the terms, and their inferred meanings, in different ways.
  • So the team did one more round of coding, looking at the terms where 2 analysis team members agreed on the coding to see whether, through discussion, they could increase that to a majority or maybe even full agreement across coders.

Key Outcomes

  • After all this work, they were able to reach full consensus on 67% of the terms and majority consensus on another 15%. That’s 82% of the terms accounted for, but it leaves 18% of the terms related to clinical reasoning on which the team could reach NO consensus about how they should be categorized.
  • Of the 5 most used terms (critical thinking, diagnostic reasoning, decision making, problem solving, and clinical judgment), the team could only reach consensus on the meaning of 2. On the other 3, they couldn’t reach consensus.

Key Conclusions

The take-away message here is that clinical reasoning is a multidimensional concept. The authors are not arguing that everyone use ONE dimension or one definition. They are simply laying out the extent of the diversity to show that when we teach or assess clinical reasoning, we need to be explicit about our conceptualization. Without that clarity, we can’t build on each other’s work.

Spare Keys – other take home points for clinician educators

One of the things I want to applaud the authors for is not shying away from publishing what was, quite frankly, a white-hot mess of a data set. If we don’t tell the stories of things that don’t work, then we make research look like one good news story after another. I’m a big believer that those failures can teach us important lessons too, so we need to publish them.

Access KeyLIME podcast archives here
