#KeyLIMEpodcast 124: Reality Check… How Clinical Competency Committees REALLY Work


Happy New Year! I hope that everyone enjoyed a holiday season with family and friends. Of course, my New Year's resolution to be optimistic unraveled a bit when I went back to work in the emergency department this week…

The ICE blog has some exciting new features coming this year… the first teaser: an ebook on education theory. More details to come in the following months.

The Key Literature in Medical Education podcast starts off the year discussing competency committees. If you have a competency committee or are establishing one, learn from the experience of these early innovators from California. There are some definite pearls that you will want to adopt and pitfalls that you must avoid.

The abstract is below, while the good stuff is on the podcast.

– Jonathan

————————————————————————–

KeyLIME Session 124 – Article under review:

Listen to the podcast

View/download the abstract here.


Hauer KE, Chesluk B, Iobst W, Holmboe E, Baron RB, Boscardin CK, ten Cate O, O’Sullivan PS. Reviewing residents’ competence: a qualitative study of the role of clinical competency committees in performance assessment. Academic Medicine. 2015 Aug;90(8):1084-92.

Reviewer: Jonathan Sherbino (@sherbino)

Background

As competency-based medical education (CBME) initiatives are implemented in systems around the world, their intended designs meet the realities of complex clinical and educational systems. One of the key principles of CBME is programmatic assessment: a systematic, longitudinal sampling of the sentinel abilities of trainees, with an emphasis on work-based assessments. Clinical competency committees (CCCs) are an element of programmatic assessment, taking aggregated data and making decisions about a trainee's progression towards competence. CCCs decouple the judgment about a resident's global performance from the front-line faculty member. Previously, a faculty member needed to both teach the resident as they developed their abilities during a rotation AND provide a summative assessment of their performance. Programmatic assessment asks the faculty member only for low-stakes assessments based on how a resident performed that day or that week; the CCC then takes the aggregated data and provides regular summative (i.e. high-stakes) assessments. This removes the tension of the faculty member acting as both coach and judge.

This is one of the earliest papers to describe how the intended design of CCCs actually works in the real world.

Purpose

To determine how clinical competency committees (CCCs) actually work.

Type of Paper

Research: Qualitative (semi-structured interviews)

Key Points on Methods

  • Semi-structured interviews with 34 program directors (PDs) from 5 institutions in California
    • Purposive sampling (22 large v. 12 small programs; 15 procedural v. 19 non-procedural)
  • Conventional (i.e. code directly from data) content analysis (i.e. naturalistic, sense making to identify themes)
  • Analysis via constant comparative and discrepant case analysis (i.e. fair dealing)
    • Two independent coders
  • Sampled to sufficiency

Key Outcomes

  • Membership ranged from 3 to 25
  • Meeting frequency ranged from weekly to yearly
  • Problem Identification Model
    • Predominant model
    • Purpose of review is to identify the few struggling residents
  • Developmental Model
    • Planned series of steps toward mastery
  1. Use of residents’ performance data
  • Variety of assessment instruments
    • Mainly faculty global assessments
  • CCC members valued as additional sources of information
  • Systems data (incident reports, patient complaints) were key red flags
  • Informal hallway conversations (always about a problem)
  • Mainly a normative reference (comparison to peers)
    • Usually dichotomous
    • Minimal description of developmental trajectory
  2. CCC member engagement
  • Some faculty development
  • Faculty provide credibility to the process
  • Decision making usually based on inference rather than systematic deliberation
  • Detailed data review uncommon
  3. Implications for residents
  • 50% of CCCs reviewed every resident; 50% reviewed only struggling residents
  • Concern that forward feeding in large CCCs impacted residents’ future training

Key Conclusions

The authors conclude…

“Institutions orient resident performance review toward problem identification; a developmental approach is uncommon. Clarifying the purpose of resident performance review and employing efficient information systems that synthesize performance data and engage residents and faculty in purposeful feedback discussions could enable the meaningful implementation…”

Spare Keys – other take home points for clinician educators

Another reminder for CEs that what we design is rarely implemented in perfect detail. Program evaluation is key as a means of quality assurance AND quality improvement.

Shout out

Thanks to Karen Hauer for sharing some of the specific details of this study in person during her ICRE session on the topic.

Access KeyLIME podcast archives here

Check us out on iTunes
