#KeyLIMEPodcast 187: See One, Do One, Call Me Later—Is Supervision Overrated?


Read on, and check out the podcast here.

————————————————————————

KeyLIME Session 187:

Listen to the podcast.

Reference:


Finn KM, et al. Effect of Increased Inpatient Attending Physician Supervision on Medical Errors, Patient Safety, and Resident Education: A Randomized Clinical Trial. JAMA Intern Med. 2018 Jul 1;178(7):952-959.

Reviewer: Jason Frank (@drjfrank)

Background

How were you supervised during your training? Did it matter?

KeyLIMErs will know that one of the megatrends we often discuss as shaping 21st-century health professions education is the drive for greater patient safety and social accountability. Numerous reports from the Institute of Medicine and others have raised serious concerns about our training and practice environments. At the same time, we are hearing calls to change how we prepare health professionals, citing a need for much more direct observation and coaching instead of remote supervision.

The proposed answer to both of these drivers of change is greater supervision. Preliminary studies in a variety of settings have compared remote or hands-off supervision (“call me when you need me”) with direct supervision (“right here with you”) and suggested that direct supervision improves patient outcomes (e.g. surgical mortality). But does it really matter how much time you spend with an attending on an Internal Medicine ward?

Purpose

Finn et al from the Mass General Hospital set out to “determine the effect of increased attending physician supervision on an inpatient general medicine service on patient safety and educational outcomes…”

Key Points on Method

The authors decided that an RCT was needed to really get to the bottom of the question. They chose a single academic hospital inpatient service and randomized 22 of 24 “top rated” teachers to a protocol of “increased direct supervision” vs “standard supervision,” two weeks at a time over nine months. (They avoided the summer months when new interns arrive – shudder!) Under standard supervision, the attending MD met all new patients admitted to the service but afterwards heard about patients only in a meeting room. In the intervention group, the teacher joined the team on daily “working rounds,” when all patients are seen. All faculty experienced both protocols over the year.

The authors defined the primary outcome as the rate of medical errors identified in a standardized chart review by trained nurses. Secondary outcomes included:

  1. Length of patient stay
  2. Utilization of radiology and consults
  3. Number of orders
  4. ICU admissions
  5. Patient satisfaction
  6. Length of time on work rounds
  7. Teacher & learner satisfaction with care & education
  8. Learner-perceived autonomy, and
  9. How much the interns talked.

Are these really the most apropos outcomes? There were no measures of teaching behaviours or effectiveness.

The study was powered to detect a 40% difference between the two groups, an effect size derived from a study of pediatric handover(!).


Key Outcomes

22 attending physicians oversaw 1259 hospitalizations, or about 5772 patient-days, split roughly evenly between the two groups. The patients had similar demographics and acuity.

The medical error rate was 107.6 per 1000 patient-days in the standard group vs 91.1 per 1000 patient-days in the intervention group, a 15.3% relative difference that was not statistically significant. Preventable adverse events came to 80 vs 70.9 (p=0.36), again numerically favoring the intervention and again with a high p-value. Nearly all the bad stuff was classified as “low harm”.
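As a quick sanity check, the 15.3% figure is simply the relative difference between the two error rates:

\[
\frac{107.6 - 91.1}{107.6} \approx 0.153 = 15.3\%
\]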

Most other clinical measures, including length of stay and ICU admissions, were essentially identical. The length of time rounding was roughly the same, as was patient and family satisfaction.

However, the intervention trainees wrote more orders after work rounds and rated their autonomy and efficiency lower. Attending MDs were more likely to rate their work-life balance as poor, but rated their quality of care and knowledge of patients higher. Interns spoke less (is that a good thing or a bad thing? Jon was really chatty as a trainee…)

Bottom line: Attendings thought it was better, but busier. Trainees thought it was worse. There were fewer errors in the direct supervision group, but the p-value wasn’t there.

Key Conclusions

The authors conclude: “Our study provides further evidence that increased supervision may not increase patient safety.” They go on to say that it may also harm peer learning and the active engagement of trainees in the workplace.

However, there was a trend toward positive outcomes in a possibly underpowered trial, and there were several threats to validity. For example, there was no way of ensuring that the teaching itself was effective.
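To put a number on “possibly underpowered”, here is a back-of-envelope power check. This is our own illustration, not the authors’ calculation; it assumes a two-sided alpha of 0.05, an even split of the ~5772 patient-days between arms, and a normal approximation for comparing two Poisson rates.

```python
# Rough power check for comparing two Poisson error rates.
# Illustrative only -- NOT the trial's actual power calculation.
# Assumptions: two-sided alpha = 0.05, patient-days split evenly
# between arms, normal approximation to the rate difference.
from scipy.stats import norm

ALPHA = 0.05
Z_CRIT = norm.ppf(1 - ALPHA / 2)   # ~1.96 for a two-sided test

rate_std = 107.6 / 1000            # errors per patient-day, standard arm
days_per_arm = 5772 / 2            # assumed even split of patient-days

def approx_power(relative_reduction: float) -> float:
    """Approximate power to detect a given relative reduction in the error rate."""
    rate_int = rate_std * (1 - relative_reduction)
    # Variance of a Poisson rate estimate is rate / exposure
    se_diff = ((rate_std + rate_int) / days_per_arm) ** 0.5
    z = (rate_std - rate_int) / se_diff
    return norm.cdf(z - Z_CRIT)

print(f"Power for a 40% reduction (design target): {approx_power(0.40):.2f}")   # ~1.00
print(f"Power for the observed 15.3% reduction:    {approx_power(0.153):.2f}")  # ~0.51
```

By this crude estimate, the trial was close to a coin flip for detecting an effect of the size it actually observed, which is why a “negative” result here should be read cautiously.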

Spare Keys – other take-home points for clinician educators

  1. Overall, the paper was well-written. It is a good example of how to describe a complex methodology as compactly as possible.
  2. The RCT is rarely the gold standard for #meded interventions: our interventions are often of the complex kind, and the real world is rarely a controlled laboratory.
  3. Beware the simplistic logic of changing one feature and declaring that it doesn’t work in #meded: the literature is littered with the detritus of this unsophisticated thinking, and we all suffer for it.
  4. We need to really think through the best outcome measures for our studies. Consider getting input from others who think about your phenomenon or setting in a different way to inform your protocol.

Access KeyLIME podcast archives here
