#KeyLIMEPodcast 201: Holiday Special


It’s our KeyLIME Holiday Special! Today we’ve got a special gift for you – not only one, but three action-packed (and kind of off-the-wall) papers. Are they cookies and milk or lumps of coal? Read on, and check out the podcast here.

————————————————————————–

KeyLIME Session 201:

Listen to the podcast.

Reference # 1:


Wigley C, et al. Santa’s little helpers: a novel approach to developing patient information leaflets. BMJ 2017;359:

 


Reviewer:
Peaches Monkeybum (@sherbino)

Background and Purpose

There are three things that never lie: drunks, yoga pants and little kids. I once asked my kids what they thought about KeyLIME. After their screaming laughter settled down, it was apparent that they thought it was crazy that anyone (ANYONE) would ever want to listen to me talk… like for 20 minutes. And more importantly, why couldn’t I talk normal. “Daddy, we think you make up some of these words.” (It’s actually true.)

My point is that communication – particularly written medical communication – could benefit from the no-punches-pulled clarity of child-like communication. The old adage “if you can’t explain it to a child, you don’t understand it” really is true.

Enter this paper, from a friend of a friend of the show, which looked at “the average readability of several patient information leaflets for one common orthopaedic procedure and then revised these leaflets with the help of a group of very bright and helpful children.”

Key Outcomes and Conclusions

Six NHS patient information leaflets had a mean SMOG (simple measure of gobbledygook) of 17.
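
For the curious, the SMOG grade is derived from how many polysyllabic words appear per 30 sentences. Below is a minimal, purely illustrative Python sketch of McLaughlin’s published formula; the syllable counter is a rough vowel-group heuristic rather than the validated hand count, so treat the output as an approximation only.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups. The validated SMOG method counts
    # syllables by hand, so this is only an approximation.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """SMOG readability grade using McLaughlin's published formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    # grade = 1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291
    return 1.0430 * math.sqrt(polysyllables * (30 / len(sentences))) + 3.1291

# Example: dense clinical prose scores a high (hard-to-read) grade.
leaflet = ("Postoperative thromboembolic complications necessitate "
           "prophylactic anticoagulation. Ambulate early. Report erythema.")
print(round(smog_grade(leaflet), 1))
```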

Fifty-seven children 8 to 10 years old produced a leaflet that discussed:

  • Indications – “your hip is old and rotten”
  • Complications – “the surgeons make a mistake and cut the wrong thing”
  • Before-surgery elements – “show up on time”
  • After-surgery elements of hip arthroplasty – “you might feel nauseous”

Spare Keys – other take home points

As medical educators, we should examine the assumptions we make when teaching patient communication. Previous research has demonstrated poor patient compliance with follow-up plans, even written ones. Are we assuming that our patients can comprehend high-SMOG material? Maybe we should aim for clear, simple instructions.

References # 2 and 3:

Bartlett M. The gift of food and the utility of student feedback. Med Educ. 2018;52(10):1000-1002.

Hessler M et al. Availability of cookies during an academic course session affects evaluation of teaching. Med Educ. 2018; 52(10):1064-1072.

Reviewer: Puddin’ Angel Pants (@drjfrank)

Background and Purpose

Student feedback on medical teaching? Bah, humbug! KeyLIMErs know that we have discussed the promise and perils of student evaluation of teaching (aka “SET”) on the podcast before [Episode 85: The Wars over Student Assessment of Teaching; Episode 125: SET Up to Fail: The Wars over Student Assessment of Teaching, Part II; Episode 169: Gender Bias in Student Evaluations of Clinical Teachers?]. We’ve also discussed several of the biases known to influence SET, including teacher attractiveness, gender, age, stringency, and so on. But what can a medical teacher REALLY do to get those teaching scores up? How about chocolate??

Hessler et al. organized a clever randomized trial of the effect of cookies on clinical teaching scores. Maggie Bartlett from the University of Dundee followed on with some wisdom about the implications for SET.

Key Outcomes and Conclusions

Hessler randomized 112 students taking an emergency medicine course to 10 cookie groups and 10 control groups. Teachers, course content, and the course overall were rated significantly more positively in the cookie group. A multiple regression analysis showed that access to cookies accounted for more than 6% of the variance in ratings. The authors caution that SET is a flawed technique for evaluating teachers.
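
For readers wondering what “accounted for more than 6% of the variance” means in practice, here is a small illustrative sketch – simulated ratings and a single hypothetical covariate, not the authors’ data or model – of estimating the share of rating variance attributable to cookie access by comparing R² with and without the cookie term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 112
cookies = rng.integers(0, 2, n)      # 1 = cookie group, 0 = control
teacher = rng.integers(0, 10, n)     # hypothetical covariate (teacher ID)
ratings = 4.0 + 0.3 * cookies + 0.05 * teacher + rng.normal(0, 0.5, n)

def r_squared(X, y):
    # Ordinary least squares with an intercept; R^2 = 1 - residual var / total var.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r_squared(np.column_stack([cookies, teacher]), ratings)
reduced = r_squared(teacher.reshape(-1, 1), ratings)
print(f"Variance uniquely attributable to cookies: {100 * (full - reduced):.1f}%")
```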

Bartlett’s editorial is filled with juicy pearls related to SET. She reminds us how powerful, and how flawed, SET is in contemporary higher education institutions. Student ratings deeply affect teacher behaviours and can influence promotions and careers. Bartlett also points out the power of the cookie: sweet, fatty payloads delivered straight to the bloodstream release a pleasant carpet-bombing of the cortex with weaponized dopamine. This is the ketamine of happy student raters. Beyond the physiologic effects, there are the socio-cultural effects of sharing precious resources: gifting, reciprocity, acceptance, and relationship building. She also flags that we may be driving desperate teachers to spread so much sugary good cheer that we give all our trainees a metabolic derangement, putting the world’s healthcare at risk. Frightening indeed. Finally, if the human brain can be so easily swayed, why do we trust SET at all? Is it time to abandon it?

Spare Keys – other take home points

These two works remind us once again:

  1. The human brain is a marvelous, flawed, and influenceable organ
  2. Student evaluations of teaching are also flawed, and we should keep looking for alternatives
  3. We can do clever RCTs in meded
  4. Jon, Linda, and I are now craving cookies, and have no allergies, and are very open to being biased about future papers…Just saying.

 

Reference # 4:

Ewers R. Boring speakers talk for longer [Correspondence]. Nature. September 2018;561.

Reviewer: Tinker McJingles (@LindaSMedEd)

Background

There we sit, captive, in the middle of a row, with the speaker (be it at a conference, grand rounds, a research symposium…) droning on and on and on and on. Our minds wander. Biologic urges overcome us. And we cannot escape!

So is it the interesting speakers, who just have so much to say, who speak for longer? Or is it the boring speakers, who just cannot get their talks focused and organized, who run longer?

Purpose

To investigate whether longer talks are more boring, or (to put it another way) whether boring talks are longer. [note – that repetition is boring]

Methods

  • Setting: an unnamed international conference
  • ‘Subjects’: 50 talks given in a 12-minute time slot
  • Independent variable: length of the talk in minutes and seconds – timed by author
  • Dependent variable: a measure of how boring the talk was. The measurement ‘instrument’ was the author, who formed a binary (yes/no) opinion of whether the talk was boring after 4 minutes of lecture – long before he knew what the total duration would be. This instrument was, alas, not validated, although, since the rater was always the same person, inter-rater reliability was presumably excellent.

Key Outcomes and Conclusions

34 interesting talks: mean duration 11 minutes, 42 seconds.
16 boring talks: mean duration 13 minutes, 12 seconds.
Two-sample t-test: t = 2.91, P = 0.007.

For every 70 seconds that a speaker droned on, the odds that their talk had been boring doubled.
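
For those who like to see the arithmetic, here is an illustrative sketch of the same style of analysis using simulated durations (not the author’s data, so the printed t and P values will only roughly resemble those above). The “odds double every 70 seconds” claim corresponds, under a logistic model, to a slope of ln(2)/70 on the log-odds of boredom per second of talk.

```python
import numpy as np
from scipy import stats

# Simulated talk durations in seconds; means chosen to mirror the reported ones.
rng = np.random.default_rng(1)
interesting = rng.normal(702, 90, 34)   # ~11 min 42 s on average, 34 talks
boring = rng.normal(792, 90, 16)        # ~13 min 12 s on average, 16 talks

# Two-sample t-test comparing mean durations of boring vs interesting talks.
t, p = stats.ttest_ind(boring, interesting)
print(f"t = {t:.2f}, P = {p:.3f}")

# If the odds of "boring" double for every extra 70 s of duration, the implied
# logistic-regression slope on the log-odds scale is ln(2)/70 per second.
print(f"Implied slope: {np.log(2) / 70:.4f} log-odds per second")
```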

Conclusions: Boring talks that seem interminable really do go on for longer.
Boring speakers wasted a statistically significant extra 1.5 minutes.

Spare Keys – other take home points

Even in a small study, it pays to get the methods right. This one might have benefited from a few more ‘boring raters’ – or perhaps a rating scale?

But, I talk too much …

 

Access KeyLIME podcast archives here
