Jon selects a letter to the editor from the NEJM, which he describes as an illustration of how common problems can be addressed with fresh perspectives. He also notes it as an example of how HPE researchers can use research letters to reach a larger audience. Listen to more of his comments here.
————————————————————————–
KeyLIME Session 214
Listen to the podcast.
Reference
Laufer S et al., Sensor Technology in Assessments of Clinical Skill. N Engl J Med. 2015 Feb 19;372(8):784-6.
Reviewer
Jonathan Sherbino (@sherbino)
Background
I recently had a meeting with the head of our local medical robotics lab. It highlighted for me the generational gap between cutting-edge #meded research questions and cutting-edge clinical medicine research questions. I was there to discuss innovative direct observation assessments (the hallmark of emerging programmatic assessment systems); my surgeon colleague wanted to talk about the pilot study they had launched the day before: a study of the accuracy of breast biopsy performed without a clinician present, using a robot equipped with AI. Yep. Generational differences. I wanted to talk about rater variance in observing performance. My colleague had taken the human out of the equation altogether. Awkward.
This encounter speaks to the assumptions and starting positions that inform our questions and colour our solutions as clinician educators. Perhaps we should take a bigger step back. This study, a letter to the editor in the NEJM, is a great illustration of how we might address common problems with fresh perspectives.
Purpose
“We hypothesized that sensor technology would help to characterize successful and unsuccessful CBE [clinical breast examination] techniques at a level of detail that is not possible with observation alone.”
Key Points on the Methods
- Convenience sample of physicians in practice, recruited at the American Society of Breast Surgeons, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists annual conferences
- Randomly ordered testing on 4 partial task trainers embedded with pressure sensors
- A clinical history was read and a simulated exam was performed
Key Outcomes
- n=553
- 44% family physicians, 28% surgeons, 28% gynecologists
- >50% performed >10 CBEs per week
- >50% teach CBE
There was no association between palpation force and accuracy for superficial masses.
Deep masses required greater palpation force to be detected accurately; inadequate force carried a significant risk of an inaccurate diagnosis.
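To make the force-accuracy relationship concrete, here is a minimal sketch of the kind of analysis that sensor data makes possible. This is not the authors' method, and the force readings and detection outcomes below are invented for illustration: a logistic regression of detection success on peak palpation force for deep masses, where a positive coefficient would mirror the letter's finding.

```python
# Illustrative only: hypothetical per-exam data (NOT from the study).
# Each row is one simulated exam of a deep mass: peak palpation force
# in newtons, and whether the examiner detected the mass (1) or not (0).
import numpy as np
from sklearn.linear_model import LogisticRegression

peak_force_n = np.array([[3.9], [4.2], [5.0], [6.8], [8.7], [9.1], [11.5], [12.3]])
detected = np.array([0, 0, 0, 1, 0, 1, 1, 1])

# Fit detection probability as a function of force.
model = LogisticRegression().fit(peak_force_n, detected)

# A positive coefficient means firmer palpation raises the odds of
# finding the mass -- the pattern the letter reports for deep masses.
print(f"log-odds change per additional newton: {model.coef_[0][0]:.2f}")
```

The point is simply that palpation force, invisible to a human rater, becomes a quantifiable predictor of accuracy once sensors capture it.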
Key Conclusions
The authors conclude…
“Since variations in force cannot be reliably measured by means of human observation, our findings underscore the potential for sensor technology to add value to existing, observation-based assessments of clinical performance. Integration of sensors into clinical-skills assessments may allow for objective, evidence-based training, assessment, and credentialing.”
Spare Keys – other take home points for clinician educators
This paper serves as a nice template for how HPE researchers might structure research reports to reach a larger audience. If the usual #meded journals feel a bit like an echo chamber, or a big part of your audience is missing, a research letter (or the JAMA medical education special issue) may be an appropriate alternative.
Access KeyLIME podcast archives here