#KeyLIMEPodcast 233: Evidence. Experience. Tomato. Tom-ah-to?


Jon’s paper selection focuses on the topic of evidence in health professions education, a topic that has been gaining more attention over the years. His co-hosts agree that we need to encourage medical educators to take an evidence-informed approach, but debate whether it is always effective. Listen in to find out what they said.

————————————————————————–

KeyLIME Session 233

Listen to the podcast.

Reference


Thomas et al. Use of evidence in health professions education: Attitudes, practices, barriers and supports. Medical Teacher. 2019 May 3:1-11.

Reviewer

Jon Sherbino (@sherbino)

Background

As a McMaster faculty member – the so-called home of Evidence-based Medicine (EBM) – I am contractually obligated to use the word “evidence” every 15th word during any public presentation.  Gord Guyatt personally reviews my PowerPoint deck before I submit to conference administrators.  So, you can imagine that I’m invested in evidence, whether I like it or not.

Beyond my contractual obligations, I’m starting to reach that point in my career where “everything old is new again.”  By this I mean that my discoveries from the literature, the sharing of tacit knowledge in grad school, and the innovations discussed early in my career are starting to be recycled, once again, as so-called novel iterations.  I don’t mean that I have seen everything.  Rather, it seems HPE does a really good job of re-inventing the wheel.  More worrisome, some innovations, while attractive based on surface features, have demonstrable evidence of ineffectiveness.  See for example: 3-D anatomy e-modules or self-directed learning.

My tongue-in-cheek opening comments may require a scoping review.  Perhaps the HPE community should engage in promoting evidence-informed, not ad hoc, educational interventions. In an age of burgeoning curricula, can we improve efficiency and avoid ineffective learning?  If you’re on the fence about the utility of evidence-informed education, read on to see what colleagues from around the world think about this topic.

Purpose

The authors sought to answer:

  1. What are HP educators’ attitudes toward using educational research evidence to inform their teaching and assessment practices?
  2. To what extent do HP educators use educational research evidence in their practice?
  3. What are the individual and organizational supports and barriers of evidence-informed education from the perspective of HP educators?

Key Points on the Methods

A survey of members of the Association for Medical Education in Europe (AMEE) and attendees of the 2015 AMEE conference who self-identified as “individuals who teach in academic and clinical settings, as well as those involved in educational planning, administration and/or research.”

Participants received an email invitation to complete the survey.  A single follow-up email was sent two weeks later.

The survey was informed by existing discourses regarding the construction of evidence in HPE and the influence of knowledge translation. The survey items were reviewed by content experts, and the survey was pilot tested with a representative sample (n=10) of the target population to provide final refinement of the instrument.

The survey consisted of 42 items, completed using a 5-point Likert scale, plus an additional demographic section.

Analysis was via descriptive statistics.  The five-point scale was collapsed to three points.

Chi-square tests were conducted to determine differences between survey item responses as a function of demographic variables.
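As an illustration only (this is not the authors’ code; the data, group labels, and counts are all invented), the analysis described above amounts to something like the following sketch: collapse 5-point Likert responses to three points, then run a chi-square test of independence against a demographic variable.

```python
# A hypothetical sketch of the described analysis; not the authors' code.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=300)     # 1 = strongly disagree ... 5 = strongly agree
clinician = rng.integers(0, 2, size=300)  # 0 = non-clinician, 1 = clinician (invented)

# Collapse the 5-point scale to 3 points: 1-2 -> disagree (0), 3 -> neutral (1), 4-5 -> agree (2)
collapsed = np.digitize(likert, bins=[3, 4])

# 2x3 contingency table: demographic group x collapsed response
table = np.zeros((2, 3), dtype=int)
for group, resp in zip(clinician, collapsed):
    table[group, resp] += 1

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```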

Key Outcomes

There were 318 respondents (10% of a potential 3200). Two-thirds of respondents identified as physicians/pharmacists/dentists, 22% self-identified as non-clinicians, and 10% identified as researchers.  More than half worked in an academic/university environment.

Approximately half of the respondents participated in research activities, with three-quarters searching the literature for HPE evidence.

The majority (87%) had access to the literature, with two-thirds reporting enough time to read and critique it.

While nearly every respondent (90%) agreed that evidence can improve HPE, there was equal support for and against the routine application of evidence to HPE practice. When asked about the role of experience in making an educational decision there was an even distribution among agree, neutral and disagree.

Nearly 60% agreed that HP educators do not use HPE research findings, with 17% agreeing that using evidence to inform practice was too difficult as an HP educator!

An additional ~40 comparisons between survey items and demographic variables are reported, but only six achieve statistical significance when a Bonferroni correction is applied (a correction I applied post hoc; the authors do not perform this adjustment and report many additional associations at a traditional p < 0.05). In other words, I would caution against reading too much into Tables 2 to 5; I do not report the associations by type of work environment, qualifications, years of experience, etc.
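For readers unfamiliar with the adjustment: a Bonferroni correction simply divides the significance threshold by the number of comparisons. A minimal sketch (the ~40 comparisons reflect my count above; the p-values are invented for illustration):

```python
# Bonferroni correction: divide alpha by the number of comparisons.
alpha = 0.05
n_comparisons = 40                        # approximate number of post-hoc comparisons
bonferroni_alpha = alpha / n_comparisons  # 0.05 / 40 = 0.00125

# Invented per-comparison p-values; only those below 0.00125 survive the correction
p_values = [0.0004, 0.003, 0.021, 0.0009]
survivors = [p for p in p_values if p < bonferroni_alpha]
print(f"adjusted alpha = {bonferroni_alpha}; {len(survivors)} of {len(p_values)} survive")
```

The same adjustment is available off the shelf, e.g. statsmodels’ multipletests(p_values, method='bonferroni').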

Key Conclusions

From a six-paragraph conclusion, I highlight the following author conclusions:

“As [evidence-informed HPE] gains increasing attention among scholars, so are discussions on (1) what constitutes “evidence” in HPE; (2) the relevance of educational research in the “real world”; (3) the quality and strength of available evidence; (4) the readiness of evidence for implementation in educational settings and last but not least, (5) the context-specificity of the educational research, potentially restricting its application in different educational settings and with different levels of learners.”

Spare Keys – other take home points for clinician educators

When making multiple comparisons from a data set that were not determined in advance, the risk of finding spurious associations is high.  This can be adjusted for using statistical measures such as the Bonferroni correction.  Moreover, it is equally important to report the effect size or odds ratio of the difference so that the reader can determine whether a statistically significant difference also achieves educational (clinical) significance.
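To make that concrete, here is a minimal sketch (with invented counts) of pairing a chi-square p-value with one common effect size, Cramér’s V, so the reader can judge the magnitude of an association and not just its statistical significance:

```python
# Invented 2x3 table: two demographic groups x disagree/neutral/agree counts
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[40, 30, 80],
                  [35, 25, 90]])
chi2, p, dof, _ = chi2_contingency(table)

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1))); 0 = no association, 1 = perfect
n = table.sum()
k = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))
print(f"p = {p:.3f}, Cramer's V = {cramers_v:.2f}")
```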

A 10% response rate is lower than the typical 30% expected of large, heterogeneous survey populations.  The reader must decide how far the findings generalize (and how much bias both responders and non-responders introduce).

Access KeyLIME podcast archives here
