#AppliedMedEdMethods101: Designing a Pre-Post Simulation Study


(This is the second post of our #AppliedMedEdMethods101 series. View the first here: Beyond the RCT)

By Irene Ma (@irenema99)

You have been tasked with teaching your residents lumbar puncture and will be running a simulation-based teaching session with them. You wonder how effective your training is at improving their skills, so you decide to conduct a pre-post intervention study. Since all the residents in your program are required to attend the 1-hour training session, a randomized controlled trial is not feasible: you cannot ethically randomize learners to no training, nor is there sufficient time for a cross-over trial.

In its simplest form, a pre-post intervention study compares learning outcomes measured in a single group of learners before the intervention with outcomes measured in the same group after the intervention. Any learning gains may then (arguably) be attributed to the effect of the educational intervention itself.

Strengths of this study design include its simplicity; having each learner serve as his/her own control obviates the need for matching variables. Often, this may be the only design feasible within a given program, depending on time and scheduling constraints. However, the lack of true experimentation (e.g. randomization) makes it impossible to assert that learning gains were caused by the intervention. The very act of measuring learning outcomes at baseline may itself have resulted in some learning, via mechanisms such as test-enhanced learning. As such, one can only report associations between the intervention and outcomes.

Nonetheless, not all is lost. Careful attention to the following five areas should help minimize problems often encountered in the conduct of pre-post studies. First, with respect to outcome selection, choose a learning outcome that is educationally worthy. Consider the four levels of evaluation in the Kirkpatrick model (reaction, learning, behavior, results).1 Avoid evaluating only lower-level outcomes such as learner satisfaction or self-reported confidence (i.e. reaction).

The second area relates to outcome measurement. What tool will you use to measure that outcome? The tool should be relevant and should capture all (or most) aspects of the outcome of interest; you may see this termed in the literature as construct relevance. What validity evidence has been reported in the literature to support the interpretation of scores arising from the tool? What sources of validity evidence will your study be able to capture? Considering these aspects early, at the study design stage, will help alleviate headaches down the line. Next, how will you measure these outcomes? Issues such as rater training are worth paying attention to. The qualifications and training of raters should be documented, as this information may ultimately affect the generalizability of your results. For outcomes that are subjective in nature, blinding should be considered; otherwise, raters who know that the participants have just completed training may wittingly or unwittingly rate performances higher than at baseline, thereby overestimating learning gains. Consider having more than one rater so that inter-rater reliability can be estimated. Last but not least, consider the timing of your post-assessment. While measuring learning outcomes immediately post-training may be convenient, recognize that such measurements may reflect immediate recall only and not true learning. Some delay, if feasible, would be preferred. This delay should ideally be long enough that it is not measuring immediate recall, but not so long that intervening clinical exposure or other learning may have taken place.
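
To make the inter-rater reliability point concrete, here is a minimal sketch of how agreement between two raters might be estimated once both have independently scored the same set of performances. The ratings and rater names below are hypothetical, and Cohen's kappa is just one option (suited to a pass/fail global rating); an intraclass correlation coefficient would be more appropriate for continuous checklist scores.

```python
# Minimal sketch: estimating inter-rater reliability for two independent raters.
# Each rater assigns a hypothetical pass (1) / fail (0) global rating to the
# same 10 recorded lumbar puncture performances.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement, not raw percent agreement
```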

A third area to consider is confounders. A confounder is a variable that is associated with both the intervention and the learning outcome, and its presence may spuriously influence study results. For example, in claiming that learning outcomes are better amongst junior learners than senior learners, is it possible that another factor (a confounder) is at play? Perhaps junior learners were more likely to have been previously exposed to simulation, whereas many senior learners, never having encountered simulation before, were unable to take full advantage of the training. Recognizing (and measuring) these potential confounders ahead of time will allow you to adjust for them later, at the analysis stage.
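
As a rough illustration of that adjustment step, one option is to regress each learner's gain score on the suspected confounders. This is a minimal sketch only; the variable names (pgy_level, prior_sim) and the data are hypothetical, and the modelling approach would need to suit your actual outcome.

```python
# Minimal sketch: adjusting pre-post learning gains for measured confounders.
# All column names and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pre":       [12, 15, 14, 10, 18, 13, 16, 11],   # baseline checklist scores
    "post":      [17, 18, 16, 15, 21, 17, 19, 14],   # post-training scores
    "pgy_level": [1, 1, 2, 2, 3, 3, 4, 4],           # postgraduate training year
    "prior_sim": [1, 1, 1, 0, 0, 1, 0, 0],           # prior simulation exposure (1 = yes)
})
df["gain"] = df["post"] - df["pre"]

# Ordinary least squares: do learning gains vary with training year or prior exposure?
model = smf.ols("gain ~ pgy_level + prior_sim", data=df).fit()
print(model.summary())
```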

A fourth area to consider is statistics. Because the same participants are measured pre- and post-intervention, the usual statistical tests for independent groups, such as unpaired t-tests and chi-square tests, do not apply; paired tests are needed. If more than two time points are measured, then more sophisticated analyses, such as repeated-measures or longitudinal data analyses, should be used.
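
For example, a paired t-test (or the non-parametric Wilcoxon signed-rank test) compares each learner's own pre- and post-scores rather than treating the two measurements as independent samples. The scores below are hypothetical, purely to show the shape of the analysis.

```python
# Minimal sketch: paired analysis of hypothetical pre- and post-training scores
# for the same eight residents (order matters: position i is the same learner in both lists).
from scipy import stats

pre_scores  = [12, 15, 14, 10, 18, 13, 16, 11]
post_scores = [17, 18, 16, 15, 21, 17, 19, 14]

# Paired t-test: each learner serves as his/her own control.
t_stat, p_paired = stats.ttest_rel(post_scores, pre_scores)

# Non-parametric alternative for small samples or skewed scores.
w_stat, p_wilcoxon = stats.wilcoxon(post_scores, pre_scores)

print(f"Paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_wilcoxon:.3f}")
```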

A final area relates to reporting. Reporting should adhere to published guidelines.2 Although these are reporting guidelines, many of their elements relate to study design. Paying attention to these elements early will improve the quality of both the conduct and the reporting of your study.

Key Learning Points

  • Pre-post intervention studies are simple and easy to conduct but cannot establish a causal relationship between the intervention and outcomes.
  • Attention to outcome selection, outcome measurement, potential confounders, and appropriate statistical analyses will help strengthen your study.
  • Going through items listed in reporting guidelines at the study design stage can help avoid common study design flaws.

Reference

Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. 3rd ed. San Francisco, CA: Berrett-Koehler Publishers; 2006.

Annotated References

1. National Heart, Lung, and Blood Institute. Quality Assessment Tool for Before-After (Pre-Post) Studies with No Control Group. Available from: https://www.nhlbi.nih.gov/health-pro/guidelines/in-develop/cardiovascular-risk-reduction/tools/before-after [Accessed 23rd August 2017].

This guideline lays out, in checklist format, elements that should be reported in a pre-post intervention study. Using this checklist early will also help you focus on the important study design elements that will ultimately affect the quality of your study.

2. Barsuk JH, Cohen ER, Feinglass J et al. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med 2009;169(15):1420-3.

In this well-conducted pre-post intervention study, the authors compared patient outcomes before and after simulation-based training in central venous catheter insertion. The primary outcome chosen is clinically important and was measured by assessors who were blinded to the nature of the study. The authors even took the extra step of adding a control (untrained) group.
