How am I doing? Getting an answer through planning and design of programmatic assessment

By: Shelley Ross

“How am I doing?” This question is at the top of every learner’s mind throughout their educational trajectory (and is also something that physicians in practice think about periodically as they go through their careers). Additionally, “How am I doing?” is a question contemplated by managers, administrators, and educators in health professions training programs in relation to both the teaching and the assessment of the learners for whom the program is accountable.

Answering the question “How am I doing?” has become less and less simple in the last few decades. It was so much easier when the answer to “How am I doing?” could be provided by a score on an objective test of knowledge… However, in the complex world of modern medicine, it is no longer enough for learners and physicians to have extensive medical knowledge (although the need to “Know” will always be a critical component of physician competence). Healthcare professionals must also be able to communicate well with patients, work effectively with colleagues both within their own profession and those from other fields, be skilled in navigating the healthcare systems within which they work, and be able to make the right decisions for individual patients (each of whom comes with their own context and complexities). The competency-based medical education (CBME) approach to training is intended to result in health professions education programs that are outcomes-based. This approach includes both intentional design of curricula and training experiences that facilitate the development of specific competencies, and assessment that focuses on capturing learner progress towards competence.

While CBME sounds great on paper, and is based in good theory and evidence, it turns out that the assessment of competence is not as straightforward as learners and programs might want it to be. For the most part, the individuals involved in health professions education – both learners and teachers – come from a background in science. Steeped in the scientific method, they expect objectivity and the ability to measure specific variables to arrive at a conclusion that can be defended. Processes and measurements should be held constant across conditions in order to trust that conclusions are valid. Deviating from this approach feels wrong – subjectivity has no place in psychometrics.

However, the measurement of competence means assessing whether a learner is “doing the right thing, in the right way, for the right reasons”1 (p. 144). This means watching learners in the workplace; it means talking to learners about their thinking and reasoning; and it means that no single measurement tool or approach can collect all the evidence needed about competence.

This is where programmatic assessment comes in. While the 2005 article by van der Vleuten and Schuwirth2 is generally cited as the initial introduction of programmatic assessment in health professions education, there were earlier articles that emphasized the need for sampling across multiple assessment opportunities3 and for approaches to assessment where there was intentional integration of multiple types of evidence4. This initial introduction of programmatic assessment was an academic discussion in the literature, and it took some time for the concept to become part of the conversations happening at the level of health professions education programs.

For many clinical educators, though, programmatic assessment is still a bit intimidating and sometimes misunderstood. The CBME literature itself can appear contradictory – programmatic assessment is described as ‘an arrangement of assessment methods planned to optimize its fitness for purpose’5, yet there are publications describing programmatic assessment in CBME that use only one measurement or method of assessment. Some authors stress that data from formative (low stakes) assessments must never be used in decision-making, while other authors are equally adamant that data from both formative and summative (high stakes) assessments must be combined in order to give a clear picture of learner competence. The sheer volume of articles being published about assessment in CBME adds to the confusion – it is simply impossible for the average clinical educator to make time to read every article and keep track of the varying points of view.

One thing that is consistent: the concept of programmatic assessment makes sense in CBME to answer that fundamental question of “How am I doing?”6. A core principle in the design of programmatic assessment is that the assessment methods used are selected because they are the right tools to capture the right information in the context of the type of training that is happening. There will be differences in what programmatic assessment looks like in different programs – and that is okay. It should look different because the context of training is highly variable between programs, especially at the postgraduate education level. Highly procedural specialty training focuses on competencies that can be assessed well through the Entrustable Professional Activities approach; generalist specialties with less differentiated presentations and a broad range of patient populations will need a different approach to capturing information about learner competence.

Designing and implementing programmatic assessment will continue to be challenging for some years. The concept that different programs within the same institution may use a different set of approaches to assess competence feels wrong to many clinical educators – where is the consistency between programs? The beauty of programmatic assessment is in those very differences: it is the result of using the right tools, for the right reasons, to get the right information to answer the question, “How am I doing?”

About the author: Shelley Ross, PhD, is President of the Canadian Association for Medical Education, as well as Academic Director, Teaching and Learning Strategic Planning Initiatives, at the University of Alberta’s Faculty of Medicine and Dentistry.

References:

1. Covey SR, Merrill AR, Merrill RR. First Things First. New York, NY: Simon and Schuster; 1995.

2. van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ 2005; 39(3): 309–317.

3. van der Vleuten CP. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ Theory Pract 1996; 1(1):41–67.

4. Baartman LK, Bastiaens TJ, Kirschner PA, Van der Vleuten CP. Evaluating assessment quality in competence-based education: a qualitative comparison of two frameworks. Educ Res Rev 2007; 2(2): 114–129.

5. van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, van Tartwijk J. A model for programmatic assessment fit for purpose. Med Teach 2012; 34(3):205–214.

6. Ross S, Hauer K, Wycliffe-Jones K, Hall AK, Molgaard L, Richardson D, Oswald A, Bhanji F. Key considerations in planning and designing programmatic assessment in competency-based medical education. Med Teach 2021; 43(7):758-764.

 

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page.
