University of Toronto
Toronto, ON
Program Director: Andrew Steel, MD
Type of Program: Adult Critical Care Medicine Residency and International Fellowships
Abstract Authors: Dominique Piquette, MD, PhD; Christie Lee, MD; David Hall, MD, PhD; and Andrew Steel, MD
The Royal College of Physicians and Surgeons of Canada (RCPSC) recently began the progressive implementation of competency-based training across Canadian specialty postgraduate programs. Canada thus joins other countries (e.g., the UK and the Netherlands) in a worldwide movement toward competency-based medical education. Under this approach, trainees must demonstrate specific competencies by the end of their training, documented through frequent work-based assessments completed by multiple assessors across a range of clinical activities. Each trainee builds a portfolio of formative and summative assessments that programs use for pass/fail decisions. Accountability and learning are two frequently claimed benefits of competency-based training, but both have yet to be confirmed by empirical evidence. Many questions remain about the best way to design and implement a program of assessment as part of competency-based training. Known challenges include limited acceptance among trainees and assessors and the poor reliability of many of the individual assessment tools used to measure trainee performance. Other areas of uncertainty include the effects of assessor cognition and expertise, and of different response formats, on the quality of individual assessments; the best strategy for combining multiple assessments that use different formats; and the quality of the feedback provided to trainees.
The Adult Critical Care Medicine Residency and International Fellowships program at the University of Toronto has undertaken the design, implementation, and evaluation of a program of formative assessment in anticipation of the transition to competency-based training required by the RCPSC. A core group of local educators has determined the goals and priorities of our assessment program. We chose an activity-based framework of assessment, using critical care entrustable professional activities as the unit of evaluation. Following published guidelines, we created an assessment map and selected assessment tools based on the best available evidence, our program goals, and local contexts. These tools will be progressively implemented across the five teaching hospitals where trainees rotate and will contribute to each trainee's learning portfolio. Individual assessment data will be combined every 3 months to provide trainees with supported feedback. We will evaluate the program using a previously published 2x2 approach. Expected and emerging processes and outcomes will be measured to answer predetermined evaluation questions relating, for example, to supporting validity evidence, trainee and faculty engagement and perceptions, and unintended consequences for learning. Data collection and analysis will occur iteratively and will combine quantitative and qualitative data. The results will inform program changes locally and, more broadly, the implementation of competency-based training in critical care.