University of Wisconsin School of Medicine and Public Health
Madison, WI
Program Director: Joshua Smith
Program Type: Pulmonary/Critical Care
Abstract Authors: Joshua Smith, Jami Simpson, Bryna Ebben, Mark Regan
Description of Fellowship Program: Our fellowship is affiliated with a quaternary referral hospital as well as a VA Hospital. We have 24 faculty members, 12 of whom practice in the ICU. Our mentorship program pairs trainees with key clinical education faculty members to provide role models who exemplify our values and offer support via regular meetings throughout training. We offer rotations in the SICU, neurosurgery, acute care surgery, and trauma.
Abstract
Background
The ACGME’s Next Accreditation System requires training programs to document milestone performance at 6-month intervals during a trainee’s education. Our Clinical Competency Committee (CCC) meets to discuss all trainees’ performance on the milestone scale. Our program identified barriers that made the CCC meetings and milestone selection process difficult: rotational evaluations provided limited insight into a trainee’s performance, mentors did not have access to trainees’ data until immediately before the CCC meeting, and trainees did not have a central repository for their performance data. We aimed to redesign our evaluation system and create an electronic portfolio for our fellows that would be readily available, including on mobile devices, for use in a variety of settings.
Our trainees rotate with numerous faculty members over the course of a month. Frequently, end-of-rotation feedback consisted of comments like “good fellow” or “should read more.” To elicit more specific, task-directed feedback, we developed an evaluation tool that is sent to faculty on a daily basis. We modified our previous monthly evaluations and mapped them to ACGME milestones. These daily evaluations were created in Qualtrics, and faculty were sent a text message each workday (Monday through Friday) as a reminder to complete the evaluation.
We allowed our ICU faculty to choose one topic to evaluate each day (Image 1). Our goal was for faculty to complete at least two evaluations per week, recognizing that clinical service demands and expectations of trainees could limit opportunities to provide meaningful, specific feedback. We aimed for eight completed evaluations per fellow per month, for a total of 16 evaluations per month (two fellows rotating per month).
We assessed faculty completion of these evaluations from March through October 2017, with a goal of 128 total evaluations. During this time, 76 evaluations were performed on nine fellows, 59% of the target number. Fellows had a mean of 4.8 evaluations per month, with a range of 0 to 12. For fellows with low numbers of daily evaluations, we elicited additional feedback from faculty. Among the completed evaluations, we identified 34 comments with meaningful, specific feedback directed at individual tasks (44.7% of evaluations).
To address the remaining barriers (mentors’ limited access to trainee data and the lack of a central repository), our fellowship coordinator created individualized electronic portfolios in an intranet website called MyPort. MyPort is password protected, allowing access by the mentor, trainee, and program directors. The portfolio is organized to provide an at-a-glance view of a trainee’s competency throughout training. We do this by offering graphic representations of peer-to-peer comparison data such as in-training exams, procedure log completion, and milestone placements (Image 2). Trainees are able to view their personal scores relative to the class average. Since MyPort is mobile friendly, it can be viewed on the go for instant review of valuable feedback. All daily evaluations are made available on MyPort at the completion of a rotation for fellows to view. This tool has allowed for more effective mentor-mentee relationships, since both parties have access to up-to-date information and can formulate action plans in partnership.
With these innovations, we believe we are transitioning our fellowship from the paper age to the digital age. In the process, we are providing our fellows with meaningful feedback and a user-friendly system to review their performance over time. We recognize we still have areas for improvement, as we have not yet achieved our target number of evaluations completed each month. We aim to use the data collected to give further feedback to our faculty, improving both evaluation completion rates and the inclusion of meaningful comments.