University of Wisconsin Madison
Madison, WI
Program Director: Mark Regan MD
Program Type: Pulmonary and Critical Care Medicine
Abstract Authors: Mark Regan MD, Jami Simpson MS
RATIONALE
The Next Accreditation System (NAS) education reform, issued by the ACGME, requires programs to assess fellow competency based on milestones and entrustable professional activities (EPAs). This project describes how we created NAS evaluations and direct observation tools (DOTs) to meet the reform standards. We were aware this assessment redesign would be a dramatic change for our faculty, so we placed importance on minimizing the burden on faculty completing these assessments.
One of the strengths of our program is its high faculty-to-fellow ratio, which provides for aggregated assessments of fellows’ performance. However, our staffing schedules are uncoupled (faculty rotate weekly on the services independent of fellow schedules), which can make it difficult for faculty to have enough interaction with a fellow to provide a full evaluation. Other factors that lower evaluation completion rates include the length of the evaluations and their limited accessibility, both perceived as placing additional burden on faculty. Together, these factors contributed to a low completion rate for fellow evaluations (76% in 2011-2012).
To address both the reform standards and low evaluation completion rates, we initiated a new evaluation deployment plan at the same time that we introduced the new assessment tools.
METHODS
In June 2012, we began holding quarterly Education Committee meetings to review NAS standards, select assessment tools to modify, and create 8 EPAs and their curricular milestones. We created the milestones to progress in complexity: Level 1 (PGY4), Level 2 (PGY5), Level 3 (PGY6, competent), Level 4 (moving toward mastery, faculty level), and Level 5 (mastery, national recognition). We selected the following rotations for the NAS evaluations: Critical Care Unit, Pulmonary Consults, and Advanced Pulmonary Service. Next, we created 5 DOTs to be used during real-time encounters. Our DOTs are Google Forms stored on a password-protected site and accessible from the home screen of an iPad Mini provided to each fellow. Fellows were responsible for asking the faculty to complete the appropriate DOT for each procedure. The DOTs allowed us to remove questions from the monthly evaluation, making it shorter. Finally, instead of sending a NAS evaluation request to faculty after their one-week rotation with a fellow, we solicited their general feedback about the fellow in an informal email. The attending faculty member who worked with the fellow at the end of the month was given the cumulative weekly feedback email responses and asked to complete the NAS evaluation and discuss the feedback with the fellow at the end of the monthly rotation.
To prepare the faculty for the implementation of the new plan, we conducted three Professional Development Training Sessions: (1) introducing NAS, (2) identifying the new levels, and (3) practicing use of the DOTs.
RESULTS
By June 2013 and October 2014, we achieved 86% and 100% completion rates of NAS evaluations, respectively. This was a significant improvement over our baseline completion rate of 76% in 2011-2012. We also averaged 1 DOT per fellow per clinical month, which we feel can still be improved with more frequent reminders to fellows.
CONCLUSIONS
We are quite pleased with the quality of the assessment data and encouraged by the adoption and gradual increase in the use of the DOTs. The previously used Likert scale was too subjective, allowing each faculty member to apply their own measures to gauge a fellow’s competency. The new scale provides more specific, granular expectations for each level, standardizing the measures for each EPA. By eliminating the subjectivity of the scale, we collect higher quality data from the NAS evaluations. Table 1 compares our previous evaluation tool’s Likert scale and report with the new milestones and summary graph.