Published online by Cambridge University Press: 13 May 2020
Introduction: The Emergency Medicine Specialty Committee of the Royal College of Physicians and Surgeons of Canada (RCPSC) has specified that resuscitation Entrustable Professional Activities (EPAs) can be assessed in either the workplace or simulation environments; however, there is minimal evidence that performance in these two environments correlates. We sought to determine the relationship between workplace and simulation assessments among junior emergency medicine residents.

Methods: We conducted a prospective observational study comparing workplace and simulation resuscitation performance among all first-year residents (n = 9) enrolled in the RCPSC-Emergency Medicine program at the University of Ottawa. All scores from Foundations EPA #1 (F1), which focuses on initiating and assisting in the resuscitation of critically ill patients, were collected during the 2018-2019 academic year. Workplace performance was assessed by clinical supervisors through direct observation during clinical shifts; simulation performance was assessed by trained simulation educators during regularly scheduled sessions. We present descriptive statistics and within-subjects analyses of variance.

Results: We collected 104 workplace and 36 simulation assessments. Interobserver reliability of simulation assessments was high (ICC = 0.863). We observed no correlation between mean EPA scores assigned in the workplace and simulation environments (Spearman's rho = −0.092, p = 0.813). Scores in both environments improved significantly over time (F(1,8) = 18.79, p < 0.001, ηp² = 0.70), from 2.9 (SD = 1.2) in months 1-4 to 3.5 (SD = 0.2) in months 9-12 (p = 0.002). Workplace scores (3.4, SD = 0.1) were consistently higher than simulation scores (2.9, SD = 0.2) (F(1,8) = 7.16, p = 0.028, ηp² = 0.47).

Conclusion: We observed no correlation between EPA F1 ratings of resuscitation performance in the workplace and simulation environments. Further studies should clarify this relationship to inform the ongoing use of simulation to assess clinical competence.
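The correlation analysis described above can be illustrated with a minimal sketch: computing Spearman's rank correlation between each resident's mean workplace and mean simulation EPA scores. The score vectors below are hypothetical placeholders (the study's raw data are not reported in the abstract), and the reported result was rho = −0.092, p = 0.813.

```python
# Minimal sketch of the between-environment correlation analysis.
# The per-resident mean scores here are illustrative only, not the study data.
from scipy.stats import spearmanr

# Hypothetical mean EPA F1 scores for the nine first-year residents
workplace_means = [3.2, 3.5, 3.4, 3.3, 3.6, 3.4, 3.5, 3.3, 3.4]
simulation_means = [2.8, 3.1, 2.7, 3.0, 2.9, 3.2, 2.6, 3.0, 2.9]

# Spearman rank correlation between workplace and simulation mean scores
rho, p_value = spearmanr(workplace_means, simulation_means)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")
```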