Objective: We examined the impact of video editing and of rater expertise in surgical resident evaluation on operative performance ratings of surgical trainees.
Design: Randomized independent review of intraoperative video.
Setting: Operative video was captured at a single tertiary hospital in Boston, MA.
Participants: Six common general surgery procedures performed by 6 attending-trainee dyads were video recorded. Full-length and condensed versions (n = 12 videos) were then reviewed by 13 independent surgeon raters (5 evaluation experts, 8 nonexperts) using a crossed design. Trainee performance was rated using the Operative Performance Rating Scale, the System for Improving and Measuring Procedural Learning (SIMPL) Performance scale, the Zwisch scale, and the ten Cate scale. Ratings were standardized and then compared using Bayesian mixed models with raters and videos treated as random effects.
Results: Editing had no effect on Operative Performance Rating Scale Overall Performance (-0.10, p = 0.30), SIMPL Performance (0.13, p = 0.71), Zwisch (-0.12, p = 0.27), or ten Cate scale ratings (-0.13, p = 0.29). Rater expertise (evaluation expert vs. nonexpert) likewise had no effect on the same scales (-0.16, p = 0.32; 0.18, p = 0.74; 0.25, p = 0.81; and 0.25, p = 0.17, respectively).
Conclusions: Operative performance assessment scores differ little whether raters view condensed or full-length videos, and whether or not raters are experts in surgical resident evaluation. Future validation studies of operative performance assessment scales may be facilitated by using nonexpert surgeon raters to view videos condensed under a standardized protocol.
Bibliographical note
Funding: The project was supported by a grant from the Association of Surgical Education (ASE) and the Association of Program Directors in Surgery (APDS).
© 2020 Association of Program Directors in Surgery
Keywords:
- operative evaluation
- residency training