Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery

Grace L. Paley, Rebecca Grove, Tejas C. Sekhar, Jack Pruett, Michael V. Stock, Tony N. Pira, Steven M. Shields, Evan L. Waxman, Bradley S. Wilson, Mae O. Gordon, Susan M. Culican

Research output: Contribution to journal › Article › peer-review


Abstract

OBJECTIVE: To test whether crowdsourced lay raters can accurately assess cataract surgical skills.

DESIGN: Two-armed study: independent cross-sectional and longitudinal cohorts.

SETTING: Washington University Department of Ophthalmology.

PARTICIPANTS AND METHODS: Sixteen cataract surgeons with varying experience levels submitted cataract surgery videos to be graded by 5 experts and more than 300 crowdworkers, all masked to surgeon experience. Cross-sectional study: 50 videos from surgeons ranging from first-year resident to attending physician, pooled by years of training. Longitudinal study: 28 videos obtained at regular intervals as residents progressed through 180 cases. Surgical skill was graded using the modified Objective Structured Assessment of Technical Skill (mOSATS). Main outcome measures were overall technical performance, reliability indices, and correlation between expert and crowd mean scores.
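
As a rough illustration of the agreement statistics named above, the sketch below computes a Pearson correlation and a paired t-test between expert and crowd mean scores with scipy. The arrays, values, and variable names are invented for illustration and are not the study's data or code.

    # Illustrative sketch only: hypothetical per-video mean mOSATS scores.
    # All values below are invented; they are not the study's data.
    import numpy as np
    from scipy import stats

    expert_mean = np.array([2.1, 2.8, 3.4, 3.9, 4.5])  # hypothetical expert means
    crowd_mean = np.array([2.9, 3.3, 3.8, 4.1, 4.6])   # hypothetical crowd means

    r, p = stats.pearsonr(expert_mean, crowd_mean)          # expert-crowd correlation
    t, p_paired = stats.ttest_rel(crowd_mean, expert_mean)  # paired comparison of means

    print(f"Pearson r = {r:.3f} (p = {p:.4f})")
    print(f"Paired t = {t:.3f} (p = {p_paired:.4f})")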

RESULTS: Experts demonstrated high interrater reliability and accurately predicted training level, establishing construct validity for the modified OSATS. Crowd scores correlated with expert scores (r = 0.865, p < 0.0001) but were consistently higher for first-, second-, and third-year residents (p < 0.0001, paired t-test). Longer surgery duration negatively correlated with training level (r = -0.855, p < 0.0001) and expert score (r = -0.927, p < 0.0001). The longitudinal dataset reproduced the cross-sectional findings for crowd and expert comparisons. A regression equation transforming crowd score plus video length into expert score was derived from the cross-sectional dataset (r² = 0.92) and demonstrated excellent predictive modeling when applied to the independent longitudinal dataset (r² = 0.80). A group of student raters who had edited the cataract videos also graded them, producing scores that approximated the expert scores more closely than the crowd's did.
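
A minimal sketch of the kind of two-predictor linear model described above (expert score regressed on crowd score and surgery duration, fit on one dataset and validated on an independent one) follows. The data, feature layout, and use of scikit-learn are assumptions for illustration; this is not the authors' model or code.

    # Illustrative sketch, not the published model: regress expert score on
    # crowd score + video length, then evaluate r^2 on an independent set.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical cross-sectional training data: [crowd score, minutes].
    X_cross = np.array([[3.1, 28.0], [3.6, 21.0], [4.0, 15.0], [4.4, 12.0]])
    y_cross = np.array([2.3, 3.0, 3.8, 4.4])  # hypothetical expert mOSATS scores

    model = LinearRegression().fit(X_cross, y_cross)

    # Hypothetical independent longitudinal data used as a validation set.
    X_long = np.array([[3.3, 25.0], [3.9, 17.0], [4.3, 13.0]])
    y_long = np.array([2.6, 3.5, 4.2])

    print("r^2 (training):", model.score(X_cross, y_cross))
    print("r^2 (validation):", model.score(X_long, y_long))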

CONCLUSIONS: Crowdsourced rankings correlated with expert scores but were not equivalent to them; crowd scores overestimated technical competency, especially for novice surgeons. A novel approach of adjusting crowd scores by surgery duration yielded a more accurate predictive model of surgical skill. More studies are needed before crowdsourcing can be used reliably to assess surgical proficiency.

Original language: English (US)
Pages (from-to): 1077-1088
Number of pages: 12
Journal: Journal of Surgical Education
Volume: 78
Issue number: 4
Early online date: Feb 25, 2021
DOIs
State: Published - Jul 1, 2021

Bibliographical note

Funding Information:
SMC and ELW have received honoraria for invited lectures including the AUPO Excellence in Medical Education (ELW) and Straatsma (SMC) Awards. MOG and BSW have grant funding from NIH (NCATS UG1 EY025182, NEI UG1 EY025181, NEI R01 EY026199, NEI R01 EY026641, NEI R21 EY030524 and NEI R21 EY031125, NEI UG1 EY025183, NEI UG1 EY025182, NEI R01 EY026199, respectively). No financial disclosures for any other author.

Funding Information:
This work was supported in part by an unrestricted grant from Research to Prevent Blindness, Inc. to the Department of Ophthalmology and Visual Sciences at Washington University; Vision Core Grant P30 EY 02687 from the National Institutes of Health; the Elizabeth Ann Brandom Charitable Lead Trust; and Experiment.com Crowdfunding Platform. The sponsors or funding organizations had no role in the design or conduct of this research.

Publisher Copyright:
© 2021 The Author(s)

Keywords

  • Crowdsourcing
  • cataract surgery
  • phacoemulsification
  • surgical assessment
  • surgical competence
  • Clinical Competence
  • Cataract
  • Reproducibility of Results
  • Cross-Sectional Studies
  • Internship and Residency
  • Humans
  • Washington
  • Longitudinal Studies

PubMed: MeSH publication types

  • Research Support, Non-U.S. Gov't
  • Journal Article
  • Research Support, N.I.H., Extramural
