Quality of Human Expert vs Large Language Model-Generated Multiple-Choice Questions in the Field of Mechanical Ventilation

  • Critical Care Education Research Consortium

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Background: Although mechanical ventilation (MV) is a critical competency in critical care training, standardized methods for assessing MV-related knowledge are lacking. Traditional multiple-choice question (MCQ) development is resource intensive, and prior studies have suggested that generative AI tools could streamline question creation. However, the quality of AI-generated MCQs remains unclear. Research Question: Are MCQs generated by ChatGPT noninferior to human expert (HE)-created questions in terms of quality and relevance for MV education? Study Design and Methods: Three key MV topics were selected: Equation of Motion and Ohm's Law, Tau and Auto-PEEP, and Oxygenation. Fifteen learning objectives were used to generate 15 AI-written MCQs via a standardized prompt with ChatGPT-o1 (preview model; made available September 12, 2024). A group of 31 faculty experts, all of whom regularly teach MV, evaluated both AI- and HE-generated MCQs. Each MCQ was assessed on its alignment with the learning objective, accuracy of the keyed answer, clarity of the question stem, plausibility of the distractor options, and difficulty level. The faculty members were blinded to the provenance of the MCQs. The noninferiority margin was predefined as 15% of the total possible score (–3.45). Results: AI-generated MCQs were statistically noninferior to HE-written MCQs (one-sided 95% CI, [–1.15, ∞)). In addition, respondents were unable to reliably differentiate AI-generated MCQs from HE-written MCQs (P = .32). Interpretation: Our results suggest that AI-generated MCQs using ChatGPT-o1 are comparable in quality to those written by HEs. Given the time- and resource-intensive nature of human MCQ development, AI-assisted question generation may serve as an efficient and scalable alternative for medical education assessment, even in highly specialized domains such as MV.

Original language: English (US)
Pages (from-to): 1425-1432
Number of pages: 8
Journal: CHEST
Volume: 168
Issue number: 6
DOIs
State: Published - Dec 2025

Bibliographical note

Publisher Copyright:
© 2025 American College of Chest Physicians

Keywords

  • artificial intelligence
  • education
  • knowledge assessment
  • large language models
  • mechanical ventilation
  • medical education
  • multiple-choice questions
  • respiratory physiology