TY - JOUR
T1 - Quality of Human Expert vs Large Language Model-Generated Multiple-Choice Questions in the Field of Mechanical Ventilation
AU - Critical Care Education Research Consortium
AU - Safadi, Sami
AU - Amirahmadi, Roxana
AU - Tlimat, Abdulhakim
AU - Rovinski, Randal
AU - Sun, Junfeng
AU - Lee, Burton W.
AU - Seam, Nitin
N1 - Publisher Copyright:
© 2025 American College of Chest Physicians
PY - 2025/12
Y1 - 2025/12
N2 - Background: Although mechanical ventilation (MV) is a critical competency in critical care training, standardized methods for assessing MV-related knowledge are lacking. Traditional multiple-choice question (MCQ) development is resource intensive, and prior studies have suggested that generative AI tools could streamline question creation. However, the quality of AI-generated MCQs remains unclear. Research Question: Are MCQs generated by ChatGPT noninferior to human expert (HE)-created questions in terms of quality and relevance for MV education? Study Design and Methods: Three key MV topics were selected: Equation of Motion and Ohm's Law, Tau and Auto-PEEP, and Oxygenation. Fifteen learning objectives were used to generate 15 AI-written MCQs via a standardized prompt with ChatGPT-o1 (preview model; made available September 12, 2024). A group of 31 faculty experts, all of whom regularly teach MV, evaluated both AI- and HE-generated MCQs. Each MCQ was assessed based on its alignment with learning objectives, accuracy of the chosen answer, clarity of the question stem, plausibility of distractor options, and difficulty level. The faculty members were blinded to the provenance of the MCQs. The noninferiority margin was predefined as 15% of the total possible score (–3.45). Results: AI-generated MCQs were statistically noninferior to the HE-written MCQs (one-sided 95% CI, [–1.15, ∞]). In addition, respondents were unable to reliably differentiate AI-generated MCQs from HE-written MCQs (P = .32). Interpretation: Our results suggest that AI-generated MCQs using ChatGPT-o1 are comparable in quality to those written by HEs. Given the time- and resource-intensive nature of human MCQ development, AI-assisted question generation may serve as an efficient and scalable alternative for medical education assessment, even in highly specialized domains such as MV.
AB - Background: Although mechanical ventilation (MV) is a critical competency in critical care training, standardized methods for assessing MV-related knowledge are lacking. Traditional multiple-choice question (MCQ) development is resource intensive, and prior studies have suggested that generative AI tools could streamline question creation. However, the quality of AI-generated MCQs remains unclear. Research Question: Are MCQs generated by ChatGPT noninferior to human expert (HE)-created questions in terms of quality and relevance for MV education? Study Design and Methods: Three key MV topics were selected: Equation of Motion and Ohm's Law, Tau and Auto-PEEP, and Oxygenation. Fifteen learning objectives were used to generate 15 AI-written MCQs via a standardized prompt with ChatGPT-o1 (preview model; made available September 12, 2024). A group of 31 faculty experts, all of whom regularly teach MV, evaluated both AI- and HE-generated MCQs. Each MCQ was assessed based on its alignment with learning objectives, accuracy of the chosen answer, clarity of the question stem, plausibility of distractor options, and difficulty level. The faculty members were blinded to the provenance of the MCQs. The noninferiority margin was predefined as 15% of the total possible score (–3.45). Results: AI-generated MCQs were statistically noninferior to the HE-written MCQs (one-sided 95% CI, [–1.15, ∞]). In addition, respondents were unable to reliably differentiate AI-generated MCQs from HE-written MCQs (P = .32). Interpretation: Our results suggest that AI-generated MCQs using ChatGPT-o1 are comparable in quality to those written by HEs. Given the time- and resource-intensive nature of human MCQ development, AI-assisted question generation may serve as an efficient and scalable alternative for medical education assessment, even in highly specialized domains such as MV.
KW - artificial intelligence
KW - education
KW - knowledge assessment
KW - large language models
KW - mechanical ventilation
KW - medical education
KW - multiple-choice questions
KW - respiratory physiology
UR - https://www.scopus.com/pages/publications/105021110213
UR - https://www.scopus.com/pages/publications/105021110213#tab=citedBy
U2 - 10.1016/j.chest.2025.07.005
DO - 10.1016/j.chest.2025.07.005
M3 - Article
C2 - 40684906
AN - SCOPUS:105021110213
SN - 0012-3692
VL - 168
SP - 1425
EP - 1432
JO - CHEST
JF - CHEST
IS - 6
ER -