Investigating crowdsourcing to generate distractors for multiple-choice assessments

Travis Scheponik, Enis Golaszewski, Geoffrey Herman, Spencer Offenberger, Linda Oliva, Peter A.H. Peterson, Alan T. Sherman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We present and analyze results from a pilot study that explores how crowdsourcing can be used in the process of generating distractors (incorrect answer choices) in multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each of these groups. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, through Amazon Mechanical Turk, we obtained feedback from additional subjects on the resulting new draft test items (including distractors). Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool in generating effective distractors (attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method of having one or more experts draft distractors, building on talk-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
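The distractor-generation pipeline the abstract outlines (filter responses, group similar ones, keep the four most frequent groups) can be sketched in code. This is a minimal illustration, not the authors' actual procedure: it assumes responses are short free-text strings, filters by a simple length heuristic, and groups by exact match after normalization, whereas the study's filtering and grouping of similar responses involved human judgment.

```python
from collections import Counter

def propose_distractors(responses, min_chars=3, k=4):
    """Filter crowd responses, group them, and return the k most
    frequent groups as candidate distractors.

    Heuristic sketch: low-effort responses are dropped by a minimum
    length check, and "similar" responses are grouped only by exact
    match after lowercasing and whitespace stripping.
    """
    # Normalize so trivially different responses fall into one group.
    normalized = [r.strip().lower() for r in responses]
    # Filter out responses too short to reflect genuine effort.
    kept = [r for r in normalized if len(r) >= min_chars]
    # Group identical normalized responses; keep the k largest groups.
    groups = Counter(kept)
    return [response for response, _count in groups.most_common(k)]
```

A real pipeline would replace the exact-match grouping with fuzzy or semantic similarity and end with a human refining one representative distractor per group, as the paper describes.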

Original language: English (US)
Title of host publication: National Cyber Summit (NCS) Research Track, 2019
Editors: Kim-Kwang Raymond Choo, Thomas H. Morris, Gilbert L. Peterson
Publisher: Springer
Pages: 185-201
Number of pages: 17
ISBN (Print): 9783030312381
DOIs
State: Published - 2020
Event: National Cyber Summit, NCS 2019 - Huntsville, United States
Duration: Jun 4 2019 - Jun 6 2019

Publication series

Name: Advances in Intelligent Systems and Computing
Volume: 1055
ISSN (Print): 2194-5357
ISSN (Electronic): 2194-5365

Conference

Conference: National Cyber Summit, NCS 2019
Country/Territory: United States
City: Huntsville
Period: 6/4/19 - 6/6/19

Bibliographical note

Funding Information:
Acknowledgments. This work was supported in part by the U.S. Department of Defense under CAE-R grants H98230-15-1-0294, H98230-15-1-0273, H98230-17-1-0349, and H98230-17-1-0347; and by the National Science Foundation under SFS grants 1241576, 1753681, and 1819521, and DGE grant 1820531.

Publisher Copyright:
© Springer Nature Switzerland AG 2020.

Keywords

  • Amazon Mechanical Turk
  • Concept inventories
  • Crowdsourcing
  • Cybersecurity Assessment Tools (CATS) Project
  • Cybersecurity Concept Inventory (CCI)
  • Cybersecurity education
  • Digital Logic Concept Inventory (DLCI)
  • Distractors
  • Multiple-choice questions
