Abstract
Effective distractors in multiple-choice items should attract lower-ability students (those with misconceptions or limited knowledge and skills) because they are based on common misconceptions or errors in logic. A large, multi-state data set collected for a quasi-experimental study on test modifications was analyzed to measure the impact of those modifications on distractor functioning. The key modification of interest was the removal of the weakest of three distractors from 39 items in reading and 39 items in mathematics. Distractor functioning was neither systematically improved nor systematically weakened through the modification process; however, more than 70% of the distractors became more discriminating. In mathematics, a moderate correlation between distractor selection rate and distractor discrimination may have indicated that the modified items were being missed by the appropriate students. Implications of these findings for test developers are discussed.
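For readers unfamiliar with the two distractor indices named in the abstract, the sketch below illustrates one common way to compute them; it is not taken from the article, and the function name and example data are illustrative only. Selection rate is the proportion of examinees choosing a distractor, and discrimination is approximated here as the point-biserial correlation between choosing that distractor and the examinee's total score.

```python
# Minimal distractor-analysis sketch (not the authors' code): for one
# multiple-choice item, compute each distractor's selection rate and its
# discrimination, taken here as the point-biserial correlation between
# choosing that distractor (1/0) and examinees' total test scores.
import numpy as np

def distractor_stats(responses, total_scores, key):
    """responses: chosen options (e.g., 'A'..'D') for one item;
    total_scores: examinees' total test scores; key: the correct option."""
    responses = np.asarray(responses)
    total_scores = np.asarray(total_scores, dtype=float)
    stats = {}
    for option in np.unique(responses):
        if option == key:
            continue  # only distractors are of interest here
        chose = (responses == option).astype(float)
        selection_rate = chose.mean()
        # Point-biserial r is the Pearson correlation between the 0/1
        # choice indicator and total score; undefined if everyone (or
        # no one) chose the option.
        if 0 < chose.sum() < len(chose):
            discrimination = np.corrcoef(chose, total_scores)[0, 1]
        else:
            discrimination = float("nan")
        stats[option] = {"selection_rate": selection_rate,
                         "discrimination": discrimination}
    return stats

# Example with made-up data: a well-functioning distractor should show
# negative discrimination (chosen mainly by lower-scoring examinees).
if __name__ == "__main__":
    resp = ["A", "B", "A", "C", "A", "B", "D", "A", "C", "B"]
    scores = [35, 12, 40, 18, 38, 15, 22, 41, 10, 14]
    print(distractor_stats(resp, scores, key="A"))
```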
| Original language | English (US) |
|---|---|
| Journal | SAGE Open |
| Volume | 4 |
| Issue number | 4 |
| DOIs | |
| State | Published - Dec 1 2014 |
Bibliographical note
Funding Information: The author(s) disclosed receipt of the following financial support for the research and/or authorship of this article: The current study was implemented as part of the Consortium for Alternate Assessment Validity and Experimental Studies (CAAVES) project, a multi-state project funded by the U.S. Department of Education (Award to Idaho Department of Education; #S368A0600012).
Keywords
- Achievement
- Education
- Educational measurement and assessment
- Reliability and validity
- Social sciences
- Special education