Abstract
Constructed-response items are commonly used in educational and psychological testing, and the answers to those items are typically scored by human raters. In current rater monitoring processes, validity scoring is used to ensure that the scores assigned by raters do not deviate severely from rating-quality standards. In this article, an adaptive rater monitoring approach that may improve the efficiency of current rater monitoring practice is proposed. Based on the Rasch partial credit model and known developments in multidimensional computerized adaptive testing, two essay selection methods are proposed: the D-optimal method and the Single Fisher information method. These two methods aim to select the most appropriate essays based on what is already known about a rater's performance. Simulation studies, using a simulated essay bank and a cloned real essay bank, show that the proposed adaptive rater monitoring methods can recover rater parameters with far fewer essays. Future challenges and potential solutions are discussed at the end.
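The abstract names the Rasch partial credit model (PCM) and Fisher-information-based essay selection without giving formulas. The sketch below is a minimal illustration, not the paper's implementation: it assumes a rater's performance is summarized by a single parameter theta, so it mirrors the spirit of the Single Fisher information method; the D-optimal method generalizes this idea by maximizing the determinant of the Fisher information matrix for multi-parameter rater models. All function names (`pcm_category_probs`, `pcm_fisher_information`, `select_next_essay`) and the toy essay bank are hypothetical.

```python
import numpy as np

# A minimal sketch, assuming a unidimensional rater parameter theta.
# Under the Rasch partial credit model (PCM), the probability that a
# rater with parameter theta assigns category k to an essay with step
# difficulties delta_1..delta_K is
#   P(X = k) = exp(sum_{h<=k} (theta - delta_h)) / normalizer,
# where the empty sum for k = 0 is defined as 0.

def pcm_category_probs(theta, deltas):
    """Category probabilities for one essay (categories 0..K)."""
    exponents = np.concatenate(([0.0], np.cumsum(theta - deltas)))
    probs = np.exp(exponents - exponents.max())  # subtract max for stability
    return probs / probs.sum()

def pcm_fisher_information(theta, deltas):
    """Fisher information about theta from one essay.

    For the PCM this equals the variance of the score category:
    I(theta) = E[X^2] - (E[X])^2.
    """
    probs = pcm_category_probs(theta, deltas)
    scores = np.arange(len(probs))
    mean = scores @ probs
    return (scores ** 2) @ probs - mean ** 2

def select_next_essay(theta_hat, essay_bank, administered):
    """Single-Fisher-information-style rule: among essays not yet
    administered, pick the one most informative at the interim
    estimate theta_hat of the rater parameter."""
    candidates = [(pcm_fisher_information(theta_hat, deltas), idx)
                  for idx, deltas in enumerate(essay_bank)
                  if idx not in administered]
    return max(candidates)[1]

# Hypothetical toy bank: 5 essays scored 0-3 (three step difficulties each).
rng = np.random.default_rng(0)
bank = [np.sort(rng.normal(0.0, 1.0, size=3)) for _ in range(5)]
print(select_next_essay(theta_hat=0.4, essay_bank=bank, administered={1}))
```

Under the PCM, the information an essay carries about theta equals the variance of the observed score category, so this selection rule amounts to choosing the essay whose score is least predictable at the current interim estimate of the rater's parameter.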
Original language | English (US) |
---|---|
Pages (from-to) | 60-79 |
Number of pages | 20 |
Journal | Applied Psychological Measurement |
Volume | 41 |
Issue number | 1 |
DOIs | |
State | Published - Jan 1 2017 |
Bibliographical note
Publisher Copyright: © The Author(s) 2016.
Keywords
- Fisher information matrix
- Rasch partial credit model
- essay selection
- interim scoring