Experimental evidence on the effectiveness of automated essay scoring in teacher education cases

Eric Riedel, Sara L. Dexter, Cassandra Scharber, Aaron Doering

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Research on computer-based writing evaluation has only recently focused on its potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy preservice teachers in four teacher education classes were assigned to complete two cases. Each student was randomly assigned either to a condition in which the AES was available (experimental condition) or to a condition in which it was unavailable (control condition). Students in the experimental condition who opted to use the AES submitted more highly rated final, human-scored essays (in the second of the two case studies) and conducted more relevant searches (in both case studies) than students in the control condition or students in the experimental condition who chose not to use the scorer.

Original language: English (US)
Pages (from-to): 267-287
Number of pages: 21
Journal: Journal of Educational Computing Research
Volume: 35
Issue number: 3
DOIs
State: Published - Dec 1 2006
