Evaluating the validity of educational rating data

Michael Harwell

Research output: Contribution to journal › Article › peer-review


Abstract

The use of trained raters has a long tradition in educational research. A standard feature of studies employing raters is the use of indicators of agreement among raters, such as interrater reliability coefficients. Surprisingly, the validity of rating data has received relatively little attention, raising the undesirable prospect of ratings with satisfactory reliability but little validity. This article suggests two complementary frameworks for providing validity evidence for rating data. One conceptualizes raters as data collection instruments that should be subject to traditional procedures for establishing validity evidence; the other evaluates studies employing raters from an experimental design perspective, permitting the internal validity of the study to be assessed and used as an indicator of the extent to which ratings are attributable to the training of the raters. Two studies employing raters are used to illustrate these ideas.
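The distinction the abstract draws between agreement and validity can be made concrete with an agreement index. The sketch below is an illustration only, not material from the article: it computes Cohen's kappa, one common interrater reliability coefficient, for two raters assigning categorical scores. The function name and rating data are hypothetical. A high kappa indicates that the raters score consistently, but by itself it says nothing about whether the ratings measure what they are intended to measure.

    # Minimal sketch (illustrative; data are hypothetical) of Cohen's kappa
    # for two raters scoring the same set of objects on a categorical scale.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed proportion of exact agreement
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Agreement expected by chance, from each rater's marginal distribution
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        p_expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical ratings of ten essays on a 1-4 rubric
    rater_a = [3, 2, 4, 1, 3, 3, 2, 4, 1, 2]
    rater_b = [3, 2, 4, 1, 3, 2, 2, 4, 1, 2]
    print(cohens_kappa(rater_a, rater_b))  # high agreement, yet no evidence of validity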

Original language: English (US)
Pages (from-to): 25-37
Number of pages: 13
Journal: Educational and Psychological Measurement
Volume: 59
Issue number: 1
DOIs
State: Published - Feb 1999
