Direct behavior rating (DBR): Generalizability and dependability across raters and observations

Theodore J. Christ, T. Chris Riley-Tillman, Sandra M. Chafouleas, Christina H. Boice

Research output: Contribution to journal › Article › peer-review



Generalizability theory was used to examine the generalizability and dependability of outcomes from two single-item Direct Behavior Rating (DBR) scales: DBR of actively manipulating and DBR of visually distracted. DBR is a behavioral assessment tool with specific instrumentation and procedures that can be used by a variety of service delivery providers (e.g., teacher, teacher aide, parent, etc.) to collect time-series data on student behavior. The purpose of this study was to extend the findings presented by Chafouleas et al. with an examination of DBR outcomes as they are generalized across raters and rating occasions. One hundred twenty-five undergraduates viewed and rated student behavior on video clips while the children engaged in an unsolvable Lego puzzle task. A series of decision studies were used to evaluate the effects of alternate assessment conditions (variable numbers of raters and rating occasions) and interpretive assumptions (definitions of the universe of generalization). Results support the general conclusion that ratings from individual or small groups of simultaneous raters, when generalized only to that specific individual or group of individuals, can approach reliability criteria for low- and high-stakes decisions. Implications are discussed.
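The decision-study logic summarized above can be sketched numerically: under a persons × raters × occasions design, the generalizability (Eρ²) coefficient for relative decisions is universe-score variance divided by itself plus relative error variance, and averaging over more raters or occasions shrinks that error. The variance components below are hypothetical placeholders for illustration, not values reported in this study.

```python
# Illustrative D-study sketch for a p x r x o (persons x raters x occasions)
# generalizability design. All variance components are hypothetical.

def g_coefficient(var_p, var_pr, var_po, var_pro_e, n_raters, n_occasions):
    """Eρ² for relative decisions: universe-score variance over
    universe-score variance plus relative error variance."""
    rel_error = (var_pr / n_raters
                 + var_po / n_occasions
                 + var_pro_e / (n_raters * n_occasions))
    return var_p / (var_p + rel_error)

# Hypothetical components: person, person x rater,
# person x occasion, and residual variance.
components = dict(var_p=0.50, var_pr=0.10, var_po=0.15, var_pro_e=0.25)

single = g_coefficient(**components, n_raters=1, n_occasions=1)
pooled = g_coefficient(**components, n_raters=3, n_occasions=5)
# Pooling over raters and occasions pushes the coefficient toward 1.
```

This mirrors how alternate assessment conditions (numbers of raters and rating occasions) are compared in a decision study: the same variance-component estimates are reused while the averaging facets change.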

Original language: English (US)
Pages (from-to): 825-843
Number of pages: 19
Journal: Educational and Psychological Measurement
Issue number: 5
State: Published - 2010



Keywords

  • classroom behavior
  • direct behavior rating
  • direct observation
  • educational measurement
  • generalizability theory
  • rating scale
  • social behavior

