Resolving the assessment center construct validity problem (as we know it)

Research output: Contribution to journal › Article › peer-review



Ongoing concern about the construct validity of assessment center dimensions has focused on postexercise dimension ratings (PEDRs), which are consistently found to reflect exercise variance to a greater degree than dimension variance. Here, we present a solution to this problem. Based on the argument that PEDRs are an intermediate step toward an overall dimension rating, and that the overall dimension rating should be the focus of inquiry, we demonstrate that correlated sources of dimension variance accumulate and increasingly displace uncorrelated sources of both systematic variance and error. Viewing overall dimension ratings as composites of PEDRs, we show that dimension variance quickly overtakes exercise-specific variance as the dominant source of variance when ratings from multiple exercises are combined. We embed our results in a new framework for categorizing different levels of construct variance dominance, and our results indicate that with as few as two exercises, dimension variance can reach our lowest level of construct variance dominance. However, the largest source of dimension variance is a general factor. We conclude that the construct validity problem in assessment centers never existed as historically framed, but that the presence of a general factor may limit interpretation for developmental purposes.
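The accumulation argument in the abstract can be sketched with standard composite-variance algebra. The decomposition and the numeric variance components below are illustrative assumptions, not the article's model or estimates: suppose each PEDR is the sum of a dimension component shared across exercises plus exercise-specific and error components that are uncorrelated across exercises. The shared component's variance then scales with the square of the number of exercises in the composite, while the uncorrelated components scale only linearly.

```python
# Illustrative sketch (assumed decomposition, not the authors' exact model):
# each PEDR X_i = D + E_i + err_i, where D (dimension) is common to all
# exercises, while E_i (exercise-specific) and err_i (error) are
# uncorrelated across exercises.

def dimension_share(n, var_d, var_e, var_err):
    """Proportion of composite variance due to the shared dimension component.

    For a composite of n PEDRs:
        Var(sum X_i) = n^2 * var_d + n * var_e + n * var_err
    so the dimension share simplifies to
        n * var_d / (n * var_d + var_e + var_err),
    which increases toward 1 as n grows.
    """
    return n * var_d / (n * var_d + var_e + var_err)

# Hypothetical variance components, chosen only for illustration.
var_d, var_e, var_err = 0.3, 0.5, 0.2

for n in (1, 2, 4):
    print(n, round(dimension_share(n, var_d, var_e, var_err), 3))
# → 1 0.3
# → 2 0.462
# → 4 0.632
```

Under these assumed values, a single PEDR is dominated by non-dimension variance, yet with only two exercises the dimension share already exceeds the exercise-specific share, consistent with the abstract's "as few as two exercises" claim.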

Original language: English (US)
Pages (from-to): 38-47
Number of pages: 10
Journal: Journal of Applied Psychology
Issue number: 1
State: Published - Jan 1 2014


  • Assessment center
  • Construct validity
  • Dimension variance
