Assessment centers versus cognitive ability tests: Challenging the conventional wisdom on criterion-related validity

Paul R. Sackett, Oren R. Shewach, Heidi N. Keiser

Research output: Contribution to journal › Article › peer-review


Abstract

Separate meta-analyses of the cognitive ability and assessment center (AC) literatures report higher criterion-related validity for cognitive ability tests in predicting job performance. We instead focus on 17 samples in which both AC and ability scores are obtained for the same examinees and used to predict the same criterion. Thus, we control for differences in job type and in criteria that may have affected prior conclusions. In contrast to Schmidt and Hunter's (1998) meta-analysis, reporting mean validity of .51 for ability and .37 for ACs, we found using random-effects models mean validity of .22 for ability and .44 for ACs using comparable corrections for range restriction and measurement error in the criterion. We posit that 2 factors contribute to the differences in findings: (a) ACs being used on populations already restricted on cognitive ability and (b) the use of less cognitively loaded criteria in AC validation research.

Original language: English (US)
Pages (from-to): 1435-1447
Number of pages: 13
Journal: Journal of Applied Psychology
Volume: 102
Issue number: 10
State: Published - Oct 2017

Keywords

  • Assessment center
  • Cognitive ability
  • Job performance
  • Validation

