Abstract
Separate meta-analyses of the cognitive ability and assessment center (AC) literatures report higher criterion-related validity for cognitive ability tests than for ACs in predicting job performance. We instead focus on 17 samples in which both AC and ability scores were obtained for the same examinees and used to predict the same criterion, thereby controlling for differences in job type and in criteria that may have affected prior conclusions. In contrast to Schmidt and Hunter's (1998) meta-analysis, which reported mean validities of .51 for ability and .37 for ACs, we found, using random-effects models with comparable corrections for range restriction and for measurement error in the criterion, mean validities of .22 for ability and .44 for ACs. We posit that two factors contribute to the difference in findings: (a) ACs being used on populations already restricted on cognitive ability and (b) the use of less cognitively loaded criteria in AC validation research.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1435-1447 |
| Number of pages | 13 |
| Journal | Journal of Applied Psychology |
| Volume | 102 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 2017 |
Keywords
- Assessment center
- Cognitive ability
- Job performance
- Validation