Are Fit Indices Used to Test Psychopathology Structure Biased? A Simulation Study

Ashley L. Greene, Nicholas R. Eaton, Kaiqiao Li, Miriam K. Forbes, Robert F. Krueger, Kristian E. Markon, Irwin D. Waldman, David C. Cicero, Christopher C. Conway, Anna R. Docherty, Eiko I. Fried, Masha Y. Ivanova, Katherine G. Jonas, Robert D. Latzman, Christopher J. Patrick, Ulrich Reininghaus, Jennifer L. Tackett, Aidan G.C. Wright, Roman Kotov

Research output: Contribution to journal › Article › peer-review

99 Scopus citations

Abstract

Structural models of psychopathology provide dimensional alternatives to traditional categorical classification systems. Competing models, such as the bifactor and correlated factors models, are typically compared via statistical indices to assess how well each model fits the same data. However, simulation studies have found evidence for pro-bifactor fit index bias in several psychological research domains. The present study sought to extend this research to models of psychopathology, wherein the bifactor model has received much attention but its susceptibility to bias is not well characterized. We used Monte Carlo simulations to examine how various model misspecifications produced fit index bias for two commonly used estimators, WLSMV and MLR. We simulated binary indicators to represent psychiatric diagnoses and positively skewed continuous indicators to represent symptom counts. Across combinations of estimators, indicator distributions, and misspecifications, complex patterns of bias emerged, with fit indices more often than not failing to correctly identify the correlated factors model as the data-generating model. No fit index emerged as reliably unbiased across all misspecification scenarios. However, tests of model equivalence indicated that in one instance fit indices were not biased: they favored the bifactor model, albeit not unfairly. Overall, results suggest that comparisons of bifactor models to alternatives using fit indices may be misleading and call into question the evidentiary meaning of previous studies that identified the bifactor model as superior based on fit. We highlight the importance of comparing models based on substantive interpretability and their utility for addressing study aims, the methodological significance of model equivalence, as well as the need for implementation of statistical metrics that evaluate model quality.
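The simulation design described above starts from a correlated factors model as the data-generating structure, with binary indicators standing in for psychiatric diagnoses. The following sketch (not the authors' code; all loadings, the factor correlation, and the dichotomization threshold are hypothetical values chosen for illustration) shows how such binary data can be generated by thresholding continuous indicators from a two-factor model:

```python
import numpy as np

# Illustrative sketch: generate binary indicators from a correlated
# two-factor model, the data-generating structure described in the
# abstract. Parameter values here are hypothetical.
rng = np.random.default_rng(seed=1)

n = 1000            # sample size
loading = 0.7       # common factor loading for all indicators
factor_corr = 0.5   # correlation between the two factors

# Draw correlated factor scores from a bivariate normal distribution
cov = np.array([[1.0, factor_corr],
                [factor_corr, 1.0]])
factors = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Six continuous indicators: three load on each factor,
# with unique variance chosen so total variance is 1
uniq_sd = np.sqrt(1.0 - loading**2)
cont = np.empty((n, 6))
for j in range(6):
    f = factors[:, 0] if j < 3 else factors[:, 1]
    cont[:, j] = loading * f + rng.normal(0.0, uniq_sd, size=n)

# Dichotomize at a threshold to mimic binary diagnostic indicators;
# a higher threshold yields a lower observed "prevalence"
threshold = 1.0
binary = (cont > threshold).astype(int)

print(binary.mean(axis=0))  # observed "prevalence" of each indicator
```

In a full simulation, data sets like this would then be fit with both the correlated factors and bifactor models (e.g., via SEM software supporting WLSMV and MLR estimation) and the resulting fit indices compared; that fitting step is beyond this sketch.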

Original language: English (US)
Journal: Journal of Abnormal Psychology
State: Published - Oct 2019

Bibliographical note

Publisher Copyright:
© 2019 American Psychological Association.

Keywords

  • Bifactor model
  • Factor analysis
  • Fit index bias
  • Model evaluation
  • Monte Carlo simulation
  • Mental Disorders/diagnosis
  • Computer Simulation
  • Humans
  • Models, Psychological

PubMed: MeSH publication types

  • Journal Article
