Are Fit Indices Used to Test Psychopathology Structure Biased? A Simulation Study

Ashley L. Greene, Nicholas R. Eaton, Kaiqiao Li, Miriam K. Forbes, Robert F. Krueger, Kristian E. Markon, Irwin D. Waldman, David C. Cicero, Christopher C. Conway, Anna R. Docherty, Eiko I. Fried, Masha Y. Ivanova, Katherine G. Jonas, Robert D. Latzman, Christopher J. Patrick, Ulrich Reininghaus, Jennifer L. Tackett, Aidan G.C. Wright, Roman Kotov

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Structural models of psychopathology provide dimensional alternatives to traditional categorical classification systems. Competing models, such as the bifactor and correlated factors models, are typically compared via statistical indices to assess how well each model fits the same data. However, simulation studies have found evidence for pro-bifactor fit index bias in several psychological research domains. The present study sought to extend this research to models of psychopathology, wherein the bifactor model has received much attention but its susceptibility to bias is not well characterized. We used Monte Carlo simulations to examine how various model misspecifications produced fit index bias for two commonly used estimators, WLSMV (weighted least squares with mean- and variance-adjustment) and MLR (robust maximum likelihood). We simulated binary indicators to represent psychiatric diagnoses and positively skewed continuous indicators to represent symptom counts. Across combinations of estimators, indicator distributions, and misspecifications, complex patterns of bias emerged, with fit indices more often than not failing to correctly identify the correlated factors model as the data-generating model. No fit index emerged as reliably unbiased across all misspecification scenarios. However, tests of model equivalence indicated that in one instance fit indices were not biased: they favored the bifactor model, albeit not unfairly. Overall, results suggest that comparisons of bifactor models to alternatives using fit indices may be misleading, and they call into question the evidentiary meaning of previous studies that identified the bifactor model as superior based on fit. We highlight the importance of comparing models based on substantive interpretability and their utility for addressing study aims, the methodological significance of model equivalence, and the need for statistical metrics that evaluate model quality.
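The data-generating side of the design described above — binary "diagnosis" indicators produced by correlated latent factors — can be sketched in a few lines. This is a minimal illustration, not the authors' exact simulation: the two-factor structure, loadings, factor correlation, and threshold below are all hypothetical values chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: two correlated latent factors with four binary
# indicators each. All numeric values are illustrative, not the study's.
n = 5000
factor_corr = 0.5
loadings = np.array([0.7, 0.6, 0.8, 0.65])
threshold = 1.0  # dichotomizing at z = 1 gives low prevalence, like diagnoses

# Draw latent factor scores from a correlated bivariate normal.
cov = np.array([[1.0, factor_corr], [factor_corr, 1.0]])
eta = rng.multivariate_normal([0.0, 0.0], cov, size=n)

def items_for(factor_scores):
    # Continuous latent response = loading * factor + unique error,
    # scaled so each latent response has unit total variance.
    errors = rng.standard_normal((n, len(loadings))) * np.sqrt(1 - loadings**2)
    return factor_scores[:, None] * loadings + errors

y_cont = np.hstack([items_for(eta[:, 0]), items_for(eta[:, 1])])

# Binary "diagnoses": present (1) if the latent response exceeds the threshold.
y_bin = (y_cont > threshold).astype(int)

print(y_bin.shape)  # (5000, 8)
```

In a full replication of the study's logic, one would fit both a correlated factors model and a bifactor model to many such datasets (e.g., with a WLSMV-type estimator appropriate for binary indicators) and tabulate how often each fit index favors the wrong model.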

Original language: English (US)
Journal: Journal of Abnormal Psychology
DOI: https://doi.org/10.1037/abn0000434
State: Published - Jan 1 2019

Keywords

  • Bifactor model
  • Factor analysis
  • Fit index bias
  • Model evaluation
  • Monte Carlo simulation

PubMed: MeSH publication types

  • Journal Article

Cite this

Greene, A. L., Eaton, N. R., Li, K., Forbes, M. K., Krueger, R. F., Markon, K. E., Waldman, I. D., Cicero, D. C., Conway, C. C., Docherty, A. R., Fried, E. I., Ivanova, M. Y., Jonas, K. G., Latzman, R. D., Patrick, C. J., Reininghaus, U., Tackett, J. L., Wright, A. G. C., & Kotov, R. (2019). Are Fit Indices Used to Test Psychopathology Structure Biased? A Simulation Study. Journal of Abnormal Psychology. https://doi.org/10.1037/abn0000434

Journal of Abnormal Psychology (ISSN 0021-843X), American Psychological Association. PubMed ID: 31318246. Scopus ID: 85069633200.
