Model selection confidence sets by likelihood ratio testing

Chao Zheng, Davide Ferrari, Yuhong Yang

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Model selection traditionally aims to discover a single model superior to all other candidates. In the presence of pronounced noise, however, multiple models often explain the same data equally well. To resolve this model selection ambiguity, we introduce the general approach of model selection confidence sets (MSCSs) based on likelihood ratio testing. An MSCS is defined as the list of models that are statistically indistinguishable from the true model at a user-specified confidence level, which extends the familiar notion of confidence intervals to the model-selection framework. Our approach guarantees asymptotically correct coverage probability of the true model as both the sample size and the model dimension increase. We derive conditions under which the MSCS contains all the relevant information about the true model structure. In addition, we propose natural statistics based on the MSCS to measure the importance of variables in a principled way that accounts for the overall model uncertainty. When the space of feasible models is large, the MSCS is implemented by an adaptive stochastic search algorithm that samples MSCS models with high probability. The MSCS methodology is illustrated through numerical experiments on synthetic and real data.
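The idea of a confidence set of models can be illustrated with a minimal sketch. This is not the authors' exact procedure, only an assumed simplification: for Gaussian linear regression, each candidate submodel is compared to the full model with a likelihood ratio test, and every submodel not rejected at the 95% level is retained in the set. All function names here are hypothetical.

```python
# Illustrative sketch of a likelihood-ratio-based model confidence set
# for Gaussian linear regression (an assumed simplification, not the
# paper's exact MSCS construction).
import itertools
import numpy as np

# 95% chi-square quantiles for small degrees of freedom
CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815}

def rss(X, y):
    """Residual sum of squares of an OLS fit (design matrix X includes intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def lrt_confidence_set(X, y):
    """Return the candidate submodels not rejected against the full model.

    For Gaussian errors, the LRT statistic between a nested submodel and
    the full model is n * log(RSS_sub / RSS_full), asymptotically
    chi-square with df equal to the number of dropped predictors.
    """
    n, p = X.shape
    full_rss = rss(np.column_stack([np.ones(n), X]), y)
    kept = []
    for k in range(1, p + 1):
        for subset in itertools.combinations(range(p), k):
            Xs = np.column_stack([np.ones(n), X[:, subset]])
            stat = n * np.log(rss(Xs, y) / full_rss)
            df = p - k
            if df == 0 or stat <= CHI2_95[df]:
                kept.append(subset)
    return kept

# Toy data: predictors x0 and x1 matter, x2 is irrelevant noise.
rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)
confset = lrt_confidence_set(X, y)
print(confset)
```

With strong signals on the first two predictors, every retained submodel contains them, and the set typically also contains the true two-variable model alongside the full model, showing how several models can be statistically indistinguishable at a given confidence level.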

Original language: English (US)
Pages (from-to): 827-851
Number of pages: 25
Journal: Statistica Sinica
Volume: 29
Issue number: 2
DOIs: 10.5705/ss.202017.0006
State: Published - Jan 1 2019


Keywords

  • Adaptive sampling
  • Likelihood ratio test
  • Model selection confidence set
  • Optimal detectability condition

Cite this

Model selection confidence sets by likelihood ratio testing. / Zheng, Chao; Ferrari, Davide; Yang, Yuhong.

In: Statistica Sinica, Vol. 29, No. 2, 01.01.2019, p. 827-851.

@article{2dfeb25992a9445abd9be8350e7be132,
title = "Model selection confidence sets by likelihood ratio testing",
keywords = "Adaptive sampling, Likelihood ratio test, Model selection confidence set, Optimal detectability condition",
author = "Chao Zheng and Davide Ferrari and Yuhong Yang",
year = "2019",
month = "1",
day = "1",
doi = "10.5705/ss.202017.0006",
language = "English (US)",
volume = "29",
pages = "827--851",
journal = "Statistica Sinica",
issn = "1017-0405",
publisher = "Institute of Statistical Science",
number = "2",

}

TY - JOUR

T1 - Model selection confidence sets by likelihood ratio testing

AU - Zheng, Chao

AU - Ferrari, Davide

AU - Yang, Yuhong

PY - 2019/1/1

Y1 - 2019/1/1


KW - Adaptive sampling

KW - Likelihood ratio test

KW - Model selection confidence set

KW - Optimal detectability condition

UR - http://www.scopus.com/inward/record.url?scp=85062091195&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85062091195&partnerID=8YFLogxK

U2 - 10.5705/ss.202017.0006

DO - 10.5705/ss.202017.0006

M3 - Article

AN - SCOPUS:85062091195

VL - 29

SP - 827

EP - 851

JO - Statistica Sinica

JF - Statistica Sinica

SN - 1017-0405

IS - 2

ER -