On Statistical Efficiency in Learning

Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh

Research output: Contribution to journal › Article › peer-review

Abstract

A central issue in many statistical learning problems is selecting an appropriate model from a set of candidate models. For a given fixed dataset, large models tend to inflate variance (overfitting), while small models tend to introduce bias (underfitting). In this work, we address the critical challenge of model selection: striking a balance between model fit and model complexity so as to gain reliable predictive power. We consider the task of approaching the theoretical limit of statistical learning, meaning that the selected model achieves predictive performance as good as the best possible model in a class of potentially misspecified candidate models. We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method asymptotically achieves the optimal out-of-sample prediction loss under reasonable assumptions. To the best of our knowledge, this is the first proof of this asymptotic property of Takeuchi's information criterion. Our proof applies to a wide variety of nonlinear models, loss functions, and high-dimensional settings (in the sense that model complexity can grow with sample size). The proposed method can be used as a computationally efficient surrogate for leave-one-out cross-validation. Moreover, for modeling streaming data, we propose an online algorithm that sequentially expands model complexity to enhance selection stability and reduce computational cost. Experimental studies show that the proposed method has desirable predictive power and significantly lower computational cost than some popular methods.
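For orientation, the classic Takeuchi information criterion penalizes the in-sample loss by tr(J⁻¹V), where J is the average Hessian of the per-sample negative log-likelihood at the fitted parameter and V is the average outer product of the per-sample score; under a well-specified model the penalty reduces to the parameter count, recovering AIC. The sketch below is a minimal illustration of this classic TIC for a Gaussian linear model with the noise variance profiled out; it is not the paper's generalized criterion or the authors' code, and the function name, toy data, and variable names are illustrative assumptions.

```python
# Minimal TIC sketch (illustrative, not the paper's generalized criterion).
# TIC = 2 * NLL(theta_hat) + 2 * tr(J^{-1} V), with J the average Hessian of
# the per-sample negative log-likelihood and V the average outer product of
# the per-sample score, both evaluated at the fitted parameter.
import numpy as np

def tic_gaussian_linear(X, y):
    """TIC for y ~ N(X beta, sigma^2 I), with sigma^2 profiled out."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    # Negative log-likelihood at the MLE (up to additive constants).
    nll = 0.5 * n * np.log(sigma2) + 0.5 * n
    # J: average Hessian of the per-sample NLL with respect to beta.
    J = X.T @ X / (n * sigma2)
    # V: average outer product of the per-sample scores x_i * r_i / sigma^2.
    scores = X * (resid / sigma2)[:, None]        # n x p score matrix
    V = scores.T @ scores / n
    penalty = np.trace(np.linalg.solve(J, V))     # tr(J^{-1} V)
    return 2.0 * nll + 2.0 * penalty

# Toy usage: choose the nested model size that minimizes TIC.
rng = np.random.default_rng(0)
n, p_max = 500, 10
X_full = rng.standard_normal((n, p_max))
y = X_full[:, :3] @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)
tics = {p: tic_gaussian_linear(X_full[:, :p], y) for p in range(1, p_max + 1)}
print("TIC-selected model size:", min(tics, key=tics.get))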

Original language: English (US)
Article number: 9309260
Pages (from-to): 2488-2506
Number of pages: 19
Journal: IEEE Transactions on Information Theory
Volume: 67
Issue number: 4
State: Published - Apr 2021

Bibliographical note

Funding Information:
Manuscript received August 25, 2019; revised October 14, 2020; accepted December 8, 2020. Date of publication December 28, 2020; date of current version March 18, 2021. This work was supported in part by the Defense Advanced Research Projects Agency (DARPA) under Grant N66001-15-C-4028 and Grant W911NF-16-1-0561 and in part by the Army Research Office (ARO) under Grant W911NF-20-1-0222. This article was presented in part at the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing. (Corresponding author: Jie Ding.) Jie Ding is with the School of Statistics, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: dingj@umn.edu).

Publisher Copyright:
© 1963-2012 IEEE.

Keywords

  • Cross-validation
  • Takeuchi's information criterion
  • adaptivity to oracle
  • expert learning
  • model expansion
  • model selection
