Honest leave-one-out cross-validation for estimating post-tuning generalization error

Boxiang Wang, Hui Zou

Research output: Contribution to journal › Article › peer-review


Abstract

Many machine learning models have tuning parameters to be determined from the training data, and cross-validation (CV) is perhaps the most commonly used method for selecting them. This work concerns the problem of estimating the generalization error of a CV-tuned predictive model. We propose an honest leave-one-out cross-validation framework that produces a nearly unbiased estimator of the post-tuning generalization error. Using the kernel support vector machine and kernel logistic regression as examples, we demonstrate that honest leave-one-out cross-validation is very competitive, even against the state-of-the-art .632+ estimator.
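The abstract's key point is that tuning and error estimation must not share data: for each held-out observation, the tuning parameter is re-selected on the remaining n − 1 points before predicting the held-out point. The paper's exact procedure is not reproduced here; the following is a minimal sketch of that nested idea using a simple k-nearest-neighbour classifier (the `knn_predict`, `inner_cv_error`, and `honest_loocv_error` names and the toy data are illustrative assumptions, not the authors' code).

```python
import numpy as np

def knn_predict(X_tr, y_tr, x, k):
    """Majority-vote k-nearest-neighbour prediction for a single point."""
    d = np.sum((X_tr - x) ** 2, axis=1)
    nn = np.argsort(d)[:k]
    return int(np.round(y_tr[nn].mean()))

def inner_cv_error(X, y, k, n_folds=5, seed=0):
    """Misclassification rate of k-NN estimated by n_folds-fold CV."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errs = []
    for f in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[f] = False
        preds = [knn_predict(X[mask], y[mask], X[j], k) for j in f]
        errs.append(np.mean(np.array(preds) != y[f]))
    return float(np.mean(errs))

def honest_loocv_error(X, y, k_grid=(1, 3, 5, 7)):
    """Honest LOOCV estimate of the post-tuning generalization error:
    the tuning parameter k is re-selected on each leave-one-out sample,
    so the held-out point never influences its own tuning."""
    n = len(y)
    n_err = 0
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        X_i, y_i = X[mask], y[mask]
        # re-tune on the n - 1 remaining points (the "honest" step)
        best_k = min(k_grid, key=lambda k: inner_cv_error(X_i, y_i, k))
        n_err += int(knn_predict(X_i, y_i, X[i], best_k) != y[i])
    return n_err / n

# toy two-class data for illustration only
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.repeat([0, 1], 30)
print(honest_loocv_error(X, y))
```

A naive alternative would tune k once on the full data and then run LOOCV with that fixed k; because the tuning step has already seen every point, that estimate tends to be optimistically biased, which is precisely what the honest re-tuning inside each leave-one-out fit avoids.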

Original language: English (US)
Article number: e413
Journal: Stat
Volume: 10
Issue number: 1
DOIs
State: Published - Dec 2021

Bibliographical note

Funding Information:
We thank the referees for their helpful comments and suggestions. Zou's work is supported in part by NSF grants 1915‐842 and 2015‐120.

Publisher Copyright:
© 2021 John Wiley & Sons, Ltd.

