Abstract
Many machine learning models have tuning parameters that must be determined from the training data, and cross-validation (CV) is perhaps the most commonly used method for selecting them. This work concerns the problem of estimating the generalization error of a CV-tuned predictive model. We propose an honest leave-one-out cross-validation framework that produces a nearly unbiased estimator of the post-tuning generalization error. Using the kernel support vector machine and kernel logistic regression as examples, we demonstrate that honest leave-one-out cross-validation is very competitive, even against the state-of-the-art .632+ estimator.
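The idea behind honest leave-one-out cross-validation can be illustrated with a nested scheme: for each held-out observation, the tuning parameter is re-selected by an inner CV on the remaining n-1 points before the error on the held-out point is recorded, so the estimate accounts for the tuning step itself. The sketch below is a minimal, generic illustration of this nesting (not the authors' exact estimator); the SVM kernel, the tuning grid for `C`, and the dataset are all assumptions for demonstration.

```python
# A minimal sketch of honest (nested) leave-one-out CV: the tuning
# parameter is re-chosen on every n-1 subsample, so the held-out error
# reflects the full tune-then-fit procedure.  Grid and data are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
param_grid = {"C": [0.1, 1.0, 10.0]}  # hypothetical tuning grid

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Inner CV tunes C on the n-1 training points only, then refits.
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X[train_idx], y[train_idx])
    pred = search.predict(X[test_idx])
    errors.append(int(pred[0] != y[test_idx][0]))

# Average 0/1 loss over all n held-out points.
honest_loo_error = float(np.mean(errors))
print(honest_loo_error)
```

Because tuning is repeated inside every leave-one-out fold, the resulting error rate estimates the performance of the CV-tuned procedure as a whole, rather than of one fixed model.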
| Original language | English (US) |
|---|---|
| Article number | e413 |
| Journal | Stat |
| Volume | 10 |
| Issue number | 1 |
| DOIs | |
| State | Published - Dec 2021 |
Bibliographical note
Publisher Copyright: © 2021 John Wiley & Sons, Ltd.