Abstract
Learning curves are presented as an unbiased means for evaluating the performance of models for neuroimaging data analysis. The learning curve measures predictive performance, in terms of the generalization or prediction error, as a function of the number of independent examples (e.g., subjects) used to determine the parameters of the model. Cross-validation resampling is used to obtain unbiased estimates of the prediction error of a generic multivariate Gaussian classifier for training set sizes from 2 to 16 subjects. We apply the framework to four different activation experiments, in this case [15O]water PET data sets, although the framework is equally valid for multisubject fMRI studies. We demonstrate how the prediction error can be expressed as the mutual information between the scan and the scan label, measured in units of bits. The mutual information learning curve can be used to evaluate the impact of different methodological choices, e.g., the classification label scheme or preprocessing choices. Another application of the learning curve is to examine model performance using bias/variance considerations, enabling the researcher to determine whether the model performance is limited by statistical bias or by variance. We furthermore present the sensitivity map as a general method for extracting activation maps from statistical models within the probabilistic framework, and we illustrate relationships between mutual information and pattern reproducibility as derived in the NPAIRS framework described in a companion paper.
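To make the cross-validated learning-curve procedure concrete, the following sketch (our illustration, not code from the paper) estimates the prediction error in bits for training set sizes of 2 to 16 subjects per class and converts it to mutual information via the relation I(scan; label) = H(label) − ⟨−log2 p(label | scan)⟩. The synthetic Gaussian "scans", the shrinkage LDA standing in for the paper's generic multivariate Gaussian classifier, and the 50 resampling repetitions are all assumptions made for illustration.

```python
# A minimal sketch, assuming synthetic data: a cross-validated learning
# curve for a Gaussian classifier, with the prediction error reported in
# bits and converted to mutual information as I = H(label) - <cross-entropy>.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for scans: 2 conditions x 32 subjects, 10 features
# (e.g., principal-component scores of the images).
n_per_class, n_features = 32, 10
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
               rng.normal(0.7, 1.0, (n_per_class, n_features))])
y = np.repeat([0, 1], n_per_class)

H_label = 1.0  # H(label) in bits for two equiprobable conditions

for n_train in range(2, 17):  # training set sizes: 2..16 subjects per class
    errs = []
    for _ in range(50):       # cross-validation resampling repetitions
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=2 * n_train, stratify=y)
        # Shrinkage LDA: a multivariate Gaussian classifier with a shared,
        # regularized covariance, usable even at very small training sizes.
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        clf.fit(X_tr, y_tr)
        # Posterior probability assigned to the true label of each test scan
        p_true = clf.predict_proba(X_te)[np.arange(len(y_te)), y_te]
        errs.append(-np.log2(np.clip(p_true, 1e-12, 1.0)).mean())
    pred_err = np.mean(errs)  # prediction error, bits per scan
    mi = H_label - pred_err   # estimated I(scan; label), bits
    print(f"N={n_train:2d}  error={pred_err:5.3f} bits  MI={mi:+5.3f} bits")
```

Qualitatively, the estimated mutual information should rise from near 0 bits at the smallest training sets toward H(label) as more subjects are added, which is the shape the learning curve is designed to quantify.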
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 772-786 |
| Number of pages | 15 |
| Journal | NeuroImage |
| Volume | 15 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2002 |
Bibliographical note
Funding Information: This work was supported by Human Brain Project Grant P20 MH57180 and the Danish Research Councils for the Natural and Technical Sciences through the THOR Center for Neuroinformatics. Finally, the authors thank the anonymous reviewers for many useful questions and suggestions.
Keywords
- Cross-validation
- Generalization error
- Learning curve
- Macroscopic and microscopic models
- Multisubject PET and fMRI studies
- Mutual information
- Prediction error
- Sensitivity map