Signal denoising is closely related to function estimation from noisy samples. A similar problem is also addressed in statistics (nonlinear regression) and in neural network learning. Vapnik-Chervonenkis (VC) theory provides a general framework for estimating dependencies from finite samples. This theory emphasizes model complexity control according to the Structural Risk Minimization (SRM) inductive principle, which considers a nested set of models of increasing complexity (called a structure) and then selects the model complexity that minimizes the error on future samples. Cherkassky and Shao recently applied VC theory to signal denoising and estimation. This paper extends the original VC-based signal denoising to practical settings where a (noisy) signal is oversampled. We show that in such settings the analytical VC bounds must be modified for optimal signal denoising. We also present empirical comparisons between the proposed methodology and standard VC-based denoising for univariate signals. These comparisons indicate that the proposed methodology yields superior estimation accuracy and a more compact signal representation for various univariate signals.
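The SRM procedure sketched in the abstract can be illustrated concretely. The sketch below is a minimal illustration, not the authors' exact method: it assumes an orthonormal DCT basis, a structure that orders transform coefficients by decreasing magnitude, and Vapnik's penalization factor with degrees of freedom as a stand-in for VC dimension (the exact bound and its constants in the paper may differ).

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; rows are basis vectors, so B @ B.T = I.
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # sample index
    B = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    B[0] /= np.sqrt(2)          # normalize the DC row
    return B

def vc_risk(emp_risk, dof, n):
    # Vapnik-style penalized risk with p = dof/n.
    # Hedged: the specific bound used by Cherkassky and Shao may differ.
    p = dof / n
    denom = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2 * n))
    return np.inf if denom <= 0 else emp_risk / denom

def vc_denoise(y):
    # SRM over a nested structure: models keeping the m largest
    # DCT coefficients, m = 1, 2, ...; pick m minimizing penalized risk.
    n = len(y)
    B = dct_matrix(n)
    c = B @ y
    order = np.argsort(-np.abs(c))  # coefficients by decreasing magnitude
    best, best_risk = y.copy(), np.inf
    for m in range(1, n):
        mask = np.zeros(n)
        mask[order[:m]] = 1.0
        yhat = B.T @ (c * mask)           # reconstruct from m coefficients
        emp = np.mean((y - yhat) ** 2)    # empirical risk
        r = vc_risk(emp, m, n)
        if r < best_risk:
            best_risk, best = r, yhat
    return best

# Usage on a synthetic oversampled signal:
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.standard_normal(128)
est = vc_denoise(noisy)
```

The structure here (ordering coefficients by magnitude) is one common choice for signal denoising; the penalization term grows with the number of retained coefficients, so the selected model trades goodness of fit against complexity.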
|Original language|English (US)|
|Number of pages|6|
|State|Published - Jan 1 2001|
|Event|International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States|
|Duration|Jul 15 2001 → Jul 19 2001|