Nonlinear dimensionality reduction for discriminative analytics of multiple datasets

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Principal component analysis (PCA) is widely used for feature extraction and dimensionality reduction, with documented merits in diverse tasks involving high-dimensional data. PCA copes with one dataset at a time, but it is challenged when it comes to analyzing multiple datasets jointly. In certain data science settings, however, one is often interested in extracting the most discriminative information from one dataset of particular interest (a.k.a. target data) relative to the other(s) (a.k.a. background data). To this end, this paper puts forth a novel approach, termed discriminative (d) PCA, for such discriminative analytics of multiple datasets. Under certain conditions, dPCA is proved to be least-squares optimal in recovering the latent subspace vector unique to the target data relative to background data. To account for nonlinear data correlations, (linear) dPCA models for one or multiple background datasets are generalized through kernel-based learning. Interestingly, all dPCA variants admit an analytical solution obtainable with a single (generalized) eigenvalue decomposition. Finally, substantial dimensionality reduction tests using synthetic and real datasets are provided to corroborate the merits of the proposed methods.
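The linear variant described in the abstract (directions that are maximally variant in the target data relative to the background data, found with a single generalized eigenvalue decomposition) can be sketched as follows. This is an illustrative sketch only: the function name `dpca`, the ridge regularizer on the background covariance, and the variance-ratio formulation are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def dpca(X_target, X_background, k=1, reg=1e-6):
    """Sketch of linear discriminative PCA: find directions maximizing
    target-data variance relative to background-data variance."""
    # Center each dataset separately
    Xt = X_target - X_target.mean(axis=0)
    Xb = X_background - X_background.mean(axis=0)
    # Sample covariance matrices
    Ct = Xt.T @ Xt / Xt.shape[0]
    Cb = Xb.T @ Xb / Xb.shape[0]
    # Small ridge keeps the background covariance positive definite
    Cb = Cb + reg * np.eye(Cb.shape[0])
    # Generalized symmetric eigenproblem: Ct u = lambda Cb u
    # (eigh returns eigenvalues in ascending order)
    _, V = eigh(Ct, Cb)
    U = V[:, ::-1][:, :k]   # top-k discriminative directions
    return Xt @ U           # low-dimensional target representation
```

As the abstract notes, this analytical route contrasts with iterative alternatives: one symmetric-definite generalized eigendecomposition yields the subspace in closed form.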

Original language: English (US)
Article number: 8565879
Pages (from-to): 740-752
Number of pages: 13
Journal: IEEE Transactions on Signal Processing
Volume: 67
Issue number: 3
DOI: 10.1109/TSP.2018.2885478
State: Published - Feb 1 2019

Keywords

  • Principal component analysis
  • discriminative analytics
  • kernel learning
  • multiple background datasets

Cite this

@article{e34ec13b5a5a423ea11c242ea1c3583b,
title = "Nonlinear dimensionality reduction for discriminative analytics of multiple datasets",
abstract = "Principal component analysis (PCA) is widely used for feature extraction and dimensionality reduction, with documented merits in diverse tasks involving high-dimensional data. PCA copes with one dataset at a time, but it is challenged when it comes to analyzing multiple datasets jointly. In certain data science settings however, one is often interested in extracting the most discriminative information from one dataset of particular interest (a.k.a. target data) relative to the other(s) (a.k.a. background data). To this end, this paper puts forth a novel approach, termed discriminative (d) PCA, for such discriminative analytics of multiple datasets. Under certain conditions, dPCA is proved to be least-squares optimal in recovering the latent subspace vector unique to the target data relative to background data. To account for nonlinear data correlations, (linear) dPCA models for one or multiple background datasets are generalized through kernel-based learning. Interestingly, all dPCA variants admit an analytical solution obtainable with a single (generalized) eigenvalue decomposition. Finally, substantial dimensionality reduction tests using synthetic and real datasets are provided to corroborate the merits of the proposed methods.",
keywords = "Principal component analysis, discriminative analytics, kernel learning, multiple background datasets",
author = "Jia Chen and Gang Wang and Giannakis, {Georgios B}",
year = "2019",
month = feb,
day = "1",
doi = "10.1109/TSP.2018.2885478",
language = "English (US)",
volume = "67",
pages = "740--752",
journal = "IEEE Transactions on Signal Processing",
issn = "1053-587X",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "3",

}

TY  - JOUR
T1  - Nonlinear dimensionality reduction for discriminative analytics of multiple datasets
AU  - Chen, Jia
AU  - Wang, Gang
AU  - Giannakis, Georgios B
PY  - 2019/2/1
Y1  - 2019/2/1
N2  - Principal component analysis (PCA) is widely used for feature extraction and dimensionality reduction, with documented merits in diverse tasks involving high-dimensional data. PCA copes with one dataset at a time, but it is challenged when it comes to analyzing multiple datasets jointly. In certain data science settings however, one is often interested in extracting the most discriminative information from one dataset of particular interest (a.k.a. target data) relative to the other(s) (a.k.a. background data). To this end, this paper puts forth a novel approach, termed discriminative (d) PCA, for such discriminative analytics of multiple datasets. Under certain conditions, dPCA is proved to be least-squares optimal in recovering the latent subspace vector unique to the target data relative to background data. To account for nonlinear data correlations, (linear) dPCA models for one or multiple background datasets are generalized through kernel-based learning. Interestingly, all dPCA variants admit an analytical solution obtainable with a single (generalized) eigenvalue decomposition. Finally, substantial dimensionality reduction tests using synthetic and real datasets are provided to corroborate the merits of the proposed methods.
AB  - Principal component analysis (PCA) is widely used for feature extraction and dimensionality reduction, with documented merits in diverse tasks involving high-dimensional data. PCA copes with one dataset at a time, but it is challenged when it comes to analyzing multiple datasets jointly. In certain data science settings however, one is often interested in extracting the most discriminative information from one dataset of particular interest (a.k.a. target data) relative to the other(s) (a.k.a. background data). To this end, this paper puts forth a novel approach, termed discriminative (d) PCA, for such discriminative analytics of multiple datasets. Under certain conditions, dPCA is proved to be least-squares optimal in recovering the latent subspace vector unique to the target data relative to background data. To account for nonlinear data correlations, (linear) dPCA models for one or multiple background datasets are generalized through kernel-based learning. Interestingly, all dPCA variants admit an analytical solution obtainable with a single (generalized) eigenvalue decomposition. Finally, substantial dimensionality reduction tests using synthetic and real datasets are provided to corroborate the merits of the proposed methods.
KW  - Principal component analysis
KW  - discriminative analytics
KW  - kernel learning
KW  - multiple background datasets
UR  - http://www.scopus.com/inward/record.url?scp=85051188014&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85051188014&partnerID=8YFLogxK
U2  - 10.1109/TSP.2018.2885478
DO  - 10.1109/TSP.2018.2885478
M3  - Article
VL  - 67
SP  - 740
EP  - 752
JO  - IEEE Transactions on Signal Processing
JF  - IEEE Transactions on Signal Processing
SN  - 1053-587X
IS  - 3
M1  - 8565879
ER  -