Clustering Positive Definite Matrices by Learning Information Divergences

Panagiotis Stanitsas, Anoop Cherian, Vassilios Morellas, Nikolaos P Papanikolopoulos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Data representations based on Symmetric Positive Definite (SPD) matrices are gaining popularity in visual learning applications. When comparing SPD matrices, measures based on non-linear geometries often yield beneficial results. However, a manual selection process is commonly used to identify the appropriate measure for a visual learning application. In this paper, we study the problem of clustering SPD matrices while automatically learning a suitable measure. We propose a novel formulation that jointly (i) clusters the input SPD matrices in a K-Means setup and (ii) learns a suitable non-linear measure for comparing SPD matrices. For (ii), we capitalize on the recently introduced αβ-logdet divergence, which generalizes a family of popular similarity measures on SPD matrices. Our formulation is cast in a Riemannian optimization framework and solved using a conjugate gradient scheme. We present experiments on five computer vision datasets and demonstrate state-of-the-art performance.
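
Note: the record above does not spell out the divergence itself. For context, the αβ-logdet (Alpha-Beta log-det) divergence the abstract refers to is usually defined, following Cichocki, Cruces, and Amari, for SPD matrices X and Y with parameters α, β ≠ 0 and α + β ≠ 0 as

D_{AB}^{(\alpha,\beta)}(X \,\|\, Y) = \frac{1}{\alpha\beta} \log\det\!\left( \frac{\alpha\,(X Y^{-1})^{\beta} + \beta\,(X Y^{-1})^{-\alpha}}{\alpha + \beta} \right).

Particular choices of (α, β) recover familiar measures up to constant scaling: α = β = 1/2 gives the Stein (Jensen-Bregman LogDet) divergence, and the limit α, β → 0 gives the squared affine-invariant Riemannian metric. The paper's exact parameterization and normalization may differ from the form shown here.

The short NumPy/SciPy sketch below computes this divergence via generalized eigenvalues and performs a K-Means-style assignment step; the function names are illustrative, and the sketch does not reproduce the paper's joint Riemannian conjugate-gradient optimization of the centroids and (α, β).

import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(X, Y, alpha, beta):
    # Alpha-Beta log-det divergence between SPD matrices X and Y
    # (standard definition from the literature; assumes alpha, beta > 0).
    # The generalized eigenvalues of (X, Y) are the eigenvalues of X @ inv(Y),
    # so the log-determinant reduces to a sum over them.
    lam = eigh(X, Y, eigvals_only=True)
    vals = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(vals)) / (alpha * beta)

def assign_clusters(spd_mats, centroids, alpha, beta):
    # K-Means-style assignment: each SPD matrix goes to the nearest centroid
    # under the current divergence parameters (alpha, beta).
    return np.array([
        np.argmin([ab_logdet_divergence(X, C, alpha, beta) for C in centroids])
        for X in spd_mats
    ])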

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1304-1312
Number of pages: 9
Volume: 2018-January
ISBN (Electronic): 9781538610343
DOIs: 10.1109/ICCVW.2017.155
State: Published - Jan 19, 2018
Event: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017 - Venice, Italy
Duration: Oct 22, 2017 - Oct 29, 2017

Other

Other: 16th IEEE International Conference on Computer Vision Workshops, ICCVW 2017
Country: Italy
City: Venice
Period: 10/22/17 - 10/29/17


Cite this

Stanitsas, P., Cherian, A., Morellas, V., & Papanikolopoulos, N. P. (2018). Clustering Positive Definite Matrices by Learning Information Divergences. In Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017 (Vol. 2018-January, pp. 1304-1312). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCVW.2017.155
