Learning Discriminative αβ-Divergences for Positive Definite Matrices

A. Cherian, P. Stanitsas, M. Harandi, V. Morellas, N. P. Papanikolopoulos

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

4 Citations (Scopus)

Abstract

Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence, etc.; however, their behaviors may be application dependent, raising the need for manual selection to achieve the best possible performance. Further, owing to their high computational cost on large-scale problems, computing pairwise similarities via clever embeddings of SPD matrices is often preferred to direct use of the aforementioned measures. In this paper, we propose a discriminative metric learning framework, Information Divergence and Dictionary Learning (IDDL), that not only learns application-specific measures on SPD matrices automatically, but also embeds them as vectors using a learned dictionary. To learn the similarity measures (which could potentially be distinct for every dictionary atom), we use the recently introduced αβ-logdet divergence, which is known to unify the measures listed above. We propose a novel IDDL objective that learns the parameters of the divergence and the dictionary atoms jointly in a discriminative setup and is solved efficiently using Riemannian optimization. We showcase extensive experiments on eight computer vision datasets, demonstrating state-of-the-art performance.
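The αβ-logdet divergence underlying IDDL can be sketched numerically. The following minimal implementation assumes the standard definition of the divergence (due to Cichocki, Cruces, and Amari), which depends on the SPD pair (P, Q) only through the generalized eigenvalues of P with respect to Q, and is restricted here to the branch α > 0, β > 0 (the other branches require limit formulas); the function name and the random sanity-check matrices are illustrative, not from the paper:

```python
import numpy as np

def ab_logdet_div(P, Q, alpha, beta):
    """Alpha-beta log-det divergence D^(alpha,beta)(P || Q) between SPD
    matrices, following Cichocki, Cruces & Amari's definition.
    Assumes alpha > 0 and beta > 0; other parameter branches need limits."""
    # The divergence depends only on the eigenvalues of P Q^{-1}; whitening
    # by the Cholesky factor of Q keeps the eigenproblem symmetric and stable.
    L = np.linalg.cholesky(Q)
    Linv = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Linv @ P @ Linv.T)  # spectrum of P Q^{-1}
    terms = np.log((alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta))
    return terms.sum() / (alpha * beta)

# Quick sanity check on two random SPD matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)
Q = B @ B.T + 4 * np.eye(4)

print(ab_logdet_div(P, P, 0.5, 0.5))      # ~0 for identical inputs
print(ab_logdet_div(P, Q, 0.5, 0.5) > 0)  # strictly positive otherwise
```

In appropriate limits of (α, β) this family recovers, up to scaling, the measures named in the abstract (affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence), which is what lets IDDL interpolate between them per dictionary atom.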

Original language: English (US)
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4280-4289
Number of pages: 10
ISBN (Electronic): 9781538610329
DOIs: 10.1109/ICCV.2017.458
State: Published - Dec 22 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: Oct 22 2017 - Oct 29 2017

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2017-October
ISSN (Print): 1550-5499



Cite this

Cherian, A., Stanitsas, P., Harandi, M., Morellas, V., & Papanikolopoulos, N. P. (2017). Learning Discriminative αβ-Divergences for Positive Definite Matrices. In Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017 (pp. 4280-4289). [8237720] (Proceedings of the IEEE International Conference on Computer Vision; Vol. 2017-October). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCV.2017.458

