Gradient information for representation and modeling

Jie Ding, Robert Calderbank, Vahid Tarokh

Research output: Contribution to journal › Conference article › peer-review


Abstract

Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information. These measures serve as surrogates for classical information measures such as those based on logarithmic loss, Kullback-Leibler divergence, directed Shannon information, etc. in many data-processing scenarios of interest, and often provide significant computational advantage, improved stability, and robustness. As an example, we apply these measures to the Chow-Liu tree algorithm, and demonstrate remarkable performance and significant computational reduction using both synthetic and real data.
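For reference, the Fisher divergence that motivates these quantities is standardly defined by comparing score functions rather than log-densities; for densities p and q on R^d it reads:

```latex
D_F(p \,\|\, q) = \mathbb{E}_{x \sim p}\!\left[ \big\| \nabla_x \log p(x) - \nabla_x \log q(x) \big\|_2^2 \right]
```

Because the scores ∇ log p and ∇ log q are unaffected by normalizing constants, quantities built on them can be cheaper and more stable to estimate than their logarithmic-loss counterparts, which is consistent with the computational advantages claimed in the abstract.

The Chow-Liu tree algorithm mentioned above classically builds a maximum-weight spanning tree over the variables, with empirical pairwise mutual information as edge weights. The sketch below shows only that classical baseline for discrete data; the paper's gradient-information variant would substitute its surrogate score for the mutual-information weights, and that score is not reproduced here. The function name `chow_liu_tree` and the choice of networkx and scikit-learn are illustrative, not from the paper.

```python
# Minimal sketch of the classical Chow-Liu tree baseline for discrete data.
# The paper's variant would swap the mutual-information edge weights for a
# gradient-information score; that score is not reproduced here.
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

def chow_liu_tree(X):
    """Return a maximum-weight spanning tree over the columns of X,
    with empirical pairwise mutual information as edge weights."""
    n_vars = X.shape[1]
    G = nx.Graph()
    G.add_nodes_from(range(n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            # Empirical mutual information between variables i and j.
            w = mutual_info_score(X[:, i], X[:, j])
            G.add_edge(i, j, weight=w)
    return nx.maximum_spanning_tree(G)

# Usage: three binary variables, where column 2 copies column 0.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
X[:, 2] = X[:, 0]
tree = chow_liu_tree(X)
print(sorted(tree.edges()))  # the edge (0, 2) carries the most information
```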

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: Dec 8, 2019 - Dec 14, 2019

Bibliographical note

Publisher Copyright:
© 2019 Neural Information Processing Systems Foundation. All rights reserved.
