Characterizing the shape of activation space in deep neural networks

Thomas Gebhart, Paul Schrater, Alan Hylton

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation

Abstract

The representations learned by deep neural networks are difficult to interpret in part due to their large parameter space and the complexities introduced by their multi-layer structure. We introduce a method for computing persistent homology over the graphical activation structure of neural networks, which provides access to the task-relevant substructures activated throughout the network for a given input. This topological perspective provides unique insights into the distributed representations encoded by neural networks in terms of the shape of their activation structures. We demonstrate the value of this approach by showing an alternative explanation for the existence of adversarial examples. By studying the topology of network activations across multiple architectures and datasets, we find that adversarial perturbations do not add activations that target the semantic structure of the adversarial class as previously hypothesized. Rather, adversarial examples are explainable as alterations to the dominant activation structures induced by the original image, suggesting the class representations learned by deep networks are problematically sparse on the input space.
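The abstract describes computing persistent homology over the graph of activations induced by an input. The paper's actual construction is not reproduced here; as a rough, hypothetical sketch of the general idea, 0-dimensional persistence over a weighted activation graph can be computed with a union-find over a superlevel-set filtration (edges processed from strongest activation to weakest). All names, weights, and the birth convention below are illustrative assumptions, not the authors' method.

```python
def activation_persistence(edges):
    """Sketch of 0-dimensional persistent homology on a weighted graph.

    edges: iterable of (u, v, w), where w is an (illustrative) activation
    strength on the edge between units u and v. We use a superlevel-set
    filtration: a node is "born" at the largest weight of any edge touching
    it, and components merge as the threshold decreases. Returns (birth,
    death) pairs for the components that die (elder rule); the one
    surviving component (infinite bar) is not reported.
    """
    # Birth of each node = maximum incident edge weight (assumed convention).
    birth = {}
    for u, v, w in edges:
        birth[u] = max(birth.get(u, w), w)
        birth[v] = max(birth.get(v, w), w)

    parent = {n: n for n in birth}

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    # Sweep edges from strongest activation to weakest.
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge creates a cycle; no 0-dim merge event
        # Elder rule: the component with the smaller (later) birth dies at w.
        if birth[ru] < birth[rv]:
            ru, rv = rv, ru
        pairs.append((birth[rv], w))
        parent[rv] = ru
    return pairs
```

On this reading, long bars (large birth minus death) would correspond to the dominant activation substructures the abstract refers to, and an adversarial perturbation would show up as a change in which bars dominate rather than as new bars for the adversarial class.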

Original language: English (US)
Title of host publication: Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019
Editors: M. Arif Wani, Taghi M. Khoshgoftaar, Dingding Wang, Huanjing Wang, Naeem Seliya
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1537-1542
Number of pages: 6
ISBN (Electronic): 9781728145495
DOIs
State: Published - Dec 2019
Externally published: Yes
Event: 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019 - Boca Raton, United States
Duration: Dec 16 2019 - Dec 19 2019

Publication series

Name: Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019

Conference

Conference: 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019
Country: United States
City: Boca Raton
Period: 12/16/19 - 12/19/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

Keywords

  • Adversarial Examples
  • Deep Learning
  • Neural Networks
  • Persistent Homology
  • Topology
