Spatial-temporal Data Compression of Dynamic Vision Sensor Output with High Pixel-level Saliency using Low-precision Sparse Autoencoder

Ahmed Hasssan, Jian Meng, Yu Cao, Jae Sun Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Imaging innovations such as the dynamic vision sensor (DVS) can significantly reduce image data volume by tracking only changes in events. However, when the DVS camera itself moves (e.g., on self-driving cars), the DVS output stream is not sparse enough to achieve the desired hardware efficiency. In this work, we investigate designing a compact sparse autoencoder model to heavily compress event-based DVS output. The proposed encoder-decoder autoencoder is a shallow convolutional neural network (CNN) architecture with two convolution and two inverse-convolution layers and only ~10k parameters. We apply quantization-aware training to our proposed model to achieve 2-bit and 4-bit precision. Moreover, we apply unstructured pruning to the encoder module to achieve >90% active-pixel compression in the latent space. The proposed autoencoder design has been validated on multiple DVS-based benchmark datasets, including DVS-MNIST, N-Cars, DVS-IBM Gesture, and the Prophesee Automotive Gen1 dataset. We achieve low accuracy drops of 2%, 3%, and 3.8% compared to the uncompressed baseline, with 7.08%, 1.36%, and 5.53% active pixels in the decoder output images (compression ratios of 13.1×, 29.1×, and 18.1×) for the DVS-MNIST, N-Cars, and DVS-IBM Gesture datasets, respectively. For the Prophesee Automotive Gen1 dataset, we achieve a minimal mAP drop of 0.07 from the baseline with 9% active pixels in the decoder output images (compression ratio of 11.9×).
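The abstract's "~10k parameters" figure for a two-convolution encoder plus two-inverse-convolution decoder can be sanity-checked with simple parameter arithmetic. The channel widths and 3×3 kernels below are illustrative assumptions, not taken from the paper:

```python
# Rough sanity check on the "~10k parameters" claim. The channel widths,
# kernel size, and 2-channel (ON/OFF polarity) input are assumptions for
# illustration -- the abstract only states two convolution and two
# inverse-convolution layers totaling roughly 10k parameters.

def conv2d_params(in_ch: int, out_ch: int, k: int = 3) -> int:
    """Weights + biases of a 2D (transposed) convolution layer."""
    return out_ch * in_ch * k * k + out_ch

# Hypothetical shallow encoder/decoder over 2-channel event frames.
encoder = conv2d_params(2, 16) + conv2d_params(16, 32)   # 304 + 4640
decoder = conv2d_params(32, 16) + conv2d_params(16, 2)   # 4624 + 290
total = encoder + decoder

print(total)  # 9858 -- on the order of 10k
```

A configuration in this range is consistent with the paper's claim that such a shallow CNN stays compact enough for efficient hardware deployment.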

Original language: English (US)
Title of host publication: 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022
Editors: Michael B. Matthews
Publisher: IEEE Computer Society
Pages: 344-348
Number of pages: 5
ISBN (Electronic): 9781665459068
DOIs
State: Published - 2022
Externally published: Yes
Event: 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022 - Virtual, Online, United States
Duration: Oct 31, 2022 – Nov 2, 2022

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
Volume: 2022-October
ISSN (Print): 1058-6393

Conference

Conference: 56th Asilomar Conference on Signals, Systems and Computers, ACSSC 2022
Country/Territory: United States
City: Virtual, Online
Period: 10/31/22 – 11/2/22

Bibliographical note

Publisher Copyright:
© 2022 IEEE.

Keywords

  • Dynamic vision sensor
  • Neural network pruning
  • Neural network training
  • Object recognition
  • Quantization
  • Sparse autoencoder
  • Detection
