Motivated by ever-increasing demands on limited communication bandwidth and for low power consumption, we propose a new methodology, joint Variational Autoencoders with Bernoulli mixture models (VAB), for clustering in the compressed data domain. The idea is to reduce the data dimension with Variational Autoencoders (VAEs) and to group the resulting representations with Bernoulli mixture models (BMMs). Once jointly trained for compression and clustering, the model can be decomposed into two parts: a data vendor that encodes the raw data into compressed form, and a data consumer that classifies the received (compressed) data. In this way, the data vendor benefits from data security and reduced communication bandwidth, while the data consumer benefits from low computational complexity. To enable training with gradient descent, we use the Gumbel-Softmax distribution to resolve the infeasibility of back-propagation through categorical samples.
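The Gumbel-Softmax reparameterization mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general trick (a differentiable relaxation of categorical sampling), not the authors' implementation; the function name, the temperature parameter `tau`, and the 3-cluster example are assumptions for illustration only.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw one relaxed categorical sample via the Gumbel-Softmax trick.

    Adds i.i.d. Gumbel(0, 1) noise to the logits and applies a
    temperature-scaled softmax, yielding a differentiable probability
    vector that approaches a one-hot sample as tau -> 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via inverse transform: g = -log(-log(U))
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    z = (logits + g) / tau
    z = z - z.max()          # subtract max for numerical stability
    y = np.exp(z)
    return y / y.sum()       # softmax: entries in (0, 1), summing to 1

# Example: a relaxed assignment over 3 hypothetical clusters
probs = np.array([0.7, 0.2, 0.1])
y = gumbel_softmax(np.log(probs), tau=0.5)
```

Because `y` is a smooth function of the logits, gradients can flow through the sampling step, which is what makes joint gradient-descent training of the compression and clustering components feasible.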
- Original language: English (US)
- Title of host publication: Proceedings - DCC 2020
- Subtitle of host publication: Data Compression Conference
- Editors: Ali Bilgin, Michael W. Marcellin, Joan Serra-Sagrista, James A. Storer
- Publisher: Institute of Electrical and Electronics Engineers Inc.
- Number of pages: 1
- State: Published - Mar 2020
- Series: Data Compression Conference Proceedings
- Conference: 2020 Data Compression Conference, DCC 2020 - Snowbird, United States
- Period: Mar 24, 2020 → Mar 27, 2020
Bibliographical note (Funding Information):
This work was mostly done when Suya Wu was a student at the University of Minnesota. She is now with Duke University. This work was supported in part by Office of Naval Research Grant No. N00014-18-1-2244.
© 2020 IEEE.
- Bernoulli mixture model (BMM)
- Variational autoencoder (VAE)