Everything You Always Wanted to Know About Storage Compressibility of Pre-Trained ML Models but Were Afraid to Ask

Zhaoyuan Su, Ammar Ahmed, Zirui Wang, Ali Anwar, Yue Cheng

Research output: Contribution to journal › Conference article › peer-review

Abstract

As the number of pre-trained machine learning (ML) models grows exponentially, data reduction tools are not keeping pace. Existing data reduction techniques are not specifically designed for pre-trained model (PTM) dataset files, largely because the patterns and characteristics of these datasets, especially those relevant to data reduction and compressibility, are not well understood. This paper presents the first exhaustive analysis to date of the storage compressibility of PTM datasets. Our analysis spans different types of data reduction and compression techniques, ranging from hash-based data deduplication and data similarity detection to dictionary-coding compression, and explores these techniques at three levels of data granularity: model layers, model chunks, and model parameters. We draw new observations indicating that modern data reduction tools are not effective when handling PTM datasets, and that there is a pressing need for new compression methods that account for PTMs’ data characteristics to achieve effective storage reduction. Motivated by our findings, we design Elf, a simple yet effective, error-bounded, lossy floating-point compression method. Elf transforms floating-point parameters such that the common exponent field of the transformed parameters can be eliminated entirely to save storage space. We develop Elves, a compression framework that integrates Elf with several other data reduction methods and applies the most effective method to PTMs that exhibit different patterns. Evaluation shows that Elves achieves an overall compression ratio of 1.52×, which is 1.31×, 1.32×, and 1.29× higher than a general-purpose compressor (zstd), an error-bounded lossy compressor (SZ3), and uniform model quantization, respectively, with negligible model accuracy loss.
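
To illustrate the idea summarized above, the following is a minimal Python sketch of one way a shared-exponent transformation could work. It is not the paper's actual Elf algorithm: the parameter-range assumption (|p| < 0.5), the 1.5 offset, and the function names are hypothetical choices made here for illustration only.

    # Illustrative sketch only -- NOT the paper's actual Elf algorithm.
    # Idea from the abstract: transform float32 parameters so they all share one
    # IEEE-754 exponent, after which the exponent field need not be stored per value.
    # Assumptions (hypothetical): parameters lie in (-0.5, 0.5); the 1.5 offset and
    # the function names are illustrative, not taken from the paper.
    import numpy as np

    def pack_mantissas(params: np.ndarray) -> np.ndarray:
        """Shift parameters into [1.0, 2.0), where every float32 has exponent 127,
        and keep only the 23 mantissa bits (sign and exponent become redundant)."""
        shifted = params.astype(np.float32) + np.float32(1.5)   # (-0.5, 0.5) -> (1.0, 2.0)
        return shifted.view(np.uint32) & np.uint32(0x007FFFFF)  # mantissa bits only

    def unpack_mantissas(mantissas: np.ndarray) -> np.ndarray:
        """Re-attach the shared sign/exponent bits (positive, exponent 127) and undo the shift."""
        bits = mantissas.astype(np.uint32) | np.uint32(0x3F800000)
        return bits.view(np.float32) - np.float32(1.5)

    # Toy usage: the reconstruction error is bounded by the rounding of the shift,
    # i.e., at most 2^-24 in absolute value (error-bounded, lossy).
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=1024).astype(np.float32)    # stand-in for a model layer
    restored = unpack_mantissas(pack_mantissas(w))
    assert np.max(np.abs(restored - w)) <= 2.0 ** -24

In practice the 23-bit mantissas would be bit-packed and further compressed rather than kept in 32-bit words as above; the sketch only shows why a shared exponent makes the per-value exponent field removable.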

Original language: English (US)
Pages (from-to): 2036-2049
Number of pages: 14
Journal: Proceedings of the VLDB Endowment
Volume: 17
Issue number: 8
DOIs
State: Published - 2024
Event: 50th International Conference on Very Large Data Bases, VLDB 2024 - Guangzhou, China
Duration: Aug 24, 2024 – Aug 29, 2024

Bibliographical note

Publisher Copyright:
© 2024, VLDB Endowment. All rights reserved.
