Abstract
In this paper, we advocate the notion of 'BIG' cache as an innovative abstraction for effectively utilizing the distributed storage and processing capacities of all servers in a cache network. The 'BIG' cache abstraction is proposed to partly address the problem of (cascade) thrashing in a hierarchical network of cache servers, where cache resources at intermediate servers are known to be poorly utilized, especially under classical cache replacement policies such as LRU. We lay out the advantages of the 'BIG' cache abstraction and make a strong case for it, both from a theoretical standpoint and through simulation analysis. We also develop the dCLIMB cache algorithm to minimize the overhead of moving objects across distributed cache boundaries, and present a simple yet effective heuristic for the cache allotment problem that arises in the design of the 'BIG' cache abstraction.
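The 'BIG' cache and dCLIMB mechanisms themselves are described in the full paper; as background for the thrashing observation mentioned above, the sketch below (not the authors' code) simulates a two-level hierarchy under plain LRU, where the intermediate (parent) cache sees only the edge cache's misses and therefore attains a much lower hit ratio. The names `LRUCache` and `zipf_requests`, the capacities, and the Zipf-like popularity model are illustrative assumptions.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used object when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.requests = 0

    def access(self, obj):
        """Return True on a hit; on a miss, insert obj (evicting the LRU entry if needed)."""
        self.requests += 1
        if obj in self.store:
            self.store.move_to_end(obj)      # refresh recency on a hit
            self.hits += 1
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        self.store[obj] = True
        return False

def zipf_requests(n_objects, n_requests, alpha=0.8, seed=1):
    """Draw object ids from a truncated Zipf-like popularity distribution (assumed workload)."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** alpha for i in range(n_objects)]
    return rng.choices(range(n_objects), weights=weights, k=n_requests)

# Two-level hierarchy: an edge cache backed by an intermediate (parent) cache.
edge, parent = LRUCache(capacity=50), LRUCache(capacity=50)
for obj in zipf_requests(n_objects=2000, n_requests=100_000):
    if not edge.access(obj):   # edge miss ...
        parent.access(obj)     # ... is the only traffic the parent ever sees

print(f"edge hit ratio:   {edge.hits / edge.requests:.3f}")
print(f"parent hit ratio: {parent.hits / parent.requests:.3f}")
```

Because the edge cache filters out requests for the most popular objects, the parent's request stream has little locality left, which is the kind of poor utilization at intermediate servers that the 'BIG' cache abstraction targets.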
Original language | English (US) |
---|---|
Title of host publication | Proceedings - IEEE 37th International Conference on Distributed Computing Systems, ICDCS 2017 |
Editors | Kisung Lee, Ling Liu |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 742-752 |
Number of pages | 11 |
ISBN (Electronic) | 9781538617915 |
DOIs | |
State | Published - Jul 13 2017 |
Event | 37th IEEE International Conference on Distributed Computing Systems, ICDCS 2017 - Atlanta, United States; Duration: Jun 5 2017 → Jun 8 2017 |
Publication series
Name | Proceedings - International Conference on Distributed Computing Systems |
---|---|
Other
Other | 37th IEEE International Conference on Distributed Computing Systems, ICDCS 2017 |
---|---|
Country/Territory | United States |
City | Atlanta |
Period | 6/5/17 → 6/8/17 |
Bibliographical note
Funding Information: This research was supported in part by NSF grants CNS-1411636, CNS-1618339, and CNS-1617729, and by a Huawei gift.
Keywords
- BIG Cache
- Cache Replacement Policies
- Caching
- Content Distribution Network
- dCLIMB
- Hierarchical Caching