Abstract
To keep pace with the exploding data volume arising from geographically distributed edge networks, more and more edge servers have been built in recent years. Because computing power and storage capacity differ across servers, requests often have to be transferred from one server to another before finally being responded to and returned to users. Such server-to-server transmission naturally introduces non-negligible latency, which inevitably degrades the quality of service (QoS). To eliminate this transmission latency, Internet Service Providers (ISPs) are building or renting more edge servers (both computing servers and storage servers) to shorten transmission distances and upgrade server configurations, which incurs great cost. Fortunately, through extensive analysis of real traces, we found that it is possible to reduce the number of servers while maintaining QoS. In this paper, we first disclose three key characteristics from traces of Kuaishou Company: (1) unbalanced request frequencies on different servers; (2) imprecise latency measurement of server-to-server transmission; (3) nonlinear latency reduction with increasing server number. Based on these findings, we propose Frend, a frequency-aware edge storage server deployment strategy built on an improved Genetic Algorithm, which optimizes the number of edge storage servers using the internal diffusion capability, a new latency measure called S2SL. Through a series of experiments on real application data, we demonstrate that, while achieving the same S2SL, Frend reduces the number of required edge storage servers by up to 56% compared with the state-of-the-art Anveshak method.
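The abstract frames server deployment as minimizing server count subject to a latency bound (S2SL) via a Genetic Algorithm. The paper's actual Frend algorithm and trace data are not reproduced here; the following is a minimal illustrative GA sketch over invented toy data, where `s2sl` is a stand-in frequency-weighted latency metric and all site counts, latencies, frequencies, and the budget are assumptions for illustration only.

```python
import random

random.seed(0)

# Toy problem: 10 candidate sites; lat[i][j] is server-to-server latency,
# freq[i] is the request frequency observed at site i (all values invented
# for illustration -- the paper's real traces are not reproduced here).
N = 10
lat = [[abs(i - j) * 2.0 for j in range(N)] for i in range(N)]
freq = [random.randint(1, 100) for _ in range(N)]

def s2sl(deploy):
    """Frequency-weighted latency from each site to its nearest deployed
    server (a simplified stand-in for the paper's S2SL metric)."""
    if not any(deploy):
        return float("inf")
    total = sum(freq)
    return sum(freq[i] * min(lat[i][j] for j in range(N) if deploy[j])
               for i in range(N)) / total

def fitness(deploy, budget=6.0):
    # Minimize server count; heavily penalize exceeding the latency budget.
    penalty = 1000.0 if s2sl(deploy) > budget else 0.0
    return sum(deploy) + penalty

def evolve(pop_size=30, gens=80):
    # Chromosome: bit vector, 1 = deploy a storage server at that site.
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N)       # one-point crossover
            child = a[:cut] + b[cut:]
            k = random.randrange(N)            # single-bit mutation
            child[k] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("servers deployed:", sum(best), " S2SL:", round(s2sl(best), 2))
```

The penalty term encodes the "same S2SL, fewer servers" trade-off from the abstract: any deployment over the latency budget is dominated by any feasible one, so the GA converges toward the smallest feasible server set.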
Original language | English (US) |
---|---|
Title of host publication | 2021 IEEE 23rd International Conference on High Performance Computing and Communications, 7th International Conference on Data Science and Systems, 19th International Conference on Smart City and 7th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 107-114 |
Number of pages | 8 |
ISBN (Electronic) | 9781665494571 |
DOIs | |
State | Published - 2022 |
Event | 23rd IEEE International Conference on High Performance Computing and Communications, 7th IEEE International Conference on Data Science and Systems, 19th IEEE International Conference on Smart City and 7th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021 - Haikou, Hainan, China Duration: Dec 20 2021 → Dec 22 2021 |
Publication series
Name | 2021 IEEE 23rd International Conference on High Performance Computing and Communications, 7th International Conference on Data Science and Systems, 19th International Conference on Smart City and 7th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021 |
---|
Conference
Conference | 23rd IEEE International Conference on High Performance Computing and Communications, 7th IEEE International Conference on Data Science and Systems, 19th IEEE International Conference on Smart City and 7th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC-DSS-SmartCity-DependSys 2021 |
---|---|
Country/Territory | China |
City | Haikou, Hainan |
Period | 12/20/21 → 12/22/21 |
Bibliographical note
Funding Information: The work was supported in part by the National Natural Science Foundation of China (NSFC) Youth Science Foundation under Grant 61802024, BUPT-Chuangcache Joint Laboratory under B2020009, the Key Project of Beijing Natural Science Foundation under M21030, the National Key R&D Program of China under Grant 2019YFB1802603, and the NSFC under Grant 62072047. The work of Pengmiao Li was supported in part by the BUPT Excellent Ph.D. Students Foundation under CX2019134.
Publisher Copyright:
© 2021 IEEE.
Keywords
- Edge Computing
- Edge Storage Server
- Edge Computing Server
- Frequency Latency
- Server Deployment