Reinforcement Learning for Adaptive Caching with Dynamic Storage Pricing

Alireza Sadeghi, Fatemeh Sheikholeslami, Antonio G. Marques, Georgios B. Giannakis

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Small base stations (SBs) of fifth-generation (5G) cellular networks are envisioned to have storage devices to locally serve requests for reusable and popular contents by caching them at the edge of the network, close to the end users. The ultimate goal is to smartly utilize a limited storage capacity to serve frequently requested contents locally instead of fetching them from the cloud, contributing to better overall network performance and service experience. To equip the SBs with efficient fetch-cache decision-making schemes operating in dynamic settings, this paper introduces simple but flexible generic time-varying fetching and caching costs, which are then used to formulate a constrained minimization of the aggregate cost across files and time. Since caching decisions per time slot influence the content availability in future slots, the novel formulation for optimal fetch-cache decisions falls into the class of dynamic programming. Under this generic formulation, first by considering stationary distributions for the costs as well as file popularities, an efficient reinforcement learning-based solver known as the value iteration algorithm can be used to solve the emerging optimization problem. Later, it is shown that practical limitations on cache capacity can be handled using a particular instance of this generic dynamic pricing formulation. Under this setting, to provide a lightweight online solver for the corresponding optimization, the well-known reinforcement learning algorithm, Q-learning, is employed to find optimal fetch-cache decisions. Numerical tests corroborating the merits of the proposed approach wrap up the paper.
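To make the fetch-cache trade-off concrete, the following is a minimal illustrative sketch of tabular Q-learning for a single file, not the paper's actual formulation: all parameters (request probability, fetching and caching costs, learning rate, discount) are hypothetical, and the state is simply whether the file is currently cached while the action is whether to store it for the next slot.

```python
import random

random.seed(0)

# Hypothetical parameters (not from the paper): one file, Bernoulli requests.
P_REQ = 0.6      # probability the file is requested in a slot
C_FETCH = 1.0    # cost of fetching from the cloud on a cache miss
C_CACHE = 0.2    # per-slot storage price for keeping the file cached
GAMMA = 0.9      # discount factor
ALPHA = 0.05     # learning rate
EPS = 0.1        # epsilon-greedy exploration rate

# Q[s][a]: s = 1 if the file is currently cached, a = 1 if we cache it next slot
Q = [[0.0, 0.0], [0.0, 0.0]]

def step(s, a):
    """Simulate one time slot: return (incurred cost, next cache state)."""
    requested = random.random() < P_REQ
    cost = 0.0
    if requested and not s:
        cost += C_FETCH          # miss: fetch the file from the cloud
    if a:
        cost += C_CACHE          # pay the storage price for the next slot
    return cost, a               # next cache state equals the caching action

s = 0
for _ in range(50_000):
    # Epsilon-greedy action; greedy means minimizing cost-to-go
    a = random.randrange(2) if random.random() < EPS else min((0, 1), key=lambda x: Q[s][x])
    cost, s_next = step(s, a)
    # Q-learning update in cost-minimization form
    Q[s][a] += ALPHA * (cost + GAMMA * min(Q[s_next]) - Q[s][a])
    s = s_next

# With these numbers, the expected miss cost (0.6 per slot) exceeds the
# caching cost (0.2 per slot), so the learned policy should keep the file cached.
policy = [min((0, 1), key=lambda x: Q[state][x]) for state in (0, 1)]
print(policy)
```

Because caching today changes which costs can be avoided tomorrow, the greedy per-slot choice is not enough; the discounted cost-to-go in the update is what captures that coupling across slots.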

Original language: English (US)
Article number: 8790766
Pages (from-to): 2267-2281
Number of pages: 15
Journal: IEEE Journal on Selected Areas in Communications
Volume: 37
Issue number: 10
DOIs
State: Published - Oct 2019

Bibliographical note

Funding Information:
Manuscript received December 15, 2018; revised April 5, 2019; accepted May 20, 2019. Date of publication August 7, 2019; date of current version September 16, 2019. This work was supported in part by the U.S. NSF under Grants 1508993, 1514056, 1711471, and 1901134; in part by the Spanish MINECO OMICROM project under Grant TEC2013-41604-R; and in part by the URJC Mobility Program. This article was presented in part at ICASSP 2018, Calgary, Canada. (Corresponding author: Georgios B. Giannakis.) A. Sadeghi, F. Sheikholeslami, and G. B. Giannakis are with the Digital Technology Center and the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: sadeghi@umn.edu; sheik081@umn.edu; georgios@umn.edu).

Publisher Copyright:
© 1983-2012 IEEE.


Keywords

  • Dynamic caching
  • Q-learning
  • dynamic programming
  • fetching
  • value iteration
