Deep Reinforcement Learning for Adaptive Caching in Hierarchical Content Delivery Networks

Alireza Sadeghi, Gang Wang, Georgios B. Giannakis

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Caching is envisioned to play a critical role in next-generation content delivery infrastructure, cellular networks, and Internet architectures. By smartly storing the most popular content at storage-enabled network entities during off-peak demand periods, caching benefits both the network infrastructure and end users during peak periods. In this context, distributing the limited storage capacity across network entities calls for decentralized caching schemes. Many practical caching systems involve a parent caching node connected to multiple leaf nodes that serve user file requests. To model the two-way interactive influence between caching decisions at the parent and leaf nodes, a reinforcement learning (RL) framework is put forth. To handle the large continuous state space, a scalable deep RL approach is pursued. The novel approach relies on a hyper-deep Q-network to learn the Q-function, and thus the optimal caching policy, in an online fashion. Endowing the parent node with the ability to learn and adapt to the unknown policies of the leaf nodes, as well as to the spatio-temporal dynamics of file requests, yields remarkable caching performance, as corroborated through numerical tests.
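
The abstract's central mechanism, learning a Q-function online so that caching decisions track request dynamics, can be illustrated with a minimal sketch. The code below is not the authors' hyper-deep Q-network: it uses a plain tabular Q-function for a single caching node, and the file count, cache size, shifting popularity model, and hit-count reward are all hypothetical stand-ins chosen for brevity.

    import itertools
    import random
    from collections import defaultdict

    # Toy setup (hypothetical numbers, not from the paper):
    # 5 files, a single cache that holds 2 of them per time slot.
    N_FILES, CACHE_SIZE = 5, 2
    ACTIONS = list(itertools.combinations(range(N_FILES), CACHE_SIZE))

    def draw_requests(t, n_users=20):
        """Draw user requests from an assumed Zipf-like popularity profile
        whose ranking shifts every 50 slots, mimicking request dynamics."""
        rank = [(f + t // 50) % N_FILES for f in range(N_FILES)]
        weights = [1.0 / (rank[f] + 1) for f in range(N_FILES)]
        return random.choices(range(N_FILES), weights=weights, k=n_users)

    Q = defaultdict(float)   # tabular stand-in for a learned Q-network
    alpha, gamma, eps = 0.1, 0.9, 0.1
    state = ACTIONS[0]       # state = current cache contents

    for t in range(5000):
        # epsilon-greedy choice of next slot's cache contents
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward = sum(f in action for f in draw_requests(t))  # cache hits
        next_state = action
        # one-step Q-learning update
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

    print("preferred cache contents:", max(ACTIONS, key=lambda a: Q[(state, a)]))

In the paper's setting the state space is large and continuous, since it couples parent and leaf caching dynamics, which is why a deep network replaces the table above; the update, however, has the same one-step Q-learning form.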

Original language: English (US)
Article number: 8807260
Pages (from-to): 1024-1033
Number of pages: 10
Journal: IEEE Transactions on Cognitive Communications and Networking
Volume: 5
Issue number: 4
DOI: 10.1109/TCCN.2019.2936193
State: Published - Dec 2019

Keywords

  • Caching
  • deep Q-network
  • deep RL
  • function approximation
  • next-generation networks

Cite this

Deep Reinforcement Learning for Adaptive Caching in Hierarchical Content Delivery Networks. / Sadeghi, Alireza; Wang, Gang; Giannakis, Georgios B.

In: IEEE Transactions on Cognitive Communications and Networking, Vol. 5, No. 4, 8807260, 12.2019, p. 1024-1033.

@article{299bcb59f6144c1e92ecc5b0f54d6a4c,
title = "Deep Reinforcement Learning for Adaptive Caching in Hierarchical Content Delivery Networks",
keywords = "Caching, deep Q-network, deep RL, function approximation, next-generation networks",
author = "Sadeghi, Alireza and Wang, Gang and Giannakis, {Georgios B.}",
year = "2019",
month = "12",
doi = "10.1109/TCCN.2019.2936193",
language = "English (US)",
volume = "5",
pages = "1024--1033",
journal = "IEEE Transactions on Cognitive Communications and Networking",
issn = "2332-7731",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "4",

}
