Bandit online learning with unknown delays

Bingcong Li, Tianyi Chen, Georgios B. Giannakis

Research output: Contribution to conference › Paper › peer-review

9 Scopus citations


This paper deals with bandit online learning where feedback arrives with unknown delays, in both the non-stochastic multi-armed bandit (MAB) and the bandit convex optimization (BCO) settings. In MAB and BCO, only values of the objective function are revealed through feedback, and these values are used to estimate the gradient appearing in the corresponding iterative algorithms. Since feedback with unknown delays prevents one from constructing the sought gradient estimates, existing MAB and BCO algorithms become intractable in this challenging case. To address this, delayed exploration, exploitation, and exponential (DEXP3) iterations, along with delayed bandit gradient descent (DBGD) iterations, are developed for MAB and BCO with unknown delays, respectively. Based on a unifying analysis framework, it is established that both DEXP3 and DBGD guarantee an Õ(√(K(T + D))) regret, where D denotes the delay accumulated over T slots, and K represents the number of arms in MAB or the dimension of the decision variables in BCO. Numerical tests on both synthetic and real data validate DEXP3 and DBGD.
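To make the bandit-feedback idea concrete, the sketch below shows the standard one-point gradient estimator (which recovers a gradient estimate from a single function value) and a simple projected descent loop in which each query's feedback only becomes usable after a delay. This is a minimal illustration of the general mechanism the abstract describes, not the DBGD algorithm analyzed in the paper; the step size, smoothing radius, and projection ball are arbitrary choices for the toy example.

```python
import numpy as np

def one_point_grad_estimate(f, x, delta, rng):
    """One-point bandit gradient estimate: only the scalar value
    f(x + delta*u) is observed, yet (d/delta) * f(x + delta*u) * u
    is an unbiased estimate of the gradient of a smoothed version of f."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

def delayed_bandit_gradient_descent(f, x0, T, delays,
                                    eta=1e-3, delta=0.1, radius=2.0, seed=0):
    """Projected gradient descent with delayed bandit feedback: the value
    queried at slot t only becomes usable at slot t + delays[t], and the
    learner applies each gradient estimate when it arrives.
    Illustrative sketch only -- not the paper's DBGD iterations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    pending = {}                        # arrival slot -> gradient estimates
    for t in range(T):
        g = one_point_grad_estimate(f, x, delta, rng)
        pending.setdefault(t + delays[t], []).append(g)
        for g_late in pending.pop(t, []):   # feedback arriving at slot t
            x -= eta * g_late
            nrm = np.linalg.norm(x)
            if nrm > radius:            # project back onto the feasible ball
                x *= radius / nrm
    return x

# Toy quadratic with minimum at the origin; every query is delayed 3 slots.
f = lambda z: float(np.dot(z, z))
x_final = delayed_bandit_gradient_descent(f, np.ones(3), T=3000, delays=[3] * 3000)
```

When all delays are zero the loop reduces to ordinary one-point bandit gradient descent; the accumulated delay D enters the regret bound above precisely because each update is computed from an iterate that is several slots stale.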

Original language: English (US)
State: Published - 2020
Event: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan
Duration: Apr 16 2019 → Apr 18 2019


Conference: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019

Bibliographical note

Funding Information:
We would like to thank the anonymous reviewers for their constructive feedback. We also gratefully acknowledge the support from NSF grants 1500713, 1508993, and 1711471.

Publisher Copyright:
© 2019 by the author(s).


