Sequential change-point detection via online convex optimization

Yang Cao, Liyan Xie, Yao Xie, Huan Xu

Research output: Contribution to journal › Article › peer-review

20 Scopus citations

Abstract

Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and by leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real-data examples validate our theory.

Original language: English (US)
Article number: 108
Journal: Entropy
Volume: 20
Issue number: 2
DOIs
State: Published - Feb 1 2018
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2018 by the authors.

Keywords

  • Change-point detection
  • Online algorithms
  • Sequential methods
