Abstract
Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical and real data examples validate our theory.
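As a rough illustration of the type of procedure described above, the following is a minimal sketch, not the authors' exact algorithm: it detects a mean shift in Gaussian data with known variance and unknown post-change mean, using a sequential likelihood-ratio statistic whose non-anticipating estimate of the post-change mean is updated by online gradient descent (the special case of online mirror descent under the squared Euclidean distance). The function name `omd_cusum_gaussian`, the step-size choice `eta / t`, and the use of a single CUSUM-style statistic without restarting the estimator are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def omd_cusum_gaussian(x, threshold, eta=1.0, sigma=1.0):
    """Sketch of a CUSUM-like sequential likelihood-ratio detector for a
    mean shift from N(0, sigma^2) to N(theta, sigma^2) with theta unknown.
    The post-change mean is tracked by a non-anticipating estimate updated
    by online (mirror) descent on the negative log-likelihood; with the
    squared Euclidean distance this reduces to online gradient descent.
    Returns the first time the statistic exceeds `threshold`, or None."""
    S = 0.0          # running detection statistic
    theta_hat = 0.0  # estimate of the post-change mean, built from past data only
    for t, xt in enumerate(x, start=1):
        # Log-likelihood ratio of N(theta_hat, sigma^2) vs. N(0, sigma^2),
        # evaluated with the estimate formed *before* observing x_t
        # (this is what makes the estimator non-anticipating).
        llr = (theta_hat * xt - 0.5 * theta_hat ** 2) / sigma ** 2
        S = max(0.0, S + llr)   # CUSUM-style reflection at zero
        if S > threshold:
            return t            # raise an alarm at time t
        # Online gradient descent step on the per-sample negative
        # log-likelihood; the 1/t step-size decay yields logarithmic
        # regret for this strongly convex loss.
        theta_hat += (eta / t) * (xt - theta_hat) / sigma ** 2
    return None

# Example: 200 pre-change samples followed by a mean shift of 0.8.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(0.8, 1.0, 200)])
print(omd_cusum_gaussian(x, threshold=10.0))
```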
| Original language | English (US) |
| --- | --- |
| Article number | 108 |
| Journal | Entropy |
| Volume | 20 |
| Issue number | 2 |
| DOIs | |
| State | Published - Feb 1 2018 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2018 by the authors.
Keywords
- Change-point detection
- Online algorithms
- Sequential methods