Nearly second-order optimality of online joint detection and estimation via one-sample update schemes

Yang Cao, Liyan Xie, Yao Xie, Huan Xu

Research output: Contribution to conference › Paper › peer-review

Abstract

Sequential hypothesis testing and change-point detection with unknown distribution parameters are fundamental problems in statistics and machine learning. We show that for such problems, detection procedures based on sequential likelihood ratios with simple one-sample update estimates, such as online mirror descent, are nearly second-order optimal: the upper bound on the algorithm's performance meets the lower bound asymptotically, up to a log-log factor in the false-alarm rate as it tends to zero. This is a blessing, since although the generalized likelihood ratio (GLR) statistics are theoretically optimal, they cannot be computed recursively, and their exact computation usually requires infinite memory of historical data. We prove the nearly second-order optimality by connecting sequential change-point detection to online convex optimization and leveraging the logarithmic regret bound of the online mirror descent algorithm. Numerical examples validate our theory.
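To illustrate the flavor of a one-sample update scheme, the following is a minimal sketch, not the authors' exact procedure: a CUSUM-style likelihood-ratio detector for a Gaussian mean shift whose unknown post-change mean is re-estimated with a single stochastic-gradient step per sample (online mirror descent with Euclidean geometry). The pre-change distribution N(0, 1), the initial guess, the step size, and the threshold are all illustrative assumptions.

```python
import numpy as np

def one_sample_update_detector(stream, threshold, step=0.1):
    """Hedged sketch of a one-sample-update detection scheme.

    Pre-change samples ~ N(0, 1); post-change ~ N(theta, 1) with theta
    unknown. Each sample triggers exactly one estimate update, so the
    statistic is computed recursively with O(1) memory, unlike GLR.
    """
    theta = 1.0   # initial guess for the unknown post-change mean (assumption)
    stat = 0.0    # running CUSUM log-likelihood-ratio statistic
    for t, x in enumerate(stream, 1):
        # Log-likelihood ratio of N(theta, 1) vs. N(0, 1) at sample x.
        llr = theta * x - 0.5 * theta ** 2
        # CUSUM recursion: accumulate evidence, reset at zero.
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return t  # declare a change at time t
        # One-sample update: a gradient step on the Gaussian log-likelihood,
        # i.e. mirror descent under the Euclidean Bregman divergence.
        theta += (step / np.sqrt(t)) * (x - theta)
    return None  # no alarm raised on this stream
```

With a mean shift from 0 to 1 injected at time 100, the detector typically raises an alarm shortly after the change while keeping the statistic near zero beforehand; the detection delay reflects both the threshold and how fast the one-sample updates re-learn the post-change mean.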

Original language: English (US)
Pages: 519-528
Number of pages: 10
State: Published - 2018
Externally published: Yes
Event: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 - Playa Blanca, Lanzarote, Canary Islands, Spain
Duration: Apr 9 2018 – Apr 11 2018

Conference

Conference: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Country/Territory: Spain
City: Playa Blanca, Lanzarote, Canary Islands
Period: 4/9/18 – 4/11/18

Bibliographical note

Publisher Copyright:
Copyright 2018 by the author(s).

