On par with other data-intensive applications, the sheer size of modern linear regression problems creates an ever-growing demand for efficient solvers. Fortunately, a significant fraction of the data accrued can be omitted while maintaining a desired quality of statistical inference within an affordable computational budget. This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion. Given streaming data, the related maximum-likelihood estimator is sequentially found using first- and second-order stochastic approximation algorithms. These schemes are well suited when data are inherently censored, or when the aim is to save communication overhead in decentralized learning setups. In a different operational scenario, the task of joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup. Novel online algorithms are developed enjoying simple closed-form updates and provable (non)asymptotic convergence guarantees. To attain desired censoring patterns and levels of dimensionality reduction, thresholding rules are investigated as well. Numerical tests on real and synthetic datasets corroborate the efficacy of the proposed data-adaptive methods compared to data-agnostic random-projection-based alternatives.
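To make the censoring idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of a first-order online estimator that discards observations whose prediction residual falls below a threshold; the function name `censored_sgd`, the threshold `tau`, and the step size are illustrative assumptions.

```python
import numpy as np

def censored_sgd(stream, dim, tau=0.3, step=0.02):
    """Online least-squares estimation with data-adaptive censoring.

    Observations whose residual magnitude is below `tau` are deemed
    less informative and skipped (censored); the rest trigger a
    standard first-order (LMS-type) update. Returns the final
    estimate and the fraction of censored samples.
    """
    w = np.zeros(dim)
    censored, total = 0, 0
    for x, y in stream:
        total += 1
        r = y - x @ w           # residual of the incoming sample
        if abs(r) <= tau:
            censored += 1       # small residual: little new information
            continue
        w = w + step * r * x    # update only on retained samples
    return w, censored / total

# Synthetic streaming regression: y = x^T w_true + noise
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
stream = [(x, x @ w_true + 0.1 * rng.standard_normal())
          for x in rng.standard_normal((5000, 3))]
w_hat, frac = censored_sgd(stream, dim=3)
```

In this toy run, most samples end up censored once the estimate is close to `w_true`, yet the retained ones suffice for a reasonable estimate; the thresholding rule governs the trade-off between censoring rate and estimation accuracy.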
Funding Information:
NSF grants 1343860, 1442686, 1514056, and 1500713; and NIH grant 1R01GM104975-01.
- Parameter estimation
- Big data
- Least squares