Big data applications demand efficient solvers that provide accurate solutions to large-scale problems at affordable computational cost. By processing data sequentially, online algorithms offer an attractive means of handling massive data sets. However, they may incur prohibitive complexity in high-dimensional scenarios if the entire data set is processed. It is therefore necessary to confine computations to an informative subset. Whereas existing approaches focus on selecting a prescribed fraction of the available data vectors, the present paper capitalizes on this degree of freedom to accelerate the convergence of a generic class of online algorithms, in terms of processing time and computational resources, by balancing the per-datum computational burden against a metric of how informative each datum is. The proposed method is illustrated in a linear regression setting, and simulations corroborate the superior convergence rate of the recursive least-squares (RLS) algorithm when the novel data selection is employed.