TY - JOUR
T1 - Large scale distributed sparse precision estimation
AU - Wang, Huahua
AU - Banerjee, Arindam
AU - Hsieh, Cho-Jui
AU - Ravikumar, Pradeep
AU - Dhillon, Inderjit S.
PY - 2013
Y1 - 2013
N2 - We consider the problem of sparse precision matrix estimation in high dimensions using the CLIME estimator, which has several desirable theoretical properties. We present an inexact alternating direction method of multipliers (ADMM) algorithm for CLIME and establish rates of convergence for both the objective and the optimality conditions. Further, we develop a large-scale distributed framework for the computations, which scales to millions of dimensions and trillions of parameters using hundreds of cores. The proposed framework solves CLIME in column blocks and involves only element-wise operations and parallel matrix multiplications. We evaluate our algorithm on both shared-memory and distributed-memory architectures, using block-cyclic distribution of data and parameters to achieve load balance and improve the efficiency of the memory hierarchy. Experimental results show that our algorithm is substantially more scalable than state-of-the-art methods and scales almost linearly with the number of cores.
UR - http://www.scopus.com/inward/record.url?scp=84898963465&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84898963465&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:84898963465
SN - 1049-5258
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 27th Annual Conference on Neural Information Processing Systems, NIPS 2013
Y2 - 5 December 2013 through 10 December 2013
ER -