Abstract
A scalable method for clustering data sets too large to fit in memory is presented. Rather than relying on random subsampling, the method scans every individual data sample in a deterministic way. The original data are represented in factored form as the product of two matrices, at least one of which is very sparse. The factored form is clustered directly, without multiplying the two matrices together, by a variant of the Principal Direction Divisive Partitioning (PDDP) algorithm that does not require computing distances between individual samples. The resulting clustering algorithm is Piecemeal PDDP (PMPDDP), in which the original data are broken into sections that fit in memory and each section is clustered. The resulting cluster centers are used to approximate the original data: each data item is represented by a linear combination of these centers. We evaluate the performance of PMPDDP on three real data sets and observe that, for the data sets examined, the quality of the PMPDDP clusters is comparable to that of PDDP.
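The sketch below illustrates the general idea of clustering a factored representation without forming the full data matrix; it is a minimal toy example, not the paper's implementation. It assumes the data are approximated as `A ≈ C @ Z`, with `C` a small dense matrix of centers and `Z` a sparse coefficient matrix, and it performs one PDDP-style split by power iteration on the mean-centered product using only the factors. The names `C`, `Z`, and `principal_direction`, and the chosen sizes and sparsity, are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Toy factored representation A ~= C @ Z (hypothetical sizes and density).
rng = np.random.default_rng(0)
d, n, kc = 50, 1000, 20            # dimension, number of samples, number of centers
C = rng.standard_normal((d, kc))   # dense d x kc matrix of approximating centers
Z = sparse_random(kc, n, density=0.05, random_state=0, format="csc")  # sparse kc x n coefficients

def principal_direction(C, Z, iters=50):
    """Leading left singular vector of the mean-centered product C @ Z,
    computed by power iteration without ever forming the dense d x n matrix."""
    n = Z.shape[1]
    m = (C @ (Z @ np.ones(n))) / n                 # column mean of C @ Z
    w = np.ones(C.shape[0]) / np.sqrt(C.shape[0])  # starting direction
    for _ in range(iters):
        # Apply M^T then M, where M = C @ Z - m 1^T, using only the factors.
        v = Z.T @ (C.T @ w) - np.full(n, m @ w)
        w = C @ (Z @ v) - m * v.sum()
        w /= np.linalg.norm(w)
    return w, m

w, m = principal_direction(C, Z)
# PDDP-style split: assign each sample by the sign of its projection
# onto the principal direction of the mean-centered data.
proj = Z.T @ (C.T @ w) - np.full(Z.shape[1], m @ w)
left, right = np.where(proj <= 0)[0], np.where(proj > 0)[0]
print(len(left), len(right))
```

Because every matrix-vector product touches only `C`, the sparse `Z`, and the mean vector, the memory footprint stays proportional to the factored form rather than to the full data matrix, which is the property the abstract attributes to the factored representation.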
Original language | English (US) |
---|---|
Pages (from-to) | 114-135 |
Number of pages | 22 |
Journal | Computational Intelligence |
Volume | 25 |
Issue number | 2 |
State | Published - May 2009 |
Keywords
- Clustering
- Large data sets
- Low-memory factorization
- PDDP