Clustering very large data sets using a low memory matrix factored representation

David Littau, Daniel Boley

Research output: Contribution to journal › Article › peer-review


Abstract

A scalable method to cluster data sets too large to fit in memory is presented. The method does not depend on random subsampling; instead, it scans every individual data sample in a deterministic way. The original data are represented in factored form as the product of two matrices, one or both of which is very sparse. The factored form is never multiplied out; instead, a variant of the Principal Direction Divisive Partitioning (PDDP) algorithm that does not depend on computing distances between individual samples is applied directly to the factors. The resulting clustering algorithm is Piecemeal PDDP (PMPDDP), in which the original data are broken up into sections that fit into memory, and each section is clustered. The cluster centers are used to create approximations to the original data items, so that each original data item is represented by a linear combination of these centers. We evaluate the performance of PMPDDP on three real data sets and observe that the quality of the PMPDDP clusters is comparable to that of PDDP for the data sets examined.
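To make the factored representation concrete, below is a minimal sketch of the construction described in the abstract: the data are split into sections, each section is clustered, and every item is then approximated by a sparse linear combination of the collected cluster centers, giving X ≈ C Z. This is not the authors' implementation: k-means stands in for PDDP (which is not available in standard libraries), the function name, parameters (n_sections, centers_per_section, nnz_per_item), and the least-squares fit over the few nearest centers are illustrative assumptions, and Z is stored densely here for clarity although the point of the paper is that it is kept sparse.

```python
import numpy as np
from sklearn.cluster import KMeans


def factored_representation(X, n_sections=4, centers_per_section=10, nnz_per_item=3):
    """Build a low-memory factored approximation X ~= C @ Z.

    Each section of X is clustered (k-means here, as a stand-in for PDDP);
    the resulting centers form the columns of C, and every original item is
    approximated by a sparse linear combination of a few nearby centers,
    stored as a column of Z.
    """
    d, n = X.shape                       # items are stored as columns
    centers = []
    for sec in np.array_split(np.arange(n), n_sections):
        km = KMeans(n_clusters=centers_per_section, n_init=10).fit(X[:, sec].T)
        centers.append(km.cluster_centers_.T)
    C = np.hstack(centers)               # d x (n_sections * centers_per_section)
    Z = np.zeros((C.shape[1], n))        # sparse in practice; dense here for clarity
    for j in range(n):
        # choose the few nearest centers and fit least-squares coefficients
        dist = np.linalg.norm(C - X[:, [j]], axis=0)
        idx = np.argsort(dist)[:nnz_per_item]
        coef, *_ = np.linalg.lstsq(C[:, idx], X[:, j], rcond=None)
        Z[idx, j] = coef
    return C, Z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 500))   # 20-dimensional data, 500 items
    C, Z = factored_representation(X)
    print("relative approximation error:",
          np.linalg.norm(X - C @ Z) / np.linalg.norm(X))
```

The memory saving comes from storing only C and the sparse Z rather than the full data matrix; the subsequent clustering step in the paper is a PDDP variant that operates on the factors without ever forming C @ Z explicitly.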

Original language: English (US)
Pages (from-to): 114-135
Number of pages: 22
Journal: Computational Intelligence
Volume: 25
Issue number: 2
State: Published - May 2009

Keywords

  • Clustering
  • Large data sets
  • Low-memory factorization
  • PDDP
