Dimension reduction in text classification with support vector machines

Hyunsoo Kim, Peg Howland, Haesun Park

Research output: Contribution to journal › Article › peer-review

206 Scopus citations

Abstract

Support vector machines (SVMs) have been recognized as one of the most successful classification methods for many applications, including text classification. Even though the learning ability and the computational complexity of training an SVM may be independent of the dimension of the feature space, reducing computational complexity remains essential for efficiently handling the large number of terms that arise in practical text classification. In this paper, we adopt novel dimension reduction methods that dramatically reduce the dimension of the document vectors. We also introduce decision functions for the centroid-based classification algorithm and for support vector classifiers to handle the classification problem in which a document may belong to multiple classes. Our extensive experimental results show that, with several dimension reduction methods designed particularly for clustered data, higher efficiency in both training and testing can be achieved without sacrificing prediction accuracy, even when the dimension of the input space is significantly reduced.
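The centroid-based reduction the abstract refers to can be illustrated with a minimal sketch. This follows the general orthogonal-centroid idea (project documents onto the span of the class centroids), not the paper's exact algorithms; the toy term-document data below is invented for illustration.

```python
import numpy as np

# Toy term-document matrix: rows = documents, columns = term weights.
# Illustrative data only, not from the paper.
X = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [3.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 1.0],
    [0.0, 0.0, 3.0, 1.0],
])
y = np.array([0, 0, 1, 1])

# Centroid matrix C: one column per class (terms x k).
classes = np.unique(y)
C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)

# Orthogonal-centroid reduction: QR-factor C and project every
# document onto the k-dimensional span of the class centroids.
Q, _ = np.linalg.qr(C)   # Q: terms x k, orthonormal columns
X_red = X @ Q            # documents represented in k dimensions

# Nearest-centroid decision rule applied in the reduced space.
C_red = C.T @ Q          # reduced centroids, one per row (k x k)

def predict(x):
    """Assign x to the class whose reduced centroid is closest."""
    z = x @ Q
    dists = np.linalg.norm(C_red - z, axis=1)
    return classes[np.argmin(dists)]
```

In the paper's setting, a linear SVM would then be trained on the reduced vectors `X_red` instead of the full-dimensional term vectors, which is where the training and testing speedups come from.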

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 6
State: Published - 2005
Externally published: Yes

Keywords

  • Centroids
  • Dimension reduction
  • Linear discriminant analysis
  • Support vector machines
  • Text classification

