Abstract
Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as learning with structured data, together with its support vector machine (SVM)-based optimization formulation, called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning and proposed an SVM-based formulation for MTL classification called SVM+MTL. Training the SVM+MTL classifier requires solving a large quadratic programming problem whose cost scales as O(n³) with the sample size n, so computationally efficient algorithms for implementing SVM+MTL are needed. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over a 100-times speed-up compared with general-purpose optimization routines.
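To make the approach concrete: SMO avoids the O(n³) cost of a general quadratic programming solver by repeatedly solving the smallest possible subproblem, a pair of Lagrange multipliers, in closed form. The sketch below implements a simplified version of Platt's original single-task SMO for a linear-kernel soft-margin SVM (not the paper's SVM+MTL generalization, which adds per-group correcting functions); the function name `smo_train`, the random second-multiplier choice, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import random
import numpy as np

def smo_train(X, y, C=1.0, tol=1e-4, max_passes=20):
    """Simplified SMO for the linear-kernel soft-margin SVM dual.

    Repeatedly picks a pair (i, j) of multipliers, solves the
    two-variable subproblem analytically, and clips alpha_j to the
    box [0, C] so that sum(alpha * y) == 0 is preserved.
    Illustrative sketch only, not the paper's SVM+MTL solver.
    """
    n = X.shape[0]
    alpha = np.zeros(n)
    b = 0.0
    K = X @ X.T                      # linear-kernel Gram matrix
    rng = random.Random(0)
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]   # prediction error on i
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.randrange(n - 1)            # random second index j != i
                if j >= i:
                    j += 1
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # Feasible segment [L, H] for alpha_j from the box constraints
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]   # second derivative along the segment
                if eta >= 0:
                    continue
                alpha[j] = float(np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H))
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                # Update alpha_i so the equality constraint still holds
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Recompute the bias from whichever multiplier is strictly inside (0, C)
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] \
                     - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] \
                     - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X              # recover the primal weight vector
    return w, b

# Tiny linearly separable toy problem (illustrative data)
X = np.array([[2.0, 2.0], [2.0, 3.0], [-2.0, -2.0], [-2.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = smo_train(X, y)
preds = np.sign(X @ w + b)
```

Because every iteration touches only two multipliers, each update is O(1) given cached kernel values; the paper's contribution is extending this pairwise analytic update to the extra variables introduced by the SVM+MTL formulation.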
| Original language | English (US) |
| --- | --- |
| Article number | 6183517 |
| Pages (from-to) | 997-1003 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 23 |
| Issue number | 6 |
| DOIs | |
| State | Published - 2012 |
Bibliographical note
Funding Information: Manuscript received December 27, 2011; revised January 28, 2012; accepted January 28, 2012. Date of publication April 13, 2012; date of current version May 10, 2012. This work was supported in part by the National Science Foundation under Grant EECS-0802056.
Keywords
- Classification
- Learning with structured data
- Multitask learning
- Quadratic optimization
- Sequential minimal optimization
- Support vector machine (SVM)
- SVM+