Co-tracking using semi-supervised support vector machines

Feng Tang, Shane Brennan, Qi Zhao, Hai Tao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

256 Scopus citations


This paper treats tracking as a foreground/background classification problem and proposes an online semi-supervised learning framework. The tracker is initialized with a small number of labeled samples; thereafter, each new sample is treated as unlabeled data. Classification of new data and updating of the classifier are achieved simultaneously in a co-training framework. The object is represented using independent features, and an online support vector machine (SVM) is built for each feature. The predictions from different features are fused by combining the confidence map from each classifier using a classifier weighting method, which creates a final classifier that performs better than any classifier based on a single feature. The semi-supervised learning approach then uses the output of the combined confidence map to generate new samples and update the SVMs online. With this approach, the tracker gains increasing knowledge of the object and background and continually improves itself over time. Compared to other discriminative trackers, the online semi-supervised learning approach improves each individual classifier using the information from other features, thus leading to a more robust tracker. Experiments show that this framework performs better than state-of-the-art tracking algorithms on challenging sequences.
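The abstract's pipeline — one online SVM per feature view, accuracy-weighted fusion of their confidences, and fused predictions used as pseudo-labels to update every classifier — can be sketched as below. This is a minimal illustrative assumption, not the authors' implementation: the two synthetic 2-D "views" stand in for real features (e.g. color and gradient histograms), and `OnlineLinearSVM` is a simple stochastic sub-gradient hinge-loss learner standing in for the paper's online SVMs.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_views(n):
    """Two independent feature views per sample; +1 = foreground, -1 = background.
    Purely synthetic stand-ins for the paper's per-feature representations."""
    y = np.hstack([np.ones(n), -np.ones(n)])
    view_a = rng.normal(0.0, 0.5, size=(2 * n, 2)) + y[:, None] * 1.5
    view_b = rng.normal(0.0, 0.5, size=(2 * n, 2)) - y[:, None] * 1.5
    return view_a, view_b, y

class OnlineLinearSVM:
    """Tiny online linear SVM trained by stochastic sub-gradient descent
    on the regularized hinge loss (a hypothetical stand-in)."""
    def __init__(self, dim, lr=0.1, reg=0.01):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr, self.reg = lr, reg

    def update(self, x, y):
        margin = y * (self.w @ x + self.b)
        grad_w = self.reg * self.w - (y * x if margin < 1 else 0.0)
        self.w -= self.lr * grad_w
        if margin < 1:
            self.b += self.lr * y

    def confidence(self, x):
        return self.w @ x + self.b  # signed distance serves as confidence

def cotrain(n_labeled=10, n_stream=200):
    Xa, Xb, y = make_views((n_labeled + n_stream) // 2)
    idx = rng.permutation(len(y))
    Xa, Xb, y = Xa[idx], Xb[idx], y[idx]

    svm_a, svm_b = OnlineLinearSVM(2), OnlineLinearSVM(2)
    # Initialize both per-feature SVMs on the small labeled seed set.
    for _ in range(20):
        for i in range(n_labeled):
            svm_a.update(Xa[i], y[i])
            svm_b.update(Xb[i], y[i])

    # Classifier weighting: weight each SVM by its accuracy on the seed set.
    def weight(svm, X):
        preds = np.sign([svm.confidence(x) for x in X[:n_labeled]])
        return max((preds == y[:n_labeled]).mean(), 1e-3)
    wa, wb = weight(svm_a, Xa), weight(svm_b, Xb)

    correct = 0
    for i in range(n_labeled, len(y)):
        # Fuse the per-feature confidences into a combined confidence.
        fused = wa * svm_a.confidence(Xa[i]) + wb * svm_b.confidence(Xb[i])
        pseudo = 1.0 if fused >= 0 else -1.0
        correct += pseudo == y[i]
        # Semi-supervised step: the fused prediction supervises both SVMs,
        # so each classifier benefits from the other feature's evidence.
        svm_a.update(Xa[i], pseudo)
        svm_b.update(Xb[i], pseudo)
    return correct / (len(y) - n_labeled)

print(f"streaming accuracy: {cotrain():.2f}")
```

The key design point mirrored from the abstract is that no classifier ever labels its own training data after initialization: the accuracy-weighted fusion produces the pseudo-labels, so a feature that momentarily fails is corrected by the others rather than reinforcing its own mistakes.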

Original language: English (US)
Title of host publication: 2007 IEEE 11th International Conference on Computer Vision, ICCV
State: Published - Dec 1 2007
Event: 2007 IEEE 11th International Conference on Computer Vision, ICCV - Rio de Janeiro, Brazil
Duration: Oct 14 2007 – Oct 21 2007




