Incorporating prior knowledge of predictors into penalized classifiers with multiple penalty terms

Feng Tai, Wei Pan

Research output: Contribution to journal › Article › peer-review



Motivation: In the context of sample (e.g. tumor) classification with microarray gene expression data, many methods have been proposed. However, almost all of these methods ignore existing biological knowledge and treat all the genes equally a priori. On the other hand, because some genes have been identified by previous studies to have biological functions or to be involved in pathways related to the outcome (e.g. cancer), incorporating this type of prior knowledge into a classifier can potentially improve both the predictive performance and the interpretability of the resulting model.

Results: We propose a simple and general framework to incorporate such prior knowledge into building a penalized classifier. As two concrete examples, we apply the idea to two penalized classifiers, nearest shrunken centroids (also called PAM) and penalized partial least squares (PPLS). Instead of treating all the genes equally a priori as in standard penalized methods, we group the genes according to their functional associations based on existing biological knowledge or data, and adopt group-specific penalty terms and penalization parameters. Simulated and real data examples demonstrate that, if prior knowledge on gene grouping is indeed informative, our new methods perform better than the two standard penalized methods, yielding higher predictive accuracy and screening out more irrelevant genes.
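The core idea above can be illustrated with a minimal sketch. Nearest shrunken centroids shrinks per-gene centroid statistics toward zero by soft-thresholding with a single penalty level; the proposed framework instead applies a group-specific penalty to each gene according to its prior-knowledge group. The function below is a hypothetical illustration of that one step (the names `group_soft_threshold`, `groups`, and `lambdas` are assumptions, not the authors' code), not the full PAM or PPLS procedure:

```python
import numpy as np

def group_soft_threshold(d, groups, lambdas):
    """Soft-threshold per-gene statistics with group-specific penalties.

    d       : per-gene statistics (e.g. standardized class-centroid
              deviations, as in nearest shrunken centroids / PAM)
    groups  : prior-knowledge group label for each gene
    lambdas : dict mapping group label -> penalty level for that group
    """
    d = np.asarray(d, dtype=float)
    thr = np.array([lambdas[g] for g in groups], dtype=float)
    # Soft-thresholding: shrink each statistic toward zero by its
    # group's penalty; genes in heavily penalized groups are zeroed
    # (screened out) sooner than genes in lightly penalized groups.
    return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

# Genes believed relevant (group 0) get a smaller penalty than the rest.
d_shrunk = group_soft_threshold(
    d=[2.0, 0.5, -1.5],
    groups=[0, 1, 1],
    lambdas={0: 0.5, 1: 1.0},
)
```

In a standard penalized method all genes would share one penalty; here the group-1 gene with statistic 0.5 is shrunk to zero while a group-0 gene with the same statistic would survive, which is how informative grouping can screen out more irrelevant genes without discarding known-relevant ones.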

Original language: English (US)
Pages (from-to): 1775-1782
Number of pages: 8
Issue number: 14
State: Published - Jul 15 2007

Bibliographical note

Funding Information:
This research was partially supported by NIH grant HL65462 and a UM AHC Faculty Research Development grant.

