By exploiting cross-information among multiple imaging datasets, multimodal fusion has often been used to better understand brain diseases. However, most current fusion approaches are blind, in that they adopt no prior information. There is increasing interest in uncovering how specific clinical measurements map onto enriched brain imaging data; hence, a supervised, goal-directed model that employs prior information as a reference to guide multimodal data fusion is much needed and becomes a natural option. Here, we propose a fusion-with-reference model called 'multi-site canonical correlation analysis with reference + joint independent component analysis' (MCCAR+jICA), which can precisely identify co-varying multimodal imaging patterns closely related to a reference, such as cognitive scores. In a three-way fusion simulation, the proposed method was compared with alternative approaches on multiple facets; MCCAR+jICA outperformed the others, achieving higher estimation precision and higher accuracy in identifying a target component with the correct correspondence. In human imaging data, working memory performance was used as the reference to investigate the co-varying, working-memory-associated brain patterns across three modalities and how they are impaired in schizophrenia. Two independent cohorts (294 and 83 subjects, respectively) were analyzed. Similar brain maps were identified in both cohorts, with substantial overlap in the central executive network in fMRI, the salience network in sMRI, and major white matter tracts in dMRI. These regions have been linked to working memory deficits in schizophrenia in multiple reports, and MCCAR+jICA further verified them in a repeatable, joint manner, demonstrating the ability of the proposed method to identify potential neuromarkers for mental disorders.
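The abstract does not give the MCCAR+jICA algorithm itself, but the core idea of reference-guided fusion can be illustrated with a toy sketch: maximize the pairwise correlations among projections of several modalities while also rewarding correlation with a reference variable. Everything below is a hypothetical, simplified illustration (the data, the `neg_objective` function, and the single-component optimization are all assumptions for demonstration), not the authors' actual MCCAR+jICA implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy setup: three "modalities" sharing one latent subject-level signal,
# and a noisy reference measure (e.g. a cognitive score) tied to that signal.
# All names and dimensions here are illustrative assumptions.
n_subj, n_feat = 50, 20
latent = rng.standard_normal(n_subj)
ref = latent + 0.3 * rng.standard_normal(n_subj)
X = [np.outer(latent, rng.standard_normal(n_feat))
     + rng.standard_normal((n_subj, n_feat)) for _ in range(3)]

def corr(a, b):
    """Pearson correlation, guarded against zero-variance vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def neg_objective(w_flat, X, ref, lam=1.0):
    # One projection vector per modality.
    ws = np.split(w_flat, len(X))
    ys = [Xk @ wk for Xk, wk in zip(X, ws)]
    # Sum of squared pairwise correlations among modality projections ...
    inter = sum(corr(ys[i], ys[j]) ** 2
                for i in range(len(ys)) for j in range(i + 1, len(ys)))
    # ... plus a reference-guidance term weighted by lam.
    guided = sum(corr(y, ref) ** 2 for y in ys)
    return -(inter + lam * guided)

w0 = rng.standard_normal(len(X) * n_feat)
res = minimize(neg_objective, w0, args=(X, ref), method="L-BFGS-B")
ws = np.split(res.x, len(X))
ref_corrs = [abs(corr(Xk @ wk, ref)) for Xk, wk in zip(X, ws)]
print("correlation with reference per modality:",
      [round(c, 2) for c in ref_corrs])
```

In this sketch the `lam` weight plays the role of the reference constraint: setting it to zero reduces the objective to a blind multimodal correlation maximization, while increasing it pulls the extracted component toward the supplied clinical measure.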
Bibliographical note. Funding Information:
This work was supported in part by the National High-Tech Development Plan (863 plan) under Grant 2015AA020513, in part by the Chinese National Science Foundation under Grant 81471367, in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) under Grant XDB02060005, in part by the 100 Talents Plan of CAS, and in part by the NIH under Grant P20GM103472, Grant R01EB005846, and Grant 1R01EB006841.
Manuscript received May 21, 2017; revised July 2, 2017; accepted July 5, 2017. Date of publication July 11, 2017; date of current version December 29, 2017. (Corresponding author: Jing Sui.) S. Qi and T. Jiang are with the Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, also with the University of Chinese Academy of Sciences, and also with the CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, China.
© 2017 IEEE.
- Multimodal fusion with reference
- supervised learning
- working memory