Abstract
How close can we zoom in to observe brain activity? Our understanding is limited by imaging modalities that exhibit good spatial but poor temporal resolution, or vice versa. In this paper, we propose BRAINZOOM, an efficient imaging algorithm that cross-leverages multi-modal brain signals. BRAINZOOM (a) constructs high-resolution brain images from multi-modal signals, (b) is scalable, and (c) is flexible, in that it can easily incorporate various priors on brain activity, such as sparsity, low rank, or smoothness. We carefully formulate the problem to tackle nonlinearity in the measurements (via variable splitting) and to automatically balance the scales of the different modalities, and we judiciously design an inexact alternating optimization-based algorithmic framework that handles the problem with provable convergence guarantees. Our experiments, using a popular realistic brain-signal simulator to generate fMRI and MEG data, demonstrate that high spatio-temporal resolution brain imaging is possible from these two modalities. The experiments also suggest that smoothness is the most effective of the several priors we tried.
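To make the fusion idea concrete, below is a minimal, self-contained Python sketch of variable splitting plus inexact alternating minimization with a temporal-smoothness prior, fusing one synthetic measurement with fine spatial but coarse temporal sampling and one with the opposite trade-off. The forward operators, dimensions, and penalty weights are hypothetical toy choices for illustration; this is not the paper's actual BRAINZOOM formulation or forward model.

```python
# Toy multi-modal fusion via variable splitting + inexact alternating minimization.
# All operators, sizes, and weights are hypothetical stand-ins, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

V, T = 40, 60          # "voxels" x time points of the latent activity X
V2, T1 = 10, 12        # coarse spatial / coarse temporal measurement sizes

# Ground-truth activity: temporally smooth bumps with random spatial patterns.
t = np.linspace(0, 1, T)
X_true = sum(np.outer(rng.normal(size=V), np.exp(-((t - c) ** 2) / 0.01))
             for c in (0.3, 0.7))

# Hypothetical forward operators: A_t averages time into T1 bins (fMRI-like blur),
# A_s averages space into V2 groups (MEG-like coarse spatial view).
A_t = np.kron(np.eye(T1), np.ones((T // T1, 1)) / (T // T1))   # shape (T, T1)
A_s = np.kron(np.eye(V2), np.ones((1, V // V2)) / (V // V2))   # shape (V2, V)

Y1 = X_true @ A_t + 0.01 * rng.normal(size=(V, T1))   # spatial detail, temporal blur
Y2 = A_s @ X_true + 0.01 * rng.normal(size=(V2, T))   # temporal detail, spatial blur

# Temporal first-difference operator for the smoothness prior ||Z D^T||_F^2.
D = np.eye(T)[:-1] - np.eye(T, k=1)[:-1]               # shape (T-1, T)

lam, rho = 1.0, 5.0    # smoothness weight and splitting (coupling) weight
X = np.zeros((V, T))   # block carrying the data-fit terms
Z = np.zeros((V, T))   # split copy carrying the smoothness prior

def grad_X(X, Z):
    """Gradient of ||Y1 - X A_t||^2 + ||Y2 - A_s X||^2 + rho ||X - Z||^2 w.r.t. X."""
    return (2 * (X @ A_t - Y1) @ A_t.T
            + 2 * A_s.T @ (A_s @ X - Y2)
            + 2 * rho * (X - Z))

def grad_Z(X, Z):
    """Gradient of lam ||Z D^T||^2 + rho ||X - Z||^2 w.r.t. Z."""
    return 2 * lam * (Z @ D.T) @ D + 2 * rho * (Z - X)

step = 1e-2
for _ in range(300):
    # Inexact alternating minimization: a few gradient steps per block, not exact solves.
    for _ in range(5):
        X -= step * grad_X(X, Z)
    for _ in range(5):
        Z -= step * grad_Z(X, Z)

err = np.linalg.norm(Z - X_true) / np.linalg.norm(X_true)
print(f"relative reconstruction error: {err:.3f}")
```

The split variable Z lets the smoothness prior be handled in its own block update while X absorbs the two data-fit terms, mirroring in spirit (though not in detail) how splitting decouples the nonlinear measurement model from the prior in the paper's framework.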
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the 17th SIAM International Conference on Data Mining, SDM 2017 |
Editors | Nitesh Chawla, Wei Wang |
Publisher | Society for Industrial and Applied Mathematics Publications |
Pages | 216-223 |
Number of pages | 8 |
ISBN (Electronic) | 9781611974874 |
State | Published - 2017 |
Event | 17th SIAM International Conference on Data Mining, SDM 2017 - Houston, United States |
Duration | Apr 27 2017 → Apr 29 2017 |
Publication series
Name | Proceedings of the 17th SIAM International Conference on Data Mining, SDM 2017 |
---|---|
Other
Other | 17th SIAM International Conference on Data Mining, SDM 2017 |
---|---|
Country/Territory | United States |
City | Houston |
Period | 4/27/17 → 4/29/17 |
Bibliographical note
Funding Information: This material is based upon work supported by the National Science Foundation under Grants No. IIS-1247489 and IIS-1247632. This work is also partially supported by an IBM Faculty Award and a Google Focused Research Award. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or other funding parties.
Publisher Copyright:
Copyright © by SIAM.