Multi-tasking deep network for tinnitus classification and severity prediction from multimodal structural MR images

Chieh Te Lin, Sanjay Ghosh, Leighton B. Hinkley, Corby L. Dale, Ana C.S. Souza, Jennifer H. Sabes, Christopher P. Hess, Meredith E. Adams, Steven W. Cheung, Srikantan S. Nagarajan

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Objective: Subjective tinnitus is an auditory phantom perceptual disorder without an objective biomarker. Fast and efficient diagnostic tools will advance clinical practice by detecting or confirming the condition, tracking change in severity, and monitoring treatment response. Motivated by evidence of subtle anatomical, morphological, or functional information in magnetic resonance images of the brain, we examine data-driven machine learning methods for joint tinnitus classification (tinnitus or no tinnitus) and tinnitus severity prediction. Approach: We propose a deep multi-task multimodal framework for tinnitus classification and severity prediction using structural MRI (sMRI) data. To leverage complementary information in multimodal neuroimaging data, we integrate two modalities of three-dimensional sMRI—T1 weighted (T1w) and T2 weighted (T2w) images. To explore the key components in the MR images that drove task performance, we segment both T1w and T2w images into three different components—cerebrospinal fluid, grey matter, and white matter—and evaluate the performance of each segmented image. Main results: Results demonstrate that our multimodal framework capitalizes on the information across both modalities (T1w and T2w) for the joint task of tinnitus classification and severity prediction. Significance: Our model outperforms existing learning-based and conventional methods in terms of accuracy, sensitivity, specificity, and negative predictive value.
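The core idea of the multi-task setup—a shared feature extractor feeding two heads, one for classification (tinnitus vs. no tinnitus) and one for severity regression, trained under a combined loss—can be illustrated with a minimal NumPy sketch. This is a schematic toy, not the paper's architecture: the encoder here is a single random linear projection with ReLU standing in for a 3D convolutional backbone, and the input volumes, weight shapes, and loss weighting `alpha` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # Shared feature extractor: flatten each volume and apply a
    # linear projection + ReLU (stand-in for a 3D conv backbone).
    h = x.reshape(x.shape[0], -1) @ W
    return np.maximum(h, 0.0)

def forward(x, params):
    h = shared_encoder(x, params["W_shared"])
    logit = h @ params["w_cls"]     # classification head: tinnitus vs. control
    severity = h @ params["w_reg"]  # regression head: severity score
    return logit, severity

def joint_loss(logit, severity, y_cls, y_sev, alpha=1.0):
    # Multi-task objective: binary cross-entropy for the class label
    # plus alpha-weighted mean squared error for severity.
    p = 1.0 / (1.0 + np.exp(-logit))
    bce = -np.mean(y_cls * np.log(p + 1e-9) + (1 - y_cls) * np.log(1 - p + 1e-9))
    mse = np.mean((severity - y_sev) ** 2)
    return bce + alpha * mse

# Toy batch: 4 synthetic 8x8x8 "volumes" standing in for T1w/T2w patches.
x = rng.standard_normal((4, 8, 8, 8))
params = {
    "W_shared": rng.standard_normal((512, 16)) * 0.05,  # 512 = 8*8*8 flattened
    "w_cls": rng.standard_normal(16) * 0.1,
    "w_reg": rng.standard_normal(16) * 0.1,
}
y_cls = np.array([1, 0, 1, 0], dtype=float)   # hypothetical class labels
y_sev = np.array([40.0, 0.0, 62.0, 0.0])      # hypothetical severity scores

logit, sev = forward(x, params)
loss = joint_loss(logit, sev, y_cls, y_sev)
```

Because both heads share the encoder, gradients from the severity task regularize the classification features and vice versa—the usual motivation for multi-task learning in small neuroimaging cohorts.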

Original language: English (US)
Article number: 016017
Journal: Journal of Neural Engineering
Volume: 20
Issue number: 1
DOIs
State: Published - Feb 1 2023

Bibliographical note

Funding Information:
The authors would like to thank Anne Findlay and all members and collaborators of the Biomagnetic Imaging Laboratory at University of California San Francisco (UCSF) for their support. The authors also extend thanks to Ali Stockness and the research team at University of Minnesota (UMN). This work was supported in part by Department of Defense CDMRP Awards: W81XWH-13-1-0494, W81XWH-18-1-0741; NIH grants: R01NS100440, R01AG062196, UCOP-MRP-17-454755, and an industry research contract from Ricoh MEG Inc.

Publisher Copyright:
© 2023 The Author(s). Published by IOP Publishing Ltd.

Keywords

  • deep learning
  • neuroimaging biomarker
  • regression and classification

PubMed: MeSH publication types

  • Journal Article
  • Research Support, U.S. Gov't, Non-P.H.S.
  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

