Human habenula segmentation using myelin content

Joo won Kim, Thomas P. Naidich, Benjamin A. Ely, Essa Yacoub, Federico De Martino, Mary E. Fowkes, Wayne K. Goodman, Junqian Xu

Research output: Contribution to journal › Article › peer-review

21 Scopus citations

Abstract

The habenula consists of a pair of small epithalamic nuclei located adjacent to the dorsomedial thalamus. Despite increasing interest in imaging the habenula due to its critical role in mediating subcortical reward circuitry, in vivo neuroimaging research targeting the human habenula has been limited by its small size and low anatomical contrast. In this work, we have developed an objective semi-automated habenula segmentation scheme consisting of histogram-based thresholding, region growing, geometric constraints, and partial volume estimation steps. This segmentation scheme was designed around in vivo 3 T myelin-sensitive images, generated by taking the ratio of high-resolution T1w over T2w images. Due to the high myelin content of the habenula, the contrast-to-noise ratio with the thalamus in the in vivo 3 T myelin-sensitive images was significantly higher than the T1w or T2w images alone. In addition, in vivo 7 T myelin-sensitive images (T1w over T2*w ratio images) and ex vivo proton density-weighted images, along with histological evidence from the literature, strongly corroborated the in vivo 3 T habenula myelin contrast used in the proposed segmentation scheme. The proposed segmentation scheme represents a step toward a scalable approach for objective segmentation of the habenula suitable for both morphological evaluation and habenula seed region selection in functional and diffusion MRI applications.
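The core operations described in the abstract — forming a myelin-sensitive image as the voxel-wise T1w/T2w ratio, then growing a region from a seed subject to an intensity threshold — can be sketched as follows. This is a minimal illustration of those two steps only, not the authors' implementation: the function names, the fixed threshold, and the simple 6-connected growth rule are assumptions for demonstration, and the published scheme additionally applies histogram-based threshold selection, geometric constraints, and partial volume estimation.

```python
import numpy as np
from collections import deque

def myelin_map(t1w, t2w, eps=1e-6):
    """Myelin-sensitive image as the voxel-wise ratio of T1w over T2w.

    eps guards against division by zero in background voxels
    (a simplification; the paper's preprocessing is more involved).
    """
    return t1w / np.maximum(t2w, eps)

def region_grow(image, seed, threshold):
    """Grow a 6-connected region from `seed`, keeping voxels whose
    intensity is at or above `threshold`.

    Illustrative only: the published scheme selects the threshold from
    an intensity histogram and constrains growth geometrically.
    """
    mask = np.zeros(image.shape, dtype=bool)
    if image[seed] < threshold:
        return mask                      # seed itself fails the criterion
    mask[seed] = True
    queue = deque([seed])
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        voxel = queue.popleft()
        for offset in neighbors:
            n = tuple(voxel[i] + offset[i] for i in range(3))
            inside = all(0 <= n[i] < image.shape[i] for i in range(3))
            if inside and not mask[n] and image[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask
```

In practice the seed would be placed in the high-myelin habenula, where the T1w/T2w ratio gives the elevated contrast against the adjacent thalamus that the paper reports.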

Original language: English (US)
Pages (from-to): 145-156
Number of pages: 12
Journal: NeuroImage
Volume: 130
DOIs
State: Published - Apr 15 2016

Bibliographical note

Funding Information:
Data were provided (in part) by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research and by the McDonnell Center for Systems Neuroscience at Washington University. We would like to thank the Brain Imaging Center (BIC) at Icahn School of Medicine at Mount Sinai for purchasing the HCP Connectome-in-a-Box data.

Funding Information:
FDM was supported by NWO VIDI grant 864-13-012; WG was supported by Simons Foundation grant SFARI 277909 for lateral Hb DBS in treatment-resistant depression (ClinicalTrials.gov identifier: NCT01798407); JX was supported by Radiological Society of North America (RSNA) research scholar grant RSCH1328 and Brain and Behavior Research Foundation (BBRF) young investigator grant NARSAD22324.

Conflict of interest:
The authors declare no competing financial interests.

Publisher Copyright:
© 2016 Elsevier Inc.

Keywords

  • Geometric constraints
  • Habenula
  • Myelin map
  • Myelin-sensitive image
  • Partial volume estimation
  • Region growing
  • Subcortical segmentation
