TY - JOUR
T1 - Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U-Net
AU - Lin, Mingquan
AU - Momin, Shadab
AU - Lei, Yang
AU - Wang, Hesheng
AU - Curran, Walter J.
AU - Liu, Tian
AU - Yang, Xiaofeng
N1 - Publisher Copyright:
© 2021 American Association of Physicists in Medicine
PY - 2021/8
Y1 - 2021/8
AB - Purpose: Owing to the histologic complexity of brain tumors, their diagnosis requires multiple imaging modalities to obtain the structural information needed to properly delineate brain tumor subregions. In the current clinical workflow, physicians typically delineate brain tumor subregions slice by slice, which is time consuming and susceptible to intra- and inter-rater variability that can lead to misclassification. To address this issue, this study aims to develop an automatic deep-learning method for segmenting brain tumors in MR images. Method: We develop a 3D context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multiscale contextual information for dense segmentation is proposed. This approach enlarges the effective receptive field of convolutional neural networks (CNNs), which, in turn, improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset; the BraTS 2020 testing dataset, obtained via the BraTS online platform, served as a hold-out test set. The BraTS evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of the proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy, measured by the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). The tumor volumes generated by the proposed method were compared with manually contoured volumes via Bland–Altman plots and Pearson correlation analysis. Results: The proposed method achieved DSCs of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and HD95 values of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved segmentation accuracies comparable to, or significantly (p < 0.05) better than, those of the other two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and the manually contoured volumes. Conclusion: The overall qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions, which could further facilitate the brain tumor radiotherapy workflow.
KW - U-Net
KW - brain tumor segmentation
KW - deep learning
KW - subregion
UR - http://www.scopus.com/inward/record.url?scp=85109366732&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85109366732&partnerID=8YFLogxK
U2 - 10.1002/mp.15032
DO - 10.1002/mp.15032
M3 - Article
C2 - 34101845
AN - SCOPUS:85109366732
SN - 0094-2405
VL - 48
SP - 4365
EP - 4374
JO - Medical Physics
JF - Medical Physics
IS - 8
ER -
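
Note: the abstract above reports segmentation accuracy via the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95). As an illustrative aid only, and not the authors' code, the following minimal Python sketch shows one common way these two metrics are computed for a pair of binary 3D masks; it assumes only NumPy and SciPy, and the function names and voxel-spacing parameter are illustrative assumptions.

    # Illustrative sketch (not from the cited paper): DSC and HD95 between
    # a predicted mask and a ground-truth mask for one tumor subregion.
    import numpy as np
    from scipy import ndimage

    def dice(pred, gt):
        """DSC = 2|A and B| / (|A| + |B|) for two boolean masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        denom = pred.sum() + gt.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

    def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
        """95th percentile of symmetric surface distances, in mm given voxel spacing."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        # Surface voxels: each mask minus its morphological erosion.
        surf_p = pred ^ ndimage.binary_erosion(pred)
        surf_g = gt ^ ndimage.binary_erosion(gt)
        # Distance from every voxel to the nearest surface voxel of the other mask.
        dt_g = ndimage.distance_transform_edt(~surf_g, sampling=spacing)
        dt_p = ndimage.distance_transform_edt(~surf_p, sampling=spacing)
        dists = np.concatenate([dt_g[surf_p], dt_p[surf_g]])
        return np.percentile(dists, 95)

Under these assumptions, calling dice(pred_wt, gt_wt) and hd95(pred_wt, gt_wt, spacing) on each case and averaging over cases yields per-region summary values of the kind reported in the abstract (e.g., for WT, TC, and ET).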