Following the success of deep learning in a wide range of applications, neural network-based machine-learning techniques have received interest as a means of accelerating magnetic resonance imaging (MRI). A number of ideas inspired by deep-learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for both low-dose computed tomography and accelerated MRI. The additional integration of multicoil information to recover missing k-space lines in the MRI reconstruction process is studied less frequently, even though it is the de facto standard for currently used accelerated MR acquisitions. This article provides an overview of the recent machine-learning approaches that have been proposed specifically for improving parallel imaging. A general background introduction to parallel MRI is given, structured around the classical view of image- and k-space-based methods. Linear and nonlinear methods are covered, followed by a discussion of recent efforts to further improve parallel imaging using machine learning and, specifically, artificial neural networks. Image domain-based techniques that introduce improved regularizers are covered, as well as k-space-based methods, where the focus is on better interpolation strategies using neural networks. Issues and open problems are discussed, and recent efforts to produce open data sets and benchmarks for the community are examined.
Bibliographical note:
Thomas Pock (firstname.lastname@example.org) received his M.S. and Ph.D. degrees in computer engineering (Telematik) from Graz University of Technology, Austria, in 2004 and 2008, respectively. Following a postdoctoral position at the University of Bonn, Germany, he moved back to Graz University of Technology, where he became an assistant professor at the Institute for Computer Graphics and Vision. He has been a professor of computer science at Graz University of Technology since June 2014 and is a principal scientist at the Center for Vision, Automation, and Control at the Austrian Institute of Technology. His research focuses on developing mathematical models for computer vision and image processing, machine learning, and devising efficient nonsmooth optimization algorithms. In 2013, he received the Start-Preis Award from the Austrian Science Fund and the German Pattern Recognition Award from the German Association for Pattern Recognition, and, in 2014, he received a starting grant from the European Research Council.
Funding Information: We thank Hemant Aggarwal and Mathews Jacob for providing the model-based deep-learning reconstructions. This work was partially supported by the National Institutes of Health under grants R01EB024532, R00HL111410, P41EB015894, P41EB027061, and P41EB017183 and by the National Science Foundation Faculty Career Development Program under grant CCF-1651825.
© 1991-2012 IEEE.