TY - JOUR
T1 - Shadow Compensation for Synthetic Aperture Radar Target Classification by Dual Parallel Generative Adversarial Network
AU - Zhu, Hongliang
AU - Leung, Rocky
AU - Hong, Minyi
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2020/8
Y1 - 2020/8
N2 - Owing to the incident angle of the synthetic aperture radar (SAR) electromagnetic wave, parts of a ground target are typically missing from the raw SAR image and appear instead as an area of shadow. In recent years, most deep learning methods for SAR target classification have been applied only to raw SAR images and require a large number of training images to achieve good results. To address this problem, we propose a novel method that reconstructs the complete target profile for SAR target classification by merging two raw SAR images captured at opposite azimuth angles into one new image. Then, a dual parallel generative adversarial network is proposed to extend the fused SAR image dataset. Finally, we construct a convolutional neural network trained on the extended fused-image dataset, which outputs a label for each ground target. Experimental results show that the whole network framework, evaluated on the MSTAR dataset, achieves an average classification accuracy of 99.93% with far less data than state-of-the-art methods, a notable advance for SAR target classification with limited labeled data.
AB - Owing to the incident angle of the synthetic aperture radar (SAR) electromagnetic wave, parts of a ground target are typically missing from the raw SAR image and appear instead as an area of shadow. In recent years, most deep learning methods for SAR target classification have been applied only to raw SAR images and require a large number of training images to achieve good results. To address this problem, we propose a novel method that reconstructs the complete target profile for SAR target classification by merging two raw SAR images captured at opposite azimuth angles into one new image. Then, a dual parallel generative adversarial network is proposed to extend the fused SAR image dataset. Finally, we construct a convolutional neural network trained on the extended fused-image dataset, which outputs a label for each ground target. Experimental results show that the whole network framework, evaluated on the MSTAR dataset, achieves an average classification accuracy of 99.93% with far less data than state-of-the-art methods, a notable advance for SAR target classification with limited labeled data.
KW - generative adversarial network (GAN)
KW - MSTAR
KW - sensor signal processing
KW - shadow
KW - synthetic aperture radar (SAR)
KW - target classification
UR - http://www.scopus.com/inward/record.url?scp=85183508307&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85183508307&partnerID=8YFLogxK
U2 - 10.1109/LSENS.2020.3009179
DO - 10.1109/LSENS.2020.3009179
M3 - Article
AN - SCOPUS:85183508307
SN - 2475-1472
VL - 4
JO - IEEE Sensors Letters
JF - IEEE Sensors Letters
IS - 8
M1 - 9140351
ER -