Autonomous underwater vehicles (AUVs) rely on a variety of sensors — acoustic, inertial, and visual — for intelligent decision making. Due to its non-intrusive, passive nature and high information content, vision is an attractive sensing modality, particularly at shallower depths. However, factors such as light refraction and absorption, suspended particles in the water, and color distortion affect the quality of visual data, resulting in noisy and distorted images. AUVs that rely on visual sensing thus face difficult challenges and consequently exhibit poor performance on vision-driven tasks. This paper proposes a method to improve the quality of visual underwater scenes using Generative Adversarial Networks (GANs), with the goal of improving input to vision-driven behaviors further down the autonomy pipeline. Furthermore, we show how recently proposed methods can be used to generate a dataset for such underwater image restoration. For any visually guided underwater robot, this improvement can result in increased safety and reliability through robust visual perception. To that end, we present quantitative and qualitative results demonstrating that images corrected through the proposed approach are more visually appealing and also yield increased accuracy for a diver tracking algorithm.
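To make the adversarial setup concrete, the following is a minimal, illustrative sketch of the standard GAN objective behind such image restoration: a generator G maps a distorted underwater image toward a clean one, while a discriminator D learns to score whether an image looks clean. This is not the paper's actual architecture or loss (which the abstract does not specify); real systems use convolutional networks, whereas here G and D are toy stand-in functions so the loss computation itself is runnable.

```python
import math

def sigmoid(x):
    """Squash a score into (0, 1) for the discriminator output."""
    return 1.0 / (1.0 + math.exp(-x))

def D(image, w=0.8):
    # Toy discriminator: "realness" score of an image (list of pixel
    # intensities in [0, 1]); a real D would be a CNN classifier.
    return sigmoid(w * sum(image) / len(image))

def G(distorted, gain=1.5):
    # Toy generator: brightens the distorted input as a stand-in for
    # learned color/contrast restoration.
    return [min(1.0, p * gain) for p in distorted]

def gan_losses(clean, distorted):
    restored = G(distorted)
    # Discriminator loss: classify clean as real (1), restored as fake (0).
    d_loss = -math.log(D(clean)) - math.log(1.0 - D(restored))
    # Generator loss (non-saturating form): push D to score restored as real.
    g_loss = -math.log(D(restored))
    return d_loss, g_loss

# Hypothetical 3-pixel "images" purely for illustration.
clean = [0.9, 0.8, 0.85]
distorted = [0.3, 0.25, 0.2]
d_loss, g_loss = gan_losses(clean, distorted)
print(d_loss, g_loss)
```

In training, the two losses are minimized alternately by gradient descent over the network parameters; at convergence the restored images become hard for the discriminator to distinguish from genuinely clean ones.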
Original language: English (US)
Title of host publication: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018
Series: Proceedings - IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 7
State: Published - Sep 10 2018
Event: 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia (May 21 - May 25, 2018)
Bibliographical note: Publisher Copyright © 2018 IEEE.