In this paper, we present the first large-scale dataset for semantic Segmentation of Underwater IMagery (SUIM). It contains over 1500 images with pixel annotations for eight object categories: fish (vertebrates), reefs (invertebrates), aquatic plants, wrecks/ruins, human divers, robots, and sea-floor, with the background waterbody as the eighth category. The images have been rigorously collected during oceanic explorations and human-robot collaborative experiments, and annotated by human participants. We also present a comprehensive benchmark evaluation of several state-of-the-art semantic segmentation approaches based on standard performance metrics. Additionally, we present SUIM-Net, a fully-convolutional deep residual model that balances the trade-off between performance and computational efficiency. It offers competitive performance while ensuring fast end-to-end inference, which is essential for its use in the autonomy pipeline by visually-guided underwater robots. In particular, we demonstrate its usability benefits for visual servoing, saliency prediction, and detailed scene understanding. With a variety of use cases, the proposed model and benchmark dataset open up promising opportunities for future research in underwater robot vision.
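The abstract refers to a benchmark evaluation "based on standard performance metrics"; for pixel-level segmentation these are typically per-class intersection-over-union (IoU) and the F-score (Dice coefficient). The sketch below is a minimal, illustrative example of computing those metrics from integer-labeled masks. It assumes masks are NumPy arrays with values in [0, 8); the function name and class count here are illustrative and are not taken from the SUIM codebase.

```python
import numpy as np

# Illustrative class count: the SUIM abstract describes eight pixel categories.
NUM_CLASSES = 8

def per_class_iou_and_f(gt, pred, num_classes=NUM_CLASSES):
    """Compute per-class IoU and F-score from integer-labeled masks.

    gt, pred: 2-D integer arrays of the same shape, values in [0, num_classes).
    Returns two length-num_classes arrays; classes absent from both masks
    are left as NaN so they can be skipped when averaging.
    """
    iou = np.full(num_classes, np.nan)
    f_score = np.full(num_classes, np.nan)
    for c in range(num_classes):
        gt_c = (gt == c)
        pred_c = (pred == c)
        inter = np.logical_and(gt_c, pred_c).sum()
        union = np.logical_or(gt_c, pred_c).sum()
        if union == 0:  # class not present in either mask
            continue
        iou[c] = inter / union
        # F-score (Dice): 2*|A ∩ B| / (|A| + |B|)
        f_score[c] = 2.0 * inter / (gt_c.sum() + pred_c.sum())
    return iou, f_score

if __name__ == "__main__":
    # Placeholder random masks, not SUIM annotations.
    rng = np.random.default_rng(0)
    gt = rng.integers(0, NUM_CLASSES, size=(240, 320))
    pred = rng.integers(0, NUM_CLASSES, size=(240, 320))
    iou, f = per_class_iou_and_f(gt, pred)
    print("mIoU:", np.nanmean(iou), "mean F:", np.nanmean(f))
```

Averaging the per-class scores (ignoring NaNs) over a test set yields the mIoU and mean F-score figures commonly reported in such benchmarks.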
Original language: English (US)
Title of host publication: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
State: Published - Oct 24 2020
Event: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020 - Las Vegas, United States (duration: Oct 24 2020 → Jan 24 2021)
Publication series: IEEE International Conference on Intelligent Robots and Systems
Bibliographical note
Funding information: This work was supported by the National Science Foundation grant IIS-#1845364, the Doctoral Dissertation Fellowship (DDF) at the University of Minnesota, and the Minnesota Robotics Institute (MnRI). We are grateful to the Bellairs Research Institute of Barbados for the field trial venue, and to the Mobile Robotics Lab of McGill University for data and resources.
© 2020 IEEE.