Abstract
In this paper, we present a motion-based robotic communication framework that enables non-verbal communication between autonomous underwater vehicles (AUVs) and human divers. We design a gestural language for AUV-to-AUV communication that can be easily understood by divers observing the conversation, unlike typical radio-frequency, light-based, or audio-based AUV communication. To allow AUVs to visually understand a gesture from another AUV, we propose a deep network (RRCommNet) that exploits a self-attention mechanism to recognize each message by extracting maximally discriminative spatio-temporal features. We train this network on diverse simulated and real-world data. Our experimental evaluations, both in simulation and in closed-water robot trials, demonstrate that the proposed RRCommNet architecture deciphers gesture-based messages with an average accuracy of 88-94% on simulated data and 73-83% on real data, depending on the model version used. Further, through a message transcription study with human participants, we also show that the proposed language can be understood by humans with an overall transcription accuracy of 88%. Finally, we discuss the inference runtime of RRCommNet on embedded GPU hardware, for real-time use on board AUVs in the field.
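The core mechanism the abstract describes is self-attention over a temporal sequence of per-frame features, which lets the network weight the most discriminative moments of a gesture. The sketch below is a minimal, hypothetical illustration of scaled dot-product self-attention in NumPy; it is not the RRCommNet architecture, and the feature dimensions and function name are illustrative assumptions only.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a (T, d) sequence.

    x: T frames of d-dimensional pose/gesture features (toy example,
    not the actual RRCommNet input representation).
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (T, T) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over time steps
    return weights @ x                             # (T, d) attended features

# Toy gesture clip: 5 frames, 4-dimensional features.
rng = np.random.default_rng(0)
clip = rng.standard_normal((5, 4))
attended = self_attention(clip)
print(attended.shape)  # (5, 4)
```

In a full model, learned query/key/value projections and a classification head over the attended features would follow; this sketch only shows how attention re-weights frames within a clip.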
Original language | English (US) |
---|---|
Title of host publication | IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 3085-3092 |
Number of pages | 8 |
ISBN (Electronic) | 9781665479271 |
DOIs | |
State | Published - 2022 |
Event | 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022 - Kyoto, Japan |
Duration | Oct 23 2022 → Oct 27 2022 |
Publication series
Name | IEEE International Conference on Intelligent Robots and Systems |
---|---|
Volume | 2022-October |
ISSN (Print) | 2153-0858 |
ISSN (Electronic) | 2153-0866 |
Conference
Conference | 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022 |
---|---|
Country/Territory | Japan |
City | Kyoto |
Period | 10/23/22 → 10/27/22 |
Bibliographical note
Funding Information: This work was supported by the US National Science Foundation award #00074041, a UMII-MnDRIVE Fellowship, and the MnRI Seed Grant. The authors are with the Department of Computer Science & Engineering and the Minnesota Robotics Institute, University of Minnesota, MN, USA.
Publisher Copyright:
© 2022 IEEE.