Tumbling Robot Control Using Reinforcement Learning: An Adaptive Control Policy That Transfers Well to the Real World

Andrew Schwartzwald, Matthew Tlachac, Luis Guzman, Athanasios Bacharis, Nikolaos Papanikolopoulos

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Tumbling robots are simple platforms that can traverse obstacles large relative to their size, at the cost of being difficult to control. Existing control methods exploit only a subset of the possible robot motions and assume flat terrain. Reinforcement learning (RL) allows for the development of sophisticated control schemes that can adapt to diverse environments. By utilizing domain randomization while training in simulation, a robust control policy can be learned that transfers well to the real world. In this article, we implement autonomous set-point navigation on a tumbling robot prototype and evaluate it on flat, uneven, and valley-hill terrain. Our results demonstrate that RL-based control policies can generalize well to challenging environments that were not encountered during training. The flexibility of our system demonstrates the viability of nontraditional robots for navigational tasks.
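The domain randomization the abstract refers to can be sketched as resampling the simulator's physical parameters at the start of every training episode, so the learned policy never overfits one exact model of the robot or terrain. The sketch below is purely illustrative; the parameter names and ranges are hypothetical assumptions, not taken from the paper.

```python
# Illustrative domain-randomization sketch: sample new simulator
# parameters each episode so the trained policy is robust to
# modeling error. All names and ranges are hypothetical.
import random

def randomize_sim_params(rng):
    """Sample one episode's worth of simulator parameters."""
    return {
        "ground_friction": rng.uniform(0.4, 1.2),     # hypothetical range
        "robot_mass_kg": rng.uniform(0.8, 1.2),       # hypothetical range
        "motor_torque_scale": rng.uniform(0.9, 1.1),  # hypothetical range
        "terrain_height_noise_m": rng.uniform(0.0, 0.05),
    }

rng = random.Random(0)
# One fresh parameter set per training episode:
episode_params = [randomize_sim_params(rng) for _ in range(100)]
```

In a full training loop, each sampled dictionary would configure the physics engine before the episode rolls out, and the RL algorithm would optimize the policy across all sampled variations.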

Original language: English (US)
Pages (from-to): 86-95
Number of pages: 10
Journal: IEEE Robotics and Automation Magazine
Volume: 30
Issue number: 2
DOIs
State: Published - Jun 1 2023
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1994-2011 IEEE.

