This paper addresses the problem of autonomous quadrotor navigation within indoor spaces. In particular, we focus on the case where a visual map of the area, represented as a graph of linked images, is constructed offline (from visual and potentially inertial data collected beforehand) and used to determine visual paths for the quadrotor to follow. In addition, during the actual navigation, the quadrotor employs both wide- and short-baseline random sample consensus (RANSAC) procedures to efficiently determine its desired motion toward the next reference image and handle special motions, such as rotations in place. In particular, when the quadrotor relies only on visual observations, it uses the 5pt and 2pt algorithms in the wide- and short-baseline RANSACs, respectively. On the other hand, when information about the gravity direction is available, significant gains in speed are realized by using the 3pt+1 and 1pt+1 algorithms instead. Lastly, we introduce an adaptive optical-flow algorithm that can accurately estimate the quadrotor's horizontal velocity under adverse conditions (e.g., when flying over dark, textureless floors) by progressively using information from more parts of the images. The speed and robustness of our algorithms are evaluated experimentally using a commercial off-the-shelf quadrotor navigating in the presence of dynamic obstacles (i.e., people walking) along lengthy corridors and through tight corners, as well as across building floors via poorly lit staircases.
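The speed gains reported for the gravity-aided solvers follow from the standard RANSAC iteration bound, N = log(1 - p) / log(1 - w^s), where s is the minimal sample size: fewer points per hypothesis means exponentially fewer iterations. The sketch below is illustrative only; the confidence and inlier-ratio values are assumptions, not figures from the paper.

```python
import math

def ransac_iterations(confidence: float, inlier_ratio: float, sample_size: int) -> int:
    """Number of RANSAC hypotheses needed so that, with probability
    `confidence`, at least one drawn minimal sample is all inliers."""
    # Probability that a single random minimal sample contains only inliers.
    p_good = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_good))

# Minimal sample sizes corresponding to the solvers named in the abstract:
# 5pt (wide baseline, vision only), 3pt+1 (wide baseline, gravity-aided),
# 2pt (short baseline, vision only), 1pt+1 (short baseline, gravity-aided).
# Assumed values: 99% confidence, 50% inlier ratio (illustrative).
for s in (5, 3, 2, 1):
    print(f"sample size {s}: {ransac_iterations(0.99, 0.5, s)} iterations")
```

Because the iteration count grows roughly as w^(-s), reducing the minimal sample from 5 to 3+1 points (or from 2 to 1+1) cuts the number of hypotheses that must be generated and verified, which is the source of the speedup claimed above.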
Funding Information:
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Science Foundation [grant number IIS-1637875].
© The Author(s) 2018.
- autonomous navigation
- visual servoing