Vision-aided inertial navigation for spacecraft entry, descent, and landing

Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis, Andrew E. Johnson, Adnan Ansar, Larry Matthies

Research output: Contribution to journal › Article › peer-review

265 Scopus citations


In this paper, we present the vision-aided inertial navigation (VISINAV) algorithm that enables precision planetary landing. The vision front-end of the VISINAV system extracts 2-D-to-3-D correspondences between descent images and a surface map (mapped landmarks), as well as 2-D-to-2-D feature tracks through a sequence of descent images (opportunistic features). An extended Kalman filter (EKF) tightly integrates both types of visual feature observations with measurements from an inertial measurement unit. The filter computes accurate estimates of the lander's terrain-relative position, attitude, and velocity, in a resource-adaptive and hence real-time capable fashion. In addition to the technical analysis of the algorithm, the paper presents validation results from a sounding-rocket test flight, showing estimation errors of only 0.16 m/s for velocity and 6.4 m for position at touchdown. These results vastly improve on the current state of the art for terminal descent navigation without visual updates, and meet the requirements of future planetary exploration missions.
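The abstract's central mechanism is an EKF that fuses visual feature observations with inertial measurements. As a minimal, generic sketch of how such a measurement update works (this is a textbook EKF update step, not the authors' VISINAV implementation; the toy state, Jacobian, and numbers below are purely illustrative):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.

    x : (n,)  prior state estimate      P : (n, n) prior covariance
    z : (m,)  measurement               h : (m,)   predicted measurement h(x)
    H : (m, n) measurement Jacobian     R : (m, m) measurement noise covariance
    """
    y = z - h                                    # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ y                            # corrected state
    I_KH = np.eye(len(x)) - K @ H
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T      # Joseph form, keeps P symmetric PSD
    return x_new, P_new

# Toy example: 1-D [position, velocity] state with a position-only measurement,
# standing in for a single mapped-landmark position fix.
x = np.array([0.0, 1.0])
P = np.diag([4.0, 1.0])
H = np.array([[1.0, 0.0]])       # measurement picks out position
R = np.array([[0.25]])
z = np.array([1.2])
x, P = ekf_update(x, P, z, H @ x, H, R)
```

After the update, the position estimate moves toward the measurement and its variance shrinks; the velocity estimate is unchanged here because the toy Jacobian does not couple it to the measurement (in the paper's filter, visual observations constrain the full pose and velocity through the IMU propagation model).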

Original language: English (US)
Pages (from-to): 264-280
Number of pages: 17
Journal: IEEE Transactions on Robotics
Issue number: 2
State: Published - 2009

Bibliographical note

Funding Information:
Manuscript received January 2, 2008; revised June 23, 2008. Current version published April 3, 2009. This paper was recommended for publication by Associate Editor K. Iagnemma and Editor W. J.-P. Laumond upon evaluation of the reviewers’ comments. This work was supported in part by the NASA Mars Technology Program under Grant MTP-1263201, by the NASA ST9 New Millennium Program–System Flight Validation Projects, and by the National Science Foundation under Grant IIS-0643680. Part of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This paper was presented in part at the Proceedings of Robotics: Science and Systems, June 26–30, 2007, Atlanta, GA, in part at the AIAA Infotech@Aerospace Conference, May 7–10, 2007, and in part at the Proceedings of the NASA Science Technology Conference (NSTC’07), College Park, MD, June 2007.


  • Entry, descent, and landing (EDL)
  • Localization
  • Sensor fusion
  • Space robotics
  • Vision-aided inertial navigation


