IMU-Assisted Learning of Single-View Rolling Shutter Correction

Jiawei Mo, Md Jahidul Islam, Junaed Sattar

    Research output: Contribution to journal › Conference article › peer-review
    Abstract

    Rolling shutter distortion is highly undesirable for photography and computer vision algorithms (e.g., visual SLAM) because pixels are potentially captured at different times and poses. In this paper, we propose a deep neural network to predict depth and row-wise pose from a single image for rolling shutter correction. Our contribution in this work is to incorporate inertial measurement unit (IMU) data into the pose refinement process, which, compared to the state of the art, greatly enhances the pose prediction. The improved accuracy and robustness make it possible for numerous vision algorithms to use imagery captured by rolling shutter cameras and produce highly accurate results. We also extend a dataset to include real rolling shutter images, IMU data, depth maps, camera poses, and corresponding global shutter images for rolling shutter correction training. We demonstrate the efficacy of the proposed method by evaluating the performance of the Direct Sparse Odometry (DSO) algorithm on rolling shutter imagery corrected using the proposed approach. Results show marked improvement of DSO over using uncorrected imagery, validating the proposed approach.
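
    Once depth and row-wise poses are available, the correction the abstract describes reduces to a per-row reprojection: back-project each row's pixels with the predicted depth, transform them by that row's pose, and re-project into a single reference pose. The sketch below illustrates that geometry only; it is not the paper's implementation. The function name, the (R, t) row-pose parameterization, and the naive forward warping (which leaves holes a real pipeline would fill) are illustrative assumptions.

    import numpy as np

    def correct_rolling_shutter(rs_image, depth, row_poses, K):
        # Warp a rolling shutter image into the reference (row-0) pose,
        # approximating a global shutter view via forward warping.
        # rs_image : (H, W) or (H, W, 3) rolling shutter image
        # depth    : (H, W) predicted per-pixel depth
        # row_poses: length-H list of (R, t), the pose of row r's capture
        #            time relative to the reference frame (R: 3x3, t: (3,))
        # K        : (3, 3) camera intrinsics
        # (All names and shapes are assumptions for this sketch.)
        H, W = depth.shape
        K_inv = np.linalg.inv(K)
        out = np.zeros_like(rs_image)
        cols = np.arange(W)

        for r in range(H):
            R, t = row_poses[r]
            # Back-project row r's pixels using the predicted depth.
            pix = np.stack([cols, np.full(W, r), np.ones(W)])  # (3, W) homogeneous
            pts = (K_inv @ pix) * depth[r]                     # 3D points, row-r frame
            # Move the points into the reference (global shutter) frame.
            pts = R @ pts + t[:, None]
            # Project into the corrected image plane.
            proj = K @ pts
            u = np.round(proj[0] / proj[2]).astype(int)
            v = np.round(proj[1] / proj[2]).astype(int)
            ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (proj[2] > 0)
            out[v[ok], u[ok]] = rs_image[r, cols[ok]]
        return out

    Forward warping is used here purely for brevity; inverse warping with interpolation, as is common in correction pipelines, avoids holes but requires inverting the row-pose mapping.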

    Original language: English (US)
    Pages (from-to): 861-870
    Number of pages: 10
    Journal: Proceedings of Machine Learning Research
    Volume: 164
    State: Published - 2021
    Event: 5th Conference on Robot Learning, CoRL 2021 - London, United Kingdom
    Duration: Nov 8, 2021 - Nov 11, 2021

    Bibliographical note

    Publisher Copyright:
    © 2021 Proceedings of Machine Learning Research. All rights reserved.

    Keywords

    • IMU
    • Learning
    • Rolling Shutter Correction