View-dependent virtual reality content from RGB-D images

Chih Fan Chen, Mark Bolas, Evan Suma Rosenberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

High-fidelity virtual content is essential for the creation of compelling and effective virtual reality (VR) experiences. However, creating photorealistic content is not easy, and handcrafting detailed 3D models can be time- and labor-intensive. Structured camera arrays, such as light stages, can scan and reconstruct high-fidelity virtual models, but their expense makes this technology impractical for most users. In this paper, we present a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade depth camera. The geometry model and the camera trajectories are automatically reconstructed and optimized from an RGB-D image sequence captured offline. Based on the head-mounted display (HMD) position, the three closest images are selected for real-time rendering and fused together to smooth the transition between viewpoints. Specular reflections and light-burst effects can also be preserved and reproduced. We confirmed that our method does not require technical background knowledge by testing our system with data captured by non-expert operators.
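The view-selection step described above — picking the three capture viewpoints nearest the HMD and blending them — can be sketched as follows. This is a minimal illustration, not the authors' exact method: the function name and the inverse-distance weighting scheme are assumptions for the sake of the example.

```python
import numpy as np

def select_and_weight_views(hmd_pos, cam_positions, k=3, eps=1e-6):
    """Pick the k capture viewpoints nearest to the HMD position and
    compute normalized blend weights (closer views weigh more)."""
    cam_positions = np.asarray(cam_positions, dtype=float)
    dists = np.linalg.norm(cam_positions - hmd_pos, axis=1)
    nearest = np.argsort(dists)[:k]       # indices of the k closest cameras
    w = 1.0 / (dists[nearest] + eps)      # inverse-distance weights
    w /= w.sum()                          # normalize so the weights sum to 1
    return nearest, w

# Example: four captured viewpoints, HMD close to the first two
cams = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
idx, weights = select_and_weight_views(np.array([0.4, 0.1, 0.0]), cams)
```

In a renderer, the returned weights would drive per-view alpha blending of the three selected images, which is what smooths the transition as the HMD moves between captured viewpoints.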

Original language: English (US)
Title of host publication: 2017 IEEE International Conference on Image Processing, ICIP 2017 - Proceedings
Publisher: IEEE Computer Society
Pages: 2931-2935
Number of pages: 5
ISBN (Electronic): 9781509021758
DOIs
State: Published - Jul 2 2017
Event: 24th IEEE International Conference on Image Processing, ICIP 2017 - Beijing, China
Duration: Sep 17 2017 – Sep 20 2017

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2017-September
ISSN (Print): 1522-4880

Other

Other: 24th IEEE International Conference on Image Processing, ICIP 2017
Country/Territory: China
City: Beijing
Period: 9/17/17 – 9/20/17

Bibliographical note

Funding Information:
This work is sponsored by the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005. Statements and opinions expressed and content included do not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

Keywords

  • Appearance Representation
  • Image-Based Rendering
  • Virtual Reality
