We present a complete end-to-end pipeline for generating dynamically relightable virtual objects captured with a single handheld consumer-grade RGB-D camera. The proposed system plausibly replicates the geometry, texture, illumination, and surface reflectance properties of non-Lambertian objects, making them suitable for integration into virtual reality scenes with arbitrary illumination. First, the geometry of the target object is reconstructed from the depth images captured by the handheld camera. To obtain nearly drift-free texture maps for the virtual object, a set of selected images from the original color stream is used for camera pose optimization. Our approach then separates these images into diffuse (view-independent) and specular (view-dependent) components using low-rank decomposition. The lighting conditions during capture and the reflectance properties of the virtual object are subsequently estimated from the computed specular maps. By combining these parameters with the diffuse texture, the reconstructed model can be rendered in real-time virtual reality scenes that plausibly replicate the real-world illumination at the point of capture. Furthermore, these objects can interact with arbitrary virtual lights that vary in direction, intensity, and color.
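The diffuse/specular separation step can be illustrated with a minimal sketch. The intuition is that, once per-view texture samples of the same surface points are stacked into a matrix, the view-independent diffuse shading is approximately low-rank across views, while sparse view-dependent highlights fall into the residual. The sketch below uses a rank-1 truncated SVD on synthetic data as a stand-in for the decomposition; the variable names and the plain-SVD formulation are illustrative assumptions, not the paper's actual algorithm (a robust low-rank method would be used in practice).

```python
import numpy as np

# Illustrative synthetic data (not the paper's method): rows are views,
# columns are surface points observed in every view.
rng = np.random.default_rng(0)
n_views, n_points = 8, 200

# View-independent diffuse base: identical across views, hence rank-1.
diffuse = np.tile(rng.uniform(0.2, 0.8, n_points), (n_views, 1))

# Sparse view-dependent specular highlights (~5% of samples).
specular = np.zeros((n_views, n_points))
mask = rng.random((n_views, n_points)) < 0.05
specular[mask] = rng.uniform(0.5, 1.0, mask.sum())

M = diffuse + specular  # observed per-view intensities

# Rank-1 truncated SVD: low-rank part approximates the diffuse component,
# the residual concentrates the specular highlights.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])
residual = M - low_rank
```

After the decomposition, `low_rank` plays the role of the view-independent texture and `residual` the specular maps from which lighting and reflectance would be estimated.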
| Original language | English (US) |
| Title of host publication | Proceedings - VRST 2020 |
| Subtitle of host publication | ACM Symposium on Virtual Reality Software and Technology |
| Editors | Stephen N. Spencer |
| Publisher | Association for Computing Machinery |
| State | Published - Nov 1, 2020 |
| Event | 26th ACM Symposium on Virtual Reality Software and Technology, VRST 2020 - Virtual, Online, Canada (Nov 1-4, 2020) |
| Series | Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST |
Bibliographical note: Publisher Copyright © 2020 ACM.
Keywords:
- content creation
- virtual reality