This research demo showcases the results of a novel approach for estimating the illumination and reflectance properties of objects captured with consumer-grade RGB-D cameras. The method is implemented within a fully automatic content-creation pipeline that produces photorealistic objects for real-time virtual reality scenes with dynamic lighting. The geometry of the target object is first reconstructed from depth images captured with a handheld camera. To obtain nearly drift-free texture maps of the virtual object, a subset of frames selected from the original color stream is used for camera pose optimization. Our approach further separates these images into diffuse (view-independent) and specular (view-dependent) components using low-rank decomposition. The lighting conditions at capture time and the reflectance properties of the virtual object are subsequently estimated from the specular maps. By combining these parameters with the diffuse texture, the reconstructed objects are then rendered in a real-time virtual reality demo that plausibly replicates the real-world illumination and showcases dynamic lighting with varying direction, intensity, and color.
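The diffuse/specular separation above exploits the fact that, for aligned texture views of the same surface, the diffuse component is the same in every view (so a matrix of stacked views is approximately low-rank) while specular highlights move with the viewpoint and appear as view-dependent residuals. The sketch below illustrates this idea with a truncated-SVD low-rank approximation; the function name, the rank-1 assumption, and the use of plain SVD (rather than the paper's specific decomposition) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def separate_diffuse_specular(views, rank=1):
    """Illustrative low-rank split of aligned texture views.

    views : (n_views, n_pixels) array; each row is one texture image
            of the same surface, aligned into a common parameterization.
    Returns (L, S) with L a rank-`rank` approximation (~ the shared,
    view-independent diffuse component) and S = views - L (~ the
    view-dependent specular residual).

    NOTE: a minimal truncated-SVD sketch, not the paper's exact method.
    """
    M = np.asarray(views, dtype=float)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank part ~ diffuse
    S = M - L                                 # residual part ~ specular
    return L, S

# Synthetic example: a shared diffuse texture plus a highlight in one view.
rng = np.random.default_rng(0)
diffuse = np.tile(rng.random(20), (5, 1))   # identical in all 5 views
specular = np.zeros((5, 20))
specular[2, 3] = 0.5                        # highlight visible in view 2 only
L, S = separate_diffuse_specular(diffuse + specular)
```

In practice, robust variants (e.g. robust PCA, which models the specular part explicitly as a sparse matrix) recover the diffuse layer more faithfully than plain SVD when highlights are strong; the sketch only conveys the low-rank intuition.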