Abstract
Registering clothes from 4D scans with vertex-accurate correspondence is challenging, yet important for dynamic appearance modeling and physics parameter estimation from real-world data. Previous methods either rely on texture information, which is not always reliable, or achieve only coarse-level alignment. In this work, we present a novel approach that enables accurate surface registration of texture-less clothes under large deformations. Our key idea is to leverage a shape prior learned from pre-captured clothing using diffusion models. We further propose a multi-stage guidance scheme based on learned functional maps, which stabilizes registration under large deformations, even when they differ significantly from the training data. Experiments on high-fidelity real captured clothes show that the proposed diffusion-based approach generalizes better than surface registration with VAE- or PCA-based priors, outperforming both optimization-based and learning-based non-rigid registration methods in both interpolation and extrapolation tests.
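For intuition, prior-guided non-rigid registration of the kind the abstract describes can be viewed as gradient descent on a data-fitting term plus a learned shape-prior term. The sketch below is a hypothetical, simplified stand-in, not the paper's method: the learned diffusion prior is replaced by a toy pull toward a template shape, and the data term is a one-sided chamfer distance to the scan. All names (`chamfer_grad`, `toy_prior_grad`, `register`) are illustrative assumptions.

```python
import numpy as np

def chamfer_grad(verts, scan):
    # Gradient of a one-sided chamfer term: pull each vertex
    # toward its nearest point in the scan point cloud.
    d = verts[:, None, :] - scan[None, :, :]      # (V, S, 3) pairwise offsets
    idx = np.argmin((d ** 2).sum(-1), axis=1)     # nearest scan point per vertex
    return verts - scan[idx]                      # (V, 3)

def toy_prior_grad(verts, template):
    # Toy stand-in for a learned shape prior (e.g. a diffusion model's
    # score): here, simply pull vertices toward a template shape.
    return verts - template

def register(verts, scan, template, steps=200, lr=0.05, w_prior=0.1):
    """Gradient-based registration: data term plus weighted prior term."""
    v = verts.copy()
    for _ in range(steps):
        g = chamfer_grad(v, scan) + w_prior * toy_prior_grad(v, template)
        v -= lr * g
    return v
```

In the paper's actual setting, the prior gradient would come from the learned diffusion model over clothing shapes, and the guidance would proceed in multiple stages informed by learned functional maps rather than a single fixed weighting.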
Original language | English (US) |
---|---|
Title of host publication | Proceedings - 2024 International Conference on 3D Vision, 3DV 2024 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 790-799 |
Number of pages | 10 |
ISBN (Electronic) | 9798350362459 |
DOIs | |
State | Published - 2024 |
Event | 11th International Conference on 3D Vision, 3DV 2024 - Davos, Switzerland. Duration: Mar 18, 2024 → Mar 21, 2024
Publication series
Name | Proceedings - 2024 International Conference on 3D Vision, 3DV 2024 |
---|
Conference
Conference | 11th International Conference on 3D Vision, 3DV 2024 |
---|---|
Country/Territory | Switzerland |
City | Davos |
Period | 3/18/24 → 3/21/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- A diffusion model for 3D surface
- Non-rigid registration
- Virtual clothing