HUMBI: A Large Multiview Dataset of Human Body Expressions

Zhixuan Yu, Jae Shin Yoon, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, Hyun Soo Park

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper presents a new large multiview dataset called HUMBI for human body expressions with natural clothing. The goal of HUMBI is to facilitate modeling view-specific appearance and geometry of gaze, face, hand, body, and garment from assorted people. 107 synchronized HD cameras are used to capture 772 distinctive subjects across gender, ethnicity, age, and physical condition. With the multiview image streams, we reconstruct high fidelity body expressions using 3D mesh models, which allows representing view-specific appearance using their canonical atlas. We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model and is complementary to the existing datasets of human body expressions with limited views and subjects such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio datasets.
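The idea of representing view-specific appearance in a canonical atlas can be illustrated with a minimal sketch. The Python snippet below is not the authors' released pipeline; it assumes a pinhole camera model, per-vertex UV coordinates, and nearest-neighbor splatting, omits occlusion handling, and uses made-up function names purely for illustration.

```python
import numpy as np

def project_points(vertices, K, R, t):
    """Project 3D mesh vertices (N, 3) into one camera view.

    K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation.
    Returns (N, 2) pixel coordinates (pinhole model, no distortion).
    """
    cam = vertices @ R.T + t           # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def splat_to_atlas(image, pixels, uv_coords, atlas_size=256):
    """Accumulate per-vertex colors from one view into a canonical UV atlas.

    pixels: (N, 2) projected vertex positions in the image.
    uv_coords: (N, 2) per-vertex texture coordinates in [0, 1].
    Occlusion/visibility checks are intentionally omitted in this sketch.
    """
    atlas = np.zeros((atlas_size, atlas_size, 3))
    counts = np.zeros((atlas_size, atlas_size, 1))
    h, w = image.shape[:2]
    for (px, py), (u, v) in zip(pixels, uv_coords):
        x, y = int(round(px)), int(round(py))
        if 0 <= x < w and 0 <= y < h:                      # vertex falls inside this view
            au = int(u * (atlas_size - 1))
            av = int(v * (atlas_size - 1))
            atlas[av, au] += image[y, x]
            counts[av, au] += 1
    return atlas / np.maximum(counts, 1)                   # average overlapping samples
```

In a full multiview setting, one would presumably repeat this over all synchronized cameras and weight the samples by visibility and viewing angle, yielding one appearance atlas per view or per subject; the sketch above only shows the geometric mapping from image pixels to the canonical texture space.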

Original language: English (US)
Article number: 9156401
Pages (from-to): 2987-2997
Number of pages: 11
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOIs
State: Published - 2020
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: Jun 14 2020 - Jun 19 2020

Bibliographical note

Funding Information:
This work was partially supported by National Science Foundation (No.1846031 and 1919965), National Research Foundation of Korea, and Ministry of Science and ICT of Korea (No. 2020R1C1C1015260).

Publisher Copyright:
© 2020 IEEE.
