HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge

Jae Shin Yoon, Zhixuan Yu, Jaesik Park, Hyun Soo Park

Research output: Contribution to journal › Article › peer-review



This paper presents HUMBI, a new large multiview dataset of human body expressions with natural clothing. HUMBI is designed to facilitate modeling the view-specific appearance and geometry of five primary body signals (gaze, face, hand, body, and garment) across diverse people. 107 synchronized HD cameras capture 772 distinct subjects spanning gender, ethnicity, age, and style. From the multiview image streams, we reconstruct the geometry of body expressions using 3D mesh models, which enables representing view-specific appearance. We demonstrate that HUMBI is highly effective for learning and reconstructing a complete human model, and that it is complementary to existing datasets of human body expressions with limited views and subjects, such as MPII-Gaze, Multi-PIE, Human3.6M, and Panoptic Studio. Based on HUMBI, we formulate a new benchmark challenge, a pose-guided appearance rendering task, which aims to substantially extend photorealism in modeling diverse human expressions in 3D, a key enabling factor of authentic social telepresence. HUMBI is publicly available at

Original language: English (US)
Pages (from-to): 623-640
Number of pages: 18
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 1
State: Published - Jan 1 2023

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.


Keywords

  • 3D geometry and appearance
  • Human behavioral imaging
  • Multiview dataset

PubMed: MeSH publication types

  • Journal Article
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.


