Evaluating visual query methods for articulated motion video search

Cecilia Mauceri, Evan A. Suma, Samantha Finkelstein, Richard Souvenir

Research output: Contribution to journal › Article › peer-review


Abstract

We develop and evaluate three interfaces for video search of articulated objects, specifically humans performing common actions. The three interfaces, (1) a freehand sketch interface with motion cues (e.g., arrows), (2) an articulated human stick figure with motion cues, and (3) a keyframe interface, were designed to allow users to quickly generate motion-based queries. We performed both quantitative and qualitative analyses of the interfaces through a formal user study, measuring the accuracy and speed of user input and asking the users to complete a free-response questionnaire. Our results indicate that the constrained interfaces outperform the freehand sketch-based interface in terms of both search accuracy and query completion time. Additionally, the users expressed strong preferences for the search interfaces containing pre-defined models, and the queries generated with those interfaces were rated higher in terms of semantic match to the query concept.

Original language: English (US)
Pages (from-to): 10-22
Number of pages: 13
Journal: International Journal of Human Computer Studies
Volume: 77
DOIs
State: Published - May 2015
Externally published: Yes

Keywords

  • Action recognition
  • Sketch-based interface
  • Video search
