We develop and evaluate three interfaces for video search of articulated objects, specifically humans performing common actions. The three interfaces, (1) a freehand sketch interface with motion cues (e.g., arrows), (2) an articulated human stick figure with motion cues, and (3) a keyframe interface, were designed to let users quickly generate motion-based queries. We performed both quantitative and qualitative analyses of the interfaces through a formal user study, measuring the accuracy and speed of user input and asking the users to complete a free-response questionnaire. Our results indicate that the constrained interfaces outperform the freehand sketch-based interface in both search accuracy and query completion time. Additionally, users expressed strong preferences for the search interfaces containing pre-defined models, and the queries generated with those interfaces were rated higher in terms of semantic match to the query concept.
Publisher Copyright:
© 2015 Elsevier Ltd. All rights reserved.
- Action recognition
- Sketch-based interface
- Video search