Automated object detection in mobile eye-tracking research: comparing manual coding with tag detection, shape detection, matching, and machine learning

Claire M. Segijn, Pernu Menheer, Garim Lee, Eunah Kim, David Olsen, Alicia Hofelich Mohr

Research output: Contribution to journal › Article › peer-review

Abstract

The goal of the current study is to compare different methods for automated object detection (i.e., tag detection, shape detection, matching, and machine learning) with manual coding across different types of objects (i.e., static, dynamic, and dynamic with human interaction) and to describe the advantages and limitations of each method. We tested the methods in a mobile eye-tracking experiment, chosen because of the importance of attention in communication science and because this type of data is challenging to analyze: visual parameters change constantly within and between participants. Machine learning was the most reliable method for detecting all types of objects and was slightly more conservative than manual coding. Feature-based matching worked well for static objects. We discuss the advantages and challenges of each method, along with key considerations for researchers depending on their research objective, the type of object, and the object detection method they intend to use.
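To give a concrete sense of what feature-based matching involves in this setting, the minimal sketch below (not taken from the paper; it uses OpenCV, and the function and parameter names are illustrative assumptions) matches ORB keypoints between a reference image of a static object and a scene-camera frame, estimates a homography with RANSAC, and then tests whether a gaze point falls inside the projected object region.

```python
import numpy as np
import cv2

def gaze_on_object(template_gray, frame_gray, gaze_xy, min_matches=10):
    """Return True if the gaze point (x, y) in the scene frame falls on the
    object shown in the grayscale template image. Illustrative sketch only."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return False

    # Brute-force Hamming matching with cross-checking for ORB descriptors.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des_t, des_f), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return False

    # Estimate where the template lies in the frame via a RANSAC homography.
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return False

    # Project the template corners into the frame and test the gaze point.
    h, w = template_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)
    return cv2.pointPolygonTest(projected, tuple(map(float, gaze_xy)), False) >= 0
```

The RANSAC step matters because individual keypoint matches are noisy; requiring a consistent homography is what makes this approach work well for rigid, static objects, while non-rigid or interacting objects are better handled by learned detectors.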

Original language: English (US)
Journal: Communication Methods and Measures
DOIs
State: Accepted/In press - 2024

Bibliographical note

Publisher Copyright:
© 2024 Taylor & Francis Group, LLC.

