EyeCoD: Eye Tracking System Acceleration via FlatCam-based Algorithm and Accelerator Co-Design

Haoran You, Cheng Wan, Yang Zhao, Zhongzhi Yu, Yonggan Fu, Jiayi Yuan, Shang Wu, Shunyao Zhang, Yongan Zhang, Chaojian Li, Vivek Boominathan, Ashok Veeraraghavan, Ziyun Li, Yingyan Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Scopus citations

Abstract

Eye tracking has become an essential human-machine interaction modality for providing immersive experiences in numerous virtual and augmented reality (VR/AR) applications that demand high throughput (e.g., 240 FPS), a small form factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by (1) their large form factor, largely due to the bulky lens-based cameras they adopt; (2) the high communication cost between the camera and the backend processor; and (3) potential visual privacy concerns, prohibiting their broader application. To this end, we propose, develop, and validate a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework, dubbed EyeCoD, to enable eye tracking systems with a much-reduced form factor and boosted system efficiency without sacrificing tracking accuracy, paving the way for next-generation eye tracking solutions. On the system level, we advocate the use of lensless FlatCams instead of lens-based cameras to meet the small form-factor requirement of mobile eye tracking systems, which also leaves room for a dedicated sensing-processor co-design that reduces the required camera-processor communication latency. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region of interest (ROI) via segmentation and then focuses only on the ROI to estimate gaze directions, greatly reducing redundant computations and data movements. On the hardware level, we further develop a dedicated accelerator that (1) integrates a novel workload orchestration between the aforementioned segmentation and gaze estimation models, (2) leverages intra-channel reuse opportunities for depth-wise layers, (3) utilizes input feature-wise partitioning to reduce the activation memory size, and (4) adopts a sequential-write-parallel-read input buffer to alleviate the bandwidth requirement on the activation global buffer. On-silicon measurements and extensive experiments validate that EyeCoD consistently reduces both communication and computation costs, leading to overall system speedups of 10.95× over CPUs, 3.21× over GPUs, and 12.85× over a prior-art eye tracking processor, CIS-GEP, while maintaining tracking accuracy. Code is available at https://github.com/RICE-EIC/EyeCoD.
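The abstract's predict-then-focus pipeline is the core algorithmic idea: a cheap segmentation pass first locates the eye region so that the heavier gaze-estimation model only processes a crop. Below is a minimal Python sketch of that dataflow; segment_eye and estimate_gaze are hypothetical stand-ins (a brightness threshold and a dummy regressor), not the paper's actual networks, which live in the linked repository.

```python
import numpy as np

def segment_eye(frame: np.ndarray) -> np.ndarray:
    """Placeholder segmenter: marks above-mean pixels as the eye region."""
    return frame > frame.mean()

def estimate_gaze(roi: np.ndarray) -> tuple[float, float]:
    """Placeholder gaze estimator: returns dummy (yaw, pitch) angles."""
    return 0.0, 0.0

def predict_then_focus(frame: np.ndarray) -> tuple[float, float]:
    # Stage 1: predict the region of interest (ROI) via segmentation.
    mask = segment_eye(frame)
    ys, xs = np.nonzero(mask)
    # Stage 2: run gaze estimation only on the ROI crop, skipping the
    # redundant computation and data movement on the rest of the frame.
    roi = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return estimate_gaze(roi)

if __name__ == "__main__":
    frame = np.random.rand(480, 640).astype(np.float32)
    print(predict_then_focus(frame))  # dummy gaze angles
```

In the actual system, the two stages also share the dedicated accelerator, so orchestrating their workloads (point (1) of the hardware contributions) matters as much as skipping pixels.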

Original language: English (US)
Title of host publication: ISCA 2022 - Proceedings of the 49th Annual International Symposium on Computer Architecture
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 610-622
Number of pages: 13
ISBN (Electronic): 9781450386104
DOIs
State: Published - Jun 18 2022
Externally published: Yes
Event: 49th IEEE/ACM International Symposium on Computer Architecture, ISCA 2022 - New York, United States
Duration: Jun 18 2022 – Jun 22 2022

Publication series

Name: Proceedings - International Symposium on Computer Architecture
ISSN (Print): 1063-6897

Conference

Conference: 49th IEEE/ACM International Symposium on Computer Architecture, ISCA 2022
Country/Territory: United States
City: New York
Period: 6/18/22 – 6/22/22

Bibliographical note

Publisher Copyright:
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.

Keywords

  • Algorithm-hardware Co-Design
  • Eye Tracking Systems
  • VR/AR
