Abstract
Image captioning models depend on training with paired image-text corpora, which makes it difficult to describe images containing novel objects absent from the training data. While previous novel object captioning methods rely on external image taggers or object detectors to describe novel objects, we present the Attention-based Novel Object Captioner (ANOC), which complements novel object captioners with human attention features that capture task-independent, generally important information. ANOC introduces a gating mechanism that adaptively fuses human attention with self-learned machine attention, together with a Constrained Self-Critical Sequence Training method that addresses exposure bias while preserving the constraints on novel object descriptions. Extensive experiments on the nocaps and Held-Out COCO datasets demonstrate that our method considerably outperforms state-of-the-art novel object captioners. Our source code is available at https://github.com/chenxy99/ANOC.
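The abstract describes two components: a gate that adaptively mixes human attention with the model's own attention, and a self-critical training objective. Below is a minimal PyTorch sketch of how such a gate and the standard self-critical loss could look; it is not the authors' implementation (see the linked repository for that), and the names `AttentionGate`, `scst_loss`, the sigmoid-gate form, and all tensor shapes are assumptions for illustration. The paper's Constrained variant additionally enforces novel-object constraints during decoding, which this sketch omits.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Hypothetical sketch of a gate that fuses human attention with
    self-learned machine attention over image regions."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Scalar gate conditioned on the decoder state: values near 1
        # favor machine attention, values near 0 favor human attention.
        self.gate = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, hidden, machine_attn, human_attn):
        # hidden:       (batch, hidden_dim) decoder hidden state
        # machine_attn: (batch, num_regions) self-learned attention weights
        # human_attn:   (batch, num_regions) human-attention prior over regions
        g = self.gate(hidden)                              # (batch, 1)
        fused = g * machine_attn + (1.0 - g) * human_attn
        # Renormalize so the fused weights form a distribution over regions.
        return fused / fused.sum(dim=-1, keepdim=True).clamp_min(1e-8)

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Vanilla Self-Critical Sequence Training loss (Rennie et al., 2017):
    REINFORCE with the greedy-decoded caption's reward as the baseline."""
    # sample_logprobs: (batch,) summed log-probs of the sampled caption
    # sample_reward / greedy_reward: (batch,) sentence-level rewards (e.g. CIDEr)
    advantage = sample_reward - greedy_reward
    return -(advantage.detach() * sample_logprobs).mean()
```

The gate lets the captioner fall back on the human-attention prior when its own attention is unreliable, e.g. on regions containing objects never seen during training.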
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the 30th International Joint Conference on Artificial Intelligence, IJCAI 2021 |
Editors | Zhi-Hua Zhou |
Publisher | International Joint Conferences on Artificial Intelligence |
Pages | 622-628 |
Number of pages | 7 |
ISBN (Electronic) | 9780999241196 |
State | Published - 2021 |
Event | 30th International Joint Conference on Artificial Intelligence, IJCAI 2021 - Virtual, Online, Canada |
Duration | Aug 19 2021 → Aug 27 2021 |
Publication series
Name | IJCAI International Joint Conference on Artificial Intelligence |
---|---|
ISSN (Print) | 1045-0823 |
Conference
Conference | 30th International Joint Conference on Artificial Intelligence, IJCAI 2021 |
---|---|
Country/Territory | Canada |
City | Virtual, Online |
Period | 8/19/21 → 8/27/21 |
Bibliographical note
Publisher Copyright: © 2021 International Joint Conferences on Artificial Intelligence. All rights reserved.