Abstract
We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it. First, we propose knowledge-guided adversarial example generators for incorporating large lexical resources in entailment models via only a handful of rule templates. Second, to make the entailment model (a discriminator) more robust, we propose the first GAN-style approach for training it using a natural language example generator that iteratively adjusts based on the discriminator's performance. We demonstrate effectiveness using two entailment datasets, where the proposed methods increase accuracy by 4.7% on SciTail and by 2.8% on a 1% training sub-sample of SNLI. Notably, even a single hand-written rule, negate, improves the accuracy on the negation examples in SNLI by 6.1%.
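To make the rule-template idea concrete, the sketch below shows one possible form a hand-written generator such as negate could take. It is a minimal illustration rather than the authors' implementation: the function name `negate_rule`, the auxiliary-verb heuristic, and the label transformation are assumptions for demonstration, and the paper's actual templates, as well as the GAN-style generator that adapts to the discriminator, are more general.

```python
# A minimal sketch, not the paper's code: the auxiliary-verb heuristic and the
# label mapping below are illustrative assumptions about what a hand-written
# rule template for generating adversarial entailment examples could look like.

AUXILIARIES = {"is", "are", "was", "were", "does", "do", "did", "can", "will"}

def negate_rule(premise: str, hypothesis: str, label: str):
    """Insert 'not' after the first auxiliary verb of the hypothesis and
    adjust the entailment label, producing one adversarial example."""
    tokens = hypothesis.split()
    for i, tok in enumerate(tokens):
        if tok.lower() in AUXILIARIES:
            new_hypothesis = " ".join(tokens[: i + 1] + ["not"] + tokens[i + 1 :])
            # Assumed label transformation; the actual mapping depends on the
            # dataset's label set (SciTail is two-class, SNLI three-class).
            new_label = "contradicts" if label == "entails" else label
            return premise, new_hypothesis, new_label
    return None  # the template does not apply to this hypothesis

if __name__ == "__main__":
    print(negate_rule(
        "A man is playing a guitar on stage.",
        "A man is playing an instrument.",
        "entails",
    ))
    # -> ('A man is playing a guitar on stage.',
    #     'A man is not playing an instrument.', 'contradicts')
```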
Original language | English (US) |
---|---|
Title of host publication | ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 2418-2428 |
Number of pages | 11 |
ISBN (Electronic) | 9781948087322 |
DOIs | |
State | Published - 2018 |
Externally published | Yes |
Event | 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018 - Melbourne, Australia |
Duration | Jul 15 2018 → Jul 20 2018 |
Publication series
Name | ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) |
---|---|
Volume | 1 |
Conference
Conference | 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018 |
---|---|
Country/Territory | Australia |
City | Melbourne |
Period | 7/15/18 → 7/20/18 |
Bibliographical note
Publisher Copyright: © 2018 Association for Computational Linguistics