Abstract
Learning models that are robust to distribution shifts is a key concern for their real-world applicability. Invariant Risk Minimization (IRM) is a popular framework that aims to learn robust models from multiple environments. The success of IRM rests on an important assumption: that the underlying causal mechanisms/features remain invariant across environments. When this assumption is not satisfied, we show that IRM can over-constrain the predictor, and to remedy this we propose a relaxation via partial invariance. In this work, we theoretically highlight the sub-optimality of IRM and then demonstrate how learning from a partition of the training domains can help improve invariant models. Several experiments, conducted both in linear settings and with deep neural networks, on tasks over language and image data, verify our conclusions.
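For context, the IRMv1 penalty that the abstract's IRM framework refers to is the squared gradient of each environment's risk with respect to a fixed dummy scale on the classifier output (Arjovsky et al. 2019). The sketch below pairs that penalty with one illustrative reading of "learning from a partition of the training domains": the invariance constraint is enforced only within each group of environments, with one predictor per group. This is a minimal sketch under that assumption, not the authors' exact procedure; `partial_irm_loss` and the `partitions` structure are hypothetical names introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 gradient penalty (Arjovsky et al. 2019): squared gradient of the
    # per-environment risk w.r.t. a fixed dummy scale (w = 1.0) multiplying
    # the classifier output.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad, = torch.autograd.grad(loss, [scale], create_graph=True)
    return grad.pow(2)

def partial_irm_loss(models, partitions, lam=1.0):
    # Hypothetical partial-invariance objective: rather than forcing a single
    # predictor to satisfy the invariance constraint across all environments,
    # enforce the IRM penalty only *within* each group of environments, with
    # one predictor per group.
    total = torch.tensor(0.0)
    for model, envs in zip(models, partitions):
        for x_e, y_e in envs:  # (features, binary labels) per environment
            logits = model(x_e).squeeze(-1)
            total = total + F.binary_cross_entropy_with_logits(logits, y_e)
            total = total + lam * irm_penalty(logits, y_e)
    return total

# Toy usage: two groups of environments, one linear predictor per group.
torch.manual_seed(0)
partitions = [
    [(torch.randn(64, 10), torch.randint(0, 2, (64,)).float()) for _ in range(2)],
    [(torch.randn(64, 10), torch.randint(0, 2, (64,)).float()) for _ in range(2)],
]
models = [torch.nn.Linear(10, 1) for _ in partitions]
opt = torch.optim.Adam([p for m in models for p in m.parameters()], lr=1e-3)
opt.zero_grad()
partial_irm_loss(models, partitions).backward()
opt.step()
```

Setting the number of partitions to one recovers standard IRM over all environments, which is the over-constrained case the abstract argues against when the invariance assumption fails.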
Original language | English (US) |
---|---|
Title of host publication | AAAI-23 Technical Tracks 6 |
Editors | Brian Williams, Yiling Chen, Jennifer Neville |
Publisher | AAAI Press |
Pages | 7175-7183 |
Number of pages | 9 |
ISBN (Electronic) | 9781577358800 |
State | Published - Jun 27 2023 |
Event | 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States |
Event duration | Feb 7 2023 → Feb 14 2023 |
Publication series
Name | Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 |
---|---|
Volume | 37 |
Conference
Conference | 37th AAAI Conference on Artificial Intelligence, AAAI 2023 |
---|---|
Country/Territory | United States |
City | Washington |
Period | 2/7/23 → 2/14/23 |
Bibliographical note
Publisher Copyright: © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.