TY - JOUR
T1 - Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge
AU - Wang, Jingge
AU - Xie, Liyan
AU - Xie, Yao
AU - Huang, Shao-Lun
AU - Li, Yang
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2025
Y1 - 2025
N2 - Domain generalization aims to learn a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains. In this research, we consider the scenario where different domain shifts occur among the conditional distributions of different classes across domains. When labeled samples in the source domains are limited, existing approaches are not sufficiently robust. To address this problem, we propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG), inspired by the concept of distributionally robust optimization. We encourage robustness over conditional distributions within class-specific Wasserstein uncertainty sets and optimize the worst-case performance of a classifier over these uncertainty sets. We further develop a test-time adaptation module that leverages optimal transport to quantify the relationship between the unseen target domain and the source domains, enabling adaptive inference for target data. Experiments on the Rotated MNIST, PACS, and VLCS datasets demonstrate that our method can effectively balance robustness and discriminability in challenging generalization scenarios.
AB - Domain generalization aims to learn a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains. In this research, we consider the scenario where different domain shifts occur among the conditional distributions of different classes across domains. When labeled samples in the source domains are limited, existing approaches are not sufficiently robust. To address this problem, we propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG), inspired by the concept of distributionally robust optimization. We encourage robustness over conditional distributions within class-specific Wasserstein uncertainty sets and optimize the worst-case performance of a classifier over these uncertainty sets. We further develop a test-time adaptation module that leverages optimal transport to quantify the relationship between the unseen target domain and the source domains, enabling adaptive inference for target data. Experiments on the Rotated MNIST, PACS, and VLCS datasets demonstrate that our method can effectively balance robustness and discriminability in challenging generalization scenarios.
KW - Domain generalization
KW - Wasserstein uncertainty set
KW - distributionally robust optimization
KW - optimal transport
UR - http://www.scopus.com/inward/record.url?scp=85203457131&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203457131&partnerID=8YFLogxK
U2 - 10.1109/JSTSP.2024.3434498
DO - 10.1109/JSTSP.2024.3434498
M3 - Article
AN - SCOPUS:85203457131
SN - 1932-4553
VL - 19
SP - 103
EP - 114
JO - IEEE Journal of Selected Topics in Signal Processing
JF - IEEE Journal of Selected Topics in Signal Processing
IS - 1
ER -