Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge
Abstract
Domain generalization aims to learn a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains. In this research, we consider the scenario where different domain shifts occur among the conditional distributions of different classes across domains. When labeled samples in the source domains are limited, existing approaches are not sufficiently robust. To address this problem, we propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG), inspired by the concept of distributionally robust optimization. We encourage robustness over conditional distributions within class-specific Wasserstein uncertainty sets and optimize the worst-case performance of a classifier over these uncertainty sets. We further develop a test-time adaptation module that leverages optimal transport to quantify the relationship between the unseen target domain and the source domains, enabling adaptive inference for target data. Experiments on the Rotated MNIST, PACS, and VLCS datasets demonstrate that our method can effectively balance robustness and discriminability in challenging generalization scenarios.
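To make the test-time adaptation idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how optimal transport distances could weight source domains at inference: each source domain is scored by its Wasserstein distance to the target features, and closer domains receive larger weights. The per-dimension 1-D Wasserstein proxy and the softmax weighting are illustrative assumptions; the function and variable names are invented for this example.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def domain_weights(target_feats, source_feats_list, temperature=1.0):
    """Weight each source domain by its optimal-transport (1-Wasserstein)
    distance to the target features: closer domains get larger weights.

    Uses an average per-dimension 1-D Wasserstein distance as a cheap
    proxy for the full multivariate OT distance (an assumption made
    here for illustration only).
    """
    dists = np.array([
        np.mean([wasserstein_distance(target_feats[:, d], src[:, d])
                 for d in range(target_feats.shape[1])])
        for src in source_feats_list
    ])
    # Softmax over negative distances: smaller distance -> larger weight
    logits = -dists / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

# Toy example: the target distribution lies near source domain 0
rng = np.random.default_rng(0)
src0 = rng.normal(0.0, 1.0, size=(200, 2))
src1 = rng.normal(5.0, 1.0, size=(200, 2))
tgt = rng.normal(0.2, 1.0, size=(100, 2))
w = domain_weights(tgt, [src0, src1])
```

In a full pipeline, such weights could combine per-domain classifier outputs into an adaptive ensemble prediction for the target data; the paper's actual module should be consulted for the precise formulation.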
| Original language | English (US) |
|---|---|
| Pages (from-to) | 103-114 |
| Number of pages | 12 |
| Journal | IEEE Journal on Selected Topics in Signal Processing |
| Volume | 19 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- Domain generalization
- Wasserstein uncertainty set
- distributionally robust optimization
- optimal transport