Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge

Jingge Wang, Liyan Xie, Yao Xie, Shao Lun Huang, Yang Li

Research output: Contribution to journal › Article › peer-review

Abstract

Domain generalization aims at learning a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains. In this research, we consider the scenario where different domain shifts occur among conditional distributions of different classes across domains. When labeled samples in the source domains are limited, existing approaches are not sufficiently robust. To address this problem, we propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG), inspired by the concept of distributionally robust optimization. We encourage robustness over conditional distributions within class-specific Wasserstein uncertainty sets and optimize the worst-case performance of a classifier over these uncertainty sets. We further develop a test-time adaptation module that leverages optimal transport to quantify the relationship between the unseen target domain and the source domains, enabling adaptive inference for target data. Experiments on the Rotated MNIST, PACS, and VLCS datasets demonstrate that our method can effectively balance robustness and discriminability in challenging generalization scenarios.
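The abstract's test-time adaptation idea rests on measuring how close the target domain is to each source domain under the Wasserstein metric. The sketch below is a minimal, illustrative stand-in, not the paper's actual WDRDG algorithm: it uses the standard 1-D closed form for the empirical 1-Wasserstein distance (mean absolute difference of sorted equal-size samples) and a hypothetical inverse-distance weighting to combine source domains, whereas the paper quantifies the relationship via optimal transport over class-conditional distributions.

```python
# Illustrative sketch only (assumptions: 1-D features, equal sample sizes,
# inverse-distance weighting; none of these are claimed to match WDRDG).

def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    In 1-D, the optimal transport plan matches sorted points, so W1 is the
    mean absolute difference of the order statistics.
    """
    assert len(xs) == len(ys), "equal sample sizes assumed for simplicity"
    xs_sorted, ys_sorted = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs_sorted, ys_sorted)) / len(xs)

def source_weights(target_sample, source_samples):
    """Hypothetical adaptive weights over source domains: sources whose
    empirical distribution is closer to the target sample (smaller W1)
    receive larger weight; weights are normalized to sum to 1."""
    dists = [wasserstein_1d(target_sample, s) for s in source_samples]
    inv = [1.0 / (d + 1e-8) for d in dists]  # avoid division by zero
    total = sum(inv)
    return [w / total for w in inv]
```

For example, a target sample drawn near one source domain would pull almost all of the weight onto that source, mimicking the adaptive-inference behavior the abstract describes at a very coarse level.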

Original language: English (US)
Pages (from-to): 103-114
Number of pages: 12
Journal: IEEE Journal on Selected Topics in Signal Processing
Volume: 19
Issue number: 1
DOIs
State: Published - 2025
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2024 IEEE.

Keywords

  • Domain generalization
  • Wasserstein uncertainty set
  • distributionally robust optimization
  • optimal transport
