f-GAIL: Learning f-divergence for generative adversarial imitation learning

Xin Zhang, Yanhua Li, Ziming Zhang, Zhi-Li Zhang

Research output: Contribution to journal › Conference article › peer-review

Abstract

Imitation learning (IL) aims to learn a policy from expert demonstrations that minimizes the discrepancy between the learner and expert behaviors. Various imitation learning algorithms have been proposed with different pre-determined divergences to quantify the discrepancy. This naturally gives rise to the following question: Given a set of expert demonstrations, which divergence can recover the expert policy more accurately and with higher data efficiency? In this work, we propose f-GAIL, a new generative adversarial imitation learning (GAIL) model that automatically learns a discrepancy measure from the f-divergence family as well as a policy capable of producing expert-like behaviors. Compared with IL baselines using various predefined divergence measures, f-GAIL learns better policies with higher data efficiency on six physics-based control tasks.
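For context on the discrepancy measures the abstract refers to, below is a minimal LaTeX sketch of the standard f-divergence definition, its variational (Fenchel-conjugate) lower bound popularized by f-GAN, and a GAIL-style minimax objective in which the generator function f is optimized alongside the discriminator. The notation (expert policy \pi_E, learner policy \pi_\theta, discriminator T_\omega) and the exact way f is parameterized are illustrative assumptions, not the paper's own formulation.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{align*}
% f-divergence between expert occupancy measure P and learner occupancy Q,
% for a convex generator function f with f(1) = 0:
D_f(P \,\|\, Q) &= \mathbb{E}_{x \sim Q}\!\left[ f\!\left(\tfrac{p(x)}{q(x)}\right) \right] \\[4pt]
% Variational (Fenchel-conjugate) lower bound in the style of f-GAN,
% with discriminator T_\omega and convex conjugate f^*:
D_f(P \,\|\, Q) &\ge \sup_{T_\omega}\; \mathbb{E}_{x \sim P}\big[T_\omega(x)\big]
  - \mathbb{E}_{x \sim Q}\big[f^{*}\!\big(T_\omega(x)\big)\big] \\[4pt]
% GAIL-style minimax objective: the learner policy \pi_\theta minimizes the
% bound while f (restricted to the f-divergence family) and T_\omega are
% optimized adversarially over expert and learner state-action pairs:
\min_{\pi_\theta}\; \max_{f,\, T_\omega}\;
  & \mathbb{E}_{(s,a) \sim \pi_E}\big[T_\omega(s,a)\big]
  - \mathbb{E}_{(s,a) \sim \pi_\theta}\big[f^{*}\!\big(T_\omega(s,a)\big)\big]
\end{align*}
\end{document}

In this reading, fixing a particular f recovers existing baselines (for example, the Jensen-Shannon divergence underlying standard GAIL), while optimizing over f corresponds to learning the discrepancy measure described in the abstract.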

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6 2020 - Dec 12 2020

Bibliographical note

Funding Information:
Xin Zhang and Yanhua Li were supported in part by NSF grants IIS-1942680 (CAREER), CNS-1952085, CMMI-1831140, and DGE-2021871. Ziming Zhang was supported in part by NSF CCF-2006738. Zhi-Li Zhang was supported in part by NSF grants CMMI-1831140 and CNS-1901103.

Publisher Copyright:
© 2020 Neural Information Processing Systems Foundation. All rights reserved.
