Imitation learning via kernel mean embedding

Kee Eung Kim, Hyun Soo Park

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Imitation learning refers to the problem where an agent learns a policy that mimics the demonstration provided by the expert, without any information on the cost function of the environment. Classical approaches to imitation learning usually rely on a restrictive class of cost functions that best explains the expert's demonstration, exemplified by linear functions of pre-defined features on states and actions. We show that the kernelization of a classical algorithm naturally reduces imitation learning to a distribution learning problem, where the imitation policy tries to match the state-action visitation distribution of the expert. Closely related to our approach is the recent work on leveraging generative adversarial networks (GANs) for imitation learning, but our reduction to distribution learning is much simpler, robust to scarce expert demonstrations, and sample efficient. We demonstrate the effectiveness of our approach on a wide range of high-dimensional control tasks.
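The distribution-matching reduction described in the abstract compares kernel mean embeddings of state-action visitation distributions; the standard quantity for comparing two empirical embeddings is the maximum mean discrepancy (MMD). The sketch below is illustrative only, not the paper's algorithm: it computes a biased squared-MMD estimate with an RBF kernel over synthetic arrays standing in for expert and policy state-action samples (the names `expert`, `near`, and `far` are made up for the example).

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel values k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared MMD: the squared distance between the
    # empirical kernel mean embeddings of the samples X and Y.
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

# Toy check: samples from the same distribution give a small MMD,
# samples from a shifted distribution give a larger one.
rng = np.random.default_rng(0)
expert = rng.normal(0.0, 1.0, size=(200, 4))  # stand-in for expert state-action pairs
near = rng.normal(0.0, 1.0, size=(200, 4))    # policy samples from the same distribution
far = rng.normal(3.0, 1.0, size=(200, 4))     # policy samples from a shifted distribution
```

Under this view, imitation amounts to driving the discrepancy between the policy's visitation samples and the expert's toward zero.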

Original language: English (US)
Title of host publication: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Publisher: AAAI Press
Pages: 3415-3422
Number of pages: 8
ISBN (Electronic): 9781577358008
State: Published - Jan 1 2018
Event: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States
Duration: Feb 2 2018 - Feb 7 2018

Publication series

Name: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018

Other

Other: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Country: United States
City: New Orleans
Period: 2/2/18 - 2/7/18

Cite this

Kim, K. E., & Park, H. S. (2018). Imitation learning via kernel mean embedding. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 3415-3422). (32nd AAAI Conference on Artificial Intelligence, AAAI 2018). AAAI Press.

@inproceedings{daa4d458feee42a5ab7b42c6ed971662,
title = "Imitation learning via kernel mean embedding",
abstract = "Imitation learning refers to the problem where an agent learns a policy that mimics the demonstration provided by the expert, without any information on the cost function of the environment. Classical approaches to imitation learning usually rely on a restrictive class of cost functions that best explains the expert's demonstration, exemplified by linear functions of pre-defined features on states and actions. We show that the kernelization of a classical algorithm naturally reduces the imitation learning to a distribution learning problem, where the imitation policy tries to match the state-action visitation distribution of the expert. Closely related to our approach is the recent work on leveraging generative adversarial networks (GANs) for imitation learning, but our reduction to distribution learning is much simpler, robust to scarce expert demonstration, and sample efficient. We demonstrate the effectiveness of our approach on a wide range of high-dimensional control tasks.",
author = "Kim, {Kee Eung} and Park, {Hyun Soo}",
year = "2018",
month = jan,
day = "1",
language = "English (US)",
series = "32nd AAAI Conference on Artificial Intelligence, AAAI 2018",
publisher = "AAAI Press",
pages = "3415--3422",
booktitle = "32nd AAAI Conference on Artificial Intelligence, AAAI 2018",
isbn = "9781577358008",
}

TY - GEN

T1 - Imitation learning via kernel mean embedding

AU - Kim, Kee Eung

AU - Park, Hyun Soo

PY - 2018/1/1

Y1 - 2018/1/1

N2 - Imitation learning refers to the problem where an agent learns a policy that mimics the demonstration provided by the expert, without any information on the cost function of the environment. Classical approaches to imitation learning usually rely on a restrictive class of cost functions that best explains the expert's demonstration, exemplified by linear functions of pre-defined features on states and actions. We show that the kernelization of a classical algorithm naturally reduces the imitation learning to a distribution learning problem, where the imitation policy tries to match the state-action visitation distribution of the expert. Closely related to our approach is the recent work on leveraging generative adversarial networks (GANs) for imitation learning, but our reduction to distribution learning is much simpler, robust to scarce expert demonstration, and sample efficient. We demonstrate the effectiveness of our approach on a wide range of high-dimensional control tasks.

AB - Imitation learning refers to the problem where an agent learns a policy that mimics the demonstration provided by the expert, without any information on the cost function of the environment. Classical approaches to imitation learning usually rely on a restrictive class of cost functions that best explains the expert's demonstration, exemplified by linear functions of pre-defined features on states and actions. We show that the kernelization of a classical algorithm naturally reduces the imitation learning to a distribution learning problem, where the imitation policy tries to match the state-action visitation distribution of the expert. Closely related to our approach is the recent work on leveraging generative adversarial networks (GANs) for imitation learning, but our reduction to distribution learning is much simpler, robust to scarce expert demonstration, and sample efficient. We demonstrate the effectiveness of our approach on a wide range of high-dimensional control tasks.

UR - http://www.scopus.com/inward/record.url?scp=85060432961&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85060432961&partnerID=8YFLogxK

M3 - Conference contribution

T3 - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018

SP - 3415

EP - 3422

BT - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018

PB - AAAI Press

ER -