Abstract
In recommendation dialogs, humans commonly disclose their preferences and make recommendations in a friendly manner. However, developing a sociable recommendation dialog system remains challenging due to the lack of dialog datasets annotated with such sociable strategies. Therefore, we present INSPIRED, a new dataset of 1,001 human-human dialogs for movie recommendation with measures for successful recommendations. To better understand how humans make recommendations in communication, we design an annotation scheme for recommendation strategies based on social science theories and annotate these dialogs. Our analysis shows that sociable recommendation strategies, such as sharing personal opinions or communicating with encouragement, more frequently lead to successful recommendations. Based on our dataset, we train end-to-end recommendation dialog systems with and without our strategy labels. In both automatic and human evaluation, our model with strategy incorporation outperforms the baseline model. This work is a first step toward building sociable recommendation dialog systems grounded in social science theories.
| Original language | English (US) |
|---|---|
| Title of host publication | EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 8142-8152 |
| Number of pages | 11 |
| ISBN (Electronic) | 9781952148606 |
| State | Published - 2020 |
| Externally published | Yes |
| Event | 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 - Virtual, Online. Duration: Nov 16 2020 → Nov 20 2020 |
Publication series
| Name | EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference |
|---|---|
Conference
| Conference | 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 |
|---|---|
| City | Virtual, Online |
| Period | 11/16/20 → 11/20/20 |
Bibliographical note
Funding Information: We would like to thank the members of the NLP lab at UC Davis for discussion and participation in the pilot study. We are also grateful to the human evaluation participants and the Mechanical Turk workers for their contributions to building this dataset.
Publisher Copyright:
© 2020 Association for Computational Linguistics.