Plan ahead: Self-supervised text planning for paragraph completion task

Dongyeop Kang, Eduard Hovy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

Despite the recent success of contextualized language models on various NLP tasks, a language model alone cannot capture the textual coherence of a long, multi-sentence document (e.g., a paragraph). Humans often make structural decisions about what to say and how to say it before making utterances. Guiding surface realization with such high-level decisions and structuring text coherently is essentially a planning process. Where can a model learn such high-level coherence? A paragraph itself contains various forms of inductive coherence signals, called self-supervision in this work, such as sentence order, topical keywords, and rhetorical structure. Motivated by this, we propose a new paragraph completion task, PARCOM: predicting masked sentences in a paragraph. The task is challenging because the model must predict and select appropriate topical content with respect to the given context. To address this, we propose a self-supervised text planner, SSPlanner, which first predicts what to say (content prediction) and then guides a pretrained language model (surface realization) with the predicted content. SSPlanner outperforms baseline generation models on the paragraph completion task in both automatic and human evaluation. We also find that a combination of noun and verb keywords is most effective for content selection, and that overall generation quality increases as more content keywords are provided.
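The task setup described above (mask a sentence in a paragraph, plan content keywords, then generate the missing sentence) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `<MASK>` token, the function name, and the hand-supplied keywords are all assumptions; in the paper, the planner predicts the keywords and a pretrained language model produces the target sentence.

```python
# Hedged sketch of a PARCOM-style training example: one sentence of a
# paragraph is masked, and the masked sentence's content keywords
# (e.g., nouns and verbs) serve as the planning target that would
# guide surface realization.

def make_parcom_example(sentences, mask_index, keywords):
    """Pair a paragraph with a masked slot, a content plan, and the
    gold sentence to be generated."""
    context = [
        s if i != mask_index else "<MASK>"
        for i, s in enumerate(sentences)
    ]
    return {
        "input": " ".join(context),       # paragraph with a masked slot
        "plan": keywords,                 # content keywords to predict first
        "target": sentences[mask_index],  # sentence to be generated
    }

paragraph = [
    "The storm moved inland overnight.",
    "Crews cleared fallen trees from the roads.",
    "Power was restored by the next morning.",
]
example = make_parcom_example(paragraph, 1, ["crews", "cleared", "trees"])
```

In this framing, a planner is trained to predict `plan` from `input`, and a generator conditions on both `input` and the predicted keywords to produce `target`; the keyword supervision comes for free from the paragraph itself, which is the self-supervision the abstract refers to.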

Original language: English (US)
Title of host publication: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 6533-6543
Number of pages: 11
ISBN (Electronic): 9781952148606
DOIs
State: Published - 2020
Externally published: Yes
Event: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 - Virtual, Online
Duration: Nov 16, 2020 - Nov 20, 2020

Publication series

Name: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference

Conference

Conference: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
City: Virtual, Online
Period: 11/16/20 - 11/20/20

Bibliographical note

Funding Information:
We thank Dery Lucio, Hiroaki Hayashi, Taehee Jung, Edvisees members at CMU, Hearst lab members at UC Berkeley, and anonymous reviewers at EMNLP 2020 for their helpful comments.

Publisher Copyright:
© 2020 Association for Computational Linguistics
