An Unsupervised Masking Objective for Abstractive Multi-Document News Summarization

Nikolai Vogler, Songlin Li, Yujie Xu, Yujian Mi, Taylor Berg-Kirkpatrick

We show that a simple unsupervised masking objective can approach supervised performance on abstractive multi-document news summarization. Our method trains a state-of-the-art neural summarization model to predict the masked-out source document with the highest lexical centrality relative to the multi-document group. In experiments on the Multi-News dataset, our masked training objective yields a system that outperforms past unsupervised methods and, in human evaluation, surpasses the best supervised method without requiring access to any ground-truth summaries. Further, we evaluate how different measures of lexical centrality, inspired by past work on extractive summarization, affect final performance.
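The abstract does not specify which centrality measure is used, but lexical centrality is typically the average lexical similarity of a document to the other documents in its cluster. The sketch below is a hypothetical illustration using bag-of-words cosine similarity (the function names and whitespace tokenization are our own, not the paper's): it scores each document in a cluster and returns the index of the most central one, i.e. the document that would be masked out as the prediction target.

```python
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def most_central_doc(docs: list[str]) -> int:
    # Lexical centrality of document i: mean cosine similarity to the
    # other documents in the cluster. Returns the argmax index, i.e.
    # the candidate to mask out as the pseudo-summary target.
    bows = [Counter(d.lower().split()) for d in docs]
    scores = [
        sum(cosine(bows[i], bows[j]) for j in range(len(docs)) if j != i)
        / (len(docs) - 1)
        for i in range(len(docs))
    ]
    return max(range(len(docs)), key=scores.__getitem__)
```

In this setup, the remaining documents would serve as the model's input and the selected document as its training target; the paper's actual measures may differ (e.g. TF-IDF weighting or graph-based centrality, as in extractive summarization work).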
