Published on Wed Jun 12 2019

BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization

Kai Wang, Xiaojun Quan, Rui Wang

Abstract

The success of neural summarization models stems from the meticulous encodings of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages a template discovered from the training data to softly select key information from each source article and guide its summarization. Extensive experiments on a standard summarization dataset show that the template-equipped BiSET model significantly improves summarization performance, achieving a new state of the art.
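The "soft selection" idea in the abstract can be illustrated with a minimal numeric sketch: each side's hidden states are filtered by a sigmoid gate computed from the other side's summary vector, so the template highlights salient article positions and vice versa. All names, shapes, the mean-pooling step, and the random projections below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8                                  # hidden size (illustrative)
H_a = rng.standard_normal((5, d))      # hypothetical encoded source article (5 tokens)
H_t = rng.standard_normal((3, d))      # hypothetical encoded retrieved template (3 tokens)

# Pool each sequence into a single summary vector (mean pooling; a simplification)
s_a = H_a.mean(axis=0)
s_t = H_t.mean(axis=0)

# Randomly initialized gate projections (stand-ins for learned parameters)
W_a, U_a = rng.standard_normal((d, d)), rng.standard_normal((d, d))
W_t, U_t = rng.standard_normal((d, d)), rng.standard_normal((d, d))

# Bi-directional soft gates: values in (0, 1) scale each hidden dimension
gate_a = sigmoid(H_a @ W_a + s_t @ U_a)   # template -> article selection
gate_t = sigmoid(H_t @ W_t + s_a @ U_t)   # article -> template selection

H_a_sel = H_a * gate_a   # softly selected article representation
H_t_sel = H_t * gate_t   # softly selected template representation
```

In a trained model the projections would be learned and the gated representations fed to the decoder; the sketch only shows how gating "softly selects" rather than hard-pruning tokens.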

Thu Oct 15 2020
NLP
GSum: A General Framework for Guided Neural Abstractive Summarization
Neural abstractive summarization models are flexible and can produce coherent summaries. But they are sometimes unfaithful and can be difficult to control. We propose a general and extensible guided summarization framework (GSum)
Mon Feb 05 2018
NLP
Diverse Beam Search for Increased Novelty in Abstractive Summarization
Text summarization condenses a text to a shorter version while retaining important information. Recently, neural sequence-to-sequence models have achieved good results in the field of abstractive summarization. However, these models still use large parts of the original text in the output summaries, making them
Wed Mar 25 2020
Machine Learning
Learning Syntactic and Dynamic Selective Encoding for Document Summarization
Text summarization aims to generate a headline or a short summary consisting of the major information of the source text. Recent studies employ the sequence-to-sequence framework to encode the input with a neural network. Most studies feed the encoder with the semantic word embedding but ignore the
Fri Sep 28 2018
NLP
The Rule of Three: Abstractive Text Summarization in Three Bullet Points
Neural network-based approaches have become widespread for abstractive text summarization. One of the reasons these previous models failed to account for information structure in a generated summary is that standard datasets include summaries of variable lengths.
Sun Jan 10 2021
Artificial Intelligence
Summaformers @ LaySumm 20, LongSumm 20
Automatic text summarization has been widely studied as an important task in natural language processing. We specifically look at the problem of summarizing scientific research papers from multiple domains. While leveraging the latest Transformer-based models, our systems are simple, intuitive, and based on how specific paper sections contribute.
Fri Oct 09 2020
NLP
What Have We Achieved on Text Summarization?
Deep learning has led to significant improvement in text summarization. However, gaps still exist between summaries produced by automatic summarizers and human professionals.