Published on Wed Jan 23 2019

Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge

Ondřej Dušek, Jekaterina Novikova, Verena Rieser

This paper provides a comprehensive analysis of the first shared task on End-to-End Natural Language Generation (NLG) and identifies avenues for future research based on the results. The winning SLUG system (Juraska et al., 2018) is seq2seq-based.

Abstract

This paper provides a comprehensive analysis of the first shared task on End-to-End Natural Language Generation (NLG) and identifies avenues for future research based on the results. This shared task aimed to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena. Introducing novel automatic and human metrics, we compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning architectures -- with the majority implementing sequence-to-sequence models (seq2seq) -- as well as systems based on grammatical rules and templates. Seq2seq-based systems have demonstrated a great potential for NLG in the challenge. We find that seq2seq systems generally score high in terms of word-overlap metrics and human evaluations of naturalness -- with the winning SLUG system (Juraska et al., 2018) being seq2seq-based. However, vanilla seq2seq models often fail to correctly express a given meaning representation if they lack a strong semantic control mechanism applied during decoding. Moreover, seq2seq models can be outperformed by hand-engineered systems in terms of overall quality, as well as complexity, length and diversity of outputs. This research has influenced, inspired and motivated a number of recent studies outwith the original competition, which we also summarise as part of this paper.
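The abstract notes that seq2seq systems score well on word-overlap metrics such as BLEU. As a rough illustration of what such a metric computes, the sketch below implements a simplified sentence-level BLEU-style score (uniform n-gram weights plus a brevity penalty) using only the standard library. This is a minimal stand-in for the full corpus-level metrics used in the challenge, not the official scoring script; the function name and defaults are my own.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty. Single reference,
    whitespace tokenisation -- an illustrative sketch only."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Penalise candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

Because such metrics only count surface n-gram overlap, a fluent output that omits part of the meaning representation can still score highly, which is one reason the challenge paired them with human evaluation.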

Tue Oct 02 2018
NLP
Findings of the E2E NLG Challenge
This paper summarises the experimental setup and results of the first shared task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue. We compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning.
Wed Mar 29 2017
Neural Networks
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This survey of NLG is timely in view of the changes the field has undergone over the past decade or so. It aims to give an up-to-date synthesis of research on core tasks.
Sun Nov 10 2019
NLP
Semantic Noise Matters for Neural Natural Language Generation
Neural natural language generation (NNLG) systems are known for their pathological outputs. We find that cleaned data can improve semantic correctness by up to 97%. We also find that the most common error is omitting information, rather than hallucination.
Fri Apr 16 2021
NLP
IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation
IndoNLG is the first such benchmark for natural language generation (NLG) in the Indonesian language. It covers six tasks: summarization, question answering, open chitchat, and three machine translation language pairs.
Tue Feb 02 2021
NLP
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. New models often still evaluate on anglo-centric corpora with well-established, but flawed, metrics.
Wed Nov 13 2019
NLP
Unsupervised Pre-training for Natural Language Generation: A Literature Review
Unsupervised pre-training is gaining increasing popularity in computational linguistics. It has had surprising success in advancing natural language understanding (NLU). However, the power of pre-training is only partially excavated when it comes to natural language generation (NLG).