Published on Mon Oct 19 2020

Summary-Oriented Question Generation for Informational Queries

Xusen Yin, Li Zhou, Kevin Small, Jonathan May

Users frequently ask simple factoid questions of question answering (QA) systems. Prompting users with automatically generated suggested questions (SQs) can improve user understanding of QA system capabilities. We aim to produce self-explanatory questions that focus on main document topics.

Abstract

Users frequently ask simple factoid questions of question answering (QA) systems, attenuating the impact of myriad recent works that support more complex questions. Prompting users with automatically generated suggested questions (SQs) can improve user understanding of QA system capabilities and thus facilitate more effective use. We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable-length passages as appropriate. We satisfy these requirements by using a BERT-based Pointer-Generator Network trained on the Natural Questions (NQ) dataset. Our model achieves state-of-the-art performance on SQ generation for the NQ dataset (20.1 BLEU-4). We further apply our model to out-of-domain news articles, evaluating with a QA system due to the lack of gold questions, and demonstrate that our model produces better SQs for news articles, with further confirmation via a human evaluation.
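The reported 20.1 BLEU-4 measures 1- to 4-gram overlap between generated and reference questions. The sketch below is an illustrative, smoothed sentence-level BLEU-4 in pure Python; it is not the paper's evaluation script, and the smoothing choice is an assumption.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams of length n in the token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 with add-one smoothing on each n-gram precision.

    An illustrative variant only; real evaluations typically use corpus-level
    BLEU via a standard toolkit.
    """
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clipped (modified) precision: count each candidate n-gram at most
        # as often as it appears in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))  # add-one smoothing
    # Brevity penalty discourages overly short candidates.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

cand = "what is the capital of france".split()
ref = "what is the capital of france".split()
print(round(bleu4(cand, ref), 2))  # 1.0 for an exact match
```

The geometric mean over 1- to 4-gram precisions is why BLEU-4 rewards longer exact phrase matches, not just word overlap.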

Thu Oct 08 2020
NLP
Multi-hop Inference for Question-driven Summarization
Question-driven summarization has recently been studied as an effective approach to summarizing source documents. We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG), to incorporate multi-hop reasoning.
Thu Oct 01 2020
NLP
Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary
QA-based methods directly measure a summary's information overlap with a reference. QAEval outperforms current state-of-the-art metrics on most evaluations using benchmark datasets.
Thu Mar 11 2021
NLP
Conversational Answer Generation and Factuality for Reading Comprehension Question-Answering
Tue Oct 08 2019
NLP
Generating Highly Relevant Questions
The neural seq2seq based question generation (QG) is prone to generating generic and undiversified questions. In this paper, we propose two methods to address the issue. By a partial copy mechanism, we prioritize words that are morphologically close to words in the input passage.
Thu Apr 23 2020
NLP
QURIOUS: Question Generation Pretraining for Text Generation
Pretraining and fine-tuning have become the dominant approaches to text generation. We propose question generation as a pretraining method. Our text generation models pretrained with this method are better at understanding input.
Fri Apr 10 2020
NLP
Towards Automatic Generation of Questions from Long Answers
Automatic question generation (AQG) has broad applicability in domains such as tutoring systems, conversational agents, healthcare literacy, and information retrieval. Existing efforts at AQG have been limited to short answer lengths of up to two or three sentences. Transformer-based methods
Mon Jun 12 2017
NLP
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms. Experiments on two machine translation tasks show these models to be superior in
Mon Apr 22 2019
NLP
The Curious Case of Neural Text Degeneration
Using likelihood as a training objective leads to high-quality models for a broad range of language understanding tasks, yet decoding strategies alone can dramatically affect the quality of machine-generated text. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation.
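Nucleus (top-p) sampling, as described above, samples from the smallest set of tokens whose cumulative probability exceeds a threshold p, after renormalizing. A minimal pure-Python sketch, with illustrative names and an illustrative toy distribution:

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Draw one token via nucleus (top-p) sampling.

    probs: dict mapping token -> probability (assumed to sum to 1).
    Keeps the smallest set of highest-probability tokens whose cumulative
    mass reaches p, renormalizes over that set, and samples from it.
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cum = [], 0.0
    for tok, pr in ranked:
        nucleus.append((tok, pr))
        cum += pr
        if cum >= p:  # smallest prefix of the ranking covering mass >= p
            break
    total = sum(pr for _, pr in nucleus)
    r = rng.random() * total  # sample within the renormalized nucleus
    for tok, pr in nucleus:
        r -= pr
        if r <= 0:
            return tok
    return nucleus[-1][0]  # guard against floating-point drift

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zzz": 0.05}
print(nucleus_sample(probs, p=0.8))  # draws only from {"the", "a"}
```

Unlike top-k sampling, the candidate set here adapts to the shape of the distribution: a peaked distribution yields a small nucleus, a flat one a large nucleus.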
Wed May 06 2020
NLP
Harvesting and Refining Question-Answer Pairs for Unsupervised QA
Question Answering (QA) has shown great success thanks to the availability of large-scale datasets and the effectiveness of neural models. In this work, we introduce two approaches to improve unsupervised QA.
Thu Oct 11 2018
NLP
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
BERT is designed to pre-train deep bidirectional representations from unlabeled text. It can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Tue Aug 21 2018
Artificial Intelligence
CoQA: A Conversational Question Answering Challenge
CoQA is a novel dataset for building Conversational Question Answering systems. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%).