Published on Tue Oct 10 2017

Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering

Seunghyun Yoon, Joongbo Shin, Kyomin Jung

We propose a novel end-to-end neural architecture for ranking candidate answers. It adapts a hierarchical recurrent neural network and a latent topic clustering module. We evaluate our models on the Ubuntu Dialogue Corpus and a consumer electronics domain question answering dataset related to Samsung products.

Abstract

In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded into a vector representation from the word level to the chunk level to effectively capture its entire meaning. In particular, by adopting the hierarchical structure, our model shows very small performance degradation on longer text comprehension, while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from the target samples. This clustering module is useful for any text-related task, as it allows each data sample to find its nearest topic cluster, helping the neural network model analyze the entire dataset. We evaluate our models on the Ubuntu Dialogue Corpus and a consumer electronics domain question answering dataset related to Samsung products. The proposed model achieves state-of-the-art results for ranking question-answer pairs.
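The two ideas in the abstract, hierarchical (word-to-chunk) encoding and latent topic clustering, can be illustrated with a minimal NumPy sketch. This is not the paper's model: mean-pooling stands in for the recurrent encoders, and the topic bank, dimensions, and chunk size are all illustrative assumptions. The key mechanism shown is the soft assignment of an encoded text to K latent topic vectors, whose weighted summary is concatenated to the representation.

```python
import numpy as np

def encode_hierarchical(word_vecs, chunk_size=3):
    # Stand-in for the hierarchical RNN: pool word vectors into
    # chunk vectors, then pool chunks into one text vector.
    chunks = [word_vecs[i:i + chunk_size].mean(axis=0)
              for i in range(0, len(word_vecs), chunk_size)]
    return np.mean(chunks, axis=0)

def latent_topic_features(text_vec, topic_bank):
    # Soft-assign the text vector to K latent topic vectors and
    # append the weighted topic summary to the representation.
    scores = topic_bank @ text_vec                   # (K,)
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over topics
    topic_summary = weights @ topic_bank             # (d,)
    return np.concatenate([text_vec, topic_summary])

rng = np.random.default_rng(0)
words = rng.normal(size=(7, 4))   # 7 word vectors of dimension 4
topics = rng.normal(size=(5, 4))  # K = 5 latent topic vectors
rep = latent_topic_features(encode_hierarchical(words), topics)
print(rep.shape)  # (8,) — text vector plus topic summary
```

In the paper's setting the resulting representation would feed a ranking layer that scores each question-answer pair; here it simply doubles the feature dimension with topic information.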

Fri Sep 29 2017
Neural Networks
A Neural Comprehensive Ranker (NCR) for Open-Domain Question Answering
This paper proposes a novel neural machine reading model for open-domain question answering at scale. Existing machine comprehension models typically assume that a short piece of relevant text containing answers is already identified and given to the models. This assumption is not realistic for building a large-scale question answering system.
Fri Jun 07 2019
NLP
RankQA: Neural Question Answering with Answer Re-Ranking
The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process. RankQA extends the conventional process with a third stage that performs an additional answer re-ranking. It leverages different features that are directly extracted from the QA pipeline.
Wed Mar 04 2020
NLP
A Study on Efficiency, Accuracy and Document Structure for Answer Sentence Selection
An essential task of most Question Answering (QA) systems is to re-rank the answer candidates. Most approaches to the task use huge neural models, such as BERT, or complex attentive architectures. We argue that by exploiting the intrinsic structure of the original rank together with an
Wed Mar 23 2016
Neural Networks
Recurrent Neural Network Encoder with Attention for Community Question Answering
We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains.
Mon Sep 02 2019
Machine Learning
Answering questions by learning to rank -- Learning to rank by answering questions
This article describes a method which can be used to semantically rank documents from Wikipedia or similar natural language corpora. It also proposes a model employing the semantic ranking that holds the first place in two of the most popular leaderboards for answering multiple-choice questions.
Tue Aug 27 2019
Machine Learning
Incremental Improvement of a Question Answering System by Re-ranking Answer Candidates using Machine Learning
We focus on improving deployed QA systems that do not allow re-training. Our re-ranking approach learns a similarity function using n-gram based features. On average, the mean reciprocal rank improves by 9.15%.
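Mean reciprocal rank, the metric this entry reports a 9.15% improvement on, is the average over questions of one divided by the rank of the first correct answer. A short worked example:

```python
def mean_reciprocal_rank(ranks):
    # ranks: 1-based rank of the first correct answer per question
    return sum(1.0 / r for r in ranks) / len(ranks)

# First correct answers at ranks 1, 2, and 4:
# (1 + 1/2 + 1/4) / 3 = 1.75 / 3
print(mean_reciprocal_rank([1, 2, 4]))  # prints 0.5833333333333334
```

A re-ranker that moves correct answers toward rank 1 raises this average, which is why MRR is the natural metric for the re-ranking stage described above.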