Published on Wed Apr 20 2016

Dialog-based Language Learning

Jason Weston

A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on fixed training sets of labeled data. We study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner.

Abstract

A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision does not reflect how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of Weston et al. (2015) and large-scale question answering from Dodge et al. (2015). We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher's response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.
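The core idea, learning from the content of the teacher's reply rather than from an explicit reward, can be illustrated with a minimal sketch. This is not the paper's model (which uses memory networks with predictive lookahead); it is a toy with hypothetical dialogs where the teacher's textual reply happens to mention the correct answer, so the learner can mine supervision from the conversation itself:

```python
# Toy sketch (NOT the paper's model): supervision is implicit in the
# teacher's textual reply, with no reward signal anywhere.
import re

# Hypothetical mini-dialogs: (student's question, teacher's reply after
# the student guessed wrong). The reply *mentions* the correct answer.
dialogs = [
    ("where is mary?", "no, the answer is kitchen"),
    ("where is john?", "no, the answer is garden"),
    ("where is mary?", "no, the answer is kitchen"),
]

# Mine question -> answer pairs from the teacher's replies alone.
knowledge = {}
for question, reply in dialogs:
    match = re.search(r"the answer is (\w+)", reply)
    if match:
        knowledge[question] = match.group(1)

def answer(question):
    """Answer from what was mined out of the dialog, else admit ignorance."""
    return knowledge.get(question, "unknown")

print(answer("where is mary?"))  # kitchen
print(answer("where is john?"))  # garden
```

The point of the sketch is only that the dialog partner's response carries usable training signal; the paper's forward-prediction model exploits the same signal by learning to predict the teacher's reply, without any pattern-matching rules.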

Mon Oct 12 2020
Machine Learning
Human-centric Dialog Training via Offline Reinforcement Learning
How can we train a dialog model to produce better conversations by learning from human feedback, without the risk of humans teaching it harmful chat behaviors? We start by hosting models online, and gather human feedback from real-time, open-ended conversations, which we then use to train and improve the models.
Sat Nov 21 2015
Machine Learning
Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems
A long-term goal of machine learning is to build intelligent conversational agents. One recent popular approach is to train end-to-end models on a large amount of real dialog transcripts between humans. We provide a dataset covering 75k movie entities with 3.5M training examples.
Fri Apr 17 2020
Neural Networks
Show Us the Way: Learning to Manage Dialog from Demonstrations
Thu Apr 23 2020
Neural Networks
Learning Dialog Policies from Weak Demonstrations
Deep reinforcement learning is a promising approach to training a dialog manager. Current methods struggle with the large state and action spaces of multi-domain dialog systems. Reinforced Fine-tune Learning, an extension to DQfD, enables us to overcome the domain gap.
Thu Oct 31 2019
Machine Learning
Neural Assistant: Joint Action Prediction, Response Generation, and Latent Knowledge Reasoning
Neural Assistant is a single neural network model that takes conversation history and an external knowledge source as input and jointly produces both text response and action to be taken as output. The model learns to reason on the provided knowledge source with weak supervision signal coming from the text generation and action prediction tasks.
Wed Oct 09 2019
Artificial Intelligence
Alternating Recurrent Dialog Model with Large-scale Pre-trained Language Models
Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks. We propose a simple, general, and effective framework: the Alternating Roles Dialog Model (ARDM). ARDM models each speaker separately and takes advantage of large pre-trained language models.