Published on Fri Mar 31 2017

Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders

Tiancheng Zhao, Ran Zhao, Maxine Eskenazi

Recent neural encoder-decoder models often generate dull and generic responses. We present a novel framework that captures discourse-level diversity in the encoder. The training procedure is further improved by introducing a bag-of-word loss.

Abstract

While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
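As a rough illustration of the objective the abstract describes, the following is a minimal PyTorch sketch of a CVAE training loss with the bag-of-word term. The module names (`prior_net`, `recog_net`, `bow_projection`) and sizes are hypothetical placeholders, not the authors' implementation, and the context/response encodings are assumed to come from encoders defined elsewhere.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialogCVAE(nn.Module):
    """Sketch of a CVAE objective with a bag-of-word auxiliary loss.

    ctx_hidden / resp_hidden are assumed to be encoder states produced
    elsewhere; prior_net, recog_net, and bow_projection are illustrative
    names, not the authors' code.
    """
    def __init__(self, vocab_size, hidden_size=300, latent_size=200):
        super().__init__()
        self.prior_net = nn.Linear(hidden_size, 2 * latent_size)      # p(z|c)
        self.recog_net = nn.Linear(2 * hidden_size, 2 * latent_size)  # q(z|c,x)
        self.bow_projection = nn.Linear(latent_size + hidden_size, vocab_size)

    @staticmethod
    def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
        # KL( q(z|c,x) || p(z|c) ) for diagonal Gaussians
        return 0.5 * torch.sum(
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1.0, dim=-1)

    def loss(self, ctx_hidden, resp_hidden, resp_bow):
        # resp_bow: multi-hot (batch, vocab) marking words in the response
        mu_p, logvar_p = self.prior_net(ctx_hidden).chunk(2, dim=-1)
        mu_q, logvar_q = self.recog_net(
            torch.cat([ctx_hidden, resp_hidden], dim=-1)).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterize
        # Bag-of-word loss: z (plus context) must predict every response
        # word order-free, which discourages the decoder from ignoring z.
        bow_logits = self.bow_projection(torch.cat([z, ctx_hidden], dim=-1))
        bow_loss = -(resp_bow * F.log_softmax(bow_logits, dim=-1)).sum(dim=-1)
        kl = self.gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
        # The full ELBO also adds the decoder's reconstruction NLL given z.
        return (kl + bow_loss).mean()
```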

Thu May 31 2018
Neural Networks
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
DialogWAE is a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space.
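To make the contrast with the closed-form KL term concrete, here is a hedged sketch of a GAN in latent space in the spirit of DialogWAE: a critic learns to separate prior latent samples (conditioned on the context) from posterior samples (conditioned on context and response), while the two sampling networks are trained to close that gap. All names and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

latent_size, hidden_size = 200, 300

prior_gen = nn.Linear(hidden_size + latent_size, latent_size)  # z ~ p(z|c)
post_gen = nn.Linear(2 * hidden_size, latent_size)             # z ~ q(z|c,x)
critic = nn.Sequential(                                        # scores (c, z) pairs
    nn.Linear(hidden_size + latent_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, 1))

def wae_gan_losses(ctx, resp):
    # ctx, resp: (batch, hidden_size) utterance encodings from some encoder
    noise = torch.randn(ctx.size(0), latent_size)
    z_prior = prior_gen(torch.cat([ctx, noise], dim=-1))
    z_post = post_gen(torch.cat([ctx, resp], dim=-1))
    # Critic's estimate of the gap between posterior and prior samples
    w_gap = (critic(torch.cat([ctx, z_post], dim=-1)).mean()
             - critic(torch.cat([ctx, z_prior], dim=-1)).mean())
    critic_loss = -w_gap   # critic widens the gap (updated in its own step)
    match_loss = w_gap     # prior/posterior networks shrink it
    return critic_loss, match_loss
```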
Sat Mar 28 2020
NLP
Variational Transformers for Diverse Response Generation
The Variational Transformer (VT) is a feed-forward sequence model. It combines the parallelizability and global receptive field of the Transformer with the stochastic nature of the CVAE. Experimental results show that our models improve over standard Transformers and other baselines.
Mon Feb 20 2017
NLP
Latent Variable Dialogue Models and their Diversity
We present a dialogue generation model that directly captures the variability in possible responses to a given input. Experiments show that our model generates more diverse outputs than baseline models.
Mon Jun 07 2021
NLP
Generating Relevant and Coherent Dialogue Responses using Self-separated Conditional Variational AutoEncoders
The Conditional Variational AutoEncoder (CVAE) effectively increases the diversity and informativeness of responses in open-ended dialogue generation tasks. The Self-separated CVAE proposed here introduces group information to regularize the sampled latent variables, which enhances the relevance and coherence of responses.
Tue Sep 18 2018
NLP
Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity
We present three enhancements to existing encoder-decoder models for conversational agents. We introduce a measure of coherence, defined as the similarity between the dialogue context and the generated response. We filter our training corpora based on this measure and train a response generator using a conditional …
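An illustrative take on the coherence measure described in this summary: cosine similarity between averaged word embeddings of the context and the response. The paper works with pretrained embeddings; `embeddings` below is a hypothetical {word: vector} lookup, and the filtering band in the comment is an assumed example, not the paper's thresholds.

```python
import numpy as np

def sentence_vector(tokens, embeddings, dim=300):
    # Average the embeddings of in-vocabulary tokens
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def coherence(context_tokens, response_tokens, embeddings):
    # Cosine similarity between context and response sentence vectors
    c = sentence_vector(context_tokens, embeddings)
    r = sentence_vector(response_tokens, embeddings)
    denom = np.linalg.norm(c) * np.linalg.norm(r)
    return float(c @ r / denom) if denom > 0 else 0.0

# Filtering: keep only context-response pairs whose coherence falls
# inside a chosen band, e.g. 0.3 <= coherence(c, r, emb) <= 0.9.
```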
Fri Nov 22 2019
NLP
A Discrete CVAE for Response Generation on Short-Text Conversation
Some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional log-likelihood with a continuous latent variable. With different sampled latent variables, the model is expected to generate diverse responses.
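This paper replaces the continuous latent with a discrete one. The summary does not say which gradient estimator the authors use, so purely as background: one common way to make a discrete latent variable trainable end-to-end is the Gumbel-Softmax relaxation, sketched below.

```python
import torch
import torch.nn.functional as F

def sample_discrete_latent(logits, tau=1.0, hard=True):
    # logits: (batch, num_categories) unnormalized scores over latent classes.
    # hard=True returns one-hot samples in the forward pass while keeping
    # gradients from the soft relaxation (straight-through estimator).
    return F.gumbel_softmax(logits, tau=tau, hard=hard)

logits = torch.randn(4, 10)             # e.g. 10 candidate latent categories
z = sample_discrete_latent(logits)      # (4, 10) one-hot latent codes
```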