Published on Sat Apr 21 2018

A Stable and Effective Learning Strategy for Trainable Greedy Decoding

Yun Chen, Victor O. K. Li, Kyunghyun Cho, Samuel R. Bowman

Abstract

Beam search is a widely used approximate search strategy for neural network decoders, and it generally outperforms simple greedy decoding on tasks like machine translation. However, this improvement comes at substantial computational cost. In this paper, we propose a flexible new method that allows us to reap nearly the full benefits of beam search at almost no additional computational cost. The method revolves around a small neural network actor that is trained to observe and manipulate the hidden state of a previously-trained decoder. To train this actor network, we introduce the use of a pseudo-parallel corpus built using the output of beam search on a base model, ranked by a target quality metric like BLEU. Our method is inspired by earlier work on this problem, but requires no reinforcement learning, and can be trained reliably on a range of models. Experiments on three parallel corpora and three architectures show that the method yields substantial improvements in translation quality and speed over each base system.
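The pseudo-parallel corpus construction described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: `beam_search` is assumed to be a callable wrapping a trained base model, and `overlap_score` is a crude unigram-precision stand-in for the sentence-level BLEU ranking metric.

```python
def overlap_score(hyp, ref):
    # Crude stand-in for a sentence-level quality metric like BLEU:
    # unigram precision of the hypothesis against the reference.
    hyp_toks, ref_toks = hyp.split(), ref.split()
    if not hyp_toks:
        return 0.0
    matches = sum(1 for tok in hyp_toks if tok in ref_toks)
    return matches / len(hyp_toks)

def build_pseudo_parallel(corpus, beam_search, metric=overlap_score):
    # corpus: list of (source, reference) pairs.
    # beam_search: callable mapping a source sentence to a list of
    # candidate translations from the previously-trained base model.
    # For each source, keep the beam candidate that scores highest
    # under the target metric; the resulting (source, best-candidate)
    # pairs form the pseudo-parallel corpus used to train the actor.
    pseudo = []
    for src, ref in corpus:
        candidates = beam_search(src)
        best = max(candidates, key=lambda cand: metric(cand, ref))
        pseudo.append((src, best))
    return pseudo
```

In the paper's setting the actor network would then be trained (without reinforcement learning) to steer the decoder's hidden state toward producing these high-scoring outputs under greedy decoding.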