Published on Fri Apr 25 2014

Multitask Learning for Sequence Labeling Tasks

Arvind Agarwal, Saurabh Kataria

Abstract

In this paper, we present a learning method for sequence labeling tasks in which each example sequence has multiple label sequences. Our method learns multiple models, one model for each label sequence. Each model computes the joint probability of all label sequences given the example sequence. Although each model considers all label sequences, its primary focus is a single label sequence, which makes each model specific to the task associated with that primary label. These models are learned simultaneously, with learning transfer among them facilitated through explicit parameter sharing. We evaluate the proposed method on two applications and show that it significantly outperforms the state-of-the-art method.
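The paper's models compute a joint probability over all label sequences, and the details of that formulation are not given in the abstract. As a rough, hedged illustration of the general idea of explicit parameter sharing among simultaneously trained task-specific sequence labelers, the sketch below uses a shared encoder with one output head per label sequence; the class names, dimensions, and training loop are invented for illustration and do not reproduce the authors' actual model.

```python
import torch
import torch.nn as nn

class SharedMultitaskTagger(nn.Module):
    """Hypothetical sketch: shared encoder with one head per label sequence (task)."""
    def __init__(self, vocab_size, num_labels_per_task, embed_dim=64, hidden_dim=64):
        super().__init__()
        # Parameters shared by all tasks (explicit parameter sharing).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # One task-specific head per label sequence.
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden_dim, n) for n in num_labels_per_task
        )

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))   # (batch, seq_len, 2*hidden)
        return [head(h) for head in self.heads]   # per-task label scores

# Toy usage: one example sequence with two label sequences (two tasks).
model = SharedMultitaskTagger(vocab_size=100, num_labels_per_task=[5, 3])
tokens = torch.randint(0, 100, (1, 7))            # hypothetical token ids
gold = [torch.randint(0, 5, (1, 7)), torch.randint(0, 3, (1, 7))]

loss_fn = nn.CrossEntropyLoss()
scores = model(tokens)
# Simultaneous training: per-task losses are summed, so gradients from
# every task update the shared encoder parameters.
loss = sum(loss_fn(s.reshape(-1, s.size(-1)), y.reshape(-1))
           for s, y in zip(scores, gold))
loss.backward()
```

Note that this sketch trains one network with several heads, whereas the paper describes learning multiple models, each centered on a different primary label sequence while still modeling all label sequences jointly; the sketch only conveys the parameter-sharing aspect.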