Published on Mon Oct 14 2019

Whatcha lookin' at? DeepLIFTing BERT's Attention in Question Answering

Ekaterina Arkhangelskaia, Sourav Dutta

Abstract

There has been great success recently in tackling challenging NLP tasks with neural networks that are pre-trained on large corpora and then fine-tuned on task data. In this paper, we investigate one such model, BERT fine-tuned for question answering, with the aim of analyzing why it achieves significantly better results than other models. We run DeepLIFT, an attribution method that scores each input feature's contribution to a prediction relative to a reference input, on the model's predictions and examine the outcomes to monitor how attention values shift across the input. We also cluster the results to look for patterns resembling human reasoning, depending on the kind of input paragraph and question the model is answering.
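
The abstract summarizes the pipeline (attribute a QA prediction with DeepLIFT, then cluster the per-token results), but the paper's own code is not reproduced here. A minimal sketch of the first step might look like the following, assuming Captum's LayerDeepLift implementation of DeepLIFT, a HuggingFace SQuAD-fine-tuned BERT checkpoint, a [PAD]-token reference input, and the predicted answer-start logit as the attribution target; all of these are illustrative choices, not necessarily the authors' setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from captum.attr import LayerDeepLift

# Assumption: a publicly available SQuAD-fine-tuned BERT checkpoint;
# the paper does not specify which fine-tuned model it analyzes.
MODEL = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL)
model.eval()


class StartLogitWrapper(torch.nn.Module):
    """Expose start-position logits as a plain tensor so Captum can attribute them."""

    def __init__(self, qa_model):
        super().__init__()
        self.qa_model = qa_model

    def forward(self, input_ids, attention_mask):
        return self.qa_model(input_ids=input_ids,
                             attention_mask=attention_mask).start_logits


wrapped = StartLogitWrapper(model)

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare around 1600."
enc = tokenizer(question, context, return_tensors="pt")

# Reference input (assumed choice): [PAD] everywhere except the special tokens.
baseline_ids = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)
for special_id in (tokenizer.cls_token_id, tokenizer.sep_token_id):
    baseline_ids[enc["input_ids"] == special_id] = special_id

# Target: the logit of the predicted answer-start position.
with torch.no_grad():
    start_idx = wrapped(enc["input_ids"], enc["attention_mask"]).argmax(dim=-1).item()

# Attribute that logit back to the embedding layer with DeepLIFT.
deeplift = LayerDeepLift(wrapped, model.bert.embeddings)
attributions = deeplift.attribute(
    inputs=enc["input_ids"],
    baselines=baseline_ids,
    additional_forward_args=(enc["attention_mask"],),
    target=start_idx,
)

# One relevance score per token: sum the attribution over the hidden dimension.
token_scores = attributions.sum(dim=-1).squeeze(0)
for token, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                        token_scores.tolist()):
    print(f"{token:>15s}  {score:+.4f}")
```

Per-token score vectors produced this way, one per question-context pair, could then be stacked and clustered (for instance with k-means) to look for the reasoning-like patterns the abstract mentions.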