Published on Wed May 02 2018

Hypothesis Only Baselines in Natural Language Inference

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme

We propose a hypothesis only baseline for diagnosing Natural Language Inference (NLI). This approach is able to significantly outperform a majority class baseline across a number of NLI datasets.

Abstract

We propose a hypothesis only baseline for diagnosing Natural Language Inference (NLI). Especially when an NLI dataset assumes inference is occurring based purely on the relationship between a context and a hypothesis, it follows that assessing entailment relations while ignoring the provided context is a degenerate solution. Yet, through experiments on ten distinct NLI datasets, we find that this approach, which we refer to as a hypothesis-only model, is able to significantly outperform a majority class baseline across a number of NLI datasets. Our analysis suggests that statistical irregularities may allow a model to perform NLI in some datasets beyond what should be achievable without access to the context.
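The idea behind the baseline is simple: train a classifier that sees only the hypothesis, never the premise, and compare it against always predicting the most frequent label. Below is a minimal sketch of that comparison. It is not the paper's actual architecture (the experiments there use sentence-encoder models over full NLI datasets such as SNLI and MultiNLI); the toy examples, bag-of-words features, and logistic regression here are purely illustrative assumptions.

```python
# Sketch: hypothesis-only classifier vs. majority-class baseline.
# Toy data and model choices are assumptions for illustration only.
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy NLI-style hypotheses; real experiments use SNLI/MultiNLI-scale data.
train_hypotheses = [
    "A man is sleeping.",
    "Nobody is outside.",
    "A person is outdoors.",
    "Someone is playing a sport.",
    "The man is waiting for a bus.",
    "A woman is on her way to work.",
]
train_labels = ["contradiction", "contradiction",
                "entailment", "entailment",
                "neutral", "neutral"]

test_hypotheses = ["Nobody is playing.", "A person is outside."]
test_labels = ["contradiction", "entailment"]

# Majority-class baseline: always predict the most frequent training label.
majority_label, _ = Counter(train_labels).most_common(1)[0]
majority_acc = sum(label == majority_label for label in test_labels) / len(test_labels)

# Hypothesis-only model: the premise is never shown to the classifier,
# so any accuracy above the majority baseline reflects cues in the
# hypotheses themselves (i.e., statistical irregularities in the data).
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_hypotheses)
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
hyp_only_acc = clf.score(vectorizer.transform(test_hypotheses), test_labels)

print(f"majority-class accuracy:  {majority_acc:.2f}")
print(f"hypothesis-only accuracy: {hyp_only_acc:.2f}")
```

On a real NLI dataset, a gap between these two numbers is the diagnostic signal the paper proposes: it indicates the labels can be predicted to some degree without ever reading the context.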

Tue Jan 19 2021
NLP
Exploring Lexical Irregularities in Hypothesis-Only Models of Natural Language Inference
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) is the task of predicting the entailment relation between a pair of sentences. The task has been described as a valuable testing ground for the development of semantic representations.
Tue Jul 09 2019
NLP
On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference
Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases. Adversarial learning may help models ignore biases and spurious correlations in data.
Tue Jul 09 2019
NLP
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases. We propose two probabilistic methods to build models that are more robust to such biases. Our methods predict the probability of a premise given a hypothesis and NLI label.
Thu Mar 05 2020
Artificial Intelligence
HypoNLI: Exploring the Artificial Patterns of Hypothesis-only Bias in Natural Language Inference
Recent studies have shown that models trained on natural language inference (NLI) datasets can make correct predictions while completely ignoring the premise. In this work, we derive adversarial examples targeting these hypothesis-only biases and explore ways to mitigate such bias.
Mon Oct 12 2020
NLP
OCNLI: Original Chinese Natural Language Inference
Fri Sep 13 2019
NLP
End-to-End Bias Mitigation by Modelling Biases in Corpora
Recent studies have shown that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases. We propose two learning strategies to train neural models that are more robust to such biases.