Published on Wed May 06 2020

What are the Goals of Distributional Semantics?

Guy Emerson

Distributional semantic models have become a mainstay in NLP. However, assessing long-term progress requires explicit goals. Future progress will require balancing the often conflicting demands of linguistic expressiveness and computational tractability.

Abstract

Distributional semantic models have become a mainstay in NLP, providing useful features for downstream tasks. However, assessing long-term progress requires explicit long-term goals. In this paper, I take a broad linguistic perspective, looking at how well current models can deal with various semantic challenges. Given stark differences between models proposed in different subfields, a broad perspective is needed to see how we could integrate them. I conclude that, while linguistic insights can guide the design of model architectures, future progress will require balancing the often conflicting demands of linguistic expressiveness and computational tractability.

Wed May 15 2019
NLP
What do you learn from context? Probing for sentence structure in contextualized word representations
Contextualized representation models such as ELMo have recently achieved state-of-the-art results on downstream NLP tasks. We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks.
Sat Nov 04 2017
NLP
Towards Linguistically Generalizable NLP Systems: A Workshop and Shared Task
This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems. The goal of the workshop was to bring together researchers in NLP and linguistics with a shared task aimed at testing the generalizability of NLP systems.
Wed Apr 14 2021
NLP
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
We pre-train masked language models (MLMs) on sentences with randomly shuffled word order and show that these models still achieve high accuracy after fine-tuning on many downstream tasks.
Sun Aug 14 2016
NLP
Proceedings of the LexSem+Logics Workshop 2016
LexSem+Logics 2016 combines the 1st Workshop on Lexical Semantics for Lesser-Resourced Languages and the 3rd Workshop on Logics and Ontologies. Lexical semantics continues to play an important role in driving research into NLP tasks.
Sat Nov 09 2019
Artificial Intelligence
Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
An increasing number of works in natural language processing have addressed the effect of bias on predicted outcomes. These works have been conducted in isolation, without a unifying framework to organize efforts within the field. Research that focuses on bias symptoms rather than their underlying origins risks limiting the development of effective countermeasures.
Tue Mar 02 2021
NLP
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
There is an ongoing debate in the NLP community whether modern language models contain linguistic knowledge. We show that language models that are significantly compressed but perform well on their pretraining objectives retain good scores when probed for linguistic structures.