Published on Wed Oct 02 2019

Cracking the Contextual Commonsense Code: Understanding Commonsense Reasoning Aptitude of Deep Contextual Representations

Jeff Da, Jungo Kasai

Pretrained deep contextual representations have advanced the state-of-the-art on various commonsense NLP tasks, but we lack a concrete understanding of the capabilities of these models. We investigate and challenge several aspects of BERT's representation abilities.

Abstract

Pretrained deep contextual representations have advanced the state-of-the-art on various commonsense NLP tasks, but we lack a concrete understanding of the capabilities of these models. Thus, we investigate and challenge several aspects of BERT's commonsense representation abilities. First, we probe BERT's ability to classify various object attributes, demonstrating that BERT shows a strong ability to encode various commonsense features in its embedding space, but is still deficient in many areas. Next, we show that, by augmenting BERT's pretraining data with additional data related to the deficient attributes, we are able to improve performance on a downstream commonsense reasoning task while using a minimal amount of data. Finally, we develop a method of fine-tuning knowledge graph embeddings alongside BERT and show the continued importance of explicit knowledge graphs.
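
To make the probing idea concrete, below is a minimal sketch of the kind of attribute probe the abstract describes: BERT is frozen, each object phrase is embedded, and a single linear classifier is trained to predict one attribute. This is an illustration under assumptions rather than the authors' exact protocol; the "is round" attribute, the toy phrases, and the mean-pooling choice are hypothetical stand-ins, and the sketch assumes PyTorch and the Hugging Face transformers library.

```python
# Minimal attribute-probe sketch (assumptions: PyTorch + Hugging Face
# transformers; the "is round" attribute, the toy phrases, and mean
# pooling are illustrative stand-ins, not the paper's exact setup).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # BERT stays frozen; only the probe below is trained

# Hypothetical toy data: does the object have the attribute "is round"?
phrases = ["an orange", "a wheel", "a brick", "a ruler"]
labels = torch.tensor([1.0, 1.0, 0.0, 0.0])

def embed(texts):
    """Mean-pool BERT's final hidden states into one vector per phrase."""
    enc = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state              # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1).float()      # (batch, seq, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # (batch, 768)

# A single linear layer: if it separates the classes, the attribute is
# (approximately) linearly encoded in BERT's embedding space.
probe = torch.nn.Linear(bert.config.hidden_size, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()

feats = embed(phrases)
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(feats).squeeze(-1), labels)
    loss.backward()
    opt.step()
```

Held-out probe accuracy, computed per attribute, is the kind of signal that indicates which commonsense features are linearly recoverable from BERT's embedding space and which are deficient. The same frozen-feature setup could in principle be extended by concatenating knowledge-graph entity embeddings before the classifier, in the spirit of the joint fine-tuning the abstract describes.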

Related Papers

Mon Dec 21 2020
Artificial Intelligence
CSKG: The CommonSense Knowledge Graph
Sources of commonsense knowledge support applications in natural language understanding, computer vision, and knowledge graphs. Yet, their different foci, modeling approaches, and sparse overlap make integration difficult. We consolidate seven key sources into a first integrated CommonSense Knowledge Graph (CSKG).

Mon Oct 12 2020
NLP
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
The development of new commonsense knowledge graphs (CSKG) has been central to advances in natural language understanding. We propose ATOMIC 2020, a new CSKG of general-purpose commonsense knowledge containing knowledge not readily available in language models.

Sat Nov 28 2020
Artificial Intelligence
A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI). Recent research in the Natural Language Processing (NLP) community has demonstrated significant progress in this problem setting. Despite this progress, there is still a lack of understanding of the nature…

Tue Jun 22 2021
NLP
Do Language Models Perform Generalizable Commonsense Inference?
Recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs). There is a lack of understanding of their generalization to multiple CKGs, unseen relations, and novel entities. Future work should investigate how to improve the transferability and induction of commonsense mining from…

Mon Apr 12 2021
Artificial Intelligence
Relational world knowledge representation in contextual language models: A review

Mon Feb 22 2021
Artificial Intelligence
Wider Vision: Enriching Convolutional Neural Networks via Alignment to External Knowledge Bases