Published on Sat May 02 2020

Knowledge Base Completion: Baseline Strikes Back (Again)

Prachi Jain, Sushant Rathi, Mausam, Soumen Chakrabarti

ComplEx, when trained using all available negative samples, gives near state-of-the-art performance on all the datasets. We also highlight how various multiplicative KBC methods, recently proposed in the literature, benefit from this training regime.

Abstract

Knowledge Base Completion (KBC) has been a very active area lately. Several recent KBC papers propose architectural changes, new training methods, or even new formulations. KBC systems are usually evaluated on standard benchmark datasets: FB15k, FB15k-237, WN18, WN18RR, and Yago3-10. Most existing methods train with a small number of negative samples for each positive instance in these datasets to save computational costs. This paper discusses how recent developments allow us to use all available negative samples for training. We show that ComplEx, when trained using all available negative samples, gives near state-of-the-art performance on all the datasets. We call this approach ComplEx-V2. We also highlight how various multiplicative KBC methods, recently proposed in the literature, benefit from this training regime and become indistinguishable in terms of performance on most datasets. Our work calls for a reassessment of their individual value in light of these findings.
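To make the training regime concrete, here is a minimal PyTorch-style sketch (not the authors' code) of ComplEx scoring in which each (head, relation) query is scored against every entity and trained with full cross-entropy, so all non-gold entities act as negative samples. The embedding dimension, dataset sizes, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ComplEx(nn.Module):
    """Minimal ComplEx scorer: a (head, relation) query is scored
    against all entities, so every non-gold entity is a negative."""
    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        # Real and imaginary parts of entity / relation embeddings.
        self.ent_re = nn.Embedding(n_entities, dim)
        self.ent_im = nn.Embedding(n_entities, dim)
        self.rel_re = nn.Embedding(n_relations, dim)
        self.rel_im = nn.Embedding(n_relations, dim)

    def forward(self, heads, rels):
        h_re, h_im = self.ent_re(heads), self.ent_im(heads)
        r_re, r_im = self.rel_re(rels), self.rel_im(rels)
        # Re(<h, r, conj(t)>) expanded into real arithmetic and scored
        # against all entity embeddings at once -> (batch, n_entities).
        a = h_re * r_re - h_im * r_im
        b = h_re * r_im + h_im * r_re
        return a @ self.ent_re.weight.t() + b @ self.ent_im.weight.t()

# One training step with "all available negative samples": plain
# cross-entropy over the full entity vocabulary (gold tail = target).
model = ComplEx(n_entities=14541, n_relations=237, dim=200)  # FB15k-237-sized, illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

heads = torch.tensor([0, 1]); rels = torch.tensor([3, 5]); tails = torch.tensor([2, 7])
scores = model(heads, rels)                         # (batch, n_entities)
loss = nn.functional.cross_entropy(scores, tails)   # softmax over all entities
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Scoring against the full entity list reduces negative sampling to a single dense matrix multiplication, which is what makes this 1-vs-all regime computationally practical.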

Tue May 30 2017
Artificial Intelligence
Knowledge Base Completion: Baselines Strike Back
The accuracy of almost all models published on the FB15k can be outperformed by an appropriately tuned baseline. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyper-parameter tuning.
Thu Sep 14 2017
Artificial Intelligence
KBLRN : End-to-End Learning of Knowledge Base Representations with Latent, Relational, and Numerical Features
KBLRN is a framework for end-to-end learning of knowledge base representations from latent, relational, and numerical features. KBLRN integrates these feature types with a novel combination of neural representation learning and probabilistic product-of-experts models.
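As a rough illustration of the product-of-experts combination mentioned above (not KBLRN's actual components): each feature type can be viewed as an expert producing a score for every candidate entity, and multiplying the experts' probabilities amounts to summing their scores before a softmax. The expert tensors below are hypothetical placeholders.

```python
import torch

def product_of_experts(expert_scores):
    """Combine per-expert scores over candidate entities.
    Multiplying expert probabilities == summing their logits."""
    combined = torch.stack(expert_scores, dim=0).sum(dim=0)
    return torch.log_softmax(combined, dim=-1)   # (batch, n_candidates)

# Hypothetical experts for a batch of 4 queries over 100 candidate tails.
latent     = torch.randn(4, 100)   # embedding-based scores
relational = torch.randn(4, 100)   # relational / rule-feature scores
numerical  = torch.randn(4, 100)   # numerical-attribute scores
log_probs = product_of_experts([latent, relational, numerical])
```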
Mon Aug 30 2021
NLP
Knowledge Base Completion Meets Transfer Learning
The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases. The proposed method works for both canonicalized knowledge bases and uncanonicalized or open knowledge bases, i.e., where more than one copy of a real-world entity or relation may exist.
Fri Dec 20 2019
Machine Learning
The State of Knowledge Distillation for Classification
We survey various knowledge distillation (KD) strategies for simple classification tasks. We implement a set of techniques that claim state-of-the-art accuracy. We observe that appropriately tuned classical distillation in combination with a data augmentation training scheme gives an orthogonal improvement.
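For reference, "classical distillation" here means the standard Hinton-style objective: a temperature-softened KL term against the teacher blended with ordinary cross-entropy on the labels. The temperature and weighting below are illustrative assumptions, not the survey's recommended settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classical KD: soften both distributions with temperature T,
    match them with KL divergence, and mix in hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale soft-target gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```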
Tue Jun 19 2018
Artificial Intelligence
Canonical Tensor Decomposition for Knowledge Base Completion
The problem of Knowledge Base Completion can be framed as a 3rd-order binary tensor completion problem. In this light, the Canonical Tensor Decomposition (CP) seems like a natural solution. However, current implementations of CP are lagging behind their competitors.
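Framing KBC as 3rd-order binary tensor completion means each fact (head, relation, tail) is a 1-entry in an |E| x |R| x |E| tensor, which CP scores as a trilinear product of factor vectors. The sketch below illustrates that scoring function; the rank and the separate subject/object embeddings are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CP(nn.Module):
    """CP (CANDECOMP/PARAFAC) scorer: score(h, r, t) = sum_k h_k * r_k * t_k,
    with separate subject and object embeddings for each entity."""
    def __init__(self, n_entities, n_relations, rank):
        super().__init__()
        self.subj = nn.Embedding(n_entities, rank)
        self.rel = nn.Embedding(n_relations, rank)
        self.obj = nn.Embedding(n_entities, rank)

    def forward(self, heads, rels):
        # Score each (head, relation) query against all candidate tails.
        return (self.subj(heads) * self.rel(rels)) @ self.obj.weight.t()
```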
Sun Nov 10 2019
NLP
A Re-evaluation of Knowledge Graph Completion Methods
Knowledge Graph Completion (KGC) aims at automatically predicting missing links for large-scale knowledge graphs. The paper proposes an evaluation protocol that is robust to bias in the model.