Published on Tue Jul 14 2020

Learning Syllogism with Euler Neural-Networks

Tiansi Dong, Chengjiang Li, Christian Bauckhage, Juanzi Li, Stefan Wrobel, Armin B. Cremers
Abstract

Traditional neural networks represent everything as a vector and can thereby approximate a subset of logical reasoning to a certain degree. Because basic logical relations are better represented by topological relations between regions, we propose a novel neural network that represents everything as a ball and is able to learn topological configurations as an Euler diagram, hence the name Euler Neural-Network (ENN). The central vector of a ball inherits the representation power of a traditional neural network. ENN distinguishes four spatial statuses between balls: being disconnected, being partially overlapping, being part of, and being inversely part of. Within each status, ideal values are defined for efficient reasoning. A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises, from which logical conclusions can be deduced. In contrast to traditional neural networks, ENN can precisely represent all 24 different structures of syllogism. Two large datasets are created: one, extracted from WordNet-3.0, covers all types of syllogistic reasoning; the other, extracted from DBpedia, covers all family relations. Experimental results confirm the superior power of ENN in logical representation and reasoning. Datasets and source code are available upon request.
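To make the ball representation concrete, below is a minimal sketch of how the four spatial statuses between two balls can be decided from their central vectors and radii. The function name and decision thresholds are illustrative assumptions based on standard ball geometry; the paper's ideal values and ReSU-based optimization are not reproduced here.

```python
import numpy as np

def spatial_status(center_a, radius_a, center_b, radius_b):
    """Classify the topological relation between two balls
    (illustrative thresholds, not the paper's exact formulation)."""
    d = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b))
    if d >= radius_a + radius_b:
        return "disconnected"           # the balls share no point
    if d + radius_a <= radius_b:
        return "part of"                # ball A lies entirely inside ball B
    if d + radius_b <= radius_a:
        return "inverse part of"        # ball B lies entirely inside ball A
    return "partially overlapping"      # the boundaries cross

# Hypothetical example: 'dog' is part of 'animal'.
dog    = (np.array([0.2, 0.1]), 0.3)
animal = (np.array([0.0, 0.0]), 1.0)
print(spatial_status(*dog, *animal))    # -> part of
```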

Mon Apr 11 2016
Artificial Intelligence
Symbolic Knowledge Extraction using Łukasiewicz Logics
This work describes a methodology that combines logic-based systems and connectionist systems. Our approach uses finite truth-valued Łukasiewicz logic, wherein every connective can be defined by a neuron. This allowed the injection of first-order formulas into a network.
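As an aside on the claim that every connective can be defined by a neuron: the standard Łukasiewicz connectives are clipped affine maps, so each is computable by a single unit with fixed weights, a bias, and a [0, 1] clipping activation. The sketch below uses the textbook Łukasiewicz operations; the cited paper's network construction may differ in detail.

```python
def clip01(x):                # the unit's activation: clamp to [0, 1]
    return max(0.0, min(1.0, x))

def luk_and(x, y):            # strong conjunction: weights (1, 1), bias -1
    return clip01(x + y - 1.0)

def luk_or(x, y):             # strong disjunction: weights (1, 1), bias 0
    return clip01(x + y)

def luk_implies(x, y):        # implication: weights (-1, 1), bias 1
    return clip01(1.0 - x + y)

def luk_not(x):               # negation: weight -1, bias 1
    return 1.0 - x

# In finite truth-valued Łukasiewicz logic with n = 3 truth values,
# the values are {0, 0.5, 1}:
print(luk_and(0.5, 0.5))      # -> 0.0
print(luk_implies(1.0, 0.5))  # -> 0.5
```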
Wed May 31 2017
Artificial Intelligence
Propositional Knowledge Representation and Reasoning in Restricted Boltzmann Machines
The idea of representing symbolic knowledge in connectionist networks has been well received. This can establish a foundation for the integration of scalable learning and sound reasoning. We propose a novel method to represent propositional formulas in restricted Boltzmann machines.
Mon Apr 11 2016
Artificial Intelligence
Knowledge Extraction and Knowledge Integration governed by Łukasiewicz Logics
The development of machine learning in particular has been strongly conditioned by the lack of an appropriate interface layer between deduction, abduction and induction. Here we assume that such an interface for AI emerges from an adequate Neural-Symbolic integration. This integration is made for a universe of discourse described on a topos governed by a many-valued logic.
Sun Sep 13 2020
Artificial Intelligence
Neural Networks Enhancement through Prior Logical Knowledge
KENN (Knowledge Enhanced Neural Networks) injects prior knowledge into a neural network model. In KENN, clauses are used to generate a new final layer of the neural network that modifies the initial predictions based on that knowledge.
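The following is a simplified sketch of that idea: a final layer that nudges the network's pre-activations toward satisfying a clause, concentrating the correction on the literal the network already favors most. The softmax-weighted update and the `enhance` function are illustrative assumptions; KENN's actual boost functions and learned clause weights are defined in the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def enhance(pre_activations, clause_literals, clause_weight=1.0):
    """Boost the pre-activations of a clause's literals so the final
    predictions move toward satisfying the clause (illustrative only)."""
    z = np.asarray(pre_activations, dtype=float).copy()
    idx = np.asarray(clause_literals)
    z[idx] += clause_weight * softmax(z[idx])
    return z

# Hypothetical example: a clause over predicates 0 and 2 of three outputs.
z = np.array([2.0, -1.0, 0.5])
print(enhance(z, clause_literals=[0, 2]))   # predicates 0 and 2 are raised
```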
Thu Jun 11 2020
NLP
Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge
Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. In an open-domain setup, it is desirable to tap into the vast reservoir of implicit knowledge already encoded in pre-trained LMs.
Thu Nov 26 2020
NLP
Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations
Traditional symbolic reasoning engines have a few major drawbacks. They rely on exact matching (unification) of logical terms. They also need a precompiled rule-base of knowledge (the "knowledge acquisition" problem). To address these issues, we devise a novel FOL-based reasoner, Braid.