Published on Sun Jul 15 2018

Learning Probabilistic Logic Programs in Continuous Domains

Stefanie Speichert, Vaishak Belle

The field of statistical relational learning aims to unify logic and probability to reason and learn from data. A major limitation of this exciting landscape is that much of the work is limited to finite-domain discrete probability distributions. We take the first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data.

Abstract

The field of statistical relational learning aims at unifying logic and probability to reason and learn from data. Perhaps the most successful paradigm in the field is probabilistic logic programming: the enabling of stochastic primitives in logic programming, which is now increasingly seen to provide a declarative background to complex machine learning applications. While many systems offer inference capabilities, the more significant challenge is that of learning meaningful and interpretable symbolic representations from data. In that regard, inductive logic programming and related techniques have paved much of the way for the last few decades. Unfortunately, a major limitation of this exciting landscape is that much of the work is limited to finite-domain discrete probability distributions. Recently, a handful of systems have been extended to represent and perform inference with continuous distributions. The problem, of course, is that classical solutions for inference are either restricted to well-known parametric families (e.g., Gaussians) or resort to sampling strategies that provide correct answers only in the limit. When it comes to learning, moreover, inducing representations remains entirely open, other than "data-fitting" solutions that force-fit points to the aforementioned parametric families. In this paper, we take the first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data, without being pigeon-holed to a fixed set of distribution families. Our key insight is to leverage techniques from piecewise polynomial function approximation theory, yielding a principled way to learn and compositionally construct density functions. We test the framework and discuss the learned representations.
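To make the key insight concrete, here is a rough sketch of piecewise polynomial density estimation in plain NumPy. It illustrates the general idea only, not the authors' algorithm: it fits one low-degree polynomial per piece to a histogram of the data and renormalises, and unlike a principled construction it does not enforce non-negativity of the fitted pieces (the evaluator simply clips below zero).

```python
import numpy as np

def fit_piecewise_density(samples, num_pieces=8, degree=3):
    """Fit a crude piecewise polynomial approximation to a 1-D density.

    Bins the samples into a fine histogram, fits one low-degree
    polynomial per piece to the histogram heights, then renormalises
    so the pieces integrate to one overall.
    """
    lo, hi = samples.min(), samples.max()
    edges = np.linspace(lo, hi, num_pieces + 1)
    heights, bin_edges = np.histogram(samples, bins=200, range=(lo, hi), density=True)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    pieces = []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (centers >= a) & (centers < b)
        poly = np.poly1d(np.polyfit(centers[mask], heights[mask], degree))
        pieces.append((a, b, poly))
    # Renormalise: divide every piece by the total mass of the fit.
    total = sum(np.polyint(p)(b) - np.polyint(p)(a) for a, b, p in pieces)
    return [(a, b, p / total) for a, b, p in pieces]

def density(pieces, x):
    """Evaluate the piecewise density at a point x."""
    for a, b, p in pieces:
        if a <= x <= b:
            return max(p(x), 0.0)  # clip: polyfit does not enforce non-negativity
    return 0.0

# Example: approximate a Gamma density from samples.
rng = np.random.default_rng(0)
pieces = fit_piecewise_density(rng.gamma(shape=2.0, scale=1.5, size=10_000))
print(density(pieces, 2.0))
```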

Sat Jul 14 2018
Artificial Intelligence
Tractable Querying and Learning in Hybrid Domains via Sum-Product Networks
Probabilistic representations, such as Bayesian and Markov networks, are fundamental to much of statistical machine learning. Tractable learning is a powerful new paradigm that attempts to learn distributions that can support efficient probabilistic querying. By leveraging local structure, sum-product networks (SPNs) can capture high tree-width representations.
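For context, the toy sketch below shows why querying an SPN is efficient: a joint probability falls out of a single bottom-up pass over weighted sum (mixture) and product (factorisation) nodes. The node classes here are hypothetical, not the API of any actual SPN library.

```python
import math

class Leaf:
    """Bernoulli leaf over a single binary variable."""
    def __init__(self, var, p_true):
        self.var, self.p_true = var, p_true
    def value(self, assignment):
        return self.p_true if assignment[self.var] else 1.0 - self.p_true

class Product:
    """Factorisation over children with disjoint variable scopes."""
    def __init__(self, children):
        self.children = children
    def value(self, assignment):
        return math.prod(c.value(assignment) for c in self.children)

class Sum:
    """Mixture of children; weights sum to one."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children
    def value(self, assignment):
        return sum(w * c.value(assignment) for w, c in self.weighted_children)

# Mixture of two product distributions over binary variables x1, x2.
spn = Sum([
    (0.6, Product([Leaf("x1", 0.9), Leaf("x2", 0.2)])),
    (0.4, Product([Leaf("x1", 0.1), Leaf("x2", 0.7)])),
])
# One bottom-up pass yields an exact joint probability.
print(spn.value({"x1": True, "x2": False}))
```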
Wed Mar 20 2013
Artificial Intelligence
Symbolic Probabilistic Inference with Continuous Variables
Symbolic Probabilistic Inference (SPI) provides an algorithm for resolving general queries in Bayesian networks. Unlike traditional Bayesian network inference algorithms, the SPI algorithm is goal-directed. We extend the SPI algorithm to handle Bayesian networks made up of continuous variables.
Mon Dec 12 2011
Artificial Intelligence
Inference in Probabilistic Logic Programs with Continuous Random Variables
Probabilistic Logic Programming (PLP) is aimed at combining statistical and logical knowledge representation and inference. PLP frameworks extend traditional logic programming semantics to a distribution semantics. However, the inference techniques used in these works rely on enumerating sets of explanations for a query answer.
Fri Jun 07 2019
Machine Learning
Automatic Reparameterisation of Probabilistic Programs
Probabilistic programming has emerged as a powerful paradigm in statistics, applied science, and machine learning. But the performance of inference algorithms can be affected by the parameterisation used to express a model. We argue that mechanisms available in recent modeling frameworks can implement non-centring and…
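The non-centring transformation mentioned in this snippet is easy to state concretely. Below is a minimal sketch in plain NumPy, not tied to any particular probabilistic programming framework: rather than sampling theta directly from Normal(mu, sigma), one samples a standard normal variable and transforms it deterministically, which yields the same marginal but decouples the latent variable from the scale parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 2.5

# Centred parameterisation: theta ~ Normal(mu, sigma).
theta_centred = rng.normal(mu, sigma, size=1000)

# Non-centred parameterisation: sample a standard normal "innovation"
# z and shift-and-scale it. The marginal of theta is identical, but z
# is a priori independent of (mu, sigma), which often helps
# gradient-based inference when sigma is small.
z = rng.normal(0.0, 1.0, size=1000)
theta_noncentred = mu + sigma * z
```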
Thu Jun 28 2018
Artificial Intelligence
Polynomial-time probabilistic reasoning with partial observations via implicit learning in probability logics
In this work we consider the use of bounded-degree fragments of the "sum-of-squares" logic as a probabilistic logic. We propose to use such fragments to answer questions about whether a given probability distribution satisfies a system of constraints and bounds on expected values.
Thu Jan 17 2019
Artificial Intelligence
Learning Credal Sum-Product Networks
Tractable learning is a powerful new paradigm that attempts to learn distributions that support efficient probabilistic querying. By leveraging local structure, sum-product networks (SPNs) can capture high tree-width models with many hidden layers, essentially a deep architecture.