Published on Thu Feb 15 2018

DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training

Nathan Kallus

Abstract

We study optimal covariate balance for causal inferences from observational data when rich covariates and complex relationships necessitate flexible modeling with neural networks. Standard approaches such as propensity weighting and matching/balancing fail in such settings due to miscalibrated propensity nets and inappropriate covariate representations, respectively. We propose a new method based on adversarial training of a weighting and a discriminator network that effectively addresses this methodological gap. This is demonstrated through new theoretical characterizations of the method as well as empirical results using both fully connected architectures to learn complex relationships and convolutional architectures to handle image confounders, showing how this new method can enable strong causal analyses in these challenging settings.
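The adversarial balancing idea described in the abstract can be sketched in a few lines. The following is a minimal illustrative toy, not the paper's DeepMatch architecture: a linear-softmax weighting "net" and a logistic-regression discriminator stand in for the deep networks, the data is synthetic, and all names and hyperparameters are assumptions for illustration. The weighting net reweights control units so that the discriminator cannot tell the weighted control sample from the treated sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic imbalanced observational data (illustrative, not from the paper):
# treated units are shifted in covariate space relative to controls.
n_c, n_t, d = 200, 100, 2
X_c = rng.normal(0.0, 1.0, size=(n_c, d))  # control covariates
X_t = rng.normal(0.7, 1.0, size=(n_t, d))  # treated covariates

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax_weights(theta):
    """Normalized weights on the control units (nonnegative, sum to 1)."""
    s = X_c @ theta
    e = np.exp(s - s.max())
    return e / e.sum()

theta = np.zeros(d)  # weighting-net parameters (linear, for illustration)
phi = np.zeros(d)    # discriminator parameters (logistic, for illustration)
lr_w, lr_d = 0.5, 0.5

for step in range(500):
    w = softmax_weights(theta)

    # Discriminator step: descend its logistic loss for separating
    # treated units (label 1) from weighted control units (label 0).
    p_t = sigmoid(X_t @ phi)
    p_c = sigmoid(X_c @ phi)
    grad_phi = X_t.T @ (p_t - 1.0) / n_t + X_c.T @ (w * p_c)
    phi -= lr_d * grad_phi

    # Weighting step: ascend the same loss w.r.t. theta, i.e. shift weight
    # onto controls that the discriminator currently mistakes for treated.
    p_c = sigmoid(X_c @ phi)
    loss_i = -np.log(1.0 - p_c + 1e-12)          # per-control discriminator loss
    # Gradient through the softmax: d(sum_i w_i * loss_i)/d(theta).
    grad_theta = X_c.T @ (w * (loss_i - w @ loss_i))
    theta += lr_w * grad_theta

w = softmax_weights(theta)
raw_gap = np.linalg.norm(X_c.mean(axis=0) - X_t.mean(axis=0))
bal_gap = np.linalg.norm(w @ X_c - X_t.mean(axis=0))
print(raw_gap, bal_gap)  # the weighted covariate-mean gap should shrink
```

At the adversarial equilibrium the discriminator can no longer profit from any linear direction, which in this toy forces the weighted control mean toward the treated mean; the deep versions in the paper play the same game over richer function classes.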

Tue Apr 28 2020
Machine Learning
MultiMBNN: Matched and Balanced Causal Inference with Neural Networks
Confounding is a typical hazard, where the context affects both the treatment assignment and the response. MultiMBNN outperforms state-of-the-art algorithms for CI such as TARNet and PM.
Mon Jun 19 2017
Machine Learning
Deep Counterfactual Networks with Propensity-Dropout
We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem. We model a subject's potential outcomes using a deep multitask network.
Tue Apr 30 2019
Machine Learning
Adversarial Balancing-based Representation Learning for Causal Effect Inference with Observational Data
Learning causal effects from observational data greatly benefits a variety of domains. For instance, one could estimate the impact of a new drug on specific individuals to assist the clinic and improve the survival rate. To overcome these challenges, we propose a neural network framework called Adversarial Balancing-based representation learning for Causal Effect Inference (ABCEI).
Mon Nov 23 2020
Machine Learning
Balance Regularized Neural Network Models for Causal Effect Estimation
Estimating individual and average treatment effects from observational data is an important problem in many domains such as healthcare and e-commerce. We advocate balance regularization of multi-head neural network architectures.
Wed Nov 21 2018
Machine Learning
Estimation of Individual Treatment Effect in Latent Confounder Models via Adversarial Learning
Estimating the individual treatment effect (ITE) from observational data is essential in medicine. Most previous work relies on the unconfoundedness assumption. If there are unmeasurable (latent) confounders, then confounding bias is introduced.
Thu Jan 02 2020
Machine Learning
A Loss-Function for Causal Machine-Learning
Causal machine-learning is about predicting the net effect (true lift) of treatments. Given data of a treatment group and a control group, it is similar to a standard supervised-learning problem. Unfortunately, there is no similarly well-defined loss function due to the lack of