Published on Thu Jun 06 2019

Learning to regularize with a variational autoencoder for hydrologic inverse analysis

Daniel O'Malley, John K. Golden, Velimir V. Vesselinov

Abstract

Inverse problems often involve matching observational data using a physical model that takes a large number of parameters as input. These problems tend to be under-constrained and require regularization to impose additional structure on the solution in parameter space. A central difficulty in regularization is turning a complex conceptual model of this additional structure into a functional mathematical form to be used in the inverse analysis. In this work we propose a method of regularization involving a machine learning technique known as a variational autoencoder (VAE). The VAE is trained to map a low-dimensional set of latent variables with a simple structure to the high-dimensional parameter space that has a complex structure. We train a VAE on unconditioned realizations of the parameters for a hydrological inverse problem. These unconditioned realizations neither rely on the observational data used to perform the inverse analysis nor require any forward runs of the physical model, thus making the computational cost of generating the training data minimal. The central benefit of this approach is that regularization is then performed on the latent variables from the VAE, which can be regularized simply. A second benefit of this approach is that the VAE reduces the number of variables in the optimization problem, thus making gradient-based optimization more computationally efficient when adjoint methods are unavailable. After performing regularization and optimization on the latent variables, the VAE then decodes the problem back to the original parameter space. Our approach constitutes a novel framework for regularization and optimization, readily applicable to a wide range of inverse problems. We call the approach RegAE.
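
The latent-space step at the heart of RegAE can be pictured with a short sketch. This is a minimal illustration under assumed ingredients, not the authors' implementation: the decoder, forward model, observation data, and noise level below are hypothetical stand-ins, and the Python/PyTorch setting is chosen only for brevity.

import torch

# Hypothetical stand-ins: in practice the decoder is the trained VAE decoder
# mapping latent variables z to a high-dimensional parameter field p (e.g. a
# heterogeneous permeability field), and forward_model is the physical
# (hydrologic) model mapping p to predicted observations.
decoder = torch.nn.Sequential(torch.nn.Linear(20, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 10000))
def forward_model(p):
    return p[:25]                    # placeholder observation operator

obs = torch.randn(25)                # observed data (placeholder)
sigma = 0.1                          # assumed observation noise level

# Optimize in latent space: the VAE is trained so that z has a simple
# (standard normal) structure, so regularization reduces to a ||z||^2 penalty.
z = torch.zeros(20, requires_grad=True)
opt = torch.optim.LBFGS([z], max_iter=100)

def closure():
    opt.zero_grad()
    predictions = forward_model(decoder(z))
    loss = torch.sum((predictions - obs) ** 2) / sigma ** 2 + torch.sum(z ** 2)
    loss.backward()
    return loss

opt.step(closure)
p_estimate = decoder(z).detach()     # decode back to the original parameter space

Because the optimization runs over 20 latent variables rather than the 10,000 decoded parameters, gradient-based optimization stays inexpensive even without adjoint methods, mirroring the dimension-reduction benefit described in the abstract.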

Related Papers

Thu Dec 05 2019
Machine Learning
Solving Bayesian Inverse Problems via Variational Autoencoders
UQ-VAE is a flexible, adaptive, hybrid data- and model-informed framework capable of rapidly modeling the posterior distribution of the unknown parameter of interest. The training procedure makes full use of the information typically available in scientific inverse problems.
Sun Jul 19 2020
Machine Learning
Semi-Conditional Variational Auto-Encoder for Flow Reconstruction and Uncertainty Quantification from Limited Observations
We present a new data-driven model for reconstructing nonlinear flow from sparse observations. The model is a variant of the conditional variational auto-encoder (CVAE) that allows for probabilistic reconstruction.
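
As a rough illustration of the architecture class named here (not the paper's specific network; all dimensions and layer choices below are assumptions), a conditional VAE concatenates the sparse observations with both the encoder input and the latent code, so that decoding samples of z yields probabilistic reconstructions consistent with those observations.

import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Minimal conditional VAE: encode (flow field, observations) -> latent z,
    decode (z, observations) -> reconstructed flow field."""
    def __init__(self, field_dim=1024, obs_dim=16, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(field_dim + obs_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, field_dim))

    def forward(self, field, obs):
        h = self.enc(torch.cat([field, obs], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(torch.cat([z, obs], dim=-1)), mu, logvar

# After training, decoding prior samples of z together with the observed values
# gives an ensemble of plausible reconstructions, i.e. a probabilistic estimate.
model = CVAE()
obs = torch.randn(1, 16)
ensemble = [model.dec(torch.cat([torch.randn(1, 8), obs], dim=-1)) for _ in range(10)]
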
Sat Jul 25 2020
Machine Learning
Learning Variational Data Assimilation Models and Solvers
This paper addresses variational data assimilation from a learning point of view. Data assimilation aims to reconstruct the time evolution of a state from a series of observations. Using the automatic differentiation tools embedded in deep learning frameworks, we introduce end-to-end neural network architectures for data assimilation.
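
The key mechanism, differentiating a variational assimilation cost with the same automatic differentiation machinery used to train neural networks, can be sketched as follows. The toy linear dynamics, observation operator, and cost weights are hypothetical; the paper itself goes further and learns network representations of the model and solver end-to-end.

import torch

# Toy setup (all hypothetical): linear dynamics x_{k+1} = M x_k, partial noisy
# observations y_k = H x_k + noise, and a Gaussian background state x_b.
n = 4
M = torch.eye(n) + 0.05 * torch.randn(n, n)
H = torch.eye(n)[:2]                        # observe only the first two components
x_b = torch.zeros(n)                        # background (prior) state
y = [torch.randn(2) for _ in range(5)]      # observations along the trajectory

x0 = x_b.clone().requires_grad_(True)       # initial state to be estimated
opt = torch.optim.Adam([x0], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    cost = torch.sum((x0 - x_b) ** 2)       # background term
    x = x0
    for yk in y:                            # roll the dynamics forward
        x = M @ x
        cost = cost + torch.sum((H @ x - yk) ** 2)   # observation misfit terms
    cost.backward()                         # gradient via automatic differentiation
    opt.step()
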
Sun Jun 28 2020
Machine Learning
Variational Autoencoding of PDE Inverse Problems
Purely data-driven machine learning approaches often disregard prior knowledge and physical laws. We fold the mechanistic model into a flexible data-driven surrogate to arrive at a physically structured decoder network.
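
A schematic of that idea, under assumptions rather than the paper's actual model: the decoder first maps latent variables to physical parameters with a small network and then pushes them through a differentiable mechanistic solver, so every decoded sample respects the (here crudely mocked-up) physics.

import torch
import torch.nn as nn

def mechanistic_model(kappa):
    """Hypothetical differentiable physics stand-in: a few explicit steps of a
    1-D diffusion-like update driven by the conductivity field kappa."""
    u = torch.zeros_like(kappa)
    for _ in range(50):
        u = u + 0.01 * kappa * (torch.roll(u, 1) - 2 * u + torch.roll(u, -1) + 1.0)
    return u

class PhysicsStructuredDecoder(nn.Module):
    """Latent z -> physical parameters (data-driven) -> observable field (physics)."""
    def __init__(self, latent_dim=8, param_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, param_dim), nn.Softplus())

    def forward(self, z):
        kappa = self.net(z)                 # flexible surrogate for the parameters
        return mechanistic_model(kappa)     # mechanistic model folded into the decoder

field = PhysicsStructuredDecoder()(torch.randn(8))
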
Thu Apr 30 2020
Machine Learning
Data-Space Inversion Using a Recurrent Autoencoder for Time-Series Parameterization
Data-space inversion (DSI) and related procedures are applicable for data assimilation in subsurface flow settings. DSI operates in a Bayesian setting and provides posterior samples of the data vector. The new DSI methodology, built on a recurrent autoencoder for time-series parameterization, is shown to consistently outperform existing approaches.
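
The time-series parameterization can be pictured as a recurrent autoencoder of roughly the following form (a generic sketch with assumed sizes, not the network used in the paper): an LSTM encoder compresses each simulated data time series to a short code, the decoder reconstructs the series from the code, and DSI then works with the low-dimensional codes instead of the raw series.

import torch
import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    """LSTM encoder compresses a time series to a low-dimensional code;
    an LSTM decoder reconstructs the series from that code."""
    def __init__(self, n_features=3, code_dim=8, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_code = nn.Linear(hidden, code_dim)
        self.from_code = nn.Linear(code_dim, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                              # x: (batch, time, n_features)
        _, (h, _) = self.encoder(x)
        code = self.to_code(h[-1])                     # (batch, code_dim)
        seed = self.from_code(code).unsqueeze(1).repeat(1, x.shape[1], 1)
        recon, _ = self.decoder(seed)
        return self.out(recon), code

model = RecurrentAutoencoder()
series = torch.randn(16, 100, 3)                       # 16 series, 100 steps, 3 quantities
reconstruction, codes = model(series)
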
Mon Jun 01 2020
Machine Learning
Analog ensemble data assimilation and a method for constructing analogs with variational autoencoders
A new method of constructing analogs with variational autoencoders (VAEs) is proposed. Analog ensemble data assimilation using the constructed analogs is found to perform as well as a full ensemble square root filter.
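
One concrete (and assumed, simplified) reading of "constructed analogs": encode the current state with a trained VAE, perturb it in latent space, and decode to obtain an ensemble of analog states. The components below are hypothetical stand-ins for networks trained on a long archive of model states.

import torch
import torch.nn as nn

state_dim, latent_dim = 500, 16
encoder_mean = nn.Linear(state_dim, latent_dim)        # stand-in for a trained encoder
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, state_dim))     # stand-in for a trained decoder

def construct_analogs(state, n_analogs=20, spread=0.5):
    """Encode the state, perturb in latent space, decode to analog states."""
    with torch.no_grad():
        z = encoder_mean(state)
        perturbed = z + spread * torch.randn(n_analogs, latent_dim)
        return decoder(perturbed)

analog_ensemble = construct_analogs(torch.randn(state_dim))   # shape (20, state_dim)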