Published on Wed May 27 2020

How to do Physics-based Learning

Michael Kellman, Michael Lustig, Laura Waller

The goal of this tutorial is to explain step-by-step how to implement physics-based learning for the rapid prototyping of a computational imaging system. We provide an open-source PyTorch implementation of a physics-based network and training procedure.

Abstract

The goal of this tutorial is to explain step-by-step how to implement physics-based learning for the rapid prototyping of a computational imaging system. We provide a basic overview of physics-based learning, the construction of a physics-based network, and its reduction to practice. Specifically, we advocate exploiting the auto-differentiation functionality twice, once to build a physics-based network and again to perform physics-based learning. Thus, the user need only implement the forward model process for their system, speeding up prototyping time. We provide an open-source PyTorch implementation of a physics-based network and training procedure for a generic sparse recovery problem.
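The "auto-differentiation twice" idea in the abstract can be sketched in a few lines of PyTorch. The example below is an illustrative toy, not the authors' released code: it unrolls proximal gradient descent for a generic sparse recovery problem, where the user supplies only the data-fidelity term of the forward model. Autograd is used once inside the network to obtain the data-fidelity gradient at each unrolled iteration, and a second time during training to learn the step size and threshold. All variable names and problem sizes are assumptions.

```python
import torch

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm (sparsity prior).
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

class UnrolledPGD(torch.nn.Module):
    """Toy physics-based network: unrolled proximal gradient descent."""

    def __init__(self, A, num_iters=20):
        super().__init__()
        self.A = A  # forward model: y = A x (illustrative linear system)
        self.num_iters = num_iters
        # Learnable parameters, updated by the *second* use of autograd.
        self.step = torch.nn.Parameter(torch.tensor(0.1))
        self.lam = torch.nn.Parameter(torch.tensor(0.05))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1], requires_grad=True)
        for _ in range(self.num_iters):
            # The user implements only the forward-model data fidelity;
            # autograd (first use) supplies its gradient.
            fidelity = 0.5 * ((self.A @ x - y) ** 2).sum()
            grad, = torch.autograd.grad(fidelity, x, create_graph=True)
            x = soft_threshold(x - self.step * grad, self.lam)
        return x

# Synthetic sparse recovery problem (assumed sizes: 30 measurements, 100 unknowns).
torch.manual_seed(0)
A = torch.randn(30, 100) / 30 ** 0.5
x_true = torch.zeros(100)
x_true[[3, 40, 77]] = torch.tensor([1.0, -0.5, 2.0])
y = A @ x_true

net = UnrolledPGD(A)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
initial_loss = torch.nn.functional.mse_loss(net(y), x_true).item()
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(y), x_true)
    loss.backward()  # second use of autograd: learn step and lam
    opt.step()
final_loss = torch.nn.functional.mse_loss(net(y), x_true).item()
```

Note `create_graph=True`: it keeps the inner gradient computation differentiable so that the outer training loss can back-propagate through all unrolled iterations.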

Mon May 22 2017
Computer Vision
Unrolled Optimization with Deep Priors
A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution. Traditionally, hand-crafted priors and iterative optimization methods have been used to solve such problems.
Wed Sep 16 2020
Machine Learning
Deep Learning in Photoacoustic Tomography: Current approaches and future directions
Biomedical photoacoustic tomography can provide high resolution 3D soft tissue images based on the optical absorption. The need for rapid image formation and the constraints of a clinical workflow are presenting new image reconstruction challenges.
Fri Jun 26 2020
Computer Vision
A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding
We introduce a general framework for designing and training neural network layers. The forward passes can be interpreted as solving non-smooth convex optimization problems. We focus on convex games, solved by local agents represented by the nodes of a graph.
Fri Jun 05 2020
Machine Learning
Scalable Plug-and-Play ADMM with Convergence Guarantees
Plug-and-play priors (PnP) is a broadly applicable methodology for exploiting statistical priors specified as denoisers. Current PnP algorithms are impractical in large-scale settings due to their heavy computational and memory requirements. This work addresses this issue by proposing an incremental...
Tue Jul 06 2021
Machine Learning
Solution of Physics-based Bayesian Inverse Problems with Deep Generative Priors
Inverse problems are notoriously difficult to solve because they can have no solutions, multiple solutions, or have solutions that vary significantly in response to small perturbations in measurements. Bayesian inference, which poses an inverse problem as a stochastic inference problem, addresses these difficult problems.
Sun Jan 13 2019
Machine Learning
Neumann Networks for Inverse Problems in Imaging
Traditional inverse problem solvers minimize a cost function consisting of a data-fit term and a regularizer. We present an end-to-end, data-driven method of solving inverse problems inspired by the Neumann series. The Neumann network outperforms traditional inverse problem solution methods.