Published on Sat May 01 2010

Perturbation Resilience and Superiorization of Iterative Algorithms

Y. Censor, R. Davidi, G. T. Herman

Abstract

Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a "superiorized version" of it that retains its computational efficiency but nevertheless goes a long way towards solving an optimization problem. This is possible to do if the original algorithm is "perturbation resilient," which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that drive the process in the direction of the optimizer of the given function. After these intuitive ideas are presented in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
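To make the superiorization idea concrete, here is a minimal Python/NumPy sketch. It superiorizes a cyclic half-space projection method for a linear feasibility problem, interleaving bounded perturbations with a summable (geometric) step-size sequence. For brevity the target function is the squared norm rather than the total variation used in the paper, and all names, constants, and the step-size schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : a.y <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def superiorized_projections(A, b, phi_grad, x0, n_iter=200, kernel=0.99, scale=1.0):
    """Sketch of a superiorized sequential projection method.

    Basic algorithm: cyclic projections onto the half-spaces a_i . x <= b_i.
    Superiorization: before each sweep, take a bounded perturbation step in
    the direction -phi_grad(x) with summable step sizes scale * kernel**k,
    steering the iterates toward points with smaller phi while retaining the
    feasibility-seeking behaviour of the perturbation-resilient base method.
    """
    x = x0.astype(float).copy()
    for k in range(n_iter):
        # Perturbation (superiorization) step: nonascending direction for phi.
        g = phi_grad(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - (scale * kernel**k) * g / norm
        # Basic feasibility-seeking step: one cyclic sweep of projections.
        for a_i, b_i in zip(A, b):
            x = project_halfspace(x, a_i, b_i)
    return x

# Tiny illustration: find a point in {x : A x <= b} with small squared norm.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
b = A @ np.ones(3) + 1.0          # feasible by construction
phi_grad = lambda x: 2.0 * x      # gradient of phi(x) = ||x||^2
x = superiorized_projections(A, b, phi_grad, x0=10 * np.ones(3))
print("max violation:", max(0.0, np.max(A @ x - b)), " phi:", x @ x)
```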

Wed Jun 09 2021
Machine Learning
Avoiding Traps in Nonconvex Problems
Iterative projection methods may become trapped at non-solutions when the constraints are nonconvex. Two kinds of parameters are available to help avoid this behavior, and this study gives examples of both.
Sat May 09 2015
Machine Learning
Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence
We propose a randomized second-order method for optimization known as the Newton Sketch. It is based on performing an approximate Newton step using a randomly projected or sub-sampled Hessian. We also describe extensions of our methods to programs involving convex constraints.
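As a rough illustration of the sub-sampled-Hessian variant mentioned above, the following Python sketch applies it to l2-regularized logistic regression. The problem, sampling scheme, and parameters are assumptions chosen for illustration; the paper's Newton Sketch, based on randomized projections of a Hessian square root, is more general than this.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_newton_logreg(X, y, m=200, n_iter=20, reg=1e-3, seed=0):
    """Sub-sampled Newton steps for l2-regularized logistic regression.

    The gradient is exact, but the Hessian is built from only m randomly
    sampled rows of X, keeping the per-iteration cost close to linear in
    the number of examples.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n + reg * w
        # Sub-sample rows to form an approximate Hessian.
        idx = rng.choice(n, size=min(m, n), replace=False)
        Xs, ps = X[idx], p[idx]
        weights = ps * (1.0 - ps)                          # diagonal Hessian weights
        H = (Xs * weights[:, None]).T @ Xs / len(idx) + reg * np.eye(d)
        w = w - np.linalg.solve(H, grad)                   # approximate Newton step
    return w

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
w_true = rng.normal(size=10)
y = (sigmoid(X @ w_true) > rng.uniform(size=2000)).astype(float)
w = subsampled_newton_logreg(X, y)
print("training accuracy:", np.mean((sigmoid(X @ w) > 0.5) == y))
```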
Sun Jun 04 2017
Machine Learning
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices.
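A minimal instance of this family, assuming the Euclidean norm (the positive definite matrix taken as the identity) and a distribution over random row subsets, reduces to a block randomized Kaczmarz, or "sketch-and-project," iteration. The Python sketch below is illustrative only and uses assumed problem sizes and parameters.

```python
import numpy as np

def sketch_and_project(A, b, sketch_size=1, n_iter=500, seed=0):
    """Minimal sketch-and-project iteration for a consistent system A x = b.

    Each step draws a random sketch (here: a random subset of rows, i.e. the
    identity norm matrix and coordinate-vector sketches, which recovers block
    randomized Kaczmarz) and projects the current iterate onto the sketched
    constraint set {x : A_S x = b_S}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=sketch_size, replace=False)
        As, bs = A[idx], b[idx]
        # Euclidean projection onto {x : As x = bs}.
        r = As @ x - bs
        x = x - As.T @ np.linalg.lstsq(As @ As.T, r, rcond=None)[0]
    return x

# Toy usage: a consistent random system.
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
b = A @ x_true
x = sketch_and_project(A, b, sketch_size=5, n_iter=2000)
print("residual norm:", np.linalg.norm(A @ x - b))
```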
Tue Mar 19 2019
Machine Learning
Convergence Analysis of Inexact Randomized Iterative Methods
In this paper we present a convergence rate analysis of inexact variants of several randomized iterative methods. A common feature of these methods is that in their update rule a certain sub-problem needs to be solved exactly. We relax this requirement by allowing the sub-problem to be solved inexactly.
Thu Nov 12 2015
Machine Learning
Random Multi-Constraint Projection: Stochastic Gradient Methods for Convex Optimization with Many Constraints
Consider convex optimization problems subject to a large number of constraints. At every iteration, the algorithms sample a number of projection points onto randomly selected small subsets of all constraints. We prove the almost sure convergence of these algorithms.
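The following Python sketch shows one way such a scheme can look: a diminishing-step-size (sub)gradient step on the objective followed by projections onto a few randomly sampled half-space constraints. The specific objective, step sizes, and sampling scheme are assumptions for illustration, not the paper's exact algorithms.

```python
import numpy as np

def random_projection_sgd(grad_f, A, b, x0, n_iter=5000, step=0.05, n_proj=3, seed=0):
    """Stochastic-gradient step plus projections onto a random constraint subset.

    Each iteration takes a (sub)gradient step on the objective and then
    projects onto a few randomly selected half-spaces a_i . x <= b_i out of
    the full constraint set, rather than onto their full intersection.
    """
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    x = x0.astype(float).copy()
    for k in range(1, n_iter + 1):
        x = x - (step / np.sqrt(k)) * grad_f(x)        # diminishing step size
        for i in rng.choice(m, size=n_proj, replace=False):
            viol = A[i] @ x - b[i]
            if viol > 0:                               # project only if violated
                x = x - (viol / (A[i] @ A[i])) * A[i]
    return x

# Toy usage: minimize ||x - c||^2 over many random half-spaces.
rng = np.random.default_rng(3)
A = rng.normal(size=(500, 10))
b = np.ones(500)
c = 5 * np.ones(10)
x = random_projection_sgd(lambda x: 2 * (x - c), A, b, x0=np.zeros(10))
print("objective:", np.sum((x - c) ** 2),
      "max violation:", max(0.0, np.max(A @ x - b)))
```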
Wed Nov 22 2017
Machine Learning
Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers
Many optimization algorithms converge to stationary points. The Run-and-Inspect Method adds an "inspect" phase to existing algorithms. The inspection samples a set of points within a given radius of the current point.
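A minimal sketch of the run-and-inspect idea, wrapped around plain gradient descent with a simple random-sampling inspection step; the objective, sampling scheme, and parameters below are assumptions, and the paper's inspection procedure and R-local optimality guarantees are more specific than this illustration.

```python
import numpy as np

def run_and_inspect(f, grad_f, x0, radius=1.0, n_samples=100,
                    run_iters=200, step=0.01, max_rounds=20, seed=0):
    """Run-and-inspect loop around plain gradient descent.

    "Run": ordinary gradient descent for a fixed number of iterations.
    "Inspect": sample points in a box of half-width `radius` around the
    current point; if any has a lower objective value, restart the run
    from it, otherwise stop and return the current point.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_rounds):
        for _ in range(run_iters):                    # run phase
            x = x - step * grad_f(x)
        # Inspect phase: random sampling around the current point.
        candidates = x + radius * rng.uniform(-1, 1, size=(n_samples, x.size))
        values = np.apply_along_axis(f, 1, candidates)
        best = candidates[np.argmin(values)]
        if f(best) < f(x):
            x = best                                  # escape and re-run
        else:
            return x                                  # no better point found
    return x

# Toy usage: a nonconvex objective with many local minima.
f = lambda x: np.sum(x**2) + 3 * np.sum(np.sin(3 * x))
grad_f = lambda x: 2 * x + 9 * np.cos(3 * x)
x = run_and_inspect(f, grad_f, x0=np.array([4.0, -4.0]))
print("final point:", x, "value:", f(x))
```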