Published on Fri Oct 26 2018

Stability-certified reinforcement learning: A control-theoretic perspective

Ming Jin, Javad Lavaei

Abstract

We investigate the important problem of certifying stability of reinforcement learning policies when interconnected with nonlinear dynamical systems. We show that by regulating the input-output gradients of policies, strong guarantees of robust stability can be obtained based on a proposed semidefinite programming feasibility problem. The method is able to certify a large set of stabilizing controllers by exploiting problem-specific structures; furthermore, we analyze and establish its (non)conservatism. Empirical evaluations on two decentralized control tasks, namely multi-flight formation and power system frequency regulation, demonstrate that the reinforcement learning agents can have high performance within the stability-certified parameter space, and also exhibit stable learning behaviors in the long run.
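
The certification idea above lends itself to a small computational sketch. The following is a minimal, hedged illustration and not the paper's exact formulation: it treats a policy whose input-output gradient is bounded by `lip` as a norm-bounded block closing the loop around a linear plant x_{t+1} = A x_t + B u_t, and checks a Lyapunov/S-procedure LMI as a semidefinite feasibility problem with cvxpy. The matrices `A`, `B`, the bound `lip`, and the helper `certify_gradient_bound` are illustrative assumptions.

```python
# Hedged sketch (not the paper's exact SDP): certify that any policy whose
# input-output gradient norm is at most `lip` keeps the linear system
# x_{t+1} = A x_t + B u_t, u_t = pi(x_t) stable, by treating the policy as a
# norm-bounded uncertainty and solving a small-gain-style LMI feasibility problem.
import cvxpy as cp
import numpy as np

def certify_gradient_bound(A, B, lip, eps=1e-6):
    """Return True if a common Lyapunov certificate exists for all policies
    with ||d pi / d x|| <= lip (viewed here as ||u|| <= lip * ||x||)."""
    n, m = B.shape
    P = cp.Variable((n, n), symmetric=True)
    lam = cp.Variable(nonneg=True)

    # Lyapunov decrease plus S-procedure on the gradient bound:
    # [A'PA - P + lam*lip^2*I   A'PB        ]
    # [B'PA                     B'PB - lam*I]  << 0
    M = cp.bmat([
        [A.T @ P @ A - P + lam * lip**2 * np.eye(n), A.T @ P @ B],
        [B.T @ P @ A, B.T @ P @ B - lam * np.eye(m)],
    ])
    constraints = [P >> eps * np.eye(n), M << -eps * np.eye(n + m)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Toy plant; sweep candidate gradient bounds to delimit a (conservative)
# stability-certified region for the policy parameters.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
for lip in (0.02, 0.05, 0.1, 0.5):
    print(f"gradient bound {lip}: certified = {certify_gradient_bound(A, B, lip)}")
```

The paper's SDP exploits problem-specific structure and is therefore less conservative than this plain norm bound; the sketch only conveys how limiting the policy's input-output gradient turns stability certification into a convex feasibility test.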

Thu Nov 12 2020
Machine Learning
Imposing Robust Structured Control Constraint on Reinforcement Learning of Linear Quadratic Regulator
This paper discusses learning a structured feedback control to obtain sufficient robustness to exogenous inputs for linear dynamic systems. The structural constraint on the controller is necessary for many cyber-physical systems, and our approach presents a design for any generic structure.
Mon Nov 02 2020
Machine Learning
Reinforcement Learning of Structured Control for Linear Systems with Unknown State Matrix
This paper delves into designing stabilizing feedback control gains for linear systems with an unknown state matrix. We bring forth the ideas from reinforcement learning (RL) in conjunction with sufficient stability and performance guarantees. The introduced RL framework is general and can be applied to any control structure.
Mon Jan 04 2021
Artificial Intelligence
Derivative-Free Policy Optimization for Linear Risk-Sensitive and Robust Control Design: Implicit Regularization and Sample Complexity
Direct policy search serves as one of the workhorses in modern reinforcement learning (RL). In this work, we investigate the convergence theory of policy gradient (PG) methods for learning the linear risk-sensitive and robust controller.
Tue May 23 2017
Artificial Intelligence
Safe Model-based Reinforcement Learning with Stability Guarantees
Most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. In this paper, we present a learning algorithm that explicitly considers safety. We show how the resulting algorithm can safely optimize a neural network policy.
Mon Dec 14 2020
Machine Learning
Safe Reinforcement Learning with Stability & Safety Guarantees Using Robust MPC
Reinforcement Learning offers tools to optimize policies based on the data retrieved from the real system subject to the policy. A formal theory detailing how safety and stability can be enforced through the parameter updates delivered by the Reinforcement Learning tools is still lacking. This paper addresses this gap.
Tue Aug 25 2020
Artificial Intelligence
Robust Reinforcement Learning: A Case Study in Linear Quadratic Regulation
This paper studies the robustness of reinforcement learning algorithms to small errors in the learning process. We revisit the benchmark problem of discrete-time linear quadratic regulation (LQR).
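
For readers unfamiliar with the LQR benchmark referenced above, here is a short toy sketch of my own (not taken from that paper, and all matrices are illustrative assumptions): it computes the optimal discrete-time LQR gain from the Riccati equation and then probes whether a small error in the gain, of the kind a learning process might introduce, destabilizes the closed loop.

```python
# Hedged toy example: discrete-time LQR gain via the Riccati equation, then a
# stability check of the closed loop under an additive error in the gain.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like plant
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain, u = -K x

def stable(K):
    # Closed loop A - B K is stable iff its spectral radius is below one.
    return max(abs(np.linalg.eigvals(A - B @ K))) < 1.0

print("nominal closed loop stable:", stable(K))
dK = 0.5 * np.ones_like(K)                          # small error in the learned gain
print("perturbed closed loop stable:", stable(K + dK))
```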