When applying imitation learning techniques to fit a policy from expert
demonstrations, one can take advantage of prior stability/robustness
assumptions on the expert's policy and incorporate such control-theoretic prior
knowledge explicitly into the learning process. In this paper, we formulate the
imitation learning of linear policies as a constrained optimization problem,
and present efficient methods for enforcing stability and robustness
constraints during the learning process. Specifically, we show
that one can guarantee closed-loop stability and robustness by imposing
linear matrix inequality (LMI) constraints on the fitted policy. Both
projected gradient descent and the alternating direction method of
multipliers (ADMM) can then be applied to solve the resulting constrained
policy fitting problem. Finally, we provide numerical results to demonstrate
the effectiveness of our methods in producing linear policies with various
stability and robustness guarantees.
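
To make the flavor of these LMI conditions concrete, here is a standard Lyapunov-based sketch for the discrete-time case; the model $x_{t+1} = (A + BK)x_t$ and the change of variables below are illustrative assumptions, not necessarily the paper's exact formulation. A gain $K$ stabilizes the closed loop if and only if there exists $P \succ 0$ with
\[
(A + BK)^\top P \,(A + BK) - P \prec 0 .
\]
This condition is not jointly convex in $(K, P)$, but with $Q = P^{-1}$ and $Y = KQ$ a Schur complement yields the equivalent LMI
\[
\begin{bmatrix} Q & (AQ + BY)^\top \\ AQ + BY & Q \end{bmatrix} \succ 0,
\qquad K = Y Q^{-1},
\]
which is jointly convex in $(Q, Y)$.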
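The projected gradient approach can likewise be sketched in a few lines. The following is a minimal illustration under stated assumptions: known system matrices $(A, B)$, cvxpy with the SCS solver for the SDP projection, and a fixed Lyapunov matrix $P$ supplied as input (holding $P$ fixed makes the stability constraint convex in $K$; the paper's full method is more general). All function names here are hypothetical.

```python
import numpy as np
import cvxpy as cp

def project_stabilizing(K_hat, A, B, P, eps=1e-3):
    """Project the gain K_hat onto the set of gains K for which the fixed
    Lyapunov matrix P certifies stability of x_{t+1} = (A + B K) x_t, i.e.
        (A + B K)^T P (A + B K) <= (1 - eps) P.
    For fixed P this condition is an LMI in K (via Schur complement),
    so the Euclidean projection is a small SDP."""
    n = A.shape[0]
    m = B.shape[1]
    K = cp.Variable((m, n))
    Acl = A + B @ K  # closed-loop matrix, affine in K
    M = cp.bmat([[(1.0 - eps) * P, Acl.T @ P],
                 [P @ Acl, P]])
    lmi = 0.5 * (M + M.T)  # symmetric in exact arithmetic; guards numerics
    prob = cp.Problem(cp.Minimize(cp.sum_squares(K - K_hat)), [lmi >> 0])
    prob.solve(solver=cp.SCS)
    return K.value

def fit_policy(X, U, A, B, P, steps=100, lr=0.05):
    """Projected gradient descent on the imitation loss
    (1/N) * sum_i ||K x_i - u_i||^2 over the LMI-feasible set (P fixed).
    X is an (N, n) array of expert states, U an (N, m) array of inputs."""
    N, n = X.shape
    m = U.shape[1]
    K = np.zeros((m, n))
    for _ in range(steps):
        grad = 2.0 * (K @ X.T - U.T) @ X / N  # gradient of the quadratic loss
        K = project_stabilizing(K - lr * grad, A, B, P)
    return K
```

A suitable fixed $P$ could be obtained, for example, by solving a discrete Lyapunov equation (scipy.linalg.solve_discrete_lyapunov) for any known stabilizing closed loop, after which fit_policy(X, U, A, B, P) is run on the expert state-action pairs.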