Published on Mon Aug 03 2020

Cooperative Control of Mobile Robots with Stackelberg Learning

Joewie J. Koh, Guohui Ding, Christoffer Heckman, Lijun Chen, Alessandro Roncone


Abstract

Multi-robot cooperation requires agents to make decisions that are consistent with the shared goal without disregarding action-specific preferences that might arise from asymmetry in capabilities and individual objectives. To accomplish this goal, we propose a method named SLiCC: Stackelberg Learning in Cooperative Control. SLiCC models the problem as a partially observable stochastic game composed of Stackelberg bimatrix games, and uses deep reinforcement learning to obtain the payoff matrices associated with these games. Appropriate cooperative actions are then selected with the derived Stackelberg equilibria. Using a bi-robot cooperative object transportation problem, we validate the performance of SLiCC against centralized multi-agent Q-learning and demonstrate that SLiCC achieves better combined utility.
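The abstract describes selecting cooperative actions via Stackelberg equilibria of bimatrix games. As a minimal illustration (not the paper's implementation), the sketch below computes a pure-strategy Stackelberg equilibrium for a two-player bimatrix game: the leader commits to an action, the follower best-responds, and the leader chooses the commitment that maximizes its own resulting payoff. The payoff matrices here are toy values; in SLiCC they would be estimated by deep reinforcement learning.

```python
def stackelberg_equilibrium(A, B):
    """Pure-strategy Stackelberg equilibrium of a bimatrix game.

    A[i][j]: leader payoff, B[i][j]: follower payoff, for leader
    action i and follower action j. The leader moves first; the
    follower best-responds to the observed leader action.
    Returns the (leader_action, follower_action) pair.
    """
    best = None
    for i in range(len(A)):
        # Follower's best response to leader action i.
        j = max(range(len(B[i])), key=lambda k: B[i][k])
        # Leader keeps the commitment yielding its highest payoff.
        if best is None or A[i][j] > A[best[0]][best[1]]:
            best = (i, j)
    return best

# Toy payoff matrices (rows: leader actions, columns: follower actions).
A = [[3, 1],
     [4, 0]]
B = [[1, 2],
     [3, 1]]

print(stackelberg_equilibrium(A, B))  # → (1, 0)
```

If the leader plays action 0, the follower best-responds with action 1 (leader payoff 1); if the leader plays action 1, the follower best-responds with action 0 (leader payoff 4), so the leader commits to action 1.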