Paper ID: 2111.12316
A note on stabilizing reinforcement learning
Pavel Osinenko, Grigory Yaremenko, Ilya Osokin
Reinforcement learning is a general methodology of adaptive optimal control that has attracted much attention in fields ranging from the video game industry to robot manipulation. Despite its remarkable performance demonstrations, plain reinforcement learning controllers do not guarantee stability, which compromises their applicability in industry. To provide such guarantees, additional measures have to be taken. This gives rise to what could generally be called stabilizing reinforcement learning. Concrete approaches range from the employment of human overseers that filter out unsafe actions to formally verified shields and fusion with classical stabilizing controllers. A line of attack that utilizes elements of adaptive control has become fairly popular in recent years. In this note, we critically address such an approach in a fairly general actor-critic setup for nonlinear continuous-time environments. The actor network utilizes a so-called robustifying term that is supposed to compensate for the neural network approximation errors. The corresponding stability analysis is based on the value function itself. We indicate a problem in such a stability analysis and provide a counterexample to the overall control scheme. Implications for this line of attack in stabilizing reinforcement learning are discussed. Unfortunately, the indicated problem admits no fix without a substantial reconsideration of the whole approach. As a positive message, we derive a stochastic convergence analysis for the critic neural network weights, provided that the environment is stabilized.
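To make the setup discussed in the abstract concrete, the following is a minimal sketch, not the authors' exact scheme, of an actor-critic controller with a robustifying term for a continuous-time nonlinear system dx/dt = f(x) + g(x)u. All names and choices here (the scalar example dynamics f and g, the critic features phi, the sign-type robustifying term with gain k_rob, the quadratic running cost, the learning rates) are assumptions made for illustration only.

```python
# Illustrative sketch only: actor-critic with a robustifying term for a
# continuous-time system dx/dt = f(x) + g(x) u.  The dynamics, features,
# robustifying-term form, and hyperparameters are all hypothetical.
import numpy as np

def f(x):                       # drift of the assumed scalar environment
    return x - x**3

def g(x):                       # input gain of the assumed environment
    return 1.0

def phi(x):                     # critic features, V(x) ~ w @ phi(x)
    return np.array([x**2, x**4])

def grad_phi(x):                # gradient of the features w.r.t. x
    return np.array([2.0 * x, 4.0 * x**3])

def control(x, w, k_rob=0.5, R=1.0):
    # Approximately optimal action derived from the current critic,
    # plus a robustifying term meant to compensate approximation errors.
    u_ac = -0.5 / R * g(x) * (grad_phi(x) @ w)
    u_rob = -k_rob * np.sign(x)           # assumed form of the robustifying term
    return u_ac + u_rob

# Euler simulation with a semi-gradient critic update on the
# continuous-time Bellman (Hamilton-Jacobi) residual.
x, w, dt, lr = 1.0, np.zeros(2), 1e-3, 1e-2
for step in range(20000):
    u = control(x, w)
    r = x**2 + u**2                        # assumed quadratic running cost
    xdot = f(x) + g(x) * u
    delta = r + (grad_phi(x) @ w) * xdot   # HJB residual under current critic
    w -= lr * delta * grad_phi(x) * xdot   # descend the squared residual
    x += dt * xdot

print("final state:", x, "critic weights:", w)
```

The note's argument concerns precisely this kind of scheme: the stability claim rests on using the (approximate) value function as a Lyapunov-like certificate, and the robustifying term is what is supposed to absorb the neural network errors in that analysis.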
Submitted: Nov 24, 2021