Cutting Your Losses: Learning Fault-Tolerant Control and Optimal Stopping under Adverse Risk

David Mguni

Recently, there has been a surge of interest in safe and robust techniques within reinforcement learning (RL). Current notions of risk in RL fail to capture the potential for systemic failures, such as abrupt stoppages caused by system faults or breaches of safety thresholds, and the appropriate responsive controls in such instances. We propose a novel approach to risk minimisation within RL in which the controller, in addition to taking actions that maximise its expected return, learns a policy that is robust against stoppages due to an adverse event such as an abrupt failure. The results of the paper cover fault-tolerant control in worst-case scenarios under both random stopping and optimal stopping, all in unknown environments. By showing that this class of problems is represented by a variant of stochastic games, we prove the existence of a solution that is the unique fixed-point equilibrium of the game and characterise the optimal controller behaviour. We then introduce a value function approximation algorithm that converges to this solution through simulation in unknown environments.
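
As a minimal illustration of the fixed-point characterisation described above, the sketch below runs tabular value iteration for a controller-stopper game: the controller maximises its discounted return while an adversary may stop the process and force a terminal payoff. This is a simplified, hypothetical analogue, not the paper's simulation-based value function approximation algorithm; the function name, toy MDP, and payoffs are all assumptions made for illustration.

```python
import numpy as np

def robust_stopping_value_iteration(P, r, g, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Iterate the controller-stopper Bellman operator
        V(s) = min( g(s), max_a [ r(s,a) + gamma * sum_s' P(s'|s,a) V(s') ] ).

    P : (A, S, S) transition tensor, P[a, s, s'] = Pr(s' | s, a)
    r : (S, A) immediate rewards for the controller
    g : (S,) terminal payoff the controller receives if the adversary stops
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Controller maximises the one-step lookahead over its actions ...
        continuation = np.max(r.T + gamma * np.einsum('ast,t->as', P, V), axis=0)
        # ... while the adversary stops wherever the terminal payoff is lower.
        V_new = np.minimum(g, continuation)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new  # converged to the unique fixed point
        V = V_new
    return V

# Toy example on a random 5-state, 3-action MDP (hypothetical data).
rng = np.random.default_rng(0)
A, S = 3, 5
P = rng.dirichlet(np.ones(S), size=(A, S))   # each P[a, s, :] sums to 1
r = rng.uniform(0.0, 1.0, size=(S, A))
g = rng.uniform(0.0, 1.0, size=S)
print(robust_stopping_value_iteration(P, r, g))
```

Because both the min (stop versus continue) and the max (over actions) are nonexpansive, the combined operator inherits the gamma-contraction of the standard Bellman operator, so the iteration converges to a unique fixed point, mirroring the equilibrium existence result stated in the abstract.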
