Data-efficient and Safe Learning for Locomotion using a Simplified Model

Junhyeok Ahn, Jaemin Lee, Luis Sentis

In this letter, we formulate a novel Markov Decision Process (MDP) for safe and data-efficient learning of locomotion via a simplified model. In our previous studies of biped locomotion, we relied on a low-dimensional robot model commonly used in high-level Walking Pattern Generators (WPGs). However, a low-level feedback controller cannot precisely track desired footstep locations due to discrepancies between the real system and the simplified model. In this study, we propose to mitigate this problem by complementing the WPG with reinforcement learning. We formulate an MDP that incorporates the dynamic properties of the robot, desired walking directions, and footstep features. Using the simplified model, we iteratively update a policy that determines footstep locations in a safe and data-efficient manner. The footstep policy combines a deterministic action computed from the simplified model with a parameterized stochastic action. In addition, a control barrier function process applies corrections to this policy to prevent exploration of unsafe regions during learning. Our contributions include (1) learning-based compensation for footstep tracking errors that result from employing the simplified model, (2) efficient and safe exploration in the learning process, and (3) scalability of the procedure to various humanoid robots.
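The policy structure described above could be sketched, in broad strokes, as the following Python snippet. It is a minimal illustration under loose assumptions: the names (wpg_footstep, PolicyResidual, safe_footstep) are hypothetical, the simplified-model footstep is a fixed placeholder, and a simple box projection stands in for the paper's control barrier function correction; it is not the authors' implementation.

import numpy as np

def wpg_footstep(state):
    # Placeholder for the simplified-model (WPG) footstep location;
    # the paper derives this from a low-dimensional robot model.
    return np.array([0.25, 0.10])  # nominal (x, y) footstep offset

class PolicyResidual:
    # Parameterized Gaussian action added on top of the model-based action.
    def __init__(self, dim=2):
        self.mean = np.zeros(dim)          # learned mean correction
        self.log_std = np.full(dim, -2.0)  # learned exploration noise

    def sample(self, rng):
        return self.mean + np.exp(self.log_std) * rng.standard_normal(self.mean.shape)

def safe_footstep(action, lo=np.array([0.0, -0.2]), hi=np.array([0.4, 0.2])):
    # Stand-in for the control-barrier-function correction: project the
    # proposed footstep into a box of assumed-safe locations. The paper
    # applies a CBF-based correction rather than a simple clip.
    return np.clip(action, lo, hi)

rng = np.random.default_rng(0)
residual = PolicyResidual()
state = None                               # reduced-order robot state would go here
a_model = wpg_footstep(state)              # simplified-model action
a_policy = a_model + residual.sample(rng)  # add parameterized stochastic action
a_safe = safe_footstep(a_policy)           # safety correction before execution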
