Imitation learning for variable speed motion generation over multiple actions

Yuki Saigusa, Ayumu Sasagawa, Sho Sakaino, Toshiaki Tsuji

Robot motion generation methods using machine learning have been studied in recent years. Bilateral control-based imitation learning can imitate human motions using force information, enabling variable-speed motion generation that accounts for physical phenomena such as inertial force and friction. Previous research demonstrated that the complex relationship between force and speed can be learned by a neural network model; however, that study focused only on a simple reciprocating motion. To learn this relationship more accurately, it is necessary to learn multiple actions involving many joints. In this paper, we propose a variable-speed motion generation method for multiple motions. We compared four types of neural network models for motion generation and identified the best model for multiple motions at variable speeds. Using this model, we then evaluated how closely the actual task completion time reproduced the input completion-time command. The results revealed that the proposed method could change the task completion time according to the specified completion-time command across multiple motions.
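To make the idea of completion-time-conditioned motion generation concrete, the sketch below shows a minimal autoregressive rollout: a model maps the current robot state plus a completion-time command to the next state command. This is an illustrative assumption only; the state layout, network architecture, dimensions, and weights here are hypothetical placeholders, not the authors' trained models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4   # e.g. joint angle, velocity, torque, force response (assumed layout)
CMD_DIM = 1     # scalar task-completion-time command (assumed)
HIDDEN = 16

# Random weights stand in for a trained model; a real system would learn
# these from leader/follower demonstration data.
W_in = rng.normal(scale=0.1, size=(HIDDEN, STATE_DIM + CMD_DIM))
W_out = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))

def step(state, time_cmd):
    """One inference step: predict the next state command from the
    current state and the completion-time command."""
    x = np.concatenate([state, [time_cmd]])
    h = np.tanh(W_in @ x)
    return W_out @ h

def rollout(init_state, time_cmd, n_steps):
    """Autoregressively generate a trajectory of n_steps future states,
    feeding each prediction back in as the next input."""
    states = [init_state]
    for _ in range(n_steps):
        states.append(step(states[-1], time_cmd))
    return np.stack(states)

# Changing only the command would, in a trained model, change how fast
# the generated motion completes the task.
traj = rollout(np.zeros(STATE_DIM), time_cmd=2.0, n_steps=50)
print(traj.shape)  # → (51, 4)
```

The key design point mirrored here is that the completion-time command is an extra input at every step, so a single model can generate the same action at different speeds.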
