Full Gradient DQN Reinforcement Learning: A Provably Convergent Scheme

K. Avrachenkov, V. S. Borkar, H. P. Dolhare, K. Patil

We analyze the DQN reinforcement learning algorithm as a stochastic approximation scheme using the o.d.e. (for 'ordinary differential equation') approach and point out certain theoretical issues. We then propose a modified scheme called Full Gradient DQN (FG-DQN, for short) that has a sound theoretical basis and compare it with the original scheme on sample problems. We observe better performance for FG-DQN.
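
To make the contrast concrete, here is a minimal sketch in PyTorch of how a "full gradient" loss differs from the standard DQN loss, assuming (as the name suggests) that the core modification is differentiating the squared Bellman error with respect to both the predicted value and the bootstrapped target, rather than detaching the target. The function names, hyperparameters, and framing are illustrative, not the authors' code.

```python
# Hypothetical sketch: standard DQN treats the bootstrap target as a constant
# (semi-gradient), while a full-gradient variant backpropagates through it too.
import torch
import torch.nn as nn


def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Standard DQN: the bootstrapped target is detached from the graph."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    return nn.functional.mse_loss(q_sa, target)


def full_gradient_dqn_loss(q_net, batch, gamma=0.99):
    """Full-gradient variant: the Bellman error is differentiated with respect
    to both Q(s, a) and the bootstrapped term (no stop-gradient)."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values  # not detached
    bellman_error = q_sa - target
    return 0.5 * (bellman_error ** 2).mean()
```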
