Sample Complexity and Overparameterization Bounds for Temporal Difference Learning with Neural Network Approximation

Semih Cayci, Siddhartha Satpathi, Niao He, R. Srikant

In this paper, we study the dynamics of temporal difference learning with neural network-based value function approximation over a general state space, namely, \emph{Neural TD learning}. We consider two algorithms used in practice, projection-free and max-norm regularized Neural TD learning, and establish the first convergence bounds for them. An interesting observation from our results is that max-norm regularization can dramatically improve the performance of TD learning algorithms, in terms of both sample complexity and overparameterization. In particular, our bounds indicate that max-norm regularization is more effective than $\ell_2$-regularization on both counts. The results in this work rely on a novel Lyapunov drift analysis of the network parameters as a stopped and controlled random process.
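To make the max-norm regularized Neural TD update concrete, here is a minimal sketch, assuming a two-layer ReLU network with fixed output weights, a semi-gradient TD(0) update, and a per-neuron projection of each hidden weight vector onto a ball of radius `R` around its initialization. The class name `NeuralTD`, the projection radius, step size, and discount factor below are illustrative placeholders, not the architecture or constants analyzed in the paper; consult the paper for the exact projection set and scaling.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class NeuralTD:
    """Sketch of max-norm regularized Neural TD(0).

    Value function: V(s) = (1/sqrt(m)) * sum_i c_i * relu(w_i . s),
    where the output weights c_i are fixed at initialization and only
    the hidden weights W are trained (a common simplification; the
    paper's exact setup may differ).
    """

    def __init__(self, state_dim, width, radius, step_size, gamma, seed=0):
        rng = np.random.default_rng(seed)
        self.m = width
        self.R = radius              # max-norm projection radius (assumed)
        self.alpha = step_size
        self.gamma = gamma
        self.W0 = rng.normal(size=(width, state_dim))  # random initialization
        self.W = self.W0.copy()                         # trained hidden weights
        self.c = rng.choice([-1.0, 1.0], size=width)    # fixed output weights

    def value(self, s):
        return self.c @ relu(self.W @ s) / np.sqrt(self.m)

    def _grad_hidden(self, s):
        # Gradient of V(s) with respect to each hidden weight vector w_i.
        active = (self.W @ s > 0.0).astype(float)       # ReLU derivative
        return (self.c * active)[:, None] * s[None, :] / np.sqrt(self.m)

    def td_step(self, s, r, s_next):
        # Semi-gradient TD(0) update followed by the max-norm projection.
        delta = r + self.gamma * self.value(s_next) - self.value(s)
        self.W += self.alpha * delta * self._grad_hidden(s)
        # Project each w_i back onto the ball of radius R around w_i(0);
        # the projection-free variant would simply skip this step.
        diff = self.W - self.W0
        norms = np.linalg.norm(diff, axis=1, keepdims=True)
        scale = np.minimum(1.0, self.R / np.maximum(norms, 1e-12))
        self.W = self.W0 + diff * scale
        return delta
```

The per-neuron projection is what distinguishes max-norm regularization from global $\ell_2$-regularization: each hidden unit's drift from initialization is bounded individually, rather than penalizing the aggregate parameter norm.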
