Classical Policy Gradient: Preserving Bellman's Principle of Optimality

Philip S. Thomas, Scott M. Jordan, Yash Chandak, Chris Nota, James Kostas

We propose a new objective function for finite-horizon episodic Markov decision processes that better captures Bellman's principle of optimality, and provide an expression for the gradient of the objective.
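The abstract does not spell out the proposed objective, but for context, the classical policy-gradient setup it builds on can be sketched. The code below is a minimal background illustration, not the paper's method: it estimates the standard finite-horizon episodic objective's gradient, grad J(theta) = E[sum_t grad log pi(a_t|s_t) G], with a Monte Carlo (REINFORCE-style) estimator for a tabular softmax policy on a hypothetical two-state MDP.

```python
import math
import random

H = 3  # finite horizon (illustrative choice)

def softmax(prefs):
    """Softmax over action preferences."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def step(state, action):
    """Hypothetical deterministic 2-state MDP: action 1 earns reward 1, else 0."""
    reward = 1.0 if action == 1 else 0.0
    return (state + 1) % 2, reward

def reinforce_gradient(theta, episodes=2000, seed=0):
    """Monte Carlo estimate of grad_theta J(theta), the classical episodic
    policy gradient, where theta[s][a] is the softmax preference for
    action a in state s."""
    rng = random.Random(seed)
    grad = [[0.0, 0.0] for _ in theta]
    for _ in range(episodes):
        s, traj, episode_return = 0, [], 0.0
        for _ in range(H):
            probs = softmax(theta[s])
            a = 0 if rng.random() < probs[0] else 1
            traj.append((s, a))
            s, r = step(s, a)
            episode_return += r
        # Score function: grad_theta log pi(a_t|s_t) = 1[a == a_t] - pi(a|s_t)
        for (s_t, a_t) in traj:
            probs = softmax(theta[s_t])
            for a in range(2):
                grad[s_t][a] += ((1.0 if a == a_t else 0.0) - probs[a]) * episode_return
    return [[g / episodes for g in row] for row in grad]

theta0 = [[0.0, 0.0], [0.0, 0.0]]  # uniform initial policy
g = reinforce_gradient(theta0)
```

Under this toy MDP the estimated gradient pushes preferences toward the rewarding action (positive component for action 1, negative for action 0) in each state, which an ascent step would then reinforce.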
