Classical Policy Gradient: Preserving Bellman's Principle of Optimality

Philip S. Thomas, Scott M. Jordan, Yash Chandak, Chris Nota, James Kostas

We propose a new objective function for finite-horizon episodic Markov decision processes that better captures Bellman's principle of optimality, and provide an expression for the gradient of the objective.
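The abstract does not spell out the proposed objective, so as background only, the sketch below shows the standard finite-horizon episodic policy-gradient (REINFORCE) setup that work like this builds on. The toy chain MDP, horizon, learning rate, and tabular softmax policy are all illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a classical finite-horizon episodic policy gradient
# (REINFORCE) on a toy chain MDP. Everything here is an assumption made
# for illustration; it is not the paper's proposed objective.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, HORIZON = 5, 2, 10   # toy chain MDP (assumed)
theta = np.zeros((N_STATES, N_ACTIONS))   # tabular softmax policy parameters


def policy(state):
    """Softmax action probabilities for a single state."""
    prefs = theta[state] - theta[state].max()
    probs = np.exp(prefs)
    return probs / probs.sum()


def step(state, action):
    """Toy dynamics: action 1 moves right, action 0 stays; reward at the goal."""
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward


def run_episode():
    """Collect one finite-horizon trajectory under the current policy."""
    state, traj = 0, []
    for _ in range(HORIZON):
        probs = policy(state)
        action = rng.choice(N_ACTIONS, p=probs)
        next_state, reward = step(state, action)
        traj.append((state, action, reward))
        state = next_state
    return traj


def reinforce_update(traj, lr=0.1):
    """Classical policy-gradient update: grad log pi(a|s) times return-to-go."""
    rewards = [r for (_, _, r) in traj]
    for t, (s, a, _) in enumerate(traj):
        G = sum(rewards[t:])                 # undiscounted return-to-go
        grad_log = -policy(s)
        grad_log[a] += 1.0                   # gradient of log softmax w.r.t. theta[s]
        theta[s] += lr * G * grad_log


for episode in range(500):
    reinforce_update(run_episode())

print("Learned action probabilities per state:")
print(np.round(np.array([policy(s) for s in range(N_STATES)]), 3))
```

Under these assumptions the agent learns to move right toward the rewarding terminal state; the paper's contribution, per the abstract, is a different objective for this finite-horizon setting together with an expression for its gradient.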
