In the optimization of dynamic systems, the decision variables are typically subject to constraints. Such problems can be modeled as a Constrained Markov Decision Process (CMDP). This paper considers peak constraints, where the agent chooses a policy to maximize the long-term average reward while satisfying the constraints at every time step. We propose a model-free algorithm that converts the CMDP into an unconstrained problem, to which a Q-learning based approach is applied. We extend the notion of probably approximately correct (PAC) learning to define a criterion for an $\epsilon$-optimal policy. The proposed algorithm is proved to achieve an $\epsilon$-optimal policy with probability at least $1-p$ when the number of episodes satisfies $K\geq\Omega(\frac{I^2H^6SA\ell}{\epsilon^2})$, where $S$ and $A$ are the numbers of states and actions, respectively, $H$ is the number of steps per episode, $I$ is the number of constraint functions, and $\ell=\log(\frac{SAT}{p})$. We note that this is the first PAC-type analysis for CMDPs with peak constraints where the transition probabilities are not known a priori. We demonstrate the proposed algorithm on an energy harvesting problem, where it outperforms the state of the art and performs close to the theoretical upper bound of the studied optimization problem.
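To illustrate the idea of converting a peak-constrained CMDP into an unconstrained problem, the minimal sketch below penalizes state-action pairs that violate the peak constraint and then runs standard tabular Q-learning on a toy problem. This is only an illustrative assumption-laden sketch, not the paper's actual algorithm or its PAC analysis; the toy dynamics, rewards, costs, threshold, and penalty value are all invented for the example.

```python
import numpy as np

# Toy CMDP (all numbers are illustrative assumptions, not from the paper):
# 3 states, 2 actions, reward r(s,a), constraint cost c(s,a),
# and a peak constraint c(s,a) <= tau that must hold at every step.
rng = np.random.default_rng(0)
S, A = 3, 2
P = rng.dirichlet(np.ones(S), size=(S, A))      # transition kernel P(s'|s,a)
R = np.array([[0.5, 1.0],                       # reward r(s,a); the violating
              [1.0, 0.4],                       # actions deliberately look
              [0.6, 1.0]])                      # attractive in raw reward
C = np.array([[0.2, 0.9],                       # constraint cost c(s,a)
              [0.8, 0.3],
              [0.5, 0.7]])
tau = 0.6                                       # peak-constraint threshold

# Conversion to an unconstrained problem: subtract a large penalty from
# any state-action pair that violates the peak constraint, so the greedy
# policy learned by Q-learning avoids infeasible actions.
penalty = 10.0
R_pen = np.where(C <= tau, R, R - penalty)

# Standard epsilon-greedy tabular Q-learning on the penalized reward.
Q = np.zeros((S, A))
alpha, gamma, eps = 0.1, 0.95, 0.1
s = 0
for _ in range(20000):
    a = rng.integers(A) if rng.random() < eps else int(Q[s].argmax())
    s2 = rng.choice(S, p=P[s, a])
    Q[s, a] += alpha * (R_pen[s, a] + gamma * Q[s2].max() - Q[s, a])
    s = s2

# The learned greedy policy picks only constraint-satisfying actions
# (each state in this toy problem has at least one feasible action).
policy = Q.argmax(axis=1)
feasible = [bool(C[s, policy[s]] <= tau) for s in range(S)]
print(policy, feasible)
```

In this sketch the penalty plays the role of a Lagrangian-style relaxation with a fixed large multiplier; the paper's model-free algorithm handles the conversion and exploration in a way that yields the stated $\epsilon$-optimality guarantee.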
