Controlled Markov chains (CMCs) have wide applications in engineering and machine learning, and form a key component of many reinforcement learning problems. In this work, we consider the estimation of the transition probabilities of a finite-state, finite-control CMC, and develop minimax sample complexity bounds for the nonparametric estimation of these transition probability matrices. Unlike most prior work, which considers the online setup, we study offline MDPs. Our results are quite general, since we assume nothing specific about the logging policy; instead, the dependence of our statistical bounds on the logging policy is captured by a natural mixing coefficient. We demonstrate an interesting trade-off between imposing stronger mixing assumptions and requiring more samples to achieve a given PAC bound. We validate our results on several examples, including ergodic Markov chains, weakly ergodic inhomogeneous Markov chains, and controlled Markov chains with non-stationary Markov, episodic, and greedy controls. Lastly, we use properties of the estimated transition matrix to estimate the value function when the controls are stationary and Markov.
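To make the setup concrete, the following is a minimal sketch (not the paper's exact procedure) of the natural count-based empirical estimator of the transition matrices from a single offline trajectory, followed by a plug-in value estimate for a stationary Markov policy. The function names, the uniform fallback for unvisited state-control pairs, and the discount parameter `gamma` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def estimate_transitions(states, controls, n_states, n_controls):
    """Empirical-frequency estimate of P(s' | s, u) from one offline trajectory.

    states:   visited states s_0, ..., s_T
    controls: applied controls u_0, ..., u_{T-1} (logged by the unknown policy)
    Returns an (n_controls, n_states, n_states) array of row-stochastic matrices.
    """
    counts = np.zeros((n_controls, n_states, n_states))
    for t in range(len(controls)):
        counts[controls[t], states[t], states[t + 1]] += 1.0
    totals = counts.sum(axis=2, keepdims=True)
    # Normalize visited rows; unvisited (state, control) rows default to uniform
    # (an arbitrary but common convention, assumed here for illustration).
    return np.where(totals > 0, counts / np.maximum(totals, 1.0), 1.0 / n_states)

def plug_in_value(P_hat, reward, policy, gamma=0.95):
    """Plug-in value of a stationary Markov policy under the estimated model.

    reward: (n_states,) per-state reward; policy: (n_states,) control per state.
    Solves (I - gamma * P_pi) V = r for the chain P_pi induced by the policy.
    """
    n_states = reward.shape[0]
    P_pi = P_hat[policy, np.arange(n_states), :]  # row s follows control policy[s]
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, reward)
```

The plug-in step reflects the last sentence of the abstract: once the transition matrices are estimated, the value of a stationary Markov policy under the estimated model is obtained by solving the induced linear (Bellman) system.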