We first study the well-posedness of the Bellman equation, commonly known as the average cost optimality equation, for the ergodic control problem for a controlled Markov chain on a Polish state space with state-dependent action spaces that are not necessarily compact. We deviate from the usual approach, which is based on the vanishing discount method, and instead map the problem to an equivalent one for a controlled split chain. We employ a stochastic representation of the Poisson equation to derive the Bellman equation. Next, under suitable assumptions, we establish convergence results for the `relative value iteration' algorithm, which computes the solution of the Bellman equation recursively. In addition, we present some results on the stability and asymptotic optimality of the associated rolling horizon policies.
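To fix ideas, the recursion behind relative value iteration can be illustrated in its simplest setting. The sketch below is an assumption-laden toy: a finite-state, finite-action average-cost MDP, whereas the paper treats general Polish state spaces with possibly noncompact, state-dependent action sets. The function name, the toy transition kernels, and the unichain/aperiodicity conditions implicitly used here are all illustrative choices, not the paper's construction.

```python
import numpy as np

def relative_value_iteration(P, c, ref_state=0, tol=1e-10, max_iter=10_000):
    """Toy relative value iteration for a finite average-cost MDP.

    P[a] is the transition matrix under action a (P[a][x, y] = P(y | x, a)),
    c[a] is the one-step cost vector under action a (c[a][x] = c(x, a)).
    Returns (rho, h): approximate optimal average cost and relative value
    function, normalized so that h(ref_state) = 0.
    """
    n_states = P[0].shape[0]
    h = np.zeros(n_states)
    Th = h
    for _ in range(max_iter):
        # Bellman operator: (T h)(x) = min_a [ c(x, a) + sum_y P(y | x, a) h(y) ]
        Th = np.min([c[a] + P[a] @ h for a in range(len(P))], axis=0)
        # Subtract the value at the reference state to keep iterates bounded
        h_new = Th - Th[ref_state]
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    # At the fixed point, rho + h(x) = (T h)(x) with h(ref_state) = 0,
    # so (T h)(ref_state) approximates the optimal average cost rho.
    rho = Th[ref_state]
    return rho, h
```

For instance, on a two-state, two-action chain where action 0 tends to hold the current state at costs (0, 2) and action 1 tends to switch states at cost 1, the iteration converges to the average cost rho of the policy that idles in state 0 and switches out of state 1.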