A Hidden Markov Restless Multi-armed Bandit Model for Playout Recommendation Systems

Rahul Meshram, Aditya Gopalan, D. Manjunath

We consider a restless multi-armed bandit (RMAB) with two types of arms, say A and B. Each arm can be in one of two states, say $0$ or $1.$ Playing a type A arm brings it to state $0$ with probability one, while not playing it induces state transitions with arm-dependent probabilities. Playing a type B arm brings it to state $1$ with probability one, while not playing it likewise induces transitions governed by the arm's transition probabilities. Playing an arm generates a unit reward with a probability that depends on the state of the arm. The belief about the state of an arm can be computed via a Bayesian update after every play. This RMAB is designed for recommendation systems in which the user's preferences depend on the history of recommendations; it can also be used in applications such as the creation of playlists or the placement of advertisements. We formulate the long-term reward maximization problem as both an infinite-horizon discounted reward problem and an average reward problem. We first analyse the discounted reward scenario: we show that the RMAB is Whittle-indexable and obtain a closed-form expression for the Whittle index of each arm, computed from the belief about its state and the parameters that describe the arm. We then analyse the average reward problem using the vanishing discount approach and derive a closed-form expression for the Whittle index in that setting as well. For an RMAB to be useful in practice, the parameters of the arms must be learned. We present an algorithm derived from the Thompson sampling scheme that learns these parameters, and we illustrate its performance numerically.
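The belief-update mechanics described above can be sketched for a single type A arm. This is a minimal illustration in our own notation, not code from the paper: we assume transition probabilities `p01`, `p11` (probability of moving to state $1$ from state $0$ and $1$ respectively when the arm is not played) and reward probabilities `rho0`, `rho1` (probability of a unit reward when played in state $0$ or $1$); all names are ours.

```python
# Belief tracking for one hidden two-state arm (sketch, our notation).
# pi        : current belief P(state = 1)
# p01, p11  : P(next = 1 | current = 0), P(next = 1 | current = 1), not played
# rho0,rho1 : P(unit reward | state = 0), P(unit reward | state = 1), played

def belief_after_rest(pi, p01, p11):
    """Markov propagation of the belief when the arm is not played."""
    return pi * p11 + (1.0 - pi) * p01

def posterior_from_reward(pi, rho0, rho1, reward):
    """Bayes update of the belief from the observed binary reward."""
    if reward:
        num, den = pi * rho1, pi * rho1 + (1.0 - pi) * rho0
    else:
        num, den = pi * (1.0 - rho1), pi * (1.0 - rho1) + (1.0 - pi) * (1.0 - rho0)
    return num / den

def belief_after_play_type_a(pi, rho0, rho1, reward):
    """After a play the type A arm is in state 0 with probability one,
    so the post-play belief resets to 0.0 (a type B arm would reset to 1.0).
    The reward-based posterior is still useful, e.g. for parameter learning."""
    _ = posterior_from_reward(pi, rho0, rho1, reward)  # informative for learning
    return 0.0
```

For example, after a play the belief of a type A arm is $0$, and after $k$ subsequent rests it evolves deterministically by iterating `belief_after_rest`, which is what makes a closed-form Whittle index in terms of the belief plausible.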
