The Generalized Likelihood Ratio Test meets klUCB: an Improved Algorithm for Piece-Wise Non-Stationary Bandits

Lilian Besson, Emilie Kaufmann

We propose a new algorithm for the piece-wise i.i.d. non-stationary bandit problem with bounded rewards. Our proposal, GLR-klUCB, combines an efficient bandit algorithm, klUCB, with an efficient, parameter-free, change-point detector, the Bernoulli Generalized Likelihood Ratio Test, for which we provide new theoretical guarantees of independent interest. We analyze two variants of our strategy, based on local restarts and global restarts, and show that their regret is upper-bounded by $\mathcal{O}(\Upsilon_T \sqrt{T \log(T)})$ if the number of change-points $\Upsilon_T$ is unknown, and by $\mathcal{O}(\sqrt{\Upsilon_T T \log(T)})$ if $\Upsilon_T$ is known. This improves the state-of-the-art bounds, as our algorithm needs no tuning based on knowledge of the problem complexity other than $\Upsilon_T$. We present numerical experiments showing that GLR-klUCB outperforms passively and actively adaptive algorithms from the literature, and highlight the benefit of using local restarts.
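
As a rough illustration of how the pieces fit together, below is a minimal Python sketch pairing a klUCB index policy with a Bernoulli GLR change-point detector and local restarts. The function names, the uniform forced exploration with rate `alpha`, the exploration function $\log(t)$, and the detection threshold $\log(n^{3/2}/\delta)$ are simplified placeholders for illustration only, not the exact threshold $\beta(n,\delta)$ and tuning analyzed in the paper.

```python
import math
import random

def kl_bernoulli(p, q, eps=1e-12):
    """Binary relative entropy kl(p, q) between Bernoulli means p and q."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(mean, pulls, t, precision=1e-4):
    """klUCB index: largest q >= mean such that pulls * kl(mean, q) <= log(t)
    (simplified exploration rate), found by bisection."""
    if pulls == 0:
        return 1.0
    target = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if kl_bernoulli(mean, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

def glr_change_detected(rewards, threshold):
    """Bernoulli GLR test: scan every split point s of one arm's post-restart history
    and flag a change if s*kl(mu_{1:s}, mu_{1:n}) + (n-s)*kl(mu_{s+1:n}, mu_{1:n})
    exceeds the threshold."""
    n = len(rewards)
    if n < 2:
        return False
    prefix = [0.0]
    for r in rewards:
        prefix.append(prefix[-1] + r)
    mu_all = prefix[n] / n
    for s in range(1, n):
        mu_left, mu_right = prefix[s] / s, (prefix[n] - prefix[s]) / (n - s)
        stat = s * kl_bernoulli(mu_left, mu_all) + (n - s) * kl_bernoulli(mu_right, mu_all)
        if stat > threshold:
            return True
    return False

def glr_klucb_local(pull, n_arms, horizon, alpha=0.05, delta=None):
    """Sketch of klUCB + Bernoulli GLR with *local* restarts:
    only the arm on which a change is detected has its history reset."""
    delta = delta if delta is not None else 1.0 / math.sqrt(horizon)
    history = [[] for _ in range(n_arms)]    # rewards gathered since each arm's last restart
    for t in range(1, horizon + 1):
        if random.random() < alpha:          # forced exploration (simplified: uniform at random)
            arm = random.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda k: klucb_index(
                sum(history[k]) / len(history[k]) if history[k] else 0.0,
                len(history[k]), t))
        reward = pull(arm, t)                # bounded reward in [0, 1]
        history[arm].append(reward)
        threshold = math.log(len(history[arm]) ** 1.5 / delta)  # placeholder for the paper's beta(n, delta)
        if glr_change_detected(history[arm], threshold):
            history[arm] = []                # local restart; a global restart would instead reset all arms
```

For instance, on a synthetic piece-wise stationary Bernoulli instance one could call `glr_klucb_local(lambda arm, t: float(random.random() < means_at(t)[arm]), n_arms=3, horizon=10000)`, where `means_at` is any user-supplied schedule of arm means.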
