Balanced off-policy evaluation in general action spaces

Arjun Sondhi, David Arbour, Drew Dimmery

Estimation of importance sampling weights for off-policy evaluation of contextual bandits often results in imbalance: a mismatch between the desired and the actual distribution of state-action pairs after weighting. In this work we present balanced off-policy evaluation (B-OPE), a generic method for estimating weights that minimize this imbalance. Estimating these weights reduces to a binary classification problem regardless of action type. We show that minimizing the risk of the classifier implies minimizing the imbalance with respect to the desired counterfactual distribution of state-action pairs. The classifier loss is tied to the error of the off-policy estimate, allowing for easy tuning of hyperparameters. We provide experimental evidence that B-OPE improves weighting-based approaches for offline policy evaluation in both discrete and continuous action spaces.
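To make the classification reduction concrete, the sketch below shows how density-ratio weights can be estimated with an off-the-shelf classifier and then plugged into a self-normalized importance-sampling estimate. This is a minimal illustration of the general idea in the abstract, not the authors' implementation: the synthetic data, the policies, and the choice of classifier are all assumptions made for the example.

```python
# Minimal sketch: classifier-based weight estimation for off-policy evaluation.
# All data, policies, and modelling choices below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, d = 5000, 4

# Synthetic logged data: contexts, continuous actions from a behaviour policy, rewards.
X = rng.normal(size=(n, d))
behaviour_actions = rng.normal(loc=X[:, 0], scale=1.0)
rewards = -(behaviour_actions - 2.0 * X[:, 0]) ** 2 + rng.normal(scale=0.1, size=n)

# Target (evaluation) policy: draw one counterfactual action per logged context.
target_actions = rng.normal(loc=2.0 * X[:, 0], scale=1.0)

# Binary classification problem: label 0 = logged (x, a) pairs,
# label 1 = pairs whose actions are resampled from the target policy.
pairs = np.vstack([
    np.column_stack([X, behaviour_actions]),
    np.column_stack([X, target_actions]),
])
labels = np.concatenate([np.zeros(n), np.ones(n)])

clf = GradientBoostingClassifier().fit(pairs, labels)

# With equal class sizes, the posterior odds on a logged pair approximate the
# density ratio: w(x, a) = P(1 | x, a) / P(0 | x, a) ~ pi_target(a|x) / pi_behaviour(a|x).
p = clf.predict_proba(np.column_stack([X, behaviour_actions]))[:, 1]
weights = p / (1.0 - p)

# Self-normalized importance-sampling estimate of the target policy's value.
value_estimate = np.sum(weights * rewards) / np.sum(weights)
print(f"Estimated target-policy value: {value_estimate:.3f}")
```

The same recipe applies to discrete actions by sampling counterfactual actions from the target policy's probability mass function; only the classifier's inputs change, which is why the reduction is agnostic to action type.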
