Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation

Erik Englesson, Hossein Azizpour

In this work, we aim to obtain computationally efficient uncertainty estimates with deep networks. To this end, we propose a modified knowledge distillation procedure that achieves state-of-the-art uncertainty estimates for both in-distribution and out-of-distribution samples. Our contributions include: a) demonstrating and adapting to distillation's regularization effect, b) proposing a novel target teacher distribution, c) a simple augmentation procedure to improve out-of-distribution uncertainty estimates, and d) shedding light on the distillation procedure through a comprehensive set of experiments.
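For context, the core mechanism being modified is standard knowledge distillation: a student network is trained to match a teacher's temperature-scaled output distribution, which carries the teacher's uncertainty. The sketch below shows a plain distillation loss (KL divergence to softened teacher targets); the paper's specific modifications, such as its novel target teacher distribution, are not reproduced here, and all function names are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    A higher temperature flattens the teacher's distribution, exposing
    relative probabilities of non-target classes ("dark knowledge").
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    eps = 1e-12  # guard against log(0)
    kl = (p_teacher * (np.log(p_teacher + eps) - np.log(p_student + eps))).sum(axis=-1)
    return kl.mean()
```

When the student's logits equal the teacher's, the loss is zero; any mismatch yields a positive KL divergence, so minimizing it drives the student toward the teacher's predictive distribution.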
