The Implicit Bias of AdaGrad on Separable Data

Qian Qian, Xiaoyuan Qian

We study the implicit bias of AdaGrad on separable linear classification problems. We show that AdaGrad converges to a direction that can be characterized as the solution of a quadratic optimization problem with the same feasible set as the hard SVM problem. We also discuss how different choices of AdaGrad's hyperparameters might affect this direction. This provides a deeper understanding of why adaptive methods do not seem to generalize as well as gradient descent in practice.
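For concreteness, the hard-margin SVM problem on linearly separable data $\{(x_i, y_i)\}_{i=1}^n$, $y_i \in \{-1, +1\}$, is

$$\min_{w}\ \tfrac{1}{2}\lVert w\rVert_2^2 \quad \text{s.t.} \quad y_i\, w^\top x_i \ge 1,\ i = 1, \dots, n,$$

and a quadratic problem with the same feasible set has the general form

$$\min_{w}\ \tfrac{1}{2}\, w^\top Q\, w \quad \text{s.t.} \quad y_i\, w^\top x_i \ge 1,\ i = 1, \dots, n,$$

for some positive definite matrix $Q$. Here $Q$ is only a placeholder for the AdaGrad-dependent quadratic characterized in the paper, which the abstract does not spell out.

The sketch below, which is illustrative and not the paper's experiments, runs diagonal AdaGrad on the average logistic loss over a separable two-dimensional toy dataset and prints the normalized iterate, i.e. the direction the abstract refers to. The dataset, step size eta, and iteration count are assumptions chosen for the example.

import numpy as np

# Minimal sketch, assuming a separable 2-D toy dataset: diagonal AdaGrad on
# the average logistic loss, then print the direction of the last iterate.
rng = np.random.default_rng(0)
n, d = 200, 2
y = rng.choice([-1.0, 1.0], size=n)
# x_i = y_i * (mean + small noise), so the data are linearly separable w.h.p.
X = y[:, None] * (np.array([3.0, 1.0]) + 0.3 * rng.normal(size=(n, d)))

def grad(w):
    # Gradient of (1/n) * sum_i log(1 + exp(-y_i * w^T x_i)).
    margins = np.clip(y * (X @ w), -500, 500)   # clip to avoid overflow in exp
    coeff = -y / (1.0 + np.exp(margins))
    return (coeff[:, None] * X).mean(axis=0)

eta, eps = 0.5, 1e-8
w = np.zeros(d)
G = np.zeros(d)                                  # accumulated squared gradients
for _ in range(100_000):
    g = grad(w)
    G += g * g
    w -= eta * g / (np.sqrt(G) + eps)            # per-coordinate AdaGrad step

print("AdaGrad direction (approx.):", w / np.linalg.norm(w))

The printed direction can be compared against the hard-margin SVM direction for the same data (e.g., obtained from an off-the-shelf solver) to see whether and how the two differ on a given run.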
