On the Last Iterate Convergence of Momentum Methods

Xiaoyu Li, Mingrui Liu, Francesco Orabona

SGD with Momentum (SGDM) is widely used for large-scale optimization of machine learning problems. Yet, the theoretical understanding of this algorithm is not complete. In fact, even the most recent results require changes to the algorithm, such as an averaging scheme and a projection onto a bounded domain, which are never used in practice. Also, no lower bound is known for SGDM. In this paper, we prove for the first time that for any constant momentum factor, there exists a Lipschitz and convex function for which the last iterate of SGDM suffers from an error $\Omega(\frac{\log T}{\sqrt{T}})$ after $T$ steps. Based on this fact, we study a new class of (both adaptive and non-adaptive) Follow-The-Regularized-Leader-based SGDM algorithms with \emph{increasing momentum} and \emph{shrinking updates}. For these algorithms, we show that the last iterate has an optimal convergence rate of $O(\frac{1}{\sqrt{T}})$ for unconstrained convex optimization problems. Further, we show that in the interpolation setting with convex and smooth functions, our new SGDM algorithm automatically converges at a rate of $O(\frac{\log T}{T})$. Empirical results are reported as well.
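To make the "increasing momentum, shrinking updates" idea concrete, here is a minimal sketch of SGD with momentum in which the momentum factor grows toward 1 over the iterations and the step size decays. The schedules $\beta_t = t/(t+1)$ and $\eta_t \propto 1/\sqrt{t}$ are illustrative assumptions for this sketch, not the paper's exact FTRL-based algorithm.

```python
import numpy as np

def sgdm_increasing_momentum(grad, x0, T, lr=1.0):
    """Illustrative sketch: momentum SGD with increasing momentum and shrinking updates.

    grad: function returning a (stochastic) gradient at x
    x0:   initial point (numpy array)
    T:    number of iterations
    lr:   base step size (assumed hyperparameter)
    """
    x = x0.copy()
    m = np.zeros_like(x0)
    for t in range(1, T + 1):
        beta_t = t / (t + 1.0)        # momentum factor increasing toward 1 (assumed schedule)
        eta_t = lr / np.sqrt(t)       # shrinking step size (assumed schedule)
        g = grad(x)
        m = beta_t * m + (1.0 - beta_t) * g   # momentum (exponential moving average of gradients)
        x = x - eta_t * m                     # shrinking update
    return x                                  # return the last iterate


# Example: minimize the convex, Lipschitz function f(x) = |x - 3| with noisy subgradients
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = lambda x: np.sign(x - 3.0) + 0.1 * rng.standard_normal(x.shape)
    x_last = sgdm_increasing_momentum(grad, np.zeros(1), T=10_000)
    print(x_last)  # should approach 3
```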
