Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems

Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil

We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extra-gradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method. We then exploit this interpretation to show that both algorithms produce iterates that remain within a bounded set. We further show that the function value of the averaged iterates generated by both of these algorithms converges at a rate of $\mathcal{O}(1/k)$ to the function value at the saddle point. Our theoretical analysis is of interest as it provides a simple convergence analysis for the EG algorithm in terms of function value without requiring a compactness assumption. Moreover, it provides the first convergence rate estimate for OGDA in the general convex-concave setting.
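For reference, a standard form of the two updates discussed in the abstract, written for the unconstrained problem $\min_{x} \max_{y} f(x,y)$ with a step size $\eta > 0$ (this notation is not taken from the abstract and serves only as a sketch of the methods):
\[
\text{OGDA:}\quad
\begin{aligned}
x_{k+1} &= x_k - 2\eta \, \nabla_x f(x_k, y_k) + \eta \, \nabla_x f(x_{k-1}, y_{k-1}),\\
y_{k+1} &= y_k + 2\eta \, \nabla_y f(x_k, y_k) - \eta \, \nabla_y f(x_{k-1}, y_{k-1}),
\end{aligned}
\]
\[
\text{EG:}\quad
\begin{aligned}
x_{k+1/2} &= x_k - \eta \, \nabla_x f(x_k, y_k), & y_{k+1/2} &= y_k + \eta \, \nabla_y f(x_k, y_k),\\
x_{k+1} &= x_k - \eta \, \nabla_x f(x_{k+1/2}, y_{k+1/2}), & y_{k+1} &= y_k + \eta \, \nabla_y f(x_{k+1/2}, y_{k+1/2}).
\end{aligned}
\]
Both can be viewed as approximations of the proximal point update, which evaluates the gradient at the (implicitly defined) next iterate rather than the current one.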
