Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints

Shaojie Li, Yong Liu

Generalization performance of stochastic optimization holds a central place in machine learning. In this paper, we investigate the excess risk performance of, and work towards improved learning rates for, two popular approaches of stochastic optimization: empirical risk minimization (ERM) and stochastic gradient descent (SGD). Although there exists plentiful generalization analysis of ERM and SGD for supervised learning, current theoretical understandings of ERM and SGD either rely on stronger assumptions in convex learning, e.g., a strong convexity condition, or show slow rates and are less studied in nonconvex learning. Motivated by these problems, we aim to provide improved rates under milder assumptions in convex learning and to derive faster rates in nonconvex learning. Notably, our analysis spans two popular theoretical viewpoints: stability and uniform convergence. To be specific, in the stability regime, we present high-probability rates of order $\mathcal{O}(1/n)$ w.r.t. the sample size $n$ for ERM and SGD under milder assumptions in convex learning, and similar high-probability rates of order $\mathcal{O}(1/n)$ in nonconvex learning, rather than rates in expectation. Furthermore, this type of learning rate is improved to the faster order $\mathcal{O}(1/n^2)$ in the uniform convergence regime. To the best of our knowledge, the learning rates presented in this paper for ERM and SGD are all state-of-the-art.
