Dimensionality reduction, regularization, and generalization in overparameterized regressions

Ningyuan Huang, David W. Hogg, Soledad Villar

Overparameterization in deep learning is powerful: very large models fit the training data perfectly and yet generalize well. This realization revived the study of linear models for regression, including ordinary least squares (OLS), which, like deep learning, exhibits "double descent" behavior. This involves two features: (1) the risk (out-of-sample prediction error) can grow arbitrarily as the number of samples $n$ approaches the number of parameters $p$, and (2) the risk decreases with $p$ for $p>n$, sometimes achieving a lower value than the lowest risk at $p<n$.
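
The double-descent behavior described above can be sketched numerically with minimum-norm least squares. This is an illustrative toy experiment, not the paper's own setup: the dimensions, noise level, and random seed below are assumptions chosen for demonstration. `np.linalg.pinv` returns the minimum-norm solution, which is the interpolating estimator when $p > n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, D = 40, 1000, 100            # train size, test size, ambient feature count
beta = rng.normal(size=D) / np.sqrt(D)  # true coefficients (illustrative scaling)
X = rng.normal(size=(n, D))
Xt = rng.normal(size=(n_test, D))
y = X @ beta + 0.1 * rng.normal(size=n)
yt = Xt @ beta

def risk(p):
    """Out-of-sample MSE of min-norm OLS fit on the first p features."""
    w = np.linalg.pinv(X[:, :p]) @ y
    return np.mean((Xt[:, :p] @ w - yt) ** 2)

# Risk typically peaks near the interpolation threshold p = n = 40
# and descends again in the overparameterized regime p > n.
risks = {p: risk(p) for p in (10, 30, 40, 60, 100)}
```

In this sketch the peak at $p \approx n$ arises because the square design matrix is nearly singular, so the noise is amplified by its tiny singular values; for $p > n$ the minimum-norm constraint implicitly regularizes the interpolating solution.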
