Andrea Montanari (Stanford), Norbert Wiener Center Distinguished Lecturer
Time: 3:30 pm on Thursday, October 6th, 2022
From overparametrized neural networks to harmonic regression
Deep learning models are often trained in a regime that classical statistical learning theory would seem to forbid: the model complexity exceeds the sample size, and the training error does not concentrate around the test error. In fact, the model complexity can be so large that the network interpolates noisy training data. Despite this, it behaves well on fresh test data, a phenomenon that has been dubbed "benign overfitting."
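To make the phenomenon concrete, here is a minimal numerical sketch (my illustration, not from the talk): a minimum-norm least-squares fit on ReLU random features, with far more features than samples, interpolates noisy training data yet its test error stays far below that of a trivial predictor.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, p = 100, 20, 2000                       # samples, input dim, features (p >> n)
    X = rng.standard_normal((n, d))
    beta = rng.standard_normal(d) / np.sqrt(d)    # true signal
    y = X @ beta + 0.5 * rng.standard_normal(n)   # noisy training targets

    W = rng.standard_normal((d, p)) / np.sqrt(d)  # random first-layer weights, kept fixed
    Phi = np.maximum(X @ W, 0.0)                  # ReLU random features

    theta = np.linalg.pinv(Phi) @ y               # minimum-norm interpolating solution

    X_new = rng.standard_normal((1000, d))
    Phi_new = np.maximum(X_new @ W, 0.0)
    print("train MSE:", np.mean((Phi @ theta - y) ** 2))             # ~ 0: fits the noise
    print("test  MSE:", np.mean((Phi_new @ theta - X_new @ beta) ** 2))  # stays bounded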
I will review recent progress towards understanding this phenomenon in the so-called neural tangent regime, in which the neural network can be approximated by a certain random features model. In high dimensions (and for simple distributions of the covariates), unregularized regression in this model turns out to be equivalent to harmonic ridge regression with a positive ridge penalty. This equivalence provides a quantitative understanding of benign overfitting in this regime.
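The mechanism can be previewed in a toy computation (my illustration under simplifying assumptions, keeping only harmonics of degree at most one; the talk's analysis is far more general): for covariates on a high-dimensional sphere, the infinite-width ReLU random-features kernel is entrywise close to a low-degree polynomial kernel plus a positive multiple of the identity, so interpolating with it amounts to ridge regression with that implicit, self-induced penalty.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 200, 1000                              # high dimension: overlaps x_i.x_j ~ 1/sqrt(d)
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True) # covariates on the unit sphere

    G = X @ X.T                                   # Gram matrix of inner products
    ang = np.arccos(np.clip(G, -1.0, 1.0))
    K = (np.sin(ang) + (np.pi - ang) * np.cos(ang)) / (2 * np.pi)  # expected ReLU kernel

    # Degree <= 1 expansion of K around orthogonality, plus the leftover mass
    # on the diagonal, which acts as a positive "self-induced" ridge:
    kappa0, kappa1 = 1 / (2 * np.pi), 1 / 4       # K(0) and K'(0)
    lam = 1 / 2 - kappa0 - kappa1                 # K(1) minus its linear prediction
    K_approx = kappa0 + kappa1 * G + lam * np.eye(n)

    print("self-induced ridge:", lam)                        # ~ 0.09, strictly positive
    print("max entrywise gap :", np.abs(K - K_approx).max()) # small when d is large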
[Based on joint work with Song Mei, Theodor Misiakiewicz and Yiqiao Zhong]