



  • This event has passed.

Towards Understanding the Generalization Mystery in Deep Learning

November 16 @ 2:00 pm – 3:00 pm

A big open question in deep learning is the following: Why do over-parameterized neural networks trained with gradient descent (GD) generalize well on real datasets even though they are capable of fitting random datasets of comparable size? Furthermore, from among all solutions that fit the training data, how does GD find one that generalizes well (when such a well-generalizing solution exists)? In this talk we argue that the answer to both questions lies in the interaction of the gradients of different examples during training, and present a new theory based on this idea. The theory also explains a number of other phenomena in deep learning, such as why some examples are reliably learned earlier than others, why early stopping works, and why it is possible to learn from noisy labels. Moreover, since the theory provides a causal explanation of how GD finds a well-generalizing solution when one exists, it motivates a class of simple modifications to GD that attenuate memorization and improve generalization.
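The central idea above — that generalization hinges on how the gradients of different training examples interact — can be illustrated with a small experiment. The sketch below (an assumption-laden illustration, not the speaker's actual method or code) trains nothing; it simply measures the average pairwise cosine similarity of per-example gradients for a tiny logistic-regression model, comparing structured labels against random labels. The theory predicts gradients should be noticeably more coherent on real (structured) data:

```python
import numpy as np

rng = np.random.default_rng(0)

def per_example_grads(w, X, y):
    """Gradient of the logistic loss for each example separately."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    return (p - y)[:, None] * X        # shape (n_examples, n_features)

def mean_pairwise_cosine(G):
    """Average cosine similarity over all pairs of example gradients."""
    Gn = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-12)
    S = Gn @ Gn.T
    n = len(G)
    return (S.sum() - n) / (n * (n - 1))  # exclude self-similarity

# Tiny synthetic setup (all names and sizes here are illustrative choices).
n, d = 64, 10
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y_real = (X @ w_true > 0).astype(float)            # structured labels
y_rand = rng.integers(0, 2, size=n).astype(float)  # random labels

w = np.zeros(d)
coh_real = mean_pairwise_cosine(per_example_grads(w, X, y_real))
coh_rand = mean_pairwise_cosine(per_example_grads(w, X, y_rand))
```

On structured labels the per-example gradients share a common direction (toward the separating hyperplane), so `coh_real` comes out clearly positive, while `coh_rand` hovers near zero — the kind of contrast that, per the abstract, distinguishes learnable structure from memorizable noise.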


Event Tags:
Affiliated Group Name: Switzerland Section Chapter, CEDA44

