Why neural networks find (geometrically) simple solutions (Benoit Dherin, Google)

15.12.2022 13:15

We will start by defining a notion of geometric complexity for neural networks based on intuitive notions of volume and energy. This will be motivated by visualizations of training sequences for simple 1d neural regressions. Then we will explain why, for neural networks, the optimization process creates a pressure to keep the network's geometric complexity low. Additionally, we will see that many other common heuristics in the training of neural networks (from initialization schemes to explicit regularization strategies) have the side effect of also keeping the geometric complexity of the learned solutions low. We will conclude by explaining how this points to a preference for a form of harmonic map built into the commonly used training and tuning heuristics of deep learning.
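For concreteness, one natural formalization of such a complexity measure, consistent with the volume/energy intuition above, is the discrete Dirichlet energy of the model f over the training inputs D:

$$\mathrm{GC}(f, D) = \frac{1}{|D|} \sum_{x \in D} \|\nabla_x f(x)\|^2.$$

Below is a minimal sketch of how this quantity can be computed for a simple 1d neural regression, written in JAX; it assumes the Dirichlet-energy definition above, and the two-layer network, its sizes, and the evaluation grid are illustrative choices rather than details from the talk.

import jax
import jax.numpy as jnp

def init_params(key, width=32):
    # Small two-layer tanh network for 1d regression (illustrative sizes).
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (1, width)),
        "b1": jnp.zeros(width),
        "w2": jax.random.normal(k2, (width, 1)) / jnp.sqrt(width),
        "b2": jnp.zeros(1),
    }

def net(params, x):
    # x has shape (1,): a single 1d input point; returns a scalar prediction.
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"])[0]

def geometric_complexity(params, xs):
    # Mean squared norm of the input gradient over the data set,
    # i.e. the discrete Dirichlet energy of the learned function.
    grads = jax.vmap(jax.grad(net, argnums=1), in_axes=(None, 0))(params, xs)
    return jnp.mean(jnp.sum(grads ** 2, axis=-1))

key = jax.random.PRNGKey(0)
xs = jnp.linspace(-1.0, 1.0, 100).reshape(-1, 1)  # 1d training inputs
print(geometric_complexity(init_params(key), xs))

Tracking this quantity over the course of a training run is one way to visualize the implicit pressure toward low geometric complexity that the talk describes.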

Location

Building: Conseil Général 7-9

Room 1-15 (note the unusual time), Séminaire "Topologie et géométrie"

Organized by

Section de mathématiques

Speaker(s)

Benoit Dherin, Google

Free admission

Classification

Category: Seminar