Department of Mathematics

Applied Mathematics

  •  Holger Rauhut, RWTH Aachen University
  •  Convergence of gradient flows for learning deep linear neural networks (Zoom link available at https://sites.google.com/view/minds-seminar/home)
  •  07/09/2020
  •  2:30 PM - 3:30 PM

(Part of the One World MINDS seminar: https://sites.google.com/view/minds-seminar/home)

Learning neural networks amounts to minimizing a loss function over given training data. Gradient descent algorithms are often used for this task, but their convergence properties are not yet well understood. To make progress, we consider the simplified setting of linear networks optimized via gradient flows. We show that such a gradient flow, defined with respect to the layers (factors), can in certain cases be reinterpreted as a Riemannian gradient flow on the manifold of rank-r matrices. The gradient flow always converges to a critical point of the underlying loss functional and, for almost all initializations, it converges to a global minimum on the manifold of rank-k matrices for some k.
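
To fix ideas, here is a minimal sketch of the setup behind the abstract; the notation and the choice of squared Frobenius loss below are assumptions based on the standard formulation of deep linear networks, not details taken from the announcement. A linear network with layers \(W_1, \dots, W_N\) and training data \((X, Y)\) is trained by minimizing

\[ L(W_1, \dots, W_N) = \tfrac{1}{2}\, \| W_N W_{N-1} \cdots W_1 X - Y \|_F^2 , \]

and the gradient flow referred to in the abstract is the coupled system of matrix ODEs

\[ \dot{W}_j(t) = - \nabla_{W_j} L\big( W_1(t), \dots, W_N(t) \big), \qquad j = 1, \dots, N . \]

The reinterpretation concerns the end-to-end product \(W(t) = W_N(t) \cdots W_1(t)\): in the cases the talk addresses, its evolution can be viewed as a Riemannian gradient flow on the manifold of rank-\(r\) matrices, where \(r\) is constrained by the layer widths.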

 

Contact

Department of Mathematics
Michigan State University
619 Red Cedar Road
C212 Wells Hall
East Lansing, MI 48824

Phone: (517) 353-0844
Fax: (517) 432-1562
