On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport - Bach - Workshop 3 - CEB T1 2019

  • Published on 6 Sep 2024
  • Francis Bach (INRIA) / 05.04.2019
    On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport.
    Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension. (Joint work with Lénaïc Chizat) (An illustrative numerical sketch of this particle scheme is appended at the end of this description.)
    ----------------------------------
    You can follow us on social media for our latest news.
    Facebook : / instituthenripoincare
    Twitter : / inhenripoincare
    Instagram : / instituthenripoincare
    *************************************
    Language: English; Date: 05.04.2019; Speaker: Bach, Francis; Event: Workshop 3 - CEB T1 2019; Venue: IHP; Keywords:
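    ----------------------------------
    Below is a minimal numerical sketch of the particle scheme described in the abstract. It is an illustration only, not the speaker's code: the unknown measure is discretized into m weighted particles, giving a single-hidden-layer ReLU network, and plain gradient descent (a small-step discretization of the continuous-time gradient flow) is run on the particle weights and positions. The data, scalings, step size, and all names below are assumptions made for this sketch.

    # Illustrative sketch only (assumed setup, not the speaker's code): particle
    # gradient descent on a single-hidden-layer ReLU network
    #   f(x) = (1/m) * sum_j w_j * relu(theta_j . x)
    # where the w_j are the particle weights and the theta_j their positions.
    import numpy as np

    rng = np.random.default_rng(0)

    n, d, m = 200, 5, 512                      # samples, input dimension, particles
    X = rng.standard_normal((n, d))
    y = np.sin(X @ rng.standard_normal(d))     # arbitrary synthetic regression target

    w = rng.standard_normal(m)                 # particle weights
    theta = rng.standard_normal((m, d))        # particle positions

    def predict(w, theta, X):
        # Mean-field scaling 1/m, so the many-particle limit is well defined.
        return np.maximum(X @ theta.T, 0.0) @ w / m

    lr, steps = 0.1, 2000                      # small step approximates the gradient flow
    for _ in range(steps):
        pre = X @ theta.T                      # (n, m) pre-activations
        act = np.maximum(pre, 0.0)             # ReLU features
        resid = predict(w, theta, X) - y       # residuals of the squared loss
        grad_w = act.T @ resid / (n * m)
        grad_theta = ((resid[:, None] * (pre > 0.0) * w).T @ X) / (n * m)
        # Steps are rescaled by m so each particle moves at O(1) speed (mean-field time scale).
        w -= lr * m * grad_w
        theta -= lr * m * grad_theta

    print("final mean squared error:", np.mean((predict(w, theta, X) - y) ** 2))

    With the 1/m (mean-field) scaling used above, taking m large approaches the many-particle limit in which the global-convergence result of the talk applies; the sketch makes no claim about rates or about the initialization conditions required by the theorem.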
