DeepMath
United States
Joined Apr 3, 2018
Videos
DeepMath 2021: Day 1 Session 1 (Opening Remarks, Mallat, Gauthier)
636 views · 2 years ago
00:00 Opening Remarks (Adam Charles | Johns Hopkins University)
07:36 Deep Network Concentration (Stéphane Mallat | Collège de France)
01:14:30 Parametric Scattering Networks (Shanel Gauthier | University of Montreal)
01:36:44 General discussion
DeepMath 2021: Day 1 Session 2 (Singh, Bordelon, Zhu)
223 views · 2 years ago
00:00 Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels (Aarti Singh | Carnegie Mellon University)
54:10 SGD on Structured Data: Stability and Optimal Batch Size (Blake Bordelon | Harvard University)
01:15:28 A Geometric Analysis of Neural Collapse with Unconstrained Features (Zhihui Zhu | University of Denver)
DeepMath 2021: Day 1 Session 3 (Ergen & Pilanci, Willett)
342 views · 2 years ago
DeepMath 2021: Day 2 Session 1 (Uhler, Kevrekidis, Loureiro)
178 views · 2 years ago
DeepMath 2021: Day 2 Session 2 (Arora, Wang)
169 views · 2 years ago
DeepMath 2021: Day 2 Session 3 (Refinetti, Chaudhuri)
117 views · 2 years ago
Lenka Zdeborova - Insights on gradient-based algorithms in high-dimensional non-convex learning
1.1K views · 3 years ago
Francesca Mignacco - Dynamical Mean-Field Theory for SGD in Gaussian Mixture Classification
469 views · 3 years ago
Stefanie Jegelka - Representation and Learning in Graph Neural Networks
1.1K views · 3 years ago
Rika Antonova - Analytic Manifold Learning with Neural Networks
607 views · 3 years ago
Yi Sun - Data Augmentation as Stochastic Optimization
346 views · 3 years ago
Rene Vidal - Keynote: Mathematics of Deep Learning
1.1K views · 3 years ago
Gitta Kutyniok - Spectral Graph Convolutional Neural Networks Do Generalize
1.8K views · 3 years ago
Maksim Maydanskiy - Spatial Transformations in Convolutional Networks and Invariant Recognition
232 views · 3 years ago
Stéphane d'Ascoli - Reconciling Double Descent With Older Ideas
806 views · 3 years ago
Demba Ba - Deeply-Sparse Signal Representations
457 views · 3 years ago
Eero Simoncelli - Making use of the Prior Implicit in a Denoiser
777 views · 3 years ago
Melanie Weber - Learning a Robust Large-Margin Classifier in Hyperbolic Space
337 views · 3 years ago
Amartya Mitra - LEAD: Least Action Dynamics for Min-Max Optimization
90 views · 3 years ago
Sejun Park - Expressive Power of Narrow Networks
201 views · 3 years ago
Abdulkadir Canatar - Statistical Mechanics of Generalization in Kernel Regression
380 views · 3 years ago
🌽
good work my brother <3.
RIP Naftali
Inspiring! Thanks.
There seems to be a typo in the slides at 9:05; the cumulative power should use the squared teacher weights instead of the teacher weights (which could be negative). Cf. the Nat Comms 2021 paper.
nice talk Eero! Great to see you adapting to these astonishing developments.
Great explanation, thanks !
Learned many great things today. I am just a student from a developing country, so it was inspiring.
I hope someone (his students?) will carry on this very important line of work.
RIP Naftali
Here is a more recent paper review of the SDR math, carried out by Numenta: th-cam.com/video/XWL2mCht8Xs/w-d-xo.html
This math supports HTM (Hierarchical Temporal Memory) from Numenta (Jeff Hawkins), which has used SDRs (Sparse Distributed Representations) in its models of the neocortex since 2002. Very interesting. They have also created open-source models with SDR encoders.
wow
such a nice work
Thank you! This was very helpful
Thank you!
are the slides available publicly?
awesome talk!
great talk :)
Nice.