AMMI Course "Geometric Deep Learning" - Lecture 1 (Introduction) - Michael Bronstein

  • Published on 15 Jun 2024
  • Video recording of the course "Geometric Deep Learning" taught in the African Master in Machine Intelligence in July-August 2021 by Michael Bronstein (Imperial College/Twitter), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind)
    Lecture 1: Symmetry through the centuries • The curse of dimensionality • Geometric priors • Invariance and equivariance • Geometric deep learning blueprint • The "5G" of Geometric deep learning • Graphs • Grids • Groups • Geodesics • Gauges • Course outline
    Slides: bit.ly/3iw6AO9
    Additional materials: www.geometricdeeplearning.com
  • Science & Technology

Comments • 50

  • 2 years ago +12

    This is truly amazing. I finished my bachelor's in mathematics with a thesis in differential geometry, and I just started a master's degree in Artificial Intelligence Research. I saw some articles on geometric deep learning, but nothing as complete as this. I think this beautiful field fits my interests perfectly, and I think I'll orient my research career in this direction. Thank you very much for this.

  • @petergoodall6258 2 years ago +11

    Oh wow! Ties together so many areas I've been interested in over the years - with concrete, intuitive applications.

  • @vinciardovangoughci7775 2 years ago +5

    Thanks so much for doing this and putting it online for free. Generative models + Gauges fuel my dreams.

  • @fredxu9826 2 years ago +10

    What a good time to be alive! I’m going to enjoy this playlist.

  • @TheAIEpiphany 2 years ago +1

    Bravo Michael! I really love that you put things into a historical context - that helps us create a map (a graph :) ) of how concepts connect and evolve, and by introducing this structure into our mental models it becomes easier to explore this vast space of knowledge.

  • @jordanfernandes581 2 years ago +4

    I just started reading your book "Numerical geometry ... " today out of curiosity and this shows up on YouTube. I'm looking forward to learning something new 🙂

  • @edsoncasimiro 2 years ago +6

    Dear Professor Michael Bronstein, congratulations on the great job you and your team are doing in the field of AI. I'm going into my junior year at university and have kind of fallen in love with geometric deep learning. Hopefully these lessons and the paper will help me understand more about it. Thanks for sharing, all the best.

  • @NoNTr1v1aL 2 years ago +3

    Amazing lecture series!

  • @Chaosdude341 2 years ago +2

    Thank you for uploading this.

  • @marfix19 2 years ago +2

    This is just pure coincidence. I'm currently interested in this topic and this amazing course popped up. Thank you very much Prof. Michael for opening these resources to the public. I might try to get in touch with you or your colleagues to discuss some ideas. Regards! M Saval

  • @samm9840 2 years ago +1

    I had seen your previous ICLR presentation on the same topic and was still not clear about the invariance and equivariance ideas! Now I've finally got hold of the concept of inductive biases (geometric priors) that must be ensured in model architectures:
    1. images - shift inv. and equiv.
    2. graphs - permutation inv. and equiv.
    3. sequences/language - ??
    and for any other task we may encounter, we need to identify the property w.r.t. which the resulting function should be invariant or equivariant! Thank you very much Sir for generously putting it all out there for the public good.
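
A minimal numpy sketch (illustrative, not from the lecture) of the first two priors in the list above: circular convolution commutes with shifts (equivariance), and sum aggregation over node features ignores node order (invariance). All function names here are made up for illustration:

```python
import numpy as np

def shift(x, k):
    """Cyclically shift a 1-D signal by k positions."""
    return np.roll(x, k)

def circ_conv(x, w):
    """Circular convolution of signal x with filter w."""
    n = len(x)
    return np.array([sum(w[j] * x[(i - j) % n] for j in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = np.array([0.25, 0.5, 0.25])

# 1. Shift equivariance: convolving a shifted signal = shifting the convolved signal.
assert np.allclose(circ_conv(shift(x, 3), w), shift(circ_conv(x, w), 3))

# 2. Permutation invariance: summing node features ignores the node order.
X = rng.standard_normal((5, 4))   # 5 nodes, 4 features each
perm = rng.permutation(5)
assert np.allclose(X.sum(axis=0), X[perm].sum(axis=0))
```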

  • @droidcrackye5238 2 years ago +2

    Great work, thanks

  • @maximeg3659 2 years ago +1

    Thanks for uploading this!

  • @abdobrahany8236 2 years ago +1

    Oh my God thank you very much for your effort

  • @channagirijagadish1201 2 years ago

    Excellent lecture. Thanks, I appreciate it.

  • @gowtham236 2 years ago +1

    This will keep me busy for the next few weeks!!

  • @sumitlahiri4973 2 years ago +1

    Awesome video!

  • @fredxu9826 2 years ago +8

    Today I got the book that Dr. Bronstein suggested, "The Road to Reality" by Roger Penrose... wow, I wish I had come across this book way earlier. If I had had it in my early undergraduate years I would have had much more fun and motivation to study physics and mathematics. This is just amazing.

  • @jobiquirobi123 2 years ago +2

    Thank you!

  • @krishnaaditya2086 2 years ago +1

    Awesome, thanks!

  • @rock_it_with_asher 2 years ago +1

    28:32 - A moment of revelation! wow!🤯

  • @madhavpr 2 years ago +1

    This is fantastic!! It's great to have access to such amazing content online. What are the prerequisites for understanding the material? I know basic signal processing, linear algebra, and vector calculus, and I work (mostly) on deep learning. I'm learning differential geometry (of curves and surfaces in R^3) and abstract algebra on my own. Is my background sufficient? I feel a little overwhelmed.

  • @Alejandro-hh5ub 1 year ago

    The portrait on the left @5:35 is of Pierre de Fermat, but it says Desargues 😅

  • @sowmyakrishnan240 2 years ago

    Thank you Dr. Bronstein for the extraordinary introductory lecture. Really excited to go through the rest of the lectures in this series! I have 2 questions based on the introduction:
    1) When discussing the MNIST example you mentioned that images are high-dimensional. I could not understand that point, as images such as those in the MNIST dataset are considered to be 2-dimensional in other DL/CNN courses. Can you elaborate on how the higher dimensions emerge, or how to visualize them for cases such as the MNIST dataset?
    2) In the case of molecules, even though the order of nodes can vary, the neighborhood of each node remains the same under non-reactive conditions (when bond formation/breakage is not expected). In such cases, does permutation invariance only refer to the order in which nodes are traversed in the graph (like variations in atom numbering between IUPAC names of molecules)? Does permutation invariance take into account changes in node neighborhoods?
    I apologize for the naive questions, professor. Thank you once again for the initiative to digitize these lectures for the benefit of students and researchers.

    • @Hyb1scus 2 years ago +1

      I don't think I can answer your first question in detail, but in an MNIST picture there are as many dimensions as there are pixels. It is the analysis of those pixels, individually or bundled through a convolution, that enables the program to determine the displayed number.

    • @MichaelBronsteinGDL 2 years ago +1

      Each pixel is treated as a coordinate of a vector, so even a 32x32 MNIST image is ~1K-dimensional
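
In code, this "pixels as coordinates" view is just flattening the image grid into one long vector; a toy numpy illustration (not from the lecture):

```python
import numpy as np

image = np.random.rand(32, 32)    # a 32x32 grayscale image
vector = image.reshape(-1)        # each pixel becomes one coordinate of a vector
print(vector.shape)               # (1024,) -- roughly 1K dimensions
```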

  • @vaap 2 years ago

    banger course

  • @randalllionelkharkrang4047 2 years ago

    I didn't understand most things mentioned here. Hopefully the later lectures provide more insight.

  • @justinpennington509 2 years ago

    Hi Professor Bronstein, what is the practical way of handling graph networks of different sizes? With a picture, it's easy to maintain a consistent resolution and pixel count, but with graphs and subgraphs you could have any number of nodes. Is it typical to just pick a maximum N one would expect in practice and leave the unfilled nodes as 0 in the feature vector and adjacency matrix? If the sizes of these matrices are variable, then how does that affect the weights of the net itself?

    • @MichaelBronsteinGDL 2 years ago +1

      The way graph functions are constructed in GNNs is by aggregating the multiset of neighbour features. This operation is done for every node of the graph. This way the GNN does not depend on the number of nodes, the number of neighbors, or their order
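
A minimal sketch of that construction, assuming a simple sum aggregator (the names gnn_layer, W_self, W_nbr are illustrative, not from the course): the weights act on the feature dimension only, so the same layer handles graphs of any size without padding:

```python
import numpy as np

def gnn_layer(A, X, W_self, W_nbr):
    """One message-passing layer: each node sums the multiset of its
    neighbours' features (A @ X) and combines it with its own features.
    A: (n, n) adjacency matrix; X: (n, d) node features.
    The weights act on the feature dimension only, so n is arbitrary."""
    return np.tanh(X @ W_self + (A @ X) @ W_nbr)

rng = np.random.default_rng(0)
d, h = 4, 8
W_self = rng.standard_normal((d, h))
W_nbr = rng.standard_normal((d, h))

# The same weights process a 3-node and a 7-node graph -- no padding needed.
for n in (3, 7):
    A = np.triu(rng.random((n, n)) > 0.5, k=1).astype(float)
    A = A + A.T                                   # undirected, no self-loops
    X = rng.standard_normal((n, d))
    print(gnn_layer(A, X, W_self, W_nbr).shape)   # (n, h)
```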

  • @evenaicantfigurethisout 2 years ago +2

    23:41 I don't understand why we can simply permute the nodes of the caffeine molecule willy-nilly like that. The binding energy depends on what the neighboring atoms are, the number of bonds, and also the type of bonds. How can all of this information be preserved if we permute it at will like this? For example, the permuted vectors here show all the yellows next to each other, when in the actual molecule there are no neighboring yellows at all!

    • @MichaelBronsteinGDL 2 years ago +2

      Molecular fingerprints are permutation-invariant, but built from permutation-equivariant aggregation. The way it works is a sequence of local permutation-invariant aggregators (each corresponding to one GNN layer), which makes each layer permutation-equivariant, followed by a permutation-invariant pooling. So the graph structure is taken into account. We explain it in Lectures 5-6
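
A toy numpy check of that recipe (illustrative only; the actual fingerprint architectures are covered in Lectures 5-6): a permutation-equivariant message-passing layer followed by permutation-invariant sum pooling yields an output unchanged under consistent relabeling of the nodes:

```python
import numpy as np

def fingerprint(A, X, W_self, W_nbr):
    """Permutation-equivariant message passing, then invariant sum pooling."""
    H = np.tanh(X @ W_self + (A @ X) @ W_nbr)   # permuting nodes permutes rows of H
    return H.sum(axis=0)                        # a sum ignores row order

rng = np.random.default_rng(1)
n, d, h = 6, 4, 8
A = np.triu(rng.random((n, n)) > 0.5, k=1).astype(float)
A = A + A.T                                     # undirected, no self-loops
X = rng.standard_normal((n, d))
W_self, W_nbr = rng.standard_normal((d, h)), rng.standard_normal((d, h))

# Relabel the nodes consistently in both A and X: the fingerprint is unchanged.
p = rng.permutation(n)
assert np.allclose(fingerprint(A, X, W_self, W_nbr),
                   fingerprint(A[np.ix_(p, p)], X[p], W_self, W_nbr))
```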

  • @mlworks 2 years ago

    Is there any book that goes along with the geometric deep learning course presented here?

    • @marijansmetko6619 2 years ago +1

      This is basically the textbook: arxiv.org/abs/2104.13478

  • @xinformatics 2 years ago +2

    05:08 Desargues looks strikingly similar to Pierre de Fermat. I think one of them is wrong.

  • @syedakbari845 2 years ago

    The link to the lecture slides is not working. Is there any way to still access them?

    • @ifeomaveronicanwabufo3183 1 year ago

      The resources, including the slides, can be found here: geometricdeeplearning.com/lectures/

  • @mingmingtan8790 2 years ago

    Hi, I can't access the slides. When I click the link, it says that the URL has been blocked by Bitly's systems as potentially harmful.

    • @ifeomaveronicanwabufo3183 1 year ago

      The resources, including the slides, can be found here: geometricdeeplearning.com/lectures/

  • @akshaysarbhukan6701 2 years ago

    Amazing lecture. However, I was not able to understand the mathematical parts. Can someone suggest prerequisites for this lecture series?

  • @444haluk 2 years ago

    32:45 That approach is too naive. If I say "I hate nachos", it doesn't mean that I have a connection with every nacho, past, present, and future, and that I hate every single one of them uniquely. No! I just hate nachos. After a minute of thinking you realize that what you need is hypergraphs in almost every situation.

  • @JohnSmith-ut5th 2 years ago

    The very fact that the human brain is captivated and fascinated by manifolds is enough to prove that the brain does not use the concept of manifolds in any manner. I'm going to tell you a secret I happen to know: the brain is purely a sparse-L1-norm processor. It has no notion of "distance" except in the form of pattern matching.
    You're welcome... so now you can throw this entire video and all related research in the garbage, unless your goal is to make something better than the human brain.

  • @AtticusDenzil 1 year ago

    Polish accent