The ultimate intro to Graph Neural Networks. Maybe.

  • Published on Feb 3, 2025

Comments • 106

  • @khushpatelmd
    @khushpatelmd 4 years ago +31

    This channel is so underrated. There is a lot of info if you are learning for the first time, but the best approach is to watch the video 3-4 times.

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +7

      Thanks, I appreciate your words! If you think the channel is underrated, then do not hesitate to share the content! 😁 By doing so, you can actively help! Thank you so much!

  • @vincent_hall
    @vincent_hall 3 years ago +8

    Love it, thanks Coffee Bean.
    I'd worked with graph theory and worked a lot with NNs, but I didn't know what graph convolutional NNs were.
    Thanks for updating my skills.
    I came here because I kept seeing the term Graph Neural Network everywhere.

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +4

      Ms. Coffee Bean is so glad this video was helpful for you!

  • @malekaburaddaha5910
    @malekaburaddaha5910 4 years ago +15

    I have been watching videos about GNNs for two days and I got the idea, but here I completely understood everything.
    Thank you very much. I am glad that I found your channel, keep going.

  • @miladaghajohari2308
    @miladaghajohari2308 4 years ago +4

    That is a nice intro. Thanks for taking the time to make it.

  • @MCMelonslice
    @MCMelonslice 2 years ago +1

    I love your channel. Just found you yesterday, and as a nearly complete scrub in ML this helps build a solid foundation!

  • @enicay7562
    @enicay7562 several months ago +1

    Thank you, Miss Coffee Bean!

  • @TheAIEpiphany
    @TheAIEpiphany 4 years ago +4

    Really useful and I like the creative animations! Keep up the great work, subscribed!

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +4

      Thanks for the sub, really nice to have you around!
      I discovered your channel just yesterday and subscribed, but completely unrelated to this comment you left here! 😎

    • @TheAIEpiphany
      @TheAIEpiphany 4 years ago +3

      @@AICoffeeBreak heheh nice, glad to hear that and glad to be here!

  • @DavenH
    @DavenH 3 years ago +15

    Really well presented and animated. Keep it up!

  • @firdawsyahya3749
    @firdawsyahya3749 3 years ago +8

    I finally have the formula on lock. Thank you

  • @DeepFindr
    @DeepFindr 4 years ago +3

    I also made a video on GNNs but I have to admit yours is more compressed and gets faster "to the point" :) Well done!

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +2

      Haha, "to the point" is kind of the motto of the whole channel! But for everyone keen to dig into everything around the topic, they have you! 😉

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +2

      BTW, great channel! Your GNN videos are also accompanied by a blog post! You have something for everyone!

    • @DeepFindr
      @DeepFindr 4 years ago +1

      Thanks! Sounds good, I look forward to your future videos :)

  • @nicohambauer
    @nicohambauer 3 years ago +10

    Nice! Keep up the good work! You are a true researcher and are helping us other researchers keep track of the really relevant things in the literature!

  • @mirjunaid26
    @mirjunaid26 3 years ago +11

    The way you break down and explain the mathematical formulae of GNNs is amazing and beautiful. At the same time, introducing and clarifying the concept of permutation invariance in such a short video is commendable. Thank you ❤️ Liking, sharing, & subscribing.

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +3

      Wow, thank you! This comment made my day.

  • @swarajshinde3950
    @swarajshinde3950 4 years ago +4

    Thank you for such clear and concise information!!

  • @sir_aken9706
    @sir_aken9706 2 years ago +1

    Just discovered this channel and ngl, I think I'm in love with Coffee Bean 😂 Very good and succinct video.

  • @chronomo97
    @chronomo97 3 years ago +3

    Great intro!

  • @thomastorku9002
    @thomastorku9002 3 years ago +3

    She is phenomenal

  • @pureeight7003
    @pureeight7003 3 years ago +3

    I find this video very useful. I have subscribed!

  • @UnrecycleRubdish
    @UnrecycleRubdish 3 years ago +8

    Very cute and entertaining idea with the coffee bean. Makes an otherwise dry subject a little bit... moist? Anyway thank you for the informative video. FYI I played it on 1.25x speed because she spoke a little bit too slow for me! Great work!

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +3

      Thanks for watching! The speaking speed is something I am still adjusting to find my pace. Happy that you used the speedup functionality to help yourself. :)

  • @prachi07kgp
    @prachi07kgp 1 year ago +1

    Very helpful video, such a complicated concept explained so beautifully and in such a simple manner.

  • @pradyumnagupta3989
    @pradyumnagupta3989 4 years ago +6

    Oh my GOD, I have been trying to study GNNs for so long and this is by far the best video I have seen on this topic. Thank you for clarifying that "convolution" is misleading in GCNs.

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +6

      Thanks, I really tried to convey the important concepts. The other Schnick-schnack in GNNs is very good at scaring away people trying to learn about GNNs for the first time.

  • @deepakravikumar674
    @deepakravikumar674 4 years ago +4

    2:54 - I was happy
    3:00 - RIP

  • @boscojay1381
    @boscojay1381 4 years ago +3

    I was about to leave... then her voice at the end said, "hey, do not forget to like & subscribe...", and that's how she caught her fish!

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +1

      Love your "fish-catching" formulation.😅 Is it right for her to assume from your profile picture that you like sailing?

    • @boscojay1381
      @boscojay1381 4 years ago +2

      @@AICoffeeBreak you’re absolutely right! lol

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +3

      Cool! I will tell you a secret: Ms. Coffee Bean loves sailing too (but she is quite the beginner in the matter)!

  • @newbie8051
    @newbie8051 1 year ago +1

    Great explanation, loved this.
    Thanks a ton, ma'am!

  • @nikhilmanali920
    @nikhilmanali920 2 years ago +1

    Thank you so much.

  • @omarmafia234
    @omarmafia234 4 years ago +4

    I cannot thank you enough!! Great, great work!

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago

      So glad it helped! Means a lot to Ms. Coffee Bean.

  • @samarthagarwal7219
    @samarthagarwal7219 3 years ago +3

    Nice explanation, thanks!

  • @st0a
    @st0a 1 year ago +1

    Really cute channel. Why you don't have more subscribers is beyond my understanding.

  • @robertramji761
    @robertramji761 3 years ago +3

    Such a clear and accessible explanation, thank you!!!

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +1

      Great to hear, thanks! ☺️

  • @rohith2454
    @rohith2454 2 years ago +1

    Thanks a lot!

  • @keithsanders6309
    @keithsanders6309 3 years ago +1

    This was a helpful and stellar video!!

  • @sukanya4498
    @sukanya4498 3 years ago +1

    Great introduction with examples 👏🏽🙌🏽👍🏽!!

  • @wellisonraul5825
    @wellisonraul5825 1 year ago +1

    Thank you!

  • @diwakerkumar5910
    @diwakerkumar5910 2 years ago +1

    Thank you

  • @denshaSai
    @denshaSai 2 years ago +3

    So what is the ground truth and loss function for graphs? (How do you actually learn the weights?)

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      Depends on what you are trying to do: for node classification, for example, you have a cross-entropy loss for predicting the label of each node.

    • @ditherycarbon8661
      @ditherycarbon8661 2 years ago +1

      @@AICoffeeBreak Soo, we are still using backpropagation to learn the weights, right?
      And how do we get the final output from the individual node embeddings? (Considering that the output is global for the entire graph and not local to each node.)
      Thanks in advance.

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +2

      @@ditherycarbon8661 Yes, it is still backprop and gradient descent.
      If we need to classify the whole graph, we apply a few extra classification layers on top of an aggregation over the whole graph.
      Just an idea (GNNs are not my area of expertise): one could also do graph classification if there are enough GNN layers that the information has had time to smooth out over the entire graph. Then a single output node alone could say something about the whole graph. (A minimal code sketch of this setup follows the thread.)

    • @ditherycarbon8661
      @ditherycarbon8661 2 years ago +1

      @@AICoffeeBreak Thanks, makes sense
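
A minimal sketch of the setup discussed in this thread, in plain PyTorch (the layer, the toy 4-node graph, and names such as SimpleGNNLayer, node_head, and graph_head are illustrative assumptions, not code from the video): node embeddings come out of a message-passing layer, a per-node cross-entropy loss handles node classification, mean-pooling the node embeddings feeds a small graph-level classifier, and everything is trained with ordinary backpropagation and gradient descent.

```python
# Illustrative sketch, not the video's code: a single message-passing layer,
# node-level cross-entropy, and mean-pooling for a graph-level prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)  # transforms the node itself
        self.U = nn.Linear(d_in, d_out, bias=False)  # transforms the neighbours

    def forward(self, h, adj):
        # h: (N, d_in) node states, adj: (N, N) 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # c_i = number of neighbours
        neigh = (adj @ self.U(h)) / deg                  # averaged neighbour messages
        return F.relu(self.W(h) + neigh)

# Toy graph: 4 nodes with random features and made-up labels.
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
feats = torch.randn(4, 8)
node_labels = torch.tensor([0, 0, 1, 1])
graph_label = torch.tensor([1])

layer = SimpleGNNLayer(8, 16)
node_head = nn.Linear(16, 2)    # per-node classifier
graph_head = nn.Linear(16, 2)   # whole-graph classifier
params = list(layer.parameters()) + list(node_head.parameters()) + list(graph_head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

for _ in range(100):
    h = layer(feats, adj)
    node_loss = F.cross_entropy(node_head(h), node_labels)    # node classification
    graph_logits = graph_head(h.mean(dim=0, keepdim=True))    # aggregate the whole graph
    graph_loss = F.cross_entropy(graph_logits, graph_label)   # graph classification
    (node_loss + graph_loss).backward()                       # plain backprop
    opt.step()
    opt.zero_grad()
```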

  • @ИльясХарунов
    @ИльясХарунов 2 years ago +2

    I can't quite understand how the weighted sum of neighbour vectors is permutation invariant. For example, if we swap the node 4 and node 2 vectors, then node 4's vector will now get weight c_i2 and node 2's vector will get c_i4. Why won't the sum change?

      @AICoffeeBreak 1 year ago +2

      First we multiply node 4's vector representation with a matrix W; we also multiply node 2's vector representation with the same matrix W. So swapping them in the sum does not change anything (addition is commutative). (The short numeric check after this thread illustrates this.)
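
A short numeric check of the reply above, assuming the usual aggregation sum_j (1/c_ij) * W h_j (the neighbour set, the coefficients c, and all numbers are made up for illustration): visiting the neighbours in a different order only reorders the terms of the sum, and each neighbour keeps its own coefficient, so the aggregated vector is the same.

```python
# Illustrative check: permuting the neighbour order does not change the
# aggregated message, since each neighbour keeps its own coefficient c_ij
# and addition is commutative.
import torch

torch.manual_seed(0)
W = torch.randn(16, 8)                                         # shared weight matrix
h = {2: torch.randn(8), 4: torch.randn(8), 7: torch.randn(8)}  # neighbour features
c = {2: 2.0, 4: 3.0, 7: 1.0}                                   # per-edge normalisation c_ij

def aggregate(order):
    # sum_j (1/c_ij) * W @ h_j, visiting neighbours in the given order
    return sum((1.0 / c[j]) * (W @ h[j]) for j in order)

m1 = aggregate([2, 4, 7])
m2 = aggregate([7, 2, 4])        # same neighbours, different order
print(torch.allclose(m1, m2))    # True
```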

  • @scottyb3b7
    @scottyb3b7 2 years ago +1

    This is excellent

  • @CallSaul489
    @CallSaul489 2 years ago +1

    Really nice explanation!

  • @ckng6126
    @ckng6126 3 years ago +1

    Very succinct explanation

  • @cesar73silva
    @cesar73silva 4 years ago +3

    Yes. This is good.

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +1

      Thanks!

    • @cesar73silva
      @cesar73silva 4 years ago +1

      @@AICoffeeBreak A question: how do I get the initial representations for h_i? Or what possible ways are there?

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +2

      @@cesar73silva Depends a lot on what you want to do. As an example: for NLP it is quite common to initialize h_0 with word embeddings.
      In applications where you really have nothing to start with, h_0 could even be initialized randomly. (A small sketch of both options follows below.)
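
A tiny sketch of the two options mentioned above (all sizes and names are illustrative): when nodes correspond to words, h_0 can be looked up in a (pretrained) word-embedding table; when there is nothing to start from, h_0 can simply be random.

```python
# Illustrative only: two common ways to get the initial node states h_0.
import torch
import torch.nn as nn

num_nodes, d = 5, 8

# Option 1: nodes are words -> take h_0 from an embedding table
# (in practice loaded from pretrained word vectors).
embedding = nn.Embedding(num_embeddings=100, embedding_dim=d)
word_ids = torch.tensor([3, 17, 42, 7, 99])   # one (made-up) word id per node
h0_words = embedding(word_ids)                # shape (num_nodes, d)

# Option 2: nothing to start with -> random (optionally trainable) initial states.
h0_random = nn.Parameter(torch.randn(num_nodes, d) * 0.01)
```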

  • @prathamprasoon2535
    @prathamprasoon2535 2 years ago +1

    Awesome video!

    • @AICoffeeBreak
      @AICoffeeBreak 2 years ago +1

      Thanks! You are THE Pratham Prasoon! I know you from Twitter. 😄

  • @heathenfire
    @heathenfire 2 years ago +1

    Wow, this was such a good explanation.

  • @sunaryaseo
    @sunaryaseo 3 years ago +2

    Thanks very much for the explanation. I still didn't get the notation of 'W' and 'U' in the formula. Where exactly are they in the figure? Are they on the edges of the graph, so we can multiply them with 'H', or somewhere else (in another figure) you didn't show? If there is another NN after this graph, I am curious how you connect this graph to that NN.

  • @l3nn13
    @l3nn13 8 months ago

    I would love some links or names of the original papers you are referring to.

  • @VenkatIyer
    @VenkatIyer 3 years ago +3

    Is there an easy implementation of simple GNNs available somewhere? On GitHub I can see only sophisticated approaches for weird problems.

    • @AICoffeeBreak
      @AICoffeeBreak 1 year ago +1

      If there is one, I would also like to know. :)

  • @abhinavmishra9401
    @abhinavmishra9401 4 years ago +4

    This is so beautiful. I am crying. Thanks a lot

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +2

      Thank you so much! It means a lot to me that it meant something to you!

  • @bibs2091
    @bibs2091 2 years ago +1

    I still have many questions in mind, but this is surely a very good introduction.

  • @ehsanelahi190
    @ehsanelahi190 3 years ago +2

    Good explanation of such a dry topic.

  • @zhangkin7896
    @zhangkin7896 3 years ago +1

    Hi Ms. Coffee Bean, could I translate the video and publish it on another video platform (bilibili) in that language, citing the original?

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +1

      Hey, thanks for reaching out! I would love to upload on bilibili myself, the problem is that I do not get past the account creation verification due to my poor Chinese. I would love some help with that. Would you mind writing me an email? ☺️

  • @sarahjamal86
    @sarahjamal86 3 years ago +1

    Well done 🥳

  • @736939
    @736939 3 years ago +4

    Are you sure that for each "h" we need to apply a weight matrix "W", and not a scalar "w"???

    • @AICoffeeBreak
      @AICoffeeBreak 3 years ago +5

      Each h is a vector, therefore W is a matrix. 😃 (See the short shape check after this thread.)

    • @736939
      @736939 3 years ago +2

      @@AICoffeeBreak Thank you.
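
A quick shape check of the reply above (dimensions are illustrative): because each h is a vector, the transformation needs a matrix W; a scalar w would only rescale the vector without mixing or resizing its features.

```python
# Illustrative shapes only: h is a vector, so W must be a matrix.
import torch

h = torch.randn(8)        # node representation, a vector in R^8
W = torch.randn(16, 8)    # weight matrix mapping R^8 -> R^16
print((W @ h).shape)      # torch.Size([16]) -- features are mixed and resized
print((0.5 * h).shape)    # torch.Size([8])  -- a scalar only rescales h
```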

  • @nicholasliu-sontag1585
    @nicholasliu-sontag1585 3 years ago +2

    Thanks for this video. I wanted to ask - does this video not also describe GNNs with RNNs, given that the nodes you describe have some short-term memory?

  • @rembautimes8808
    @rembautimes8808 4 years ago +1

    The term $\sum_{j \in \mathrm{Nbr}(i)} \frac{1}{c_{i,j}}\, h_j^{(t)} U$, does this follow from the Kolmogorov-Arnold representation theorem for continuous multivariate functions? Excellent video, I may add.

    • @AICoffeeBreak
      @AICoffeeBreak 4 years ago +4

      There is certainly a striking similarity to it. But I am not really sure, I can only cite a paper I found about this and point you to it: "The Kolmogorov-Arnold representation decomposes a multivariate function into an interior and an outer function and therefore has indeed a similar structure as a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can be wildly varying even if the represented function is smooth."
      export.arxiv.org/pdf/2007.15884.pdf

  • @safaelaat1868
    @safaelaat1868 2 years ago +1

    Thank you very much for all your videos. We always hear that computer vision performance has exceeded human performance. But where does this information come from, and how about other domains like NLP, speech recognition, financial fraud detection, autonomous driving, and malware detection? Thanks again.

  • @goldfishjy95
    @goldfishjy95 3 years ago +2

    Thank you so much! #prayinghandsemoji

  • @marcusbluestone2822
    @marcusbluestone2822 1 year ago +1

    Well explained. Thanks!

  • @IndoPakComparison
    @IndoPakComparison 3 years ago +1

    Very basic info, but you still used terms and notation that are hard for a new learner, haha. But it is a good one. 加油 (keep it up)!

  • @christianmoreno7390
    @christianmoreno7390 2 years ago +1

    You have a beautiful accent girl

  • @tallwaters9708
    @tallwaters9708 1 year ago +2

    GCNNs, what a horrible name!...

    • @AICoffeeBreak
      @AICoffeeBreak 1 year ago +1

      Gone are the times of AlexNet... 😅

  • @sachin63442
    @sachin63442 3 years ago

    Why can't you use XGBoost or decision trees for node-level classification instead of a GCN?

  • @baqir5652
    @baqir5652 2 years ago

    I love you ma'am

  • @pranavb9768
    @pranavb9768 23 days ago

    you’re cute

  • @kvnptl4400
    @kvnptl4400 6 months ago

    Honestly, I couldn't understand it on the first attempt :(

  • @amanoswal7391
    @amanoswal7391 4 years ago

    The coffee bean acts as a distraction. Good explanation otherwise.