Graph Convolutional Networks (GCN) | GNN Paper Explained

  • Published on Dec 10, 2024

Comments • 78

  • @TheAIEpiphany
    @TheAIEpiphany  4 years ago +6

    Have a nice New Year everybody!
    What GNN paper would you like to see explained next?
    Is the font size good enough? Any feedback is welcome!

    • @shantanuchandra5374
      @shantanuchandra5374 3 years ago +1

      Hi Aleksa.
      Great stuff !
      I wanted to put in a request for a recent paper by Emanuel Rossi and Michael Bronstein on dynamic graphs called Temporal-GNN (arxiv.org/abs/2006.10637). I think moving to dynamic graphs from these classical static ones will be a good way forward for your series as well. What do you think?

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +1

      @@shantanuchandra5374 Thanks Shantanu! Definitely, that's an exciting area of GNN research; thanks for the suggestion!

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago

      @Linsu Han Awesome! I'll check that one out!

    • @shantanuchandra5374
      @shantanuchandra5374 3 years ago

      @Linsu Han You might want to check out more recent work on scalable GCNs, like Cluster-GCN by Google (arxiv.org/abs/1905.07953)

    • @krzysztofsadowski5620
      @krzysztofsadowski5620 3 years ago +1

      That is the best GCN explanation I've ever seen!
      It would be great if you covered "Dynamic Graph CNN for Learning on Point Clouds" (liuziwei7.github.io/projects/DGCNN) or other papers about GNNs related to computer vision tasks :)

  • @coslee
    @coslee 3 years ago +5

    Came here with a significant amount of desperation and you helped a lot. Great job!

  • @ramyadshetty9310
    @ramyadshetty9310 3 years ago +6

    Great video, sir. As a beginner in GCN, it's very hard to understand research papers related to GNNs, but you have explained it so clearly that one can understand easily. Thank you, sir.

  • @mdmamunurrashid6959
    @mdmamunurrashid6959 1 year ago +2

    An excellent initiative for explaining papers 📄

  • @MachineLearningwithPhil
    @MachineLearningwithPhil 4 years ago +12

    Once again, thanks for a thorough read-through of the paper. Keep up the great work!

    • @TheAIEpiphany
      @TheAIEpiphany  4 years ago +1

      Thanks a lot buddy! I appreciate your work as well.
      I really liked your video on setting up the deep learning rig. Those are gold.

  • @nivethanyogarajah1493
    @nivethanyogarajah1493 9 months ago

    The more details, the better! This was incredible!

  • @64113981
    @64113981 3 years ago +5

    Man you really earn my attention as well as respect. Awesome work!

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +1

      Thanks, super glad to hear that!

  • @pfcittolin123
    @pfcittolin123 3 years ago +4

    Thanks for the detailed and very intuitive and high level explanation! You made it much easier :)

  • @TehCourier
    @TehCourier 2 years ago

    My guy making an entire semester's worth of content into an hour, you are brilliant mate, keep making more videos!

  • @wazmo20
    @wazmo20 4 months ago +1

    Protector of the ocean, bestower of ML content

  • @iliassoto7032
    @iliassoto7032 2 years ago

    Incredible work. The explanation of why the authors use such complicated matrix representations (in order to scale the values according to the neighbours' degrees) was key and not intuitive at all. Thank you!

  • @sampritipatel8981
    @sampritipatel8981 3 years ago +2

    This was just perfect. Thank you so much!

  • @SaimonThapa
    @SaimonThapa 1 year ago

    Thank you very much for the explanation! I took your and ChatGPT's help to understand this paper.

  • @kerou4276
    @kerou4276 1 year ago

    This is so clear and very handy, thank you so much. If you can, please explain LightGCN.

  • @arvinflores5316
    @arvinflores5316 2 years ago

    Thanks for the explanation! Would love to see you explain the LightGCN paper!

  • @jackgala5670
    @jackgala5670 1 year ago

    I love your explanation. And thank you for taking the time to explain.

  • @jiahongsu3374
    @jiahongsu3374 3 years ago +2

    This is fantastic work, great job!

  • @Janamejaya.Channegowda
    @Janamejaya.Channegowda 3 years ago +2

    Spectacular, keep up the good work.

  • @vadymsun2439
    @vadymsun2439 1 year ago

    Mind-blowing explanation!

  • @dengdengkenya
    @dengdengkenya 2 years ago

    Thank you very much for this clear and accurate explanation video! This style of instruction is especially suitable for me!

  • @LOOP1
    @LOOP1 1 year ago

    In the beginning, the regularization term associated with the loss function concerns only the unlabeled nodes, not all the nodes.
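For reference, the loss being discussed here is Eq. (1) of the Kipf & Welling paper the video covers:

```latex
\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{\mathrm{reg}},
\qquad
\mathcal{L}_{\mathrm{reg}} = \sum_{i,j} A_{ij} \left\| f(X_i) - f(X_j) \right\|^2 = f(X)^\top \Delta f(X),
```

where $\mathcal{L}_0$ is the supervised loss over the labeled nodes and $\Delta = D - A$ is the unnormalized graph Laplacian; note the regularization sum runs over all connected node pairs.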

  • @giannismanousaridis4010
    @giannismanousaridis4010 3 years ago

    Awesome work. Thank you for sharing! I'd love also to see the spectral methods that you mentioned in more detail.

  • @wilsonlwtan3975
    @wilsonlwtan3975 9 months ago

    Pure respect for you!

  • @apocalypt0723
    @apocalypt0723 4 years ago +3

    Awesome Series

  • @DED_Search
    @DED_Search 3 years ago +1

    At 21:45, may I ask if there is a separate video on equation (4), the Chebyshev polynomial? Thanks.
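For context on equation (4): Chebyshev polynomials are defined by the recurrence T_0(x) = 1, T_1(x) = x, T_k(x) = 2x·T_{k-1}(x) − T_{k-2}(x). A minimal sketch in plain Python (not code from the video):

```python
def chebyshev(k, x):
    """Evaluate the Chebyshev polynomial T_k at x via the recurrence
    T_0(x) = 1, T_1(x) = x, T_k(x) = 2*x*T_{k-1}(x) - T_{k-2}(x)."""
    t_prev, t_curr = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr
```

For example, T_2(x) = 2x² − 1, so `chebyshev(2, 0.5)` gives −0.5. In the paper the same recurrence is applied with the rescaled Laplacian in place of x.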

  • @ibrahimsalim6365
    @ibrahimsalim6365 3 years ago +1

    It's an amazing work. Thank you so much.

  • @НиколайНовичков-е1э
    @НиколайНовичков-е1э 2 years ago +1

    Thank you!

  • @dacatman2518
    @dacatman2518 3 years ago

    Would be great to get a video on where eqn 3 comes from. I know it didn't turn out to be crucial to GCN, but it seems to be a motivating concept to other methods in the field. Great video on GCN though!!

  • @TechTalk-6G
    @TechTalk-6G 3 years ago +1

    Great video! Very informative.
    ~~ Not all heroes wear capes ~~

  • @AbhishekSingh-lw3yf
    @AbhishekSingh-lw3yf 2 years ago +1

    Great explanation; the only suggestion is to use a pencil instead of a mouse to annotate. It's easier to read that way.

    • @TheAIEpiphany
      @TheAIEpiphany  2 years ago

      It's not even a mouse, it's a touchpad! 😅 I've upgraded over the last couple of videos, took me a long time. Thanks for the feedback!!

  • @jacklu3402
    @jacklu3402 2 years ago

    Very clear, thanks man

  • @zhangpico7th816
    @zhangpico7th816 2 years ago

    Thanks for the excellent work.
    I have one question though: at 04:13, in the first row of the adjacency matrix you drew, why is A connected to all other nodes?
    A_ij equals 1 only if node i is directly connected to node j, doesn't it? If it instead means a reachability relationship, where A_ij equals 1 whenever there is a path between node i and node j, then this matrix should not be called an "adjacency matrix".
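The commenter's reading is the standard one: an adjacency matrix records direct edges only, not paths. A toy sketch on a hypothetical 4-node path graph (not the graph from the video):

```python
# Hypothetical path graph A - B - C - D: only direct edges set entries.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D")]

idx = {n: i for i, n in enumerate(nodes)}
A = [[0] * len(nodes) for _ in nodes]
for u, v in edges:                 # undirected: set both directions
    A[idx[u]][idx[v]] = 1
    A[idx[v]][idx[u]] = 1

# Node A can reach C via B, yet A[idx["A"]][idx["C"]] stays 0:
# the adjacency matrix encodes direct edges, not reachability.
```

Reachability by paths would instead be captured by powers of A (an entry of A^k counts walks of length k), which is a different object.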

  • @LiYenWee
    @LiYenWee 3 years ago

    Can you explain how you get the expansion I, L̃, 2L̃² − I, ... depending on how big K is, at 26:11?
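For context, those terms are the first few Chebyshev polynomials evaluated at the rescaled Laplacian L̃; e.g. truncating at K = 2:

```latex
g_{\theta'}(\tilde{L})\,x \;\approx\; \theta'_0\, T_0(\tilde{L})\,x + \theta'_1\, T_1(\tilde{L})\,x + \theta'_2\, T_2(\tilde{L})\,x
\;=\; \theta'_0\, x + \theta'_1\, \tilde{L}\,x + \theta'_2\,\bigl(2\tilde{L}^2 - I\bigr)\,x,
```

since T_0(x) = 1, T_1(x) = x, and T_2(x) = 2x² − 1; larger K simply appends higher-order terms from the same recurrence.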

  • @sudiptapaul2825
    @sudiptapaul2825 2 years ago

    Hi Aleksa, can you please explain the paper "FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling"?

  • @thevikinglord9209
    @thevikinglord9209 months ago

    In equation 4 we pass in λ, but how come in equation 5 we pass in a rescaled Laplacian?
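One way to see the link: a polynomial filter applied to the eigenvalues λ is equivalent to the same polynomial applied directly to the (rescaled) Laplacian, since L = U Λ Uᵀ implies p(L) = U p(Λ) Uᵀ. A small numpy check on a hypothetical 3-node graph (not from the paper):

```python
import numpy as np

# Symmetric normalized Laplacian of a hypothetical triangle graph.
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt

lam, U = np.linalg.eigh(L)          # L = U diag(lam) U^T

def p(x):                           # an arbitrary polynomial filter
    return 0.5 + 2.0 * x - 1.5 * x**2

# Filtering in the spectral domain (polynomial of the eigenvalues)...
spectral = U @ np.diag(p(lam)) @ U.T
# ...equals the same polynomial applied to the matrix itself.
direct = 0.5 * np.eye(3) + 2.0 * L - 1.5 * (L @ L)
```

So equation (5) is just equation (4) with the eigendecomposition folded back in, which is what lets the method avoid computing eigenvectors at all.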

  • @siyaowu7443
    @siyaowu7443 2 years ago

    nice! I like this video!

  • @prajwol_poudel
    @prajwol_poudel 1 year ago

    In PyTorch Geometric we have GCNConv and DenseGCNConv; what is the difference between these? I am trying to figure out whether I can use DenseGCNConv for a sparse adjacency matrix or not.
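Without speaking to the library internals, the functional difference is the adjacency format: an edge-list (sparse-style) aggregation and a dense adjacency-matrix multiply compute the same neighborhood sum. A rough numpy illustration on a hypothetical graph (this is not PyTorch Geometric code):

```python
import numpy as np

# Hypothetical graph: 3 nodes, undirected edges 0-1 and 1-2
# stored as a COO-style edge list.
edge_index = np.array([[0, 1, 1, 2],    # source nodes
                       [1, 0, 2, 1]])   # target nodes
X = np.array([[1., 0.], [0., 1.], [1., 1.]])  # node features

# Sparse-style aggregation: scatter-add source features into targets.
out_sparse = np.zeros_like(X)
for src, dst in zip(*edge_index):
    out_sparse[dst] += X[src]

# Dense-style aggregation: materialize the adjacency matrix, then multiply.
A = np.zeros((3, 3))
A[edge_index[1], edge_index[0]] = 1.0
out_dense = A @ X
```

The dense route costs O(N²) memory regardless of how few edges exist, which is why edge-list layers are preferred for large sparse graphs.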

  • @dengdengkenya
    @dengdengkenya 2 years ago

    To answer your question about detailing or not, I personally prefer detailed explanation by example and calculation processes.

  • @ahmedfenti9462
    @ahmedfenti9462 7 months ago

    Thank you so much

  • @bfc7649
    @bfc7649 4 months ago

    nice videos!

  • @alinamanmedia
    @alinamanmedia 3 years ago

    Thanks for this great video.
    Can you guide me on how we can modify GCNs to not have the restriction in Eq.1? Are there any other approaches when edges do not encode node similarity?

  • @essy2382
    @essy2382 3 years ago +1

    Hi Aleksa,
    Could you please explain what would change if the input graph to GCN were directed instead of undirected as they consider in the paper?

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago

      Similarly to GAT and other models - you'd just consider the neighboring nodes that point to the current node (i.e. those with incoming edges) and ignore the nodes that the current node points to (nodes connected by outgoing edges).
      There may be other ways to do it as well.
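A sketch of the idea in the reply above: for a directed graph, each node aggregates only over its incoming edges, i.e. node i sums the features of nodes j with an edge j → i. A hypothetical example (not from the video):

```python
import numpy as np

# Hypothetical directed graph with A_in[i, j] = 1 for an edge j -> i,
# so row i lists the in-neighbors of node i.
A_in = np.array([[0., 1., 0.],   # node 0 receives from node 1
                 [0., 0., 1.],   # node 1 receives from node 2
                 [0., 0., 0.]])  # node 2 receives from no one
X = np.array([[1., 2.], [3., 4.], [5., 6.]])  # node features

# Aggregating only over incoming edges:
agg = A_in @ X
# Node 2 has no in-neighbors, so its aggregate is all zeros.
```

In practice one would also add self-loops and normalize, as in the undirected case; the point here is only the in-edge masking.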

  • @krishnachauhan2822
    @krishnachauhan2822 3 years ago

    Can anyone resolve my confusion, please? In equation (2) of the paper he is explaining, after the MLP operation, i.e. from a 1433-dimensional vector to 64 dimensions, are you normalizing all of these 64 node features by 1/sqrt(d_j d_i)? I am not getting the significance of the Laplacian here, please help.
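Regarding the 1/sqrt(d_j d_i) factor: it is exactly the entry pattern of the renormalized adjacency D̂^{-1/2} Â D̂^{-1/2} with Â = A + I from the paper, applied the same way to every feature dimension. A small numpy sketch on a toy 3-node path graph (hypothetical numbers):

```python
import numpy as np

# Toy path graph 0 - 1 - 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_hat = A + np.eye(3)              # add self-loops
deg = A_hat.sum(axis=1)            # degrees including self-loops
D_inv_sqrt = np.diag(deg ** -0.5)

# Renormalized adjacency from the paper: D^{-1/2} (A + I) D^{-1/2}.
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
# Entry (i, j) equals 1 / sqrt(d_i * d_j) wherever an edge (or
# self-loop) exists, which is the scaling asked about above.
```

So the normalization acts on the graph side, per node pair, not on the 64 feature channels individually; each channel is mixed with the same 1/sqrt(d_i d_j) weights.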

  • @YarkoFFXI
    @YarkoFFXI 1 year ago

    How are you so good at math? Where did you learn all the concepts needed to fully understand this paper? Are you completely self taught? I'm mindblown!

  • @zahrapoorsoltani6310
    @zahrapoorsoltani6310 3 years ago +1

    What is that IDE you used? How can I access it?

  • @arda8206
    @arda8206 2 years ago

    Please even go into more detail. That is all we need.

  • @DEEPAKYADAV-vb2ee
    @DEEPAKYADAV-vb2ee 3 years ago

    Can you tell me the difference between GNN and GCN?
    I am a bit confused. Are they the same or different? And is GCNN different from both of these?
    In short: GNN vs GCN vs GCNN.

  • @bogdan3209
    @bogdan3209 3 years ago +1

    Nice video. But I am a bit confused why you would use a GCN instead of an MLP. I would expect that a basic neural network could learn this aggregation that GCNs have. Why do GCNs generalize better than a basic MLP?

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago

      Relational structural biases would be a TL;DR.
      The same reason why CNNs are better than MLPs for computer vision apps, even though theoretically MLPs should be able to learn those same biases as CNNs (weight sharing, locality, and hierarchy in the case of CNNs).

  • @markoshivapavlovic4976
    @markoshivapavlovic4976 3 years ago +1

    Aleksa can you maybe share somewhere your one note notes? I think it would be very nice to have summary accessible. :) @TheAIEpiphany

    • @TheAIEpiphany
      @TheAIEpiphany  3 years ago +2

      Will do - in one of the next videos I'll cover my journey through GNNs and I'll have ON ready!

    • @markoshivapavlovic4976
      @markoshivapavlovic4976 3 years ago +1

      @@TheAIEpiphany thanks :)

  • @peggyolson8416
    @peggyolson8416 3 years ago

    You said 20 labels per class. I am not seeing this part actually being implemented in the code. Or did I miss something?

  • @sanketjoshi8387
    @sanketjoshi8387 3 years ago

    Can GCN predict the unseen nodes?

  • @_jen_z_
    @_jen_z_ 3 years ago

    It was just right )

  • @ophello
    @ophello 3 years ago

    What it looks like. Not “how it looks like.” That’s redundant.

  • @npr1m991
    @npr1m991 3 years ago

    Awesome! keep going !😄