Intro to Relational - Graph Convolutional Networks

  • Published Oct 2, 2024

Comments • 67

  • @VladIlie 3 years ago +11

    Love the pace and the follow-up explanations. Usually I'm left with many "unaddressed frowns" after a video on DL, but I find your content delivers a more complete message. Thanks for the quality content!
    I especially appreciate the equation explanations; they make me fear the Greek less :)

  • @raunakdoesdev 4 years ago +10

    Thanks for producing this amazing content. It's surprising there are so few views.

  • @nirikshatk3150 3 years ago +1

    What a great explanation, thanks a lot for this video

  • @KisanThapaYT 5 months ago +2

    Bro gave me a great intuitive and clear explanation without using fancy pictures. Greatest RGCN explanation ever.

  • @hansalas 4 years ago +5

    Excellent follow-up content, as always.
    It would be even more awesome if a Jupyter notebook were added as well.

    • @welcomeaioverlords 4 years ago +10

      Good idea! I'm thinking about doing a blog series on GNNs that brings the code. Stay tuned.

    • @firstnamelastname4685 3 years ago

      @@welcomeaioverlords any updates on this? Is there any notebook now? Thanks for the awesome content!

    • @welcomeaioverlords 3 years ago

      @@firstnamelastname4685 @hansalas A relevant blog can be found here: blog.zakjost.com/post/gcn_citeseer/
      Also, sign up for the mailing list and you'll get notified when new stuff comes out. Releasing a new video soon about implementing from scratch using only NumPy.

  • @farnooshjavadi3756 3 years ago +4

    The most intuitive and clear explanation that I have found on the topic. Thanks for the effort.

  • @jtetrfs5367 2 years ago +4

    I would like to commend the sincerity with which you teach and illustrate. You are *not* interested in showing off how much you know. You appear to be interested only in how much of your knowledge you can transmit clearly to your viewers. Once again, well done.

  • @lingkeyu780 3 years ago +5

    That's amazing! I have read many papers and tutorials, but this is the only one that made me understand.

  • @nicholasliu-sontag1585 3 years ago +2

    Truly excellent videos. Thank you.

  • @samuelemazzanti1225 4 years ago

    Your videos are amazing!

  • @mhadnanali 3 years ago

    You did not discuss the different node types. Will the equations be the same, or what?

  • @swarnavasamanta2628 1 year ago +1

    Wow, I can't believe this content is out on the internet for free. Thank you so much.

  • @anything_jara_hatke5237 3 years ago +2

    Best explanation I have found on YouTube so far. Thanks for giving us knowledge of this under-explored field.

  • @eddie3781 2 years ago

    How do you convert this kind of graph to a homogeneous graph? Thanks.

  • @ShivangiTomar-p7j 7 months ago +1

    THE BEST VIDEOS LITERALLY! Awesome!!!!!!

  • @Murray2000 3 years ago +1

    It's a little bit mind-twisting to follow you without any pictures. But on the other hand, it helps a lot because of the examples you bring in.

  • @PD-vt9fe 4 years ago +1

    Thank you for your awesome video. Just subscribed! Keep up the great work!

  • @VishalSharma-gp6dm 8 months ago

    Hey, thanks for the info.
    Just a question: in the case where node n1 is an item (not a person)
    with feature vector v1, connected to multiple neighbours Ni with feature vectors Vi,
    can R-GCN be prone to target leakage, since it's aggregating the node embeddings?

  • @saqibcs 2 years ago

    Love from India

  • @dustinvansteeus4904 3 years ago +1

    really well done. Thank you for this explanation!

  • @saran0uri 4 years ago +1

    Please share some of the best existing papers on these concepts too. Thank you!!

  • @tomw4688 3 years ago +1

    Great explanations! Thank you.

  • @johnnyBrwn 3 months ago

    Thanks man this makes so much sense

  • @SemirElezovikj 3 years ago +1

    Excellent work. Keep it up!

  • @mathom21 7 months ago

    brilliant explanations!

  • @mikejason3822 3 years ago +1

    Amazing. Your explanations give a complete clear picture in just a few minutes!

  • @gabrielcbenedito 4 years ago +1

    That's amazing, and this explanation was clean and just awesome! Thank you

  • @ptanisaro 3 years ago

    Thank You Very Much! Very helpful indeed.

  • @andrewminhnguyen9446 4 years ago +1

    Thanks for this.

  • @dhruvgupta5041 2 years ago

    This is my actual intro to graph neural networks. Great explanation. Also, 'Crack the Skye' and 'Hunter' are my favourite Mastodon albums. Cheers!

  • @oleksandrasaskia 9 months ago

    Love it!

  • @aravindcheruvu2671 2 years ago

    Great Video! Awesome Explanation!!

  • @rafael_l0321 1 year ago

    I just finished watching the third video for today. What a lesson! Thank you very much for breaking down the equations. And the justifications for the regularizations make a lot of sense!

  • @6lack5ushi 4 years ago +1

    Hidden gem of a channel! Godspeed, sir. What is your Patreon?!

    • @welcomeaioverlords 4 years ago

      Thanks! No Patreon, but maybe some merch available some day.

    • @hornedbuddha 4 years ago

      @@welcomeaioverlords I'd love to support a Patreon as well!

  • @shukrayaniredkar5606 3 years ago

    What is the node embedding part in GCN?

  • @jonathanballoch 4 years ago

    Good video. One thing you allude to but don't explain, however, is the variation in "node type". All the explanation covers variation in edge type (aka relation, aka predicate). How do you accommodate multiple node types?

    • @welcomeaioverlords 4 years ago +1

      Relations (an overloaded term) can be described by "(source_type, edge_type, target_type)" triples, so this accounts for node type. The Twitter graph example has multiple node types (tweets and users). I hope this helps.
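A minimal sketch of that idea in plain NumPy (the relation triples, dimensions, and variable names below are made up for illustration, not taken from the video): one weight matrix is kept per (source_type, edge_type, target_type) triple, so a user-to-tweet message uses different weights than a user-to-user message, and node type is handled by the relation itself.

```python
import numpy as np

# Hypothetical Twitter-style heterogeneous graph: each relation is a
# (source_type, edge_type, target_type) triple, so node type is
# captured by the relation definition.
relations = [
    ("user", "follows", "user"),
    ("user", "wrote", "tweet"),
    ("user", "liked", "tweet"),
]

feat_dim, out_dim = 8, 4
rng = np.random.default_rng(0)

# One weight matrix per relation triple (the core R-GCN idea);
# in a real model these would be learned, here they are random.
W = {rel: rng.standard_normal((feat_dim, out_dim)) for rel in relations}

# A message along ("user", "wrote", "tweet") uses that relation's weights.
user_feat = rng.standard_normal(feat_dim)
msg = user_feat @ W[("user", "wrote", "tweet")]
print(msg.shape)  # (4,)
```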

  • @fazlfazl2346 3 years ago

    Great lecture. Would be grateful if you provided some basic Python code for R-GCNs.

    • @welcomeaioverlords 3 years ago

      You might find this DGL example useful: github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn-hetero

  • @BossManTee 4 years ago

    Great video!
    Please do some videos on graph classification methods, for example using GCN or other methods.

  • @TheAnna1101 3 years ago

    Awesome video! May I ask a question: in the block-diagonal decomposition, I don't quite understand how we generate the Q_br blocks, or where they come from? Thank you!

    • @welcomeaioverlords 3 years ago +1

      Glad you enjoyed the video. All of the W_r parameters are learnable model parameters. Enforcing that W_r is in a block-diagonal structure of sub-matrices (Q_br) is just a constraint that forces many of the W_r elements to be zero. This is just one strategy to reduce the number of learnable parameters.
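A toy NumPy sketch of that constraint (the dimensions and variable names are arbitrary illustrations): the Q_br blocks are the only parameters, and W_r is just their block-diagonal arrangement, which forces every off-block entry to zero and so shrinks the parameter count.

```python
import numpy as np

d, B = 8, 4              # feature dim and number of blocks (toy sizes)
sub = d // B             # each Q_br block is (d/B) x (d/B)
rng = np.random.default_rng(0)

# The Q_br blocks are the learnable parameters (random stand-ins here).
Q = [rng.standard_normal((sub, sub)) for _ in range(B)]

# W_r is their block-diagonal arrangement: all other entries stay zero.
W_r = np.zeros((d, d))
for b, Q_b in enumerate(Q):
    W_r[b * sub:(b + 1) * sub, b * sub:(b + 1) * sub] = Q_b

print(W_r.shape)               # (8, 8)
print(np.count_nonzero(W_r))   # B * sub**2 = 16 free entries instead of 64
```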

  • @franchinyama241 4 years ago

    Dude you didn’t get any sleep..

  • @Simply-Charm 4 years ago

    Thank you so much for the great explanation.

  • @devindearaujo1768 7 months ago

    Zak, come back and do more videos on YouTube

    • @welcomeaioverlords 7 months ago

      I've been thinking about it. Any ideas for videos?

    • @devindearaujo1768 7 months ago

      Either another "from scratch" video, like "relational GCNs from scratch using only Numpy" for example or something on the training side, like how negative sampling can be used effectively in GNN training (I'm thinking about the ItemSage paper specifically as an example)

  • @Visium1 3 years ago

    Fantastic content. So clear.

  • @machinelearningdojowithtim2898 4 years ago

    Awesome video Zak!

  • @mrknarf4438 3 years ago

    Great content, thanks! Is that Crack the Skye on the wall? Great music taste too :D

  • @MAAditya 2 years ago

    Simply amazing!

  • @aparnaambarapu8806 4 years ago

    Excellent content, and waiting for more in this series!!!
    Question: about the second type of regularisation, where most of the entries in the weight matrix are zeros. What do you mean by strongly interconnected within the group and not so outside? My analogy here is that the weight matrix seems like some kind of covariance matrix, having non-zero values when elements are related and zeros elsewhere. Want to hear more on this! Thanks again.

    • @welcomeaioverlords 4 years ago +1

      If you think of a vector of features being multiplied by a matrix: if the matrix only had diagonal elements that were non-zero, then the output vector would essentially be multiplying each of the input components by scaling coefficients. That is to say, the first element of the output vector would only depend on the first element of the input vector.
      But if you also make the second element of the first row of the matrix non-zero, then the first element of the output would also depend on the second element of the input. In other words, the first output element would be a linear combination of the first two input elements. And so it goes: the non-zero elements of the matrix determine how the inputs can be combined to form the outputs. The best way to see this is to do this operation by hand.
      So a block-diagonal matrix would mean that the output elements would only use a small number of nearby data elements rather than everything. This is a bit hard to explain with only text, but I think if you were to write out an example, you would see how it works. I hope this helps.
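Writing the example out, as the reply suggests, makes this concrete. A small NumPy sketch (the matrices are arbitrary illustrations): with a purely diagonal matrix each output depends on one input only, while with 2x2 blocks each output mixes only the two inputs in its block.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Purely diagonal: each output element just scales the matching input.
D = np.diag([10.0, 10.0, 10.0, 10.0])

# Block-diagonal with 2x2 blocks: output 0 mixes inputs 0 and 1 only,
# output 2 mixes inputs 2 and 3 only.
Bd = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
])

print(x @ D)   # [10. 20. 30. 40.]  -> element-wise scaling
print(x @ Bd)  # [3. 3. 7. 7.]      -> outputs combine only within a block
```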

    • @aparnaambarapu8806 4 years ago +1

      @@welcomeaioverlords Now I clearly understand what the weight matrix is and the way the model is regularised because of it. Thanks for the patience to explain it in simpler words.

  • @البداية-ذ1ذ 3 years ago

    Thanks for sharing this. I have a question regarding two graphs as input, to figure out their similarity. What could the process be?

    • @welcomeaioverlords 3 years ago +1

      You could pool the node embeddings to form a graph embedding. And then you could get a similarity score between graphs by taking a dot product. Not sure what your training signal would be though.
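A rough sketch of that suggestion (random stand-in embeddings, since no trained model is given here): mean-pool each graph's node embeddings into one vector, then compare the two vectors with a dot product or cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical node embeddings for two graphs (e.g., R-GCN outputs).
nodes_a = rng.standard_normal((5, 16))   # graph A: 5 nodes, 16-dim
nodes_b = rng.standard_normal((7, 16))   # graph B: 7 nodes, 16-dim

# Mean-pool each graph's nodes into a single graph embedding.
g_a = nodes_a.mean(axis=0)
g_b = nodes_b.mean(axis=0)

# Compare graphs: raw dot product, or normalize for cosine similarity.
similarity = g_a @ g_b
cosine = similarity / (np.linalg.norm(g_a) * np.linalg.norm(g_b))
print(cosine)
```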

    • @البداية-ذ1ذ 3 years ago

      @@welcomeaioverlords Will embedding 2 graphs be the same as embedding one?

    • @البداية-ذ1ذ 3 years ago

      I can send you the flowchart of my project, so you can recommend which way I should take.

    • @welcomeaioverlords 3 years ago

      You can join our Discord channel and post it there.