Deep Learning 60: Architecture of Graph Neural Network

  • Published on 28 Sep 2024
  • This lecture discusses the architecture of Graph Neural Networks.
    You can support the channel by clicking the Join button.
    (1) Semi-Supervised Classification with Graph Convolutional Networks (arxiv.org/abs/...)
    (2) web.stanford.ed...
    #deeplearning #graphneuralnetwork

Comments • 46

  • @kaoutermar1542 3 years ago +5

    This must be the best explanation I've found so far; you really covered every aspect of GNNs, answering many questions I had in mind. Thanks a lot!

  • @Maximos80 4 years ago +5

    Ahlad, you are one of the best teachers!

  • @moji9989 3 years ago +2

    Really well explained, with nice examples! Other profs need to learn this from this guy.

  • @khauyinglim986 3 years ago

    The best and easiest-to-understand GCN lecture! Brilliant, thank you!

  • @omarmahtab6851 1 year ago

    One of the best videos I've seen on this topic. Thanks!

  • @sarikasaxena5567 6 months ago

    Best graph neural network video

  • @mohamedabbashedjazi493 3 years ago +3

    Thank you for this great series on GNNs. What paper do you recommend for learning more GNN variants? I found dozens in the literature and am confused about which one to read next. Greetings from Algeria.

  • @muhammadiqbalbazmi9275 1 year ago

    For understanding the smoothness of an image, do we use the first-order derivative (FOD)? I don't think so, because it is basically used for detecting edges, isn't it? And the second-order derivative for texture, etc.
    Waiting for the reply. Thanks!
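
    A minimal numpy sketch of the distinction raised here (made-up 1-D signal, not from the lecture): first-order differences spike at edges, while second-order differences respond wherever the signal bends, i.e. wherever it is not smooth. The same intuition is why the graph Laplacian, a second-order operator, is used to measure smoothness of signals on a graph.

```python
import numpy as np

# Hypothetical 1-D "image" row: a smooth ramp followed by a sharp edge.
signal = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 10.0, 10.0, 10.0])

# First-order derivative (forward difference): large magnitude marks edges.
first_order = np.diff(signal, n=1)   # [1, 1, 1, 1, 6, 0, 0]

# Second-order derivative (1-D discrete Laplacian): nonzero wherever the
# signal bends, so it reflects smoothness/texture rather than edges alone.
second_order = np.diff(signal, n=2)  # [0, 0, 0, 5, -6, 0]

print(first_order)
print(second_order)
```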

  • @shwetaredkar734 4 years ago +1

    The best video. Thanks.

  • @giyaseddinbayrak 4 years ago +4

    First, thank you for the lecture. Second, the arrows this man is drawing double, like twice a second!

  • @البداية-ذ1ذ 3 years ago

    Thanks for your perfect presentation. I'm just confused about where you append the node features: why does node one take 1 0 and node five 0 0?

  • @aditikulkarni8918 3 years ago +2

    Your adjacency matrix is wrong at entry (6,5) while calculating the Laplacian. However, you have created the correct L matrix from it. (A small Laplacian sketch follows this thread.)

    • @nelapatilavaprasad 3 years ago

      Even in the 1st approach, at th-cam.com/video/FdZ-EQkcHBo/w-d-xo.html on how to feed graphs into a NN, the adjacency matrix is wrong.
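
    Since both comments flag a wrong adjacency entry, here is a small numpy sketch (using a hypothetical 6-node graph, not the lecture's exact one) of how L is derived from A, with a symmetry check that catches typos like a mismatched (6,5)/(5,6) pair:

```python
import numpy as np

# Hypothetical 6-node undirected graph; only the construction matters here.
A = np.array([
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
])

# For an undirected graph A must be symmetric; a typo in a single entry
# such as (6,5) breaks this and is caught immediately.
assert (A == A.T).all(), "adjacency matrix is not symmetric"

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # combinatorial graph Laplacian
print(L)
```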

  • @talhachafekar2074 3 years ago

    Thank you for this video! ^.^

  • @DungPham-ai 4 years ago +1

    Thanks so much

  • @adminai9450 4 years ago

    Thanks for sharing, Sir

  • @nelapatilavaprasad 3 years ago +2

    In the 1st approach, at th-cam.com/video/FdZ-EQkcHBo/w-d-xo.html on how to feed graphs into a NN, the adjacency matrix is wrong.
    But I guess everyone can write an adjacency matrix on their own thanks to your teaching.

  • @stsfaroz4299 3 years ago

    What if the feature vectors are different for each node?

    • @DeepFindr 3 years ago +1

      If you mean different shapes (lengths), you can maybe apply padding (filling up/imputing the missing values), or alternatively first pass the vectors through different dense layers (depending on their size) to obtain node features of the same size. But this only makes sense if you don't have a lot of different types of node features. (See the sketch at the end of this thread.)

    • @mohamadabdulkarem206 3 years ago

      I hope that you are safe and well. Could you please tell me: are embeddings like DeepWalk and node2vec considered feature extraction, or are they used directly for classification or link prediction?
      Also, is a graph neural network considered a feature extractor whose output is fed into a neural network for classification, or does the GNN itself do the classification? Is there backpropagation in GNNs? If yes, where does it start, i.e., from which output does the backpropagation begin?
      Could you please not ignore my question, Sir? It is very important for me; I would like to build a big picture of this topic.
      Best regards

    • @DeepFindr 3 years ago

      @mohamadabdulkarem206 Hi! Interesting question. I would say yes, they are feature extractors. The GNN itself is also an extractor and not a predictor, but any GNN can be extended to also do classification. The GNN layers themselves are feature extractors;
      if you add, for example, two more dense layers, they can serve as a classifier.

    • @DeepFindr 3 years ago

      @mohamadabdulkarem206 And yes, in GNNs there is also backpropagation; it all works with gradients.
      The propagation happens mainly in the aggregation and update of the node features/embeddings. A GNN is basically a collection of smaller neural networks in each area of the graph.
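
      A minimal plain-PyTorch sketch pulling this thread together (toy 4-node graph, made-up sizes, in the spirit of the Kipf & Welling propagation rule rather than the lecture's exact code): the GCN layers act as feature extractors, a dense head turns the extracted embeddings into class predictions, and a single loss.backward() call backpropagates through the head and the GNN layers alike.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        # Feature-extractor part: two GCN-style layers H' = relu(A_hat H W).
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Classifier part: a dense layer on top of the extracted embeddings.
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, a_hat, x):
        h = torch.relu(a_hat @ self.w1(x))  # aggregate + update, layer 1
        h = torch.relu(a_hat @ self.w2(h))  # aggregate + update, layer 2
        return self.head(h), h              # class logits, node embeddings

# Toy data: 4 nodes, 3 input features, 2 classes (all made up).
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
A_hat = A + torch.eye(4)    # add self-loops
d = A_hat.sum(1).sqrt()
A_hat = A_hat / d.outer(d)  # symmetric normalization D^-1/2 (A + I) D^-1/2

# For differently-sized raw node features (the question that opened this
# thread), one could first project each feature type to a common size,
# e.g. nn.Linear(2, 3) and nn.Linear(7, 3), then stack the results into X.
X = torch.randn(4, 3)
y = torch.tensor([0, 0, 1, 1])

model = TinyGCN(in_dim=3, hidden_dim=8, num_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    logits, embeddings = model(A_hat, X)
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()  # gradients flow through the head AND all GNN layers
    opt.step()
```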

  • @soumambanerjee1816 3 years ago

    U r awesome :)

  • @WahranRai 2 years ago +1

    10:34 What about ordering the nodes top-down, left-right (similar to the sweep-line technique)?

  • @paulojhonny4364 4 years ago +3

    Thank you. Your explanation was awesome!!!! I’m looking forward to seeing the next video. :)

  • @krishnachauhan2822 1 year ago

    I think you should end the lecture by summarizing how the node embeddings are used to evaluate the final node embedding. Great lectures though.

  • @MrDonald911 4 years ago +1

    Thanks for the great explanation! I was wondering how the logic of neighbor aggregation changes if we are dealing with complete graphs. Thanks :p
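
    As a rough numpy sketch of the complete-graph case (mean aggregation, random toy features, all made up): every node's neighbor set is "all other nodes", so the aggregation logic itself is unchanged, but each node receives nearly the same aggregate, differing only by the node it excludes.

```python
import numpy as np

n, f = 4, 2
H = np.random.randn(n, f)            # toy node features
A = np.ones((n, n)) - np.eye(n)      # complete graph: everyone is a neighbor
D_inv = np.diag(1.0 / A.sum(axis=1))
H_agg = D_inv @ A @ H                # row i = mean of all OTHER nodes' rows

# Each row equals (sum of all rows - own row) / (n - 1), so the aggregates
# are almost identical; stacking layers tends to make the embeddings very
# similar (oversmoothing), which makes distinctive node features important.
print(H_agg)
```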

  • @hsupadrasta 1 year ago

    14:56 Do all the nodes have the same features, or is it a feature embedding?

  • @НикитаБуров-ъ6р 6 months ago

    just perfect explanation

  • @aeigreen 1 year ago

    Great explanation, thanks!

  • @binduverma506 2 years ago

    Sir, which writer are you using?

  • @brendasolariberno5868 4 years ago +1

    Thank you for your explanation :D

  • @kunalverma6300 2 years ago

    @Ahlad - In your computational-graph example, your target output is the feature encoding for 'A'. That is, A's ground truth is known, and that is what we optimize the error on while learning the NNs in the computational graph. If that is the case, how does this generate an embedding in a lower-dimensional space?
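
    One hedged way to read this, as an autoencoder-style sketch (hypothetical sizes, not necessarily the lecture's exact setup): the lower-dimensional embedding appears because the hidden layers are narrower than the target encoding, so fitting A's known encoding forces the network to compress.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(16, 4)  # 16-dim input encoding -> 4-dim embedding
decoder = nn.Linear(4, 16)  # reconstruct A's 16-dim encoding

x_A = torch.randn(16)            # A's known feature encoding (toy values)
z_A = torch.relu(encoder(x_A))   # the lower-dimensional embedding of A
x_hat = decoder(z_A)             # prediction compared against the target
loss = nn.functional.mse_loss(x_hat, x_A)
loss.backward()                  # errors on A's encoding train the embedding
```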

  • @meowcrobe 2 years ago

    At 8:20 the "neighbors" encircled do not match the definition of a neighbor in graphs. For this node there are only two neighbors: the neighbors of node N are the nodes M for which an edge exists between N and M.

  • @wangkuanlee3548 2 years ago

    Superb teaching.
    Concise and very clear, covering every key concept from overview to detail. Thank you so much.

  • @luiseds4 3 years ago

    Amazing, thank you, professor!

  • @chidiedim3166 4 years ago

    Thank you so much. I sent you a connection request on LinkedIn.

  • @rakeshsinghrawat99 4 years ago

    Thanks

  • @tohchengchuan6840 4 years ago

    I think it should be 'CNN' instead of 'GNN' on the right-hand side.

  • @wahabfiles6260 4 years ago +1

    COME ON, 3 ADS! DO YOU WANNA TEACH OR DISTRACT?

    • @AhladKumar 4 years ago +7

      You can get YouTube Premium to avoid them.

    • @wahabfiles6260 4 years ago +1

      @AhladKumar OK, or better yet, you could monetize in a way that we see one ad at the beginning and don't get disturbed in the middle by an ad, in content that is supposed to be educational.

    • @777arunkumarl 4 years ago +3

      Dude, he is teaching us for free. Just be grateful.