GraphSAGE: Inductive Representation Learning on Large Graphs (Graph ML Research Paper Walkthrough)

  • Published Dec 1, 2024

Comments • 18

  • @TechVizTheDataScienceGuy
    @TechVizTheDataScienceGuy  3 years ago +3

    Hey there,
    You can watch more Research Paper walkthroughs at - th-cam.com/video/ykClwtoLER8/w-d-xo.html
    Cheers!

  • @ratikagarg1494
    @ratikagarg1494 3 years ago +3

    Great explanation!! Very helpful 🔥

  • @VALedu11
    @VALedu11 3 months ago

    Thank you for such a lucid explanation.

  • @Alkis05
    @Alkis05 2 years ago

    Jure Leskovec is such a giant in this field. Such a young guy, and doing such seminal work.

  • @arnavdman
    @arnavdman 3 years ago +1

    Great stuff, helps a lot!

  • @RollingcoleW
    @RollingcoleW 1 year ago

    11:46 I am confused — you made a note about the difference between the pseudocode and the first aggregator function's result, and again around 12:50.

  • @utkarshkathuria2931
    @utkarshkathuria2931 1 year ago

    Please also explain the mini-batch sampling method from the appendix of this paper.

  • @kamalakannank2741
    @kamalakannank2741 1 year ago

    Can you please explain the research paper "Deep learning for technical document classification"?

  • @vinaychetnani
    @vinaychetnani 3 years ago +2

    I have a doubt.
    Earlier you talked about the number of hops, which we can vary and is usually 2-3. In the pseudocode, is the variable K, referred to as the depth, equivalent to the number of hops?

    • @TechVizTheDataScienceGuy
      @TechVizTheDataScienceGuy  3 years ago +1

      [Edited] - sorry for the previous one; I was traveling, so I couldn't understand the question fully. Essentially, hops and depth are different things. Hops refer to how far away we want to take in messages from when updating the current node, whereas depth refers to the number of layers in the GNN. So at every depth you have a snapshot of the entire graph, with a representation for each node aggregated from its k-hop neighbors. I hope I was able to explain. Thanks!
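      The depth-vs-hops relationship described above can be sketched in code. Below is a minimal NumPy sketch (not the video's or the paper's code) of GraphSAGE's Algorithm 1 with a mean aggregator; the function name `sage_forward` and the weight layout are hypothetical illustration choices. After K layers (depth K), each node's embedding has aggregated information from its K-hop neighborhood.

      ```python
      import numpy as np

      def sage_forward(features, neighbors, weights, K=2):
          """Sketch of GraphSAGE (Algorithm 1) with a mean aggregator.

          features:  (N, d) array of initial node features h^0
          neighbors: dict mapping node id -> list of neighbor node ids
          weights:   list of K matrices; weights[k] maps the concatenated
                     [self, neighborhood] vector to the next layer's width
          K:         depth = number of GNN layers; after K layers each node
                     has mixed in information from K-hop neighbors
          """
          h = features
          for k in range(K):  # one full snapshot of the graph per depth
              h_next = []
              for v in range(h.shape[0]):
                  nbrs = neighbors.get(v, [])
                  # mean-aggregate the neighbors' current representations
                  h_nbr = h[nbrs].mean(axis=0) if nbrs else np.zeros_like(h[v])
                  # concatenate self and neighborhood vectors, then transform
                  z = np.concatenate([h[v], h_nbr]) @ weights[k]
                  h_next.append(np.maximum(z, 0.0))  # ReLU nonlinearity
              h = np.stack(h_next)
              # l2-normalize each node embedding, as in the paper
              h = h / np.maximum(np.linalg.norm(h, axis=1, keepdims=True), 1e-12)
          return h
      ```

      The point of the sketch: the outer `for k in range(K)` loop is the depth (layers), while the reach of information — how many hops away a node can "see" — grows by one with each layer, which is why depth and hops end up numerically linked even though they are different concepts.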

  • @nouraboub4805
    @nouraboub4805 2 years ago

    Very nice video 👍
    Can you send me the slides, please?

    • @TechVizTheDataScienceGuy
      @TechVizTheDataScienceGuy  2 years ago

      I guess you meant the annotated paper. Unfortunately I don't have it now — space constraints 😅

  • @ramaraopirati
    @ramaraopirati 4 months ago

    Your explanation of T2 becoming 0 with sigmoid is incorrect.

    • @TechVizTheDataScienceGuy
      @TechVizTheDataScienceGuy  4 months ago

      Oh okay 🤔 maybe I missed something — I honestly don't remember the content of this one now. It would be really helpful if you could point out the error with a timestamp and an alternate explanation; it will benefit anyone who watches from here on. I'll also pin the comment to prioritise its visibility. 🙏 Thank you!