Stanford CS224W: Machine Learning with Graphs | 2021 | Lecture 17.2 - GraphSAGE Neighbor Sampling

  • Published Jun 6, 2021
  • For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: stanford.io/3Brn5kW
    Lecture 17.2 - GraphSAGE Neighbor Sampling: Scaling up GNNs
    Jure Leskovec
    Computer Science, PhD
    Neighbor Sampling is a representative method for scaling GNNs to large graphs. The key insight is that a K-layer GNN generates a node's embedding using only the nodes in the K-hop neighborhood around that node. Therefore, to generate embeddings for the nodes in a mini-batch, only the K-hop neighborhood nodes and their features need to be loaded onto the GPU, which remains tractable even when the original graph is large. To further reduce the computational cost, only a sampled subset of each node's neighbors is aggregated by the GNN.
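    The idea in the summary above can be sketched as follows. This is a minimal illustration, not the course's reference implementation: the graph, the `sample_neighborhood` helper, and the per-layer fan-outs are all hypothetical names chosen for this example.

    ```python
    import random

    # Hypothetical toy graph as an adjacency list (node -> list of neighbors).
    graph = {
        0: [1, 2, 3],
        1: [0, 2],
        2: [0, 1, 3],
        3: [0, 2, 4],
        4: [3],
    }

    def sample_neighborhood(graph, batch, fanouts, seed=0):
        """Return the set of nodes a K-layer GNN needs in order to embed
        `batch`, sampling at most `fanouts[k]` neighbors per node at hop k."""
        rng = random.Random(seed)
        needed = set(batch)
        frontier = set(batch)
        for fanout in fanouts:          # one entry per GNN layer (K entries)
            next_frontier = set()
            for node in frontier:
                neighbors = graph[node]
                # Sample a subset instead of aggregating all neighbors.
                sampled = rng.sample(neighbors, min(fanout, len(neighbors)))
                next_frontier.update(sampled)
            needed |= next_frontier     # these nodes' features go to the GPU
            frontier = next_frontier
        return needed

    # A 2-layer GNN with fan-outs (2, 2): at most 1 + 2 + 4 nodes are needed,
    # regardless of how large the full graph is.
    nodes = sample_neighborhood(graph, batch=[0], fanouts=[2, 2])
    print(sorted(nodes))
    ```

    The key property is that the size of `nodes` is bounded by the batch size and the product of the fan-outs, not by the size of the full graph.
    
    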
    To follow along with the course schedule and syllabus, visit:
    web.stanford.edu/class/cs224w/
    #machinelearning #machinelearningcourse

Comments • 3

  • @heyna88 • 7 months ago

    Wow! Finally a good explanation about this! I have been wondering for the entire course how we were going to fit a large graph (10M+ nodes) on a GPU, given that even the most recent architectures such as H100 only reach 80GB of memory!

  • @laugh_n_share_life • 9 months ago • +4

    I am not going to pretend I understood this

  • @abhinav9058 • 3 months ago

    What about heterogeneous networks?