Node2Vec: Scalable Feature Learning for Networks | ML with Graphs (Research Paper Walkthrough)

  • Published on Dec 1, 2024

Comments • 38

  • @TechVizTheDataScienceGuy · 4 years ago +2

    Watch more Research Paper Walkthroughs - th-cam.com/video/ykClwtoLER8/w-d-xo.html

  • @pratikdeshpande3258 · 3 years ago +1

    Excellent explanation! Appreciate the efforts taken to teach a complex paper in less than 15 mins. That's a skill not all have.

  • @FaryalMir · 3 years ago +1

    Really appreciate the walkthrough and explanations, thanks.

  • @prateekyadav9811 · 6 months ago

    Great presentation, thank you! As someone else said, explaining a paper in such a short amount of time is a rare skill. Commendable!
    I have quite a basic question. All of this involves representing a node as a vector so that we can apply linear-algebraic operations to it for whatever analysis we wish to do, right? But aren't the numbers in the vector representing a node actually node ids rather than quantities? They might as well be names/words. So how can we gain insights from doing math on them? Maybe I am missing something obvious. Please help, thanks!

    • @TechVizTheDataScienceGuy · 5 months ago

      Hey, thanks! :)
      Node vectors are not ids.
      Each node is initialised with a random vector, which becomes meaningful as the loss converges.
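
A minimal sketch of that initialisation idea (not code from the video; the graph size and dimensions are made up):

```python
import numpy as np

# Each node id is just an index into a trainable embedding matrix that starts
# out random; skip-gram training over random walks is what makes it meaningful.
num_nodes, dim = 34, 64                      # e.g. karate club graph, 64-d vectors
rng = np.random.default_rng(0)
embeddings = rng.normal(scale=0.1, size=(num_nodes, dim))

vec = embeddings[7]                          # lookup for node 7: a vector, not an id
```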

  • @sanjaykrish8719 · 3 years ago

    Well explained. Thanks!

  • @mdhabibiurrahmansifat1798 · 3 years ago +1

    Great! It would be better if you gave an example of node2vec applied to an actual graph.

    • @TechVizTheDataScienceGuy · 3 years ago

      Thanks for the feedback. You can apply node2vec to any graph-structured data. You might want to see my video on “DeepWalk”, where I explain the karate club graph; the same example holds here as well (see the sketch below).
      Cheers!
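
A quick sketch of that karate club example (illustrative, not the video's code): DeepWalk-style uniform walks on the graph; node2vec would only change how the next step is sampled.

```python
import random
import networkx as nx

# Zachary's karate club graph, the toy example from the DeepWalk video.
G = nx.karate_club_graph()

def random_walk(G, start, length=10):
    """Uniform (DeepWalk-style) walk; node2vec biases this choice via p and q."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(list(G.neighbors(walk[-1]))))
    return walk

print(random_walk(G, 0))   # one "sentence" of node ids for a skip-gram model
```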

  • @hodaragaie9778 · 3 years ago

    Are there other graph embedding implementations that focus on representing information of graph edges instead of nodes?

    • @TechVizTheDataScienceGuy · 3 years ago +1

      Yeah. You can look at “edge2vec”.
      Also, one more interesting way to get edge representations, I think, is to use an existing method (such as node2vec, let's say) over the transformed graph: a graph where all the existing edges are treated as nodes and two of them are connected whenever the original edges share an endpoint, i.e. you basically switch the roles of nodes and edges (essentially the line graph of the network; see the sketch below).
      ^ btw just a thought!
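
A minimal sketch of that transformation using networkx's built-in line graph (a hypothetical usage, not from the thread):

```python
import networkx as nx

# The line graph has one node per original edge; two are adjacent when the
# original edges share an endpoint. Running node2vec on it yields edge vectors.
G = nx.karate_club_graph()
L = nx.line_graph(G)

print(L.number_of_nodes() == G.number_of_edges())  # True: edges became nodes
```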

    • @hodaragaie9778 · 3 years ago

      @TechVizTheDataScienceGuy Yeah, I had the same thought. I'll try implementing that and see how it works.
      Thank you so much for the videos and for your response :)

    • @TechVizTheDataScienceGuy · 3 years ago

      Great! Thank you!

  • @DataScienceCenter · 2 years ago

    Amazing video! Can you share the implementation code of node2vec?

    • @TechVizTheDataScienceGuy · 2 years ago

      Thanks for the appreciation. There are a couple of GitHub repos that already implement it. For example, you can check out github.com/eliorc/node2vec
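
For reference, a minimal usage sketch of that package (assuming `pip install node2vec`; all parameter values here are illustrative):

```python
import networkx as nx
from node2vec import Node2Vec  # pip install node2vec (github.com/eliorc/node2vec)

G = nx.karate_club_graph()

# Precompute biased walks (p and q control the BFS/DFS trade-off), then fit a
# skip-gram model; fit() wraps gensim's Word2Vec under the hood.
n2v = Node2Vec(G, dimensions=64, walk_length=30, num_walks=200, p=1, q=1)
model = n2v.fit(window=10, min_count=1)

print(model.wv.most_similar("7"))  # node ids are stored as strings
```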

  • @ratikagarg1494 · 4 years ago +2

    Hi there 👍

  • @romajain2425 · 3 years ago

    Also, what is the "alias sampling" step in node2vec?

    • @TechVizTheDataScienceGuy · 3 years ago

      Alias sampling refers to sampling the next node from the neighbour set of the current node based on the distribution pi, which you get by multiplying the initial probability distribution w (the edge weights) with the bias factor alpha, which depends on the p and q values.
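
For the curious, a minimal sketch of the alias method itself (Walker's method; illustrative, not the paper's code): O(n) setup, then O(1) draws from a fixed discrete distribution such as node2vec's pi.

```python
import random

def alias_setup(probs):
    """Build the alias table for a discrete distribution (illustrative)."""
    n = len(probs)
    scaled = [p * n for p in probs]              # average bucket mass becomes 1.0
    alias = [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                             # bucket s is topped up by outcome l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    return scaled, alias

def alias_draw(scaled, alias):
    """O(1) sample: pick a bucket uniformly, then keep it or take its alias."""
    i = random.randrange(len(scaled))
    return i if random.random() < scaled[i] else alias[i]

pi = [0.5, 0.3, 0.2]                             # e.g. normalised w * alpha per neighbour
table = alias_setup(pi)
next_neighbour = alias_draw(*table)              # index of the sampled neighbour
```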

    • @romajain2425 · 3 years ago

      @TechVizTheDataScienceGuy Yes, this is clear. However, how do you decide when to sample nodes based on pi? The way I am visualising it: pi gets calculated for the "sentence"/random walk being generated, using w*alpha, for all the nodes in that walk; for weighted edges, w*alpha is the edge weight between the two nodes multiplied by alpha (based on p and q), and it gets calculated like this for all the nodes in that random walk?

    • @TechVizTheDataScienceGuy · 3 years ago +1

      So as per algorithm 3.2.3, the pi matrix is calculated prior to starting the random walks over the graph. At any stage of a random walk you look up the neighbours of the current node and refer to the pi matrix to sample the next node from the distribution it holds, and then we repeat this process.

    • @romajain2425 · 3 years ago

      @TechVizTheDataScienceGuy I assumed that during SGD the edge weights would change iteratively as the skip-gram objective is optimised, hence there could be a need to recalculate pi every iteration, since pi = w*alpha, where w would change.

    • @TechVizTheDataScienceGuy · 3 years ago +1

      Okay, so SGD updates the weights of the neural network that we train after we have generated the random walks (each step sampled from pi); it's just like training skip-gram on sentences. As you can see in the algorithm, the SGD step is outside the scope of the walk computation, so the edge weights w, and hence pi, never change.
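
A minimal sketch of that two-phase separation, assuming gensim is installed (walk counts and lengths are made up, and the biased pi sampling is replaced by uniform sampling for brevity):

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Phase 1: generate all walks first (node2vec would sample each step from the
# precomputed pi matrix; uniform here for brevity).
G = nx.karate_club_graph()
walks = []
for start in G.nodes():
    for _ in range(10):                          # 10 walks per start node
        walk = [start]
        for _ in range(9):
            walk.append(random.choice(list(G.neighbors(walk[-1]))))
        walks.append([str(n) for n in walk])     # gensim expects string tokens

# Phase 2: SGD happens only here, updating the skip-gram model's embedding
# weights; the graph's edge weights w are never modified.
model = Word2Vec(walks, vector_size=64, window=5, min_count=1, sg=1)
print(model.wv["0"])
```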

  • @romajain2425 · 3 years ago

    How is the distance d calculated in the case of weighted edges?

    • @TechVizTheDataScienceGuy · 3 years ago

      They have written that d(t,x) is the shortest-path distance and must be one of 0, 1, 2, so it seems to be calculated on hop count alone.
      The edge weights come into play when you calculate pi(v,x) = alpha(t,x) * w(v,x): the w(v,x) term is the edge weight, acting as the initial probability distribution.
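
A small sketch of that bias term from the paper's definition (the function names here are illustrative):

```python
def alpha(p, q, d_tx):
    """node2vec search bias: d_tx is the hop distance between the previous
    node t and candidate next node x, and is always 0, 1, or 2."""
    if d_tx == 0:
        return 1.0 / p   # return to t
    if d_tx == 1:
        return 1.0       # x is also a neighbour of t
    return 1.0 / q       # x moves the walk away from t

def pi(w_vx, p, q, d_tx):
    """Unnormalised transition probability: edge weight times the bias."""
    return alpha(p, q, d_tx) * w_vx
```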

    • @romajain2425 · 3 years ago

      @TechVizTheDataScienceGuy Okay, so here is one idea: I initialised the edge weights to the tf-idf scores between the two nodes (or "words"), and since the goal is to learn homophily, I set p = 0.5 and q = 2; the latent representations of these nodes are then my word embeddings.

    • @romajain2425 · 3 years ago

      So in this case, higher tf-idf means the two nodes are more important to/related to each other, hence higher tf-idf also equates to a higher edge weight. Does that imply more distance between the two nodes? That would negate my purpose, since I wanted nodes that are highly related to be "closer" to each other in the graph.

    • @TechVizTheDataScienceGuy · 3 years ago +1

      I am assuming here that the edge weight refers to a co-occurrence property between two nodes/words, like some sort of pointwise mutual information score. I didn't get how you would put a tf-idf weight on edges based on the tf-idf of the nodes: firstly, what would the edge weight signify? Secondly, tf-idf is associated with an individual word, so putting it on an edge as a relation between two nodes won't make sense. So ideally you can opt for semantic scores, co-occurrence scores, PMI, etc.

    • @TechVizTheDataScienceGuy · 3 years ago +1

      So, considering we have some measure over each edge that captures the importance of the nodes to each other, which helps us build our initial distribution w: the authors saying that the shortest distance can only be 0, 1, 2 means they are focusing on hop length for d(t,x), not weight. Still, the final pi distribution you get after multiplying w and alpha will tend to give you more chances to sample high-edge-weight neighbours, hence learning representations that give more weight to words associated with high edge weights. So the logic remains intact; it's not doing the opposite of what we expect it to do. Btw, nice question. I hope I answered your query.