NetSci 04-2 Eigenvector Centrality

  • Published on 19 Dec 2024

Comments •

  • @rbg9854
    @rbg9854 2 years ago +1

    Thank you so much for the detailed explanation! I had been looking for this kind of video for a long time, but most videos only touch lightly on the maths side. This is a much more comprehensive one!

  • @nilslohrberg3877
    @nilslohrberg3877 3 years ago +6

    Great video, this is the only source I could find that finally explained to me the connection between the interpretation of eigenvector centrality and its definition via the eigenvector equation. Thank you a lot!

  • @debasismondal3619
    @debasismondal3619 3 years ago +6

    This video is great but underrated. If one wants to understand the concepts of eigenvector and Katz centrality, I have hardly come across any videos which explain them in so much detail.

    • @andrewbeveridge7476
      @andrewbeveridge7476  3 years ago

      I have a follow-up video on Katz centrality, which helps to make that connection and shows how these measures relate to PageRank. th-cam.com/video/9xX4Z5Sfk7g/w-d-xo.html

  • @kyxas
    @kyxas 3 years ago +3

    Hi Andrew, I tried solving your example and got a different eigenvector. I guess it's the adjacency matrix A, where the fifth row should be (0, 0, 0, 1, 0, 1) and not (1, 0, 0, 1, 0, 1).

    • @andrewbeveridge7476
      @andrewbeveridge7476  3 years ago +1

      Yes, you are correct. My matrix has a typo. Thanks for pointing that out.

    • @PowerYAuthority
      @PowerYAuthority 1 year ago

      Came here to say this; I was running these calculations with numpy and it wasn't matching.
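
A minimal numpy sketch of that kind of check, under a stated assumption: only the corrected fifth row (0, 0, 0, 1, 0, 1) comes from this thread, and the remaining rows below are hypothetical stand-ins, since the video's full matrix isn't quoted here.

```python
import numpy as np

# Hypothetical 6-node adjacency matrix: only row 5 (index 4) is taken from the
# thread above (the corrected row); the other rows are illustrative stand-ins.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],   # corrected fifth row: a_51 = 0, not 1
    [0, 0, 0, 1, 1, 0],
])

# Eigenvector centrality is the eigenvector of the largest eigenvalue of A.
vals, vecs = np.linalg.eigh(A)               # eigh, since A is symmetric
centrality = np.abs(vecs[:, np.argmax(vals)])

# Rescale so the largest entry equals 1, matching the video's convention.
print(np.round(centrality / centrality.max(), 2))
```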

  • @Keyakina
    @Keyakina 8 months ago

    How did you normalize and get 0.32 instead of zero at 5:50?

    • @andrewbeveridge7476
      @andrewbeveridge7476  8 months ago +1

      I divided each entry by 52, which is the value of the largest entry. So the first entry becomes 17/52 ≈ 0.327. My goal here was to make it easier for a human to compare the entries. So the first entry is about 32.7% of the largest entry. In hindsight, I shouldn't have called this "normalizing," since the resulting vector doesn't have length 1. "Rescaling" would be a better term. Thanks for the comment.
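
A tiny sketch of the rescaling described above, using the iterate [17, 38, 37, 52, 39, 47] quoted later in this thread, and contrasting it with normalization to unit length.

```python
import numpy as np

# The iterate discussed around 5:50 (quoted in a later reply).
x = np.array([17, 38, 37, 52, 39, 47], dtype=float)

# "Rescaling": divide by the largest entry so the largest value becomes 1.
# The first entry is 17/52, roughly a third of the largest entry.
print(x / x.max())

# Normalization proper: divide by the Euclidean norm so the vector has length 1.
print(x / np.linalg.norm(x))
```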

  • @brownchocohuman6595
    @brownchocohuman6595 1 year ago +1

    I have a question.
    Instead of recursively updating the values, why don't we straight up calculate the normalised eigenvector and get the centrality value of each node from there?

    • @andrewbeveridge7476
      @andrewbeveridge7476  11 months ago

      You are correct! In practice, we just find the eigenvector for the largest eigenvalue of matrix A. This is also called the "dominant" eigenvalue of A.
      My goal at 4:30 was to give an example of the convergence of this recursive update rule. After that, the video uses the spectral decomposition theorem to show that we are converging to the eigenvector for the largest eigenvalue.
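
A short sketch of the equivalence described in this reply, on a small hypothetical graph (a triangle with a pendant node, not the network from the video): the recursive update converges to the same direction as the eigenvector of the dominant eigenvalue computed directly.

```python
import numpy as np

# Hypothetical example graph: a triangle 0-1-2 with node 3 attached to node 2.
# Connected and non-bipartite, so the power iteration below converges.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Route 1: the recursive update -- repeatedly multiply by A and rescale.
x = np.ones(A.shape[0])
for _ in range(100):
    x = A @ x
    x = x / np.linalg.norm(x)

# Route 2: read off the eigenvector of the dominant eigenvalue directly.
vals, vecs = np.linalg.eigh(A)               # A is symmetric
v = np.abs(vecs[:, np.argmax(vals)])

print(np.round(x, 4))
print(np.round(v, 4))                        # agrees with the iterated vector
```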

  • @souravdey1227
    @souravdey1227 3 years ago +1

    Seriously good explanation in such a short time. The only problem is that suddenly the volume dipped so low that I could barely hear.

    • @andrewbeveridge7476
      @andrewbeveridge7476  3 years ago +1

      You are right: the audio does dip starting around 9:00. Sorry about that. It looks like I need to upload a new version (with a new URL) to fix it. Thanks for the feedback and I'm glad the video was helpful.

  • @Kane9530
    @Kane9530 10 months ago

    Hi Andrew, for directed graphs, since the matrix is no longer symmetric, the eigenvectors are no longer necessarily orthonormal, and so the spectral decomposition theorem wouldn't apply. In this case, how do we prove that the eigenvector centrality calculation works?

    • @andrewbeveridge7476
      @andrewbeveridge7476  8 months ago

      Directed networks do get tricky for a few different reasons. When the directed network is strongly connected and not k-partite (so the adjacency matrix is primitive and irreducible), the eigenvector for the dominant eigenvalue will still determine the limiting values of the vector produced by repeatedly multiplying by A^T. This is the same idea that we use in the "power method" to find the dominant eigenvalue and eigenvector. I recommend reading up on that method.
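
A brief power-method sketch for the directed case described above, on a small hypothetical digraph (a 4-cycle plus the chord 0 -> 2, so it is strongly connected and aperiodic, i.e. the adjacency matrix is primitive, as the reply assumes).

```python
import numpy as np

# Hypothetical directed graph; A[i, j] = 1 means there is an edge i -> j.
# Edges: 0->1, 0->2, 1->2, 2->3, 3->0.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

# Power method on A^T: incoming links confer centrality.
x = np.ones(A.shape[0])
for _ in range(200):
    x = A.T @ x
    x = x / np.linalg.norm(x)

# Cross-check against the dominant eigenvector of A^T. A^T is not symmetric,
# so use eig; the Perron-Frobenius eigenvector can be taken entrywise nonnegative.
vals, vecs = np.linalg.eig(A.T)
v = np.abs(vecs[:, np.argmax(vals.real)])
v = v / np.linalg.norm(v)

print(np.round(x, 4))
print(np.round(v, 4))                        # matches the power-method result
```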

  • @avnimishra6874
    @avnimishra6874 1 year ago

    How did you get the normalized value? Please explain (time 5:54).

    • @andrewbeveridge7476
      @andrewbeveridge7476  11 months ago

      We divide by the largest entry so that the largest value is 1. This makes it easy for a human to compare the relative sizes. In this case, [17, 38, 37, 52, 39, 47] becomes [17/52, 38/52, 37/52, 1, 39/52, 47/52] ≈ [0.32, 0.73, 0.71, 1, 0.75, 0.90].

  • @eliesjj2207
    @eliesjj2207 11 months ago

    This is awesome!

  • @hernanepereira50
    @hernanepereira50 3 years ago

    Hi Andrew. How are you? a_{51}=0, isn't it?

    • @andrewbeveridge7476
      @andrewbeveridge7476  3 years ago

      Yes, you are right! Thank you for the correction for the matrix A starting at 6:50. So the fifth entry of Ax should be x_4 + x_6.

  • @zhichen2288
    @zhichen2288 2 years ago

    excellent, thank you!