Intro to graph neural networks (ML Tech Talks)

  • Published Jun 18, 2024
  • In this session of Machine Learning Tech Talks, Petar Veličković, Senior Research Scientist at DeepMind, gives an introductory presentation and Colab exercise on graph neural networks (GNNs).
    Chapters:
    0:00 - Introduction
    0:34 - Fantastic GNNs and where to find them
    7:48 - Graph data processing
    13:42 - GCNs, GATs and MPNNs
    26:12 - Colab exercise
    49:52 - Resources for further study
    Resources:
    Theoretical Foundations of GNNs → goo.gle/3xwKPSW
    Compiled resources for further study → goo.gle/3cO7gvb
    Catch more ML Tech Talks → goo.gle/ml-tech-talks
    Subscribe to TensorFlow → goo.gle/TensorFlow
  • Science & Technology

Comments • 107

  • @TheStargazer1221 · 2 months ago +2

    Changed the literature, still incredibly humble. Great representation of a scientist.

  • @victor-iyi · 1 year ago +17

    Wow, I used to fear Graph Neural Networks thinking it was some sort of monster. But this presentation has changed everything for me. Excellent job Petar! Thank you, thank you so much!

  • @muhammadharris4470 · 3 years ago +7

    Thanks Petar. I really love this intro to GNNs; I've been hearing about them for a while and needed to get to know the actual graph computations and matrices in the context of ML.

  • @giovannibianco5996 · 1 month ago +1

    Great video Petar; now I understand everything and I will never again have any kind of fear of the GAT. Now I am friends with the GAT. We hang out often and apply leaky ReLU to beers in a bar. When we cross the street he always reminds me to pay attention to the other edges, and he is also very computationally efficient. Love it!

  • @Amapramaadhy · 3 years ago +11

    Really great content and presentation. The analogy between convolutional NNs and GNNs is one of the best I have heard. Petar should do more lectures.

  • @jingzhitay6736 · 3 years ago +2

    Thank you for this introduction! This might be the last GNN overview that I need to watch :)

  • @jimlbeaver · 3 years ago

    Thanks...great stuff. I really appreciate you taking a slow and deliberate approach to this.

  • @fredquesnel1855 · 2 years ago

    Thanks for the great tutorial! Straight to the point, easy to understand, with an exercise that is easy to follow!

  • @WickedEssi · 2 years ago +2

    Great explanation. Very calm and precise. Was a pleasure to listen to.

  • @Sangel67rus · 2 years ago

    Brilliant explanations! Thank you, Petar!

  • @dori8118 · 3 years ago +4

    Thanks for the video.
    I was in love with knowledge graphs; I'm hoping to get back to them some day.

  • @danielkorzekwa · 1 year ago

    Great talk, excellent starting point for Graph Neural Networks. Presentation first + hands-on tutorial.

  • @ihmond · 3 years ago +8

    Thank you for the sample code!
    Most of the models I found are written in PyTorch, so this Keras model can be my basic reference.

  • @peterkonig9537 · 2 years ago

    Very clear presentation. It nicely combines concepts and exercises.

  • @frankl1 · 3 years ago

    Thanks for this intro to GNN, I enjoyed it a lot

  • @nikolayfx · 3 years ago +1

    Thanks Petar for presenting GNN

  • @mytelevisionisdead · 3 years ago

    Clearly explained! Even more impressive given the information density of the content!

  • @NoNTr1v1aL · 1 year ago +1

    Absolutely amazing video!

  • @toandao7113 · 3 years ago +1

    Thanks for the video. You bring useful knowledge

  • @masudcseku · 3 years ago +1

    Thanks Petar, very comprehensive tutorial! It will be great if you can make a tutorial on GAT ;)

  • @krishnanchari2001 · 1 month ago

    Thank you, a very good lecture. I now have confidence in the topic and will definitely try the code on a real-life data set. I was looking to understand GNNs so that I could apply them in the area of finance, and I got it. Thank you again Petar.

  • @stephanembatchou5300 · 2 years ago +1

    Excellent content. Thank You!

  • @mohajeramir · 2 years ago

    This is so awesome. Excellent presenter

  • @payamkhorramshahi5726 · 1 year ago

    Very transparent tutorial! Thank you

  • @nabeelhasan6593 · 3 years ago +1

    This is a very good series

  • @michielim · 2 years ago

    This was so so useful - thank you!

  • @LouisChiaki · 3 years ago +7

    Glad that Google improved the ETA for my home city, Taichung! The traffic there was really bad, and it must be really difficult for the model 😂.

  • @AliMohammedBakhietIssa · 2 months ago

    Many Thanks for your efforts :)

  • @randerson1184 · 3 years ago +3

    I'm going to get a TON of use out of these! Thanks!

  • @ernestocontreras-torres9188 · 2 years ago

    Great material!

  • @brunoalvisio · 2 years ago

    Thank you for the great intro! Quick question: in the GCN equation, is the bias being omitted just for clarity?

  • @innovationscode9909 · 3 years ago +2

    Thanks. Great stuff. I really LOVE ML

  • @DefendIntelligence · 3 years ago

    Thank you it was really interesting

  • @squarehead6c1 · 3 months ago

    Great tutorial!

  • @sleeping4cat · 1 year ago

    Waiting eagerly for a custom TensorFlow library for GNNs!!

  • @cetrusbr · 2 years ago

    Fantastic Lecture! Thanks Petar, congrats for the amazing job!

  • @rogiervdw · 2 years ago +2

    Marvellous explanation, thank you. Typo at 17:47: should the sum be over j \in N_i?

  • @39srini · 3 years ago +1

    Very good, useful video.

  • @ThanhPham-xz2yo · 2 years ago

    thanks for sharing!

  • @miladto · 2 years ago

    Thank you for this great Presentation. Can you please share the Colab?

  • @carltonchu1 · 3 years ago +1

    I just saw you in our DeepMind internal talks, and then YouTube recommended this video to my personal account!

  • @phaZZi6461 · 3 years ago +1

    thanks a lot!

  • @RAZZKIRAN · 3 years ago

    Can we apply this to text classification problems like sentiment analysis and online hate classification?

  • @cybervigilante · 3 years ago +2

    Consider graphs on our level - and even people are graphs. They exist only as nodes in a higher level network. But the edges of the higher level do not connect directly to any node in the lower level graph, otherwise you just have a lower level graph. The edges exert a Bias. Biases are common in nature - hormone biases, electrical biases, thermal biases, etc.
    However, there is a counter-bias feedback from the lower level graph, which can be any organism or complex structure, which can cause some higher level edges to either disconnect or connect in a benign or malign fashion, changing the bias. We provide the feedback. This explains very many things.

  • @ExperimentalAIML · 11 months ago +1

    Good explanation

  • @vasylcf · 2 years ago

    Thanks!

  • @Max-eo6vx · 2 years ago

    Thank you Petar. Would you share the code or notebook?

  • @werewolf_13 · 3 years ago

    Hey, insightful lesson!
    Can anyone give me an idea of how to prepare a dataset for a GNN? Especially for recommendation systems.

  • @twitteranalyticsbyad3969 · 3 years ago

    Changing Cake to Pie, Nice move :D You can only understand if you have seen Jure Leskovec's lectures.

  • @fahemhamou6170 · 1 year ago

    My sincere greetings, thank you

  • @pushkinarora5800 · 1 year ago

    It's a binge watch!! Epic!!

  • @sachinvithubone4278 · 3 years ago +6

    Thanks for the video. I think GNNs could be used more in health-care systems.

  • @timfaverjon3597 · 2 years ago

    Hi, thank you for the video. Can I find the Colab somewhere?

  • @philtoa334 · 10 months ago +1

    Nice.

  • @_Intake__Gourab · 2 years ago

    Hello, I am doing image classification using a GCN, but I failed to understand how to use image data in a GCN model. I need some help!

  • @giorgigona · 2 years ago +1

    Where can I see the presentation slides?

  • @sunaryaseo · 2 years ago +1

    A nice tutorial. Now I am thinking about how to implement a GNN for signal processing, such as classification/prediction problems. How do I design the graph, nodes, and edges?

  • @mohammadforutan955 · 1 year ago

    very useful

  • @margheritamaraschini3958 · 1 year ago +2

    Great presentation. If it can be useful, I may have found some small typos:
    - "toward a simple update rule": A~ = A~ + I should be A~ = A + I.
      Also, in one of the instances W should be transposed (W^T).
    - "GCN": the subscripts of the sum are, I think, the other way around.

    • @apaarsadhwani · 11 months ago

      Thanks, that was useful!
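For readers cross-checking these typo reports against the slides, the standard GCN layer in Kipf & Welling's formulation adds the self-loops to the original adjacency (so A~ = A + I, not A~ + I):

```latex
% GCN layer with the "renormalization trick":
\tilde{A} = A + I_N, \qquad
\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}, \qquad
H^{(l+1)} = \sigma\!\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right)
```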

  • @rahulseetharaman4525 · 2 years ago

    Sir, could you please explain the part where the mask is divided by the mean?

  • @thefastreviewer · 1 year ago

    Is it possible to share the Colab file as well?

  • @bdegraf · 11 months ago

    Is there a link to the Colab code? I see references to it but I'm not finding it.

  • @quickpresent8987 · 2 years ago

    Has anyone written the Colab code following this video? I just get an error at the 'matmul' call.

  • @phillibob55 · 2 years ago +1

    Those getting the error at load_data(): to quote @Alex Muresan's comment, at the time of this comment (spektral.__version__ == 1.0.8) loading the Cora dataset would be something like this:
    ```
    cora_dataset = spektral.datasets.citation.Citation(name='cora')
    test_mask = cora_dataset.mask_te
    train_mask = cora_dataset.mask_tr
    val_mask = cora_dataset.mask_va
    # index zero since there's just one graph inside; there could be multiple for other datasets
    graph = cora_dataset.graphs[0]
    features = graph.x
    adj = graph.a
    labels = graph.y
    ```
    Hope this is helpful!

    • @ayanansari4463 · 2 years ago

      It keeps returning:
      /usr/local/lib/python3.7/dist-packages/scipy/sparse/_index.py:126: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
        self._set_arrayXarray(i, j, x)
      Not sure if this is right?

    • @phillibob55 · 2 years ago +1

      @@ayanansari4463 it'll give this warning, but it'll still work.

  • @nastaranmarzban1419 · 2 years ago +7

    Hi, hope you're doing well. I have a problem: when I use
    spektral.datasets.citation.load_data
    I receive an error:
    "spektral.datasets.citation has no attribute 'load_data'"
    Would anyone help me with this problem?
    Thanks🙏

    • @AlexMuresan · 2 years ago +17

      So, at the time of this comment (spektral.__version__ == 1.0.8), loading the Cora dataset would be something like this:
      ```
      cora_dataset = spektral.datasets.citation.Citation(name='cora')
      test_mask = cora_dataset.mask_te
      train_mask = cora_dataset.mask_tr
      val_mask = cora_dataset.mask_va
      # index zero since there's just one graph inside; there could be multiple for other datasets
      graph = cora_dataset.graphs[0]
      features = graph.x
      adj = graph.a
      labels = graph.y
      ```
      Hope this is helpful!

    • @phillibob55 · 2 years ago

      @@AlexMuresan Thank you so much man!

  • @taruneswar9036 · 3 years ago +1

    🙏🙏

  • @turalsadik81 · 2 years ago +1

    Where can I find the notebook for the Colab exercise?

  • @user-rn6mx2jk1o · 1 year ago

    Please correct me if I'm wrong, Petar, but in the tutorial, it looks like during training we are including the full graph (including test nodes) in the node-pooling step? This looks like information leakage--is there some reason I'm missing why it's considered allowed here?

    • @petarvelickovic6033 · 1 year ago +1

      This is correct, and it is only allowed under the "transductive" learning regime. In this regime, you're given a static graph, and you need to 'spread labels' to all other nodes. Conversely, in 'inductive' learning you are not allowed access to test nodes at training time.
      Naturally, the transductive regime is much easier, as you can use a lot of methods that exploit the properties of the graph structure provided. In inductive learning, instead, your method needs to in principle be capable of generalising to arbitrary, unseen, structures at test time.
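A toy sketch in plain Python (hypothetical data, not from the talk) of the distinction Petar draws:

```python
# Toy illustration of transductive vs inductive node classification.
# A "graph" here is just (edges, labels); masks select which labels are visible.

full_graph_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
labels = ["A", "A", "B", "B", "B"]

# Transductive: one static graph; training sees ALL edges but only masked labels.
train_mask = [True, True, False, False, False]
visible_labels = [y if m else None for y, m in zip(labels, train_mask)]
# The model may exploit edges touching test nodes, e.g. (2, 3), during training.

# Inductive: the test graph is entirely unseen at training time,
# so the model must generalise to arbitrary new structures.
train_graph = {"edges": [(0, 1), (1, 2)], "labels": ["A", "A", "B"]}
test_graph = {"edges": [(0, 1)], "labels": ["B", "B"]}  # disjoint structure
```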

  • @l.g.7694 · 2 years ago

    Really nice presentation!
    A question regarding the colab: Anyone else having the problem that the validation accuracy stays at around 13%?

    • @l.g.7694 · 2 years ago

      This ... is unfortunate. I made a typo (mask = tf.reduce_mean(mask) instead of mask /= tf.reduce_mean(mask)) which I literally noticed after hitting send. Now it works.

  • @vibrationalmodes2729 · 1 year ago

    Strong last name dude (just started video, was my first impression 😂)

  • @slkslk7841 · 1 year ago

    What are Inductive problems?

  • @halilibrahimakgun7569 · 1 year ago

    Can you share the Colab notebook?

  • @ghensao4027 · 1 year ago

    Typo at 17:35: j should iterate over the neighbourhood N_i of node i.

  • @jtrtsay · 3 years ago +6

    Love from Taichung city, Taiwan 🇹🇼

    • @ScriptureFirst · 3 years ago +1

      A lovely city in an island nation 🇹🇼

  • @phillibob55 · 2 years ago

    Is anyone else getting accuracies higher than 1? (I know something's wrong but I can't figure it out)

  • @user-qf5il7uq9m · 2 years ago

    Could I ask why the mask should be divided by its mean? Thanks

    • @AvinashRanganath · 2 months ago

      I think it is to prevent the model from overfitting to nodes with a larger number of edges.
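For what it's worth, here is a plain-Python sketch (an assumption about the Colab: that, as in the original Kipf GCN training loop, the per-node loss is averaged over ALL nodes) of why dividing the mask by its mean works:

```python
# Only the masked (training) nodes contribute to the loss, but the loss is
# averaged over ALL nodes. Rescaling the mask by its mean (the fraction of
# nodes in the mask) makes the result equal to the mean loss over just the
# masked nodes, keeping the loss scale independent of the mask size.

def masked_mean_loss(losses, mask):
    n = len(losses)
    mean_mask = sum(mask) / n                 # fraction of nodes in the mask
    scaled = [m / mean_mask for m in mask]    # mask /= reduce_mean(mask)
    return sum(l * s for l, s in zip(losses, scaled)) / n

losses = [2.0, 4.0, 6.0, 8.0]
mask = [1, 1, 0, 0]  # only the first two nodes are training nodes
# masked_mean_loss(losses, mask) equals the plain mean over the masked
# nodes: (2.0 + 4.0) / 2 = 3.0
```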

  • @dennisash7221 · 3 years ago +2

    I am trying to follow the example, but I get the following error:
    AttributeError: module 'spektral.datasets.citation' has no attribute 'load_data'
    Anyone know why this is happening? I can only see load_binary in the attributes list.

    • @sanketjoshi8387 · 3 years ago

      Did you fix the issue?

    • @dennisash7221 · 3 years ago

      @@sanketjoshi8387 I have not found out what the issue is. It might be something to do with some upgrades to Python, NP or Spektral ... I am hoping someone can help

    • @satyabansahoo1862 · 3 years ago +1

      @@dennisash7221 Check the version of Spektral; he is using 0.6.2, so try using that.

    • @DanielBoyles · 3 years ago +6

      This should do it in Spektral version 1.0.6. I've used the same variable names, but haven't gone through the rest of the Colab tutorial as yet:
      ```
      from spektral.datasets.citation import Cora
      dataset = Cora()
      graph = dataset[0]
      adj, features, labels = graph.a, graph.x, graph.y
      train_mask, val_mask, test_mask = dataset.mask_tr, dataset.mask_va, dataset.mask_te
      ```

    • @dennisash7221 · 3 years ago

      @@DanielBoyles Awesome, it seems to work. I will try to run the rest of the notebook later, but it looks like this did the trick.

  • @wibulord926 · 1 year ago

    your source code pls

  • @cia05rf · 1 year ago +1

    Great video. It doesn't work with Spektral 1.2.0, though; to save downgrading, this can be used:
    ```
    cora = spektral.datasets.citation.Cora()
    train_mask = cora.mask_tr
    val_mask = cora.mask_va
    test_mask = cora.mask_te
    graph = cora.read()[0]
    adj = cora.a
    features = graph.x
    labels = graph.y
    ```

    • @muhannadobeidat · 1 year ago

      Thanks for posting this. It's a time saver!

  • @iva1389 · 2 years ago

    inferring soft adjacency -- what does that even mean?

  • @asedaradioshowpodcast · 2 years ago

    27:35

  • @desrucca · 1 year ago +1

    Total nodes = 2708 nodes
    Train = 140 nodes
    Valid = 500 nodes
    Test = 1000 nodes
    Where did the remaining 1068 nodes go?

    • @petarvelickovic6033 · 1 year ago

      They're still there -- their labels are simply not used for anything (training or eval) in this particular node split.
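The arithmetic behind the question, as a quick sanity check:

```python
# Split sizes for Cora as listed in the comment above.
total, train, valid, test = 2708, 140, 500, 1000
unlabeled = total - train - valid - test  # 1068 nodes
# These nodes still participate in message passing; their labels are
# simply unused for both training and evaluation in this split.
```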

  • @jackholloway7516 · 3 years ago

    1st

  • @phillibob55 · 2 years ago +1

    If anyone gets the "TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]" error at the matmul, use adj.todense() while calling the train_cora() method.

  • @oladipupoadekoya1559 · 2 years ago

    Hello sir, please can I have your email? I need you to explain how to represent my optimisation problem as a GNN.