Intro to graph neural networks (ML Tech Talks)
- Published Jun 18, 2024
- In this session of Machine Learning Tech Talks, Senior Research Scientist at DeepMind, Petar Veličković, will give an introductory presentation and Colab exercise on graph neural networks (GNNs).
Chapters:
0:00 - Introduction
0:34 - Fantastic GNNs and where to find them
7:48 - Graph data processing
13:42 - GCNs, GATs and MPNNs
26:12 - Colab exercise
49:52 - Resources for further study
Resources:
Theoretical Foundations of GNNs → goo.gle/3xwKPSW
Compiled resources for further study → goo.gle/3cO7gvb
Catch more ML Tech Talks → goo.gle/ml-tech-talks
Subscribe to TensorFlow → goo.gle/TensorFlow - Science & Technology
Changed the literature, still incredibly humble. Great representation of a scientist.
Wow, I used to fear Graph Neural Networks thinking it was some sort of monster. But this presentation has changed everything for me. Excellent job Petar! Thank you, thank you so much!
Thanks Petar. Really love this intro to GNNs; I've been hearing about them for a while. I needed to get to know the actual graph computations and matrices in the context of ML.
Great video Petar; now I understand everything and will never again have any kind of fear towards the GAT. Now I am friends with the GAT. We hang around often and apply leaky ReLU to beers at the bar. When we cross the street he always reminds me to pay attention to the other edges, and he is also very computationally efficient. Love it!
Really great content and presentation. The analogy between convolutional NN and GNN is one of the best I have heard. Petar should do more lectures
Thank you for this introduction! This might be the last GNN overview that I need to watch :)
Thanks...great stuff. I really appreciate you taking a slow and deliberate approach to this.
Thanks for the great tutorial! Straight to the point, easy to understand, with an exercise that is easy to follow!
Great explanation. Very calm and precise. Was a pleasure to listen to.
Brilliant explanations! Thank you, Petar!
Thanks for video
I was in love with knowledge graphs; I am trying to get back to them some day.
Great talk, excellent starting point to Graph Neural Networks. Presentation first + hands on tutorial.
Thank you for your sample code!
Most of the models I found are written in PyTorch. So this Keras model can be my basic reference.
Very clear presentation. It nicely combines concepts and exercises.
Thanks for this intro to GNN, I enjoyed it a lot
Thanks Petar for presenting GNN
Clearly explained! Even more impressive given the information density of the content!
Absolutely amazing video!
Thanks for the video. You bring useful knowledge
Thanks Petar, very comprehensive tutorial! It will be great if you can make a tutorial on GAT ;)
Thank you, a very good lecture. I now have confidence in the topic and will definitely try the code on a real-life dataset. I was looking to understand GNNs so that I could apply them in the area of finance, and I got it. Thank you again, Petar.
Excellent content. Thank You!
This is so awesome. Excellent presenter
Very transparent tutorial! Thank you
This is a very good series
This was so so useful - thank you!
Glad that Google improved the ETA for my home city, Taichung! The traffic there is really bad, and it must be really difficult for the model 😂.
Many Thanks for your efforts :)
I'm going to get a TON of use out of these! Thanks!
Great material!
Thank you for the great intro! Qq: in the equation for the GCN, is the bias being omitted just for clarity?
Thanks. Great stuff. I really LOVE ML
Thank you it was really interesting
Great tutorial!
Waiting eagerly for a custom Tensorflow Library on GNN!!
Fantastic Lecture! Thanks Petar, congrats for the amazing job!
Marvellous explanation, thank you. Typo at 17:47: should the sum be over j ∈ N_i?
Very good useful video
thanks for sharing!
Thank you for this great Presentation. Can you please share the Colab?
I just saw you in our DeepMind internal talks, and then YouTube recommended this video to my personal account?
thanks a lot!
Can we apply this to text classification problems like sentiment analysis or online hate classification?
Consider graphs on our level - and even people are graphs. They exist only as nodes in a higher level network. But the edges of the higher level do not connect directly to any node in the lower level graph, otherwise you just have a lower level graph. The edges exert a Bias. Biases are common in nature - hormone biases, electrical biases, thermal biases, etc.
However, there is a counter-bias feedback from the lower level graph, which can be any organism or complex structure, which can cause some higher level edges to either disconnect or connect in a benign or malign fashion, changing the bias. We provide the feedback. This explains very many things.
Good explanation
Thanks!
Thank you Petar. Would you share the code or notebook?
Hey insightful lesson!
Can anyone give me an idea on how to prepare a dataset for GNN? especially for recommendation systems
Changing Cake to Pie, Nice move :D You can only understand if you have seen Jure Leskovec's lectures.
My sincere greetings, thank you
It's a binge watch!! Epic!!
Thanks for video. I think GNN can be used more in health care system.
Hi, thank you for the video. Can I find the Colab somewhere?
Nice.
Hello, I am doing image classification using a GCN, but I failed to understand how to use image data in a GCN model. I need some help!
Where can I see the presentation slides?
A nice tutorial, now I am thinking about how to implement GNN for signal processing such as classification/prediction problems. How do I design the graph, nodes, and edges?
very useful
Great presentation. If it can be useful, I may have found some small typos:
- "toward a simple update rule": A~ = A~ + I should be A~ = A + I.
- In one of the instances, W should be transposed.
- "GCN": I think the subscript of the sum is the other way around.
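For reference, the update rule this thread is picking at is the standard GCN layer from Kipf & Welling, H' = D~^(-1/2) A~ D~^(-1/2) H W with A~ = A + I. A minimal numpy sketch (variable names are illustrative, not taken from the talk's Colab):

```python
# Sketch of one GCN propagation step with the renormalization trick:
# H' = D~^(-1/2) (A + I) D~^(-1/2) H W
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step over adjacency A, features H, weights W."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops: A~ = A + I
    d = A_tilde.sum(axis=1)                    # degrees of A~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D~^(-1/2)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return A_hat @ H @ W                       # aggregate, then transform

# Tiny 3-node path graph, 2 input features, identity weights.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.ones((3, 2))
W = np.eye(2)
out = gcn_layer(A, H, W)
```

With identical feature columns and W = I, both output columns come out equal, which is a quick sanity check that the aggregation is applied per feature.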
Thanks, that was useful!
Sir, could you please explain the part where the mask is divided by the mean ?
Is it possible to share the Colab file as well?
Is there a link to the Colab code? I see references to it but not finding it.
Has anyone written the Colab code following this video? I just get an error at the 'matmul'.
Those getting the error at load_data(), to quote @Alex Muresan's comment:
So, at the time of this comment (spektral.__version__ == 1.0.8), loading the Cora dataset would be something like this:
cora_dataset = spektral.datasets.citation.Citation(name='cora')
test_mask = cora_dataset.mask_te
train_mask = cora_dataset.mask_tr
val_mask = cora_dataset.mask_va
graph = cora_dataset.graphs[0] # zero since it's just one graph inside, there could be multiple for other datasets
features = graph.x
adj = graph.a
labels = graph.y
Hope this is helpful!
it keeps returning /usr/local/lib/python3.7/dist-packages/scipy/sparse/_index.py:126: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
self._set_arrayXarray(i, j, x)
not sure if this is right?
@@ayanansari4463 it'll give this warning, but it'll still work.
Hi, hope you're doing well. I have a problem: when I use
"spektral.datasets.citation.load_data"
I receive an error:
"module 'spektral.datasets.citation' has no attribute 'load_data'"
Would anyone help me with this problem?
Thanks 🙏
So, at the time of this comment (spektral.__version__ == 1.0.8), loading the cora dataset would be something like this:
cora_dataset = spektral.datasets.citation.Citation(name='cora')
test_mask = cora_dataset.mask_te
train_mask = cora_dataset.mask_tr
val_mask = cora_dataset.mask_va
graph = cora_dataset.graphs[0] # zero since it's just one graph inside, there could be multiple for other datasets
features = graph.x
adj = graph.a
labels = graph.y
Hope this is helpful!
@@AlexMuresan Thank you so much man!
🙏🙏
Where can I find notebook of the colab exercise?
anybody?
Please correct me if I'm wrong, Petar, but in the tutorial, it looks like during training we are including the full graph (including test nodes) in the node-pooling step? This looks like information leakage--is there some reason I'm missing why it's considered allowed here?
This is correct, and it is only allowed under the "transductive" learning regime. In this regime, you're given a static graph, and you need to 'spread labels' to all other nodes. Conversely, in 'inductive' learning you are not allowed access to test nodes at training time.
Naturally, the transductive regime is much easier, as you can use a lot of methods that exploit the properties of the graph structure provided. In inductive learning, instead, your method needs to in principle be capable of generalising to arbitrary, unseen, structures at test time.
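To make the transductive setup concrete, here is a tiny illustrative sketch (assumed variable names, not the talk's Colab code): message passing runs over the full adjacency, so test-node *features* do influence representations, but the loss only reads *labels* of nodes selected by the training mask.

```python
# Transductive training sketch: full-graph aggregation, masked loss.
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
X = np.array([[1.0], [2.0], [3.0]])         # features of ALL nodes, test included
train_mask = np.array([True, False, False]) # only node 0 is a training node

H = (A + np.eye(3)) @ X                      # aggregation sees the whole graph
labels = np.array([[6.0], [0.0], [0.0]])     # test labels deliberately wrong
loss = ((H - labels) ** 2)[train_mask].mean()  # ...and never touch the loss
```

Node 0 aggregates 1 + 2 + 3 = 6, matching its training label, so the loss is zero even though the (wrong) test labels would make the unmasked loss large; that is exactly why this is label-safe but feature-transductive.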
Really nice presentation!
A question regarding the colab: Anyone else having the problem that the validation accuracy stays at around 13%?
This ... is unfortunate. I made a typo (mask = tf.reduce_mean(mask) instead of mask /= tf.reduce_mean(mask)) which I literally noticed after hitting send. Now it works.
Strong last name dude (just started video, was my first impression 😂)
What are Inductive problems?
Can you share colab notebook
Typo at 17:35: j should iterate over the neighborhood N_i of node i.
Love from Taichung city, Taiwan 🇹🇼
A lovely city in an island nation 🇹🇼
Is anyone else getting accuracies higher than 1? (I know something's wrong but I can't figure it out)
Could I ask why mask should be divided by mean? Thanks
I think it is to prevent the model from overfitting to nodes with a larger number of edges.
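One common reading of the mask /= reduce_mean(mask) line: rescaling the 0/1 mask by its mean makes an average over all nodes equal the average over just the masked nodes, so the loss magnitude doesn't depend on how few nodes are labelled. A small numpy sketch (names are illustrative):

```python
# Why divide the mask by its mean: after rescaling, a mean over ALL
# nodes equals the mean over only the masked nodes.
import numpy as np

per_node_loss = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([1.0, 1.0, 0.0, 0.0])  # only the first two nodes count

mask_scaled = mask / mask.mean()       # here: [2, 2, 0, 0]
masked_mean = (per_node_loss * mask_scaled).mean()
direct_mean = per_node_loss[mask.astype(bool)].mean()
```

Both quantities come out to 1.5 here, whereas averaging the raw masked losses over all four nodes would shrink the loss as the labelled set gets smaller.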
I am trying to follow the example but I get the following error:
AttributeError: module 'spektral.datasets.citation' has no attribute 'load_data'
Does anyone know why this is happening? I can only see load_binary in the attributes list.
Did you fix the issue?
@@sanketjoshi8387 I have not found out what the issue is. It might be something to do with some upgrades to Python, NP or Spektral ... I am hoping someone can help
@@dennisash7221 check the version of spektral; he is using 0.6.2, so try using that
# this should do it in Spektral Version 1.0.6
# I've used the same variable names, but haven't gone through the rest of the colab tutorial as yet
from spektral.datasets.citation import Cora
dataset = Cora()
graph = dataset[0]
adj, features, labels = graph.a, graph.x, graph.y
train_mask, val_mask, test_mask = dataset.mask_tr, dataset.mask_va, dataset.mask_te
@@DanielBoyles awesome it seems to work, I will try to run the rest of the NB later but looks like this did the trick.
Your source code, please?
Great video, but it doesn't work with spektral 1.2.0.
To save downgrading, this can be used:
```
cora = spektral.datasets.citation.Cora()
train_mask = cora.mask_tr
val_mask = cora.mask_va
test_mask = cora.mask_te
graph = cora.read()[0]
adj = cora.a
features = graph.x
labels = graph.y
```
Thanks for posting this. It's a time saver!
inferring soft adjacency -- what does that even mean?
27:35
Total nodes = 2708 nodes
Train = 140 nodes
Valid = 500 nodes
Test = 1000 nodes
Where did the remaining 1068 nodes go?
They're still there -- their labels just aren't used for anything (training or eval) in this particular node split.
If anyone gets the "TypeError: sparse matrix length is ambiguous; use getnnz() or shape[0]" error at the matmul, use adj.todense() while calling the train_cora() method.
Hello sir, please can I have your email? I need you to explain how to represent my optimisation problem as a GNN.