Graphormer - Do Transformers Really Perform Bad for Graph Representation? | Paper Explained
- Published on 31 Jul 2024
- ❤️ Become a Patron of The AI Epiphany ❤️ ► / theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Paper: Do Transformers Really Perform Bad for Graph Representation?
In this video, I cover Graphormer, a new transformer model that achieved SOTA results on the OGB Large-Scale Challenge benchmark.
It achieves that by introducing three novel structural encodings (biases) into the classic transformer encoder: centrality encoding, spatial encoding, and edge encoding.
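For intuition, here is a minimal PyTorch sketch (my simplification, not the authors' code) of how the three encodings enter a single attention head: degree embeddings added to the node features (centrality), a learnable bias indexed by shortest-path distance added to the attention logits (spatial), and a precomputed per-pair bias term (edge encoding; the paper averages edge-feature dot products along the shortest path):

    import torch
    import torch.nn as nn

    class BiasedSelfAttention(nn.Module):
        # Simplified single-head self-attention with Graphormer-style biases.
        def __init__(self, dim, max_degree=64, max_dist=16):
            super().__init__()
            self.q = nn.Linear(dim, dim)
            self.k = nn.Linear(dim, dim)
            self.v = nn.Linear(dim, dim)
            self.degree_emb = nn.Embedding(max_degree, dim)  # centrality encoding
            self.spatial_bias = nn.Embedding(max_dist, 1)    # spatial encoding b_{phi(i,j)}
            self.scale = dim ** -0.5

        def forward(self, x, degrees, spd, edge_bias):
            # x: (N, dim) node features; degrees: (N,) integer node degrees;
            # spd: (N, N) integer shortest-path distances; edge_bias: (N, N) edge term c_ij
            x = x + self.degree_emb(degrees.clamp(max=self.degree_emb.num_embeddings - 1))
            q, k, v = self.q(x), self.k(x), self.v(x)
            logits = (q @ k.transpose(-2, -1)) * self.scale           # (N, N) attention logits
            spd = spd.clamp(max=self.spatial_bias.num_embeddings - 1)
            logits = logits + self.spatial_bias(spd).squeeze(-1)      # spatial bias
            logits = logits + edge_bias                               # edge encoding
            return logits.softmax(dim=-1) @ v

Because the spatial bias lets every node attend to every other node, a single layer already has a global receptive field.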
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✅ Paper: arxiv.org/abs/2106.05234
✅ What can GNNs learn blog: andreasloukas.blog/2019/12/27...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 Key points of the paper
02:15 Graph-level predictions and why we need structural information
05:35 GNN basics
09:40 Centrality encoding explained in depth
14:15 Spatial encoding explained in depth
18:00 Edge encoding explained in depth
22:30 Results on OGB LSC
23:30 Graphormer handles over-smoothing
25:10 Other results and ablation study
28:20 Graphormer is more expressive than WL
30:30 Mean aggregation as a special case of Graphormer
35:00 Sum aggregation as a special case of Graphormer
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATRON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► / theaiepiphany
One-time donation:
www.paypal.com/paypalme/theai...
Much love! ❤️
Huge thank you to these AI Epiphany patrons:
Petar Veličković
Zvonimir Sabljic
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI through creative visualizations and, in general, a stronger focus on geometric and visual intuition rather than algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► / aleksagordic
Twitter ► / gordic_aleksa
Instagram ► / aiepiphany
Facebook ► / aiepiphany
👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY:
Discord ► / discord
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► / gordicaleksa
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#graphormer #graphs #transformers
Thanks for sharing our paper!
Very detailed, and your drawing helped a lot! Thanks!
I would like to ask: when will there be a video explaining the Graphormer code, or do you have any recommended articles explaining the source code?
Thank you!
Do you think Graphormer performs best mainly because of its huge number of parameters? It looks like it has roughly 50-100x more parameters than the other methods...
That could be one of the reasons, but the design itself should be the main factor.
Bro, this is great!
The output of the transformer will be node-level embeddings. Could you help clarify which readout function this paper uses to get a graph-level embedding, which can then be used to classify graphs?
An average or a sum of the node embeddings.
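For anyone reading along, a minimal sketch of such a readout, assuming h holds the (num_nodes, dim) encoder output (the paper itself also adds a special virtual node, [VNode], whose final-layer embedding serves as the graph representation):

    import torch

    def readout(h: torch.Tensor, mode: str = "mean") -> torch.Tensor:
        # Collapse (num_nodes, dim) node embeddings into a (dim,) graph embedding.
        if mode == "mean":
            return h.mean(dim=0)
        if mode == "sum":
            return h.sum(dim=0)
        raise ValueError(f"unknown readout mode: {mode}")

    # graph_emb = readout(h)  # then feed into a linear classification head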
Still waiting for sparse transformer :(
Which one? 😂 Link the paper
@TheAIEpiphany Ah, the only one. The core of DALL-E and GPT: arxiv.org/abs/1904.10509