Temporal Graph Networks (TGN) | GNN Paper Explained
- Published Jul 30, 2024
- ❤️ Become The AI Epiphany Patreon ❤️ ► / theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
A deep dive into the temporal graph networks paper.
You'll learn about:
✔️ What are dynamic graphs?
✔️ How to get a vectorized representation of time
✔️ All the nitty-gritty details behind the paper
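The "vectorized representation of time" bullet refers to the functional time encoding TGN borrows from TGAT: a scalar time gap is mapped to a vector of sinusoids with learned frequencies. A minimal NumPy sketch (the `omega`/`phase` values here are illustrative stand-ins for parameters that would be learned during training):

```python
import numpy as np

def time_encoding(delta_t, omega, phase):
    """Map a scalar time gap to a vector of sinusoids.
    In the actual model, omega and phase are learned parameters;
    here they are fixed for illustration."""
    return np.cos(delta_t * omega + phase)

# Hypothetical frequencies spanning several time scales.
dim = 8
omega = 1.0 / 10.0 ** np.linspace(0, 4, dim)
phase = np.zeros(dim)

v = time_encoding(3.5, omega, phase)  # an 8-dimensional "time vector"
```

A nice property of this encoding is that a zero time gap always maps to the all-ones vector (cos of zero), and nearby time gaps map to nearby vectors, which is what lets attention weight recent interactions differently from old ones.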
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
✅ arxiv.org/abs/2006.10637
✅ Chris Olah on LSTMs: colah.github.io/posts/2015-08...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 Dynamic graphs
03:00 Suboptimal strategies
05:30 Terminology, temporal neighborhood
07:30 High-level overview of the system
08:35 We need to go deeper
13:30 Using temporal information to sample
14:10 Information leakage and the solution
16:55 Main modules explained
21:20 Memory staleness problem
24:00 Temporal graph attention
26:00 Vector representation of time
29:15 Batch size tradeoff
31:00 Results and ablation studies
33:55 Recap of the system
36:55 Some confusing parts
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► / theaiepiphany
One-time donation:
www.paypal.com/paypalme/theai...
Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💡 The AI Epiphany is a channel dedicated to simplifying the field of AI through creative visualizations and, in general, a stronger focus on geometric and visual intuition rather than algebraic and numerical "intuition".
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL
LinkedIn ► / aleksagordic
Twitter ► / gordic_aleksa
Instagram ► / aiepiphany
Facebook ► / aiepiphany
👨👩👧👦 JOIN OUR DISCORD COMMUNITY:
Discord ► / discord
📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:
Substack ► aiepiphany.substack.com/
💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:
GitHub ► github.com/gordicaleksa
📚 FOLLOW ME ON MEDIUM:
Medium ► / gordicaleksa
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
#temporalgraphnetworks #dynamicgraphs #graphml
Thank you!
Can you implement TGN?
Really nice video! What's the interpretation of inductive and transductive in the benchmark part?
Check out my GAT jupyter notebook I've explained the difference there: github.com/gordicaleksa/pytorch-GAT/blob/main/The%20Annotated%20GAT%20(Cora).ipynb
Need some background on NLP, but definitely coming back here in the near future...
Yep, unfortunately it's a universal problem: knowledge dependencies, whether in maths/engineering or in teaching in general.
@TheAIEpiphany Sure, I was too focused on computer vision, YOLOs, and tracking stuff, but there's a whole world out there... Started watching some lessons from Stanford CS224n, and reading this book so far: www.amazon.com/Transformers-Natural-Language-Processing-architectures-ebook/dp/B08S977X8K
Will definitely watch your videos too to consolidate the subject; already subscribed here and followed you on Medium, as you bring a lot of cool stuff... Thanks for blessing the community with your knowledge!
So many NN architectures to get to grips with ... !!!
That's true! Haha. It can get overwhelming. I'm not there yet either, but step by step. 😅
TGN makes use of node memory information, node features, edge features, temporal information and last (but definitely not least) topological information.
Will you folks stop, please? Hahah
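The comment above lists the five ingredients TGN combines: node memory, node features, edge features, temporal information, and topology. A toy NumPy sketch of how a node embedding could draw on all five (every name, shape, and the mean aggregation are illustrative simplifications; the paper uses multi-head temporal attention and a learned time encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy dimensionality for all vectors

# Toy stand-ins for per-node state (names are illustrative, not the paper's code):
node_memory   = {0: rng.normal(size=D), 1: rng.normal(size=D)}
node_features = {0: rng.normal(size=D), 1: rng.normal(size=D)}

def embed(node, neighbors, edge_feats, delta_ts, W):
    """Aggregate over temporal neighbors (the topology): each message
    concatenates the neighbor's memory + features, the edge features,
    and the raw time gap, then a linear map mixes them. TGN replaces
    this mean aggregation with graph attention."""
    msgs = []
    for nbr, ef, dt in zip(neighbors, edge_feats, delta_ts):
        h = np.concatenate([node_memory[nbr] + node_features[nbr], ef, [dt]])
        msgs.append(W @ h)
    return node_memory[node] + node_features[node] + np.mean(msgs, axis=0)

W = rng.normal(size=(D, 2 * D + 1))  # mixes (memory+features, edge, time gap)
z0 = embed(0, neighbors=[1], edge_feats=[rng.normal(size=D)], delta_ts=[2.0], W=W)
```

The key design point the sketch preserves: the embedding is not read straight from memory but recomputed from the temporal neighborhood, which is what mitigates the memory-staleness problem discussed at 21:20.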