Hi, I am a big fan 😊. I appreciate all the work you did for explaining GNN and AE models so far, I learned a lot from these tutorials. Thank you 👍
This is top notch quality content, thank you!
Really enjoyed the video, great work.
Thanks!
Wow, what a great video! Thank you, it helped me a lot.
Hi! Just wanted to know how this is coming along. Eagerly waiting to learn about the next part of work!
Next part is coming on the weekend :)
Sorry that the upload frequency is not too high!
On the decoder part, if we want to get an adjacency matrix, do we need to fix the number of nodes? Who tells the decoder how many vectors to sample from the latent space?
This architecture is not suitable for this task. It's only for link prediction.
I'll explain more in the next video
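To sketch the node-count question: in a standard (variational) graph autoencoder setup, the decoder doesn't decide the number of nodes at all — it is fixed by how many latent vectors you hand it, one per node. A minimal numpy sketch (all sizes and names here are made up for illustration, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# The number of nodes N must be chosen up front, because the decoder
# receives exactly one latent vector per node.
N, latent_dim = 5, 8

# Sample N node embeddings from the latent space (standard normal prior).
Z = rng.standard_normal((N, latent_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Inner-product decoder: the reconstructed adjacency is sigmoid(Z @ Z.T),
# so its shape is automatically N x N -- the node count comes from Z itself.
A_hat = sigmoid(Z @ Z.T)
print(A_hat.shape)  # (5, 5)
```

So for generation you would have to decide N before sampling (e.g. sample it from the training distribution of graph sizes), which is one reason this plain architecture suits link prediction better than free-form generation.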
Hi sir, again, a great video. Can we learn from a single molecule and generate a new version with a change to a particular property? (My problem is how to handle only one single molecule.)
Hi!
Mhh I guess without a big dataset of molecules there is no way to do this. You need to learn somehow what valid molecules look like.
Then you could condition the generation on the base molecule in order to generate an adjusted molecule. But I've never done this :/
If we want to use an MLP for the decoder part, what would the advantages be over the inner product?
When you use the inner product, it forces the model to create similar embeddings for similar nodes.
The advantage I see of using an MLP is that it can model more complex relationships between two embeddings. The inner product simply assigns a high value for high similarity and a low value for low similarity, which is (in my opinion) a bit more restrictive.
On the other hand, the inner product is more efficient, since you only have to do one multiplication and have fewer parameters.
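The contrast above can be made concrete in a few lines of numpy. This is only an illustrative sketch with untrained, made-up weights: the inner product is parameter-free and symmetric by construction, while an MLP that scores the concatenation [z_i, z_j] has learnable parameters and can represent asymmetric or more complex pairwise relationships:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, hidden = 4, 6, 16

Z = rng.standard_normal((N, d))  # node embeddings

# Inner-product decoder: no parameters, one matrix multiply.
score_ip = Z @ Z.T               # shape (N, N), symmetric by construction

# A small MLP edge decoder (illustrative, untrained weights): scores the
# concatenation [z_i, z_j], so it is not forced to be symmetric in i and j.
W1 = rng.standard_normal((2 * d, hidden)) * 0.1
W2 = rng.standard_normal((hidden, 1)) * 0.1

def mlp_edge_score(zi, zj):
    h = np.maximum(np.concatenate([zi, zj]) @ W1, 0.0)  # ReLU hidden layer
    return (h @ W2)[0]

score_mlp = np.array([[mlp_edge_score(Z[i], Z[j]) for j in range(N)]
                      for i in range(N)])

print(np.allclose(score_ip, score_ip.T))  # True
```

The trade-off is visible directly: the MLP adds 2*d*hidden + hidden parameters and one forward pass per node pair, where the inner product needs a single N x d times d x N multiplication.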
@@DeepFindr Yes. Also, what would be the advantages or disadvantages of building sequence decoder models? I see why the inner product is easy and effective.
Hi, sequence decoders can also make sense, especially if your predictions depend on the previous state. One-shot decoders are typically harder to learn, and sequential prediction often performs better.
But it all depends on the use case :)
With molecule generation I have the feeling that sequential works better somehow (at least from what I've tried so far)
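To illustrate "predictions depend on the previous state": a sequential decoder emits the graph step by step and conditions each edge decision on everything generated so far (the idea behind GraphRNN-style models). A toy numpy sketch — all weights are random and untrained, purely to show the control flow, not a real model:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 4, 8

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

E = rng.standard_normal((N, d))          # per-node inputs (illustrative)
W_state = rng.standard_normal((d, d)) * 0.1

state = np.zeros(d)                      # running summary of the graph so far
adj = np.zeros((N, N), dtype=int)
for t in range(1, N):
    for j in range(t):
        # The edge decision for (t, j) depends on the running state,
        # i.e. on everything that has already been generated.
        p = sigmoid(state @ E[j])
        adj[t, j] = adj[j, t] = int(p > 0.5)
    # Fold the new node and its edges back into the state.
    state = np.tanh(state @ W_state + E[t] + adj[t] @ E)
```

This shows the trade-off from the reply above: each step can react to the partial graph (good for validity constraints in molecules), but generation is inherently serial, unlike a one-shot sigmoid(Z @ Z.T) decoder.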
Hey, your videos are very helpful. Do you have any recommendations for money laundering detection using graph neural networks? Could you also provide an example of money laundering detection with a GNN? It would be a great help, as I am currently working on this topic.
Interesting! Never seen that use case for GNNs so far. I'll note it down but cannot promise as there are many videos on the list ;-) thx for the suggestion :)
@@DeepFindr Thank you for the quick response!
Great video! Thanks :)
About to sleep, will watch later
Haha, sleep well! :P
=) Great Thanks
poor explanation