Learning about StyleGAN for my university course and this helped me understand the paper a lot better, thank you!
Thanks a lot. Comments like this are what keep me going :)
I don't mean to be so off-topic, but does anyone know a trick to get back into an Instagram account..?
I was stupid and lost the password. I would love any help you can offer me.
What a wonderful explanation! Please do post more videos like this:)
Thank you! Sure will do. Comments like this keep me going :)
At around 03:50, you say they pass the noise vector at 4 locations. No they don't. The generator network shown is not complete; there can be more than 4 locations where the noise vector is provided as input.
Thanks for pointing that out. Let me check. Unfortunately I can't edit the video, but I'll keep it in mind for next time. :)
@AIBites that would be great
Of course you can pass it at more than 4 locations. If you stack more layers you could, in theory, control even more granular features. So instead of getting stuck on the specifics, appreciate the fact that this gentleman is taking time out to simplify a concept that is hard to get from just reading the paper (and also, maybe no need to show off?)
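To make the "more than 4 locations" point concrete: in StyleGAN, a fresh single-channel noise image is scaled by learned per-channel weights and added after every convolution, so each resolution block gets its own noise inputs and a full 1024x1024 generator ends up with far more than 4 injection points. A minimal PyTorch sketch (module name and shapes are my own, for illustration only):

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # one learned scaling factor per feature-map channel
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        b, _, h, w = x.shape
        noise = torch.randn(b, 1, h, w, device=x.device)  # fresh noise every call
        return x + self.weight * noise

# e.g. one noise input after each convolution in a resolution block;
# stacking more blocks adds more injection points
block = nn.Sequential(nn.Conv2d(512, 512, 3, padding=1), NoiseInjection(512))
out = block(torch.randn(2, 512, 8, 8))
```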
Nice explanation
Thank you very much! It's encouraging and motivates me to do more videos like this.
Very well explained....
thanks a lot !!!
amazing explanation
Can you explain the affine transformation?
An affine transformation is one of the augmentations or variations we can apply to a given image. Please see here for a general understanding of it: homepages.inf.ed.ac.uk/rbf/HIPR2/affine.htm
In the paper they represent A as a neural network (NN), which is a learned affine transform network, meaning it has been trained to affine transform a given input. The advantage of using a NN instead of a simple mathematical function is that the NN can generate any number of variations of the same image, whereas a fixed mathematical formula cannot. I hope that clarifies :)
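Just to add a small sketch of how that "A" block appears in the StyleGAN paper itself: A is a learned affine transformation of the intermediate latent w (a fully connected layer) that produces the per-channel scale and bias consumed by AdaIN. Something along these lines in PyTorch, with made-up dimensions:

```python
import torch
import torch.nn as nn

class LearnedAffine(nn.Module):
    def __init__(self, w_dim, channels):
        super().__init__()
        self.fc = nn.Linear(w_dim, channels * 2)  # outputs [scale, bias] per channel

    def forward(self, x, w):
        style = self.fc(w).view(-1, 2, x.size(1), 1, 1)
        scale, bias = style[:, 0], style[:, 1]
        # AdaIN: normalise the feature maps, then re-style them with (scale, bias)
        x = nn.functional.instance_norm(x)
        return (1 + scale) * x + bias

affine = LearnedAffine(w_dim=512, channels=256)
x = torch.randn(4, 256, 32, 32)   # feature maps
w = torch.randn(4, 512)           # intermediate latent
out = affine(x, w)                # same shape as x, now "styled" by w
```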
Good explanation. Can you please make a video on iterGAN and self-attention GAN?
Sure will look into more GAN stuff in the coming weeks!
Can you do a video on U-Net please?
Thanks for your request. Sure will give it a go in the coming weeks.
awesome educational video!
Thank you so much for the encouraging words!
Can you talk about: "Analyzing and Improving the Image Quality of StyleGAN"
Yup will take a look at it.
Can I have the slides?
Unfortunately I couldn't find them, as it's been quite some time since I made this video. Sorry!