The best style transfer tutorial ever !
Can't tell you how happy I am ❤️❤️❤️❤️❤️❤️❤️❤️... it really means a lot to me, thank you thank you so so much ❤️❤️❤️❤️ Just thanks for listening to me... I am really, really happy... you always support me... you are my inspiration... thank you sooooo much. # your most excited and favourite subscriber ❤️❤️
Appreciate your support 👊
@@AladdinPersson thanks man❤️
Loved the explanation part!
Can you please make some videos on CycleGAN and Pix2Pix? Only you are able to explain everything in minimal, clear words.
I appreciate the kind words 🙏 I will look into it :)
Good explanation and unique content as usual
Glad you liked it :)
Awesome videos. Really like your way of explanation
Have you tried saving the model? And one more question: do I need a new model for each style, or can one model learn different styles even when it's trained on only one style?
Hi, I don't think you can freeze the parameters of a module by calling its eval() function. I think eval() just changes some modules' behavior, such as dropout.
Yeah, you're right about that. Did I say it would freeze the parameters in the video? I'm actually wondering if I made a mistake here; it still seems to work, but perhaps I should've iterated through the layers of the model and set requires_grad = False. Need to do some experiments on this, I think.
EDIT: Considering the way we're defining the optimizer, optim.Adam([generated], lr=learning_rate), i.e. only with the generated image, it's not going to change any of the weights of the model, so it's correct.
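A quick sketch of the point in that EDIT: because only the generated image is handed to the optimizer, the network's weights can't move, even after a backward pass. (A toy linear layer stands in for VGG here; none of these names come from the actual video code.)

```python
import torch

# Toy stand-in for the frozen feature extractor (VGG in the video).
model = torch.nn.Linear(4, 4)
for p in model.parameters():
    p.requires_grad = False  # freeze the network weights explicitly

# The "generated image" is the only tensor given to the optimizer.
generated = torch.rand(4, requires_grad=True)
optimizer = torch.optim.Adam([generated], lr=0.1)

before = model.weight.clone()
loss = model(generated).pow(2).sum()  # placeholder loss
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The network weights are untouched; only `generated` was updated.
assert torch.equal(model.weight, before)
```

Even without the requires_grad = False loop, Adam only steps over the parameter list it was constructed with, which here is just [generated].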
Again an AMAZING Video
Question:
If the model doesn't change, and the style image doesn't change, why do we compute the Gram matrix of the style features from scratch at every iteration? Moreover, there's no reason to recompute the style features / original features from scratch either, right?
Thanks in advance!
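For what it's worth, the optimization this question suggests can be sketched like this: compute the style Gram matrices once before the loop and reuse them every iteration. (The feature tensors here are random stand-ins for the frozen VGG activations, and gram_matrix is a small illustrative helper, not code from the video.)

```python
import torch

def gram_matrix(feat):
    # feat: (channels, height, width) -> (channels, channels)
    c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t()

# Stand-ins for the fixed style features at three layers;
# in the real script these would come from the frozen VGG network.
style_features = [torch.rand(8, 16, 16) for _ in range(3)]

# Compute the style Gram matrices once, before the loop...
style_grams = [gram_matrix(f) for f in style_features]

# ...then reuse them every iteration instead of recomputing.
for step in range(2):
    gen_features = [torch.rand(8, 16, 16) for _ in range(3)]  # placeholder
    style_loss = sum(
        (gram_matrix(g) - sg).pow(2).mean()
        for g, sg in zip(gen_features, style_grams)
    )
```

Only the generated image's features change between iterations, so those (and their Gram matrices) are the only things that genuinely need recomputing.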
Great and helpful ❤️❤️❤️
Thanks so much :)
Upload more on segmentation and detection, like Mask R-CNN and DeepLab, and also about GANs (Pix2Pix, CycleGAN)... hope we get them soon 😃
Hey. Would it be possible for you to upload tutorials on the object detection algorithms out there? Your explanation is so clear and concise. Thanks a lot for all the help.
Yes I've got a few videos that I want to do but I will most definitely do videos on object detection :)
Amazing, thank you for the tutorial! I just referred to the paper, and your code really made it stick in my mind. One small issue: the code is very, very slow (I'm running on a T4 GPU on Colab). Is there a way to optimize it?
Hi, one thing I didn't understand: where is the value of "generated" updated?
I don't understand how the pixel values of the generated image change and converge.
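A minimal sketch of where that update happens, under the same setup as the video (an image tensor with requires_grad=True handed to the optimizer; the loss below is a placeholder, not the actual content+style loss):

```python
import torch

# The generated image starts as a copy of the content image (or noise)
# and is marked trainable, so autograd tracks gradients w.r.t. its pixels.
content = torch.rand(3, 8, 8)
generated = content.clone().requires_grad_(True)

optimizer = torch.optim.Adam([generated], lr=0.05)

before = generated.detach().clone()
loss = (generated - 0.5).pow(2).mean()  # placeholder for content + style loss
optimizer.zero_grad()
loss.backward()
optimizer.step()  # this is where the pixel values actually change

assert not torch.equal(generated.detach(), before)
```

So the "training loop" is ordinary gradient descent, just with the image pixels playing the role normally played by model weights.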
Thank you for your great video❤
I got an error like this: not enough values to unpack (expected 4, got 3).
We have 3 values: width, height, and channels.
So what should I do about batch_size?
I would be so happy if you could guide me on this ❤
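One common cause of that error: the loaded image tensor has shape (channels, height, width) with no batch dimension, while the code unpacks four values. A hedged sketch of the usual fix, adding a batch dimension of 1 with unsqueeze (the shapes here are just examples):

```python
import torch

img = torch.rand(3, 64, 64)  # (channels, height, width), no batch dim

# VGG-style networks expect (batch, channels, height, width),
# so add a batch dimension of size 1 before unpacking four values:
img = img.unsqueeze(0)
batch_size, channels, height, width = img.shape
```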
Great work sir
Thank you :)
Thanks for the great tutorial! Just one doubt: I always thought the optimizer works only on the model. When you passed [generated] instead of model.parameters(), does that mean it will update the generated image rather than the model weights?
Thanks
It actually works on whatever you choose to be the parameters; it just so happens we almost always want to change the model parameters, but in this rare case we don't. And yes, you're absolutely right, it will only update the generated image!
@@AladdinPersson Thanks!
@@AladdinPersson can you please answer my question ?)
Hey! How do I save this model?
Thanks for the video btw, I enjoyed coding it :)
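Worth noting: since the network weights never change in this setup, there isn't really a trained model to save; the output of the whole run is the generated image. A small sketch of saving it (the filenames are just examples):

```python
import torch

# VGG's weights never change here, so the result worth keeping
# is the generated image tensor itself.
generated = torch.rand(1, 3, 64, 64)  # stand-in for the optimized image

# Detach before saving so the file doesn't carry autograd history.
torch.save(generated.detach(), "generated.pt")

restored = torch.load("generated.pt")
```

If you want a viewable image file instead of a raw tensor, torchvision.utils.save_image(generated, "generated.png") handles the conversion for you.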
i liked the fancy math intro
Can you make an image captioning model in PyTorch using word embeddings and small datasets like Flickr8k or Flickr30k?
In PyTorch, please
Will look into it
Thanks a lot
To run this code, do we need a GPU?
Thanks! :)
Thanks man, I want to know which paper this is from; I want to read it.
Link to paper: arxiv.org/abs/1508.06576
How do I apply NST to a video?
nooooice ❤
Can I know your next plan like Computer Vision/NLP tasks
Learning more about Object Detection atm, so probably will be a vid on that soon :)
Thanks for your reply. Do you have any plans to explain BERT applied to NER? That's also rare content.
Almost fell asleep while listening to your video
Haha well there you go. If you ever have trouble sleeping you know where to find me lol
@@AladdinPersson I was almost falling asleep but woke up after listening to your video
Upload a video on a BERT model or a GPT-3-like model
pooing XD
haha i dont get it :P
@@AladdinPersson you wrote "pooing" instead of "pooling" somewhere 🙂
yeah i was like poing poing poing!!!!