Thank you very much for this! I am currently doing my undergrad thesis in PyTorch and freaking out. Your explanation is quite clear and helpful.
Keep going ^^
same :D
That was excellent! A great help for me; you described it as clearly and cleanly as possible.
Thank you for the information!
Thanks, nicely explained
Hi, thanks for the video, I appreciate it, but I believe this tutorial was more suited to an intermediate-to-advanced level. I still had many concepts to dig into, and I felt you skipped over many things that were still new to me. Maybe you can make another video where you guide us through the data loading and training processes in more detail. Thanks again
Noted. Thanks for the input Murtaza :)
Great stuff!
Will the same code, with the number of outputs set to 200, work for 200-class classification with similarly good accuracy?
When I train a VGG model with 38 classes and use the summary tool, I get this error: RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[2, 38, 224, 224] to have 3 channels, but got 38 channels instead. How do I solve this error?
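That error usually means the number of classes was used as the channel dimension of the input tensor. VGG's first convolution has a weight of size [64, 3, 3, 3], so it expects 3-channel (RGB) images of shape [batch, 3, H, W]; the class count only belongs in the final classifier layer. A minimal sketch reproducing and fixing the mismatch (using a standalone Conv2d as a stand-in for VGG's first layer):

```python
import torch
import torch.nn as nn

# Stand-in for VGG's first layer: Conv2d(3, 64, kernel_size=3, padding=1),
# i.e. a weight of size [64, 3, 3, 3] that expects 3 input channels (RGB).
first_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)

# Wrong: the class count (38) used as the channel dimension.
bad = torch.randn(2, 38, 224, 224)
try:
    first_conv(bad)          # raises the RuntimeError quoted above
except RuntimeError as e:
    print(e)

# Right: images stay 3-channel; the 38 classes belong in the final
# classifier layer, not in the input tensor.
good = torch.randn(2, 3, 224, 224)
out = first_conv(good)
print(out.shape)  # torch.Size([2, 64, 224, 224])
```

If your data loader really yields [2, 38, 224, 224], check how the batches are built; the 38 likely comes from stacking per-class data along the wrong axis.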
Please tell me how I can build a confusion matrix from this.
I can save this model as usual (using AlexNet) and use it with another model in OpenCV, right?
Hi! What is the difference between freezing the layers and using model.eval()?
Hi. In eval mode you notify all the layers that you are not in training mode, which also has an impact on e.g. dropout and batchnorm layers.
With no_grad (freezing), this is often used during training to avoid computing the gradient for a number of layers.
But indeed the goals of the two look a bit similar: do not compute the gradient.
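To make the distinction concrete, here is a minimal sketch (using a small toy model with dropout as a stand-in) of the three mechanisms: model.eval() changes layer behavior but keeps autograd on, torch.no_grad() turns off gradient tracking, and freezing marks selected parameters as not trainable:

```python
import torch
import torch.nn as nn

# Toy model; any module with dropout/batchnorm shows the eval() effect.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.Linear(4, 2))

# 1) model.eval(): switches layer *behavior* (dropout off, batchnorm uses
#    running stats). Gradients are still computed.
model.eval()
x = torch.randn(1, 4)
out = model(x)
out.sum().backward()        # works: autograd is still active in eval mode

# 2) torch.no_grad(): disables gradient tracking entirely; the output
#    carries no graph, so backward() is impossible.
with torch.no_grad():
    out_ng = model(x)
print(out_ng.requires_grad)  # False

# 3) Freezing: set requires_grad=False on selected parameters so no new
#    gradients are computed for them and the optimizer leaves them alone.
for p in model[0].parameters():
    p.requires_grad = False
```

So eval() is about layer behavior, no_grad() about autograd bookkeeping, and freezing about which parameters stay trainable; in practice the three are often combined.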
@@DennisMadsen Thanks for your answer! I understand. Anyway, computing the gradient does not imply updating the parameters, right?
@@danielac520 Glad it was helpful. And true, the parameters are not updated when the gradients are computed. You would then need to update them with something like: optimizer.step()
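A minimal sketch of that separation, assuming a toy linear model: backward() only fills the .grad fields, and the weights change only once optimizer.step() is called.

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

before = model.weight.detach().clone()

loss = model(torch.randn(2, 3)).sum()
loss.backward()                                  # gradients now sit in .grad ...
unchanged = torch.equal(model.weight, before)    # ... weights untouched: True

optimizer.step()                                 # only now are parameters updated
changed = not torch.equal(model.weight, before)  # True
print(unchanged, changed)
```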
@@DennisMadsen Got it :)
Is there an example of how we can use our own trained models for transfer learning on other images with the Keras library?
nice
Sir, please make a lecture on GANs.
Hereby put on my video list, Muhammad. Thanks a lot for the suggestion!
I am unable to train on my data.