Awesome! I've been waiting for something like this: an image classifier using a pretrained model. Thank you so much, Gabriel.
No problem! :)
Hi Gabriel. I am one of your subscribers and I'm loving it every day! I have a request: can you please also show the predictions on the images after training a model? That would be helpful for beginners like me. Could you also show how we can deploy the models on the web?
Hi Sonu,
I'm glad you're enjoying it!
I will definitely include this in a future video :)
great job
Thank you so much for the great lecture!
When I use early stopping, it only trains for 7-10 epochs; the training loss goes down but the val loss goes up.
If I don't use early stopping, I get 73% test accuracy, but the model has an overfitting problem.
Either way, with or without early stopping, the val loss keeps going up. Please give me some advice, and thank you so much for your time!
I tried changing the learning rate and adding dropout, but it doesn't help.
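For reference, here is a minimal sketch of an early stopping setup, assuming a compiled Keras model and training/validation generators like the ones in the video (the names model, train_generator, and val_generator are placeholders, not taken from the video). It monitors the validation loss, waits a few epochs before stopping, and rolls back to the best weights seen so far:

from tensorflow.keras.callbacks import EarlyStopping

# Stop when val_loss has not improved for `patience` epochs and
# restore the weights from the best epoch seen so far.
early_stop = EarlyStopping(
    monitor="val_loss",
    patience=5,               # give training a few epochs to recover before stopping
    restore_best_weights=True,
)

history = model.fit(
    train_generator,                # placeholder: training data generator
    validation_data=val_generator,  # placeholder: validation data generator
    epochs=50,
    callbacks=[early_stop],
)

A larger patience value lets training run longer before stopping, while restore_best_weights=True means the final model corresponds to the epoch with the lowest validation loss rather than the last epoch trained.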
Really good video! I learn more and more every day by watching your videos; thank you a lot for your efforts. I have one question, though: I didn't understand why we used a pretrained model. If you could clarify that part for me, I would be grateful.
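For what it's worth, the idea is that a pretrained convolutional base has already learned general image features (edges, textures, shapes) from ImageNet, so only a small classifier head needs to be trained on the new dataset. A minimal transfer-learning sketch, assuming TensorFlow/Keras and a MobileNetV2 base (which may differ from the exact model used in the video; num_classes is a placeholder):

from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Pretrained base: ImageNet weights, without the original top classifier.
base = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the learned features

# Small classifier head trained on the new dataset.
num_classes = 5  # placeholder: set to the number of classes in your dataset
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

Freezing the base and training only the head usually gets good accuracy with far less data and training time than training a CNN from scratch.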
Just something to be aware of. You do not have this problem in your code because you created a validation data generator. However, you can also generate validation data within model.fit by setting the parameter validation_split=some value. The hidden danger here comes from this note in the documentation for that parameter:

"Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator or keras.utils.Sequence instance."

Note that the validation data is selected from the end of the training data BEFORE shuffling. So, depending on the structure of the training data, you can end up with validation data that is NOT representative of the training data distribution. Therefore it is essential to shuffle the training data BEFORE you provide it to model.fit.

I discovered this problem the hard way: my training data was sequentially ordered by class, so I ended up with validation data selected from a single class. Again, just an FYI, but it might save you from a headache in the future.
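As an illustration, a minimal sketch of shuffling class-ordered training arrays before handing them to model.fit with validation_split (model, x_train, and y_train are placeholder names, not from the video):

import numpy as np

# Assumption: x_train / y_train are NumPy arrays ordered by class,
# e.g. all class-0 samples first, then class-1, and so on.
perm = np.random.permutation(len(x_train))
x_train, y_train = x_train[perm], y_train[perm]  # shuffle BEFORE model.fit

model.fit(
    x_train,
    y_train,
    epochs=20,
    batch_size=32,
    validation_split=0.2,  # the last 20% of the (now shuffled) arrays is held out
)

With the permutation applied first, the slice Keras holds out from the end of the arrays contains a mix of classes instead of only the last class in file order.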
Thanks for the heads up!
good stuff
Nice tutorial! Keep coming up with more such deep learning datasets!
Very helpful. I myself am trying to put together a capstone with a CNN model in biology.
Can we count those also?