Early Stopping. The Most Popular Regularization Technique In Machine Learning.
- Published on Sep 25, 2024
- Train a model for too long, and it will stop generalizing appropriately. Don't train it long enough, and it won't learn.
That's a critical tradeoff when building a machine learning model, and finding the perfect number of iterations is essential to achieving the results we expect.
Early stopping is one of the most popular regularization techniques to train machine learning models. It's both easy to implement and very effective.
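The idea can be sketched in a few lines of plain Python. This is a generic early-stopping loop, not any particular library's implementation; `train_step` and `val_loss_fn` are hypothetical callables standing in for a real training pass and validation evaluation:

```python
def train_with_early_stopping(train_step, val_loss_fn, max_epochs=100, patience=5):
    """Stop when validation loss hasn't improved for `patience` consecutive
    epochs, and remember the best epoch seen so far."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch in range(1, max_epochs + 1):
        train_step(epoch)              # one pass over the training data
        loss = val_loss_fn(epoch)      # evaluate on a held-out validation set
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
            epochs_without_improvement = 0   # here you'd checkpoint the model
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                  # validation loss stopped improving
    return best_epoch, best_loss

# Simulated validation curve: improves until epoch 10, then degrades (overfitting).
losses = [1.0 / e if e <= 10 else 0.1 + 0.01 * (e - 10) for e in range(1, 101)]
best_epoch, best_loss = train_with_early_stopping(
    lambda e: None, lambda e: losses[e - 1], max_epochs=100, patience=5
)
print(best_epoch, best_loss)  # stops at the epoch where validation loss bottomed out
```

With a patience of 5, the loop halts five epochs after the validation loss stops improving, and the best checkpoint (epoch 10 in the simulation) is the one you'd keep.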
🔔 Subscribe for more stories: www.youtube.co...
📚 My 3 favorite Machine Learning books:
• Deep Learning With Python, Second Edition - amzn.to/3xA3bVI
• Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - amzn.to/3BOX3LP
• Machine Learning with PyTorch and Scikit-Learn - amzn.to/3f7dAC8
Twitter: @svpino
Disclaimer: Some of the links included in this description are affiliate links where I'll earn a small commission if you purchase something. There's no cost to you.
Excellent high-level explanation of this topic. 10/10. Thank you for your hard work!
Glad you liked it!
You sound like a hero of the whole Machine Learning realm. Thank you very much for the video, sir.
Wow, your teaching skills are excellent
Thanks!
You've got yourself a subscriber brother, great edit, clarity.
Thanks for the sub!
I just discovered one of the best channels
implemented early stopping auto-saving best model for os-cnn, great explanation! I love your content
Glad it was helpful!
great video! thanks.
That was awesome! My teacher, my mentor, you are the person I've had the opportunity to learn from the most in my whole career. Thank you for sharing stuff like this, and also for the time we spent working together in the past. I noticed you are now doing what you love the most... just keep teaching.
Yo! What’s up! Thanks for the comment! Love ya, man!
One of the most interesting videos to learn from. Thanks!
Supreme edit! Can't wait to see your channel grow...
Awesome Explanation and The Thumbnail is lit 🔥
Sir, your explanation is excellent. Please make videos on regularization for deep learning: parameter norm penalties, and norm penalties as constrained optimization.
This is exactly what I'm building! I'm creating variable training sets to see which training set size has the best performance.
Sounds great!
Great stuff! Looking forward to hearing more insights!
Just what I was looking for! How about doing a full machine learning course, simplifying concepts with the same approach you used in this video?
Working on it.
Really like the way you explained it, thanks a lot
Glad it was helpful!
I want to say one thing: every time I watch your videos, I get inspired to work on the suggestions and improve my model. ❤ The best explanation ever. Watching it several times over.
Great explanation as always, Santiago 💯💯🔥
Appreciate it!
OMG, superb! Today itself I ran into this exact confusion (I've just started ML; today I started with linear regression, so it's not always linear... ahh, satisfaction :))
Glad it was helpful!
nice analogy
Thanks :)
Fantastic explanation 👏
Glad you liked it
Amazingly creative explanation, new sub ❤
Early stopping is kinda frowned upon, right? Since it does regularization and training at the same time.
I think the preferred methods are L1, L2, and dropout, IIRC?
Nice video, thanks
Great video, thanks! At 05:33, you mention the requirement of a "performance metric" - isn't this just the loss function the model is being optimized for?
It depends. Sometimes you need something business-specific. For example, what’s the impact of the model in real life?
I wish you could provide a technical walkthrough alongside the theoretical one.
Sir, I had a question: what would be the difference between the number of boost rounds vs. early stopping rounds, since both are available as parameters in xgb.train? I'm a little lost as to the difference between the two.
Any clarification on this is appreciated.
0:50 isn't this model overfitting the data?
Kind of…
As far as I know, Santiago, epochs are a neural network concept, right? So how can I use early stopping in other ML algorithms? I know it might be a noob question, but I didn't really get it.
Any algorithm that relies on an iterative approach can benefit from early stopping. But yes, we primarily use it with neural networks.
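As one example outside deep learning, scikit-learn's `SGDClassifier` (an iterative linear model, not a neural network) supports early stopping directly; the dataset here is synthetic, for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Synthetic dataset just to have something to fit.
X, y = make_classification(n_samples=1000, random_state=0)

clf = SGDClassifier(
    max_iter=1000,            # upper bound on training epochs
    early_stopping=True,      # hold out part of the training data internally
    validation_fraction=0.1,  # fraction used as the validation set
    n_iter_no_change=5,       # patience: stop after 5 epochs without improvement
    random_state=0,
)
clf.fit(X, y)
print(clf.n_iter_)  # epochs actually run -- usually well under max_iter
```

The same pattern applies to gradient boosting, perceptrons, and anything else trained iteratively: cap the iterations, watch a validation score, and stop when it stalls.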
Amazing content, please keep going Santiago
Thanks, will do!
You could tie the deceleration of the validation loss to the learning rate after a certain threshold.
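The comment above describes something like a "reduce learning rate on plateau" schedule: instead of (or before) stopping outright, shrink the learning rate when the validation loss stalls. A minimal sketch in plain Python; the function name and default thresholds are illustrative, not any specific library's API:

```python
def reduce_lr_on_plateau(lr, val_losses, patience=3, factor=0.5, min_lr=1e-6):
    """Return a reduced learning rate if the best validation loss in the
    last `patience` epochs is no better than the best loss before them."""
    if len(val_losses) > patience and (
        min(val_losses[-patience:]) >= min(val_losses[:-patience])
    ):
        return max(lr * factor, min_lr)  # plateau detected: decay, with a floor
    return lr                            # still improving: keep the current rate

# Plateau after epoch 3: the rate gets halved.
stalled = reduce_lr_on_plateau(0.1, [1.0, 0.5, 0.4, 0.41, 0.42, 0.43])
# Still improving: the rate is unchanged.
improving = reduce_lr_on_plateau(0.1, [1.0, 0.5, 0.4, 0.3])
print(stalled, improving)
```

In practice this is often combined with early stopping: decay the learning rate a few times first, and stop training only if the loss still refuses to improve.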
Right on
Great video. Guess you're still limited by the input data, though, especially in the omics fields.
I hate how he tricks you into watching another video, and you can't ignore it because of how good the one you're currently watching is.
Wait was this supposed to be an allegory for self improvement or what
Is it just me or does this guy sound like PewDiePie
Can an AI be trained to be better at training AI? You're a machine learning expert with a ton of manual work - why not get machine learning to do it for you?
poor show