Early Stopping. The Most Popular Regularization Technique In Machine Learning.

  • Published Sep 25, 2024
  • Train a model for too long, and it will stop generalizing appropriately. Don't train it long enough, and it won't learn.
    That's a critical tradeoff when building a machine learning model, and finding the perfect number of iterations is essential to achieving the results we expect.
    Early stopping is one of the most popular regularization techniques to train machine learning models. It's both easy to implement and very effective.
    🔔 Subscribe for more stories: www.youtube.co...
    📚 My 3 favorite Machine Learning books:
    • Deep Learning With Python, Second Edition - amzn.to/3xA3bVI
    • Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow - amzn.to/3BOX3LP
    • Machine Learning with PyTorch and Scikit-Learn - amzn.to/3f7dAC8
    Twitter: / svpino
    Disclaimer: Some of the links included in this description are affiliate links where I'll earn a small commission if you purchase something. There's no cost to you.
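The patience-based idea the description alludes to, stopping once the validation loss stops improving, can be sketched in a few lines of plain Python. Everything here is illustrative (the losses are simulated, not from any real training run):

```python
# Minimal early-stopping sketch: stop when validation loss hasn't
# improved for `patience` consecutive epochs.

def early_stopping_loop(val_losses, patience=3):
    """Return the epoch at which training would stop."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch  # stop here; the model started overfitting
    return len(val_losses) - 1  # trained to the end

# Simulated validation curve: improves, then degrades (overfitting).
curve = [0.90, 0.70, 0.55, 0.50, 0.52, 0.56, 0.61, 0.67]
print(early_stopping_loop(curve, patience=3))  # stops at epoch 6
```

Real frameworks wrap the same logic in a callback (for example, a patience parameter on an early-stopping hook), but the mechanism is no more than this counter.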

Comments • 54

  • @paulallen1597 · 2 years ago · +14

    Excellent high-level explanation of this topic. 10/10. Thank you for your hard work!

  • @limotto9452 · 2 months ago · +1

You sound like a hero of the whole Machine Learning realm. Thank you very much for the video, sir.

  • @11aniketkumar · 1 year ago · +1

    Wow, your teaching skills are excellent

  • @emresdance · 2 years ago · +3

    You've got yourself a subscriber brother, great edit, clarity.

    • @underfitted · 2 years ago · +1

      Thanks for the sub!

  • @a7med7x7 · 8 months ago · +1

    I just discovered one of the best channels

  • @karlbooklover · 2 years ago · +1

    implemented early stopping auto-saving best model for os-cnn, great explanation! I love your content

    • @underfitted · 2 years ago

      Glad it was helpful!
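The "auto-save the best model" idea from this thread amounts to keeping a copy of the parameters every time the validation loss improves, and restoring that copy when training stops. A plain-Python sketch (all names are illustrative stand-ins for a real training loop; frameworks offer equivalents such as checkpoint callbacks):

```python
import copy

def train_with_best_checkpoint(val_losses, params_per_epoch, patience=2):
    """Track the best parameters seen so far and return them at the end.

    `val_losses` and `params_per_epoch` stand in for a real training
    loop; the names are illustrative, not from any specific library.
    """
    best_loss, best_params, stale = float("inf"), None, 0
    for loss, params in zip(val_losses, params_per_epoch):
        if loss < best_loss:
            best_loss = loss
            best_params = copy.deepcopy(params)  # "auto-save" the best model
            stale = 0
        else:
            stale += 1
            if stale >= patience:
                break  # early stop; best_params is already saved
    return best_params, best_loss

params, loss = train_with_best_checkpoint(
    [0.8, 0.5, 0.6, 0.7], [{"w": 1}, {"w": 2}, {"w": 3}, {"w": 4}]
)
print(params, loss)  # {'w': 2} 0.5
```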

  • @VinayaWani · a month ago

    great video! thanks.

  • @alienm.nunezrivero2996 · 2 years ago

That was awesome! My teacher, my mentor, you are the person I've had the opportunity to learn from the most in my whole career. Thank you for sharing stuff like this, and also for the time we spent working together in the past. I noticed you are now doing what you love the most... just keep teaching.

    • @underfitted · 2 years ago

      Yo! What’s up! Thanks for the comment! Love ya, man!

  • @srishtigupta5507 · 1 year ago

One of the most interesting videos to learn from. Thanks!

  • @erfanelmtalab1615 · 1 year ago

Supreme editing! Can't wait to see your channel grow...

  • @PritishMishra · 2 years ago · +1

    Awesome Explanation and The Thumbnail is lit 🔥

  • @whilstblower901 · 1 year ago

Sir, your explanation is excellent. Please make videos on regularization for deep learning: parameter norm penalties and norm penalties as constrained optimization.

  • @graysadler4085 · 1 year ago

    This is exactly what I’m building! I’m creating variable training sets to see which training set size has best performance.

  • @ЕгорАбросимов-л2о · 1 year ago

    Great stuff! Looking forward to hearing more insights!

  • @karimmerchaoui9736 · 1 year ago

Just what I was looking for! How about doing a full machine learning course, simplifying concepts with the same approach you used in this video?

  • @alwaleedalattas8384 · 2 years ago

    Really like the way you explained it, thanks a lot

    • @underfitted · 2 years ago

      Glad it was helpful!

  • @nedafiroz514 · 1 year ago

I want to say one thing: every time I watch your videos, I get inspired to work on the suggestions and improve my model. ❤ The best explanation ever. Watching it over several times.

  • @roshanaryal310 · 2 years ago · +1

    Great explanation as always, Santiago 💯💯🔥

  • @varunahlawat9013 · 2 years ago · +1

    OMG superbbbbbbbbbbbbb, today itself I came across this confusion (I've just started ML, today started with linear regression, so it's not always linear ahh satisfaction :))

    • @underfitted · 2 years ago · +1

      Glad it was helpful!

  • @dimasveliz6745 · 2 years ago

    nice analogy

  • @abdullahyaser9073 · 2 years ago · +1

    Fantastic explanation 👏

  • @Naeem2460 · 10 months ago

    Amazingly creative explanation, new sub ❤

  • @thomaswijgerse723 · 10 months ago

    Early stopping is kinda frowned upon right? Since it does regularization and training at the same time.
    I think the preferred methods are L1, L2 and dropout iirc?
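For context on that comparison: L1/L2 and dropout regularize by changing the loss or the architecture, while early stopping regularizes by limiting optimization, and the techniques combine freely rather than excluding one another. An L2 penalty, for instance, is just an extra term added to the loss. A minimal sketch (the `lam` coefficient and flat weight list are illustrative simplifications; real frameworks apply this per layer, often as weight decay):

```python
def l2_penalized_loss(mse, weights, lam=0.01):
    """Plain L2 (ridge) regularization: add lam * sum(w^2) to the base loss."""
    return mse + lam * sum(w * w for w in weights)

# Base MSE of 0.25, three weights, penalty coefficient 0.1.
print(l2_penalized_loss(0.25, [1.0, -2.0, 0.5], lam=0.1))
```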

  • @ThiagoSilvaOfficial · 1 year ago

    Nice video, thanks

  • @nicholasmarshall5775 · 1 year ago

    Great video, thanks! At 05:33, you mention the requirement of a "performance metric" - isn't this just the loss function the model is being optimized for?

    • @underfitted · 1 year ago

      It depends. Sometimes you need something business-specific. For example, what’s the impact of the model in real life?

  • @M1911Original · 2 years ago

    I wish you could provide a technical walkthrough alongside the theoretical

  • @pandabear_77 · 1 year ago

Sir, I had a question. What would be the difference between num_boost_round and early_stopping_rounds, since both are available as parameters in xgb.train? I am a little lost as to the difference between the two.
Any clarification is appreciated.
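The two parameters play different roles: `num_boost_round` is the hard cap on how many boosting rounds (trees) are built, while `early_stopping_rounds` is a patience that can stop training earlier when the evaluation metric stalls. A library-free sketch of that relationship (simulated metrics and illustrative names, not XGBoost internals):

```python
def boosting_rounds_run(eval_metric_per_round, num_boost_round, early_stopping_rounds):
    """How many rounds actually run: `num_boost_round` is the hard cap,
    `early_stopping_rounds` is the patience on the eval metric."""
    best, stale, rounds = float("inf"), 0, 0
    for metric in eval_metric_per_round[:num_boost_round]:
        rounds += 1
        if metric < best:
            best, stale = metric, 0  # metric improved; reset patience
        else:
            stale += 1
            if stale >= early_stopping_rounds:
                break  # metric stalled; stop before hitting the cap

    return rounds

# Metric improves for 3 rounds, then stalls; patience of 3 ends it early.
metrics = [0.40, 0.30, 0.28, 0.29, 0.31, 0.33, 0.35]
print(boosting_rounds_run(metrics, num_boost_round=100, early_stopping_rounds=3))  # prints 6
```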

  • @aayushpatil7514 · 3 months ago · +1

0:50 isn't this model overfitting to the data?

  • @shavilyarajput5477 · 2 years ago

As far as I know, Santiago, epochs are a neural-network thing, right? How can I use early stopping in ML? I know it might be a noob question, but I didn't really get it.

    • @underfitted · 1 year ago

      Any algorithm that relies on an iterative approach can benefit from early stopping. But yes, we primarily use it with neural networks.
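As a sketch of that point, here is early stopping applied outside neural networks: plain gradient descent fitting y ≈ w·x, stopped once the held-out error stops improving. The data and patience value are toy choices, illustrative throughout:

```python
# Early stopping applied to plain gradient descent (no neural network):
# fit y = w * x on a train split, stop when held-out error worsens.

train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
val = [(4.0, 8.0), (5.0, 9.9)]

def val_error(w):
    return sum((w * x - y) ** 2 for x, y in val) / len(val)

w, lr = 0.0, 0.01
best, best_w, stale = float("inf"), 0.0, 0
for step in range(1000):
    # Gradient of the training MSE with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= lr * grad
    err = val_error(w)
    if err < best:
        best, best_w, stale = err, w, 0
    else:
        stale += 1
        if stale >= 5:  # patience: 5 non-improving steps
            break

print(best_w)  # close to the w that minimizes the held-out error
```

The training-set optimum here overshoots the held-out optimum slightly, so without the stop, further iterations would keep lowering training error while held-out error creeps up, which is exactly the tradeoff the video describes.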

  • @stephenpaek9175 · 2 years ago · +2

    Amazing content, please keep going Santiago

  • @TheInevitableHulk · 2 years ago

    You could tie the deceleration of the validation loss to the learn rate after a certain threshold.

  • @tzvi7989 · 2 years ago

    Great video. Guess you're still limited by the input data though - especially in the omics fields

  • @kozaTG · 5 months ago

    i hate how he tricks you into watching another video and you can't ignore because of how good the one you're currently watching is

  • @headshock1111 · 2 years ago

    Wait was this supposed to be an allegory for self improvement or what

  • @vinaykamath6628 · 2 years ago

    Is it just me or does this guy sound like PewDiePie

  • @StephenGillie · 2 years ago

    Can an AI be trained to be better at training AI? You're a machine learning expert with a ton of manual work - why not get machine learning to do it for you?

  • @MadgulkarAdityaManoj · 7 months ago

    poor show