L1 and L2 Regularization in Machine Learning: Easy Explanation for Data Science Interviews

Comments • 11

  • @AllieZhao
    @AllieZhao 1 year ago +2

    Much clearer than what I learned elsewhere. I also noticed that you slowed the pace of your speaking, which is nice for people to follow.

  • @MinhNguyen-lz1pg
    @MinhNguyen-lz1pg 1 year ago +1

    Man, I tried to find videos and blog posts about this topic and most of them just scratch the surface. Thanks for the deep analysis and comparison!

    • @emma_ding
      @emma_ding 1 year ago

      So glad you found it helpful, Minh! Thanks for watching. 😊

  • @dimasushko9023
    @dimasushko9023 3 months ago +1

    Some details for those who'd like to get really deep into this: regarding the L1 penalty, we can't really choose w2 = 0 and w1 = 1, since the loss consists of two parts: the loss from the diamond shape (the L1 penalty) plus the loss from the ellipse shape (the initial loss function). Since the L1 penalty values for the (w1, w2) pairs (0, 1) and (1, 0) are the same, 1 + 0 = 0 + 1 = 1, we next look at the initial loss function, which also depends on (w1, w2). For w1 = 0 and w2 = 1 (closer to the center point) the loss is lower than for w1 = 1 and w2 = 0 (farther from the center point), as we can see from the contour-line plot. Therefore the optimizer won't go there and will converge on w1 = 0, w2 = 1, and that's it (see the sketch below).
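
    A minimal numeric sketch of that comparison, assuming a hypothetical quadratic loss centered at w* = (0.5, 1.5) and lambda = 1 (both values are illustrative, not from the video):

        import numpy as np

        # Hypothetical unregularized loss: a quadratic bowl centered at w* = (0.5, 1.5).
        # Any center closer to (0, 1) than to (1, 0) reproduces the argument above.
        w_star = np.array([0.5, 1.5])

        def total_loss(w, lam=1.0):
            initial = np.sum((w - w_star) ** 2)   # ellipse-shaped contours
            l1_penalty = lam * np.sum(np.abs(w))  # diamond-shaped contours
            return initial + l1_penalty

        for w in (np.array([0.0, 1.0]), np.array([1.0, 0.0])):
            print(w, total_loss(w))  # (0, 1) -> 1.5, (1, 0) -> 3.5

    Both points pay the same L1 penalty, so the smaller initial loss at (0, 1) decides where the optimizer converges.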

  • @HumbertoMoura
    @HumbertoMoura 1 year ago

    Great explanation, Emma! Have a nice day!

  • @louisforlibertarian
    @louisforlibertarian 1 year ago

    Great vid! It's like a fast recap of a college stat class.

  • @crystalcleargirl07
    @crystalcleargirl07 6 months ago

    Thank you so much for this clear explanation. It has helped me more than the Coursera course.

  • @muhannedalogaidi7877
    @muhannedalogaidi7877 10 months ago

    Hello Emma, I've started switching to AI/ML and noticed your website and courses. Are your training and courses suitable for a beginner? Also, I'm not sure whether you have dedicated courses on statistics and mathematics for AI/ML. Thank you!

  • @emma_ding
    @emma_ding 1 year ago

    Many of you have asked me to share my presentation notes, and now… I have them for you! Download all the PDFs of my Notion pages at www.emmading.com/get-all-my-free-resources. Enjoy!

  • @cosmicfluke3718
    @cosmicfluke3718 1 year ago +1

    How can increasing alpha decrease the weights? Can you please explain? 0.1 is bigger than 0.001. If I have a weight of 0.4, then 0.1 * 0.4 = 0.04, whereas 0.001 * 0.4 = 0.0004, so the smaller the alpha, the smaller the weight, which will be closer to zero, correct? I feel that what you mean by a bigger alpha is an alpha with a bigger negative power. Isn't that right? Can you please clarify?

    • @davidskarbrevik
      @davidskarbrevik 1 year ago +2

      "increasing the alpha would decrease the weight" does not refer to the calculation of alpha * weights, it refers to what happens when you minimize your regularized loss function.