Tutorial 27- Ridge and Lasso Regression Indepth Intuition- Data Science

  • Published on Oct 14, 2024
  • Please join my channel as a member to get additional benefits like Data Science materials, live streams for members, and more
    / @krishnaik06
    #Regularization
    ⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I've been using Kite for a few months and I love it! www.kite.com/g...
    Please also subscribe to my other channel
    / @krishnaikhindi
    Connect with me here:
    Twitter: / krishnaik06
    Facebook: / krishnaik06
    Instagram: / krishnaik06

Comments • 405

  • @hipraneth
    @hipraneth 4 years ago +240

    Lucid explanation, free of cost. Your passion for making the concept crystal clear is very much evident in your eyes... Hats off!!!

  • @shubhamkohli2535
    @shubhamkohli2535 4 years ago +66

    The only person providing this level of knowledge free of cost. Really appreciate it.

  • @yadikishameer9587
    @yadikishameer9587 3 years ago +5

    I had never watched your videos, but after this one I regret ignoring your channel. You are a worthy teacher and data scientist.

  • @aelitata9662
    @aelitata9662 4 years ago +28

    I was struggling to learn this topic and all I knew was y = mx + c. I think this is the clearest one I've watched on YouTube. Thank you so much, and I love your enthusiasm when explaining the confusing parts

  • @iamfavoured9142
    @iamfavoured9142 2 years ago +5

    100 years of blessings for you.
    You just gained a subscriber!

  • @YouKnowMe-123y
    @YouKnowMe-123y 1 year ago +2

    You are helping many ML enthusiasts free of cost... Thank you

  • @tenvillagesahead4192
    @tenvillagesahead4192 3 years ago +3

    Brilliant. I searched all over the net but couldn't find such an easy yet detailed explanation of regularization. Thank you very much! Seriously considering joining the membership

  • @harshstrum
    @harshstrum 4 years ago +30

    Thank you, bhaiya. It feels like every morning when I watch your videos, my career slope increases. Thank you for this explanation.

  • @AnkJyotishAaman
    @AnkJyotishAaman 4 years ago +23

    This guy is legit!!
    Hats off for the explanation!!
    Loved it sir, thanks

  • @gerardogutierrez4911
    @gerardogutierrez4911 4 years ago +38

    If you pause the video and just watch his facial and body movements, he looks like he's trying his best to convince you to stay with him during a breakup. Then you turn on the audio and it's like he's yelling at you to get you to understand something.
    Clearly, this man is passionate about teaching ridge regression and knows a lot. I think it's easier to follow when he's checking up on you by saying "you need to understand this", repeating words, and using his voice to emphasize concepts. I wish he could explain other things to me besides data science.

    • @TheMrIndiankid
      @TheMrIndiankid 4 years ago +2

      He will explain the meaning of life to you too

    • @MrBemnet1
      @MrBemnet1 4 years ago +3

      My next project is counting head shakes in a YouTube video.

    • @tanmay2771999
      @tanmay2771999 3 years ago

      @MrBemnet1 Ngl, that actually sounds interesting.

  • @somnathpatnaik2277
    @somnathpatnaik2277 2 years ago

    I have tried courses from 4 very reputed organizations, all claiming faculty from IIT and other high-profile names. My feedback: being from IIT doesn't mean you are a good teacher; for teaching, you need passion like yours.
    When I watch your lectures I enjoy learning. Thank you

  • @ganeshrao405
    @ganeshrao405 3 years ago +6

    Thank you so much Krish; your videos on linear regression + ridge + lasso cleared up my concepts.

  • @HammadMalik
    @HammadMalik 4 years ago +4

    Thanks Krish for explaining the intuition behind Ridge and Lasso regression. Very helpful.

  • @koderr100
    @koderr100 2 years ago +1

    Now I finally got the key L1 vs L2 difference. Thanks a lot!

  • @marijatosic217
    @marijatosic217 4 years ago +6

    Great video! I appreciate how hard he works to help us really understand the material!

  • @TheR4Z0R996
    @TheR4Z0R996 4 years ago +34

    Keep up the good work, blessings from Italy my friend :)

  • @sahilzele2142
    @sahilzele2142 4 years ago +11

    so the basic idea is (see the sketch below):
    1) a steeper slope leads to overfitting @8:16 (what he basically means is that the overfitted line we have has a steeper slope, which doesn't quite justify his statement, on the contrary)
    2) adding lambda*(slope)^2 increases the value of the cost function for the overfitted line, which leads to a reduction of the slopes, or 'thetas', or m's (they are all the same thing) @10:03
    3) now that the cost function value for the overfitted line is no longer the minimum, another best-fit line is selected by reducing the slopes, which again shows up in the added lambda*(slope)^2 term, just this time the slope added is smaller @13:45
    4) doing this overcomes overfitting, as the new best-fit line has less variance (it generalizes better to test data) @14:10; the bias may be a bit more, since it was 0 for the overfitted line and will be slightly higher for the new line
    5) lambda can also be called a scaling factor or inflation rate used to control the regularization. As for the question of what happens if we have an overfitted line with a less steep slope: I think we'll then find a best-fit line with an even less steep slope (maybe close to slope ~ 0, but != 0) @16:30
    and tadaa!!!!
    we have reduced overfitting successfully!!
    please correct me if anything's wrong
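
    A minimal sketch of points 2)-4) in Python, assuming scikit-learn; the tiny dataset and the alpha values (scikit-learn's alpha plays the role of lambda) are made up for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(20, 1))
    y = 3.0 * X.ravel() + rng.normal(0, 2, size=20)  # noisy line with true slope 3

    # plain least squares: no penalty on the slope
    print("OLS slope:", LinearRegression().fit(X, y).coef_[0])

    # ridge adds lambda * slope^2 to the cost, so a larger alpha shrinks the slope toward 0
    for alpha in [0.1, 10, 1000]:
        print(f"Ridge slope (alpha={alpha}):", Ridge(alpha=alpha).fit(X, y).coef_[0])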

    • @faizanzahid490
      @faizanzahid490 4 years ago

      I have the same queries, bro.

    • @supervickeyy1521
      @supervickeyy1521 4 years ago

      For the 1st point: what if the test data follows the same slope as the train data? In that case there won't be overfitting, correct?

    • @angshumansarma2836
      @angshumansarma2836 4 years ago

      Just remember the 4th point: the main goal of regularization is to generalize better on the test dataset while accepting some errors on the training dataset

    • @chetankumarnaik9293
      @chetankumarnaik9293 4 years ago +1

      First of all, no meaningful linear regression can be built with just two data points. He is not accounting for degrees of freedom.

    • @Kmrabhinav569
      @Kmrabhinav569 4 years ago +1

      The basic idea is to use lambda (i.e. the regularization parameter) in the penalty term lambda*(slope) to push the coefficients down. Here slope refers to the various values of m: if y = m1x1 + m2x2 and so on, we have many values of m(i). We try to adjust lambda so that the extra m(i) stop mattering, and hence we are able to remove them, i.e. remove the extra features from the model. One of the major causes of overfitting is the addition of extra features, so by getting rid of these features we can curb the problem of overfitting. Hope this helps.

  • @shaurabhsinha4121
    @shaurabhsinha4121 3 years ago

    Krish, but a line with the best generalized fit, e.g. y = Mx + c, can have a high M, as the actual data points can be crowded close to the y-axis. So a steep slope can't be the criterion.

  • @Sumta555
    @Sumta555 4 years ago +5

    18:35 How do the features get removed when |slope| is very, very small? Hats off for this fantastic clarity on the topic.

    • @dhirajkumarsahu999
      @dhirajkumarsahu999 4 years ago +2

      The regression works in such a way that the coefficients of the less important inputs keep decreasing on every iteration.
      In ridge regression, the unimportant coefficients decrease asymptotically: they come very close to zero but never become exactly zero (look at an exponential decay curve for reference).
      That is not the case with lasso regression, which can set them exactly to zero.
      Hope this helps.
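
      A small sketch of this in Python, assuming scikit-learn; the synthetic data and the alpha values are invented, with only the first of five features actually relevant:

      import numpy as np
      from sklearn.linear_model import Ridge, Lasso

      rng = np.random.default_rng(42)
      X = rng.normal(size=(100, 5))
      y = 4.0 * X[:, 0] + rng.normal(0, 0.5, size=100)  # only feature 0 matters

      # ridge shrinks the four irrelevant coefficients close to zero, but never exactly to zero
      print("Ridge:", np.round(Ridge(alpha=10.0).fit(X, y).coef_, 4))
      # lasso can set them exactly to zero, effectively removing those features
      print("Lasso:", np.round(Lasso(alpha=0.5).fit(X, y).coef_, 4))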

  • @indrasenareddyadulla8490
    @indrasenareddyadulla8490 4 years ago +2

    Sir, you mentioned in your lecture that this concept is complicated, but I never felt it was. You explained it excellently. 👌👌👌

  • @auroshisray9140
    @auroshisray9140 4 years ago +3

    Hats off... grateful for valuable content at zero cost

  • @ChandanBehera-jp2me
    @ChandanBehera-jp2me 2 years ago +1

    I found your free videos better than some paid tutorials... thanks for your work

  • @aish_waryaaa
    @aish_waryaaa 2 years ago +4

    Krish Sir, you are literally saving my master's: up-to-date explanations, and real effort to help us understand. Thank you so much, Sir. 😇🥰

  • @BoyClassicall
    @BoyClassicall 4 years ago +2

    Concept well explained. I've watched a lot of videos on ridge regression, but this is the most well explained: it shows mathematically the effect of lambda on the slope.

  • @ajithsdevadiga1603
    @ajithsdevadiga1603 8 months ago

    Thank you so much for this wonderful explanation;
    I truly appreciate your efforts in helping the data science community.

  • @ArunKumar-yb2jn
    @ArunKumar-yb2jn 2 years ago +1

    At 8:15 you say "a steep slope will always lead to an overfitting case, why? I will just tell you now..." But I couldn't find where you explained this later on.

  • @mumtahinhabib4314
    @mumtahinhabib4314 4 years ago

    This is where I found the best explanation of ridge regression, after searching through a lot of videos and documentation. Thank you, sir

  • @Bedivine777angelprayer
    @Bedivine777angelprayer 1 year ago

    Thanks. Are there articles I can refer to, or any blogs you recommend? Thanks again, great content

  • @adijambhulkar1742
    @adijambhulkar1742 2 years ago

    Hats off... what a way to explain, man... cleared all doubts

  • @ibrahimibrahim6735
    @ibrahimibrahim6735 3 years ago

    Thanks, Krish.
    I want to correct one thing here: the motivation behind the penalty is not to change the slope; it is to reduce the model's complexity.
    For example, consider the following two models:
    f1: x + y + z + 2*x^2 + 5y^2 + z^2 = 10
    f2: 2*x^2 + 5y^2 + z^2 = 15
    f1 is more complicated than f2. Clearly, a complicated model has a higher chance of overfitting.
    By increasing lambda (the complexity factor), we are more likely to end up with a simpler model.
    Another example:
    f1: x + 2y + 10z + 5h + 30g = 100
    f2: 10z + 30g = 120
    f2 is simpler than f1. If both models have the same performance on the training data, we would rather use f2 as our model, because it is simpler, and a simpler model has less chance of overfitting.

  • @fatriantobong
    @fatriantobong 1 year ago

    I think you need to emphasize the low error on the training data versus the high error on the test data. The problem with overfitting is that the low error on the training data comes at the expense of high error on the test data: when the model is exposed to new, unseen data (the test data), it struggles to generalize, because it has essentially memorized the noise and intricacies of the training data. This results in a significant difference between the model's predictions and the true values on the test data.

  • @rishu4225
    @rishu4225 4 months ago

    Thanks, the enthusiasm with which you teach also carries over to us. 🥰

  • @JEEVANKUMAR-hf4ex
    @JEEVANKUMAR-hf4ex 3 years ago

    Good explanation without touching any complex math derivations.

  • @bhuvaraga
    @bhuvaraga 2 years ago

    Loved your energy, sir, and your conviction to explain and make it clear to your students. I know it is hard to look at the camera and talk; you nailed it. This video really helped me understand the overall concept. My two cents: keep the camera focused on the whiteboard. I think it is autofocusing between you and the whiteboard, and maybe that is why the brightness also keeps changing.

  • @anirbandey8999
    @anirbandey8999 2 months ago

    Very good video to understand the intuition behind L1, L2

  • @datafuturelab_ssb4433
    @datafuturelab_ssb4433 2 years ago +1

    Best explanation of lasso and ridge regression ever on YouTube... Thanks Krish... You nailed it...

  • @143balug
    @143balug 4 years ago +1

    Hi Krish,
    You are building our confidence in data science with these clear explanations

  • @MuhammadAhmad-bx2rw
    @MuhammadAhmad-bx2rw 3 years ago +1

    Extraordinarily talented, Sir

  • @dollysiharath4205
    @dollysiharath4205 1 year ago +1

    You're the best trainer!! Thank you!

  • @mithunmiranda
    @mithunmiranda 1 year ago +1

    I wish I could like his videos multiple times. You are a great teacher, Kind Sir.

  • @ZubairAzamRawalakot
    @ZubairAzamRawalakot 1 year ago

    Very informative lecture. You explained it in great detail. Thanks

  • @vishalaaa1
    @vishalaaa1 4 years ago

    This Naik is excellent. He is solving everyone's problems.

  • @gandhalijoshi9242
    @gandhalijoshi9242 3 years ago

    Very nice explanation. I have started watching your videos, and your teaching style is very nice.
    A very nice YouTube channel for understanding data science. Hats off!!

  • @kanavsharma9562
    @kanavsharma9562 3 years ago

    I have watched more than 8 videos and 2-3 articles but didn't get how the lambda value affects the slope; your video explains it best. Thanks

  • @tsrnihar
    @tsrnihar 2 years ago +12

    Small correction: for lasso regression, it is the sum of the absolute values of the coefficients, multiplied by the regularization parameter. You wrote it as the absolute value of the sum of the coefficients, multiplied by the regularization parameter.
    It is lambda*(|m1| + |m2| + ...) and not lambda*|m1 + m2 + ...|
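
    For reference, here are the two penalized cost functions in their standard textbook form (LaTeX notation; \hat{y}_i is the prediction for point i and the m_j are the coefficients):

    J_{\text{ridge}} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} m_j^2

    J_{\text{lasso}} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} |m_j|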

  • @sincerelysilvia
    @sincerelysilvia 2 years ago

    This is the clearest and best explanation of this topic on YouTube. I can't express how thankful I am for this video; I finally understand the concept

  • @belllamoisiere8877
    @belllamoisiere8877 2 years ago +1

    Hello from México. Thank you for your tutorials; they are as if one of my classmates were explaining concepts to me in simple words. A suggestion: please include a short tutorial on ablation of deep learning models.

  • @abdulnafihkt4245
    @abdulnafihkt4245 2 years ago +1

    Best best best bestttttt class...hats off maaan

  • @loganwalker454
    @loganwalker454 3 years ago

    Regularization was a very abstruse and knotty topic. However, after watching this video, it is a piece of cake.
    Thank you, Krish

  • @moe45673
    @moe45673 1 year ago

    Thank you! I thought this was a great explanation (as someone who has listened to a bunch of different ones trying to nail my understanding of this)

  • @Zizou_2014
    @Zizou_2014 4 years ago +1

    Brilliantly done! Thanks Krish

  • @JoseAntonio-gu2fx
    @JoseAntonio-gu2fx 4 years ago +3

    Thank you very much for sharing. The effort to clarify the concepts, which is the starting point for solving problems, is much appreciated. Greetings from Spain!

    • @sridhar7488
      @sridhar7488 3 years ago

      Yes, he's a great guy... I also love watching his videos!

  • @nehasrivastava8927
    @nehasrivastava8927 4 years ago

    Best tutorials for machine learning with in-depth intuition... I think there is no tutorial on YouTube like this... Thank you, sir.

  • @juozapasjurksa1400
    @juozapasjurksa1400 2 years ago +2

    Your explanations are sooo clear!

  • @aravindvasudev7921
    @aravindvasudev7921 1 year ago

    Thank you. Now I have a clear idea of both these regression techniques.

  • @mohammedfaisal6714
    @mohammedfaisal6714 4 years ago +1

    Thanks a lot for your Support

  • @cyborg69420
    @cyborg69420 1 year ago

    just wanted to say that I absolutely loved the video

  • @vladimirkirichenko1972
    @vladimirkirichenko1972 1 year ago

    This man has a gift.

  • @BipinYadav-wn1pm
    @BipinYadav-wn1pm 1 year ago

    After going through tons of videos, I finally found the best one, thanks!!

  • @rahul281981
    @rahul281981 3 years ago +1

    Very nicely explained; thank God I found your posts on YouTube while searching for this stuff 👍

    • @rayennenounou7065
      @rayennenounou7065 3 years ago

      I have a master's 2 thesis (mémoire) on lasso regression. I need more information about lasso regression, but in French. Can you help me?

  • @TheOntheskies
    @TheOntheskies 3 years ago +1

    Thank you, for the crystal clear explanation. Now I will remember Ridge and Lasso.

  • @askpioneer
    @askpioneer 2 years ago

    Well explained, Krish. Thank you for creating this. Great work

  • @fratcetinkaya8538
    @fratcetinkaya8538 2 years ago

    This is where I finally understood that damn issue. I appreciate it so much, thanks my dear friend :)

  • @sakshargupta875
    @sakshargupta875 3 years ago

    Maybe you could use a different colour to highlight the text before or after editing. That would be helpful and easier to grasp

  • @MrLoker121
    @MrLoker121 3 years ago

    Good video for beginners; a couple of pointers though:
    1. The lasso penalty should be |m1| + |m2| + |m3| + ..., not |m1 + m2 + m3 + ...|.
    2. The explanation of why coefficients go to zero under L1 regularization but not under L2 is missing. It could probably be expanded on theoretically.

  • @Amir-English
    @Amir-English 6 months ago

    You made it so simple! Thank you.

  • @prashanths4455
    @prashanths4455 4 years ago +2

    Krish An excellent explanation. Thank you so much for this wonderful in-depth intuition.

  • @adinathshelke5827
    @adinathshelke5827 7 months ago

    Perfect explanation. Was wandering around for the whole day, and at the end of the day, found this one.

  • @aseemjain007
    @aseemjain007 4 months ago

    Brilliantly explained!! Thank you!!

  • @kanhataak1269
    @kanhataak1269 4 years ago

    After watching this, the lecture is not complicated... good teaching, sir

  • @Markmania510
    @Markmania510 11 months ago

    What a great video, thank you so much for posting. Can anyone tell me what happens if you're overfitting with a smaller slope, i.e. you need to increase the slope to generalise the model more?
    Is this when you need to make the lambda parameter negative?

  • @MohsinKhan-rv7jj
    @MohsinKhan-rv7jj 2 years ago

    This kind of explanation is truly inspirational. I am truly overfitted with knowledge after seeing your video. ❤

    • @abhishekkumar465
      @abhishekkumar465 2 years ago +1

      Reduce the learning rate; that may help you, as per ridge regression :P

  • @SahanPradeepthaThilakaratne
      @SahanPradeepthaThilakaratne 4 months ago

    Your explanations are superbbb!

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago

    Great explanation, Krish. I think I am understanding a little bit about L1 and L2 regression. Thanks

  • @yitbarekmirete6098
    @yitbarekmirete6098 2 years ago

    You are awesome, better than our professors at explaining such complex topics.

  • @VishalPatel-cd1hq
    @VishalPatel-cd1hq 4 years ago

    Hi Krish, here we are adding the regularization term to our loss function, and this term is always positive, since lambda is positive and we either square the slope or take its absolute value. So how can we say it is penalizing? In fact, it is adding some positive value to the loss.

  • @dianafarhat9479
    @dianafarhat9479 7 months ago

    Amazing explanation, thank you!

  • @ankitchoudhary5585
    @ankitchoudhary5585 3 years ago

    @Krish Naik
    These penalties make the slopes smaller, not zero. The slopes are just the coefficients of the features (x1, x2, x3, ...), and we are trying to reduce the impact of features that have high coefficient values relative to the other coefficients. (In the case of multicollinearity, features that are highly correlated tend to confuse gradient descent while it minimizes the error function, which results in high coefficient values for the correlated features; those high values then wrongly tell the model that these are the most important deciding features for the target column. There are many reasons why some coefficients end up much higher than the others.)
    So by adding the new term (lambda × the sum of squared coefficients) to the error function (which gradient descent will eventually minimize), we are telling gradient descent to take care of the coefficients that are very high, even if we have to lose some training accuracy (we want a generalized model, not a model that shows the best training accuracy but is a poor predictor for new, unseen data).
    statisticsbyjim.com/regression/multicollinearity-in-regression-analysis/
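
    A small illustration of the multicollinearity point in Python, assuming scikit-learn; the two nearly duplicate features and the alpha value are invented for the demo:

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(7)
    x1 = rng.normal(size=200)
    x2 = x1 + rng.normal(0, 0.01, size=200)  # almost an exact copy of x1
    X = np.column_stack([x1, x2])
    y = 2.0 * x1 + rng.normal(0, 0.5, size=200)

    # OLS splits the true weight of 2 between the near-duplicates erratically
    # (large, offsetting coefficients are typical here)
    print("OLS coefs:  ", LinearRegression().fit(X, y).coef_)
    # ridge keeps both coefficients small and stable, roughly 1 and 1
    print("Ridge coefs:", Ridge(alpha=1.0).fit(X, y).coef_)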

  • @heplaysguitar1090
    @heplaysguitar1090 3 years ago

    Just one word, Fantastic.

  • @maheshurkude4007
    @maheshurkude4007 4 years ago +1

    thanks for explaining Buddy!

  • @robertasampong
    @robertasampong 1 year ago

    Absolutely excellent explanation!

  • @litonpaul6133
    @litonpaul6133 3 years ago +2

    Krish, it is a great video for understanding ridge and lasso. You have shown the example of test data lying below the best fit line; in that case we need to reduce the slope of the best fit line, and that is fine. But if the test data is populated above the best fit line, what will we do? Will we reduce the slope or increase the slope of the best fit line to overcome the overfitting or underfitting?

    • @52nevil36
      @52nevil36 1 year ago

      Same question... can you explain it to me if you know?

  • @haziq7885
    @haziq7885 3 years ago +3

    Thanks Krish! Just a question: why are we penalising high-slope functions? Are there any situations where the model would benefit from a higher slope (compared to the linear regression slope)?

    • @shinichi7357
      @shinichi7357 2 years ago

      Hello there, mate. The penalty incurred on high slopes is there because a steep slope usually means the model is trying to overfit the dataset: it tries to make a line that passes through every point in the training set. But when you provide the model with the test set, the points may or may not lie close to that line, so performance suffers. Hence you have to generalize more: observe general patterns rather than getting 100% results by just overfitting the training set.

  • @yamika.
    @yamika. 2 years ago

    thank you for this! finally understood the topic

  • @MsGeetha123
    @MsGeetha123 3 years ago

    Excellent video!!! Thanks for a very good explanation.

  • @316geek
    @316geek 3 years ago

    You make it look so easy; kudos to you, Krish!!!

  • @NEDIKIMED
    @NEDIKIMED 2 years ago

    Thanks sir, this is when I am learning; hoping I will be good in the 3-6 months to come, at 2 hours a day

  • @nehabalani7290
    @nehabalani7290 3 years ago +1

    Too good, and short, for people who are clear on basic modeling concepts

  • @abhishekchatterjee9503
    @abhishekchatterjee9503 4 years ago +1

    You did a great job, sir... It helped me a lot in understanding this concept. In 20 minutes I understood the basics of this concept. Thank you 💯💯

  • @ling6701
    @ling6701 2 years ago

    Thanks. Very well explained (I needed this). I have a question though: for ridge regression, we're trying to penalize steep slopes. OK, but why? Why don't we try to penalize flat slopes instead (imagine the slope is flat, and the real data points are above the best fit line, so that there is still overfitting)? I mean, don't we want to penalize any type of difference between the actual data points and the best fit line? Why only penalize the difference in one direction?

  • @taruchitgoyal3735
    @taruchitgoyal3735 8 months ago

    Hello Sir,
    As I understand it, we apply regularization [ridge and lasso] to reduce features or reduce the contribution of features.
    As you shared, we take the sum of squared residuals and then add the regularization term to it.
    Thus we get a cost value equal to the sum of squared residuals plus the regularization penalty, and the computation outputs a numerical result.
    At this point, how do we perform feature selection, or control the contribution of each input variable in feature space?
    Thank you

  • @pjanjanam
    @pjanjanam 3 years ago

    Hi Krish, just curious: are there any general guidelines on when to use which technique? Should we prefer lasso regression when feature selection is an objective? Or should we experiment with both and take the best?
    Thanks for a crystal clear explanation of this topic; you made it so easy to visualize the concept of regularization.

  • @aravindmolugu3308
    @aravindmolugu3308 3 years ago

    Hi Krish,
    Great video, thanks a lot. However, I have one small concern: throughout the 20 minutes I was interrupted by close to 10 ad breaks, which disturbs the flow. Please see if you can control the number of adverts; it is annoying. But the content and your way of explaining are what kept me hooked.

  • @lavanyasenthilkumar4814
    @lavanyasenthilkumar4814 2 years ago +1

    Krish, thanks for the clear explanation. I have a doubt: if the test data points are below the best fit line we selected earlier, this is fine. But what if the test data points are above the best fit line? In that case, applying ridge regression would still try to reduce the slope, and we may not end up with the best fit line. Can I get help with this scenario? Thanks in advance!

    • @achilles2289
      @achilles2289 1 year ago

      I have the same doubt. I hope he explains this

  • @litonpaul6133
    @litonpaul6133 4 years ago

    Hi Krish, here you explained overfitting when the slope is steep, but what if the slope of the regression line is less steep?

  • @ENGMESkandaSVaidya
    @ENGMESkandaSVaidya 3 years ago

    Lasso regression does feature selection, which is an extra thing compared to ridge regression

  • @dineshpramanik2571
    @dineshpramanik2571 4 years ago +1

    Excellent explanation sir...thanks

  • @sameerpandey5561
    @sameerpandey5561 3 years ago

    Superbly explained

  • @saitcanbaskol9897
    @saitcanbaskol9897 2 years ago

    Amazing explanations.

  • @creatorsayanb
    @creatorsayanb 3 years ago +1

    11:34 there is a symbolic error. It is > instead of

    • @fusionarun
      @fusionarun 2 years ago

      Yes, that's right. Even I was wondering the same