Fantastic Video!! Thank you so much!
Thank you! I'm glad the content is useful!
Great video! At 20:14 you give the mean of x as the result, but I think it should be the mean of y. I was also confused when you say this term cancels out and this one too when lambda = infinity; strictly it's an approximation that neglects those two terms, since we know the coefficients only tend toward zero asymptotically. Also, at 14:02, i = 1,...,n samples --> these are observations within a given sample!
Howdy Patite, Thank you! You are correct. I will add this to my notes and update the next version! I appreciate it, Michael
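A quick way to see the lambda-to-infinity behavior numerically: the following is a minimal sketch (my own illustration, not code from the video) using scikit-learn's Ridge. As alpha (lambda) grows, the coefficients shrink toward zero and the unpenalized intercept, and hence the prediction, approaches the mean of y.

# Minimal sketch: ridge coefficients shrink toward 0 and the
# prediction collapses to roughly mean(y) as lambda (alpha) grows.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 + X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=100)

for alpha in [0.1, 10, 1e4, 1e8]:
    model = Ridge(alpha=alpha).fit(X, y)   # intercept is not penalized
    print(alpha, np.round(model.coef_, 4), round(model.intercept_, 4), round(y.mean(), 4))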
Nice job explaining the intuition behind the math, but where can I find a rigorous proof of the bias-variance tradeoff in ridge regression? I mean the equations of the bias-variance tradeoff where lambda appears. Thanks again.
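For reference, the standard closed-form route (a sketch under the usual linear-model assumptions y = X beta + epsilon, E[epsilon] = 0, Var(epsilon) = sigma^2 I; this derivation is not in the video itself) starts from the ridge estimator and reads off bias and variance with lambda present:

\[
\hat{\beta}_\lambda = (X^\top X + \lambda I)^{-1} X^\top y,
\qquad
\mathbb{E}[\hat{\beta}_\lambda] = (X^\top X + \lambda I)^{-1} X^\top X\,\beta
\]
\[
\mathrm{Bias}(\hat{\beta}_\lambda) = \mathbb{E}[\hat{\beta}_\lambda] - \beta = -\lambda\,(X^\top X + \lambda I)^{-1}\beta,
\qquad
\mathrm{Var}(\hat{\beta}_\lambda) = \sigma^2\,(X^\top X + \lambda I)^{-1} X^\top X\,(X^\top X + \lambda I)^{-1}
\]

At lambda = 0 this reduces to OLS (zero bias, variance sigma^2 (X^T X)^{-1}); as lambda grows, the bias term grows toward -beta while the variance shrinks toward zero, which is exactly where the tradeoff comes from.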
Great video, but at 20.4, I think that as lambda increases, model complexity decreases, so variance decreases and bias increases.
Howdy Rohit, I made the change for the next version. It was a mix-up on the slide but correct elsewhere. Thank you for the feedback, Michael
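To back up that intuition, here is a small simulation sketch (my own illustration, not from the video): for a fixed design and test point, re-fitting ridge on many noisy training sets shows the squared bias growing and the variance shrinking as lambda increases.

# Minimal simulation sketch: estimate squared bias and variance of ridge
# predictions at a fixed test point as lambda (alpha) grows.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
beta, sigma = np.array([1.5, -2.0, 0.5]), 1.0
X = rng.normal(size=(50, 3))          # fixed design matrix
x0 = np.array([1.0, 1.0, 1.0])        # fixed test point
true_f0 = x0 @ beta                   # noiseless target at x0

for alpha in [0.01, 1, 10, 100, 1000]:
    preds = []
    for _ in range(500):              # repeated training sets, new noise each time
        y = X @ beta + rng.normal(scale=sigma, size=50)
        preds.append(Ridge(alpha=alpha, fit_intercept=False).fit(X, y).predict([x0])[0])
    preds = np.array(preds)
    print(alpha, round((preds.mean() - true_f0) ** 2, 4), round(preds.var(), 4))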
I didn't know Virgil van Dijk was so good at machine learning.