I don't think anybody is teaching linear regression with respect to the SVD on TH-cam right now, which makes this video all the more informative! Loved it, immediately subscribed.
Those videos are the best resource for someone who wants to understand data driven models! Thank you very much for your work from an engineering student!!
I am honestly surprised (I just accidentally discovered this channel) that such a cool resource is not more popular with the TH-cam algorithm.
The lecture is so clear and well-organized! IT IS IMPRESSIVE!!!!
Dear professor, you're a great teacher!
Thank you so much for these videos.
I love these videos!
But in this one you point out the "squared projection error" while showing the segment going from the biased line to the outlier (as in PCA); in linear regression, the residuals should instead be vertical segments.
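For what it's worth, here is a small sketch of that distinction with synthetic data of my own (not the video's example): least squares minimizes vertical residuals, while PCA / total least squares minimizes orthogonal distances, so the two fitted slopes generally differ.

```python
import numpy as np

# Illustrative data (not from the video): one predictor a, one response b
rng = np.random.default_rng(0)
a = np.linspace(0.0, 10.0, 50)
b = 2.5 * a + rng.normal(0.0, 1.0, a.size)

# Least-squares regression b ~ x*a: minimizes the VERTICAL residuals
x_ols = (a @ b) / (a @ a)
vertical_residuals = b - x_ols * a            # measured parallel to the b-axis

# PCA / total least squares: minimizes the ORTHOGONAL distances to the line
P = np.column_stack([a - a.mean(), b - b.mean()])
_, _, Vt = np.linalg.svd(P, full_matrices=False)
slope_pca = Vt[0, 1] / Vt[0, 0]               # first principal direction
orthogonal_distances = P @ Vt[1]              # signed distance along the normal

print(x_ols, slope_pca)                       # the two slopes generally differ
```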
I was looking for copper, but found gold! Boss, excellent as always. Love your way of conveying the material. I hope you will continue presenting more topics on statistics, cause in the multivariate case it can become really intimidating. Best regards from Russia!
Hi Steve, I am a pharmaceutical data analyst, but you're just outstanding
Wow, thanks!
I've been watching all the videos in this chapter and this is the one that got me to cave and purchase the book!! I was so surprised to see that it was so affordable.
Thank you and your team so, so much for the high-quality, accessible information.
Absolutely awesome series, I will finish the whole series today:)
Hope you enjoy it!
Wow! Great video! I really liked your shirt, where is it from?
It's a Patagonia Capilene. My favorite shirt. I almost only wear them.
This is gold, professor!
Besides the very awesome explanation, the book is awesome, and he writes mirrored as if it were nothing 😄
Hello.
In your book DATA DRIVEN SCIENCE & ENGINEERING, page 24, relation (1.26), you express the matrix B. In this relation it should read B = X - X̄, not B = X - B̄ as printed, where X̄ is the matrix of means.
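If I recall the book's convention correctly (each row of X is a measurement and x̄ is the row vector of column means), the intended relation would be:

```latex
\bar{\mathbf{X}} = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} \bar{\mathbf{x}},
\qquad
\mathbf{B} = \mathbf{X} - \bar{\mathbf{X}}
```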
Interesting. In the first lecture of this series, individual faces (i.e. people) were in the columns, but a face was really a column of many pixels. In this lecture, people are in the rows. So each use of SVD is different. And each setup of a data matrix is different.
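A quick way to see that the two setups carry the same information (my own sketch, with made-up dimensions): transposing the data matrix just swaps the roles of U and V in the SVD.

```python
import numpy as np

# Made-up dimensions: 4096 pixels x 36 people (faces in columns, as in lecture 1)
X = np.random.randn(4096, 36)

U, S, Vt = np.linalg.svd(X, full_matrices=False)       # faces in columns
U2, S2, V2t = np.linalg.svd(X.T, full_matrices=False)  # people in rows instead

print(np.allclose(S, S2))                      # identical singular values
print(np.allclose(np.abs(U), np.abs(V2t.T)))   # U of X is V of X^T (up to sign)
```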
Thank you sir,
your courses are awesome!
Excellent explanation! What happens with the y-intercept of the line? Is it b?
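In case it helps, one common way to get an intercept within the same least-squares machinery (a sketch with made-up numbers, not the video's example) is to append a column of ones to the data matrix, so the solver returns both a slope and an intercept:

```python
import numpy as np

# Fit b ~ slope*a + intercept by appending a column of ones (illustrative data)
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.1, 3.9, 6.2, 8.1])

A = np.column_stack([a, np.ones_like(a)])      # [a, 1]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
slope, intercept = coeffs
print(slope, intercept)
```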
In mechanics, an overdetermined system is called statically indeterminate.
Sir, a great teacher you are!
Very nice series... though it has been a while and I might be a bit rusty on my math. But if I recall correctly, there is nowhere an explicit link made between the SVD and least squares. It is explained that the SVD exists, and a theorem says it gives the best approximation in some norm, but I have not seen an explicit link with OLS. It would be nice if that were more explicit in the video series...
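For anyone else looking for that link, here is a minimal sketch of my own (assuming A is overdetermined with full column rank): applying the SVD-based pseudoinverse to b gives exactly the ordinary least-squares solution.

```python
import numpy as np

# Link between SVD and least squares: x = V * inv(Sigma) * U^T * b
# minimizes ||A x - b||_2 (assuming A has full column rank)
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
b = rng.normal(size=100)

U, S, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ np.diag(1.0 / S) @ U.T @ b      # pseudoinverse applied to b

x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)  # ordinary least squares
print(np.allclose(x_svd, x_ols))               # same solution
```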
Thanks
How's it going?
Is he writing in reverse?
Good vid bruh
I am slightly confused: the orthogonal projection of b onto a should minimize the distance between b and its projection, which is ORTHOGONAL to the span of a. If I remember correctly, however, least squares should minimize the VERTICAL distance between the projected point and the original point. I am sure there is something wrong with my assumptions, but maybe someone can point me in the right direction.
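The way I reconcile the two (my own sketch): the projection is orthogonal in R^n, the space with one axis per measurement, and the entries of that residual vector are precisely the vertical misfits at each data point in the 2-D scatter plot, so minimizing one minimizes the other.

```python
import numpy as np

# b is projected onto span(a) in R^n; the residual r = b - x*a is orthogonal to a,
# and its entries are the per-point vertical distances in the (a_i, b_i) plot.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.2, 2.3, 2.8, 4.1])

x = (a @ b) / (a @ a)           # least-squares slope (projection coefficient)
r = b - x * a                   # residual vector

print(np.isclose(a @ r, 0.0))   # orthogonal to a in R^n
print(r)                        # entry i: vertical gap b_i - x*a_i at data point i
```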
By cross-referencing th-cam.com/video/ualmyZiPs9w/w-d-xo.html, one can clearly see that the slope derived at the end is nothing but covariance(a, b) / variance(a).
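A quick numerical check of that identity (my own sketch; it holds exactly once the data are mean-centered, or equivalently once an intercept is included in the fit):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=200)
b = 1.7 * a + rng.normal(scale=0.3, size=200)

# Center the data; then the least-squares slope a^T b / a^T a ...
ac, bc = a - a.mean(), b - b.mean()
slope_ls = (ac @ bc) / (ac @ ac)

# ... equals cov(a, b) / var(a)
slope_cov = np.cov(a, b)[0, 1] / np.var(a, ddof=1)
print(np.allclose(slope_ls, slope_cov))
```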
Cool
Besides the undeniable quality of the video overall, isn't it awesome that he writes backwards in the air just to explain his points? 🤔
based
Can you explain why, in the example at the end, U = a/|a|? Is it because U consists of the only eigenvector of the matrix AAᵀ with a nonzero eigenvalue, which is just a/|a| itself?
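A small check of that intuition (my own sketch): for a single-column A = a, the matrix AAᵀ = aaᵀ is rank one, so its only eigenvector with a nonzero eigenvalue is a/|a|, and that is exactly the reduced U, with the singular value |a|.

```python
import numpy as np

a = np.array([3.0, 4.0])
A = a.reshape(-1, 1)                           # single-column matrix

U, S, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(np.abs(U[:, 0]), a / np.linalg.norm(a)))  # U = +/- a/|a|
print(np.isclose(S[0], np.linalg.norm(a)))                  # sigma = |a|
print(Vt)                                                   # V = [+/-1]
```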