Statistical Learning: 2.3 Model Selection and Bias Variance Tradeoff
- Published 10 Dec 2024
- Statistical Learning, featuring Deep Learning, Survival Analysis and Multiple Testing
Trevor Hastie, Professor of Statistics and Biomedical Data Sciences at Stanford University - statistics.sta...
Robert Tibshirani, Professor of Statistics and Biomedical Data Sciences at Stanford University - statistics.sta...
Jonathan Taylor, Professor of Statistics at Stanford University - statistics.sta...
You can take Statistical Learning as an online course on edX, and you can choose a verified path to receive a certificate of completion. The course is offered in R (www.edx.org/co...) or in Python (www.edx.org/le...)
For more information about courses on Statistics, you can browse our Stanford Online Catalog: stanford.io/3Q...
Bias-Variance trade-off 6:00
thanks
I am so happy to be able to revise and expand my knowledge by learning from the legends themselves! Will do a deep dive into the presentations and the new book :)
Thanks for the nice course!
Thanks for watching!
9:26 — is the MSE set at one when the trade-off is balanced because the true function has an error variance of one? If not, why does the ideal error here level off near one instead of zero?
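A small simulation can answer this: if the data are generated as y = f(x) + ε with Var(ε) = 1, then even predicting with the true f leaves a test MSE near 1, the irreducible error. This is a hedged sketch — the sine function and noise level below are assumptions, not the lecture's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y = f(x) + eps with irreducible noise variance 1
# (assumption: Var(eps) = 1, matching the plot's floor near 1).
n = 100_000
x = rng.uniform(-2, 2, n)
f = np.sin(x)                    # hypothetical "true" function
y = f + rng.normal(0.0, 1.0, n)  # noise with variance 1

# Even the true f cannot beat the noise floor:
mse_true_f = np.mean((y - f) ** 2)
print(round(mse_true_f, 2))
```

The printed MSE sits close to 1, which is why the expected test MSE curve flattens near the noise variance rather than approaching zero.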
Very Nice!
Can we say that the simpler the model, the more reliably it extrapolates beyond the bounds of the training data?
Thank you sir
What is the flexibility of a model measured in?
Same question here. I suppose the orange, blue, and green dots are where the models converge; where does the curve come from?
If I am not mistaken, the flexibility is proportional to the number of regressors in the model, i.e. the number of independent variables if we are considering a linear regression. The higher the number of regressors, the better the model will fit the training data. But, at the same time, you risk overfitting, leading to worse generalization.
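This is easy to see with polynomial regression, where the degree plays the role of the number of regressors. A minimal sketch (the target function, noise level, and sample sizes below are my own assumptions): training MSE always drops as the degree grows, while test MSE can rise again.

```python
import numpy as np

rng = np.random.default_rng(1)

def poly_fit_mse(deg, x_tr, y_tr, x_te, y_te):
    """Fit a degree-`deg` polynomial by least squares; return (train MSE, test MSE)."""
    coefs = np.polyfit(x_tr, y_tr, deg)
    tr = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    te = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
    return tr, te

f = lambda x: np.sin(2 * x)                     # hypothetical true function
x_tr = rng.uniform(-2, 2, 30)
y_tr = f(x_tr) + rng.normal(0, 0.5, 30)         # small training set
x_te = rng.uniform(-2, 2, 1000)
y_te = f(x_te) + rng.normal(0, 0.5, 1000)       # large test set

results = {deg: poly_fit_mse(deg, x_tr, y_tr, x_te, y_te) for deg in (1, 4, 15)}
for deg, (tr, te) in results.items():
    print(deg, round(tr, 3), round(te, 3))
```

Because the lower-degree models are nested inside the higher-degree ones, training MSE is guaranteed to decrease with degree; the test MSE is the quantity that traces out the U-shaped curve from the lecture.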
“Error”
I think flexibility is a property of a statistical learning method describing how closely it can fit the training data. For non-parametric methods it can also depend on the size of the training data.
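K-nearest neighbours makes this concrete for the non-parametric case (KNN is my choice of example here, not necessarily the method the lecture uses): flexibility is controlled by k, and the extremes k = 1 and k = n bracket the spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(x_train, y_train, x_query, k):
    """1-D KNN regression: average the responses of the k nearest training points."""
    d = np.abs(x_query[:, None] - x_train[None, :])  # pairwise distances
    idx = np.argsort(d, axis=1)[:, :k]               # indices of k nearest neighbours
    return y_train[idx].mean(axis=1)

x_tr = rng.uniform(0, 1, 50)
y_tr = np.sin(6 * x_tr) + rng.normal(0, 0.3, 50)

# k = 1 interpolates the training data exactly: maximal flexibility.
pred_k1 = knn_predict(x_tr, y_tr, x_tr, 1)
print(np.allclose(pred_k1, y_tr))          # True

# k = n predicts the global mean everywhere: minimal flexibility.
pred_kn = knn_predict(x_tr, y_tr, x_tr, 50)
print(np.allclose(pred_kn, y_tr.mean()))   # True
```

Note also the dependence on training-set size: with more data, a small k still averages over a genuinely local neighbourhood, so the same k yields a smoother, better-behaved fit.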