Corrections:
14:24 I meant to say "larger" instead of "lower".
18:48 In the original XGBoost documents they use the epsilon symbol to refer to the learning rate, but in the actual implementation, this is controlled via the "eta" parameter. So, I guess to be consistent with the original documentation, I made the same mistake! :)
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
Very nice videos. God bless you man!!
15:27 The similarity equations are missing residual**2. (Thanks for the detailed explanations, love your content!)
@@rahul-qo3fi At 15:27 we are calculating the output values for the leaf, not similarity scores, and the equation in the video at this time point is the correct equation for calculating output values.
@@statquest aah got it, thanks:)
Hi Josh, I don't understand the mention of the parameter "min_child_weight" at 12:58. Is that a typo, or am I missing something? Thanks!
How do I pass any interviews without these videos? I don't know how much I owe you Josh!
Thanks and good luck with your interview. :)
did you clear ?
I got a few academic papers under review thanks to Josh. I watch his videos first before studying the other sources. Without his videos it would be Xhard to understand those sources. I put his name in the acknowledgements for helpful suggestions (he did actually reply to me several times here). I wish I could cite some of his papers but they are very unrelated to my area (economics). Unfortunately that's all I can do because the exchange rate would make any donations I can make look very stupid...
@@guneygpac6505 I think you are from Turkey :D
so you also yell BAM!! ?
From Vietnam, and hats off to your talent in explaining complicated things in a way that I feel so comfortable to continue watching.
Thank you very much! :)
When we use fit(X_train, y_train) and predict(X_test) without watching Josh's videos or studying the underlying concepts, we don't really learn anything, even if we get good results.
Thank you Josh for simplifying these hard concepts for us and creating these perfect numerical examples. Please keep up this great work.
Thank you very much!
With this video I finished the whole list. I am from Colombia and it is hard to pay to learn about these concepts, so I am very grateful for your videos, and now my mom hates me when I say Double Bam for nothing!! Haha
That's awesome! I'm glad the videos are helpful. :)
Josh, On a scale of 5 you are a level 5 Teacher. I have learned so much from your videos. I owe so much to Andrew Ng and You. I will contribute to Patreon Once I get a Job. Thank you
Wow, thanks!
All the boosting and bagging algorithms are complicated algorithms. In universities, I have hardly seen any professor who can make these algorithms as understandable as Joshua does. Hats off man !!
Thank you!
as a beginner of data science, I am super grateful for all of your tutorials. Helps a lot!
Glad you like them!
Josh! You made machine learning a beautiful subject, and finally I'm in love with these Super BAM videos.
Hooray! :)
I have come across all the videos from gradient boosting till now, you clearly explain each and every step. Thanks for sharing the information with all. It helps a lot of people.
Glad it was helpful!
Thank you Josh! You literally broke everything down to the smallest detail... Sorry I missed meeting you this time in India!
Thank you! Maybe we can meet up next time.
You are a nice guy, absolutely! I can't wait for part 3. Although I have already learned XGBoost from the original paper, I can still get more interesting things from your video. Thank you :D
Thank you! :)
Thanks Josh. You're a life saver and have made my Data Science transition a BAM experience. Thank You!
Glad to help!
Thank you so much for making Machine Learning this easy for us . Grateful for your content . Love from India
Glad you enjoy it!
Yo fr these are the best data science/ML explanatory vids on the web. Great work, Josh!
Thank you very much! :)
I must have watched almost every video at least three times during this pandemic. Thank you so much for your effort!
Wow!!! Thank you very much! :)
Bravo! Thanks for making life easy. Thanks and appreciation from Qatar.
Hello Qatar!! Thank you very much!
Million Thanks Josh. I can not wait to watch other videos about XGBoost, lightBoost, CatBoost and deep learning. Your videos are the best.
Part 3 on XGBoost should be out on Monday.
Thanks for boosting my confidence in understanding. There was this recent Kaggle tutorial that said the LightGBM model "usually" performs better than xgboost, but it didn't provide any context! I remember that xgboost was used as a gold standard-ish about 2-3 years ago (even CERN uses it if I'm not mistaken). Anyhoo, I hope I can keep up with all of this. I need to turn my boosters on.
I'm happy to boost your confidence! Part 3 will explain the math if you are interested in those details - they are not required - and Part 4 will describe a lot of optimizations that XGBoost uses to be efficient (making it easier to find good hyper-parameters).
thanks buddy, it's hard for me to understand how xgboost works in classification, but this tutorial explained it well
Thanks!
All the videos are awesome and this is THE BAMMEST way to learn about ML and predictive modelling. Can we also have some videos about time series and the underlying concepts. That would be TRIPLE TRIPLE BAM!!!
Thank you very much! :)
Josh you are saviour...thanks a ton for making these fantastic videos...your video lectures are simple and crystal clear! Plus I love the sounds you make in between :)
Bam! :)
Finally, yay! I waited a long time for this video, but it was worth the wait. Thanks for everything.
Thank you! :)
awesome explanation! I bought your book "The StatQuest Illustrated Guide to Machine Learning" even though I have already understood all the concepts.
Thank you so much!!! I really appreciate your support.
Best Professor on the planet. Could you please make a playlist for DL or RL ?
I'm working on them.
Wow, I just discovered this channel and will use it to prep for my interview BAM! But the interview is in 2 hours Smal BAM :ccccccc
Good luck!
The little calculation noises give me life
beep, boop, beep!
Hey Josh, you might not see this, but I really look up to you and your videos. I got sucked into machine learning last month, and you have made the journey easier thus far. If I get an internship or something in the following months, I'll be sure to donate to you and hit you up on your social media to thank you :). Hopefully one day I will have enough knowledge to share it widely like you.
Cheers
Thank you very much! Good luck with your studies! :)
@@statquest thanks Josh, will definitely update you in a year or two about the progress I've made😀
Bam!
Ty very much, will buy your song by tomorrow morning from Thailand :)
Wow! Thank you!
I wish you had been my teacher in my college days. Then, instead of just watching your videos, I would be able to create them.
:)
hats off all my doubts clarified here, superb cooooooooooooool Big BAAAAAAAAMMMMMMMMMM!
Hooray! :)
Thank you for the marvelous video!
I have some questions regarding what's explained:
1. Can the number of trees we make be controlled by what we call an 'epoch' in ML?
2. When the model runs through epochs, is there any chance some epochs move away from the answer value?
- I understood that by setting the learning rate too high, the new prediction will overshoot the answer, causing the learning procedure to fluctuate a lot.
3. The ways we can slow down the learning speed, I think, are 1) larger cover, 2) larger gamma, 3) larger lambda.
Is that right? Or are there more ways to control the speed?
As always, thanks for all the effort you put into these materials!
1) I think you can use that terminology if you want, but I don't know of anyone else who does. In xgboost, the parameter that sets the number of trees is "num_boost_round" (or "n_estimators" in the scikit-learn wrapper), and generally speaking, building trees is called "boosting".
2) I don't know.
3) Although not mentioned in the original paper, XGBoost contains a few other ways to slow down learning (add regularization). For full details, see the manual: xgboost.readthedocs.io/en/latest/parameter.html
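For anyone who wants to see where these knobs live in code, here is a minimal sketch using the xgboost Python package (the data and values are just placeholders; the parameter names come from the xgboost documentation):

```python
import numpy as np
import xgboost as xgb

# Toy data, just to make the sketch runnable
X = np.random.rand(100, 3)
y = np.random.randint(0, 2, 100)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "eta": 0.3,              # learning rate
    "gamma": 0.0,            # minimum Gain required to keep a split
    "lambda": 1.0,           # regularization on the output values
    "min_child_weight": 1,   # minimum cover allowed in a leaf
}

# num_boost_round sets how many trees are built (each round "boosts" one more tree)
model = xgb.train(params, dtrain, num_boost_round=100)
```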
@@statquest
Thanks for kind reply! :)
Excellent explanation Brother!
Thanks!
Awsome!!!👍👍👍very very very very good teacher!!!
Thank you! 😃
Enjoyed it! Cool explanation
Thanks!
Hit that like button before watching it.
Thank you!!!! :)
Thanks for this simplification; can you do the same for LGBM and CatBoost?
Thank Josh for your knowledge and funny BAM!!!
Thank you!
Thank you! Can’t wait for part 3.
Thanks! Part 3 should be out soon.
@@statquest Thanks for your reply. I am a stats PhD student. These days the industry prefers machine learning and deep learning. However, I feel like stats people are not as strong at programming as CS people. We know lots of theory, but when solving real problems, CS people seem better? Do you have any ideas on this? Thanks!
@@gerrard1661 It really just boils down to the type of job you want to work on. There are tons of jobs in both statistics and cs and machine learning.
Thank you very much professor! I would love to see your explanations of statistical learning theory covering following topics: concentration inequalities, rademacher complexity and so on
I'll keep that in mind.
U r awesome
Love from INDIA
Thank you!
Very well explained!! Awesome..
Thank you! :)
Your videos are so funny and smart! Thank you
Thanks! :)
concepts going straight to my head as if u shot arrows bam!!!!!
Hooray! :)
Classification is not a vacation,
it is not a sensation,
but it's cooooool!
🤣
bam!
Hey, sorry to bother you after 4 years haha, but I was wondering: what if there are multiple features like dosage, frequency, etc.? How do you know which one to select to construct the tree and calculate the gain? Hope you will see this! Thank you for your awesome videos...
You just test every single feature to find out which one has the best gain.
The music is fantastic.
bam!
You really should write a book.
Thanks! :)
After a long time..... BAMMM!
Thanks! :)
Must watch videos.Just a small question,why do we need both cover and gamma for pruning?
Although gamma is thoroughly discussed in the original manuscript, cover is never mentioned. So my best guess is that while both cover and gamma do similar things, there are still differences in how they do them and the types of leaves they prune. For example, you could have a leaf with a lot of residuals in it (and thus, a relatively high "cover", so cover would not prune), but if they are not very similar, you will have a low similarity score and a low gain (so gamma would prune).
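To make that distinction concrete, here is a small sketch (numbers made up) using the classification formulas from the video: cover is the sum of previous probability times (1 - previous probability) in a leaf, while the similarity score, and therefore the Gain, also depends on how well the residuals agree:

```python
def similarity(residuals, prev_probs, lam=0.0):
    # Classification similarity score from the video
    return sum(residuals) ** 2 / (sum(p * (1 - p) for p in prev_probs) + lam)

def cover(prev_probs):
    # Cover = the similarity score's denominator without lambda
    return sum(p * (1 - p) for p in prev_probs)

# A leaf with plenty of residuals that mostly cancel each other out
residuals  = [0.5, -0.5, 0.5, -0.5, 0.5]
prev_probs = [0.5] * 5

print(cover(prev_probs))                  # 1.25 -> large enough, cover would not prune
print(similarity(residuals, prev_probs))  # 0.2  -> small, so the Gain of the split that
                                          #         created this leaf can fall below gamma
```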
Thanks a lot, keep doing an awesome job
Thank you very much! :)
@Josh Starmer: I would like to know about the PRUNING concept in XGB.
Are Gamma and Cover used for Pre-Pruning or Post-Pruning? In sklearn, we generally use Pre-Pruning, which makes more sense to me.
However, from your tutorial it seems like we are doing Post-Pruning (after the full tree is built).
Can you please specify, with a reason?
These videos on XGBoost describe how XGBoost was designed from the ground up. Thus, the reason for anything in these videos is "that's the way they designed XGBoost."
Love this series on xgboost. I read your answer about finding the best gamma parameter value using cross validation. According to this video, xgboost does not create new leaves when the gain < 0. When is extra pruning necessary? I suppose pruning can be done using lambda, with gamma additionally used to prevent overfitting...?
Trees, in general, are notorious for over fitting the data. Random chance can easily result in a gain < 0 and adding an extra parameter for pruning will help prevent over fitting. For more details about the need for pruning trees in general, see: th-cam.com/video/D0efHEJsfHo/w-d-xo.html
Thank you for such good videos. I see that XGBoost has both alpha and lambda parameters. You've explained lambda; where would alpha fit in?
Alpha was added after the original publication, so I didn't cover it. Presumably alpha is just like lambda and makes the trees shorter and shrinks the output values. And presumably it can shrink output values all the way to 0, just like lasso regression (and presumably lambda can not, just like ridge regression).
Hit and like first... Then later i am gonna watch video... MEGAAAA BAMMMM
Awesome!!! :)
Rule No 1 before watching statquest video.
Like and then click on play button
bam! :)
Awesome vid!
Thanks!
life saver, cannot thank more
Thanks! Part 3 should be out soon.
Which video-making tool do you use? It's so cool.
I answer these questions in this video: th-cam.com/video/crLXJG-EAhk/w-d-xo.html
Awesome bang. Happy 2020
Thank you! :)
These videos are being truly helpful. Many thanks for sharing them! I do have a question RE XGBoost usage context. You mentioned that XGB is designed for large, complicated datasets; does this mean that it performs poorly with smaller datasets? Thanks in advance
I'm not sure - I just know that it has tons of optimizations for large datasets. To learn more about them, see: th-cam.com/video/oRrKeUCEbq8/w-d-xo.html
Hey thanks for the videos. Can't wait for the remaining parts in the XGboost series. When are you gonna release the next part?
Since you are a member, you'll get early access to part 3 this coming monday (January 27). Part 4 will be available for early access 2 weeks later.
Please could you do more videos? I am in love with your lectures. I would like a video on how we use the negative binomial in estimating sample size.
I'll keep that in mind.
@@statquest do you have lecture notes for these videos? I have started downloading the videos and preparing my slides. Maybe if you have lecture notes for each video, it will help me document these as a book for myself.
@@ahmedabuali6768 I have PDF study guides for some of my videos here: statquest.org/studyguides/ and I am writing a book that I hope will come out next year.
@@statquest that is good. Can I pay for all of them at once? It will take some time for me.
Also, I see you haven't talked about the multinet work by Nir Friedman; it is very important.
@@ahmedabuali6768 I'm not familiar with Multinet, so I can't say if it is important or not. And you are more than welcome to buy all of the study guides! That would be awesome. Thanks for your support.
Awesome :)
Thanks 😁
Thanks a lot Josh for making ML algorithms understandable. I am learning a lot from your videos. Just one question on the order when splitting to create the trees. I think it doesn't matter whether you start from the last two or first two as we check all.
That is correct.
Hey Josh, how does the similarity score here relate to the gini/entropy we use for XGBoost's classification?
I'm not sure I understand your question. Are you wanting to compare the similarity score for XGBoost to how classification is done (with GINI or entropy) for a normal decision tree? If so, they are not related. This similarity score is derived from loss function, whereas GINI and entropy are just used because they work. For details on the XGBoost similarity score, see: th-cam.com/video/ZVFeW798-2I/w-d-xo.htmlsi=iv2nJpFE41ijE3zo
@@statquest thanks Josh! I was earlier under the impression that we need to specify gini or entropy in an XGBoost classifier, which seems incorrect, as they are only for decision trees, not XGBoost's classifier. Yet is it true that the similarity score and gini/entropy serve the same purpose, that is, to calculate the similarity/purity and therefore determine the split?
Thanks again and congrats on 1M subscribers, that says a lot!
@@LL-hj8yh Yes, the similarity score and GINI serve the same purpose, but we can't use them (Gini or entropy) here since we are fitting the tree to continuous values (even for classification). Thanks!
Hey Josh. Your videos are really informative and easy to understand. I have joined your channel today and look forward to more exciting content coming up. I was also eager to see your third video in the XGBoost Series. When will that be live?
If you go to the community page, you may be able to find a link to part 3 since you are a channel member. Here's the link to the community page: th-cam.com/users/joshstarmercommunity
@@statquest Finally. Made my day!!!
@@asabhinavrock Awesome!!! Thank you very much.
Great explanation like always! Just a small question: at 10:12 you mentioned that the cover is defined as the denominator of the similarity score minus lambda, but in the equation it looks like it is plus, so which is right? Thanks for such amazing explanations!
The denominator = [Sum(previous * (1 - previous))] + lambda. Cover = Sum(previous * (1 - previous)). Thus, cover = denominator - lambda = [Sum(previous * (1 - previous))] + lambda - lambda = Sum(previous * (1 - previous)).
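A quick numeric check of that relationship (made-up previous probabilities):

```python
prev_probs = [0.5, 0.5, 0.5, 0.5]   # previous predicted probabilities
lam = 1.0                           # lambda

cover = sum(p * (1 - p) for p in prev_probs)   # 4 * 0.25 = 1.0
denominator = cover + lam                      # 2.0
print(cover == denominator - lam)              # True: cover is the denominator minus lambda
```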
8:04 If two thresholds have the same 'Gain', why would we pick "Dosage < 15" rather than "Dosage < 5"? Does it matter for a larger dataset?
13:23 Since in part 1 we set gamma = 130 and in part 2 we set gamma = 3, I'm wondering how we choose the value for gamma?
1) If 2 or more thresholds have the same best Gain, then just pick one, it doesn't matter. Since this is a greedy algorithm, it does not look ahead to see if one of those choices is better in the long run.
2) When we use XGBoost for regression, the residuals can be relatively large, so gamma may need to be relatively large. When we use XGBoost for classification, the residuals are relatively small, so gamma may need to be relatively small. You can always just build a few trees to get a sense of what values for gamma make sense for pruning.
@@statquest thank you very much Josh! Really enjoy your video!
Hi Josh,
What happens if after splitting the node, one leaf has cover lower than the set threshold and the other leaf has cover greater than the set threshold.
Splitting would not be performed, right?
That is correct.
your lecture is triple bamm!
do you have any plan to teach deep learning?
As soon as I finish with XGBoost.
great explanations! and how does this generalize to multiclass classification? Thanks (one vs all classif repeated n_classes times? )
That's one way to do it. I believe that you can also swap out the loss function and use cross entropy.
Since cover can make a leaf insufficient to stay in the tree, is it also a kind of pruning?
That is correct. Cover is a way to enforce pruning and not over fitting the training data.
I have a question: for the initial predicted output we have taken 0.5, but this is a classification problem, so why did we choose 0.5 as the default value? I mean, why couldn't the initial predicted value have been any other value, say 1 or 0? Probably my question seems stupid, apologies in advance.
You can set the initial predicted value to be whatever you want, but, by default, it is 0.5. To be honest, this seems fairly reasonable for classification (since the goal is to have probabilities between 0 and 1 and 0.5 is halfway between them). However, it seems totally crazy for regression, but that's the way it is and the guy that made XGBoost is totally fine with it.
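In the xgboost Python API that initial prediction is exposed as base_score, which defaults to 0.5; a minimal sketch with toy placeholder data:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(50, 2)
y = np.random.randint(0, 2, 50)

# base_score is the initial prediction; change it to start somewhere other than 0.5
clf = xgb.XGBClassifier(base_score=0.5)
clf.fit(X, y)
```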
That's great. BTW is there a video that only contains songs? ;)
Not yet! :)
Hi Josh,
At 19:20, it is written that:
log(odds) Prediction = 0 + (0.3 x -2) = -0.6
However I was just wondering since the tree is predicting the residuals, isn't the output of the XGBoost tree a probability? So shouldn't we convert the output from probabilities to log(odds) before we add it to the initial guess of 0?
The tree predicts residuals, but the output values from the leaves are not residuals; instead, they are calculated as shown at 14:58. Now, to be honest, I have no idea why that particular formula results in a log(odds), but it must, because that is what both XGBoost and Gradient Boost do, and neither of them does anything else before calculating the final log odds.
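To make the arithmetic at 19:20 concrete, here is the update written out (the leaf's output value is treated as already being on the log(odds) scale, which is why nothing is converted before adding):

```python
import math

previous_log_odds = 0.0   # the initial log(odds) prediction
learning_rate     = 0.3   # eta
leaf_output       = -2.0  # output value from the leaf

new_log_odds = previous_log_odds + learning_rate * leaf_output    # -0.6
new_prob = math.exp(new_log_odds) / (1 + math.exp(new_log_odds))  # about 0.35
print(new_log_odds, round(new_prob, 2))
```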
Josh, good morning, let me ask you a question. You said that we can set the initial probability to a value different than 0.5 if, for example, the training dataset is unbalanced. Does that mean that xgboost can deal with unbalanced datasets without needing to balance the training dataset before submitting it to the model?
I'm not really sure. It probably depends on how imbalanced the data are.
Hi Josh,
Specifically, the gradient of the training loss is used to predict the target variables for each successive tree, right? Therefore, does a steeper gradient imply it is going to try harder to correctly predict a specific sample that has been mis-classified, or does it mean it will work harder to predict any member of a certain true class?
Thanks!
For details on how XGBoost treats misclassified samples and how, exactly, it tries harder to correctly classify them, see th-cam.com/video/oRrKeUCEbq8/w-d-xo.html
Can we apply gini instead of gain in XGBoost?
This is an interesting question. In part 3 (which will be out in a few weeks), you'll see how the similarity scores and regularization are all derived from a single formula and I'm not sure how it would work if we swapped in GINI. So check back in in a few weeks and watch the next video in the series the reason GINI is not used may make more sense.
Hi Josh, thank you for your amazing videos. They are really helping me a lot.
One thing I still don't get is how xgboost predicts multiple classes (e.g. "most likely drug to use" with drugs 1, 2 and 3)?
Does this work like in multinomial logistic regression, where each class is checked against a baseline class? Or is it something like a random forest when using xgboost?
When there are multiple classes, XGBoost uses the softmax objective function. I explain softmax in my series on Neural Networks: th-cam.com/video/CqOfi41LfDw/w-d-xo.html
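A minimal sketch of what that looks like in the xgboost Python API (toy data; the objective names are from the xgboost documentation):

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(120, 4)
y = np.random.randint(0, 3, 120)     # three classes, e.g. drugs 1, 2 and 3
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "multi:softprob",   # softmax objective that returns one probability per class
    "num_class": 3,                  # ("multi:softmax" returns the predicted class instead)
}
model = xgb.train(params, dtrain, num_boost_round=20)

probs = model.predict(xgb.DMatrix(X))   # shape (120, 3): a probability for each class
```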
Thanks!
BAM! :)
What statistical tests do we need to perform on the training data, and how do we validate the data?
I have this question regarding the XGBoost classifier and random forest or decision tree classifiers:
I want to know that, after label/ordinal encoding, the categorical variables are changed to 0, 1, 2, etc. format and are of integer datatype. So will converting the datatype from integer to category or object change the output? The thought behind asking this is that if we keep the datatype as integer, then the splitting condition can be a decimal value like 1.5 or 2.5, as opposed to 1 or 2 when the datatype is category.
I talk about technical details of using XGBoost in this video: th-cam.com/video/GrJP9FLV3FE/w-d-xo.html
Do you have a website? I think I will donate for some of the amazing videos which will come in the future.
It's just awesome, and I hope you will make a video about hyperparameters for this kind of regression in Python.
Glad you like the videos. Here's the website: statquest.org/support-statquest/
You have used example where x (variable/feature) is continuous. How are the unique regression trees made when x is discrete or ranked ? Like the candidate selection using gain and similarity scores ?
When the feature is discrete or ranked, we use the exact same method described in this video. This is because we are fitting the tree to the residuals, which will still be continuous, regardless of whether the feature is discrete or continuous.
@@statquest thanks for the quick response ! your videos are simply amazing...
Hi Josh ,
How do you make a tree with multiple predictors using XGBoost? Here you showed only a single variable called Dosage. How do you do it for multiple variables?
Thanks
For each variable in your dataset, you go through the process shown here. You then select the variable and threshold that result in the largest Gain.
Thank you for the explanation. Why are we using different decision nodes for each new tree?
Entropy is calculated independently of the residuals, right?
I'm not sure I understand your question, however, if you want to learn about the underlying details (i.e. see more of the math) of how XGBoost works, see: th-cam.com/video/ZVFeW798-2I/w-d-xo.html
@@statquest for each new tree, the root node is different in the video, so I'm confused why the root nodes are different since we are using the same gini or entropy to decide the root node.
@@zeus1082 Every time we build a tree, we update the residuals. Different residuals = different trees.
@@statquest aren't we deciding the split nodes based on gini? Please refer me to a video/timestamp where we decide the split node based on the residuals.
@@zeus1082 See: 3:20 That said, I appreciate your interest in these topics, but it would help me if you could watch the videos, all of them (including all 4 gradient boost videos), in order, and maybe watch them a few times before asking more questions. It is possible that my videos are not the best learning tool for you, so I would also consider seeing how other people teach this topic, or consider reading the original manuscript.
Hello Josh, thank you for your video.
How would this work with more than one variable? Does each variable end up with only one threshold?
Thank you!
You test every variable to find the optimal thresholds and use the one that does the best. However, XGBoost has some optimizations explained here: th-cam.com/video/oRrKeUCEbq8/w-d-xo.html
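For anyone who wants to see the greedy search spelled out, here is a bare-bones sketch with made-up data (it uses the regression-style similarity score, with the number of residuals plus lambda in the denominator):

```python
def similarity(residuals, lam=1.0):
    return sum(residuals) ** 2 / (len(residuals) + lam)

def best_split(rows, residuals, lam=1.0):
    # Try every threshold between adjacent values of every feature
    # and keep the split with the largest Gain.
    root_sim = similarity(residuals, lam)
    best = None
    for j in range(len(rows[0])):
        values = sorted(set(row[j] for row in rows))
        for lo, hi in zip(values, values[1:]):
            threshold = (lo + hi) / 2
            left  = [res for row, res in zip(rows, residuals) if row[j] < threshold]
            right = [res for row, res in zip(rows, residuals) if row[j] >= threshold]
            gain = similarity(left, lam) + similarity(right, lam) - root_sim
            if best is None or gain > best[0]:
                best = (gain, j, threshold)
    return best   # (Gain, feature index, threshold)

rows      = [(10, 1.0), (20, 0.5), (30, 2.0), (40, 1.5)]   # two features per sample
residuals = [-0.5, 0.5, 0.5, -0.5]
print(best_split(rows, residuals))
```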
Hi Josh, I cannot understand why at minute 08:15, after you created the first split (Dosage < 15) and the consequent similarity gain, you don't update the predicted probabilities of the residuals by using the formula e^log(ODDS) / (1 + e^log(ODDS)). In the video it seems that the "previous predicted probability" remains always the initial 0.5, so I'm asking if it should be changed after the first split instead. Thank you in advance
The predicted probabilities should not be changed until after we have created the entire tree and calculated the output values for the leaves.
Oh my mistake, you are totally right.. thank you very much. So basically like a standard Gradient Boosting Classifier I build the whole weak learner tree and once I obtain the output leaf values (which are log(ODDS) values calculated with the same formula as the standard GB Classifier apart from lambda) I compute the new prediction starting from the previous one. Then I convert the new log(ODDS) prediction into probability using the logistic function.
Hi Josh, thank you for such a great explanation. I just want to clarify one thing: does this cover concept apply specifically to xgboost trees, or is it a general method for all tree-based algorithms? Every tree-based algorithm has this min_child_weight parameter in the sklearn library.
Every tree-based method has a way of filtering out leaves that do not have enough samples going to them; however, the way XGBoost does it is unique.
At around 11:00, could you explain further why the cover, meaning the minimum number of residuals in each leaf, is 0.25? Why can it not allow a leaf with 1 residual? Isn't 1 > 0.25?
I answer this question in the StatQuest that explains the math behind XGBoost: th-cam.com/video/ZVFeW798-2I/w-d-xo.html
Thanks for the video. 12:58 So you mean 'cover' is equal to the hyperparameter 'min_child_weight'?
Yep
Hi Josh:
Would it be possible to make a video on the scale_pos_weight parameter of XGBoost and how it can help in solving imbalanced dataset problems?
I'll keep that in mind.
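In the meantime, a common starting point (an assumption on my part, not something covered in the video) is to set scale_pos_weight to roughly the ratio of negative to positive samples:

```python
import numpy as np
import xgboost as xgb

y = np.array([0] * 90 + [1] * 10)          # imbalanced toy labels
X = np.random.rand(len(y), 3)

ratio = (y == 0).sum() / (y == 1).sum()    # 9.0 for this toy data
clf = xgb.XGBClassifier(scale_pos_weight=ratio)
clf.fit(X, y)
```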
Thank you sir for this wonderful video.
I have a question, please: once we've built all the classifiers, how do we obtain the final classification?
You decide what the probability threshold should be and all predictions with probabilities > threshold are classified one way, and everything else is classified the other way.
@@statquest will this decision be made considering all the classifiers, taking a majority vote as in a classic bagging method?
@@mimaaristide7151 This is a "boosting" method, which is different from a "bagging" method. The gist of boosting is 22:52. However, more details are here: th-cam.com/video/3CC4N4z3GJc/w-d-xo.html
@@statquest yes sir, you're right, sorry for this confusion. And once again thank you for your wonderful videos...
Thnx sir😊
bam! :)
For pruning the tree, is gain - gamma the same as the cover value? You remove the leaf when you calculate the cover value and also when you calculate gain - gamma.
For details on cover (and everything else in XGBoost), see: th-cam.com/video/ZVFeW798-2I/w-d-xo.html
Hi Josh, if there were outliers in the data, say dosage 1000, this wouldn't affect how the tree makes its splits, so outliers do not affect it? Aren't tree methods robust to outliers?
Trees can be less sensitive to outliers than other methods, however, it's always a good idea to remove them first.
I've been researching how to use XGBoost for image classification; unfortunately, I did not find a lot of research papers on this. Is it a good algorithm for this job? The classification has multiple different classes that are either various types of diseases on plant leaves or a healthy leaf. Thank you
I've never done that myself, but I've heard of people who have and been successful.
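One approach people commonly take (sketched here with made-up choices, not something covered in the video) is to flatten each image into a row of pixel features and then treat it as ordinary multi-class tabular data:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
import xgboost as xgb

digits = load_digits()   # small 8x8 images, already flattened into 64 pixel features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = xgb.XGBClassifier(objective="multi:softprob")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```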
What do you do if the cover of a left leaf is less than 1, but the cover of a right leaf is greater than 1? Do you only remove the left leaf or the entire subtree made of root, left leaf, right leaf?
If the cover value for one of the leaves is too small, we remove both leaves.
14:24 -- shouldn't gamma be a higher value in order to prune? With a lower value for gamma, Gain - gamma tends to be positive, hence no pruning.
Oops!! I should have said "larger" instead of "lower".