Save 20% off my machine learning online courses using code TH-cam ⬇
Cluster Analysis with Python online course:
bit.ly/ClusterAnalysisWithPythonCourse
My "Intro to Machine Learning" online course:
bit.ly/TDWIIntroToML
Another absolute gem, incredibly well articulated. The explanation and examples are brilliant.
Thank you so much for taking the time to leave these kind words. I'm glad you are enjoying my content.
Thanks!
My pleasure! I hope you found the video useful.
Just had one question... I looked at the CSV files and was wondering how large the original dataset is. I understand that you took data out of a larger dataset to create the training set, and that you used 3,000 rows from the original dataset for the learning set... or did I misunderstand that aspect?
The CSVs are a cleaned subset of the original dataset. You can get the original raw data here: archive.ics.uci.edu/dataset/2/adult
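For anyone who wants to pull the raw data themselves, here is a minimal sketch using pandas. The column names follow the UCI documentation for the Adult dataset, and the exact file URL and row count are assumptions you may need to adjust; this is not the cleaned subset used in the video.

import pandas as pd

# Column names as documented on the UCI Adult dataset page (assumed order).
columns = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week", "native-country", "income",
]

# Raw training split hosted by the UCI ML repository (URL is an assumption).
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"

adult = pd.read_csv(url, header=None, names=columns, skipinitialspace=True)
print(adult.shape)  # roughly (32561, 15) for the raw training split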
Hi David, thanks for a very cool lecture. At 55, I understood 90% of the lecture on the first go. You mentioned you have shared the slides from the video and the Jupyter notebook, but I can't find them.
Very interesting and helpful. The greedy aspect is what I struggle with: are there alternatives that combine root nodes, or is it a problem that gets solved with larger datasets?
Glad you enjoyed the video! As to your question, what aspect of greedy selection are you struggling with?
@@DaveOnData I just thought that selecting the first split that met the criteria would neglect a second (or more) parameter that also met the criteria, so it could be leaving out key criteria that might inter-relate with future nodes more effectively than the first. I think it's more of a glaring issue with the contrived example, and probably moot in the larger data samples it's intended to work on.
If I understand your concern correctly, one way to think about decision trees' greedy nature is computational complexity.
To keep this simple, let's think only about the tree's root node.
In an ideal world, the decision tree algorithm would always pick the root node based on the single most optimal combination of dataset and hyperparameter values. However, this is computationally infeasible, as the algorithm would have to search through every possible tree that could be learned before knowing the single best root node.
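To make the greedy step concrete, here is a minimal sketch of how a root-node split is typically chosen (not the actual scikit-learn implementation): every candidate feature/threshold pair is scored once, the best one is kept, and no look-ahead into deeper nodes is performed. The function names, the use of Gini impurity, and the assumption of numeric features are all illustrative choices.

import numpy as np

def gini(y):
    # Gini impurity of a label array.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_greedy_split(X, y):
    # Pick the single best (feature, threshold) pair for the root node
    # by scanning every candidate split once -- the greedy step.
    # No look-ahead into future splits is performed.
    n_samples, n_features = X.shape
    best = (None, None, np.inf)  # (feature index, threshold, weighted impurity)
    for j in range(n_features):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n_samples
            if score < best[2]:
                best = (j, t, score)
    return best

# Tiny illustrative example: two numeric features, binary labels.
X = np.array([[25, 40], [38, 50], [28, 38], [44, 45], [52, 60]], dtype=float)
y = np.array([0, 0, 0, 1, 1])
print(best_greedy_split(X, y))  # (0, 38.0, 0.0): feature 0 <= 38 perfectly separates the classes

The key point is in the final comparison: the split is kept only because it is the best among the single-level candidates scored so far, which is exactly why the greedy choice may not be globally optimal for the whole tree.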