I agree, great explanation. Thank you so much. One thing at the very end I wasn't sure of: you said something about taking the best of the validation models, in the context of hyperparameters. Is this to say that you might use cross-validation on, say, 10 different subsets, and use the one with the strongest validation (predictive ability)?
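On the question above: the usual pattern is slightly different. You don't keep the model from the single best-scoring fold; you pick the *hyperparameter setting* whose average score across all folds is best, then refit on all the training data with that setting. A minimal sketch of that idea, using a toy 1-D ridge regression on made-up data (the data, the `lambdas` grid, and the helper names here are all hypothetical, just for illustration):

```python
import random

# Toy data: y = 2*x + noise (hypothetical, just for the demo)
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

def fit_ridge(xs, ys, lam):
    # 1-D ridge regression through the origin:
    # slope = sum(x*y) / (sum(x^2) + lam)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def mse(slope, xs, ys):
    return sum((slope * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def cv_score(lam, xs, ys, k=10):
    # Score a hyperparameter by its AVERAGE validation error over
    # k folds -- not by the best single fold.
    n = len(xs)
    fold = n // k
    scores = []
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        tr_x = xs[:lo] + xs[hi:]          # train on the other folds
        tr_y = ys[:lo] + ys[hi:]
        slope = fit_ridge(tr_x, tr_y, lam)
        scores.append(mse(slope, xs[lo:hi], ys[lo:hi]))  # validate on held-out fold
    return sum(scores) / k

# Pick the hyperparameter with the best average CV score...
lambdas = [0.0, 0.1, 1.0, 10.0]
best_lam = min(lambdas, key=lambda lam: cv_score(lam, xs, ys))
# ...then refit on ALL the training data with that hyperparameter.
final_slope = fit_ridge(xs, ys, best_lam)
print("best lambda:", best_lam, "final slope:", round(final_slope, 3))
```

So the cross-validation folds are used to *choose* the hyperparameter, and the final model is trained once with that choice, not taken from any particular fold.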
I'm so confused with this. Why do I split the data and do training and testing on separate parts? Shouldn't I just test on the data the model was already trained on, without splitting it into train and test sets, since it all comes from the same place? Am I right?
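To the question above: testing on the data you trained on can make any model look good, because the model may have simply memorized the answers. A tiny demo of that, using pure-noise data (everything here is made up for illustration): a 1-nearest-neighbour model scores perfectly on its own training set even though the labels are random, and only a held-out test set reveals that it learned nothing.

```python
import random

random.seed(1)

# Pure-noise data: labels are random coin flips, so there is
# genuinely nothing to learn.
points = [(random.random(), random.choice([0, 1])) for _ in range(200)]
train, test = points[:100], points[100:]

def predict_1nn(x, data):
    # 1-nearest-neighbour: return the label of the closest stored point
    return min(data, key=lambda p: abs(p[0] - x))[1]

def accuracy(data, memory):
    hits = sum(predict_1nn(x, memory) == y for x, y in data)
    return hits / len(data)

train_acc = accuracy(train, train)  # each point's nearest neighbour is itself
test_acc = accuracy(test, train)    # held-out points expose the memorization
print("train accuracy:", train_acc)  # 1.0 -- looks perfect
print("test accuracy:", test_acc)    # roughly 0.5 -- chance level
```

That gap between training accuracy and test accuracy is exactly why the data is split: the held-out set is the only honest estimate of how the model will do on data it hasn't seen.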
The Kaggle example with the tree map really did it for me. Thank you!
The best explanation so far: clear, easy to get, just on point.
Very good explanation! I was so damn confused because various sources just randomly mixed up the definitions of the test and validation sets.
Hi, may I know how to determine the size of each set of data? What is the name of the size-determination methodology? Thank you.
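On the sizing question above: as far as I know there is no single named methodology; the split sizes are heuristics. Common choices are 80/20 for train/test, or something like 60/20/20 for train/validation/test, and with small datasets people lean on cross-validation instead of a fixed validation set. A minimal sketch of a conventional shuffled 60/20/20 split (the function name and fractions are illustrative, not a standard API):

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=0):
    # Shuffle a copy, then carve off the test and validation slices;
    # whatever remains is the training set.
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = data[:n_test]
    val = data[n_test:n_test + n_val]
    train = data[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(1000))
print(len(train), len(val), len(test))  # 600 200 200
```

The fractions themselves are a judgment call: the validation and test sets just need to be large enough that the scores you read off them are not dominated by noise.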
thanks!