Take my courses at mlnow.ai/!
I don't get why your videos don't have millions of views yet, you explain everything so clearly! awesome work man keep it up!
Haha thank you I appreciate that!!
LOVE THE THUMBNAIL
Thank you!!!
For anyone wondering why he flattens the images in the example: X_train.reshape(X_train.shape[0], -1) is needed because sklearn's predict expects a 2D array of shape (n_samples, n_features) as its first argument.
Thanks Panagiotis, I appreciate that!!
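The reshape trick from the comment above can be sketched as follows. This is a minimal illustration with randomly generated image data (the array shapes are assumptions for demonstration, not the video's actual dataset):

```python
import numpy as np

# Hypothetical batch of 100 grayscale 28x28 images (illustrative data).
X_train = np.random.rand(100, 28, 28)

# sklearn estimators expect a 2D array (n_samples, n_features),
# so each 28x28 image is flattened into a 784-length feature vector.
# The -1 tells NumPy to infer the second dimension automatically.
X_flat = X_train.reshape(X_train.shape[0], -1)
print(X_flat.shape)  # (100, 784)
```

The same flattened array can then be passed to fit and predict on any sklearn estimator.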
Cheers for the help mate 👍
Thank you so much for the amazing video
Glad it was helpful and you're very welcome, Prachi!
Hi Greg, thank you for the wonderful video. I have been working on time series evaluation, specifically cross validation. My question is: is there a way to generalize the parameter tuning once the cross validation is complete, or is it just done on a trial-and-error basis?
You can always tune values to minimize cross val loss :)
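One way to automate that tuning rather than doing it by hand is a grid search scored by time-series-aware cross validation. A minimal sketch using sklearn's GridSearchCV with TimeSeriesSplit (the model, parameter grid, and toy data here are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Toy time series regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# TimeSeriesSplit keeps each training fold strictly before its test fold,
# so the search respects temporal order while tuning the hyperparameter.
search = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},
    cv=TimeSeriesSplit(n_splits=5),
)
search.fit(X, y)
print(search.best_params_)
```

After fitting, search.best_params_ holds the value that minimized average cross-validation loss across the ordered folds, replacing manual trial and error.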
Would love a 'statistics for data science' video please, covering distributions etc.
Sure!
🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓🤓