I think your video is very good. However, I do have one point of criticism: you compare your metrics across two different data sets after removing the outliers. Strictly speaking, that is not a fair comparison. If the first model happens to perform poorly exactly on the outliers, that alone will inflate its error. In my opinion, it would be better to re-evaluate the first model on the same set with the outliers removed. For forecasting, it is also more realistic if the test set lies after the training set in time; that simulates the real use case. Nevertheless, great video!
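The point above can be sketched in a few lines: a minimal example on synthetic data, assuming scikit-learn-style models. Everything here (the data, the median-based outlier filter, its threshold of 10) is an illustrative assumption, not the video's actual setup. The key moves are the temporal split and scoring the *same* fitted model on the same test window, once with and once without the outliers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
y[::25] += 20.0  # inject a few large outliers

# Temporal split: the test period comes strictly after the training period.
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Evaluate the SAME fitted model on the same test window, with and without
# the outliers, so the two error numbers are directly comparable.
mask = np.abs(y_test - np.median(y_test)) < 10  # crude outlier filter (assumed)
mae_full = mean_absolute_error(y_test, model.predict(X_test))
mae_clean = mean_absolute_error(y_test[mask], model.predict(X_test[mask]))
```

With this protocol, any gap between `mae_full` and `mae_clean` is attributable to the outliers alone, not to a change of model or test period.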
Silly question: have you ever tried including a couple of random noise sources as trainable features? Say, uniform and Gaussian noise. Let the model decide for itself when and where such noise is useful for understanding and inferring from the data. Just an odd thought. 🖖🙃👍
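For what it's worth, this idea can be tried directly; here is a minimal sketch on made-up data (all names and numbers are assumptions). Appending pure-noise columns and fitting a tree ensemble is also a known trick for calibrating feature importances: the noise columns act as a baseline that real features should beat.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=300)

# Append pure-noise columns as extra "features"; the model is free to use
# them, but their importances should stay near zero.
noise = np.hstack([rng.uniform(-1, 1, size=(300, 1)),  # uniform noise
                   rng.normal(size=(300, 1))])         # Gaussian noise
X_aug = np.hstack([X, noise])

model = RandomForestRegressor(random_state=0).fit(X_aug, y)
imp = model.feature_importances_

# Baseline: any real feature scoring below the best noise column is
# probably uninformative.
noise_ceiling = imp[-2:].max()
```

In this sketch the two informative columns should clearly outscore `noise_ceiling`, while uninformative real features land near it.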
3.13 has dropped. Deep dive??? 😉 ✌ P.S. Mwwhhhaaahahahahhhaaaha! POWAAAAAAAHHHHHHhhhhhhh.... (can't wait till the JIT gets optimized. Gonna be siiiiiiiik! ... even if it's still kinda 'meh' now.).
Your channel is underrated by far. I love your approach and I appreciate the information!
Thank you!
Would CatBoost be a better algorithm if you have thousands of SKUs?
How did you learn machine learning, btw?
How can I implement this in a web dashboard?
Grafana
@@ralvarezb78 I already have a web app in React + JS
Nice video
Create a video on MLflow and ZenML
Why an RF? It's biased toward the training range. Also, what keyboard switches do you use?
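The range bias mentioned here is easy to demonstrate: a random forest predicts by averaging training targets, so its forecasts can never leave the range of `y` seen during training, which matters for trending series. A quick sketch on synthetic data (all values here are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# A clean upward trend; y_train tops out at 198.
X_train = np.arange(100, dtype=float).reshape(-1, 1)
y_train = 2.0 * X_train.ravel()

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Points beyond the training range: the true trend says 300 and 400,
# but the forest's predictions stay capped near y_train's maximum.
preds = model.predict(np.array([[150.0], [200.0]]))
```

Common workarounds include modeling differences or growth rates instead of levels, or detrending before fitting the tree model.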
CatBoost is a better choice since we have a large number of categorical features