Quantile Regression - EXPLAINED!

  • Published Dec 14, 2024

Comments • 36

  • @rishisharma8311
    @rishisharma8311 3 years ago +11

    Dude, the concepts you teach are new and unheard of. I always get to learn something new watching your videos. Keep it coming!

  • @90benj
    @90benj 3 years ago +17

    What would be extremely helpful for a new data scientist and machine learning enthusiast would be a model zoo, so to speak: a short summary of the most used models, what they are good at and what their weaknesses are, and maybe a couple of advanced models based on the base models. Often, I don't have any overview of what I am missing.

  • @bellahuang8522
    @bellahuang8522 10 months ago

    Taking a machine learning class in a policy school, so you can imagine how bad my professor was when he tried to explain this for 30 minutes in class. Your visuals give me very good intuition. TY!

  • @borisn.1346
    @borisn.1346 3 years ago +1

    You could compute the lower and upper bounds with good ol' OLS regression as well.
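
    For reference, a minimal sketch of that idea (assuming statsmodels; the toy data is made up): get_prediction returns both the mean and an observation-level interval.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    y = 2 * x + rng.normal(0, 1, 200)      # toy linear data

    X = sm.add_constant(x)                 # intercept + slope
    ols = sm.OLS(y, X).fit()

    # 80% prediction interval, i.e. roughly the 10th/90th percentile bounds
    pred = ols.get_prediction(X).summary_frame(alpha=0.2)
    lower, upper = pred["obs_ci_lower"], pred["obs_ci_upper"]
    ```

    The caveat is that these bounds assume roughly normal residuals, which is one place where quantile regression differs.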

  • @90benj
    @90benj 3 years ago +2

    This is awesome! Really well explained and understandable. I will probably try my hand at that quantile regressor NN; it sounds fun.

    • @CodeEmporium
      @CodeEmporium  3 years ago +1

      Awesome! Lemme know how that shakes out. Fun stuff! And thanks

  • @PD-vt9fe
    @PD-vt9fe 3 years ago +1

    Thank you for another awesome video. Didn't expect this so soon, though. Keep it up!

  • @shambhaviaggarwal9977
    @shambhaviaggarwal9977 1 year ago

    The explanation was pretty clear. Thanks!

  • @swatisingh4041
    @swatisingh4041 2 years ago +2

    Hi! Very informative video. Can you please share how to apply quantile regression when there is more than one independent feature (X1, X2, X3, ...)? Thanks!
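
    In case a sketch helps, statsmodels' QuantReg takes extra features the same way OLS does (the column names and toy data here are invented):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    df = pd.DataFrame({"X1": rng.normal(size=300),
                       "X2": rng.normal(size=300),
                       "X3": rng.normal(size=300)})
    df["y"] = 1 + 2 * df.X1 - df.X2 + rng.normal(size=300)

    # same formula interface as OLS; just pick the quantile q per fit
    model = smf.quantreg("y ~ X1 + X2 + X3", df)
    lower = model.fit(q=0.1).predict(df)   # 10th percentile
    upper = model.fit(q=0.9).predict(df)   # 90th percentile
    ```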

  • @brokecoder
    @brokecoder 2 years ago +1

    Hmm.. I used to use bootstrapping to get the percentile bounds so that I could derive confidence intervals. But this seems like another approach.
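
    For comparison, a minimal sketch of that bootstrap approach (assuming scikit-learn's LinearRegression as the base model): refit on resampled data and take percentiles of the predictions.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 10, (200, 1))
    y = 2 * X.ravel() + rng.normal(0, 1, 200)

    preds = []
    for _ in range(500):                       # bootstrap rounds
        idx = rng.integers(0, len(y), len(y))  # resample with replacement
        preds.append(LinearRegression().fit(X[idx], y[idx]).predict(X))

    # 10th/90th percentile bounds across the bootstrap fits
    lower, upper = np.percentile(preds, [10, 90], axis=0)
    ```

    One difference worth noting: this captures uncertainty in the fitted mean, whereas quantile regression models the spread of the target itself.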

  • @remimoise8908
    @remimoise8908 3 years ago +2

    Great video, thanks!
    Regarding the neural network that can return 3 values at once (low, median, and high): besides adapting the loss function, how would you label the 3 values for each data point? Since we only have one label per point, would you duplicate that label?
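
    One common answer (a sketch in PyTorch, not necessarily what the video intends): no duplication is needed; the single label is compared against each of the three outputs inside the loss, one quantile per output.

    ```python
    import torch

    def multi_quantile_loss(pred, y, quantiles=(0.1, 0.5, 0.9)):
        """pred: (batch, 3) network outputs; y: (batch,) one label per point."""
        total = 0.0
        for i, q in enumerate(quantiles):
            err = y - pred[:, i]  # the same label enters every term
            total = total + torch.maximum(q * err, (q - 1) * err).mean()
        return total
    ```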

  • @eliaskonig2526
    @eliaskonig2526 1 year ago

    Thanks for the video!
    Is it also possible to use it in combination with dummy variables?
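
    In case it helps: dummy variables are just extra 0/1 columns, so quantile regression handles them like any other feature. A minimal sketch (assuming pandas and scikit-learn; the column names are invented):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(3)
    df = pd.DataFrame({"x": rng.normal(size=200),
                       "group": rng.choice(["a", "b", "c"], 200)})
    df["y"] = df.x + (df.group == "b") * 2 + rng.normal(size=200)

    X = pd.get_dummies(df[["x", "group"]], drop_first=True)  # dummy-code 'group'
    upper = QuantileRegressor(quantile=0.9, alpha=0).fit(X, df["y"]).predict(X)
    ```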

  • @mansikumari4954
    @mansikumari4954 several months ago +1

    This was reallyyy good!!!!!

    • @CodeEmporium
      @CodeEmporium  several months ago

      Thanks so much for watching

  • @kabeerjaffri4015
    @kabeerjaffri4015 3 years ago +1

    Great video as usual!!

  • @cientivic7341
    @cientivic7341 2 years ago +1

    I'm wondering: shouldn't the output of the model form a line instead of scattered points?
    Like... does the model basically identify each quantile and use it as a prediction without any type of smoothing (and thus become a line in the graph)?

    • @andytucker9991
      @andytucker9991 1 year ago

      Hi, I also had the same question, but I think the reason is that he used a LightGBM regressor instead of OLS to get the predicted values. The predictions given by a LightGBM model do not fall on a straight line, i.e., the predictions are nonlinear, unlike those of an OLS regressor.
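
      For anyone who wants to reproduce that, a minimal sketch of the LightGBM setup (guessing at the video's exact settings): one model per quantile, each with the 'quantile' objective.

      ```python
      import numpy as np
      from lightgbm import LGBMRegressor

      rng = np.random.default_rng(4)
      X = rng.uniform(0, 10, (500, 1))
      y = np.sin(X.ravel()) * X.ravel() + rng.normal(0, 1, 500)

      # one boosted-tree model per quantile; alpha is the target quantile
      models = {q: LGBMRegressor(objective="quantile", alpha=q).fit(X, y)
                for q in (0.1, 0.5, 0.9)}
      bands = {q: m.predict(X) for q, m in models.items()}
      ```

      Because trees predict piecewise-constant values, these bands wiggle instead of forming straight lines, which would explain the scattered look.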

  • @Im-Assmaa
    @Im-Assmaa 2 years ago

    Thank you for the video. I have a question: how can I compute the quantiles for a specific p using the Rankit-Cleveland method? It is used to estimate value at risk with quantile regression, and I am kind of stuck. Please help!

  • @shnibbydwhale
    @shnibbydwhale 3 years ago +2

    Great video, but I am a little confused. How is using quantile regression fundamentally different from using linear regression and giving both the predicted value from the linear regression model and point prediction intervals for each prediction?

    • @zxynj
      @zxynj 2 years ago

      I think the traditional method requires normality of the residuals to estimate the prediction interval, while quantile regression lets the model learn the quantile prediction through a specific loss function (an L1-style loss that penalizes errors in the wrong direction more, depending on the percentile). Therefore, quantile regression does not require the linear regression assumptions (at least not the normality-of-residuals part). This is just my understanding of the concept.

    • @snehanshusaha890
      @snehanshusaha890 1 year ago

      @zxynj Yes. On top of that, quantiles offer intervals of confidence in predictions, which mean-based predictions don't.

    • @mansoocho2351
      @mansoocho2351 10 months ago

      @zxynj Great explanation! I would like to add a small example: I think quantile regression is a great fit for skewed distributions, since it does not require the data to be normally distributed.
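
      A small sketch contrasting the two on skewed noise (assuming statsmodels; the log-normal noise is just for illustration): the OLS interval is symmetric around the mean by construction, while the quantile fits can follow the skew.

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      df = pd.DataFrame({"x": rng.uniform(0, 10, 300)})
      df["y"] = 2 * df.x + rng.lognormal(0, 1, 300)   # right-skewed noise

      # OLS prediction interval: symmetric, relies on normal residuals
      ols = smf.ols("y ~ x", df).fit()
      interval = ols.get_prediction(df).summary_frame(alpha=0.2)

      # quantile regression: each bound is fit directly, no normality needed
      qr = smf.quantreg("y ~ x", df)
      q10 = qr.fit(q=0.1).predict(df)
      q90 = qr.fit(q=0.9).predict(df)
      ```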

  • @patrickduhirwenzivugira4729
    @patrickduhirwenzivugira4729 2 years ago

    Thank you for the video. I have a question: you fit the LGBMRegressor with default hyperparameter values. How would one tune these hyperparameters, and which metric can be used to pick the best model?
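
    One reasonable recipe (a sketch with scikit-learn's tooling, not from the video): cross-validate with the pinball loss evaluated at the same quantile you are fitting.

    ```python
    import numpy as np
    from lightgbm import LGBMRegressor
    from sklearn.metrics import make_scorer, mean_pinball_loss
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(6)
    X = rng.uniform(0, 10, (300, 1))
    y = 2 * X.ravel() + rng.normal(0, 1, 300)

    # score candidates by the pinball loss at the target quantile (q = 0.9)
    scorer = make_scorer(mean_pinball_loss, alpha=0.9, greater_is_better=False)
    search = GridSearchCV(
        LGBMRegressor(objective="quantile", alpha=0.9),
        param_grid={"num_leaves": [15, 31], "learning_rate": [0.05, 0.1]},
        scoring=scorer, cv=5,
    ).fit(X, y)
    print(search.best_params_)
    ```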

  • @ziangxu7751
    @ziangxu7751 3 years ago

    Great interpretation! Thank you!

    • @CodeEmporium
      @CodeEmporium  3 years ago

      Thanks a ton for watching :)

  • @НикитаБуров-ъ6р
    @НикитаБуров-ъ6р 11 months ago

    Nice explanation, thanks!

  • @emiya_muljomdaoo
    @emiya_muljomdaoo 3 years ago

    Well explained, thank you for the video!

  • @vladislavlevitin
    @vladislavlevitin 3 years ago

    Excellent video!

  • @charan7233
    @charan7233 2 years ago

    Amazing! Thank you

  • @johannaw2031
    @johannaw2031 1 year ago +2

    I'm sorry, but the math behind it is still a riddle. Did you say that if we estimate the 10th percentile and the observed value is higher than the predicted value, then we want to penalize that, so we take 0.9*|residual|? But if we estimate the 10th percentile and the observed value is lower than the predicted value, then this is more "expected" and thus we only penalize it by 0.1*|residual|?
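
    For reference, the standard pinball loss (worth double-checking against the video's convention) puts the small weight q on points above the prediction and the large weight 1 - q on points below; a tiny numeric check in plain Python:

    ```python
    def pinball(y, pred, q):
        """Pinball loss for quantile q: max(q * err, (q - 1) * err)."""
        err = y - pred
        return q * err if err >= 0 else (q - 1) * err

    # 10th percentile (q = 0.1): points above the prediction are "expected"
    print(pinball(y=12, pred=10, q=0.1))   # above: 0.1 * 2 = 0.2
    print(pinball(y=8,  pred=10, q=0.1))   # below: 0.9 * 2 = 1.8
    ```

    Minimizing this pushes the prediction down until only about 10% of the points fall below it, which is exactly the 10th percentile; for the 90th percentile the weights flip.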

  • @TK-mv6sq
    @TK-mv6sq 2 years ago

    Thank you!

  • @jayjhaveri1906
    @jayjhaveri1906 8 months ago

    love you

  • @amarpreet3519
    @amarpreet3519 1 year ago

    not a good explanation