Quantile Regression - EXPLAINED!

  • Published 26 Jul 2024
  • Quantile regression - Hope the explanation wasn't too all over the place
    Follow me on M E D I U M: towardsdatascience.com/likeli...
    CODE: github.com/ajhalthor/quantile...

Comments • 35

  • @rishisharma8311 · 3 years ago · +10

    Dude, the concepts you teach are new and unheard of. I always learn something new watching your videos. Keep it coming!

  • @90benj · 3 years ago · +15

    What would be extremely helpful for a new data scientist and machine learning enthusiast would be a model zoo, so to speak: a short summary of the most-used models, what they are good at, what their weaknesses are, and maybe a couple of advanced models built on the base models. Often I don't have any overview of what I am missing.

  • @bellahuang8522 · 6 months ago

    Taking a machine learning class in a policy school, so you can imagine how bad my professor was when he tried to explain this for 30 minutes in class. Your visuals give me very good intuition. TY!

  • @PD-vt9fe · 3 years ago · +1

    Thank you for another awesome video. Didn't expect it this soon, though. Keep it up!

  • @90benj · 3 years ago · +2

    This is awesome! Really understandable. I will probably try that quantile regressor NN myself; it sounds fun.

    • @CodeEmporium · 3 years ago · +1

      Awesome! Lemme know how that shakes out. Fun stuff! And thanks

  • @kabeerjaffri4015 · 3 years ago · +1

    Great video as usual!!

  • @shambhaviaggarwal9977 · 1 year ago

    The explanation was pretty clear. Thanks!

  • @remimoise8908 · 2 years ago · +2

    Great video, thanks!
    Regarding the neural network that can return 3 values at once (low, median, and high): besides adapting the loss function, how would you label the 3 values for each data point? Since we only have one label per point, would you duplicate that label?
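
One common answer, sketched below (PyTorch assumed; this is not the video's code): the label is not duplicated in the data. Each of the three output heads is compared against the same single target through its own pinball-loss term, and the terms are summed.

```python
import torch
import torch.nn as nn

quantiles = [0.1, 0.5, 0.9]

# One shared body, three outputs: one head per quantile.
model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, len(quantiles)))

def multi_quantile_loss(preds, target):
    """preds: (batch, 3); target: (batch, 1).
    Each column of preds is scored against the SAME label."""
    losses = []
    for i, q in enumerate(quantiles):
        err = target[:, 0] - preds[:, i]  # positive => under-prediction
        losses.append(torch.max(q * err, (q - 1) * err).mean())
    return sum(losses)

# Toy training step on synthetic data.
x = torch.randn(64, 1)
y = 3 * x + torch.randn(64, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
opt.zero_grad()
multi_quantile_loss(model(x), y).backward()
opt.step()
```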

  • @emiya_muljomdaoo · 3 years ago

    Well explained, thank you for the video!

  • @ziangxu7751 · 3 years ago

    Great interpretation! Thank you!

    • @CodeEmporium · 3 years ago

      Thanks a ton for watching :)

  • @patrickduhirwenzivugira4729 · 2 years ago

    Thank you for the video. I have a question. You have fit the LGBMRegressor with default hyperparameter values. How would one tune these hyperparameters, and which metric can be used to get the best model?
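
One reasonable approach (a sketch, not from the video): cross-validate and score with the pinball loss at the same quantile the model is fit for, since that is the quantity the quantile objective minimizes. The parameter grid and data below are illustrative.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_pinball_loss, make_scorer
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = 2 * X.ravel() + rng.normal(scale=X.ravel())  # heteroscedastic toy data

alpha = 0.9  # quantile being modeled (e.g. the upper bound)
scorer = make_scorer(mean_pinball_loss, alpha=alpha, greater_is_better=False)

search = GridSearchCV(
    LGBMRegressor(objective="quantile", alpha=alpha),
    param_grid={"num_leaves": [15, 31, 63], "learning_rate": [0.05, 0.1]},
    scoring=scorer,
    cv=5,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)  # best pinball loss
```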

  • @borisn.1346 · 3 years ago · +1

    You could compute lower and upper bound with good ol' OLS regression as well.
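
For reference, a sketch of that classical approach (statsmodels assumed). Note it leans on the usual OLS assumptions, roughly normal and homoscedastic residuals, which quantile regression does not need.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2 * x + rng.normal(scale=2, size=200)

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()

# 80% prediction interval: roughly a 10th/90th-percentile band.
frame = res.get_prediction(X).summary_frame(alpha=0.2)
lower, upper = frame["obs_ci_lower"], frame["obs_ci_upper"]
```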

  • @Im-Assmaa · 1 year ago

    Thank you for the video. I have a question: how can I compute the quantiles for a specific p using the Rankit-Cleveland method? It is used to estimate value at risk with quantile regression, and I am kind of stuck. Please help!

  • @eliaskonig2526 · 8 months ago

    Thanks for the video!
    Is it also possible to use it in combination with dummy variables?
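
Dummy variables need no special handling; they enter as ordinary 0/1 regressor columns. A minimal sketch (pandas and statsmodels assumed, not the video's code):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "x": rng.uniform(0, 10, 300),
    "group": rng.choice(["a", "b", "c"], 300),
})
y = 2 * df["x"] + (df["group"] == "b") * 5 + rng.normal(size=300)

# One-hot encode the categorical column, then fit a median regression.
X = pd.get_dummies(df, columns=["group"], drop_first=True, dtype=float)
res = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.5)
print(res.params)
```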

  • @user-zu2sy2lq6t · 6 months ago

    Nice explanation, thanks!

  • @charan7233 · 2 years ago

    Amazing! Thank you

  • @vladislavlevitin · 2 years ago

    Excellent video!

  • @brokecoder · 1 year ago · +1

    Hmm, I used to use bootstrapping to get percentile bounds so that I could derive confidence intervals, but this seems like another approach.
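
For contrast, a rough sketch of the bootstrap approach described here (scikit-learn's LinearRegression as a stand-in model and 200 resamples, both my assumptions): refit on resampled data and take percentiles of the refitted predictions. This bounds the fitted mean, whereas quantile regression models the spread of the targets themselves.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = 2 * X.ravel() + rng.normal(scale=2, size=300)

preds = []
for _ in range(200):
    idx = rng.integers(0, len(X), len(X))  # resample with replacement
    preds.append(LinearRegression().fit(X[idx], y[idx]).predict(X))

# Percentile bounds of the fitted mean across bootstrap refits.
lower, upper = np.percentile(preds, [5, 95], axis=0)
```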

  • @swatisingh4041 · 1 year ago · +2

    Hi! Very informative video. Can you please share how to apply quantile regression when there is more than one independent feature (X1, X2, X3, ...)? Thanks!
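
A quick sketch of the multi-feature case (statsmodels' QuantReg assumed; the video's LightGBM model likewise accepts a multi-column X unchanged):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))  # three features: X1, X2, X3
y = 1 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=300)

X_design = sm.add_constant(X)
res = sm.QuantReg(y, X_design).fit(q=0.9)  # 90th-percentile fit
print(res.params)  # intercept plus one coefficient per feature
```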

  • @deepanshudashora5887 · 3 years ago

    Awesome

  • @TK-mv6sq · 2 years ago

    Thank you!

  • @shnibbydwhale · 3 years ago · +2

    Great video, but I am a little confused. How is using quantile regression fundamentally different from using linear regression and reporting both the predicted value from the linear regression model and point prediction intervals for each prediction?

    • @zxynj · 2 years ago · +1

      I think the traditional method requires normality of residuals to estimate the prediction interval, while quantile regression lets the model learn the quantile prediction through a specific loss function (an L1-style loss that penalizes errors on the wrong side of the target percentile more heavily; see the sketch after this thread). Therefore, quantile regression does not require the linear regression assumptions, at least not the normality of residuals. This is just my understanding of the concept.

    • @snehanshusaha890 · 1 year ago

      @zxynj, yes. On top of that, quantile regression offers confidence intervals around its predictions, which mean-based predictions don't.

    • @mansoocho2351 · 6 months ago

      @zxynj Great explanation! I would like to add a small point: quantile regression is a great fit for skewed distributions, since it does not require the data to be normally distributed.
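
The asymmetric loss discussed in this thread, written out as a plain function (the standard pinball loss; not the video's code):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Standard quantile (pinball) loss, averaged over samples."""
    err = np.asarray(y_true) - np.asarray(y_pred)  # positive => under-prediction
    return np.mean(np.maximum(q * err, (q - 1) * err))

# Under-predicting the 90th percentile costs 9x more than over-predicting it:
print(pinball_loss([10.0], [8.0], q=0.9))   # 0.9 * 2 = 1.8
print(pinball_loss([10.0], [12.0], q=0.9))  # 0.1 * 2 = 0.2
```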

  • @cientivic7341 · 2 years ago · +1

    I'm wondering: shouldn't the output of the model form a line instead of scattered points?
    Like, doesn't the model basically identify each quantile and use it as a prediction without any kind of smoothing (so it would become a line in the graph)?

    • @andytucker9991 · 1 year ago

      Hi, I also had the same question, but I think the reason is that he used a LightGBM regressor instead of OLS to get the predicted values. The predictions from a LightGBM model do not fall on a straight line, i.e., they are nonlinear, unlike those of an OLS regressor.

  • @jayjhaveri1906 · 4 months ago

    love you

  • @johannaw2031 · 1 year ago

    I'm sorry, but the math behind it is still a riddle. Did you say that if we estimate the 10th percentile and the observed value is higher than the predicted value, then we want to penalize that, so we take 0.9 * |residual|? But if we estimate the 10th percentile and the observed value is lower than the predicted value, then this is more "expected" and thus we only penalize it by 0.1 * |residual|?
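
For what it's worth, a worked check under the standard pinball-loss convention, which puts the weights the other way around: at the 10th percentile, roughly 90% of observations should land above the fit, so that side is the expected, cheap one.

```python
# Worked example at q = 0.1 (10th percentile), standard convention.
q = 0.1
residual = 2.0  # |observed - predicted|

above = q * residual        # observed ABOVE prediction: 0.1 * 2 = 0.2 (expected case)
below = (1 - q) * residual  # observed BELOW prediction: 0.9 * 2 = 1.8 (penalized hard)
print(above, below)
```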

  • @amarpreet3519 · 9 months ago

    not a good explanation