Quantile Regression - EXPLAINED!

  • Published: 17 Mar 2021
  • Quantile regression - Hope the explanation wasn't too all over the place
    Follow me on M E D I U M: towardsdatascience.com/likeli...
    CODE: github.com/ajhalthor/quantile...

Comments • 35

  • @rishisharma8311
    @rishisharma8311 3 years ago +10

    Dude, the concepts you teach are new and unheard of. I always get to learn something new watching your videos. Keep it coming!

  • @90benj
    @90benj 3 years ago +15

    What would be extremely helpful for a new data scientist or machine learning enthusiast is a "model zoo", so to speak: a short summary of the most-used models, what they are good at, and what their weaknesses are, plus maybe a couple of advanced models built on those base models. Often I have no overview of what I am missing.

  • @PD-vt9fe
    @PD-vt9fe 3 years ago +1

    Thank you for another awesome video. Didn't expect it this soon, though. Keep it up!

  • @bellahuang8522
    @bellahuang8522 6 months ago

    I'm taking a machine learning class at a policy school, so you can imagine how badly it went when my professor tried to explain this for 30 minutes in class. Your visuals gave me very good intuition. TY!

  • @90benj
    @90benj 3 years ago +2

    This is awesome! Really clear and understandable. I will probably try my hand at that quantile regressor NN; it sounds fun.

    • @CodeEmporium
      @CodeEmporium  3 years ago +1

      Awesome! Lemme know how that shakes out. Fun stuff! And thanks

  • @remimoise8908
    @remimoise8908 2 years ago +2

    Great video, thanks!
    Regarding the neural network that can return 3 values at once (low, median, and high): besides adapting the loss function, how would you label the 3 values for each data point? Since we only have one label per point, would you duplicate that label?
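One way to think about it (a rough sketch of my own, not code from the video): you don't duplicate the label. The single label is scored against every output head, and only the asymmetric pinball weighting differs per head.

```python
# Hypothetical three-headed quantile network in PyTorch (an assumption, not
# the video's implementation). One shared label y scores all three outputs.
import torch
import torch.nn as nn

QUANTILES = [0.1, 0.5, 0.9]

class QuantileNet(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.head = nn.Linear(32, len(QUANTILES))  # one output per quantile

    def forward(self, x):
        return self.head(self.body(x))

def pinball_loss(pred, y):
    # y: (batch, 1); pred: (batch, 3). The same label is compared against
    # each quantile output; only the penalty weighting differs.
    total = 0.0
    for i, q in enumerate(QUANTILES):
        err = y[:, 0] - pred[:, i]
        total = total + torch.max(q * err, (q - 1) * err).mean()
    return total
```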

  • @shambhaviaggarwal9977
    @shambhaviaggarwal9977 1 year ago

    The explanation was pretty clear. Thanks!

  • @patrickduhirwenzivugira4729
    @patrickduhirwenzivugira4729 2 years ago

    Thank you for the video. I have a question: you fit the LGBMRegressor with default hyperparameter values. How would one tune these hyperparameters, and which metric should be used to select the best model?
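A hedged sketch of one common approach (my assumption, not shown in the video): tune with cross-validation and score each quantile model by its pinball loss at the same alpha it was trained for.

```python
# Sketch: hyperparameter search for an LGBM quantile model, selected by
# pinball loss (the param_grid values are illustrative, not from the video).
from lightgbm import LGBMRegressor
from sklearn.metrics import make_scorer, mean_pinball_loss
from sklearn.model_selection import GridSearchCV

alpha = 0.9  # the quantile being fit
model = LGBMRegressor(objective="quantile", alpha=alpha)

# Lower pinball loss is better, hence greater_is_better=False.
scorer = make_scorer(mean_pinball_loss, alpha=alpha, greater_is_better=False)

grid = GridSearchCV(
    model,
    param_grid={"num_leaves": [15, 31, 63], "learning_rate": [0.05, 0.1]},
    scoring=scorer,
    cv=5,
)
# grid.fit(X_train, y_train); grid.best_params_
```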

  • @emiya_muljomdaoo
    @emiya_muljomdaoo 3 years ago

    well explained, thank you for the video!

  • @kabeerjaffri4015
    @kabeerjaffri4015 3 years ago +1

    Great video as usual!!

  • @Im-Assmaa
    @Im-Assmaa 1 year ago

    Thank you for the video. I have a question: how can I compute the quantiles for a specific p using the Rankit/Cleveland method? It is used to estimate value at risk with quantile regression, and I am kind of stuck. Please help!
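I can't vouch for the exact "Rankit/Cleveland" convention, but as a heavily hedged sketch: plotting-position methods assign probability p_i = (i - 0.5)/n (Cleveland's convention) to the i-th order statistic, and the quantile at any level p is interpolated from there.

```python
# Sketch only: empirical quantile via Cleveland-style plotting positions,
# applied to a toy value-at-risk estimate. Conventions vary; verify yours.
import numpy as np

def empirical_quantile(sample, p):
    x = np.sort(np.asarray(sample))
    n = len(x)
    positions = (np.arange(1, n + 1) - 0.5) / n  # p_i = (i - 0.5) / n
    return np.interp(p, positions, x)

returns = np.random.default_rng(0).normal(0, 0.02, 250)  # toy daily returns
var_5pct = -empirical_quantile(returns, 0.05)            # 5% VaR estimate
```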

  • @eliaskonig2526
    @eliaskonig2526 8 months ago

    Thanks for the video!
    Is it also possible to use it in combination with dummy variables?
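It should be, as far as I know: dummy-encoded columns enter a quantile regression like any other numeric feature. A small sketch (my own, using statsmodels rather than the video's code):

```python
# Sketch: quantile regression with a categorical feature expanded to dummies.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "y": [3.1, 4.0, 2.2, 5.5, 4.4, 3.3, 4.8, 2.9],
    "x": [1.0, 2.0, 1.5, 3.0, 2.5, 1.8, 2.8, 1.2],
    "group": ["a", "b", "a", "b", "b", "a", "b", "a"],
})
# C(group) expands the categorical into dummy columns automatically.
fit = smf.quantreg("y ~ x + C(group)", df).fit(q=0.5)
print(fit.params)
```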

  • @ziangxu7751
    @ziangxu7751 3 years ago

    Great interpretation! Thank you!

  • @borisn.1346
    @borisn.1346 3 years ago +1

    You could compute lower and upper bounds with good ol' OLS regression as well.
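Right, though that route leans on the normality-of-residuals assumption. A sketch of what the commenter describes (my own, not from the video):

```python
# Sketch: OLS prediction intervals from statsmodels (assumes normal residuals).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.uniform(0, 10, 100))
y = 2 * X[:, 1] + rng.normal(0, 1, 100)

res = sm.OLS(y, X).fit()
# alpha=0.2 gives an 80% interval: obs_ci_lower/upper are the 10th/90th bounds.
frame = res.get_prediction(X).summary_frame(alpha=0.2)
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]].head())
```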

  • @swatisingh4041
    @swatisingh4041 1 year ago +2

    Hi! Very informative video. Can you please share how to apply quantile regression when there is more than one independent feature (X1, X2, X3, ...)? Thanks!
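Quantile regression extends to multiple features unchanged; you just pass the full design matrix. A minimal sketch (my own, assuming statsmodels rather than the video's code):

```python
# Sketch: 90th-percentile regression on three features at once.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                        # X1, X2, X3
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

res = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.9)
print(res.params)  # intercept plus one coefficient per feature
```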

  • @brokecoder
    @brokecoder 1 year ago +1

    Hmm... I used to use bootstrapping to get percentile bounds so that I could derive confidence intervals, but this seems like another approach.
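For contrast, a sketch of that bootstrap approach (hypothetical helper, not from the video). Note it yields a confidence interval for the model's prediction, whereas quantile regression targets the spread of the data itself:

```python
# Sketch: percentile-bootstrap bounds around a regressor's predictions.
import numpy as np
from lightgbm import LGBMRegressor

def bootstrap_bounds(X, y, X_new, n_boot=200, lo=5, hi=95, seed=0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # resample rows with replacement
        preds.append(LGBMRegressor().fit(X[idx], y[idx]).predict(X_new))
    preds = np.vstack(preds)
    return np.percentile(preds, lo, axis=0), np.percentile(preds, hi, axis=0)
```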

  • @cientivic7341
    @cientivic7341 2 years ago +1

    I'm wondering, shouldn't the output of the model form a line instead of scattered points?
    Like... what the model does is basically identify each quantile and use it as a prediction without any type of smoothing (thus, it would become a line in the graph)?

    • @andytucker9991
      @andytucker9991 1 year ago

      Hi, I had the same question, but I think the reason is that he used a LightGBM regressor instead of OLS to get the predicted values. The predictions given by a LightGBM model do not fall on a straight line; i.e., they are non-linear, unlike those of an OLS regressor.
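A quick way to see the difference (my own sketch, not the video's code): a linear quantile fit draws a straight line, while a tree-based model like LightGBM produces piecewise-constant predictions that look scattered when plotted.

```python
# Sketch: linear vs tree-based 90th-percentile fits on the same data.
import numpy as np
import statsmodels.api as sm
from lightgbm import LGBMRegressor

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 10, 300)).reshape(-1, 1)
y = 2 * X[:, 0] + rng.normal(0, 1 + X[:, 0])         # noise grows with X

# Linear quantile regression: predictions fall on a straight line.
line = sm.QuantReg(y, sm.add_constant(X)).fit(q=0.9).predict(sm.add_constant(X))

# Gradient-boosted trees: piecewise-constant, hence "scattered"-looking points.
steps = LGBMRegressor(objective="quantile", alpha=0.9).fit(X, y).predict(X)
```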

  • @vladislavlevitin
    @vladislavlevitin 2 years ago

    Excellent video!

  • @user-zu2sy2lq6t
    @user-zu2sy2lq6t 6 months ago

    Nice explanation, thanks!

  • @shnibbydwhale
    @shnibbydwhale 3 years ago +2

    Great video, but I am a little confused. How is using quantile regression fundamentally different from using linear regression and reporting both the predicted value from the linear regression model and a prediction interval for each prediction?

    • @zxynj
      @zxynj 2 years ago +1

      I think the traditional method requires normality of residuals to estimate the prediction interval, while quantile regression lets the model learn the quantile prediction through a specific loss function (an L1-style loss that penalizes errors on the wrong side of the percentile more heavily). Therefore, quantile regression does not require the linear regression assumptions (at least the normality-of-residuals part). This is just my understanding of the concept.

    • @snehanshusaha890
      @snehanshusaha890 1 year ago

      @@zxynj, yes. On top of that, quantile regression offers confidence intervals around its predictions, which mean-based predictions don't.

    • @mansoocho2351
      @mansoocho2351 6 months ago

      @@zxynj Great explanation! I would like to add a small example: quantile regression is a great fit for skewed distributions, since it does not require the data to be normally distributed.
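A sketch of that skewed-data point (my own toy example, not from the video): with right-skewed noise, a normal-theory OLS bound miscovers while a quantile fit lands near its nominal level.

```python
# Sketch: coverage of a 90th-percentile bound under lognormal (skewed) noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 2000)
y = x + rng.lognormal(0, 1, 2000)                    # right-skewed residuals
X = sm.add_constant(x)

ols_hi = sm.OLS(y, X).fit().get_prediction(X).summary_frame(alpha=0.2)["obs_ci_upper"]
qr_hi = sm.QuantReg(y, X).fit(q=0.9).predict(X)

print((y <= ols_hi).mean())  # normal-theory bound: coverage drifts from 0.90
print((y <= qr_hi).mean())   # quantile bound: close to 0.90
```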

  • @charan7233
    @charan7233 2 years ago

    Amazing! Thank you

  • @deepanshudashora5887
    @deepanshudashora5887 3 years ago

    Awesome

  • @TK-mv6sq
    @TK-mv6sq 2 years ago

    Thank you!

  • @jayjhaveri1906
    @jayjhaveri1906 4 months ago

    love you

  • @johannaw2031
    @johannaw2031 1 year ago

    I'm sorry, but the math behind it is still a riddle to me. Did you say that if we estimate the 10th percentile and the observed value is higher than the predicted value, then we want to penalize that, so we take 0.9*|residual|? But if we estimate the 10th percentile and the observed value is lower than the predicted value, then this is more "expected" and thus we only penalize it by 0.1*|residual|?
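For reference, the standard pinball loss (my own summary, not a quote from the video) weights it the other way around for the 10th percentile: points above the prediction are the "expected" cheap case, and points below are penalized heavily, which is what pushes the fitted line down to the 10th percentile.

```python
# Pinball loss for quantile q: cheap on the expected side, expensive otherwise.
def pinball(y, pred, q):
    err = y - pred
    return q * err if err >= 0 else (q - 1) * err

# q = 0.1: about 90% of points should sit ABOVE the 10th-percentile prediction.
print(pinball(y=5, pred=3, q=0.1))  # above: 0.1 * |2| = 0.2 (mild penalty)
print(pinball(y=1, pred=3, q=0.1))  # below: 0.9 * |2| = 1.8 (heavy penalty)
```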

  • @amarpreet3519
    @amarpreet3519 9 months ago

    not a good explanation