When Should You Use Regression Methods?

  • Published: 28 Oct 2024

Comments • 31

  • @paragonprogrammer1090 · 3 years ago · +5

    Your videos are always on point. You make Data science a lot simpler....thanks a lot for explaining in detail

    • @RichardOnData · 3 years ago

      My pleasure! That's the stated goal of my channel!

  • @arcadevampire · 2 years ago · +1

    It would be great if you could do a deep dive into generalized models, lasso, ridge, and elastic nets. Your explanations are always very straightforward. Cheers

    • @RichardOnData · 2 years ago

      This is coming up in the pipeline soon. Thanks!

  • @kaushikroychowdhury7787 · 3 years ago · +3

    Please make more videos of this type, on why and when to use different models.
    Appreciate your work.
    Thank you!

    • @RichardOnData · 3 years ago · +2

      I will try to do exactly that! "When should you use PCA" is right around the corner!

  • @gregmaland5318 · 3 years ago

    Wow! This was way over my head. Yet, I still think I got something out of it.

  • @jorislimonier · 3 years ago · +1

    Currently writing my thesis on High Dimensional Regression Models. Such an interesting topic 👌🏻👌🏻
    Great video!

    • @RichardOnData · 3 years ago

      Awesome! Thank you; yes, isn't it an exciting topic?

    • @jorislimonier · 3 years ago

      @@RichardOnData It is. Specifically LASSO and determining which parameters to throw away... pretty cool!

  • @chacmool2581 · 2 years ago

    I quite often see regression used without checks for assumptions and model fit. I see people using SLR without understanding it, in fact confusing linearity with collinearity, when those two things are separate and distinct.
    I do think BLR is a bit trickier than SLR for two reasons. One, the coefficients that come out of the glm() function in R are on the log-odds scale, so you need to exponentiate them. Two, the response variable is the log odds, or just the odds after exponentiating the coefficients.
    The other tricky parts of logistic regression are the assumption of linearity between continuous predictors and the logit of the response, checked with the Box-Tidwell test, and, for OLR, the proportional odds assumption, checked with a Brant test.
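    The exponentiation step mentioned above can be sketched in plain Python (the coefficient value here is made up for illustration; in R it would come out of glm() with family = binomial):

    ```python
    import math

    # Hypothetical coefficient from a logistic regression fit,
    # reported on the log-odds scale.
    beta = 0.6931  # log-odds change per one-unit increase in the predictor

    # Exponentiating turns a log-odds coefficient into an odds ratio.
    odds_ratio = math.exp(beta)
    print(round(odds_ratio, 2))  # ~2.0: the odds roughly double per unit increase
    ```

    An odds ratio near 2 reads as "each one-unit increase in the predictor roughly doubles the odds of the event," which is usually the form people want to report.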

  • @dmitrytkachuk2304 · 3 years ago · +2

    Thanks for the video. Richard, can you explain more about classification methods, for example when we should use logistic regression, SVM, or other methods? In your opinion, is logistic regression still relevant in modern data science?

  • @bassthunder8111 · 3 years ago · +2

    Great video! In another video you could tackle an adjacent problem: "interpretable" ML methods like partial dependence profiles, variable importance measures, and instance-based methods.

    • @RichardOnData · 3 years ago

      Great suggestion! I think that would help a lot of people.

  • @vishalthatsme · 3 years ago · +1

    Using L1 for feature selection - I’ve seen it mentioned in various places but never explained clearly, in case you’re looking for future topic ideas 😉. Also, detecting/dealing with multicollinearity - tricky and a little confusing.... Also, GLMs... I could go on and on...

    • @RichardOnData · 3 years ago · +1

      Those are three excellent video ideas. I'll roll them all into the video pipeline!

    • @vishalthatsme · 3 years ago

      @@RichardOnData keep up the great work 👍🏽

  • @shyamgurunath5876 · 3 years ago · +1

    Good tutorial, Richard. Can you do a video on the linear regression assumptions? And can I use an ensemble of linear and ridge regression to predict the response variable?

    • @RichardOnData · 3 years ago

      You certainly can ensemble that way, though I've never done that myself nor heard of it being done. Now, combining the Lasso and Ridge Regression penalties is an approach in and of itself, known as the Elastic Net. I use that one all the time. Great video ideas!
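      The Elastic Net penalty Richard mentions can be sketched as a weighted mix of the lasso (L1) and ridge (L2) penalties; the function name, parameter names, and weight values below are illustrative, not from the video:

      ```python
      def elastic_net_penalty(weights, lam=1.0, alpha=0.5):
          """Elastic Net penalty: lam * (alpha * L1 + (1 - alpha) / 2 * L2).

          alpha = 1 recovers the pure lasso penalty, alpha = 0 the pure
          ridge penalty; values in between blend the two.
          """
          l1 = sum(abs(w) for w in weights)        # lasso term
          l2 = sum(w * w for w in weights)         # ridge term
          return lam * (alpha * l1 + (1 - alpha) * 0.5 * l2)

      w = [0.5, -1.0, 0.0, 2.0]                    # hypothetical coefficients
      print(elastic_net_penalty(w, alpha=1.0))     # pure lasso: 3.5
      print(elastic_net_penalty(w, alpha=0.0))     # pure ridge: 2.625
      ```

      Because the L1 term can drive coefficients exactly to zero while the L2 term shrinks correlated coefficients together, the blend gets some of the feature selection of the lasso with the stability of ridge.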

  • @jasonloghry · 1 year ago

    I really enjoyed this video, so very helpful! Would you have any interest in making a video about the basics of interactions?

  • @Trazynn · 3 years ago · +1

    At university they only taught me the formulas, with barely any context. And even those weren't complete. I had to learn everything else from YouTube.

    • @RichardOnData · 3 years ago

      Yeah... I get the feeling that's a common experience for far too many. I hope this video was helpful.

  • @unmanbarman8619 · 3 years ago

    Hi, can you please do a video on how much and what to learn in Python for data science/data analysis, the same as you did for SQL?

  • @206Seattle · 2 years ago

    Thank you Richard!

  • @moisesdiaz9852 · 3 years ago

    Great explanation as always

  • @prod.kashkari3075 · 3 years ago · +2

    Ugh, I hate when they label logistic regression as a classification algorithm in machine learning. It really isn't, right?

    • @RichardOnData · 3 years ago

      Logistic regression can be trained in ML style using stochastic gradient descent. While its fitted values consist of the log odds of the event "success" (a quantity on a negative-to-positive-infinity scale), these can be converted to probabilities. The probability can then be used for classification purposes (i.e. if Observation 1 has a >0.5 probability of being in Class A, classify it as Class A). Ergo, it's messy looking to define a "regression" method as a "classification algorithm", but it can indeed serve as one.
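      The log-odds-to-probability-to-class pipeline described above can be sketched in a few lines of Python (the fitted log-odds value and the class names are made up for illustration):

      ```python
      import math

      def log_odds_to_probability(log_odds):
          # Inverse logit: maps log odds on (-inf, inf) to a probability in (0, 1).
          return 1.0 / (1.0 + math.exp(-log_odds))

      # Hypothetical fitted log odds for one observation.
      log_odds = 1.2
      p = log_odds_to_probability(log_odds)

      # Classify with the usual 0.5 threshold.
      label = "Class A" if p > 0.5 else "not Class A"
      print(round(p, 3), label)  # ~0.769, classified as Class A
      ```

      The 0.5 cutoff is the conventional default; in practice it is often tuned to the costs of false positives versus false negatives.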

    • @prod.kashkari3075 · 3 years ago

      @@RichardOnData Oh, I see. Okay.

  • @olw4196 · 2 years ago

    Awesome!

  • @jorgepableau · 3 years ago · +1

    Nice shirt hehe 🧐