GLM - Expo. Family - 5 - Definition and examples

  • Published: 22 Jan 2025

Comments • 10

  • @edwardschenk5591 · 10 months ago

    Just wanted to say thanks, your videos are so helpful.

  • @raltonkistnasamy6599 · 9 months ago

    Thank you, man. I've been struggling to understand GLMs for so long. Thanks a lot.

  • @pedrocolangelo5844 · 3 years ago +1

    Meerkat, thank you again for uploading such good lectures. I'm learning a lot from you.
    I have a small question about a specific step in the normal distribution derivation. At 11:06, was the manipulation of the term 1 / sqrt(2 π sigma) done like this (a worked version is sketched below)?
    > Apply ln to the term 1 / sqrt(2 π sigma)
    > Write it as ln(1) - ln(sqrt(2 π sigma))
    > ln(1) = 0
    > Exponentiate -ln(sqrt(2 π sigma))
    I don't know if it was exactly like that. Was there another step, or is that it? Thank you!
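
    A worked version of those steps, as a sketch only: it assumes the normalizing factor is the usual Gaussian one, written here with sigma^2 under the square root (the video may place sigma outside the root, but the algebra is identical).

        \frac{1}{\sqrt{2\pi\sigma^2}}
          = \exp\!\left( \ln\frac{1}{\sqrt{2\pi\sigma^2}} \right)   % x = e^{\ln x}
          = \exp\!\left( \ln 1 - \ln\sqrt{2\pi\sigma^2} \right)     % \ln(a/b) = \ln a - \ln b
          = \exp\!\left( -\ln\sqrt{2\pi\sigma^2} \right)            % \ln 1 = 0
          = \exp\!\left( -\tfrac{1}{2}\ln(2\pi\sigma^2) \right)

    So the listed steps are exactly the identities x = exp(ln x), ln(a/b) = ln a - ln b, and ln 1 = 0; no extra step is needed for the rewrite itself.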

  • @raltonkistnasamy6599 · 9 months ago

    Thanks.

  • @ahjiba · 3 years ago +3

    So that's how they get the logit function.

    • @MeerkatStatistics · 3 years ago +4

      I think the main reason was to map the values from (0, 1) to (-infinity, +infinity). But of course it is very convenient that it's also the natural parameter in the exponential family, which simplifies the calculations later on (see the sketch below).
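
      A short sketch of why the logit is the natural parameter, assuming the usual exponential-family form exp(y θ - b(θ) + c(y)) (the exact parametrization in the video may differ). Starting from the Bernoulli pmf p^y (1-p)^(1-y):

          p^y (1-p)^{1-y}
            = \exp\big( y \ln p + (1-y)\ln(1-p) \big)
            = \exp\Big( y \ln\tfrac{p}{1-p} + \ln(1-p) \Big)

      so the natural parameter is θ = ln(p/(1-p)) = logit(p), and inverting it gives p = 1/(1 + e^{-θ}), the logistic function.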

    • @ahjiba · 3 years ago

      @@MeerkatStatistics Thanks for the response!
      I have a question about logistic regression.
      In the StatQuest video on logistic regression, it says that after applying the logit function to each data point's y-value, turning each data point into essentially (x, log(y/(1-y))), a line of best fit is fitted to the transformed points. Does this mean the linear predictor is simply the line of best fit through the data points after the link function is applied?
      The video link is ruclips.net/video/vN5cNN2-HWE/видео.html
      The relevant part starts around 7:30 for context, and 8:35 is where the video says it's the best-fitting line.

    • @MeerkatStatistics · 3 years ago +2

      @@ahjiba No.
      The transformation is not applied to the y-values; it's applied to the p's. You cannot take log(y/(1-y)) on y's that are 0 or 1... you will get +/- infinity. (A sketch of what the GLM actually fits is below.)
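
      A minimal sketch of this point in code, assuming plain NumPy (the variable names and the toy data are made up for illustration): the logit of a raw 0/1 response is infinite, so a GLM instead maximizes the Bernoulli likelihood, here via a few IRLS (iteratively reweighted least squares) steps.

          import numpy as np

          # Toy data: y is strictly 0 or 1, so log(y / (1 - y)) is +/- infinity
          # and cannot serve as a response for ordinary least squares.
          rng = np.random.default_rng(0)
          x = rng.normal(size=200)
          p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))
          y = rng.binomial(1, p_true).astype(float)

          with np.errstate(divide="ignore"):
              print(np.log(y / (1.0 - y))[:5])   # prints +/- inf values

          # What the GLM does instead: maximize the Bernoulli likelihood with a
          # logit link, here via iteratively reweighted least squares (IRLS).
          X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
          beta = np.zeros(2)
          for _ in range(25):
              eta = X @ beta                     # linear predictor
              p = 1.0 / (1.0 + np.exp(-eta))     # fitted probabilities (inverse logit)
              W = p * (1.0 - p)                  # working weights
              z = eta + (y - p) / W              # working response (uses p's, not raw y's)
              beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

          print(beta)   # roughly recovers the true coefficients (0.5, 2.0)

      The fitted "line" is the linear predictor eta = X beta coming out of this likelihood fit; it is not an ordinary least-squares line through logit-transformed 0/1 y-values.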

    • @flipflipshift855 · 2 years ago

      It seems a bit artificial, since the Bernoulli distribution is discrete, and interpreting it as p^y (1-p)^(1-y) to "fill in" the values between 0 and 1 isn't the immediately obvious way to do it. But this adds a lot more reason to view it as natural.