first intro to regression (brms)

  • Published: 21 Aug 2024
  • Hobby

Comments • 13

  • @andreasjemstedt3486 2 years ago +5

    This was great! Please keep on sharing. I've just begun my brms journey, and this is really helpful. I also appreciate your pedagogical talent.

    • @hankstevens7628 2 years ago +2

      There are tons of great examples out there. I recommend material by Richard McElreath, but other folks have more cookbook-like examples.

  • @ManILoveGoats 7 months ago

    This was very useful! Thank you, Hank! :)

  • @osgrinds1687 2 years ago +1

    This was awesome! I was wondering how we would go about plotting a regression fit from brm() using ggplot2 (say, showing the fitted curve over the actual data points)?

    • @hankstevens7628 2 years ago

      Here is a nice example of doing that: cran.r-project.org/web/packages/tidybayes/vignettes/tidy-brms.html#posterior-predictions-kruschke-style
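A minimal sketch of the kind of plot asked about above, using brms' fitted() with ggplot2 directly rather than the tidybayes approach shown in the linked vignette. The data frame d and its columns x and y are hypothetical placeholders, not objects from the video:

  # Overlay the posterior mean fit and 95% credible ribbon on the raw data.
  library(brms)
  library(ggplot2)

  fit <- brm(y ~ x, data = d)

  # Expected values on a grid of predictor values;
  # fitted() returns Estimate, Est.Error, Q2.5, and Q97.5 columns.
  newd <- data.frame(x = seq(min(d$x), max(d$x), length.out = 100))
  pred <- cbind(newd, fitted(fit, newdata = newd))

  ggplot() +
    geom_point(data = d, aes(x = x, y = y)) +
    geom_ribbon(data = pred, aes(x = x, ymin = Q2.5, ymax = Q97.5), alpha = 0.3) +
    geom_line(data = pred, aes(x = x, y = Estimate))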

  • @prashantbhandari7102 2 years ago

    Great explanation, Hank! Do you by chance have other videos, lectures, or blogs where you discuss more complicated cases? By complicated cases I mean ones where covariates are correlated with predictors, are confounded with a certain variable, or involve mediation or moderation.

  • @iirolenkkari9564 2 years ago

    Very nice video, thank you.
    I'm wondering how to model the covariance structure in a Bayesian longitudinal setting, similar to covariance patterns such as compound symmetry, autoregressive, Toeplitz, etc. in the frequentist world, where taking serial correlation into consideration narrows the confidence intervals of the parameters.
    Does a Bayesian random intercept always introduce compound symmetry, similar to a random intercept in a frequentist linear mixed effects model? I suspect taking serial correlation into account would narrow the posterior distributions of the model parameters, strengthening the Bayesian inference. However, I'm not at all sure whether my thoughts are anywhere near correct.
    The brms package seems like a very valuable resource, but the parts about covariance structures seem to be still in progress.
    If anyone has good theoretical (and, why not, practical) Bayesian references regarding these covariance modeling issues (serial correlation etc.), I would appreciate them very much.

    • @hankstevens7628 1 year ago

      One of the many good things about the brms and rstanarm packages is that they use the same syntax as the lme4 package. Anything you want to play around with using known data, you can do quickly in lme4 and then move to brms or rstan (via rstanarm) to use a Bayesian approach.

    • @iirolenkkari9564 1 year ago

      @hankstevens7628 Thank you.
      I recently ran into 'Applied longitudinal data analysis in brms and the tidyverse' by Solomon Kurz. From that release I got the impression that the error covariance structure was still under construction in brms. It seems I got that wrong; based on your comments, the package itself already makes it possible to modify the error covariance structure.
      I have to dive more deeply into the vignettes and reference manual. A book about brms would be priceless for a person like me, who isn't fully able to quickly absorb information from the vignettes and/or reference manual.
      But as you pointed out, since the syntax is the same as for lme4, maybe the move would be to get my hands on frequentist lme4 books to learn the syntax, then just move seamlessly to brms :)
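A rough sketch of the points raised in this thread: the lme4 formula syntax carries over to brm() unchanged, and recent brms versions also accept residual correlation terms such as ar() and cosy() directly in the model formula. The data frame d with columns y, time, and id is a hypothetical placeholder, not something from the video:

  library(lme4)
  library(brms)

  # Frequentist fit: a random intercept per subject induces a
  # compound-symmetry-like covariance among repeated measures.
  m_freq <- lmer(y ~ time + (1 | id), data = d)

  # The same formula works unchanged in brms.
  m_bayes <- brm(y ~ time + (1 | id), data = d)

  # Residual serial correlation can be added in the formula itself,
  # e.g. a first-order autoregressive structure within subjects ...
  m_ar1 <- brm(y ~ time + (1 | id) + ar(time = time, gr = id, p = 1), data = d)

  # ... or compound symmetry for the residuals.
  m_cs <- brm(y ~ time + cosy(time = time, gr = id), data = d)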

  • @dhanushka5 9 months ago

    Code, finally!!!

  • @Belphagor13 1 year ago

    Sorry to ask, but how does one specify hyperparameters in brms?

    • @Belphagor13 1 year ago +1

      I think it is a bit more complicated than '|' vs '||'.

    • @hankstevens7628 1 year ago

      The purpose of this video is to show rank beginners that doing Bayesian regression in R is almost as easy as REML. How you specify hyperparameters depends on what they are for. If you just want to estimate a hyperparameter as a variance component, just specify the random effect as you would in lmer().
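A minimal sketch of that last point, with set_prior() used to place explicit priors on the hyperparameters of a group-level (random) effect. The data frame d and its columns y, x, and g are hypothetical placeholders:

  library(brms)

  # List the parameters (and their default priors) this model would have.
  get_prior(y ~ x + (1 | g), data = d)

  # Estimate a variance component with lme4-style syntax and set priors
  # on the population-level slope and the group-level standard deviation.
  fit <- brm(
    y ~ x + (1 | g),
    data = d,
    prior = c(
      set_prior("normal(0, 5)", class = "b"),
      set_prior("student_t(3, 0, 2.5)", class = "sd", group = "g")
    )
  )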