Tech talk: A practical introduction to Bayesian hierarchical modelling

  • Published: 23 Nov 2024

Comments • 23

  • @mohammedalmarakby1221
    @mohammedalmarakby1221 1 year ago +11

    Hands down the best explanation of hierarchical modelling on YouTube, especially with the graph visualising the effect of pooling strength on the parameter values.

  • @alejdiazdelao8153
    @alejdiazdelao8153 1 year ago +5

    Probably the clearest explanation of hierarchical models I’ve ever seen. Great video!

  • @baaliilyes
    @baaliilyes 3 months ago

    This is gold! The best video explaining hierarchical Bayesian models; it addresses many of the questions I had, and none of the other videos out there get to this level of detail. It makes me feel more confident about using these models.

  • @hemgreen9984
    @hemgreen9984 7 months ago +1

    50:20 Make sure you know when to use variational inference instead of MCMC (usually when working with large datasets).

  • @Olimorveu
    @Olimorveu 1 year ago +2

    One of the few well explained examples

  • @goshirago
    @goshirago 3 years ago +6

    Thank you for this well-paced video with its explanations. I feel much more confident in my understanding of Bayesian hierarchical modelling.

  • @Bybysker
    @Bybysker 6 months ago

    Very good explanations. I've seen a lot of videos, but your presentation is very understandable. Thanks!

  • @JoelGrathwohl
    @JoelGrathwohl 5 months ago

    Very nice, thanks a lot for the talk. Very well explained and easy to follow!

  • @logitfau252
    @logitfau252 6 months ago

    Great one. I'm currently looking at overlap between the same treatment used for different diseases, and this looks like a very helpful approach for the synthesis.

  • @sahajpatel7033
    @sahajpatel7033 5 months ago

    Excellent explanation...Thank you!

  • @kaushikgupta1410
    @kaushikgupta1410 2 years ago +2

    Perhaps the best explanation. Thanks a hell of a lot!

  • @gavinaustin4474
    @gavinaustin4474 2 years ago +1

    Thanks Omar. Clear and helpful.

  • @juluribk
    @juluribk 2 years ago +1

    Thanks for the clear explanation. Very helpful.

  • @danielheinisch7146
    @danielheinisch7146 2 years ago +1

    Thanks for the informative video. In the final model, where the uncertainty for different counties is very similar, isn't the model understating the uncertainty for the counties with just 1 or 2 data points? Could you elaborate? Thanks!

  • @piotrlukasinski4063
    @piotrlukasinski4063 2 years ago

    Amazing lecture!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 2 years ago +1

    This is really well explained.

  • @dtr_cpg
    @dtr_cpg 3 months ago

    Interesting tutorial

  • @maraffio72
    @maraffio72 2 years ago

    Great video, thanks!

  • @noejnava8409
    @noejnava8409 2 years ago

    Omar, it is not clear to me why sigma controls the amount of pooling. Could you point me to some sources to learn more about this? I enjoyed your presentation. Thanks.

    • @williamchurcher9645
      @williamchurcher9645 2 years ago +1

      Hi Noé, maybe I can have a go at explaining. In hierarchical modelling, we suppose that the parameters for each group themselves come from a distribution. If we assume that this distribution has zero variance, we are saying that all of the group-level parameters must be the same: they must equal the mean, because there is literally zero variance. This is the same as pooling all the data (the first example). On the other hand, if we assume the variance is very large, then each group's parameter is free to take any value it wants, without penalisation from the group-level distribution. This is the same as having no pooling: each group has its own parameter. We can choose sigma between these two extremes to specify how closely linked the group parameters should be.
      Thank you and I hope that helped!
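The two extremes described in this reply can be sketched numerically. For a simple normal-normal model, the posterior-mean estimate of a group's parameter is a precision-weighted average of that group's raw mean and the global mean, and sigma sets the prior's weight. This is a minimal illustrative sketch, not code from the talk; the function name and inputs are hypothetical:

```python
def partial_pool(ybar_i, n_i, s2, mu, sigma2):
    """Shrinkage estimate for one group's parameter in a normal-normal model.

    Precision-weighted average of the group's own mean (ybar_i, from n_i
    observations with data variance s2) and the global mean mu, where
    sigma2 is the variance of the group-level distribution.
    """
    w_data = n_i / s2        # precision contributed by the group's data
    w_prior = 1.0 / sigma2   # precision contributed by the group-level prior
    return (w_data * ybar_i + w_prior * mu) / (w_data + w_prior)

# One group whose raw mean (2.0) sits far from the global mean (0.0):
print(partial_pool(2.0, 5, 1.0, 0.0, 1e-8))  # sigma -> 0: pulled to mu (complete pooling)
print(partial_pool(2.0, 5, 1.0, 0.0, 1e8))   # sigma -> inf: stays at 2.0 (no pooling)
print(partial_pool(2.0, 5, 1.0, 0.0, 0.2))   # intermediate sigma: partial pooling
```

Note that groups with small n_i have low data precision, so the same sigma shrinks them toward the global mean more strongly, which is exactly the pooling behaviour shown in the talk's plot.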

    • @khanhtruong3254
      @khanhtruong3254 1 year ago

      Hi @williamchurcher9645. Correct me if I'm wrong: given that the prior distribution of alpha_i is assumed to be Normal(mu_alpha, sigma_alpha), if sigma_alpha = 0 the alpha_i may not be (and cannot be) equal, because mu_alpha is not a fixed number but itself follows a distribution Normal(0, 5). Put in formulas to be clearer:
      alpha_i ~ Normal(mu_alpha, sigma_alpha)
      alpha_i ~ Normal(mu_alpha, 0)
      alpha_i ~ Normal(Normal(0, 5), 0)
      alpha_i ~ Normal(0, 5)
      => alpha_i is not a constant but a distribution
      Following that understanding, the answer to why sigma_alpha represents the degree of pooling is still vague.

  • @hhhtocode651
    @hhhtocode651 2 years ago

    Under Partial Pooling, why does sigma_a represent the degree of pooling?