[09x04] Bayesian Logistic Regression | Turing.jl | Probability of Spotting Japanese Wolf Spiders

  • Published: 21 Aug 2024

Comments • 12

  • @QQ-xx7mo
    @QQ-xx7mo 1 year ago +2

    Thank you so much for this great content!

    • @doggodotjl
      @doggodotjl  1 year ago

      You're welcome! Thanks for watching!

  • @mortezababazadeh8375
    @mortezababazadeh8375 1 year ago +5

    Thanks for your cool videos! 🌹🌹🌹
    Could you please make a video explaining how to use probabilistic programming when we have a time series dataset?

    • @doggodotjl
      @doggodotjl  1 year ago +6

      Hi, thanks for being a Member! Yes, I will be sure to include a tutorial on Bayesian Time Series. Thanks for the suggestion!

    • @lawrencemidwinter9416
      @lawrencemidwinter9416 1 year ago +2

      Yeah, that would be really awesome, since I am trying to figure this out.

    • @doggodotjl
      @doggodotjl  1 year ago

      Hi, I want to let you know that I just uploaded a video on Bayesian Time Series using Turing.jl. The video is available for Channel Members now. Thanks again for the suggestion!

    • @doggodotjl
      @doggodotjl  1 year ago +1

      @lawrencemidwinter9416 Hi, I just uploaded a video on Bayesian Time Series using Turing.jl. The video is available now for Channel Members and will be available to the public next Sunday, July 2nd. Thank you for your interest!

    • @lawrencemidwinter9416
      @lawrencemidwinter9416 1 year ago +2

      @doggodotjl Hey, thanks man, I appreciate it. All your videos have been a great help. Looking forward to this one.

  • @musiknation7218
    @musiknation7218 1 year ago

    Bro, I need all the Bayesian inference videos in Python

  • @luna_fazio
    @luna_fazio 1 year ago +1

    Some comments:
    - When you show the mean estimates, you say the results are consistent with our prior belief. *Technically*, since the uniform prior assigns probability zero to any value outside of its range, your estimates are forced to be consistent with it. It's generally recommended to use priors that assign nonzero probability over the entire support of the parameter (e.g. unbounded for means, non-negative for variances, etc.) precisely so that in the event your data actually is not consistent with your prior belief, the posterior can still eventually reach low prior probability regions.
    - While it's true that a good Rhat should be "around 1", the margin of acceptable values is narrower than one might expect. While anything below 1.01 is generally very good, when you get values in the 1.02 - 1.05 range, you can already start seeing weird behaviors in the chain. Anything above 1.1 will almost certainly be completely untrustworthy. Those interested in learning more about this diagnostic can refer to arxiv.org/abs/1903.08008.
    - Finally, it's unusual but possible to see values around 0.99 when the sampler explores the posterior very efficiently, so values below 1 are not an immediate cause for alarm. I have never seen values below that, but if I did (say, something like 0.8), my first thought would be that there is a bug in the rhat calculation itself. I'd love to hear about the smallest rhat someone has legitimately seen, though!
    Anyway, thanks a lot for your great videos, they're helping make my transition into Julia a lot less painful. :)

    • @doggodotjl
      @doggodotjl  1 year ago

      Thank you for your insightful comments!
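
As a rough illustration of the two points raised in the comment above (unbounded priors and checking Rhat), here is a minimal Turing.jl sketch; it is not the code from the video. It puts weakly informative Normal(0, 2.5) priors on the intercept and slope instead of Uniform priors, fits the logistic regression with NUTS across four chains, and reads rhat off the chain summary. The model name spider_model, the priors, and the tiny grain-size/presence dataset are all made up for illustration.

# A minimal sketch, not the video's code: unbounded priors plus an rhat check.
using Turing, StatsFuns   # StatsFuns provides logistic()

@model function spider_model(x, y)
    # Weakly informative, unbounded priors: they assign nonzero probability
    # everywhere, so the posterior can move away from the prior guess if the data demand it.
    α ~ Normal(0, 2.5)
    β ~ Normal(0, 2.5)
    for i in eachindex(y)
        p = logistic(α + β * x[i])   # probability of spotting a spider
        y[i] ~ Bernoulli(p)
    end
end

# Hypothetical standardized grain sizes and presence/absence observations.
x = [-1.2, -0.6, -0.1, 0.3, 0.8, 1.4]
y = [0, 0, 0, 1, 1, 1]

# Run several chains so rhat (a between-/within-chain variance comparison) is meaningful.
chain = sample(spider_model(x, y), NUTS(), MCMCThreads(), 1_000, 4)

# summarystats reports rhat per parameter: values near 1 (ideally below 1.01) are
# good; values much above ~1.05 suggest the chains have not converged.
summarystats(chain)

Normal(0, 2.5) on standardized predictors is just one common weakly informative default; any prior with full support over the reals would serve the same purpose here.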