Bayesian Data Science: Probabilistic Programming | SciPy 2019 Tutorial | Eric Ma

  • Published: 28 Dec 2024

Comments • 24

  • @bilboswaggins7629
    @bilboswaggins7629 4 years ago +23

    Could listen to this dude talk all day, does such a great job of making it all seem so non-threatening and interesting.

  • @Mutual_Information
    @Mutual_Information 2 years ago +1

    Bayesian statistics makes for such satisfying modeling.

  • @nano7586
    @nano7586 3 years ago +1

    1:40:00 I was able to follow so well up to here, but then it was simply too fast. I keep hearing the term "posterior", but it wasn't explained. I also didn't understand why p follows a uniform distribution and not, e.g., a normal. Does it mean that p has equal probability of reaching a certain value for n data sets?

    • @ihgnmah
      @ihgnmah 3 years ago +1

      So he was talking about the parameter p of the groups (control and test), which he computed using the groupby() function. This p relates to P(distribution/model | data), the probability of the distribution/model given the observed data. But how much of this information is reliable? This is what you do if you don't have pymc3.
      Then he moved on to explain how to achieve the same thing with pymc3. Since p is unknown, the best you can guess is that it follows a uniform distribution, which means any value between 0 and 1 has the same chance of being the value of p. And because each sample is a Bernoulli trial, he used the Bernoulli distribution as the likelihood to estimate the value of p. Again, given the observed data, the likelihood, and the prior (the distribution you believe p follows), we estimate its real value.
      The posterior he mentioned is the estimated distribution of p after running the code. I'm not sure how the underlying algorithm works, but it presumably combines P(data | model) with the prior (remember, we can go back and forth between data and model with Bayes' formula), and that is what he referred to as the posterior.
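
      For the uniform-prior + Bernoulli-likelihood setup described above, the posterior can even be written down in closed form, since Uniform(0, 1) is Beta(1, 1) and the Beta family is conjugate to the Bernoulli likelihood. A minimal sketch without pymc3 (the data values are made up for illustration, not taken from the tutorial):

      ```python
      import numpy as np
      from scipy import stats

      # Toy outcomes standing in for one group's Bernoulli trials
      # (hypothetical values, not from the tutorial).
      data = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])
      k, n = int(data.sum()), data.size

      # A Uniform(0, 1) prior on p is Beta(1, 1); with a Bernoulli
      # likelihood the posterior is conjugate: Beta(1 + k, 1 + n - k).
      posterior = stats.beta(1 + k, 1 + n - k)

      print(posterior.mean())          # posterior mean of p
      print(posterior.interval(0.94))  # 94% credible interval
      ```

      pymc3 arrives at the same answer by sampling (MCMC), which is what generalizes to models with no conjugate shortcut.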

  • @carstenlimberger9427
    @carstenlimberger9427 4 years ago +4

    Was the tutorial code executed with acceleration through C/C++ compilation by Theano in the background? When I run the examples, the computation is much slower than in the video, especially when sampling from the posterior in the baseball example.

    • @christiansmith2547
      @christiansmith2547 3 years ago +3

      Can’t remember exactly because I’m midway through, but I think he’s using an online Jupyter kernel rather than a local instance.

  • @zzhou3894
    @zzhou3894 4 years ago +2

    Any link for the demo notebook? Thx.

  • @flowy-moe
    @flowy-moe 11 months ago

    Would someone be able to share the Jupyter Notebooks? The link in the description is not working for me ...

  • @galaxymariosuper
    @galaxymariosuper 4 years ago +2

    you guys are gold

  • @henrmota
    @henrmota 4 years ago +2

    Great workshop.

  • @bhishanpoudel8707
    @bhishanpoudel8707 5 years ago +2

    Great tutorial, lots to learn. One aside: how do you select the text and paste it to another place so neatly? Which app do you use?

  • @juliocardenas-rodriguez1986
    @juliocardenas-rodriguez1986 3 years ago +1

    These guys rock !!

  • @matthewmeadows2456
    @matthewmeadows2456 4 years ago

    Cannot run jupyter notebooks properly now. Very frustrating.

  • @bhavinmoriya9216
    @bhavinmoriya9216 2 years ago

    Could anybody please send me the link to the Justin Boyce blog?

  • @TomerBenDavid
    @TomerBenDavid 5 years ago +2

    Thank you too

  • @joaopedrorocha5693
    @joaopedrorocha5693 1 year ago

    We have uncles who don't change their political views here in my country too ... maybe we could hierarchically model the probability of someone in some country having such an uncle hehe
    we could do a website called the "uncle project" which would have a form section in which we would ask people worldwide how many uncles they have and how many get emotional when someone slightly disagrees with their views. I think we could use a binomial likelihood in this case, since we get N uncles and have a Bernoulli trial on each fitting the description.
    Then each time someone submitted an answer we could update our hierarchical model ... So when enough people in enough countries send their answers, we would have a world map showing in which countries people are the luckiest on the uncle subject and in which ones they are the unluckiest. 🤣
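
    The binomial-likelihood idea in that comment can be sketched with the same conjugate trick used for the tutorial's Bernoulli example: with a Uniform(0, 1) = Beta(1, 1) prior, each submitted count just increments the Beta parameters, so the update is naturally online. This sketch pools each country independently (a full hierarchical model would add a shared hyperprior across countries, as in the baseball example); all country names and counts are hypothetical:

    ```python
    from scipy import stats

    # Hypothetical survey tallies per country: (uncles surveyed,
    # uncles matching the description). Made-up numbers.
    surveys = {
        "A": (40, 12),
        "B": (25, 3),
        "C": (60, 30),
    }

    # With a Beta(1, 1) prior and a binomial likelihood, each
    # country's posterior over its rate is conjugate:
    # Beta(1 + hits, 1 + n - hits). New form submissions would
    # simply increment these counts.
    posteriors = {
        country: stats.beta(1 + hits, 1 + n - hits)
        for country, (n, hits) in surveys.items()
    }

    for country, post in posteriors.items():
        print(country, round(post.mean(), 3))
    ```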

  • @programminginterviewsprepa7710
    @programminginterviewsprepa7710 2 years ago

    Watch the first person on earth to understand Bayes

  • @NidhiSinha4U
    @NidhiSinha4U 2 years ago

    Anyone who can help me with Bayesian analysis? I'd really appreciate it 😁

  • @perrygrossman2008
    @perrygrossman2008 4 years ago

    Cool presentation, Eric!
    Estimation is the core of all statistical inference.
    ruclips.net/video/2wvt6GPZl1U/видео.html
    Nice one: "Calculating p-values is not even... the point of statistical inference."

  • @OriginalBernieBro
    @OriginalBernieBro 4 years ago +1

    Holy shit no timestamps?!