Could listen to this dude talk all day, does such a great job of making it all seem so non-threatening and interesting.
Bayesian statistics makes for such satisfying modeling..
1:40:00 I was able to follow along so well, but then it simply got too fast. I keep hearing the term "posterior," but it wasn't explained. I also didn't understand why p follows a uniform distribution and not, e.g., a normal. Does it mean that p has an equal probability of reaching a certain value for n data sets?
So he was talking about the parameter p of the two groups (control and test), which he computed using the groupby() function. This p stands in for P(model | data), the probability of the distribution/model given the observed data. But how reliable is that estimate? That's what you do if you don't have pymc3.
Then he moved on to explain how to achieve the same thing with pymc3. Since p is unknown, the best you can do is assume it follows a uniform distribution, which means any value between 0 and 1 has the same chance of being the value of p. And because each sample is a Bernoulli trial, he used the Bernoulli distribution as the likelihood for estimating p. In short: given the data, the likelihood, and the prior (the distribution you believe p follows), estimate its real value.
The posterior he mentioned is the distribution of p after running the code. I'm not sure exactly how the underlying algorithm works, but it combines the likelihood P(data | model) with the prior (remember, Bayes' formula lets us go back and forth between data and model), and that result is what he referred to as the posterior.
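For concreteness, here is a minimal pymc3 sketch of that pattern: a Uniform prior on p, a Bernoulli likelihood over observed 0/1 outcomes, and sampling to obtain the posterior. The data array here is made up for illustration.

```python
import numpy as np
import pymc3 as pm

# Made-up 0/1 outcomes for one group (e.g. 1 = conversion, 0 = no conversion)
data = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0, 0])

with pm.Model() as model:
    # Prior: before seeing data, every value of p in [0, 1] is equally likely
    p = pm.Uniform("p", lower=0, upper=1)
    # Likelihood: each observation is a Bernoulli trial with success probability p
    obs = pm.Bernoulli("obs", p=p, observed=data)
    # Sampling draws from the posterior P(p | data)
    trace = pm.sample(2000)
```

The histogram of trace["p"] is then the posterior distribution of p, which I believe is what he plots in the video.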
Was the tutorial code executed with acceleration through C/C++ compilation in the background by Theano? When I run the examples, the computation is much slower than in the video, especially when sampling from the posterior in the baseball example.
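One thing you can check (assuming a local Theano install) is whether Theano actually found a C/C++ compiler; if it didn't, it falls back to a pure-Python backend, which is much slower:

```python
import theano

# If this prints an empty string, Theano found no C/C++ compiler and
# will run its ops in pure Python instead of compiled C/C++.
print(theano.config.cxx)
```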
Can't remember exactly because I'm midway through, but I think he's using an online Jupyter kernel rather than a local instance.
Any link for the demo notebook? Thx.
It's in the video description, under "Tutorial information."
Would someone be able to share the Jupyter Notebooks? The link in the description is not working for me ...
you guys are gold
Great workshop.
Great tutorial, lots to learn. One aside: how do you select text and paste it somewhere else so neatly? Which app do you use?
These guys rock !!
Cannot run jupyter notebooks properly now. Very frustrating.
Could anybody please send me a link to Justin Boyce's blog?
Thank you too
We have uncles who won't change their political views here in my country too ... maybe we could model hierarchically the probability of someone in a given country having such an uncle, hehe.
We could build a website called the "uncle project" with a form asking people worldwide how many uncles they have and how many get emotional when someone slightly disagrees with their views. I think we could use a binomial likelihood in this case, since we get N uncles and run a Bernoulli trial on each one fitting the description (see the sketch below).
Then each time someone submitted an answer we could update our hierarchical model ... So once enough people in enough countries send in their answers, we would have a world map showing in which countries people are the luckiest on the uncle subject and in which they are the unluckiest. 🤣
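Just for fun, here is a minimal pymc3 sketch of what that hierarchical binomial model could look like. The country counts are invented, and the partial-pooling scheme (a shared Beta hyperprior across countries) is just one common choice:

```python
import numpy as np
import pymc3 as pm

# Invented survey totals for three countries
n_uncles = np.array([12, 8, 20])   # uncles reported per country
n_touchy = np.array([5, 2, 11])    # uncles fitting the description

with pm.Model() as uncle_model:
    # Hyperpriors: global mean rate and concentration, shared by all countries
    mu = pm.Beta("mu", alpha=2, beta=2)
    kappa = pm.HalfNormal("kappa", sigma=10)
    # Per-country probability, partially pooled toward the global mean
    p = pm.Beta("p", alpha=mu * kappa, beta=(1 - mu) * kappa, shape=len(n_uncles))
    # Binomial likelihood: each uncle is one Bernoulli trial
    obs = pm.Binomial("obs", n=n_uncles, p=p, observed=n_touchy)
    trace = pm.sample(2000)
```

Each new submission just adds to the per-country counts, and re-sampling updates the map.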
Watch the first person on earth to understand Bayes.
Can anyone help me with Bayesian analysis? I'd really appreciate it 😁
Cool presentation, Eric!
Estimation is the core of all statistical inference.
ruclips.net/video/2wvt6GPZl1U/видео.html
Nice one: "Calculating p-values is not even... the point of statistical inference."
Holy shit no timestamps?!