Introduction to Bayesian statistics, part 1: The basic concepts

  • Published: 4 Jan 2025

Comments •

  • @ahmedmoneim9964
    @ahmedmoneim9964 7 years ago +66

    That was an excellent explanation of the interaction between the parameters; thanks a lot for putting in the time and effort to do the animations.

    • @ana_8696
      @ana_8696 1 year ago

      Hello how are you?

  • @SuperDayv
    @SuperDayv 7 years ago +11

    This is the best introduction to this that I've found online! Thanks!

  • @vietta9204
    @vietta9204 6 years ago +55

    Wow, I gained more understanding from this video than from dozens of hours of classes.

  • @AradAshrafi
    @AradAshrafi 3 years ago +3

    This was the most comprehensive video, with amazing explanations of the prior, likelihood, and posterior. Thank you so much for this wonderful video.

    • @ana_8696
      @ana_8696 1 year ago

      Hello how are you?

  • @jehangonsal2162
    @jehangonsal2162 7 years ago +13

    This is awesome. So intuitive and interesting. Why did we ever use null hypothesis testing? With the computational power we have now, this should be the norm.

  • @divyateja939
    @divyateja939 2 years ago +1

    Excellent explanation. I had been surfing the internet for clarity.

  • @pep_4_climate
    @pep_4_climate 7 months ago +1

    What can I say, an excellent explanation of Bayesian updating; long live Stata and its people!

  • @SoumyadeepMisra7
    @SoumyadeepMisra7 2 years ago +1

    Thank you, Sir, the best explanation I have found on YouTube.

  • @jennyapl1791
    @jennyapl1791 5 years ago +12

    The posterior is proportional to the likelihood × the prior, not equal to it.

    • @amanrastogi5184
      @amanrastogi5184 4 years ago

      That's true, in the sense that he should have written "proportional to" instead of "equal to", since he left out the marginal distribution that scales the product.
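
    A minimal numeric sketch of the point above (Python with NumPy/SciPy; the Beta(30, 30) prior and the 53-heads-in-100-tosses data are assumptions for illustration, not taken from the video): prior × likelihood only becomes the posterior after dividing by the marginal p(y).

      import numpy as np
      from scipy.stats import beta, binom

      a, b = 30, 30            # illustrative prior: Beta(30, 30)
      n, k = 100, 53           # illustrative data: 53 heads in 100 tosses

      theta = np.linspace(0.001, 0.999, 2001)
      product = beta.pdf(theta, a, b) * binom.pmf(k, n, theta)   # prior(theta) * likelihood(theta)
      p_y = np.sum(product) * (theta[1] - theta[0])              # marginal p(y), by numeric integration
      posterior_grid = product / p_y                             # dividing by p(y) gives a proper density

      # Conjugacy gives the same answer in closed form: Beta(a + k, b + n - k).
      posterior_exact = beta.pdf(theta, a + k, b + n - k)
      print(np.max(np.abs(posterior_grid - posterior_exact)))    # ~0, up to grid error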

  • @ngm_4092
    @ngm_4092 3 years ago

    Your teaching style is very effective. The explanation and pacing are very good, and your voice holds attention well. Thank you for making this video; it was quite informative.

  • @haneulkim4902
    @haneulkim4902 3 years ago

    @4:30 What's the difference between a credible interval and a confidence interval? Reading about it made me even more confused...

  • @erggish
    @erggish 7 years ago +2

    One question I would have on this: how can you be sure you are not biasing your result by using these informative priors? I believe the most conservative approach is indeed the uniform prior (equivalent to "I don't know anything, so everything is equally possible"), but once I start getting "clever" and choosing appropriate priors, I can't run a real hypothesis test, because I have already told the coin to be 50:50 (while someone could have handed me a magic 10:90 coin).

    • @spotlessapple
      @spotlessapple 7 years ago +4

      I believe the point of the prior is to introduce bias responsibly. That is, priors should probably only be used when they come from previous experience and expertise, and building a posterior from them is most helpful when you expect results similar to previous experiments but only have a limited sample size (see the sketch after this thread).

    • @ana_8696
      @ana_8696 1 year ago

      Hello how are you? I need some help
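
    Touching on the bias question in this thread, a minimal sketch (Python; the 10:90 "magic coin" and the sample sizes are assumptions for illustration) of how an informative Beta(30, 30) prior pulls the posterior mean toward 0.5 when data are scarce and is gradually overwhelmed as data accumulate.

      true_p = 0.10                                  # hypothetical magic coin: 10% heads
      for n in (10, 100, 1000):
          k = round(true_p * n)                      # heads observed (expected count, for simplicity)
          for a, b, label in ((1, 1, "flat Beta(1,1)"), (30, 30, "informative Beta(30,30)")):
              post_mean = (a + k) / (a + b + n)      # mean of the posterior Beta(a + k, b + n - k)
              print(f"n={n:4d}  {label:24s}  posterior mean = {post_mean:.3f}")

    With 10 tosses the informative prior drags the estimate up to about 0.44; with 1000 tosses both priors land close to the true 0.10 (about 0.101 and 0.123).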

  • @emilyzheng1
    @emilyzheng1 4 years ago

    Thank you very much for the explanations of non-informative prior and informative prior. Very helpful for my research.

  • @liviuflorescu
    @liviuflorescu 4 years ago

    At 1:40, shouldn't the area under the graph be equal to 1? What does the y-axis represent?

  • @ohmyfly3501
    @ohmyfly3501 7 years ago +99

    .75x speed

  • @bigfishartwire4696
    @bigfishartwire4696 7 years ago +2

    Finally I understand this thing. Thank you.

  • @ghjones1995
    @ghjones1995 7 years ago

    Isn't there an error at 5:18?
    Shouldn't the beta distribution's a and b be 86 and 84, NOT 106 and 114? The mean of Beta(86, 84) gives the mean on the screen (0.506),
    whereas the mean of Beta(106, 114) is about 0.482.
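
    For reference, the mean of a Beta(a, b) distribution is a / (a + b), so the arithmetic in this comment is easy to check (a quick Python sketch; it says nothing about which parameters the video actually used):

      for a, b in ((86, 84), (106, 114)):
          print(f"Beta({a},{b}): mean = {a / (a + b):.3f}")
      # Beta(86,84): mean = 0.506
      # Beta(106,114): mean = 0.482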

  • @yanchen3129
    @yanchen3129 5 years ago +2

    great vid! so informative

  • @sujiththiyagarajan4290
    @sujiththiyagarajan4290 4 years ago

    Excellent explanation, sir.

  • @MA-rc2eo
    @MA-rc2eo 2 years ago

    Thank you for making this video. I took a statistics class before, but my knowledge is limited. Please add descriptive details so I can understand your video.

  • @alexisdasiukevich5417
    @alexisdasiukevich5417 1 year ago

    How is it that you are able to neglect the probability of y for the posterior distribution, which is normally in the denominator?

  • @xBrynnerX
    @xBrynnerX 5 years ago

    Why is the posterior narrower at 5:15?

  • @solidanswers3845
    @solidanswers3845 8 years ago +1

    Awesome, thank you! Animations are really helpful.

  • @jokyere1553
    @jokyere1553 1 month ago

    What informs the choice of a beta?

  • @sukursukur3617
    @sukursukur3617 2 years ago

    How can we specify the belief power of the prior? In this example alpha = beta = 30, but we could just as well assign 250 to both; there is no boundary preventing us from choosing 250 instead of 30. With real-life data, if you assign a powerful prior you have a bias and may put pressure on the information coming from the data; otherwise you come close to the no-prior case.
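
    A minimal sketch of the trade-off described above (Python; the 60-heads-in-100-tosses data is an assumption for illustration): the prior's "belief power" is its pseudo-sample size a + b, so Beta(30, 30) behaves like 60 prior tosses while Beta(250, 250) behaves like 500 and dominates 100 real tosses.

      n, k = 100, 60                                  # hypothetical data: 60 heads in 100 tosses
      for a, b in ((1, 1), (30, 30), (250, 250)):
          post_mean = (a + k) / (a + b + n)           # mean of the posterior Beta(a + k, b + n - k)
          print(f"prior Beta({a},{b}) (pseudo-tosses = {a + b:3d}): posterior mean = {post_mean:.3f}")

    The sample proportion is 0.60; the Beta(250, 250) prior pulls the posterior mean down to about 0.52, which is exactly the pressure on the data's information that the comment describes.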

  • @lostcaze
    @lostcaze 8 years ago

    Thank you. The first video that makes me understand this reasoning in one go.

  • @euroszka8048
    @euroszka8048 4 years ago

    I shouldn't be saying this out loud, but I don't know about you: I find this prior distribution and the Ledoit-Wolf shrinkage method for improved efficiency very difficult to picture, and don't get me started on these affecting eigenvalues instead of eigenvectors... it's a mess in my head right now. I really need to pull myself together.

  • @r2internet
    @r2internet 6 years ago +1

    1:25 What does this mean? Prior = Beta(1.0, 1.0)

    • @Magnuomoliticus
      @Magnuomoliticus 5 years ago +1

      The Beta distribution with parameters (1.0, 1.0) is the uniform distribution. He says that he will assume he has no information about the probability of getting heads or tails, and for that he uses a prior with a uniform distribution: Beta(1.0, 1.0) = Uniform, so the probability of getting heads or tails is uniform from 0 to 1.

    • @12rooted
      @12rooted 4 years ago

      @@Magnuomoliticus But how do you know how to increase the parameters of the prior distribution appropriately? The only thing I don't understand here is how he decided that Beta(30, 30) was a more accurate depiction of what he knows about the coin. Why 30? (See the sketch after this thread.) And thanks for your previous answer.

    • @Magnuomoliticus
      @Magnuomoliticus 4 years ago

      @@12rooted Well, that's a great question that I don't know the answer to. My first guess is that the choice is somewhat arbitrary, but let's wait and see if someone else can clarify that!
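
    A minimal sketch touching both questions in this thread (Python with SciPy; the particular parameter values are illustrative): Beta(1, 1) has constant density 1 on [0, 1], i.e. it is the uniform prior, and raising equal parameters such as 30 simply concentrates the prior more tightly around 0.5, so the choice of 30 is a judgment call about how strongly you believe the coin is close to fair.

      import numpy as np
      from scipy.stats import beta

      # Beta(1, 1) has density 1 everywhere on (0, 1): the uniform prior.
      print(beta.pdf(np.array([0.1, 0.5, 0.9]), 1, 1))      # [1. 1. 1.]

      # Larger (equal) parameters mean a tighter prior around 0.5.
      for a in (1, 5, 30, 100):
          lo, hi = beta.ppf([0.025, 0.975], a, a)
          print(f"Beta({a},{a}): sd = {beta.std(a, a):.3f}, central 95% in [{lo:.2f}, {hi:.2f}]")

    Beta(30, 30) puts about 95% of its mass between roughly 0.37 and 0.63, which is one way to read "I'm fairly sure the coin is close to fair".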

  • @WahranRai
    @WahranRai 3 years ago

    Your animations were based on a binomial likelihood, but in Stata you chose a Bernoulli likelihood.
    Are they the same if we remove the binomial coefficient, choose(N, X)?

    • @a.khurram3023
      @a.khurram3023 3 years ago

      No, they are not the same, but a single stochastic variable with a binomial distribution can be described by several stochastic variables with Bernoulli distributions.
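
    A minimal sketch of this equivalence (Python with SciPy; the simulated tosses are illustrative): the product of the Bernoulli likelihood terms differs from the binomial pmf only by the factor choose(N, X), which does not involve theta and therefore cancels when the posterior is normalized.

      import numpy as np
      from scipy.stats import bernoulli, binom
      from scipy.special import comb

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, size=20)                 # 20 simulated 0/1 tosses
      N, X = len(y), int(y.sum())
      theta = 0.47                                    # any value of theta will do

      bern_lik = np.prod(bernoulli.pmf(y, theta))     # theta^X * (1 - theta)^(N - X)
      binom_lik = binom.pmf(X, N, theta)              # choose(N, X) * theta^X * (1 - theta)^(N - X)
      print(np.isclose(binom_lik, comb(N, X) * bern_lik))   # True: they differ only by choose(N, X)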

  • @nyambaatarbatbayar9333
    @nyambaatarbatbayar9333 6 years ago +1

    I have the same version of Stata as yours. However, my Bayesmh window doesn't have the "univariate distribution" option. What could be the reason? Can you give me a hint?

  • @shaswatachowdhury9032
    @shaswatachowdhury9032 10 months ago

    Amazing! Thank you so so much! :)

  • @valor36az
    @valor36az 6 years ago +1

    great explanation

  • @kathyern861
    @kathyern861 2 years ago

    If the coin is held with heads facing up, what is the likelihood it will yield heads when it is tossed?
    If the coin is held with heads facing up, what is the likelihood it will yield tails when it is tossed?
    If the coin is held with tails facing up, what is the likelihood it will yield tails when it is tossed?
    If the coin is held with tails facing up, what is the likelihood it will yield heads when it is tossed?

  • @nadineca3325
    @nadineca3325 4 years ago

    Would someone please tell me what he is saying at 0:28? Thank you.

    • @j.m.4664
      @j.m.4664 3 years ago

      I think he says: "Many of us were trained using a frequentist approach to statistics..."

  • @Jdonovanford
    @Jdonovanford 6 years ago +3

    Hi, thanks for the video. What I wonder is, what are "default priors" when it comes to Bayesian inference? As I understand it, priors are specific to each hypothesis or dataset, so how come some packages include these defaults? What do these priors entail?

  • @Rainstorm121
    @Rainstorm121 3 years ago +1

    Thanks. Perhaps you could do another video and call it part 0, as the building blocks for this part 1. An introduction to the introduction, that is :)

  • @at6969
    @at6969 3 years ago

    Chuck, the new Stata 17.1 has a different command structure. Can you please redo the video for version 17.1?

  • @marketasvehlakova2088
    @marketasvehlakova2088 8 years ago

    Thank you. That was very clear and helpful.

  • @shreyaskrishna6038
    @shreyaskrishna6038 1 year ago

    What I don't understand is how multiplying the likelihood and the prior distribution gives us what we call the posterior distribution. If anything, the product just seems like an arbitrary function.

  • @learnandevolve4246
    @learnandevolve4246 3 years ago

    Thank you for this video, it's clear to me.

  • @francescos7361
    @francescos7361 2 years ago

    Thanks. I love statistics.

  • @matthewcover8748
    @matthewcover8748 3 years ago

    That was so, so helpful. Thank you.

  • @The_Pandalorian_Cooks
    @The_Pandalorian_Cooks 5 years ago +8

    Maybe the video creator intended to explain Bayesian statistics, but did not.
    The concepts start to be explained, then there is a sudden jump to prior and posterior probability, with on-screen equations but no further explanation; it's as if it were read out of a technical manual that only 'insiders' know about. This then quickly turns into how to use the software and which buttons to press, which suits those who already know about Bayes and want to use the software, not those who want an introduction.
    So I'm sorry to say this video is not useful as an introduction to Bayesian statistics, and I would recommend giving it a miss.

    • @Pappa261
      @Pappa261 1 year ago

      It was a really bad video if you're actually trying to understand Bayesian statistics.

  • @ksspqf6016
    @ksspqf6016 3 years ago

    Brilliant video, thank you a lot.

  • @Pankaj-Verma-
    @Pankaj-Verma- 5 years ago

    Thank you for your kind help.

  • @flake8382
    @flake8382 2 years ago

    "Non technical"
    3:07
    Right.

  • @Drockyeaboi12
    @Drockyeaboi12 3 years ago

    Hi, can someone explain why this form of probability is important?

  • @danielnakamura6430
    @danielnakamura6430 3 years ago

    Excellent video.

  • @rizwanniaz9265
    @rizwanniaz9265 7 years ago +1

    How do I calculate the odds ratio in Bayesian ordered logistic regression? Please tell me.

  • @jamesbowman7963
    @jamesbowman7963 3 years ago

    OK, so how has the Bayesian model been tested and demonstrated to be superior to other statistical methods? I'm always skeptical without hard evidence.

  • @albertcuspinera7003
    @albertcuspinera7003 5 years ago

    Hi,
    What does the choice of likelihood distribution depend on?
    Thanks.

  • @josefwang
    @josefwang 3 years ago

    amazing! thanks!

  • @andreneves6064
    @andreneves6064 6 years ago

    Please, could you send us the video transcript?

  • @bhaveshsolanki8765
    @bhaveshsolanki8765 8 years ago

    excellent sir

  • @yuuki7831
    @yuuki7831 3 years ago +10

    I understand nothing.

  • @andreneves6064
    @andreneves6064 7 years ago +1

    Please, could you recommend some friendly material about Bayesian inference?

    • @bigfishartwire4696
      @bigfishartwire4696 6 years ago +1

      It doesn’t exist. This stuff is taught horrendously everywhere

    • @amerjod3122
      @amerjod3122 6 years ago

      @@bigfishartwire4696 100% Agree

  • @alexismarquez3674
    @alexismarquez3674 3 years ago

    During my high school days, my closest friends were the nice ones.

  • @kathyern861
    @kathyern861 2 years ago

    The coin could land on its edge, neither heads nor tails. Forgot about that potential event, didn't you?

  • @epicwhat001
    @epicwhat001 3 years ago

    This is an "advanced" basic concept.

  • @jack8831
    @jack8831 4 years ago +1

    There's no information about what the Y in the graph is or refers to. This is unacceptable.

  • @alexismarquez3674
    @alexismarquez3674 3 years ago

    Bayesian statistics is an extension of the classical approach. Various decision rules are established. They also use sampling data. I learned about this when I was still in high school at Ateneo de Zamboanga University; my grades in algebra were high.

  • @MrGoodaches
    @MrGoodaches 3 months ago

    Haha, “I’m going to give a relatively non-technical explanation…” then proceeds to speak entirely in words that have definitions specific to statistics. Most people who remember the definitions of all the words used probably also remember what Bayesian is. People who don't remember, or never knew, the vocabulary used have no hope of learning here what Bayesian is.

  • @ohnsonposhka9891
    @ohnsonposhka9891 3 years ago

    Proving the non-existence of God was harder than I thought.

  • @dr-rer-nat-jonathan
    @dr-rer-nat-jonathan 6 years ago

    Too many basic errors: a claim like "distribution closer to .5" is not even formally defined.

  • @ijexcmos8153
    @ijexcmos8153 20 days ago

    And this is an explanation for who? For a bunch of statisticians? Certainly not for the 'unintroduced'! A few seconds into the 'introduction' and you are already using highly technical jargon! Perhaps you need a course in pedagogy first! 😅

  • @nano7586
    @nano7586 1 year ago

    This is a very bad introduction. You jumped from the absolute basics straight to priors and posteriors.
    I'm really tired of these videos that are advanced videos disguised as "beginner videos". They spam all of YouTube but don't provide any value.
    Please explain it more simply next time, and please elaborate on what each concept means rather than introducing it within a few seconds. Sorry for being this critical, but I'm here to learn, not to waste my time.

  • @mikesilva6521
    @mikesilva6521 4 years ago

    What the BLEEP did he just say?

  • @뭐냐-j7y
    @뭐냐-j7y 3 years ago

    Woo

  • @Pappa261
    @Pappa261 1 year ago

    Really bad video for a newbie trying to learn Bayesian statistics.

  • @jinudaniel6487
    @jinudaniel6487 4 years ago

    so fucking fast..