Frequentists vs. Bayesians

  • Published: 4 Mar 2022
  • In this video, we look at the frequentist and Bayesian methods for presenting scientific results. The difference between these methods is a source of confusion for scientists and the general public alike. Often, scientific results are presented using frequentist methods, but are incorrectly interpreted as Bayesian statements.
    Here, we discuss what information each of these methods attempts to communicate.
    A highly relevant xkcd comic about the sun exploding:
    xkcd.com/1132/
    Other links mentioned in the video:
    Hypothesis Testing Playlist:
    • Hypothesis Testing Pla...
    What's a P-Value?:
    • What's a P-value?
    Statistical Significance:
    • Statistical Significance
    Minicourse on Error Bars, Measurements, and Decision Analysis:
    • How We Know Stuff: Mi...
    Bayesian Playlist:
    • Bayesian Playlist
    Coin Flipping, Bayesian Probabilities, and Priors:
    • Coin Flipping, Bayesia...
  • Science

Comments • 24

  • @ThinkLikeaPhysicist
    @ThinkLikeaPhysicist  2 years ago +2

    Hi! Questions?

  • @priyankaschnell5880
    @priyankaschnell5880 1 year ago +2

    This is the perspective I could follow and actually get a clear understanding from, out of all the sources I've come across.
    Thank you for creating this video!
    To many more.

  • @akkesm
    @akkesm 2 years ago

    This was a good explanation. Thank you!

  • @MrBendybruce
    @MrBendybruce 2 years ago

    What a great video. Thank you so much. I find this particularly insightful when it comes to the apparent conflict between the viewpoint of Avi Loeb and the more traditional scientific community, who (correct me if I am wrong) are making a Bayesian assessment regarding Oumuamua, such that the probability of its origins being alien intelligent design is, by definition, extraordinarily unlikely. On the other side of the coin, Loeb seems to be arguing that this "prior" is flawed and indicative of humanity assuming it is special. I really can't say where I actually fall in this particular debate, only that your video has brought some new insight into the nature of it. P.S. I am visually impaired, so sorry for any typos.

  • @nikhileshbelulkar4178
    @nikhileshbelulkar4178 1 year ago

    Great video!

  • @nailcankara6017
    @nailcankara6017 1 year ago

    Thank you

  • @kyle5519
    @kyle5519 10 months ago

    Your Bayesian example is a good illustration of how black swan events can mess up predictions about the future, or be hard to predict.

  • @sherifffruitfly
    @sherifffruitfly 1 year ago +2

    Good explainer. I do think, however, that the Bayesian has a better answer for the rando than merely "sure, it's unusual". The Bayesian has available to her the response "it sure IS unusual - THAT'S WHY I'VE REVISED MY PROBABILITY ASSESSMENT FROM 1% UP TO 24%". Easier to say in the language of odds, using the Bayes factor, but you get the idea.
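
The 1% → 24% update quoted above checks out. A minimal sketch in the odds / Bayes-factor language (variable names are mine; the prior and the five-heads likelihoods are the ones discussed in the video and this thread):

```python
# A minimal sketch (my own variable names) checking the 1% -> 24% update
# described above, written in the odds / Bayes-factor language.

prior_trick = 0.01                    # prior: the coin always comes up heads
p_data_given_trick = 1.0              # P(5 heads | trick coin)
p_data_given_fair = (1 / 2) ** 5      # P(5 heads | fair coin) = 1/32

prior_odds = prior_trick / (1 - prior_trick)            # 1:99 in favor of "trick"
bayes_factor = p_data_given_trick / p_data_given_fair   # 32
posterior_odds = prior_odds * bayes_factor              # 32:99

posterior_trick = posterior_odds / (1 + posterior_odds)
print(round(posterior_trick, 3))      # -> 0.244, i.e. roughly 24%
```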

  • @c2eien
    @c2eien 1 year ago +1

    Most practical and careful explanation I've seen so far, with very helpful examples - thank you! Do you share your slides anywhere (e.g., as a PDF)?

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  1 year ago

      I don't right now, but I hope to change that (or offer other materials) sometime soon.
      If you have any opinions on what materials would be helpful, feel free to let me know!
      Many thanks!

  • @zoozolplexOne
    @zoozolplexOne 2 years ago

    cool!!!

  • @leo5961
    @leo5961 6 months ago +1

    As a Bayesian Conspirator, there's nothing I hate more than hearing about the Filthy Frequentists and their ongoing heresies. But at least the presenter had a very pretty voice to ease the pain.

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  6 months ago +1

      Ha! I got quite a good laugh out of this. ;-)

    • @leo5961
      @leo5961 5 months ago

      @@ThinkLikeaPhysicist I used this video to introduce a young person to this contentious issue. Since then, they've found a paper on the differing neurology between hearing and seeing which just happened to contain several instances of the term "Bayesian Heuristic", and they won't stop talking about it.
      It is a very well done video. :)

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  5 months ago

      @@leo5961 That's great! Glad it was of use! Thank you.

  • @kyle5519
    @kyle5519 10 months ago

    Now that computers do all the math, Bayesian statistics might actually become more popular.

  • @phyzwiz
    @phyzwiz 2 months ago

    I'm a bit confused at the prior choice of 1% in the coin flipping example. If it were up to me, I would pick the prior to be 50% because I don't know anything about that coin, and I want to learn everything I can from the data. In that case, 5 flips will tell me (using the same equation that was shown) that the probability that it's a fair coin is 1.5/(50+1.5) so 2.9%. What is wrong with my reasoning?

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  1 month ago

      Hi!
      So, the reason that the Bayesian chose 1% is that they have been through the exercise hundreds of times, and they've never actually seen one of the novelty coins perform as advertised (i.e., always coming up heads); it has always turned out to be a fair coin. So, basically, the Bayesian goes in with the belief that the probability that the coin always comes up heads is small. So small, in fact, that the probability that it's a fair coin that just happens to come up heads 5 times in a row is larger. (Note that we didn't consider other scenarios, like that the coin comes up heads 60% of the time, which does admittedly make our example rather artificial.)
      The Bayesian prior always has a subjective element to it, based on the beliefs one has before the experiment is performed. As an alternative example, the Bayesian might look at the coin and say, "Oh, wait. This novelty coin is made by a different company than all the ones that I've seen before. In that case, I will not assign a high prior probability to it being a fair coin. OK, let's call the prior probabilities 50-50." If they do this, the probability that the coin is fair after the 5 flips (all coming up heads) is (50%*1/32)/(50%*1/32+50%) = 1/33.
      While the Bayesian prior always has an element of subjectivity, treating all hypotheses as equally likely before the experiment is done also has its pitfalls. The xkcd comic referenced in the video description illustrates this very well. ;-)
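
To make the prior-dependence concrete, here is a minimal sketch (the function name and structure are mine) that reruns the thread's arithmetic for both priors, considering only the two hypotheses used in the example:

```python
# A minimal sketch (function name is mine) of how the posterior depends on
# the prior in the 5-heads example: only two hypotheses are considered,
# "fair coin" and "always comes up heads", as in the example above.

def posterior_fair(prior_fair, n_heads=5):
    """P(coin is fair | n_heads heads in a row)."""
    like_fair = (1 / 2) ** n_heads    # P(data | fair)
    like_trick = 1.0                  # P(data | always heads)
    numerator = prior_fair * like_fair
    denominator = numerator + (1 - prior_fair) * like_trick
    return numerator / denominator

print(posterior_fair(0.99))   # the 99% prior: ~0.756, still probably fair
print(posterior_fair(0.50))   # a 50-50 prior: 1/33 ~ 0.030, probably a trick coin
```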

    • @phyzwiz
      @phyzwiz 1 month ago

      @@ThinkLikeaPhysicist OK. But if we don't know anything about the coin, there is no subjectivity in choosing the prior: it will have to be 50%.
      I think the Bayesian analysis adds the ability to introduce prior knowledge into the model (for example, the probability that the coin is fair or biased). I don't know if there's a way to do that in the frequentist approach. Is there?
      But even if we forget about the prior, and we take the likelihood function, normalize it, and interpret it as a probability of a truth given a measurement (like the probability that the coin is biased given 5 heads), that to me is a much simpler and more transparent way to state the result than the frequentist way. So if you tell me that the probability of that coin being fair is 3% given five flips that are all heads, I will believe it and ask you for more flips :)
      In the case of the sun going supernova in the xkcd comic, I am perfectly fine with the interpretation that it's true at 97% IF we know nothing about how often the sun goes supernova. But that's not the case :)

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  1 month ago

      @@phyzwiz Hi!
      As for adding prior knowledge into a model in a frequentist approach... well, let's say your prior knowledge is another experiment you did. In the context here, it would be a set of coin flips you've done before. So, you've got the coin flips you did before, and the coin flips you're about to do. You can take both of those sets of information and combine them into one big set of information. You can then ask, under a certain hypothesis, how strange the combined result is. In that sense, you can add prior information in the frequentist approach--you take all the experimental data and ask how likely or strange the whole set of data is under a specific hypothesis.
      But, if you're asking for the probability, given some result, that a hypothesis is true, you are, by definition, asking a Bayesian question. The only way I can tell you that the probability that the coin is fair is 3% after coming up heads 5 times is if I am going the Bayesian route. And that requires choosing a prior, which is up to the person making the choice. It simply is not the case that P(coming up heads 5 times | coin is fair) is the same thing as P(coin is fair | came up heads 5 times). We can't do that.
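
The "combine everything into one big data set" idea can be sketched as follows. The earlier-experiment flip counts here are invented for illustration; only the five new heads come from the example:

```python
# A minimal sketch of the frequentist "pool all the data" idea described
# above: merge earlier flips with the new ones and ask how strange the
# combined result is under the fair-coin hypothesis. The earlier-experiment
# counts are invented for illustration.

from math import comb

def p_at_least_k_heads(n, k):
    """One-sided tail probability: P(at least k heads in n flips | fair coin)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

old_flips, old_heads = 10, 9    # hypothetical earlier experiment
new_flips, new_heads = 5, 5     # the five heads from the example

# One big data set, one tail probability -- no prior needed, but the answer
# is about the data given the hypothesis, not the hypothesis given the data.
p_combined = p_at_least_k_heads(old_flips + new_flips, old_heads + new_heads)
print(p_combined)
```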

  • @BANKO007
    @BANKO007 7 months ago

    The +/- 11 GeV/c² came from nowhere. How would you know this is equal to sigma?

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  7 months ago

      Great question!
      It's not easy.
      Here, I'm talking about a case where systematic errors can be estimated from detailed experimental studies.
      (By contrast, in statistics texts, one will often see the case where something is measured many times, and one can look at the spread in the results. That's not what I'm talking about here. Here, I'm talking about a case where something is measured once, and an error bar is quoted.)
      OK, so, then where does sigma come from?
      It's strongly dependent on the case at hand, and requires detailed knowledge of the experiment. An experimentalist will basically try to come up with every important source of error that they can think of, and estimate how strongly those errors are likely to affect the final result. They may simulate making the measurement many times.
      For example, let's say you want to measure the mass of a particle that decays in a particle detector. You have to measure the energies/momenta of all of its decay products and then use them to calculate the mass of the particle you're interested in. But, your detector's measurements of those energies and momenta are not infinitely precise; hopefully you can figure out just how good those energy and momentum measurements are. (One way you can do this is by examining other particle physics processes occurring in your detector that are already well-understood--you can compare what you see in your detector with what is already known from previous experiments.) Once you know how well your detector measures those energies and momenta, you have to calculate how the errors on those quantities can filter through to your mass measurement.
      But something like this is just one source of error. In practice, there are several or many important sources of error that have to be studied. And the effects of all of those sources of error have to be included to estimate sigma.
      I fear what I've written above is a bit too complicated and specific, so I'll try to summarize: the people doing the experiment need to understand their equipment and methods really well, and then they need to estimate how wrong their measurements are likely to be. This is often a very complicated process; calculating an error bar may be one of the hardest parts of producing an experimental result.
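
The "simulate making the measurement many times" idea can be sketched with a toy Monte Carlo. The resolutions and the derived quantity below are invented for illustration (not real detector numbers or real kinematics); the point is that sigma is read off the spread of the simulated results:

```python
# A minimal sketch of the "simulate the measurement many times" idea:
# propagate assumed detector resolutions through to a derived quantity by
# Monte Carlo and read sigma off the spread. The resolutions and the toy
# observable are invented for illustration, not real detector numbers.

import random

random.seed(0)

def toy_mass(e1, e2):
    """Toy derived quantity built from two measured energies (not real kinematics)."""
    return (e1 * e2) ** 0.5

E1_TRUE, SIGMA1 = 60.0, 2.0   # hypothetical energy (GeV) and its resolution
E2_TRUE, SIGMA2 = 70.0, 3.0

samples = [
    toy_mass(random.gauss(E1_TRUE, SIGMA1), random.gauss(E2_TRUE, SIGMA2))
    for _ in range(100_000)
]
mean = sum(samples) / len(samples)
sigma = (sum((m - mean) ** 2 for m in samples) / len(samples)) ** 0.5
print(f"toy mass = {mean:.1f} +/- {sigma:.1f} (spread gives the error bar)")
```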