Why Do Scientific Predictions Have Error Bars?

  • Published: 11 Sep 2024
  • In science, we make hypotheses and test them. Often, this means comparing the predicted value of some quantity in nature to its measured value. You might be aware that measurements have error bars, but predictions do, too! We recently saw an example where the error bar on a scientific prediction was especially important: the new (2023) result on muon g-2 from Fermilab.
    Here, we look at some of the ways uncertainties arise in scientific predictions. These principles apply to real life, too.
    Links mentioned in the video:
    The New (2023) Result from the Muon g-2 Experiment:
    • The New (2023) Result ...
    The Value of Muon g-2 Predicted in the Standard Model:
    • The Value of Muon g-2 ...
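The description above says that predictions carry uncertainties of their own. A minimal sketch of the standard rule for combining independent error sources in quadrature; the numbers below are purely illustrative, not the real muon g-2 error budget:

```python
import math

def combine_in_quadrature(*uncertainties):
    """Total uncertainty from independent (uncorrelated) error sources."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Illustrative numbers only: two independent contributions to a
# predicted value's error bar.
theory_error = 4e-10   # e.g. uncertainty from one theoretical input
input_error = 3e-10    # e.g. uncertainty from a measured input parameter

total = combine_in_quadrature(theory_error, input_error)
# The quadrature total (5e-10) is smaller than the naive sum (7e-10),
# because independent errors are unlikely to all push the same way.
print(total)
```
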

Comments • 7

  • @ThinkLikeaPhysicist
    @ThinkLikeaPhysicist  9 months ago

    Hi! Questions?

  • @martinpollard8846
    @martinpollard8846 9 months ago +1

    Excellent as usual. Thank you.

  • @jonathanbyrdmusic
    @jonathanbyrdmusic 9 months ago

    I love your videos. My observation is that the audio is very quiet compared to other videos on YouTube. My suggestion is that you compare your volume to other videos before you export. I often use SciShow as a good, popular model.

  • @KaiseruSoze
    @KaiseruSoze 9 months ago

    How would you verify that a meter stick is 1 meter long? Is there a theory that predicts its length? Same with clocks. How would you verify that one second is in fact one second long?

    • @ThinkLikeaPhysicist
      @ThinkLikeaPhysicist  9 months ago +1

      Hi!
      Well, historically, the meter was defined as a certain fraction of the Earth's polar circumference, and the second was defined as a certain fraction of a day.
      But, neither of those definitions is all that precise, so now we have the following:
      The meter is defined in terms of the speed of light. The speed of light is defined to be _exactly_ 299792458 meters per second. This doesn't mean we know the speed of light perfectly; it means that we transfer the imprecision of our knowledge of the speed of light into the definition of the meter. So, if we measure the speed of light more precisely, this updates our knowledge of the length of the meter; the _numerical_ value of the speed of light does not change.
      Of course, that only works if we know the answer to your second question, which is what a second is. The second is defined in terms of the hyperfine transition frequency of cesium-133. That frequency is defined to be 9192631770 oscillations per second. And that's valid only if the measurement is performed under strict conditions to minimize error. Just like above, this means that if measurements become more precise, our knowledge of the length of a second gets updated.
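The reply above can be illustrated numerically: because the SI fixes the speed of light to an exact value, the error bar on a length realized this way comes entirely from the time measurement. A minimal sketch with made-up measurement numbers (only the two constants are real):

```python
# Exact by SI definition -- these numbers carry no uncertainty:
C = 299_792_458          # speed of light, m/s
CS_FREQ = 9_192_631_770  # Cs-133 hyperfine transition frequency, Hz

# Hypothetical lab measurement: the time light takes to traverse a bar,
# with the clock's uncertainty (both values are assumptions for illustration).
measured_time = 3.3356e-9       # seconds
time_uncertainty = 0.0005e-9    # seconds

# Since C is exact, the length uncertainty is just C times the time error:
length = C * measured_time
length_uncertainty = C * time_uncertainty
print(f"bar length = {length:.6f} m +/- {length_uncertainty:.6f} m")

# A better clock shrinks the error bar on the length; the number 299792458
# never changes -- that is what "defined exactly" means.
```

The same logic runs through the second: a cesium clock that counts CS_FREQ oscillations more cleanly realizes the second more precisely, but the defined number 9192631770 stays fixed.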