An introduction to the Random Walk Metropolis algorithm

  • Published: 26 Jul 2024
  • This video is part of a lecture course which closely follows the material covered in the book, "A Student's Guide to Bayesian Statistics", published by Sage, which is available to order on Amazon here: www.amazon.co.uk/Students-Gui...
    For more information on all things Bayesian, have a look at: ben-lambert.com/bayesian/. The playlist for the lecture course is here: • A Student's Guide to B...

Comments • 32

  • @collincherubim2698
    @collincherubim2698 4 years ago +19

    Finally, visuals for MCMC! Highly illuminating, thank you.

  • @jiachengchen7828
    @jiachengchen7828 3 years ago +6

    This video is 10000x better than the equations in the class.

    • @distrologic2925
      @distrologic2925 2 years ago +2

      Right? I don't understand how actual lecturers can be THAT terrible at conveying knowledge. It's like they don't want people to understand it.

  • @AmrutaOfAllTrades
    @AmrutaOfAllTrades 4 years ago +3

    Finally found out why it's called Monte Carlo. This is the best explanation of the algorithm I have ever seen. Thanks for this.

  • @bradh2649
    @bradh2649 12 days ago

    Beautifully explained

  • @vman049
    @vman049 5 years ago +9

    Best explanation of MH on YouTube. Thank you!

  • @johnedwardhills4529
    @johnedwardhills4529 3 years ago +3

    Thanks Ben. This is a really clear visual representation of what the algorithm is doing and how it works in principle. Excellent stuff!

  • @MisterCactus777
    @MisterCactus777 2 years ago

    I used this for my Bachelor's thesis to simulate ultracold fermions in a harmonic trap, which was a replication of real experiments! Thank you for explaining; I had forgotten what it did...

  • @lemyul
    @lemyul 5 years ago +4

    Thank God there's a video about this.

  • @DJRaagaMuffin
    @DJRaagaMuffin 4 years ago

    Great explanation. Thank you

  • @tergl.s
    @tergl.s 4 months ago

    The simulation is so helpful! Thanks.

  • @mikotokitahara9923
    @mikotokitahara9923 3 years ago

    Best one on YouTube, thanks a lot.

  • @FluxProGaming
    @FluxProGaming 4 years ago

    Subscribed. Good voice, good explanations!

  • @mikolajwojnicki2169
    @mikolajwojnicki2169 3 years ago

    Great video. Way easier to understand than my uni lectures.

  • @user-mn8th3ie1t
    @user-mn8th3ie1t 4 months ago

    Very insightful.

  • @karimaelouahmani7078
    @karimaelouahmani7078 2 years ago

    Brilliant, honestly.

  • @Penrodyn
    @Penrodyn 4 years ago

    Are the Mathematica programs you used in the video available? I particularly liked the last one where you showed a more complicated surface. I also just ordered your book; are they available with that?

  • @mojiheydari
    @mojiheydari 3 years ago

    awesome

  • @ahmedjunaidkhalid3929
    @ahmedjunaidkhalid3929 5 years ago

    I have a question. Suppose rather than having just one value theta, I have multiple values [A,B,C] in my state. Each variable can only have four values [0,1,2,3]. How would I choose a new state from the previous one? Would I calculate a new value for each variable and call it a proposed state and then calculate the value for the complete system?

    • @jelmerdevries7827
      @jelmerdevries7827 4 years ago +3

      Probably too late, but for a multi-variable model you want to use a Gibbs sampler; see the sketch below.
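
A minimal sketch of that suggestion: update one variable at a time, each with its own Metropolis accept/reject step (Metropolis-within-Gibbs). Everything here is illustrative and not from the video; the target `p_tilde` is a placeholder to be replaced by your own unnormalized posterior over states [A, B, C], each variable taking values in {0, 1, 2, 3}:

```python
import random

def p_tilde(state):
    # Placeholder unnormalized target over states [A, B, C];
    # replace with likelihood x prior for your own model.
    a, b, c = state
    return 1.0 + a * b + (3 - c)

def metropolis_within_gibbs(n_iters, init=(0, 0, 0)):
    state, samples = list(init), []
    for _ in range(n_iters):
        for i in range(len(state)):                    # one variable at a time
            proposal = list(state)
            proposal[i] = random.choice([0, 1, 2, 3])  # symmetric proposal
            r = p_tilde(proposal) / p_tilde(state)
            if random.random() < min(1.0, r):          # Metropolis accept/reject
                state = proposal
        samples.append(tuple(state))
    return samples

samples = metropolis_within_gibbs(10_000)
```

A full Gibbs sampler would instead draw each variable exactly from its conditional distribution given the others; the Metropolis step above is a drop-in substitute when those conditionals are hard to normalize.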

  • @distrologic2925
    @distrologic2925 2 years ago

    Don't gaps in the true distribution skew the samples to the borders of these gaps because the random walk is less likely to cross the gap, especially with a low sigma in the jumping distribution?
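
That intuition can be checked empirically: the chain still targets the right distribution asymptotically, but with a small proposal sigma it crosses the gap so rarely that any finite run over-represents the mode it started in. A minimal sketch with a made-up two-bump target (all numbers here are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(theta):
    # Made-up bimodal target: two unit-variance bumps at -4 and +4,
    # with a near-zero-density gap around 0.
    return np.exp(-0.5 * (theta + 4) ** 2) + np.exp(-0.5 * (theta - 4) ** 2)

def count_gap_crossings(sigma, n_iters=50_000, theta0=-4.0):
    theta, crossings = theta0, 0
    for _ in range(n_iters):
        prop = theta + rng.normal(0.0, sigma)   # random-walk proposal
        if rng.random() < min(1.0, p_tilde(prop) / p_tilde(theta)):
            if theta * prop < 0:                # accepted jump across the gap
                crossings += 1
            theta = prop
    return crossings

for sigma in (0.5, 2.0, 8.0):
    print(f"sigma={sigma}: {count_gap_crossings(sigma)} crossings")
```

In a toy run like this, sigma = 0.5 yields far fewer crossings than sigma = 8, which can hop directly between the bumps at the cost of a lower overall acceptance rate.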

  • @wunderjahr
    @wunderjahr 3 years ago

    👏👏👏

  • @darcycordell7156
    @darcycordell7156 3 years ago +1

    Maybe a dumb question, but at 5:46 aren't you using the unknown distribution to calculate r? Isn't the black line on the graph the unknown distribution you are trying to estimate?

    • @user-lx7jn9gy6q
      @user-lx7jn9gy6q 3 years ago

      No, that's a good question. The way I understand it is that all you need to compare is the ratio of the numerators of Bayes' rule. You sample through lots of different possible values of the parameter (theta), and the walk moves across the graph as it hits parameter values that correspond to the true posterior distribution. Most of the guesses are wrong and don't lead to anything at all, which is why the animation at around 8 minutes shows many more wrong guesses than correct ones.
      The numerator of Bayes' rule dictates where those true parameter spots are; the denominator decides their height. This sampling method estimates that height without ever computing the denominator. So for your question about r: all we need is the numerator, not the denominator.
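
In code, the point of the reply above is that r only ever needs the numerators, likelihood × prior at the current and proposed theta; the denominator (the evidence) would be identical in both and cancels. A minimal sketch with an illustrative placeholder target, worked on the log scale for numerical safety:

```python
import numpy as np

def log_p_tilde(theta):
    # Unnormalized log posterior: log likelihood + log prior.
    # Placeholder standard-normal target; the normalizing constant
    # (the denominator of Bayes' rule) never appears anywhere.
    return -0.5 * theta ** 2

def accept(theta_prop, theta_curr, rng):
    # log r = log p_tilde(theta') - log p_tilde(theta); the evidence
    # would contribute identically to both terms and cancels.
    log_r = log_p_tilde(theta_prop) - log_p_tilde(theta_curr)
    return np.log(rng.random()) < min(0.0, log_r)

rng = np.random.default_rng(0)
print(accept(0.5, 1.5, rng))  # proposals toward higher density are favoured
```

Working with log p-tilde also avoids underflow when the likelihood is a product over many data points.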

  • @siarez
    @siarez 5 years ago +2

    I don't get where the likelihood term and the prior term come from. Here we assume they exist. What is an example of a practical application where we have these two terms but don't have the posterior?

    • @GeoffRuddock
      @GeoffRuddock 5 years ago +1

      @Siarez The difficult part in calculating the posterior is usually the denominator (the marginal distribution). This algorithm uses the ratio of unnormalized posteriors, so the cumbersome marginal distribution cancels out.

    • @AP-rs5wz
      @AP-rs5wz 5 years ago

      Yes, the marginal can be quite costly to compute, as you have to integrate out so many (potentially) unknown dimensions.

    • @payam-bagheri
      @payam-bagheri 3 years ago

      I agree with you. There's a disconnect in the explanation in the video. The video mentions that the prior can be anything (just to have a starting point) but doesn't explain where the likelihood comes from.

    • @jimbocho660
      @jimbocho660 3 years ago +1

      @@payam-bagheri The likelihood is computed in the usual way from your data and your proposed data generation model. This video is about how to sample from your unnormalized posterior once you have an expression for it as likelihood × prior; a short sketch follows below.
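
To make that concrete, here is a minimal sketch of where the two terms come from, with made-up observations and an assumed Normal data generation model (none of the numbers are from the video). The resulting function is exactly what a Metropolis sampler needs:

```python
import numpy as np
from scipy.stats import norm

data = np.array([4.8, 5.1, 5.3, 4.9, 5.2])  # made-up observations

def log_posterior_unnorm(mu):
    # Likelihood: comes from the data plus an assumed data generation
    # model -- here, observations ~ Normal(mu, 1).
    log_lik = norm.logpdf(data, loc=mu, scale=1.0).sum()
    # Prior: chosen before seeing the data -- here, a wide Normal on mu.
    log_prior = norm.logpdf(mu, loc=0.0, scale=10.0)
    # Likelihood x prior: the unnormalized posterior Metropolis samples from.
    return log_lik + log_prior

print(log_posterior_unnorm(5.0) > log_posterior_unnorm(0.0))  # True: mu near the data scores higher
```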

  • @ujjwaltyagi3030
    @ujjwaltyagi3030 5 months ago

    Where can I find the code for this?
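
The animations in the video were written in Mathematica and do not appear to be posted alongside it, but the algorithm itself fits in a few lines. A minimal, self-contained Python sketch of Random Walk Metropolis, with a placeholder standard-normal target standing in for a real unnormalized posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p_tilde(theta):
    # Unnormalized log target (log likelihood + log prior).
    # Placeholder: a standard normal; substitute your own model.
    return -0.5 * theta ** 2

def random_walk_metropolis(n_iters=10_000, theta0=0.0, sigma=1.0):
    theta = theta0
    samples = np.empty(n_iters)
    for t in range(n_iters):
        prop = theta + rng.normal(0.0, sigma)       # symmetric jumping kernel
        log_r = log_p_tilde(prop) - log_p_tilde(theta)
        if np.log(rng.random()) < min(0.0, log_r):  # accept with prob min(1, r)
            theta = prop
        samples[t] = theta                          # on rejection, repeat theta
    return samples

samples = random_walk_metropolis()
print(samples[1000:].mean(), samples[1000:].std())  # roughly 0 and 1 after burn-in
```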

  • @amenaalhassan2807
    @amenaalhassan2807 2 years ago

    I wish I had money for Tết.