Markov Chain Monte Carlo and the Metropolis Algorithm

  • Published: 5 Jul 2024
  • An introduction to the intuition of MCMC and implementation of the Metropolis algorithm.
  • Science

Comments • 119

  • @jacobm7026
    @jacobm7026 5 years ago +8

    Jeff, you're fantastic for doing this. I've been struggling all semester trying to grasp this concept intuitively. I've finally seen the light

  • @hmsn22
    @hmsn22 8 years ago +3

    One of the best explanations of MCMC I have seen on the web. Wonderful job. Wonderful.

  • @lukechen8606
    @lukechen8606 7 years ago +7

    This video is cool! I really like the two examples you give, illustrating the idea of MCMC concretely and clearly. Thanks!

  • @badbad_
    @badbad_ 8 years ago

    Sir, you are a hero. I read a bunch of definitions, explanations and examples and only yours can make me really understand MCMC. Now I can continue my final assignment

  • @jeremyjacobsen4300
    @jeremyjacobsen4300 9 years ago

    Great lecture. Thanks for showing code. This is the most straightforward MCMC tutorial that I've seen on YouTube thus far.

  • @Ash338
    @Ash338 12 years ago +1

    Excellent presentation. Very clear, with nice examples and simple codes. Thank you.

  • @gumbo64
    @gumbo64 1 year ago

    easily the best MCMC explanation I've seen, huge thanks

  • @svetoslavbliznashki1710
    @svetoslavbliznashki1710 9 years ago

    A great lecture indeed! Thanks very much :) The matlab code you shared really made it as clear as it gets. Keep them coming :)

  • @mayankpj
    @mayankpj 8 years ago +1

    Nice work!
    You explained very clearly and the recording is also very nicely done...

  • @NasusTCotS
    @NasusTCotS 5 years ago +2

    This video might be the only thing saving my thesis. Thanks :D

  • @Overdose21127
    @Overdose21127 11 years ago

    I spent dozens of hours reading papers about MCMC. All that is sh...
    YouTube - the best source of any knowledge. Evidence of this is the lecture above.
    Well done, author, well done...
    Thanks

  • @sethtrowbridge9122
    @sethtrowbridge9122 8 years ago +55

    Yeah I see you, League of Legends. hiding out there in the task bar-- thinking you'll just chill until Mr. Picton gets some free time. Well this great intellect has moved on. When given a choice between toxicity and flaming or creating helpful videos, I'll have you know, Jeff Picton chose the high road.

  • @lauramanuel7619
    @lauramanuel7619 8 years ago +4

    Thanks for the code. As a programmer, seeing how something would be coded makes a lot more sense than seeing a mathematical formula. :) The last example was also quite useful and a great way to tie it all together.

  • @premratan7511
    @premratan7511 8 years ago

    Great video, Jeff Picton. It was really helpful. Thank you very much.

  • @MrFenh
    @MrFenh 7 years ago

    Great video. Thank you, Jeff!

  • @ohrfeigenbaumhauweg
    @ohrfeigenbaumhauweg 7 years ago

    Thank you. This really helped my understanding of the model and the applications.

  • @aliabdollahzadeh1748
    @aliabdollahzadeh1748 9 years ago

    Great work, almost answered all my questions. Thanks

  • @ablack0
    @ablack0 8 years ago +1

    Thanks for this great explanation!

  • @paradox9086
    @paradox9086 9 years ago +1

    Thank you so much for a very clear explanation

  • @cliffwang5481
    @cliffwang5481 7 years ago

    Thanks so much for your inspiring explanation!

  • @piotrbjastrzebski
    @piotrbjastrzebski 10 years ago

    Something that presents MCMC in a concise and clear way. Like it a lot.

  • @SandroBoschetti
    @SandroBoschetti 11 years ago

    Thank you very much for your great lecture. It is of great help to me.

  • @ddaniel5857
    @ddaniel5857 10 years ago

    It is of great help to me, thank you very much!

  • @TheGoodInquisitor
    @TheGoodInquisitor 10 years ago

    Thank you for your clearness. Now I really have an idea.

  • @hannahshen2907
    @hannahshen2907 4 years ago

    That is a really good explanation! Thank you!!!

  • @yuanyuan3056
    @yuanyuan3056 7 years ago +1

    Very clear explanation!

  • @ruili6415
    @ruili6415 4 years ago

    Clear explanation. Thank you, Jeff. A question I still have is: how do we set the acceptance criterion during the iterations?

  • @chloeduan8301
    @chloeduan8301 8 years ago +3

    this is so great, thank you!

  • @harmonyliu8239
    @harmonyliu8239 6 years ago

    So nicely explained!!!!! Thank you !!!!

  • @PedroRibeiro-zs5go
    @PedroRibeiro-zs5go 6 years ago

    Very very good explanation!! Thanks! :D

  • @ankitranjan8292
    @ankitranjan8292 8 years ago

    This is an awesome lecture that clarifies the MCMC concept. I am curious to know how we can apply it to partitioning jobs on 2 parallel machines in order to minimize makespan?

  • @ozgurakpinar1710
    @ozgurakpinar1710 8 years ago

    Dude, You are awesome.

  • @cdclaxton
    @cdclaxton 6 years ago +2

    Just in case it helps someone watching this very good video, here is some R code to demonstrate the Metropolis algorithm:
    # Metropolis algorithm -- Gaussian distribution
    library(ggplot2)
    mu <- 0; n <- 10000; x <- numeric(n)
    for (i in 2:n) { xc <- x[i-1] + rnorm(1)
      x[i] <- if (runif(1) < dnorm(xc, mu) / dnorm(x[i-1], mu)) xc else x[i-1] }
    qplot(x, bins = 50)

  • @momnaahsan8079
    @momnaahsan8079 3 years ago

    Great lecture. Thank you.

  • @ribaat2024
    @ribaat2024 11 years ago

    I couldn't agree more with you! Well done, author!!

  • @DreamWorker-jm5xn
    @DreamWorker-jm5xn 5 years ago

    Some "professors" teach students just to show how much they know about the topic, by using alien language (edit: but some are good profs). I spent hours on that language, but here I could understand MCMC within 36 minutes. You're a superhero!!

  • @metalismystyle
    @metalismystyle 10 years ago +1

    Great video! Do you know how I would use the Metropolis algorithm to select random points from the tails of a Normal Distribution (or do we always have to sample from a Uniform distribution?) at a higher probability than selecting points close to the mean? i.e. I need the target distribution to be a Normal Distribution and the proposed Distribution to be the tails ((-4*sigma, -3*sigma) and (3*sigma 4*sigma)) of the Normal Distribution? Is this possible?
    Thanks a lot!
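For what it's worth, a hedged sketch in Python (my own code, not from the video): Metropolis only needs an unnormalized target density, so to oversample the tails you can make the tails themselves the target. The proposal below is a symmetric random walk that sometimes reflects through 0, so the chain can cross between the two disconnected tail regions (a plain random walk would get stuck in one tail).

```python
import math
import random

def tail_density(x, sigma=1.0):
    """Unnormalized target: N(0, sigma^2) density restricted to 3*sigma <= |x| <= 4*sigma."""
    if 3 * sigma <= abs(x) <= 4 * sigma:
        return math.exp(-x * x / (2 * sigma * sigma))
    return 0.0

def metropolis_tails(n, x0=3.5, step=0.3, seed=0):
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        eps = rng.gauss(0.0, step)
        # Symmetric proposal: random-walk step, with a 50% chance of
        # reflecting through 0 so both tails are reachable.
        xc = (x + eps) if rng.random() < 0.5 else -(x + eps)
        # Accept with probability min(1, p(xc)/p(x)); p(x) > 0 always holds
        # because the chain starts in the support and never leaves it.
        if rng.random() * tail_density(x) < tail_density(xc):
            x = xc
        out.append(x)
    return out

samples = metropolis_tails(20000)
```

The proposal here is still symmetric, so the plain Metropolis ratio applies; with an asymmetric proposal you would need the Hastings correction mentioned in the video.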

  • @eraptor1955
    @eraptor1955 11 years ago

    Very well done!

  • @grandeterra1698
    @grandeterra1698 7 years ago

    Jeff, thank you for these videos. I am self-studying MCMC; is there any chance you could share the simulation code?

  • @nautiyogi8386
    @nautiyogi8386 6 years ago

    Brilliant tutorial!

  • @tamerkhraisha6974
    @tamerkhraisha6974 6 years ago

    Excellent explanation

  • @Cfx45321
    @Cfx45321 11 years ago

    Great presentation. Thnx

  • @scottmacnevin3555
    @scottmacnevin3555 7 years ago

    Well done! Thank you

  • @antonmarkov3715
    @antonmarkov3715 6 years ago

    Thank you very much, that helped me a lot!

  • @picjeffton
    @picjeffton  11 years ago

    I agree. I just didn't feel like opening LaTeX to write out the equation, so I just took a screen cap of it from a paper I had.

  • @MrGeorgerififi
    @MrGeorgerififi 7 years ago

    nice simple examples. thank u

  • @user-bz8nm6eb6g
    @user-bz8nm6eb6g 3 years ago

    Thank you so much! This vid is really helpful.
    Can you explain why the algorithm (22:28) produces N(0,1) instead of N(0,10) or N(0,140), etc.? Is it because normpdf defaults to N(0,1)?
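In case it helps: MATLAB's normpdf(x) with no extra arguments evaluates the N(0,1) density, and the chain's stationary distribution is whatever density appears in the acceptance ratio. A hedged Python sketch (my own code, not the video's) showing that swapping in a wider target gives correspondingly wider samples:

```python
import math
import random

def normpdf(x, mu=0.0, sigma=1.0):
    """N(mu, sigma^2) density at x; defaults mirror MATLAB's normpdf."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def metropolis(pdf, n, x0=0.0, step=1.0, seed=1):
    """Random-walk Metropolis for an arbitrary (unnormalized) density pdf."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        xc = x + rng.gauss(0.0, step)
        if rng.random() * pdf(x) < pdf(xc):   # accept w.p. min(1, pdf(xc)/pdf(x))
            x = xc
        out.append(x)
    return out

narrow = metropolis(lambda x: normpdf(x), 50000)                  # target N(0,1)
wide = metropolis(lambda x: normpdf(x, 0, 10), 50000, step=5.0)   # target with sigma = 10

def sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((v - m) ** 2 for v in xs) / len(xs))
```

The sample standard deviation of `narrow` comes out near 1 and that of `wide` near 10: the target density alone, not the algorithm, fixes the spread.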

  • @paulfrischknecht3999
    @paulfrischknecht3999 9 years ago +11

    @3:00 Wiki says it's from Monte Carlo in Monaco.

  • @VisajDesai
    @VisajDesai 4 years ago

    Hey Jeff, how does the software construct the normpdf of x(i) and x_c in the gaussian code example? Considering we start off with only a single x(i) value and then sample a single point x_c, how can one create an entire pdf to be used in the equation?
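If it helps: normpdf doesn't construct a pdf from the samples at all. It evaluates the known target density function at a single point, so normpdf(x(i)) and normpdf(x_c) are just two numbers. A hedged sketch of what that call computes (my own code; the point values x_i and x_c below are made up):

```python
import math

def normpdf(x, mu=0.0, sigma=1.0):
    """Evaluate the N(mu, sigma^2) density at the point x -- a closed-form
    formula, not an estimate built from samples."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The acceptance probability compares two point evaluations of that formula:
x_i, x_c = 0.7, 1.2
alpha = min(1.0, normpdf(x_c) / normpdf(x_i))
```

So only one candidate and one current value are ever needed per step; the full shape of the distribution is encoded in the formula itself.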

  • @rafaellima8146
    @rafaellima8146 6 years ago

    Thank you so much!

  • @leonardomaffeidasilva9774
    @leonardomaffeidasilva9774 3 years ago

    Thank you. Really helped me.

  • @SergioHernandez-wd7mb
    @SergioHernandez-wd7mb 7 years ago

    Hi, great tutorial, thanks.
    I have a couple of doubts:
    29'30": about the initial guess, what literature can I read to determine such a value for the initial guess?
    30': about the proposal distribution and the cost function, is there any other tutorial or literature on how to design such a proposal distribution, or should exp(-cost) suffice for a wide range of phenomena and datasets?
    Thanks again

  • @yongliangqin8673
    @yongliangqin8673 7 years ago

    excellent tutorial

  • @Paivren
    @Paivren 6 years ago

    So at 19:30, the q distribution is equivalent to the transition matrix T from the markov chain formalism at 14:00, right?

  • @gauthamchandra2081
    @gauthamchandra2081 4 years ago

    Very coherently explained; most videos go into unnecessary esoteric detail.

  • @ateoc9246
    @ateoc9246 4 years ago +1

    At 31:41, is there any justification for choosing the accept/reject test function like this? If yes, where can I find it?

  • @renzocoppola4664
    @renzocoppola4664 7 years ago

    You made it sound easy.

  • @juliusctw
    @juliusctw 9 years ago +6

    Thanks for the video, I have some questions. Let's say that we didn't know that the distribution was Gaussian; how do we decide what proposal distribution to use? Even if we knew that the distribution is Gaussian, how did you know to use normpdf (which is already centered at 0 with sigma of 1)? If the actual distribution was N(2,1) instead, would you still use normpdf?
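A hedged sketch in Python (my own code, mirroring the MATLAB structure): if the target were N(2,1), you would evaluate the density with mean 2 in the acceptance ratio, e.g. normpdf(x, 2, 1), while the proposal can stay the same symmetric random walk. The chain then settles around 2:

```python
import math
import random

def normpdf(x, mu=0.0, sigma=1.0):
    """N(mu, sigma^2) density at x, like MATLAB's normpdf."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def metropolis(pdf, n, x0=0.0, step=1.0, seed=2):
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        xc = x + rng.gauss(0.0, step)          # symmetric random-walk proposal
        if rng.random() * pdf(x) < pdf(xc):    # accept w.p. min(1, pdf(xc)/pdf(x))
            x = xc
        out.append(x)
    return out

samples = metropolis(lambda x: normpdf(x, 2, 1), 50000)
mean = sum(samples) / len(samples)
```

And if the target isn't known in closed form, any function proportional to it works in the ratio; the proposal just needs to be able to reach the whole support.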

  • @chx75
    @chx75 5 years ago

    The Markov condition is not "x4 depends only on x3", but "if we know x3, x4 becomes independent of x2 and x1".

  • @undertehlaw
    @undertehlaw 11 years ago

    At 9:58, when "another" molecule is chosen, was that through a process that had a chance of reselecting the first molecule again?

  • @vidyashankar1389
    @vidyashankar1389 8 years ago

    Everything was brilliant!! Great job. I'm also interested in knowing your approach to the functions step_param and ebm_model, which would paint a clearer picture. Thanks in advance.

  • @harmonyliu8239
    @harmonyliu8239 6 years ago

    One question: how do we choose the proposal q? Are there any requirements for this choice?

  • @MaxKesin
    @MaxKesin 8 years ago +2

    Great video - do you have any more from this class?

  • @picjeffton
    @picjeffton  11 years ago

    Typically all of the molecules would be altered at once, as the position of each molecule is a variable parameter and the collection of these constitutes a state of the system. I described moving them individually to simply convey the intuition of making small changes to the system. But my intuition tells me that selecting single molecules with random reselection would be fine and preserve ergodicity.

  • @arnaldopereira8435
    @arnaldopereira8435 2 years ago +1

    Make more videos, Jeff!

  • @picjeffton
    @picjeffton  11 years ago +4

    Well there is a Monte Carlo in Vegas... but ya you're right.

  • @yonatan1myers
    @yonatan1myers 10 years ago

    At last a clear explanation of this

  • @JuliaLondonChannel
    @JuliaLondonChannel 5 years ago

    Great video 👍🏻

  • @225kirt
    @225kirt 11 years ago

    I liked the song

  • @SoumakBhattacharjee08
    @SoumakBhattacharjee08 5 years ago

    nice video.

  • @marcosmetalmind
    @marcosmetalmind 4 years ago

    very good

  • @waguebocar9680
    @waguebocar9680 7 years ago +2

    very nice Monte Carlo program

  • @SaulBerardo
    @SaulBerardo 11 years ago

    I'm also confused. A clarification about it would be welcome.

  • @Mooorifo
    @Mooorifo 10 years ago

    Have you got a written program for the disks?

  • @haseebshehzad2372
    @haseebshehzad2372 7 years ago

    I need the document presented in the video. Any help? Thanks

  • @francisbaffour-awuahjunior3099
    @francisbaffour-awuahjunior3099 3 years ago

    What is the explicit equation for the energy balance model?

  • @paulfrischknecht3999
    @paulfrischknecht3999 9 years ago

    You say the method will visit the nodes a number of times proportional to "their probability". But we don't assign any probability to the nodes a priori, so really the output of the method *defines* this "per node probability", no?

  • @ahme0307
    @ahme0307 11 years ago +2

    At 15:33 the first product between X0=[0.5 0.2 0.3] and T does not equal [0.2 0.6 0.2]; actually it is [0.18 0.64 0.18], and converges to [0.2213 0.4098 0.3689]. Am I missing something?

    • @RodrigoSilva-yn4on
      @RodrigoSilva-yn4on 5 years ago +1

      I guess you're right! I also realized that, that's why I decided to read the comments!

  • @bv9613
    @bv9613 5 years ago

    Interesting. About the climate example: wouldn't cloud formation be important, since albedo was, and perhaps that would be more important than the feedback, or just as important?

  • @papiedra
    @papiedra 6 years ago +1

    I didn't understand the difference between the Metropolis algorithm and MCMC.

  • @picjeffton
    @picjeffton  11 years ago

    May I ask why?

  • @GabiRav
    @GabiRav 10 years ago +31

    Great explanation , but....MONTE CARLO IS IN MONTE CARLO , not in LAS VEGAS :-)

    • @TanguyI
      @TanguyI 9 years ago +3

      You Americans, so egocentric :-P
      Very clear video BTW. Thanks!

    • @JP-re3bc
      @JP-re3bc 7 years ago +3

      Ah the legendary quality of American public education.
      Yes! Monte Carlo is in Africa, and Africa is some place in the south of Europe. No?

    • @RalphDratman
      @RalphDratman 4 years ago

      The town of Monte Carlo is in the tiny principality of Monaco (that is, a territory originally ruled by a prince) on the Mediterranean coast of France. Monte Carlo was -- and still is -- famous for its iconic, palatial gambling casino.

  • @bobcrunch
    @bobcrunch 8 years ago

    Good job, but you missed the punch line at 7:10 that a histogram of the number of times you land in an interval matches the shape of the curve; i.e., the number of times is a maximum in an interval centered at 0 and falls off in both directions. Maybe it was obvious to others, but maybe I'm a little slow.
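That punch line is easy to verify numerically. A hedged sketch (my own code, not the video's): draw standard-normal samples and count how many land in each unit interval; the counts peak around 0 and fall off in both directions, tracing the shape of the density curve:

```python
import math
import random

rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(100000)]

# Count how many samples land in each unit-wide interval [k, k+1).
counts = {k: 0 for k in range(-4, 4)}
for s in samples:
    k = math.floor(s)
    if -4 <= k <= 3:
        counts[k] += 1
```

A Metropolis chain targeting the same density would produce the same histogram shape; here direct sampling just makes the check quick.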

  • @QuantCoder
    @QuantCoder 11 years ago

    Nicely done. Would have been better if the Hastings correction to alpha was discussed. It was mentioned and even kept in the presentation, but then neglected. It seems you should either drop it and justify the omission, or discuss it fully.

  • @RAP4EVERMRC96
    @RAP4EVERMRC96 2 years ago

    Nice lecture, what's your Elo? :p

  • @dsm5d723
    @dsm5d723 3 years ago

    Taleb brought me here; the Kali Yuga keeps me grinding.

  • @xenonmob
    @xenonmob 3 years ago

    snazzy intro music

  • @jonathansmall4573
    @jonathansmall4573 7 years ago

    I tried running that matrix program. Unfortunately it doesn't converge to (0.2, 0.4, 0.4) as you said. I don't know what I am doing wrong.

    • @picjeffton
      @picjeffton  7 years ago +1

      Jonathan Small I messed up the arithmetic in that example.

    • @jonathansmall4573
      @jonathansmall4573 7 years ago

      He he. Actually I tried again, this time using the built-in matrix multiplication function in Python. It worked. Thanks :)

    • @FA-tq9ip
      @FA-tq9ip 3 years ago

      @@picjeffton When I find the product of the starting state X0 and the Markov transition matrix, I do not get that the probabilities of the next state X1 are [0.2, 0.6, 0.2] as shown, but rather [0.18, 0.64, 0.18]. Am I doing the multiplication wrong, or is that part of the arithmetic error? Thanks for your help and the video.

  • @GabiRav
    @GabiRav 11 years ago

    Can someone explain this?

  • @SotirisSar
    @SotirisSar 10 years ago

    a good one! thank you!

  • @stipepavic843
    @stipepavic843 6 years ago

    thanks a lot! also good old League of Legends days XD

  • @picjeffton
    @picjeffton  10 years ago

    You're quite right. For the purposes of this video though, let's just pretend that is how arithmetic works.

  • @great2816
    @great2816 3 months ago

    The Monte Carlo name came from the famous casino in Monaco, not Vegas, I believe.

  • @bobcrunch
    @bobcrunch 10 years ago

    I get the same answer.

  • @paulfrischknecht3999
    @paulfrischknecht3999 9 years ago

    I don't see the difference between irreducible and aperiodic. IMO the graph is aperiodic (in the sense that there is no subgraph where we will get stuck) iff it is irreducible (for every pair of states (x,y), x and y are mutually reachable with nonzero probability).

    • @ahealey5961
      @ahealey5961 9 years ago +1

      Paul Frischknecht: irreducible means the probability of reaching any state while starting at another state is positive. The period, d, is the largest integer such that every return to a given state i takes a multiple of d steps; i.e., if you can reach i after {2,4,6,8,10} steps then d=2, since {2, 2(2), 2(3), 2(4), ...}. An aperiodic MC would have return times like {2,3,4,6,7}: there is no d > 1 that generates all the periods.
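The two properties really are distinct. A hedged sketch (my own example, not from the video) of a chain that is irreducible but not aperiodic: the two-state chain that always flips has period d = 2, so the state distribution oscillates forever instead of converging:

```python
def step(x, T):
    """Row vector times transition matrix."""
    return [sum(x[i] * T[i][j] for i in range(len(x))) for j in range(len(T[0]))]

# Irreducible: each state reaches the other with probability 1.
# Periodic: every return to a state takes an even number of steps (d = 2).
T = [[0.0, 1.0],
     [1.0, 0.0]]

x = [1.0, 0.0]
history = []
for _ in range(6):
    history.append(x)
    x = step(x, T)
```

The history alternates [1, 0], [0, 1], [1, 0], ... indefinitely. Adding any self-loop probability (e.g. making T[0][0] positive) breaks the period and the iteration then converges, which is why both conditions are stated separately in the video.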

  • @abdullahalsulieman2096
    @abdullahalsulieman2096 1 year ago

    Jeff, I have an algorithm that I need help interpreting.

  • @WoeiPatrickP90
    @WoeiPatrickP90 6 years ago

    Hey you play League of Legends too bro???
    me too hahaaa

  • @dannyndnyad4182
    @dannyndnyad4182 5 years ago

    18:24 u are welcome

  • @zilezile4942
    @zilezile4942 4 years ago

    Learn more about logistic regression with R
    drive.google.com/file/d/1qcq_186AMe2XK9aNiSLxLbvXlAmryWXX/view?usp=sharing

  • @czarekkawecki6548
    @czarekkawecki6548 1 year ago

    The video is great, but why would you think that the name comes from a casino in Las Vegas and not from the original one in Monaco, which the American one was named after?? 😂😂