Expectation Maximization: how it works

  • Published: 4 Oct 2024
  • Full lecture: bit.ly/EM-alg
    We run through a couple of iterations of the EM algorithm for a mixture model with two univariate Gaussians. We initialise the Gaussian means and variances with random values, then compute the posterior probabilities for each data point, and use the posteriors to re-estimate the means and variances.
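
    Below is a minimal sketch (not the lecturer's code) of the procedure just described, written in Python with NumPy. The data points, the starting means and variances, and the two-iteration loop are illustrative assumptions; a real run would use random initial values and iterate until the estimates stop changing.

    ```python
    # Minimal EM sketch for a mixture of two univariate Gaussians (a and b).
    import numpy as np

    x = np.array([1.0, 1.4, 1.9, 5.8, 6.2, 6.7])   # toy 1-D data points

    # Initial guesses for the two Gaussians (would be random in practice)
    mu_a, var_a = 0.5, 1.0
    mu_b, var_b = 7.0, 1.0
    p_a, p_b = 0.5, 0.5          # mixing weights (priors), assumed equal at the start

    def gaussian(x, mu, var):
        """Univariate Gaussian density."""
        return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    for it in range(2):
        # E-step: posterior probability of each Gaussian for every point (Bayes' rule)
        lik_a = gaussian(x, mu_a, var_a) * p_a
        lik_b = gaussian(x, mu_b, var_b) * p_b
        b = lik_b / (lik_a + lik_b)      # P(b | x_i)
        a = 1.0 - b                      # P(a | x_i)

        # M-step: posterior-weighted re-estimates of means, variances, and weights
        mu_a = np.sum(a * x) / np.sum(a)
        mu_b = np.sum(b * x) / np.sum(b)
        var_a = np.sum(a * (x - mu_a) ** 2) / np.sum(a)
        var_b = np.sum(b * (x - mu_b) ** 2) / np.sum(b)
        p_a, p_b = a.mean(), b.mean()

        print(f"iter {it + 1}: mu_a={mu_a:.2f} var_a={var_a:.2f} "
              f"mu_b={mu_b:.2f} var_b={var_b:.2f}")
    ```

    Note that the mixing weights p_a and p_b start at 0.5 and are then re-estimated from the posteriors; this is the usual choice, and it addresses the "what is P(b) initially?" question that several commenters ask below.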

Comments • 165

  • @Ewerlopes · 8 years ago +101

    Man, this guy is unbelievable! I wish I had professors like him! Great explanation, thanks!

  • @salmankhalifa2867 · 9 years ago +163

    God, this is so interesting and it now makes sense. I wish you were my data-mining lecturer! You should look into teaming up with Khan Academy!

  • @MrReierz · 7 years ago +8

    You have no idea how good this presentation was. I've searched the web for hours. Nobody could explain this except your video! Thank you

  • @gergerger53 · 10 years ago +4

    What an excellent explanation! As soon as I pulled a face trying to figure out what the new mean estimation was doing, you stopped and explained it and realised how unusual it might look at first. So many teachers lack this ability to go beyond what they know and imagine how certain formulas/concepts might need an extra minute or two of explanation for people who haven't seen it before. Subscribed!

    • @vlavrenko · 10 years ago +2

      Thank you! Very happy you find my videos helpful.

  • @NickLilovich · 5 months ago +1

    This video has (by far) the highest knowledge-to-time ratio of any video on this topic on YouTube. A clear explanation of the math and the iterative method, along with an analogy to the simpler algorithm (k-means). Thanks Victor!

  • @xisnuutube · 7 years ago +69

    Very nice explanation. But we need to see where you are pointing.

  • @xinpang9611 · 7 years ago +11

    This explanation is fantastic! I have been studying machine learning courses in my master's but always found it difficult to understand. Now I finally understand EM. Thank you Prof. Lavrenko.

  • @sanjaykrish8719 · 7 years ago +1

    One lecture like this can uncomplicate things for so many people around the world. After understanding the 1D case, it is so much easier to grasp the higher-dimensional ones.

  • @drianhoward · 9 years ago +5

    Dear Victor
    Your machine learning tutorial videos are really great!

  • @roshinishake9400 · 1 year ago +4

    I want professors like you in my college. You're so great, sir, thank you so much ❤️ for your great explanation ☺️. Your videos make machine learning easy.

  • @archismanghosh7283 · 4 months ago

    You just cleared every doubt on this topic. It's 10 days before my exam, and watching your video cleared everything up.

  • @InvictusForever · 2 months ago

    So helpful. Really lucky to have found this goldmine!!

  • @erfandejband8945 · 2 years ago

    Such a simple explanation and at the same time so comprehensive. This video really helped me understand this algorithm.

  • @joelharsten2408 · 9 years ago +2

    That was explained very well! It was nice that you often referred and compared to K-means. That made it easy to understand this algorithm! Thank you!

  • @ashokharnal · 7 years ago +1

    So easily explained with superb clarity.

  • @deepakjoshi7730 · 6 months ago

    Splendid. The example portrays the algorithm step by step very well!

  • @orresearch007 · 9 years ago +3

    Lucid explanation of EM. Mr Lavrenko is a superb teacher, explaining concepts in understandable language and helping the learner make sense of the equations.
    Keep up the good work.

  • @azuriste8856 · 9 months ago

    Great explanation, sir. I don't know why, but it motivated me to appreciate and comment on the video.

  • @rayanaay5905 · 2 years ago

    Incredible. I have read a lot of articles and papers on EM, but this video gave me everything I needed to know!

  • @dhineshkumarr3182 · 9 years ago +1

    You have really mastered the technique of explaining big things to a beginner. It was very helpful. I am definitely going to follow your lectures in the future. Thank you so much for the knowledge.

  • @ghilesdjebara8066 · 1 year ago

    OMG, this is such a simple and intuitive explanation. THANKS!

  • @JD-rx8vq · 11 months ago

    Wow, you explain very well, thank you! I was having a hard time understanding my professor's explanation in our class.

  • @tchen8124 · 7 years ago

    You are the best teacher I ever had. Thank you so much.

  • @vitid1 · 8 years ago +1

    best explanation I've ever seen

  • @theOceanMoon · 7 years ago +2

    The annotations really helped me understand better.
    Thanks, man

  • @Luckymator · 7 years ago

    I can't thank you enough! You explain it so well that I can now understand the formulas in the script I have to learn.

  • @muhammadfaizanasghar77 · 2 years ago

    Brilliantly done, hats off.

  • @mohamadkoohi-moghadam7657 · 8 years ago

    Awesome!! Obviously you are a teacher who knows the art of teaching! Thank you!

  • @westsidde · 9 years ago

    I have tried to find a good explanation of GMMs for beginners (like me) new to this topic, and your explanation was the best I found. Very clear and good explanation, I'm very thankful, sir! Keep up the good work!

  • @rohanshah9593 · 1 year ago

    Amazing explanation! I was struggling to understand this concept. Thank you so much!

  • @lima073 · 2 years ago

    Best explanation I've seen on this subject. Thank you very much, your videos are really awesome!

  • @rishabhchopra6418 · 7 years ago +1

    Amazing! I was stuck on the Udacity course! Now it's all clear! :)

  • @sairaamvenkatraman5998 · 7 years ago +1

    You explain this so well! Awesome!!

  • @SandraMenesesBarroso · 9 years ago

    You are very good at teaching, you make it look easy.

  • @SyedArefinulHaque · 7 years ago

    This is a perfect practical example, it helped clear up my understanding of the EM algorithm!!

  • @playerseazay · 7 years ago +1

    You explain really clearly. Thanks for saving me from struggling with the EM algorithm.

  • @ShahzadHassanBangash · 2 years ago

    Beautifully explained. Keep it up professor

  • @phuongdinh5836 · 7 years ago

    I wish you were my professor! I keep having to go to your channel after the expensive in-class lectures.

  • @MrYoyo2345 · 1 year ago

    God bless you sir, and your excellent explanations. You are a life saver

  • @luisfernandocamarillo9071 · 7 years ago

    Congratulations, sir, a masterful EM lecture.

  • @hasnainvohra7754 · 10 years ago +1

    Excellent explanation. Thank you.

  • @pudinda · 7 years ago

    great explanation! the animations and the equations on the side, coming at the right time, really helped :)

  • @muhmazabd · 9 years ago +23

    Thanks a lot, very well explained.

    • @vlavrenko · 9 years ago +7

      Thank you!

    • @TheDionator · 8 years ago +4

      Best explanation I've seen on the topic!
      Please make more!!

  • @m07hcn62 · 11 months ago

    This is an awesome explanation. Thanks!

  • @uniqguy111 · 1 year ago

    This is the gold standard.

  • @fengjeremy7878 · 1 year ago

    Students that have such a great teacher make me jealous.

  • @Artaxerxes. · 2 years ago

    what a brilliant explanation

  • @mikelmenaba · 1 year ago

    Great explanation mate, thanks!

  • @ismailatadinc816 · 2 years ago

    Amazing explanation!

  • @DM-py7pj · 2 years ago

    Probability and likelihood are used interchangeably here. Looking at P(xi | b), given the notation, I assume the calculation is indeed a probability. I enjoyed this. I would have liked to see an explanation of stopping at a convergence threshold.

  • @JiangXiang · 10 years ago

    Excellent! Thanks for the effort to make it easy to understand!

    • @vlavrenko · 10 years ago +1

      Thanks! Glad to know these videos are useful.

  • @sagarmeisheri2361 · 10 years ago +4

    Thank you very much. Excellent Example !!

  • @babyroo555 · 1 year ago

    incredible explanation!

  • @schummelhase6055 · 9 years ago +1

    Thanks from Germany! Very, very helpful.

  • @林思琳-v2r · 8 years ago +9

    May I ask what the value of P(b) is in the E-step? If I have two clusters, does that mean I can use 0.5 as the initial P(b) in the E-step?

  • @MrMopuri · 8 years ago

    Great work, Mr. Lavrenko!

  • @Luca-yy4zh · 2 years ago

    Very good explanation. Thanks a lot!

  • @oTwOrDsNe · 7 years ago

    Thank you for making this so clear and intuitive! Love your lectures

  • @mahoneyj2 · 7 years ago

    Great explanation and example! Many thanks!

  • @jamesschoi87 · 6 years ago +2

    so what are P(a) and P(b) initially?

  • @TheChavakula · 7 years ago

    Wow! the best explanation ever!

  • @DesmondCaulley · 8 years ago

    Thanks man. Really really great explanations.

  • @sreichli · 8 years ago

    Thank you for creating this video -- very helpful!

  • @brynkimura6738 · 7 years ago

    Thank you so much for posting this explanation!

  • @HongjoKim · 9 years ago

    Thank you very much. This video helps me a lot.

  • @gregoire33 · 7 years ago +7

    Hello, thanks for this video.
    At the first iteration, what value of p(b) do you use?

  • @andriananicolaou4087 · 9 years ago +4

    Thank you very much for the presentation. I have one stupid question though. During the EM algorithm, when we apply Bayes' rule to calculate the posterior p(b|x), how do we know the prior probabilities p(b) and p(a) (which are supposed to be equal)?

    • @raymendoza8157 · 6 years ago +2

      I had the same question. Fishing through the comments, it seems some have used 0.5 and 0.5 in this two-class example. So I guess our null assumption would just be that all classes are equally likely. Not sure what happens after we go past the initial step. Did you figure this out? Hope this helps!

  • @_Junkers · 10 years ago +1

    Thank you for this, very valuable.

  • @wadewang574 · 2 years ago +1

    I have a question: at 4:16, the formula for calculating μ_b divides by (b_1 + b_2 + ... + b_n), but my first thought was to divide by the number of data points, i.e. n, so why not divide by n?

  • @raymendoza8157 · 6 years ago +2

    Got a question for anyone who can answer! I follow most of the calculations but I'm not sure how we get p(b) in the second equation (application of Bayes' Theorem). Or, for that matter how we calculate p(a) for the denominator. We can't estimate them from prior class membership samples since we haven't labeled anything. Any idea? Am I misunderstanding it?

  • @ventezcamilo · 6 years ago

    Thanks! it is great to finally understand

  • @harshtrivedi6605 · 8 years ago +2

    +1. Thanks a lot... I finally understood EM. And now it makes complete sense :)
    Would you mind citing a resource which proves that the initial choice of Gaussians doesn't make any difference? And a proof that, as these Gaussians keep shifting iteration by iteration, they always converge? I guess it would be the same as for K-means, wouldn't it? Thanks.

  • @pperez1224 · 1 year ago

    Excellent, the only one that I understood :-)
    Question: why does it converge? Does it have something to do with the central limit theorem?

  • @UC46Vf8SyesF0MGwcZDowszA · 8 years ago +1

    Excellent! Clean notation, very clear explanation.
    One question: are the priors the means of the Gaussians?

  • @Dominoanty · 7 years ago

    Great Explanation ! Thank you !

  • @virgenalosveinte5915 · 10 months ago

    Amazing, thank you!

  • @welovemusic9056 · 9 years ago

    Thank you very much. Well explained !!

  • @maryamomrani155 · 9 years ago

    Thanks for such a good explanation!

  • @NuclearSpinach · 2 years ago

    I'm trying to fully connect the theoretical pieces that justify the weighted averages in the mu_a, mu_b, sigma_a, and sigma_b estimates. I.e., where are the E-step and the M-step, and how do they connect to what's written? *Great* explanation. Thank you for uploading.

  • @shaimaamohamed8797 · 8 years ago

    Wonderful explanation! Thanks a lot.

  • @tonyngo9400 · 9 years ago

    Great explanation, thank you!

  • @G1aVoL · 7 years ago

    Thank you for the amazing video.

  • @sameerjain6086 · 10 years ago +12

    When you refer to a point by saying 'that point there', I have no visual indication on the screen.

    • @vlavrenko · 10 years ago +1

      Added the annotations, hope this helps.

  • @hoovie3000 · 7 years ago

    Great explanation!!

  • @ahmedibrahim-lw1ut · 1 year ago

    Great explanation. I have a question, though, about the values of xi. Why do we need to multiply those values by the posterior probability? What do the values of x really represent? If they are just points on the line, why do we multiply by them in the equation for recalculating the means?

  • @cumaliturkmenoglu2240 · 7 years ago +1

    Wonderful lecture. But how do we determine how many Gaussians (clusters) are behind the data?

  • @charlibravo2578 · 2 years ago

    Thanks a lot for this

  • @aakankshachoudhary8532 · 7 years ago

    Thank you so much...This helped me a lot!!! Thanks a lot, again!

  • @stochasticneuron · 7 years ago +7

    At [7:34], instead of mu_1 to mu_n it should be mu_a and mu_b for the variance calculation, if I am not wrong?

    • @arnabsen8633 · 7 years ago +3

      yes. and he mentioned the mistake in the video.

  • @damianoazzalini8573 · 10 years ago

    Very clear video. Thank you.

  • @lorrainewang9929 · 10 years ago

    Such a nice video! Thanks a lot!

    • @vlavrenko · 10 years ago +1

      Thank you for the kind words. Good to know you find it useful.

  • @ajayram198 · 9 years ago +1

    At 2:04, why is the posterior calculated? And what is the difference between p(x|a), p(x|b) and p(a|x), p(b|x)? Which of these corresponds to the probability that the data point belongs to cluster a/b?

  • @RyeCA · 2 months ago

    excellent, thank you

  • @thestuarts3815 · 6 years ago +1

    Hi Victor, thank you a lot for these interesting videos. Do you post your slides anywhere? I'd love to keep a copy of everything I watch here.

  • @Jessica-dt9by · 8 years ago

    Awesome video !!

  • @CHANTI8947 · 8 years ago +1

    The video states that we are aware of the data points coming from 2 Gaussians. GMM being an unsupervised model, would we have that kind of information (number of Gaussians) beforehand?

  • @Chr0nalis · 8 years ago +1

    It sounded as if you said: "we will discover the Gaussians automagically" :)

  • @peterpan-hf5rg · 9 years ago

    Very comprehensible explanation of EM. But what I don't get is what all these probabilities look like when k = 3.

  • @8147333930 · 2 years ago

    thank you so much

  • @alirezakhoshkbari7625 · 9 years ago

    Well done!