Hidden Markov Model Clearly Explained! Part - 5

  • Published: 27 Dec 2024

Comments • 270

  • @cmfrtblynmb02
    @cmfrtblynmb02 2 years ago +242

    I got my PhD back in the 2000s. I wish material like this had existed back then. I struggled so much with a lot of basic concepts. I had to study material written in the most extreme mathematical notation without gaining any intuition. You have done an amazing job explaining something I had a really hard time grasping. Well done.

    • @amrdel2730
      @amrdel2730 1 year ago +1

      Yeah, at PhD level it's almost always in use or linked to understanding some machine learning theory. I'm doing it now.

    • @kurghen988
      @kurghen988 1 year ago +2

      I thought exactly the same!

    • @HeartDiamond_variedad
      @HeartDiamond_variedad 1 year ago +1

      There is no content like this in Spanish; we have to resort to videos in English.

    • @stryker2322
      @stryker2322 7 months ago

      Because of people like you, we are all able to go deep into artificial intelligence and simplify what we are watching right now.

    • @RkR2001
      @RkR2001 6 months ago

      Great teaching, Nerd. Where are you based now? I am a doctor from India and need your academic help.

  • @pradipkumardas2414
    @pradipkumardas2414 3 years ago +48

    I am 60 years old. I am learning how to simplify a complex subject. You are a great teacher! God bless you

    • @NormalizedNerd
      @NormalizedNerd  3 years ago +5

      Thanks a lot sir 🙏
      I am very flattered.

    • @BhargavSripada
      @BhargavSripada 2 years ago

      didn't ask

    • @mr.nobody125
      @mr.nobody125 2 years ago +10

      @@BhargavSripada and we certainly didn't ask for your opinion

  • @adaelasm6467
    @adaelasm6467 2 years ago +5

    This might be the fastest I've gone from never having used a concept to totally grokking it. Very well explained.

  • @gracemerino3267
    @gracemerino3267 3 years ago +62

    Thank you for making these videos! I am starting my master's thesis on Markov Chains and these videos help me get a solid introduction to the material. Much easier to understand the books now!

    • @abdullahalsulieman2096
      @abdullahalsulieman2096 2 years ago

      That is great. How is it going so far? I want to apply it to predict system reliability. Not sure how complicated it is. Could you advise, please?

  • @k.i.a7240
    @k.i.a7240 4 months ago +2

    Better explained than the three university AI classes I have gone through. Simple, efficient. Thank you

  • @xherreraoc
    @xherreraoc 3 months ago +2

    This series of videos has been incredibly useful for me to write my Masters thesis. Thank you so much.

  • @dheerajmakhija9752
    @dheerajmakhija9752 1 year ago +3

    A very rare topic explained in the most precise manner. Very helpful 👍

  • @HKwak
    @HKwak 3 years ago +13

    Thanks for the videos. I am majoring in Data Science, and videos like this sometimes help enormously compared to reading texts. Very intuitive and visual. I don't think I will forget the weather signs you showed us today.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      That's the goal 😉

    • @Portfolio-qb7et
      @Portfolio-qb7et 1 year ago

      @@NormalizedNerd Can you briefly explain the steps involved in finding the probability of a sunny day? I really don't understand.

  • @aramisfarias5316
    @aramisfarias5316 3 years ago +5

    After going through your Markov chains series, you, my friend, have got yourself a new subscriber! Great work. Your channel deserves to grow!

  • @mienisko
    @mienisko 1 year ago +1

    I'm just going through the Markov chain playlist, and because of its quality I'm going to subscribe to this channel. Great material!

  • @mertenesyurtseven4823
    @mertenesyurtseven4823 1 year ago +8

    Dude, your videos are severely underappreciated. The animations, and the basic but complete style you use when talking about abstract and complex topics. I just discovered this channel, and I will recommend it to everyone, including beginners, because even they will be able to understand it, as well as advanced people.

  • @IndianDefenseAnalysis
    @IndianDefenseAnalysis 1 year ago +2

    What a channel. I have never come across any Data Science channel like yours. You are doing fantastic work. Love your videos and am going through them ❤

  • @mayurdeo627
    @mayurdeo627 3 years ago +12

    Amazing job. This really helps if you are preparing for interviews, and want a quick revision. Thank you for doing this.

  • @ivanportnoy4941
    @ivanportnoy4941 3 years ago +10

    Loved it! I am looking forward to (maybe) seeing a video on the Markov Chain Monte Carlo (MCMC) algorithms. Best regards!

  • @muneshchauhan
    @muneshchauhan 2 years ago +8

    Many thanks for the beautiful visualization and summarization of the Markov model. Understanding it was effortless. It may require a little revision, but it is comprehensible with ease. 🙂

  • @songweimai6411
    @songweimai6411 2 years ago +2

    Thank you, I really appreciate your work. Watching this video made me realize that the professor in my school class is not a good teacher.

  • @josippardon8933
    @josippardon8933 1 year ago +1

    Great! Just great! I really don't understand why most professors at colleges hate this way of explaining things. They always choose the standard "let's be super formal and use super formal mathematical notation". Yes, it is important to learn the formal mathematical side, but why not combine the formal and informal approaches and put them together in a textbook?

  • @SousanTarahomi-vh2jp
    @SousanTarahomi-vh2jp 8 months ago

    Thanks!

  • @tatvamkrishnam6691
    @tatvamkrishnam6691 2 years ago

    Excellent! Skipped the 4th part of this magnificent Markov series. Took roughly 3 hours to verify things at moments and convince myself. HIT MOVIE!!

  • @gregs7809
    @gregs7809 3 years ago +6

    A really well laid out video. Looking forward to watching more

  • @brainnuke5450
    @brainnuke5450 4 years ago +2

    Don't stop what you are doing! It's amazing.

    • @NormalizedNerd
      @NormalizedNerd  4 years ago

      Thanks!! :D

    • @asaha9479
      @asaha9479 3 years ago

      From the accent you are a Bengali, but are you from ISI? Great video, keep going.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      @@asaha9479 Yup, I'm a Bengali... but I don't study at ISI.

  • @Mosil0
    @Mosil0 2 years ago +21

    Hey, thanks a lot for making these!
    One suggestion if you don't mind: you could avoid using red and green (especially those particular shades you used) as contrasting colors, given that they're close to indistinguishable to about 8% of males. Basically any other combination is easier to tell apart, e.g. either of those colors with blue.
    Just a minor quibble, the videos are otherwise very good!

    • @NormalizedNerd
      @NormalizedNerd  2 years ago +11

      Thanks a lot for pointing this out. Definitely will keep this in mind.

  • @Hangglide
    @Hangglide 7 months ago

    Hello author from the past! Your video is really helpful! Thank you!

  • @zoegorman8233
    @zoegorman8233 3 years ago +6

    Super entertaining videos helping me with my Oxford master's thesis. Study night or movie night? Plus he has an awesome accent :-)

  • @noame
    @noame 2 years ago +5

    Thank you for the clarity of the explanation. Why did you neglect the denominator P(Y)? How can we calculate it? I assume that the correct arg max should take the denominator P(Y) into consideration.

    • @Ujjayanroy
      @Ujjayanroy 11 months ago +1

      We are taking the argmax of a function with X as the variables, so Y doesn't matter because of the argmax... you can refer to Bayes' theorem for maximum likelihood; they always do the same thing.

  • @debasiskar4662
    @debasiskar4662 3 years ago +2

    You explain things nicely. I would request that you make videos on advanced stochastic processes like semi-Markov processes, martingales, etc.

  • @sye9522
    @sye9522 8 months ago

    Amazing!! It really helps me understand the logic behind those scary HMM Python codes. Thank you.

  • @sreelatharajesh2365
    @sreelatharajesh2365 1 year ago +2

    Wonderful video. Amazing explanation. Please explain why P(Y) is neglected. Or is it considered to be 1?

    • @rodrigodiana1774
      @rodrigodiana1774 3 months ago +1

      arg max is computed by varying X, so we can neglect P(Y): it does not vary across iterations and will not change the final result.

  • @girishthatte
    @girishthatte 3 years ago +4

    Awesome explanation! Loved the way you explained the math used for the calculations!

  • @nad4153
    @nad4153 3 years ago +5

    Nice video! But what happens to P(Y) at 8:30 in the final formula? Why does it disappear?

    • @papa01-h2z
      @papa01-h2z 2 years ago +1

      We want to find the X_1, ..., X_n that give us the maximum value. Note that P(Y) does not depend on the Xs and is therefore a constant. A constant does not change which Xs give us the maximum.

  • @VersatileAnthem
    @VersatileAnthem 2 years ago

    Superb explanations! That shows how in-depth your knowledge is!

  • @ITTOMC
    @ITTOMC 1 year ago +1

    Simply wonderful. Keep up your excellent work. Really really well done!

  • @truongsonmai6915
    @truongsonmai6915 3 years ago

    Very helpful video as well as the rest of the Markov series. Wish you luck from Vietnam!

  • @marcovieira8356
    @marcovieira8356 3 months ago

    Thank you for the vid; it is probably the most understandable arg max explanation available around. But something remains unclear to me. You said at 09:30 that one Markov property is that X_i "depends only on X_{i-1}", but the Markov property I know is the opposite: X_i is independent of the earlier past (the future does not depend on the past, just on the current state). Where am I missing the point?

  • @Louisssy
    @Louisssy 4 years ago +5

    Really cool explanation! Can you also explain why P(Y) is ignored?

    • @NormalizedNerd
      @NormalizedNerd  4 years ago +10

      Because of two reasons...
      1. It's often hard to compute P(Y).
      2. To maximize that expression we only need to maximize the numerator (which depends on X). Note that P(Y) doesn't depend on X.

    • @ts-ny6mx
      @ts-ny6mx 4 years ago

      @@NormalizedNerd Thank you! I had the same concern and your explanation makes sense!

    • @mariamichela
      @mariamichela 2 years ago

      @@NormalizedNerd 1. Can't we just compute P(Y) as P(Y|X1) × P(X1) + P(Y|X2) × P(X2) + P(Y|X3) × P(X3)?
      2. True, I agree. Since you didn't say it in the video, I was just wondering where P(Y) disappeared to, and didn't bother to think that the max was actually over X.
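
A short aside for this thread: since P(Y) is one constant shared by every candidate hidden sequence, dividing by it can never change the arg max, and for short sequences P(Y) can even be computed outright by summing the joint probability over all hidden sequences (the law of total probability, as suggested above). Below is a minimal Python sketch of both points. The transition matrix and stationary distribution are reconstructed from numbers quoted elsewhere in these comments; the emission matrix and the observed moods are illustrative assumptions, not the video's exact values.

    import itertools
    import numpy as np

    A = np.array([[0.5, 0.3, 0.2],        # transition P(X_i | X_{i-1}), rows/cols = [Rainy, Cloudy, Sunny]
                  [0.4, 0.2, 0.4],
                  [0.0, 0.3, 0.7]])
    B = np.array([[0.9, 0.1],             # emission P(Y_i | X_i), cols = [Sad, Happy] (assumed values)
                  [0.6, 0.4],
                  [0.2, 0.8]])
    pi = np.array([0.218, 0.273, 0.509])  # P(X_1): the stationary distribution from the series

    obs = [1, 1, 0]                       # observed moods: Happy, Happy, Sad (assumed)

    def joint(xs):
        # P(X = xs, Y = obs) = P(X_1) P(Y_1|X_1) * prod_{i>=2} P(X_i|X_{i-1}) P(Y_i|X_i)
        p = pi[xs[0]] * B[xs[0], obs[0]]
        for prev, cur, y in zip(xs, xs[1:], obs[1:]):
            p *= A[prev, cur] * B[cur, y]
        return p

    seqs = list(itertools.product(range(3), repeat=len(obs)))
    p_y = sum(joint(xs) for xs in seqs)   # P(Y) by total probability
    best = max(seqs, key=joint)           # same winner whether or not we divide by P(Y)
    assert best == max(seqs, key=lambda xs: joint(xs) / p_y)
    print(best, joint(best) / p_y)        # MAP sequence and its posterior probability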

  • @how_about_naw
    @how_about_naw 10 months ago +1

    Oh look, a coffee button! Thanks!

    • @NormalizedNerd
      @NormalizedNerd  10 months ago +1

      Thanks so much!!

    • @how_about_naw
      @how_about_naw 10 months ago

      @@NormalizedNerd No, thank you, sir. This has been very informative and accessible. I will be checking out your other content in the future!

  • @freenrg888
    @freenrg888 3 months ago

    Beautiful! Thank you.
    Question: in the final formula, "arg max (over X) of Prod P(Yi | Xi) P(Xi | Xi-1)" ...
    We have a product term P(X1 | X0) that assumes there is an X0 value. However, there is no X0. Don't we need to replace this term with a different expression that does not rely on X0?
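
For readers with the same question: one standard convention, consistent with the decomposition of P(X) quoted in a comment further down, is to start the product at i = 2 and give the first state its own initial-distribution term (for example, the stationary distribution used in the series), so that no X_0 is needed:

    \arg\max_{X} \; P(X_1)\, P(Y_1 \mid X_1) \prod_{i=2}^{n} P(X_i \mid X_{i-1})\, P(Y_i \mid X_i)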

  • @MERIEMELBATOULEDDAIF
    @MERIEMELBATOULEDDAIF 8 months ago +1

    Hi, please, I just want to know the tool you create those examples with. It's urgent, save me!

  • @ramchillarege1658
    @ramchillarege1658 2 years ago +1

    Very nice and clear. Thank you.

  • @philipbutler
    @philipbutler 3 years ago +3

    This was the perfect video for where I'm currently at. I learned about Markov chains last year, and just finally got a good grasp on Bayes' theorem (I struggled through Prob & Stats years ago). Thanks so much! Keep it up!

  • @artemiosantiagopadillarobl4280
    @artemiosantiagopadillarobl4280 4 years ago +5

    Great videos, keep it up! :)
    It would be nice to have a video about MCMC (Markov Chain Monte Carlo) and the Metropolis-Hastings algorithm.

    • @NormalizedNerd
      @NormalizedNerd  4 years ago

      Great suggestion.

    • @nonconsensualopinion
      @nonconsensualopinion 3 years ago +1

      @@NormalizedNerd I'm on the edge of subscribing. A video on MCMC would convince me to subscribe and never leave!

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      @@nonconsensualopinion 😂😂...Let's see what comes next

  • @ishhanayy
    @ishhanayy 1 year ago

    Very well explained! 👏👏

  • @streamindigoflags8035
    @streamindigoflags8035 2 years ago

    Thanks a lot! These types of videos are amazing and help me understand the concepts from the books. It boosts my interest in this area, and it helps me a lot in doing my project! Please make these kinds of videos for almost all concepts!😍

  • @vocabularybytesbypriyankgo1558

    Great explanation, super helpful

  • @Cookies4125
    @Cookies4125 2 years ago +2

    Thanks for the explanation! You went through the math of how to simplify \argmax P(X=X_1,\cdots,X_n\mid Y=Y_1,\cdots,Y_n), but how do you actually compute the argmax once you've done the simplification? There must be a better way than a brute-force search through all combinations of values for X_1,\cdots,X_n, right?

  • @ViralPanchal97
    @ViralPanchal97 8 months ago

    Thanks a ton, I wish my professors from Monash Uni taught this way.

  • @surajpandey86
    @surajpandey86 3 years ago +1

    I really like the visuals and your presentation with the content. Good work!

  • @asmaaziz2436
    @asmaaziz2436 3 years ago

    Explained so simply. Well done. Helped me a lot.

  • @mehmetkemalozdemir7311
    @mehmetkemalozdemir7311 11 months ago

    At 8:25, P(X) does not look right to me; it should be P(X) = P(X_1) * Prod_{i=2}^{n} P(X_i | X_{i-1}).

  • @xiaoruidu8603
    @xiaoruidu8603 3 years ago +3

    please keep updating~ you are doing an amazing job~~.

  • @TariqulHasan-p2y
    @TariqulHasan-p2y 1 year ago

    Hi, thanks for the wonderful videos on Markov chains. I just want to know: how do you define the transition and emission probabilities? And what do we do about unknown state probabilities? Regards

  • @varshachinnammal
    @varshachinnammal 1 year ago

    Dear sir,
    Your explanation was very clear and understandable. It is full of mathematics: matrices, probability, and so on. I'm from a science background without maths. I needed this for bioinformatics, but it is difficult to relate nitrogenous bases to these matrices and formulae. Will you explain it in a simpler way? It would be very helpful, sir 🥺

  • @Ramakanth2002
    @Ramakanth2002 1 year ago

    "Hello People From The Future!" that was very thoughtful

  • @muqaddaszahranaqvi5585
    @muqaddaszahranaqvi5585 2 years ago

    Finally, some video helped.

  • @lucasqwert1
    @lucasqwert1 1 year ago

    How do we calculate the stationary distribution of the first state? I watched your previous videos but still can't calculate it! Thanks for answering!

  • @24CARLOKB
    @24CARLOKB 2 years ago

    Great video! Just a little lost on where you get P(sad or happy | weather), which I think are the emission probabilities? Thanks!

  • @rohanthakrar7599
    @rohanthakrar7599 1 year ago

    Can you explain the calculation of P(X|Y), the last step of the video, when you substitute in the products of P(Y_i|X_i) and P(X_i|X_{i-1})? Where does the P(Y) in the denominator go? Thanks

    • @Periareion
      @Periareion 1 year ago +1

      Howdy!
      I think of it like this: P(Y) is a constant, and doesn't affect which sequence X has the highest probability of occurring. In other words: since every term gets divided by P(Y), we can just ignore it.
      Perhaps he could've made that a little clearer in the video.
      Cheers!

  • @tetamusha
    @tetamusha 1 year ago

    I got a little confused with the two HMM videos. I thought the second video would solve the argmax expression presented at the end of the first one, but the algorithm that solves this expression is the Viterbi algorithm, not the Forward algorithm from the second video. Just a heads-up to those who got a little lost like me.
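
Since several threads here ask whether brute force is the only option: the arg max can be computed without enumerating all K^n hidden sequences by dynamic programming, which is exactly the Viterbi algorithm named in the comment above; it runs in O(n·K^2). A compact sketch, reusing the same illustrative matrices as the earlier aside (assumptions, not the video's exact values):

    import numpy as np

    A = np.array([[0.5, 0.3, 0.2],        # transition P(X_i | X_{i-1})
                  [0.4, 0.2, 0.4],
                  [0.0, 0.3, 0.7]])
    B = np.array([[0.9, 0.1],             # emission P(Y_i | X_i), cols = [Sad, Happy] (assumed)
                  [0.6, 0.4],
                  [0.2, 0.8]])
    pi = np.array([0.218, 0.273, 0.509])  # P(X_1)

    def viterbi(obs):
        # delta[t, j]: best joint probability of any state path ending in state j at time t
        # psi[t, j]:   the previous state achieving it, kept for backtracking
        n, K = len(obs), len(pi)
        delta = np.zeros((n, K))
        psi = np.zeros((n, K), dtype=int)
        delta[0] = pi * B[:, obs[0]]
        for t in range(1, n):
            scores = delta[t - 1][:, None] * A     # scores[i, j] = delta[t-1, i] * A[i, j]
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) * B[:, obs[t]]
        path = [int(delta[-1].argmax())]           # best final state, then walk the pointers back
        for t in range(n - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return path[::-1]

    print(viterbi([1, 1, 0]))  # -> [2, 2, 1], i.e. Sunny, Sunny, Cloudy under these assumed numbers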

  • @MB-rc8ie
    @MB-rc8ie 3 years ago

    This is gold. Thanks for uploading. Very helpful

  • @najeebjebreel2885
    @najeebjebreel2885 1 year ago

    Nice explanation. Thanks.

  • @Sam-rz5hw
    @Sam-rz5hw 8 months ago

    Excellent explanation!

  • @aadi.p4159
    @aadi.p4159 1 year ago

    Super, super good. Awesome work, bro!

  • @hyeonjusong1159
    @hyeonjusong1159 1 year ago

    You have a knack for teaching! The explanation was clear and the example along with the emojis was so cute! Thank you!!!

  • @atifshakeel1982
    @atifshakeel1982 2 years ago

    Hi, an excellent and very easy way of explaining complex mathematical terms. Did you make these slides in PowerPoint or some other tool? Regards,

  • @acesovernines
    @acesovernines 1 year ago

    Superb explanation

  • @Extra-jv5xr
    @Extra-jv5xr 9 months ago

    Is this related to the Viterbi algorithm? Could you make a video on that?

  • @ameyapawar7097
    @ameyapawar7097 2 years ago

    Very well explained!

  • @丰岛龙健
    @丰岛龙健 3 years ago +1

    Best videos! Hope to see more videos about Markov chains! Thank you

  • @BasilAltaie
    @BasilAltaie 3 years ago

    Where can I find the software you are using for the mathematical visualization? Please advise.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      It's an open-source Python library called Manim.

  • @antoninmartykan9004
    @antoninmartykan9004 1 year ago

    Fantastic series! I am just wondering: is there a more efficient way to calculate the maximum than trying all possible permutations? Thanks!

  • @siphamandlamazibuko1424
    @siphamandlamazibuko1424 9 months ago

    Can someone please explain where the P(Y) in the denominator went when the products were substituted into Bayes' theorem?

  • @abdullahalsulieman2096
    @abdullahalsulieman2096 2 years ago

    Can I use the equation to compute the max joint probability distribution, or is code required to do that?

  • @dannymoore1530
    @dannymoore1530 7 months ago

    This was good! Thank you.

  • @JVenom_
    @JVenom_ 2 years ago +7

    not me studying for my final in 6 hrs

  • @barhum5765
    @barhum5765 2 years ago

    Bro you are a king

  • @islemsahli737
    @islemsahli737 6 months ago

    Thank you, well explained.

  • @elixvx
    @elixvx 4 years ago +1

    Thank you for your videos! Please continue the great work!

  • @karannchew2534
    @karannchew2534 2 years ago

    Hi Normalized Nerd, and everyone reading this, I have a question please. Can I use an HMM the other way round: find the mood sequence given the weather sequence? How?

  • @ThePablo505
    @ThePablo505 1 year ago

    Something that looks so hard, you make understandable even for a 4-year-old kid.

  • @tarazdxs
    @tarazdxs 7 months ago

    Thanks for the video. Is it possible to access the Python code you wrote for this problem?

  • @abdsmmeat7492
    @abdsmmeat7492 2 years ago

    We need more videos on Markov chains

  • @nakjoonim
    @nakjoonim 3 years ago

    You deserve my subscription, thanks a lot!

  • @minjun87
    @minjun87 2 years ago

    Thanks for the great video. What would be the hidden state sequence and the observed sequence in a speech-to-text use case?

  • @nchsrimannarayanaiyengar8003
    @nchsrimannarayanaiyengar8003 3 years ago

    Very nice explanation, thank you

  • @Gamer2002
    @Gamer2002 1 year ago

    You should make more videos, you are awesome

  • @Garrick645
    @Garrick645 4 months ago +1

    As soon as the emojis left, the video went over my head.

  • @thejswaroop5230
    @thejswaroop5230 3 years ago

    hey you from the past....good video

  • @annanyatyagip
    @annanyatyagip 10 months ago

    Excellent video!

  • @jonathanisai9286
    @jonathanisai9286 3 years ago +1

    Amazing video bro! you rock!

  • @santiespinell
    @santiespinell 1 year ago

    How do you get the transition probabilities?

  • @aveenperera3128
    @aveenperera3128 3 years ago +1

    Please do more examples of hidden Markov models.

  • @stephenchurchill3929
    @stephenchurchill3929 2 years ago

    Great video, thanks!

  • @bassami74
    @bassami74 3 years ago

    Thank you for sharing. Could you please explain how to implement an HMM for measuring earnings quality? Need your help 🙏🙏

  • @rustemdevletov2176
    @rustemdevletov2176 3 years ago +1

    I don't understand how you got the values of the stationary distribution. Trying to calculate them myself, I get totally different numbers...

  • @keithmartinkinyua2067
    @keithmartinkinyua2067 2 years ago +1

    This video is good, but it left me hanging. I was expecting you to calculate the probability at the end.

  • @jtanium
    @jtanium 2 years ago +7

    I've been going through these videos and doing the calculations by hand to make sure I understand the math correctly. When I tried to calculate the left eigenvector ([0.218, 0.273, 0.509]) using the method described in the first video (Markov Chains Clearly Explained Part 1), I got a different result ([0.168, 0.273, 0.559]), and I'm wondering if I missed a step. Here's what I did: starting with [0 0 1], meaning it is sunny, pi0·A = [0.0, 0.3, 0.7], pi1·A = [0.12, 0.27, 0.61], pi2·A = [0.168, 0.273, 0.559]. It's interesting that the second element matches. If anyone can help me understand where I went wrong, I'd greatly appreciate it!

    • @sushobhitrathore2555
      @sushobhitrathore2555 2 years ago

      You must start the probability of states as [0 1 0]... you will get the same values.

    • @angadbagga9166
      @angadbagga9166 2 years ago

      @@sushobhitrathore2555 - lol, why [0 1 0]? Sunny is at the last position. And even if we go your way we don't get the right result; the result of pi2·A by your method is [0.252, 0.272, 0.476].
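
For anyone stuck on this thread: the hand calculation above is fine, it just stops too early. Power iteration converges to the stationary distribution only in the limit; two or three multiplications from [0 0 1] give intermediate values like [0.168, 0.273, 0.559], while more iterations approach [0.218, 0.273, 0.509]. A quick check in Python, with the transition matrix reconstructed from the values quoted in the parent comment and the left-eigenvector route as cross-validation:

    import numpy as np

    A = np.array([[0.5, 0.3, 0.2],   # rows recovered from the thread: [0 0 1] @ A = [0.0, 0.3, 0.7], etc.
                  [0.4, 0.2, 0.4],
                  [0.0, 0.3, 0.7]])

    pi = np.array([0.0, 0.0, 1.0])   # start from "sunny"
    for _ in range(50):              # iterate to convergence, not just 2-3 steps
        pi = pi @ A
    print(pi.round(3))               # -> [0.218 0.273 0.509]

    # Cross-check: the stationary distribution is the left eigenvector of A for eigenvalue 1.
    vals, vecs = np.linalg.eig(A.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    print((v / v.sum()).round(3))    # -> [0.218 0.273 0.509]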

  • @user-se9uk2py5k
    @user-se9uk2py5k 3 years ago

    Really good explanation! Thank you!

  • @deepayaganti7653
    @deepayaganti7653 3 years ago

    Clearly explained. Superb!

  • @mika-ellajarapa5646
    @mika-ellajarapa5646 3 years ago +1

    Hello!!!! Please explain conditional random fields :( Thank you

  • @sriramsridhara1763
    @sriramsridhara1763 3 months ago

    How did you compute the stationary distribution for this question when the chain is not irreducible? The sunny-to-rainy probability is zero!