Hidden Markov Model Clearly Explained! Part - 5

  • Published: 1 Oct 2024
  • So far we have discussed Markov Chains. Let's move one step further. Here, I'll explain the Hidden Markov Model with an easy example. I'll also show you the underlying mathematics.
    #markovchain #datascience #statistics
    For more videos please subscribe -
    bit.ly/normaliz...
    Markov Chain series -
    • Markov Chains Clearly ...
    Facebook -
    / nerdywits
    Instagram -
    / normalizednerd
    Twitter -
    / normalized_nerd

Comments • 266

  • @cmfrtblynmb02
    @cmfrtblynmb02 1 year ago +206

    I got my PhD back in the 2000s. I wish material like this had existed back then. I struggled so much with a lot of basic concepts. I had to look at material written in the most extreme mathematical notation without gaining any intuition. You have done an amazing job explaining something I had a really hard time grasping. Well done.

    • @amrdel2730
      @amrdel2730 1 year ago +1

      Yeah, at PhD level it's almost always in use or linked to understanding some machine learning theory. I'm doing it now.

    • @kurghen988
      @kurghen988 10 months ago +1

      I thought exactly the same!

    • @HeartDiamond_variedad
      @HeartDiamond_variedad 10 months ago +1

      There's no content like this in Spanish; we have to turn to videos in English.

    • @stryker2322
      @stryker2322 4 months ago

      Because of people like you, we are all able to go deep into artificial intelligence and simplify what we are watching right now.

    • @RkR2001
      @RkR2001 4 months ago

      Great teaching, Nerd. Where are you based now? I am a doctor from India and need your academic help.

  • @JVenom_
    @JVenom_ 1 year ago +4

    not me studying for my final in 6 hrs

  • @gracemerino3267
    @gracemerino3267 3 years ago +54

    Thank you for making these videos! I am starting my master's thesis on Markov Chains and these videos help me get a solid introduction to the material. Much easier to understand the books now!

    • @abdullahalsulieman2096
      @abdullahalsulieman2096 2 years ago

      That is great. How is it so far? I want to apply it to predict system reliability. Not sure how complicated it is. Could you advise please?

  • @keithmartinkinyua2067
    @keithmartinkinyua2067 2 years ago +1

    This video is good, but it left me hanging. I was expecting you to calculate the probability at the end.

  • @noame
    @noame 1 year ago +5

    Thank you for the clarity of the explanation. Why did you neglect the denominator P(Y)? How can we calculate it? I assume the correct arg max should take the denominator P(Y) into consideration.

    • @Ujjayanroy
      @Ujjayanroy 8 months ago +1

      We are taking the argmax of a function with X as the variables, so P(Y) doesn't matter because of the argmax. You can refer to Bayes' theorem for maximum likelihood; they always do the same thing.

  • @nad4153
    @nad4153 3 years ago +5

    Nice video! But what happens to P(Y) at 8:30 in the final formula? Why does it disappear?

    • @papaglory01
      @papaglory01 1 year ago +1

      We want to find the X_1, ..., X_n that gives us the maximum value. Note that P(Y) does not depend on the Xs and is therefore a constant. A constant does not change the Xs that give us the maximum.

  • @ivanportnoy4941
    @ivanportnoy4941 3 years ago +9

    Loved it! I am looking forward to (maybe) seeing a video on the Markov Chain Monte Carlo (MCMC) algorithms. Best regards!

  • @sreelatharajesh2365
    @sreelatharajesh2365 1 year ago +2

    Wonderful video. Amazing explanation. Please explain why P(Y) is neglected. Or is it considered to be 1?

    • @rodrigodiana1774
      @rodrigodiana1774 12 days ago +1

      arg max is computed by varying X, so we can neglect P(Y) because it's not varying on each iteration and will not change the final result

  • @tirateehoffenberg8824
    @tirateehoffenberg8824 2 months ago +1

    I realized that this video did not provide the Python script, so I made one for y'all to try!
    Weather is represented by:
    0 = Rainy
    1 = Cloudy
    2 = Sunny
    Mood is represented by:
    0 = Sad
    1 = Happy
    And here's the code to try out and experiment :D
    import numpy as np
    import itertools

    def calculate_single_sequence(X, Y, state, Obsv, stationary_dist):
        temp_value = 0
        for i in range(len(X)):
            if i == 0:
                temp_value = stationary_dist[X[i]] * Obsv[X[i]][Y[i]]
            else:
                temp_value *= (state[X[i-1]][X[i]] * Obsv[X[i]][Y[i]])
        return temp_value

    def main():
        weather_state = np.array([[0.5, 0.3, 0.2],
                                  [0.4, 0.2, 0.4],
                                  [0.0, 0.3, 0.7]])

        mood_observation = np.array([[0.9, 0.1],
                                     [0.6, 0.4],
                                     [0.2, 0.8]])

        stationary_dist = np.array([0.218, 0.273, 0.509])

        # Happy, Happy, Sad
        mood_pattern = [1, 1, 0]

        # Generate all weather sequences of length 3 (since mood_pattern is of length 3)
        weather_permutations = list(itertools.product([0, 1, 2], repeat=len(mood_pattern)))

        all_probabilities = []
        for weather_pattern in weather_permutations:
            value = calculate_single_sequence(weather_pattern, mood_pattern, weather_state, mood_observation, stationary_dist)
            all_probabilities.append(value)

        # Find the max index and corresponding weather sequence
        max_index = np.argmax(all_probabilities)
        highest_prob_weather_sequence = weather_permutations[max_index]
        print(f"Highest Probability Value: {all_probabilities[max_index]}")
        print(f"Weather Sequence with Highest Probability: {highest_prob_weather_sequence}")

    if __name__ == "__main__":
        main()
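
    (A quick hand check, not part of the original comment: with the matrices above, the best sequence starts sunny, stays sunny, then turns cloudy, giving 0.509 × 0.8 × 0.7 × 0.8 × 0.3 × 0.6 ≈ 0.041. So, assuming the matrices are used unchanged, the script should print a highest probability of roughly 0.041 for the sequence (2, 2, 1), i.e. Sunny, Sunny, Cloudy.)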

  • @muneshchauhan
    @muneshchauhan 2 years ago +7

    Many thanks for the beautiful visualization and summary of the Markov model. It was effortless to understand. It may require a little revision, but it is comprehensible with ease. 🙂

  • @zoegorman8233
    @zoegorman8233 3 years ago +6

    Super entertaining videos helping me with my Oxford master's thesis. Study night or movie night? Plus he has an awesome accent :-)

  • @HKwak
    @HKwak 3 years ago +11

    Thanks for the videos. I am majoring in Data Science, and videos like this obviously help enormously compared to reading texts. Very intuitive and visual. I don't think I will forget the weather signs you showed us today.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      That's the goal 😉

    • @Portfolio-qb7et
      @Portfolio-qb7et 1 year ago

      @@NormalizedNerd Can you briefly explain the steps involved in finding the probability of a sunny day? I really don't understand.

  • @mika-ellajarapa5646
    @mika-ellajarapa5646 2 years ago +1

    Hello! Please explain conditional random fields :( Thank you.

  • @mayurdeo627
    @mayurdeo627 3 years ago +10

    Amazing job. This really helps if you are preparing for interviews, and want a quick revision. Thank you for doing this.

  • @pradipkumardas2414
    @pradipkumardas2414 2 years ago +32

    I am 60 years old. I am learning how to simplify a complex subject. You are a great teacher! God bless you.

    • @NormalizedNerd
      @NormalizedNerd  2 years ago +5

      Thanks a lot sir 🙏
      I am very flattered.

    • @BhargavSripada
      @BhargavSripada 1 year ago

      didnt ask

    • @mr.nobody125
      @mr.nobody125 1 year ago +8

      @@BhargavSripada and we certainly didn't ask for your opinion

  • @Mosil0
    @Mosil0 2 years ago +20

    Hey, thanks a lot for making these!
    One suggestion if you don't mind: you could avoid using red and green (especially those particular shades you used) as contrasting colors, given that they're close to indistinguishable to about 8% of males. Basically any other combination is easier to tell apart, e.g. either of those colors with blue.
    Just a minor quibble, the videos are otherwise very good!

    • @NormalizedNerd
      @NormalizedNerd  2 years ago +10

      Thanks a lot for pointing this out. Definitely will keep this in mind.

  • @IndianDefenseAnalysis
    @IndianDefenseAnalysis 1 year ago +2

    What a channel. I have never come across any data science channel like yours. You are doing fantastic work. Love your videos and am going through them ❤

  • @khalidshaikh7075
    @khalidshaikh7075 3 years ago +1

    Haha nicely copied 3blue1brown

  • @BasilAltaie
    @BasilAltaie 3 years ago

    Where can I find the software you are using for mathematical visualization? Please advise.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      It's an open source python library called Manim.

  • @Garrick645
    @Garrick645 1 month ago +1

    As soon as the emojis left, the video went over my head.

  • @subramanyak6187
    @subramanyak6187 3 years ago +1

    Can anyone please explain how he got P(X1 = ☀)?

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      I've explained this in the 1st and 3rd videos:
      ruclips.net/p/PLM8wYQRetTxBkdvBtz-gw8b9lcVkdXQKV

  • @Louisssy
    @Louisssy 3 years ago +5

    Really cool explanation! Can you also explain why P(Y) is ignored?

    • @NormalizedNerd
      @NormalizedNerd  3 years ago +10

      Because of two reasons...
      1. It's often hard to compute P(Y)
      2. To maximize that expression we only need to maximize the numerator (depends on X). Note that P(Y) doesn't depend on X.
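
      A tiny sketch of point 2 (not from the video; the numbers are made up): dividing every candidate's score by the same constant P(Y) cannot change which candidate wins the argmax.

      import numpy as np
      scores = np.array([0.012, 0.041, 0.007])   # stand-ins for P(Y|X)P(X) of three candidate sequences
      p_y = scores.sum()                         # P(Y) is one normalizing constant shared by every candidate
      print(np.argmax(scores), np.argmax(scores / p_y))  # prints "1 1": same winner either way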

    • @ts-ny6mx
      @ts-ny6mx 3 years ago

      @@NormalizedNerd Thank you! I had the same concern and your explanation makes sense!

    • @mariamichela
      @mariamichela 2 years ago

      @@NormalizedNerd 1. Can't we just compute P(Y) as P(Y|X1) x P(X1) + P(Y|X2) x P(X2) + P(Y|X3) x P(X3)?
      2. True, I agree. Since you didn't say it in the video, I was just wondering where P(Y) disappeared to, and didn't stop to think that the max was actually over X.

  • @songweimai6411
    @songweimai6411 1 year ago +2

    Thank you, I really appreciate your work. Watching this video made me realize that my professor in class is not a good teacher.

  • @artemiosantiagopadillarobl4280
    @artemiosantiagopadillarobl4280 3 years ago +5

    Great videos, keep it up! :)
    It would be nice to have a video about MCMC (Markov chain Monte Carlo) and the Metropolis-Hastings algorithm.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      Great suggestion.

    • @nonconsensualopinion
      @nonconsensualopinion 3 years ago +1

      @@NormalizedNerd I'm on the edge of subscribing. A video on MCMC would convince me to subscribe and never leave!

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      @@nonconsensualopinion 😂😂...Let's see what comes next

  • @aramisfarias5316
    @aramisfarias5316 2 years ago +5

    After going through your Markov chains series, you my friend got yourself a new subscriber! Great work. Your channel deserves to grow!

  • @girishthatte
    @girishthatte 3 years ago +4

    Awesome explanation! Loved the way you explained the math used for the calculations!

  • @gregs7809
    @gregs7809 3 years ago +6

    A really well laid-out video. Looking forward to watching more.

  • @marcovieira8356
    @marcovieira8356 5 days ago

    Thank you for the vid; it is probably the most understandable arg max explanation available around. But something remains unclear to me. You said at 09:30 that one Markov property is that X_i depends only on X_{i-1}, but the Markov property I know is the opposite: X_i is independent of X_{i-1} (the future does not depend on the past, just on the current state). Where am I missing the point?

  • @avishitaneja3839
    @avishitaneja3839 2 years ago +1

    Can you please share the Python code for maximising the probability?

  • @welilehlope7949
    @welilehlope7949 3 years ago +2

    I didn't get why we drop the denominator when calculating the argmax at the end of the video.

    • @JulioMartinez-hy2mu
      @JulioMartinez-hy2mu 2 years ago

      The denominator is a normalization term that divides all the results, i.e., it's just a scaling factor that does not change the relationship (greater/less than) among all the likelihoods. EXAMPLE: max{2/3, 4/3, 1/3} is the second term; now identify 3 as the denominator, i.e., P(Y) = 3, and compute again without P(Y): max{2, 4, 1}. You pick the second one again! That is, P(Y) doesn't change the choice of the term that actually represents the maximum likelihood. Not to mention that it's very convenient to eliminate P(Y) from the calculations, because it is a marginal probability whose computation adds unnecessary computing time. Hope this helps!

  • @freenrg888
    @freenrg888 22 days ago

    Beautiful! Thank you.
    Question: in the final formula, "arg max(over X) Prod P(Yi | Xi) P(Xi | Xi-1)" ...
    We have a product term P(X1 | X0) that assumes there is an X0 value. However, there is no X0. Don't we need to replace this term with a different expression that does not rely on X0?

  • @Cookies4125
    @Cookies4125 2 years ago +2

    Thanks for the explanation! You went through the math of how to simplify \argmax P(X=X_1,\cdots,X_n\mid Y=Y_1,\cdots,Y_n), but how do you actually compute the argmax once you've done the simplification? There must be a better way than a brute-force search through all combinations of values for X_1,\cdots,X_n, right?

  • @rustemdevletov2176
    @rustemdevletov2176 3 years ago +1

    I don't understand how you got the values of the stationary distribution. I'm trying to calculate them myself and getting totally different numbers.

  • @RkR2001
    @RkR2001 4 months ago

    Great teaching, Nerd. I am a doctor from India and need your help in deriving heart transplant models (MGM Hospital, Chennai). Please share your email.

  • @josippardon8933
    @josippardon8933 1 year ago +1

    Great! Just great! I really don't understand why most professors at colleges hate this way of explaining things. They always choose the standard "let's be super formal and use super formal mathematical notation" approach. Yes, it is important to learn the formal mathematics, but why not combine the formal and informal approaches and put them together in a textbook?

  • @shubhpachchigar1457
    @shubhpachchigar1457 3 years ago +2

    It's 3b1b for ML! Is this different from the Viterbi algorithm?

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      Yes, actually I've done the calculations naively to explain things. The efficient approach (like Viterbi) will be presented in future videos.

  • @Paranorman2049
    @Paranorman2049 2 months ago

    One question though: why is the probability of the observed sequence, P(Y), neglected?

  • @tspiker03
    @tspiker03 1 month ago

    Thrown off by seeing “clearly explained,” but not hearing a guitar and a monotone singing voice!!

  • @sriramsridhara1763
    @sriramsridhara1763 23 days ago

    How did you compute the stationary distribution for this chain, which is not irreducible? The sunny-to-rainy probability is zero!

  • @MERIEMELBATOULEDDAIF
    @MERIEMELBATOULEDDAIF 5 months ago +1

    Hi, please, I just want to know the tool you create those examples with. It's urgent, save me!

  • @mehmetkemalozdemir7311
    @mehmetkemalozdemir7311 8 months ago

    At 8:25, P(X) does not look right to me, as we should have P(X) = P(X_1) * Π_{i=2..n} P(X_i | X_{i-1}).

  • @Hangglide
    @Hangglide 4 months ago

    Hello author from the past! Your video is really helpful! Thank you!

  • @xd100josesanchez2
    @xd100josesanchez2 1 year ago +1

    Please add Spanish subtitles; you would reach more people. Your videos are really good.

  • @JackBlack-qn7us
    @JackBlack-qn7us 4 months ago

    1/3 of my course dropped out cuz of this spaghetti
    I do not blame them, however, I do like to eat

  • @Ujjayanroy
    @Ujjayanroy 8 months ago

    I was expecting a handwritten final answer to the problem... but it seems I guessed right! It can't be solved without a computer.

  • @xiaoruidu8603
    @xiaoruidu8603 3 years ago +3

    please keep updating~ you are doing an amazing job~~.

  • @dheerajmakhija9752
    @dheerajmakhija9752 9 months ago +1

    A very rare topic explained in the most precise manner. Very helpful 👍

  • @shivkrishnajaiswal8394
    @shivkrishnajaiswal8394 2 years ago +1

    good one

  • @siphamandlamazibuko1424
    @siphamandlamazibuko1424 7 months ago

    Can someone please explain where the P(Y) in the denominator went when the products were substituted into Bayes' theorem?

  • @EJSM
    @EJSM 1 year ago

    Could you show your script? I can get the 4% probability.

  • @SeiroosFardipour-sy3sh
    @SeiroosFardipour-sy3sh 2 months ago

    But probability is at the heart of the nature of binary code; the way that zero goes to seven follows a probability path.

  • @debasiskar4662
    @debasiskar4662 2 years ago +2

    You explain things nicely. I would request you to make videos on advanced stochastic processes like semi-Markov processes, martingales, etc.

  • @MrCarloshvieira2
    @MrCarloshvieira2 2 years ago

    Why can you neglect the denominator P(Y)? I don't understand it.

  • @tarazdxs
    @tarazdxs 4 months ago

    Thanks for the video. Is it possible to access the Python code you wrote for this problem?

  • @ITTOMC
    @ITTOMC 1 year ago +1

    Simply wonderful. Keep up your excellent work. Really really well done!

  • @ViralPanchal97
    @ViralPanchal97 5 months ago

    Thanks a ton, I wish my professors from Monash Uni taught this way.

  • @adaelasm6467
    @adaelasm6467 2 years ago +2

    This might be the fastest I've gone from never having used a concept to totally grokking it. Very well explained.

  • @mienisko
    @mienisko 1 year ago +1

    I'm just going through the Markov chain playlist, and because of its quality I'm going to subscribe to this channel. Great material!

  • @Extra-jv5xr
    @Extra-jv5xr 6 months ago

    Is this related to the Viterbi algorithm? Could you make a video on that?

  • @aadityasawant5437
    @aadityasawant5437 1 year ago

    Could someone explain why he neglected the denominator?

  • @vivekbhat720
    @vivekbhat720 3 years ago +1

    I have some doubts. Since the Markov chain is hidden, we won't know the transition matrix, right?
    We'll only know the emission matrix.
    P(Yi | Xi) can be found from the emission matrix, but how do we find P(Xi | Xi-1)?

    • @NormalizedNerd
      @NormalizedNerd  3 years ago +5

      That's a great question! Let me clarify it. In this video I've assumed that our HMM is already trained i.e. we know both transition (T) and emission (E) matrix. T & E are the parameters of our model that are learnt during the training. In the future I shall make a video about training HMMs.

  • @rishabhkumar2741
    @rishabhkumar2741 1 year ago +1

    Today is my exam and I am here :)

  • @rohanthakrar7599
    @rohanthakrar7599 1 year ago

    Can you explain the calculation of P(X|Y), the last step of the video, when you substitute in the products of P(Yi|Xi) and P(Xi|Xi-1)? Where does the P(Y) in the denominator go? Thanks

    • @Periareion
      @Periareion 1 year ago +1

      Howdy!
      I think of it like this: P(Y) is a constant, and doesn't affect which sequence X has the highest probability of occurring. In other words: since every term gets divided by P(Y), we can just ignore it.
      Perhaps he could've made that a little clearer in the video.
      Cheers!

  • @MuhammadDanyalKhan
    @MuhammadDanyalKhan 2 years ago

    Professor explaining the subject :/ Me trying to study on my own ??? Indian guy on youtube explaining the subject :)

  • @aveenperera3128
    @aveenperera3128 3 years ago +1

    Please do more examples for hidden Markov chains.

  • @how_about_naw
    @how_about_naw 8 months ago +1

    Oh look, a coffee button! Thanks!

    • @NormalizedNerd
      @NormalizedNerd  8 months ago +1

      Thanks so much!!

    • @how_about_naw
      @how_about_naw 8 months ago

      @@NormalizedNerd No, thank you, sir. This has been very informative and accessible. I will be checking out your other content in the future!

  • @வியத்தகுபுவி

    Sir, good evening. How do I calculate the wet/dry week formula
    and
    a Markov chain model for dry/wet weekly rainfall probability?
    Please help 🙏
    It's for my degree work.

  • @ramchillarege1658
    @ramchillarege1658 1 year ago +1

    Very nice and clear. Thank you.

  • @samSamsem
    @samSamsem 1 year ago

    This doesn't sound right to me. Why did you write p(Y|X) = p(y1|x1) * ... * p(yn|xn)? Wouldn't we also need something like p(y1|x2) and the rest? I mean, y1 can happen with all the Xs, so why aren't we considering the rest of the probabilities? Your previous three videos were great, but this one wasn't very intuitive. Also, at the beginning of the video, when you were calculating the probabilities, you ignored the number of steps to get to that state. Why?

  • @DJBankrupt
    @DJBankrupt 2 years ago

    Here's the weird thing: I got here because I was reading about AI using the hidden Markov model to predict stock market trends.

  • @sye9522
    @sye9522 5 months ago

    Amazing!! It really helps me understand the logic behind those scary HMM Python codes. Thank you.

  • @k.i.a7240
    @k.i.a7240 1 month ago

    Explained better than across the three AI university classes I have gone through. Simple, efficient. Thank you.

  • @jeyasheelarakkinimj6534
    @jeyasheelarakkinimj6534 6 months ago

    Please give the video link for finding the left eigenvector.

  • @VersatileAnthem
    @VersatileAnthem 2 years ago

    Superb explanations! That shows how in-depth your knowledge is!

  • @mediwise2474
    @mediwise2474 1 year ago

    Sir, please make a video on the finite element method.

  • @fireblast3339
    @fireblast3339 3 years ago +2

    i fucking love you dude, this really helps my thesis

  • @MrJoeDone
    @MrJoeDone 2 years ago

    Too bad you didn't upload your Python script.

  • @islemsahli737
    @islemsahli737 3 months ago

    Thank you, well explained.

  • @varshachinnammal
    @varshachinnammal 1 year ago

    Dear sir,
    Your explanation was very clear and understandable. It is full of mathematics: matrices, probability and so on. I'm from a science background without maths. I needed this for bioinformatics, but it is difficult to map nitrogenous bases onto these matrices and formulae. Will you explain it in a simpler way? It would be very helpful, sir 🥺

  • @codeking4585
    @codeking4585 3 months ago

    It reminds me of mia malkova

  • @bassami74
    @bassami74 2 years ago

    Thank you for sharing. Could you please explain how to implement an HMM for measuring earnings quality? Need your help 🙏🙏

  • @karannchew2534
    @karannchew2534 2 years ago

    Hi Normalized Nerd, and everyone reading this, I have a question, please. Can I use an HMM the other way round: find the mood sequence given the weather sequence? How?

  • @TariqulHasan-p2y
    @TariqulHasan-p2y 1 year ago

    Hi, thanks for the wonderful videos on Markov chains. I just want to know: how do you define the transition and emission probabilities? What should we do about unknown state probabilities? Regards

  • @brainnuke5450
    @brainnuke5450 3 years ago +2

    Don't stop what you are doing! It's amazing.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      Thanks!! :D

    • @asaha9479
      @asaha9479 3 years ago

      From the accent you are a Bengali, but are you from ISI? Great video, keep going.

    • @NormalizedNerd
      @NormalizedNerd  3 years ago

      @@asaha9479 Yup, I'm a Bong... but I don't study at ISI.

  • @itzmehaprilg
    @itzmehaprilg 1 year ago

    What's the answer here? I don't understand.

  • @dannymoore1530
    @dannymoore1530 4 months ago

    This was good! Thank you.

  • @24CARLOKB
    @24CARLOKB 2 years ago

    Great video! Just a little lost on where you get P(sad or happy | weather), which I think are the emission probabilities? Thanks!

  • @xherreraoc
    @xherreraoc 15 days ago

    This series of videos has been incredibly useful for me to write my Masters thesis. Thank you so much.

  • @lucasqwert1
    @lucasqwert1 1 year ago

    How do we calculate the stationary distribution of the first state? I watched your previous videos but still can't calculate it! Thanks for answering!

  • @tetamusha
    @tetamusha 1 year ago

    I got a little confused with the two HMM videos. I thought the second video would solve the argmax expression presented at the end of the first one, but the algorithm that solves this expression is the Viterbi algorithm, not the Forward algorithm from the second video. Just a heads-up to those who got a little lost like me.
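
    For the curious, a minimal Viterbi sketch (not from the video; the function and variable names are illustrative), reusing the weather/mood matrices from the Python script shared in an earlier comment. It recovers the same Sunny, Sunny, Cloudy answer without enumerating every sequence:

    import numpy as np

    def viterbi(pi, T, E, obs):
        # pi[i] = P(X_1 = i); T[i][j] = P(X_t = j | X_{t-1} = i); E[i][k] = P(Y_t = k | X_t = i)
        n, S = len(obs), len(pi)
        delta = np.zeros((n, S))            # best probability of a path ending in each state
        back = np.zeros((n, S), dtype=int)  # back-pointers to recover that path
        delta[0] = pi * E[:, obs[0]]
        for t in range(1, n):
            scores = delta[t - 1][:, None] * T        # scores[i, j]: best path into state i, then move to j
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) * E[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(n - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1], float(delta[-1].max())

    # 0 = Rainy, 1 = Cloudy, 2 = Sunny; observations 0 = Sad, 1 = Happy
    T = np.array([[0.5, 0.3, 0.2], [0.4, 0.2, 0.4], [0.0, 0.3, 0.7]])
    E = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
    pi = np.array([0.218, 0.273, 0.509])
    print(viterbi(pi, T, E, [1, 1, 0]))  # Happy, Happy, Sad -> ([2, 2, 1], ≈0.041)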

  • @utkarshkohli9275
    @utkarshkohli9275 2 years ago

    If I were from the future, I would take my question paper and prepare from it :(=)

  • @donmiguel4848
    @donmiguel4848 3 years ago

    Shouldn't it be called a hidden-state Markov model? Because, as I understand it, the transition matrix of the model, which is its key feature, is not hidden.

  • @omicron296
    @omicron296 1 year ago

    Great! Thanks!

  • @atifshakeel1982
    @atifshakeel1982 1 year ago

    Hi, an excellent and very easy way of explaining complex mathematical terms. Did you make these slides in PowerPoint or some other tool? Regards,

  • @tatvamkrishnam6691
    @tatvamkrishnam6691 2 years ago

    Excellent! I skipped the 4th part of this magnificent Markov series. It took roughly 3 hours to verify things at moments and convince myself. HIT MOVIE!!

  • @Slappydafrog_
    @Slappydafrog_ 5 months ago

    Hello from the future! Thank you for sharing your insights!

  • @tyrannicalguy7262
    @tyrannicalguy7262 1 year ago

    "Hello People From The Future!" that was very thoughtful

  • @51_ramyasree43
    @51_ramyasree43 11 months ago

    Can you share the material?