Markov Chains: Data Science Basics

  • Published: 6 Oct 2024

Comments • 95

  • @RD-zq7ky
    @RD-zq7ky 4 years ago +27

    Yes, I would like more videos on Markov Chains. Thank you for your videos.

  • @diegososa5280
    @diegososa5280 4 years ago +41

    Brilliant explanation, I cannot thank you enough. Markov chains are so important, we easily get lost in linear thinking, Markov helps us see probabilities differently. More videos on this topic would be highly appreciated.

    • @ritvikmath
      @ritvikmath  4 years ago +8

      Thank you for the kind words! Seeing the positive response, more Markov Chain videos will come soon

    • @redghost105
      @redghost105 2 years ago

      @@ritvikmath Would it be possible to create a video on Markov chains and their implementation in finance? Much appreciated, these videos are invaluable! Thank you

  • @thegoodgorilla287
    @thegoodgorilla287 1 day ago

    Thank you sir! I finally understood the Markov Chain concept now!!

  • @stephanieb.3873
    @stephanieb.3873 3 months ago

    Sir, I think I am in love with you. How is it possible that you explain everything so simply and clearly, and my teacher sucks at making me understand what a Markov chain is???? Why aren't university teachers taught how to teach and explain the material in such simplicity???? Thank you for your explanations. You have helped me through linear algebra. THANK YOU!!!!!!!! You were born to teach.

  • @ingenierocivilizado728
    @ingenierocivilizado728 7 months ago

    Incredibly useful! You manage to explain difficult concepts in a straightforward and easy way. Thank you for these videos!!

  • @ramankutty1245
    @ramankutty1245 3 years ago +3

    You have a wonderful knack for explaining concepts. Thank you

  • @nikkatalnikov
    @nikkatalnikov 4 years ago +8

    A brilliant intro, thank you!
    Just a small addition: the steady-state vector is (unsurprisingly) an eigenvector of the transition matrix, with a corresponding eigenvalue of 1 (once again unsurprising, since the matrix is row-stochastic).
    A video on Markov Chain Monte Carlo would be nice.

    • @ritvikmath
      @ritvikmath  4 years ago +2

      thanks! And a great suggestion!
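The eigenvector point above can be checked numerically. A minimal sketch, assuming the transition probabilities from the weather example discussed elsewhere in these comments (sunny→sunny 0.3, sunny→cloudy 0.7, cloudy→sunny 0.5, cloudy→cloudy 0.5):

```python
import numpy as np

# Transition matrix: rows are "from" states [sunny, cloudy],
# columns are "to" states (probabilities assumed from the video's example).
P = np.array([[0.3, 0.7],
              [0.5, 0.5]])

# The steady state pi satisfies pi @ P = pi, i.e. pi is a left eigenvector
# of P with eigenvalue 1 -- equivalently, an eigenvector of P.T.
vals, vecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(vals - 1.0))  # pick the eigenvalue closest to 1
pi = np.real(vecs[:, idx])
pi = pi / pi.sum()                   # normalize so the probabilities sum to 1

print(pi)  # ≈ [0.4167, 0.5833], i.e. [5/12, 7/12]
```

Dividing by the sum also fixes the arbitrary sign that `eig` may return for the eigenvector.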

  • @jake5camposano
    @jake5camposano 1 year ago

    This is the first video where I actually understood the Markov chain. Thank you! I watched the ads to support you. Thanks a lot!

  • @CleverSmart123
    @CleverSmart123 11 months ago

    This is so brilliantly well explained, thank you. I was not getting it at all before.

  • @minrongwang4038
    @minrongwang4038 1 year ago +1

    I love your tutorial. It is very helpful. Thank you.

  • @tatianapashkova7275
    @tatianapashkova7275 1 year ago +1

    Thank you for a great explanation!

  • @xinyuan6649
    @xinyuan6649 2 years ago +1

    Thanks so much as always for the great video 🫰 It also feels very philosophical, like convincing someone not to dwell on the past or be anxious about the future: “The future is independent of the past given the present.”

  • @user-um4di5qm8p
    @user-um4di5qm8p 1 year ago +1

    Awesome! Thank you for this!

  • @anaibrahim4361
    @anaibrahim4361 3 years ago

    That example you gave at the end: I had been asking people one by one, but none gave me a direct, simple answer like the one in this video. Thanks a lot!
    You deserve the subs and likes.
    Keep up the good work.

  • @Blu3B33r
    @Blu3B33r 4 months ago

    Your explanations are so good

  • @dennismwangi3573
    @dennismwangi3573 2 years ago +1

    Excellently explained.

  • @l2edz
    @l2edz 4 years ago +2

    Thank you for the clear explanation! Have a video request for Conditional Random Fields

  • @dmno45
    @dmno45 1 year ago

    You are FANTASTIC at teaching.

  • @beyerch
    @beyerch 3 years ago +38

    How exactly is Sunny W2 0.44? If there is a 0.3 chance of the day after a sunny day also being a sunny day, how did your probability INCREASE for W2? Seems there is either an error here or something was left out of this explanation?

    • @ritvikmath
      @ritvikmath  3 years ago +95

      great question. The 0.44 is the probability of day 2 being sunny. We know the day before that (day 1) had to be sunny or cloudy. If day 1 was sunny (as you noted this has a 0.3 chance) then there is a 0.3 chance that day 2 will be sunny. So multiplying those, we get a 0.09 chance that day 2 is sunny *and* day 1 was sunny. Now, the missing part of the puzzle is that day 1 could have also been cloudy (with a 0.7 chance). If day 1 was cloudy, there is a 0.5 chance that day 2 will be sunny. Multiplying those we get 0.35. Adding the 0.09 from before with the 0.35 we get 0.44.
      In a nutshell, probability of day 2 being sunny is computed considering *both* cases where the previous day is sunny *and* previous day is cloudy.

    • @beyerch
      @beyerch 3 years ago +14

      My attempt to break down in English, let me know if this is accurate:
      Initial State "t"
      ---------
      W0 is known. (SUNNY) P(Sunny) = 1 / P(Cloudy) = 0
      First Time Step "t+1"
      -----------------------
      W1(Sunny) = Probability that it stays Sunny (1 * .3) + Probability that it was cloudy and transitioned (0 * .7) = .3 + 0 = .3
      W1(Cloudy) = Probability that it stays Cloudy (0 * .5) + Probability that it was sunny and transitioned (1 * .7) = 0 + .7 = .7
      Next Time Step "t+2"
      -----------------------
      W2(Sunny) = Probability that it stays Sunny from prior "t" sunny (.3 * .3) + Probability that it was cloudy @ prior "t" and transitioned (.7 * .5) = .09 + .35 = .44
      W2(Cloudy) = Probability that it stays Cloudy from prior "t" cloudy (.7 * .5) + Probability that it was sunny @ prior "t" and transitioned (.3 * .7) = .35 + .21 = .56

      I got hung up a bit when you said the only thing that matters is prior state. Since the starting point was known (W0), I was ignoring "cloudy" possibilities for the future "t" calculations. Ooops.

    • @MrAstonmartin78
      @MrAstonmartin78 4 months ago

      @@ritvikmath Now it's clear... cross-transition from both possibilities... thx for an explanation
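The cross-transition arithmetic in this thread can be sketched in a few lines; the matrix entries below are taken from the numbers quoted above (a minimal sketch, not from the video itself):

```python
import numpy as np

# Rows are "from" states, columns are "to" states, in the order
# [sunny, cloudy]; probabilities as quoted in the thread.
P = np.array([[0.3, 0.7],
              [0.5, 0.5]])

w0 = np.array([1.0, 0.0])  # day 0 is known to be sunny
w1 = w0 @ P                # [0.3, 0.7]
w2 = w1 @ P                # sunny: 0.3*0.3 + 0.7*0.5 = 0.44; cloudy: 0.3*0.7 + 0.7*0.5 = 0.56

print(w1)  # [0.3 0.7]
print(w2)  # [0.44 0.56]
```

Each step sums over *both* possible previous states, which is exactly the "cross-transition" point made in the replies.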

  • @anadianBaconator
    @anadianBaconator 4 years ago +3

    I would like more videos on Markov Chains and also on Metropolis-Hastings algorithm

  • @NicolaevM
    @NicolaevM 3 years ago +4

    Great explanation!! It would be great if you could overview DSGE models (often used in econometrics), they also have steady state.

  • @eduardocruces3959
    @eduardocruces3959 3 years ago +1

    Nice job! clear and simple explanation.

  • @JonathanFraser-i7h
    @JonathanFraser-i7h 3 months ago

    I think it's worth pointing out that the assumption that you only ever depend on the previous state is a "weak" assumption. You can get around it by expanding your state space to include the previous value. The bigger limitation on Markov chains is that the transition probabilities do not change over time, so the update can be written as a linear expression x_t = A*x_{t-1}, which is the heart of the simplification here.
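The state-expansion trick mentioned above can be sketched as follows; the pair encoding is illustrative, not from the video:

```python
from itertools import product

# An order-2 weather chain (tomorrow depends on the last TWO days) can be
# recast as an ordinary Markov chain whose states are (yesterday, today) pairs.
weather = ["sunny", "cloudy"]
pairs = list(product(weather, repeat=2))  # 4 expanded states
print(pairs)

def step(state, tomorrow):
    """Transition of the expanded state: (yesterday, today) -> (today, tomorrow).

    The new pair depends only on the current pair, so the expanded chain
    satisfies the Markov property again.
    """
    _yesterday, today = state
    return (today, tomorrow)

print(step(("sunny", "cloudy"), "sunny"))  # ('cloudy', 'sunny')
```

The cost of this trick is a larger state space: remembering k days of history over n weather types needs n^k expanded states.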

  • @sumers9396
    @sumers9396 2 years ago +1

    well explained, thanks a lot!

  • @DanielLopez-mk9ih
    @DanielLopez-mk9ih 4 years ago +10

    Good video. I did not get how you got the 0.44 and 0.56. And I suppose the 0.30 was only an assumption, right? Thanks for the video; if you can explain that, it would be awesome.

    • @Phil-oy2mr
      @Phil-oy2mr 4 years ago +11

      0.44 = (0.5*0.7)+(0.3*0.3)

    • @DanielLopez-mk9ih
      @DanielLopez-mk9ih 4 years ago +2

      Phil E Thanks!

    • @ritvikmath
      @ritvikmath  4 years ago +12

      Thanks Daniel for the question and Phil for the reply! I likely should have explained that a bit more.

  • @donatsu8
    @donatsu8 3 years ago +1

    Great job explaining!

  • @joycwang
    @joycwang 2 years ago

    great explanation. Would like to explore how Monopoly is a Markov chain!

  • @เกี๊ยวอย่างเดียว

    Thanks for the video!! Awesome video! And I would like to see more videos about Markov Chains on finance-related topics.

    • @ritvikmath
      @ritvikmath  4 years ago +3

      Thanks! I'm planning to make a Markov Chains for stock price prediction video soon.

  • @蔡小宣-l8e
    @蔡小宣-l8e 1 year ago

    Thank you! 谢谢! (Thanks!)

  • @lorezampadeferro8641
    @lorezampadeferro8641 3 years ago

    Fantastic explanation

  • @dalmacyali1905
    @dalmacyali1905 2 years ago

    Bro! Thank you very much!

  • @teegnas
    @teegnas 4 years ago +3

    It would be great if you could make a video on the Markov decision process in the context of reinforcement learning.

    • @ritvikmath
      @ritvikmath  4 years ago +1

      Good suggestion! I'll look into it

    • @teegnas
      @teegnas 4 years ago

      @@ritvikmath thanks!

  • @kemrank8739
    @kemrank8739 1 year ago

    Thanks for explaining. As I noticed, there is a mistake in the pi equation: it should be pi1 x 0.3 + pi2 x 0.7 (instead of 0.5), since the probability that it will be cloudy after sunny is 0.7. Moreover, in the steady state above this equation we can see what I have written. If I'm mistaken, my apologies; please correct me. Thanks
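For readers weighing this comment, the balance equations can be checked numerically; in the sketch below (transition matrix assumed from the video's weather example) the 0.5 term is the cloudy→sunny probability, and the candidate steady state (5/12, 7/12) does satisfy pi = pi·P:

```python
import numpy as np

# Transition matrix: rows are "from" states [sunny, cloudy].
P = np.array([[0.3, 0.7],
              [0.5, 0.5]])

pi = np.array([5/12, 7/12])  # candidate steady state

# Balance for "sunny": pi_sunny = pi_sunny*0.3 + pi_cloudy*0.5,
# where 0.5 = P(sunny tomorrow | cloudy today).
print(pi @ P)                # matches pi (up to floating point)
print(np.allclose(pi @ P, pi))
```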

  • @kosalamanojeewa
    @kosalamanojeewa 1 year ago

    easy to understand 👍

  • @dlee4736
    @dlee4736 3 years ago

    Awesome. You deserve more views !

  • @calebleung6761
    @calebleung6761 7 months ago

    Can you talk about the difference between stationary distribution and invariant distribution?

  • @robertc6343
    @robertc6343 3 years ago +2

    Baltic Avenue? What city are you in? 😀 great video as always! 👍🏻

  • @mohamedr3w
    @mohamedr3w 1 year ago

    thanks!

  • @goncalocruz2206
    @goncalocruz2206 4 years ago +3

    Any chance you could do Markov Chain Monte Carlo (MCMC) methods?

  • @wendycastillo1756
    @wendycastillo1756 2 years ago +3

    Hello how did you calculate W2? Thank you!

  • @randalllionelkharkrang4047
    @randalllionelkharkrang4047 2 years ago +3

    How did you get 0.44 and 0.56 for W2?

  • @JohnJones-rp2wz
    @JohnJones-rp2wz 3 years ago

    Awesome

  • @Hagakure12e412rede
    @Hagakure12e412rede 3 years ago

    Damn dude, you are good at teaching

  • @oligneflix6798
    @oligneflix6798 2 years ago

    Please create a video explaining how to convert Markov chains to neural networks (RNNs)

  • @gigz54
    @gigz54 3 years ago +1

    Great video, thank you! The steady state explanation really helped. A comment and a question...
    I agree that the weather example is the go-to method others have used to explain what a Markov Chain is, but the Monopoly example seems much more intuitive for a basic explanation. Your next square depends only on your current square plus a dice roll, and the dice roll has a discrete pdf that is well covered in basic prob/stat courses. But with weather, the transition probabilities seem more hypothetical, and possibly oversimplified.
    Ironically, I will ignore the simple Monopoly scenario and focus on the weather for my question. How do you handle a transition from a more complex/multivariate current state? For example, in practice the probability of tomorrow being sunny may be different if we know that today was sunny and in August, as opposed to today being sunny and in April. Is it as simple as adding the necessary rows to the transition matrix? That seems like the simplest answer, but does it blow up the idea of having a steady state by creating a cycle?

    • @nathankrowitz3884
      @nathankrowitz3884 2 years ago

      Well, I think it's a daily transition, so if you're trying to incorporate some sort of cyclicality, I think you need to think about it on a daily basis rather than monthly. E.g. day 1 of the year has a 100% probability of transitioning to day 2... up to 365. Or July 31 has a 100% probability of transitioning to August 1st... and onward. That would preserve the state.

  • @millaniaangela4147
    @millaniaangela4147 3 years ago +1

    Cool video thanks! Can you explain about Markov Switching Autoregressive model?

  • @ben7333
    @ben7333 2 years ago

    Can you do a video on Structural Equation Model?

  • @kamathpremitha8887
    @kamathpremitha8887 1 month ago

    Just a small clarification, the probability of W2 should be 0.58 for sunny and 0.50 for cloudy as per the Markov formula of (w2 = transition matrix * w1) matrix multiplication, right? correct me if I'm doing it wrong, please!

  • @bapireddy5790
    @bapireddy5790 4 years ago +3

    Can you cover Markov chains for time series.

    • @meanreversion1083
      @meanreversion1083 4 years ago +1

      Yep, I agree. Running through an example would be nice. And I would also like to see the math covered.

    • @ritvikmath
      @ritvikmath  4 years ago +1

      Yes! More Markov Chain videos to come!

  • @yulinliu850
    @yulinliu850 4 years ago +1

    Cool

  • @learn5081
    @learn5081 4 years ago +1

    hope to see more math under the hood. thanks

    • @ritvikmath
      @ritvikmath  4 years ago

      Thanks for the suggestion!

  • @sumitkumarpal3957
    @sumitkumarpal3957 3 years ago +1

    Just a comment: If we calculate using {S(t)} = [Transition Matrix] {S(t-1)} for W2 then we have to transpose the matrix shown at 3:33. Please correct me if I am wrong.

    • @whycurious6754
      @whycurious6754 2 years ago

      This had me confused for a while. It's either that or {S(t)} = {S(t-1)}[Transition Matrix], depending on how the transition matrix is defined.
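The two conventions discussed above can be checked side by side; a small sketch with the transition matrix assumed from the video's example:

```python
import numpy as np

# P[i, j] = P(next state j | current state i), states ordered [sunny, cloudy].
P = np.array([[0.3, 0.7],
              [0.5, 0.5]])

s = np.array([0.3, 0.7])  # distribution after day 1

# Row-vector convention: s_t = s_{t-1} @ P
row_next = s @ P

# Column-vector convention: s_t = P^T @ s_{t-1} -- the transpose noted above.
col_next = P.T @ s

print(row_next, col_next)  # both give [0.44, 0.56]
```

The conventions are equivalent; confusion only arises when a column-convention formula is applied to a row-convention matrix without transposing.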

  • @arc6fire
    @arc6fire 2 years ago

    Hi,
    I'm a little unsure about the reasoning used in the steady state example.
    The calculations presented show that *IF* the values converge, then these would be the values,
    but what is not shown is that these values necessarily converge.
    So yes, I understand that if these values converged to a number, these would be the numbers, but what isn't shown is that the Markov chain necessarily converges.
    Do all Markov chains necessarily converge to a steady state?
    P.S. Brilliant videos btw, watching a handful of them and they're very digestible!

    • @gmatsue84
      @gmatsue84 2 years ago

      If I'm not wrong, any finite Markov Chain will converge, because they all have recurrent states. In this case all states are recurrent, so they converge. Given a finite number of states, the transient ones (the ones which you can't go back to if you leave, which can be allocated into classes such as {1, 2}) will converge to 0, while the rest (the recurrent ones) will converge at some point.
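The convergence question can also be probed numerically. For this two-state chain (assumed from the video; it is finite, irreducible, and aperiodic, so convergence is guaranteed, whereas a chain that deterministically alternates states would never settle) both starting points reach the same limit:

```python
import numpy as np

P = np.array([[0.3, 0.7],
              [0.5, 0.5]])

limits = []
for start in ([1.0, 0.0], [0.0, 1.0]):  # start all-sunny, then all-cloudy
    w = np.array(start)
    for _ in range(50):                 # iterate the chain many steps
        w = w @ P
    limits.append(w)
    print(w)                            # both approach [5/12, 7/12]
```

The second eigenvalue of this matrix is -0.2, so the gap to the steady state shrinks by a factor of 5 each step, which is why 50 iterations are far more than enough here.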

  • @Phil-oy2mr
    @Phil-oy2mr 4 years ago +1

    Do markov chains relate to difference equations at all?

  • @altruist_AI
    @altruist_AI 3 years ago

    Your videos are amazing. I just wanted to know how statistical models differ from Kalman filters (EKF/UKF), since both are predicting states.

  • @nicoleluo6692
    @nicoleluo6692 1 year ago

    WOW. love you. btw, i have a UCLA shirt too 😁

  • @guidosalescalvano9862
    @guidosalescalvano9862 3 years ago

    So is it true that the steady state is an eigenvector of the transition matrix? Can there be multiple steady states given a sufficiently complicated transition matrix? Say you define such a steady state region as the region of state space that converges onto a particular steady state. Could you model a meta transition matrix that shows probabilities of the markov model transitioning between steady states regions?

  • @rohitnath5545
    @rohitnath5545 3 years ago

    Excellent video, but a silly doubt: how did we get the initial probabilities that are in the transition matrix?

    • @gmatsue84
      @gmatsue84 2 years ago

      Like any other: observation.

  • @abdelrahmanaltawil2219
    @abdelrahmanaltawil2219 3 years ago +1

    Hi, I have a question, and sorry for my weak probability fundamentals; I am quite confused about one thing.
    Does reaching the steady state mean that our one-step probability has changed? I mean, after infinitely many steps, if I were to ask myself what is the probability of having a sunny day tomorrow given today is cloudy, what would the answer be? If the answer were anything other than 0.5, is it okay for the Markov property not to be preserved? I mean, can we still call it a Markovian process, or has it become i.i.d.?
    Again, sorry, I am sure I mixed too many things up.

    • @fszhang9010
      @fszhang9010 3 years ago +1

      I think the steady state is a statistical number that indicates a probability over a long time span (infinitely many times, like you said). Just like if you flip a coin only 10 times, there might be a 0.6 chance you'll get heads and 0.4 tails, but after trying 1 million times the probability of heads/tails will eventually come to 0.5. However, 0.5 doesn't equal the initial P(heads) or P(tails); it's just a long-term trend computed at a different time dimension compared with a single-time computation. NOTE: I'm not sure if my thoughts are correct : )

    • @abdelrahmanaltawil2219
      @abdelrahmanaltawil2219 3 years ago

      @@fszhang9010 thank you very much for the reply

  • @transcendentpsych124
    @transcendentpsych124 1 year ago

    I don't get the assumptions. Are you taking it as given that the weather will change based on some pattern? I mean, the probability it will just stay sunny in Saudi Arabia is higher than it would be in Reykjavik.

  • @rajath1964
    @rajath1964 4 years ago

    Which are those books you referred to above that give similar examples?

  • @farzansoltani344
    @farzansoltani344 2 years ago

    Thanks for your videos... but this video has some problems: the voice and the video are not in sync.

  • @rakith
    @rakith 1 year ago

    It’s not sunny. So why’d you wear a cap?

  • @riusaddm
    @riusaddm 4 years ago +1

    Can you write a book already?

    • @ritvikmath
      @ritvikmath  4 years ago +1

      Hahaha not a bad idea :)

  • @Naturehack
    @Naturehack 2 years ago

    Maxwell's Demon Here
    1|0 = 3
    1 |transition| 0
    Markov Maxwell
    Feelings three states
    Positive |transition| negative
    Procedural programmed population
    First language update
    Species Growth Rights
    3rd planet from Sun
    Galileo Galilee
    380+ years
    Late
    Like your style
    Language sort ahead

  • @devonk298
    @devonk298 4 years ago

    You are seriously cute! Great instructor too. ty

  • @MrNitKap
    @MrNitKap 3 years ago

    Thanks. Learnt the basic concept of steady state. However, I can't help pointing out that 0.42 (5/12) is not 🙄 'point forty-two' (yes, saying 42% is fine), or it could just as well be 0.4166, 'point four thousand one hundred and sixty-six'... many sports commentators make such mistakes when reading stats like players' average scores... but I find it hard to accept it from a person dedicated to maths and science 👍...