Markov Chains Clearly Explained! Part - 1

  • Published: 24 Oct 2020
  • Let's understand Markov chains and their properties with an easy example. I've also discussed the equilibrium state in great detail.
    #markovchain #datascience #statistics
    For more videos please subscribe -
    bit.ly/normalizedNERD
    Markov Chain series -
    • Markov Chains Clearly ...
    Facebook -
    / nerdywits
    Instagram -
    / normalizednerd
    Twitter -
    / normalized_nerd

Comments • 631

  • @NormalizedNerd  3 years ago +202

    Since many of you are asking about the calculation of the left eigenvector (π)... Here are the equations:
    from πA = π:
    0.2x + 0.3y + 0.5z = x
    0.6x = y
    0.2x + 0.7y + 0.5z = z
    from π[1] + π[2] + π[3] = 1:
    x + y + z = 1
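For anyone who wants to check these by machine: a minimal NumPy sketch (the matrix A below is reconstructed from the equations in this comment) that solves πA = π together with the normalization constraint.

```python
import numpy as np

# Transition matrix implied by the equations above
# (rows are the current state, columns the next state).
A = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.0, 0.7],
              [0.5, 0.0, 0.5]])

# pi A = pi  <=>  (A^T - I) pi = 0. Stack the normalization row sum(pi) = 1
# underneath and solve the (consistent) overdetermined system by least squares.
n = A.shape[0]
M = np.vstack([A.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print(pi)  # ≈ [0.352, 0.211, 0.437] = [25/71, 15/71, 31/71]
```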

    • @arianakenzie4235  2 years ago +3

      Dude you should collaborate with @ahmadbazzi

    • @putraduha3176  2 years ago +2

      Thanks man, online school isn't really being nice to my brain

    • @dhruvsingla2212  1 year ago +3

      Hey, can you also tell how to code moving from one state to another based on probability? Like you did a random probability walk, how did the code decide which state to go to using probability.
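One common way to code the step this question asks about (a sketch, not the video's actual code; the state labels, their ordering, and the transition matrix are assumptions based on the video's example):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

states = ["burger", "pizza", "hotdog"]  # hypothetical label order
A = np.array([[0.2, 0.6, 0.2],          # row i: P(next state | current = i)
              [0.3, 0.0, 0.7],
              [0.5, 0.0, 0.5]])

def step(current):
    # Draw the next state index with probabilities given by the current row.
    return rng.choice(len(states), p=A[current])

state, walk = 0, [0]
for _ in range(10):
    state = step(state)
    walk.append(int(state))
print([states[i] for i in walk])
```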

    • @NormalizedNerd  1 year ago +2

      @@dhruvsingla2212 I think you are looking for this video: ruclips.net/video/G7FIQ9fXl6U/видео.html

    • @dhruvsingla2212  1 year ago

      @@NormalizedNerd Great, thanks 👍

  • @nshiba  3 years ago +287

    This is sooo easy to understand. It took me at least a month to learn this about 25 years back for my master's thesis work, when I first learnt this subject. Now I thought of revisiting the topic for my daughter's higher secondary project. 25 years have really brought a topic from master's level to secondary-school level, and months of learning down to a few minutes of a well-prepared video. Thanks to your channel, YouTube, the Internet and technology in general. 🙏

    • @NormalizedNerd  3 years ago +25

      Thanks a lot for sharing your experience. It really feels nice to read such comments. I'm glad to have a platform like this.

    • @Fresh_Career_Compass  1 year ago +8

      Which institute is teaching this at secondary level? Please provide a syllabus link. I am redesigning a syllabus.

    • @Nuur_Rajput  6 months ago +2

      ​@@Fresh_Career_Compass yea. I'm curious too

    • @_rd_kocaman  4 months ago +1

      Where the heck is this studied in high school?

    • @nshiba  4 months ago +1

      Sorry I missed all your comments. Markov chains is one of the IA (Internal Assessment) topics to choose from for HL (Higher Level) Mathematics in the IB (International Baccalaureate) Diploma programme (which is higher secondary school level, Year 11 and Year 12) in Singapore. You can Google the details with the above information.

  • @DawgFL  2 years ago +107

    Thanks dude. It takes a whole other level of intelligence to break down a concept like this so anyone can understand it. I'm learning Markov chains in class right now, and when the professor teaches it, it literally looks like an alien language to me. I almost broke down because I might fail the class, but I'm going to watch all your videos and they'll help me a lot.

  • @counter-thought2226  8 months ago +10

    This is a lifesaver. I started a stochastics class last week with an almost nonexistent background in probability. I was completely troubled at first but after watching this video and reading through some course material, I can actually understand the exercises. Thank you.

  • @angrybruce8262  3 years ago +335

    Mate, that is a good explanation! The only problem is that now I AM HUNGRY:)

    • @NormalizedNerd  3 years ago +9

      Haha XD

    • @69erthx1138  3 years ago

      After burgers, Will took Skulyer for pizza, then give her a night cap with his hot dog.

    • @2highbruh  2 years ago +1

      @@69erthx1138 oh, okay, good for him

    • @themathskompanyap4730  2 years ago

      Subscribe for more such Markov chain concepts friends. ruclips.net/video/bk3MjAC9QsY/видео.html

    • @muhammadihsan6645  2 years ago

      Woowww , human being human

  • @joemaxwell8361  3 years ago +67

    Got way more excited than I should have when I thought "hmm, that kinda looks like the eigenvectors..." AND THEN IT WAS.

  • @ishankaul9065  3 years ago +26

    Great explanation! A full series on the different types of Markov chains with explanations like this would be awesome.

    • @NormalizedNerd  3 years ago +3

      I'll try to work on this


  • @karlrombauts4909  1 year ago +4

    This is such a fantastic video. It makes all the concepts very easy to understand without skipping important technical details. Thank you for making such a great resource!

  • @willbutplural  1 year ago +12

    Wow great explanation that includes terminology, stationary states, and connections between adjacency matrices, directed graphs, and markov chains 👍 A+ thank you!

  • @gameboardgames  1 year ago +3

    This video was really well constructed and interesting, in equal measure to being informative! Thank you Mr Nerd!

  • @michaella5110  1 year ago +3

    you have no clue how much you helped a bunch of online MS Analytics students. Thank you so much!

  • @opencode69  10 months ago

    While I was trying to understand this, I avoided complex terms as best I could, but with this video I have no need to avoid them because of the thorough explanation. Typing this 2 years later, the "what's up people of the future" really got me.

  • @IshanBanerjee  2 years ago +3

    I was trying to understand Evolution algebras and for that I needed idea of Markov chains. Beautifully explained. Thank you so much.

  • @aromalas5713  2 years ago +2

    Omg man this is such a great explanation. Loved the presentation, the animation and everything about it. Keep going!

  • @dieserhugo2960  5 months ago +1

    Jeez, if my professor had introduced Markov chains like this instead of spending multiple lectures talking about Google's page-rank system without any goal in mind, I would've saved myself a lot of confusion. Thank you!

  • @karannchew2534  2 years ago +89

    Note for my future revision.
    A Markov chain models a system that changes its state.
    One important rule: the next state of the system depends only on its current state.
    State
    = serve pizza, serve burger or serve hotdog
    = x, y, z
    = Connected, Disconnected, Terminated, Active
    A Markov chain can be drawn as a state diagram.
    Or written as a transition matrix.
    The state diagram represents all possible states and the associated probabilities.
    Transition matrix
    = represents the state diagram
    = probability of going from one state to another
    = A
    At equilibrium, the probabilities of the next state don't change any more. The probability of each state at equilibrium = Stationary Distribution.
    Let's call this equilibrium probability π.
    πA = π
    π
    = left eigenvector of the matrix (with eigenvalue 1)
    = probabilities of each state the system could be in, assuming the equilibrium state.
    Using two equations:
    A) πA = π
    B) the sum of probabilities is 1,
    we can work out the value of π, i.e. the equilibrium probability.
    Alternatively, run a simulation.
    Q: Do all Markov chains have an equilibrium state?
    A: Don't know... Need to study more to find out...
    Q: Can I use Subscriber Status as the hidden state?
    A: Yes. But if the status is known, then it's better to use it as an observation state.
    Q: Can I "model" the next state as depending only on the current state? But the next state actually also depends on the previous state; this seems contradictory.
    A: Yes, I can. At the per-step level, the next state depends only on the current state. But at the system level and at equilibrium, it "depends" on both the current and the previous states, because the current state has been "affected" by the previous states.
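The "alternatively, run a simulation" point above can also be done deterministically by repeated multiplication (power iteration). A sketch, using the transition matrix from the video's example:

```python
import numpy as np

A = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.0, 0.7],
              [0.5, 0.0, 0.5]])

pi = np.array([1.0, 0.0, 0.0])  # any starting distribution works for this chain
for _ in range(100):
    pi = pi @ A                  # one step of the chain: pi <- pi A

print(pi)  # ≈ [25/71, 15/71, 31/71], the stationary distribution
```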

    • @Ceratops17  1 year ago +4

      Hi, in case you still need the answer: you can prove that an ergodic Markov chain (one where all states communicate with each other and the chain is aperiodic, i.e. the gcd of return times is 1) always has an equilibrium state.

    • @chaityashah4221  1 month ago +1

      Don't know if you revised it, but I surely did a revision.

  • @bubblewrap55  3 years ago +29

    Good explanation. They never covered why I was calculating eigenvalues in high school. Loved how the path and the random walk converged in the end.

    • @NormalizedNerd  3 years ago +1

      Thanks!! Yeah... they teach stuff without showing the applications :(

    • @abc4924  3 years ago +5

      You guys calculate eigenvalues at High school? Great!

    • @manishmayank4199  3 years ago +2

      @@abc4924 my reaction was same...I studied eigenvalues in my 2nd semester of college

  • @akhilgoenka6817  1 year ago +1

    Found this awesome channel today. Fantastic visuals & crystal clear explanation. Subscribed!

  • @theelysium1597  2 years ago +10

    This is a great video! I am currently taking Linear Algebra II and Probability (two separate courses) and this video perfectly connected them :) thank you!

  • @traderrider09  6 months ago +1

    That was one of the smoothest explanations i ever came across !

  • @lowerbound4803  2 years ago

    Very well-explained, appreciate your effort. Thank you for making this.

  • @rommix0  10 months ago +4

    This is so cool. I'm getting into machine learning, and videos like these are extremely helpful. I've only really come across HMMs for their historic use in speech recognition.

  • @jiangxu3895  2 months ago

    Dude, this is the first time I get the idea of Markov chain. Thanks a lot!!!

  • @johnperkins6550  2 years ago +1

    I am just starting to learn this. The best explanation of all the videos I have seen.. Very understandable. And there is the application to Python and Data Science as BONUS! I am subscribed and I want to see all of the videos now!!!

  • @umarkhan-hu7yt  1 year ago

    You make it clear and more intuitive. Thanks

  • @dr-x-robotnik  2 years ago +2

    This tutorial helped me with my NLP project on part-of-speech-tagging. Thank you very much!

  • @anubhavyadav4279  2 years ago

    You made it look so simple! Amazing man!

  • @rebeccacarroll8385  8 months ago

    This is the best video ever. Seriously, I was ripping my hair out over these concepts, and this bridges each point beautifully.

  • @bhushanakerkar6441  1 year ago

    excellent explanation. Just too good to be true. You have made an esoteric subject so simple

  • @mariusbaur6765  1 year ago

    thanks a lot! incredible how you can explain a difficult topic in such an easy way!

  • @markm4642  1 year ago +1

    Great content, thanks for sharing. Your education is helping lots of people. Keep going.

  • @chloewei768  3 years ago +2

    Awesome explanation!! It is so beginner friendly and I love it!!
    Thank you! and look forward to seeing more content from you!!

  • @JackMenendez  10 months ago

    Wow, thank you. Why was this so hard for me back in the day? Great job.

  • @MrRaja  8 months ago

    Thanks for the explanation. It's starting to make sense. Little by little.

  • @nicholasadegbe4629  3 years ago

    I covered just 2 minutes of this and I'm so excited!!!

  • @3munchenman  2 years ago

    You explained it to me like I am 5 years old. And that is what I needed. Thank you!

  • @ralphvonchunjo  2 years ago

    Thanks from the future! Great explanation, outstanding instructor!

  • @wakabaka777  3 months ago +1

    Wonderful explanation! I love this visualization

  • @pemessh  3 years ago +11

    You sir, just earned a subscriber.
    These kinds of quality videos and great explanation is what we love.
    Thank you.
    Best wishes from Nepal.

    • @NormalizedNerd  3 years ago +1

      Thanks and welcome to normalized nerd :)


  • @franciss.fernandez7581  2 years ago +3

    This was an amazing video. You're an outstanding instructor!

  • @AnupKumar-nz2qq  4 months ago

    It's a very nice video to understand the Markov chain model in a simplified way. Please make more such videos on the Markov model and stochastic process.

  • @aydnaydin9109  6 months ago

    perfect explanation.. everybody can understand. this video may be the easiest explanation for this topic. THANK YOU !!!

  • @taquakhairysaeed1771  2 years ago

    wow this is the best technical video i have ever seen!! Well done!

  • @yashshah4182  4 months ago

    What a great introduction to Markov Chains! Thank you, it was really helpful

  • @KN-ls9rq  2 years ago

    woah, that was an awesome video man! I think i'll be watching your videos just for fun too! keep doing what you do 👍

  • @manim4434  20 days ago

    Thank you so much, I really needed it and your video and great explanation and method helped so much.

  • @piotrgorczyca5548  3 years ago +8

    5:24 I feel you bro, recording the entire audio and then finding out about the mistake only at editing... I did the same, just cut words from other parts of the recording and put them together to create a sentence...
    Thanks for the video btw, very nice.

    • @NormalizedNerd  3 years ago +2

      Exactly bro :/

    • @rg31222  3 years ago +1

      So true...so much time and effort goes into creating any content especially audio and video...great video and great channel.

  • @Reigatsu  2 years ago +3

    Great video! As a physics graduate, it’s honestly surprising how often eigenvalues and eigenvector keep showing up in what I do!

    • @n-panda921  1 year ago

      ya! and you can really think this in terms of quantum mechanics too, I like all these connections

  • @georgeiskander2458  1 year ago

    Really awesome. I've never understood this topic as easily as with your explanation.
    Thanks

  • @traj250  1 year ago

    Awesome video. Undergrad student that really appreciates this simplification.

  • @casestudy3167  2 years ago +1

    very well explained. thank you for making this video

  • @eventhisidistaken  3 years ago +4

    Thank you, this was very helpful. If you decide to expand on the video, the one thing that was not immediately clear to me, was *why* pi represents the stationary state probabilities. I had to write out the probability equations for that to become clear.

    • @NormalizedNerd  3 years ago +1

      Thanks for this feedback :D

    • @RamakrishnanRukmini  2 years ago

      That is the value reached by the system finally. No more variation. End state values. Hence stationary state.

  • @emrullahcelik7704  2 years ago

    Very concise explanation. Thank you.

  • @jeevanmarg  3 years ago +1

    Excellent demonstration. Really helpful. Thank you.

  • @SamA-nh9mf  1 year ago

    Hi, could you explain why you took the middle row of matrix A to find the values? Would it work with the top and bottom rows?

  • @ihateyourusernames  2 years ago

    Awesome video. Great presentation, clear explanation, and plenty of areas to grow from here. Thanks very much. Sub'd and looking forward to more content!

  • @aftabasir7933  11 months ago

    Well made and well visualized. Good job.

  • @JeffLuntGames  5 months ago

    Cool video - watching the whole series now.

  • @marclennardcolina6033  3 years ago +8

    Great Explanation. Learned a lot from these! I would also like to ask for permission to cite your examples in a report I'm about to make in my masters class.

    • @NormalizedNerd  3 years ago +2

      Yes, absolutely. Best of luck for your report :D

  • @rishikambhampati2862  1 year ago +4

    Hello, thanks for the wonderful explanation. I have a naive question though: how did we arrive at the adjacency matrix and the directed graph with probabilities in the first place? Is it from observations or domain knowledge (in this case, will the restaurant give us the probabilities)?

  • @pratikshakharat8644  1 year ago

    Yes yes we want more with such interesting examples

  • @techie1143  1 year ago

    Very good explanation with clarity. It would be greatly appreciated if you could include the script of these videos.

  • @cvzsmit  2 years ago

    Can you convert an adjacency matrix to a transition matrix? I have a 5x5 adjacency matrix with some numbers larger than 1 (due to multiple connections between states).
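One common recipe for the question above (a sketch, assuming the multi-connection counts should become proportional probabilities): divide each row by its row sum. The matrix below is invented for illustration.

```python
import numpy as np

# Hypothetical 5x5 count/adjacency matrix; entries > 1 = multiple connections.
C = np.array([[0, 2, 1, 0, 0],
              [1, 0, 0, 3, 0],
              [0, 1, 0, 0, 2],
              [2, 0, 1, 0, 1],
              [0, 0, 0, 1, 1]], dtype=float)

# Row-normalize so each row becomes a probability distribution over next states.
# (An all-zero row would need special handling, e.g. adding a self-loop.)
T = C / C.sum(axis=1, keepdims=True)
print(T.sum(axis=1))  # every row sums to 1
```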

  • @XiaohanGao  3 months ago +1

    Informative and super clear!!! Thx!

  • @ciberman  3 years ago +34

    "Please pause the video if you need a moment to convince yourself"
    What kind of 3blue1brown is that?!

  • @lucasqwert1  9 months ago +6

    Thank you! Only one thing I didn't get: starting from πA = π and the sum of probabilities being 1, how do we calculate the stationary state as π = [25/71, 15/71, 31/71]?
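For the curious, here is the exact arithmetic. The substitutions in the comments come from two of the πA = π equations (with A from the video) plus the normalization constraint:

```python
from fractions import Fraction as F

# pi = (x, y, z), with A = [[0.2,0.6,0.2],[0.3,0,0.7],[0.5,0,0.5]]:
#   column 2: 0.6x = y                =>  y = (3/5) x
#   column 3: 0.2x + 0.7y + 0.5z = z  =>  z = (2x + 7y)/5 = (31/25) x
# Normalization: x + y + z = 1        =>  (71/25) x = 1  =>  x = 25/71
x = F(25, 71)
y = F(3, 5) * x
z = F(31, 25) * x
print(x, y, z)         # 25/71 15/71 31/71
print(x + y + z == 1)  # True
```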

  • @panpeter7879  2 months ago

    Have to say this is a very very helpful video for understanding MCMC 🎉🎉🎉

  • @praneetkumarpatra2661  2 years ago

    my mind is blown!!! every new thing that was covered in my course in the last month just got used here!!!

    • @NormalizedNerd  1 year ago

      Haha...don't you like when that happens 😍

  • @naageshk1256  7 months ago

    Great explanation. Thank you so much.

  • @AaronCarlsson  2 years ago

    Great explanation, please make more videos that go into greater depth.

  • @blackcoffee2go558  2 years ago

    @Normalized Nerd - Regarding the diagram at the 1:43 mark, why don't you have a transition arrow from the pizza state looping back to it as you do for the hamburger state and the hot dog state?

  • @mayukhdifferent  1 year ago

    Life saver video 👍 prerequisite to understand markov regime switching model in econometrics

  • @GhosT-sd6ji  3 years ago

    Lovely explanation and very clear in a simple way, would love to see more of these!

  • @Life_42  2 months ago

    Thank you greatly! You're a great educator!

  • @vinx3078  2 years ago

    I'm here from ddlc and I could not understand a thing until I saw this video. Dude is the most helpful guy on this site

  • @joaopinto415  2 years ago

    You saved my life! Thank you very much!

  • @joeyng7366  2 years ago

    I am trying to do Markov Chain attribution modelling to classify conversion (1/0). Could you suggest a way to do this?

  • @onelivingsoul2962  1 year ago

    Nice video.
    I have implemented Markov chains in neuromorphic hardware, for an implementation of a Boltzmann machine.

  • @abdelkaderbenzirar5795  2 years ago

    Thank you so much for this great work👏🤝

  • @zinniye  3 years ago +3

    This helped me so much! Thank you!

  • @noshabgul7589  1 year ago

    You make it quite simple to learn 👍
    Can you please make a detailed video on eigenvector and eigenvalue concepts?

  • @vatsaldabhi1145  3 years ago

    Superb Explanation. Learned a lot from these!

  • @kevintrinh977  8 months ago

    For a general problem, how do you get A (or know the probabilities of going from one state to another)? I see that you assumed values for each arrow, but for a different problem I don't know where to get such values.
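In practice the entries of A typically come either from domain knowledge or from counting transitions in observed data. A sketch of the counting approach (the observation sequence and labels are invented for illustration):

```python
from collections import Counter

# Hypothetical observed sequence of states
seq = ["pizza", "burger", "burger", "hotdog", "pizza",
       "burger", "hotdog", "hotdog", "pizza"]

pairs = Counter(zip(seq, seq[1:]))   # how often each transition occurred
totals = Counter(seq[:-1])           # how often each state was departed
states = sorted(set(seq))

# Maximum-likelihood estimate: P(s -> t) = count(s, t) / count(s, anything)
A = {s: {t: pairs[(s, t)] / totals[s] for t in states} for s in states}
print(A["burger"])  # ≈ {'burger': 0.33, 'hotdog': 0.67, 'pizza': 0.0}
```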

  • @shivamvaish7475  2 years ago

    I really find it very interesting to learn from you. So can you make videos on TAR and STAR model also?

  • @hunterhughes2589  1 year ago

    Great video! A quick question, and I apologize if someone already asked it:
    Intuitively, I would assume that the probability of any given state is simply the sum of the probabilities into that state divided by the sum of the probabilities for all states. For example, the probability for pizza would be 0.2, because 0.6/3.0. The answers you get from the π calculation and the random walk are very close to this, but not exact. What gives?

  • @capszabidin9219  8 months ago

    Perfect explanation, thanks!

  • @blakef.8566  2 years ago

    Reminds me a lot of 3B1B. Very helpful! Thank you.

    • @kstxevolution9642  1 year ago

      I can only assume he used 3b1b's animation engine for the videos. It's a brilliant bit of work.

  • @theodoresweger4948  7 months ago

    I watched the movie "A Beautiful Mind" and the random walk comes to mind. I found this quite interesting, along with the Enigma machine the Germans used, how its code was finally broken, and the consequences of revealing that the code was broken and how that would affect WWII.

  • @Dubai_life_  10 months ago

    Thank you very much.
    More videos please.

  • @achrafelbrigui7687  2 years ago

    Thanks a lot! You have literally enlightened my path.

  • @sumers9396  1 year ago

    perfectly explained, thank you!!

  • @saadali4797  1 year ago

    a great video to understand the topic easily

  • @estelitaribeiro4196  2 years ago

    Perfect explanation!! Thanks!!

  • @muhammadwaseem_  2 years ago +1

    Fell in love with your channel and content quality....

  • @qqyerik38  2 years ago

    Impressive video! Helped me a lot!!

  • @lucasaimaretto3725  3 years ago +1

    Very clear explanation! thank you!

  • @shilpask9601  2 years ago

    great way of explaining, sir. Thank you so much...!!

  • @user-fw2kc6iv1f  6 months ago

    good bro, you made me understand the Markov model

  • @sinaghafouri2612  2 months ago

    Awesome video. Thanks!

  • @huitv1  3 years ago +2

    Very clear explanation, well done! ty