Bro dropped the coldest playlist on Markov Chains and thought we wouldn't notice 🥶 . Sir 🙏🙏🙇♂🙇♂.
These animation combined videos just make you understand 30 mins topic in 10 mins.
This is a very underrated channel! Fantastic visuals and explanations. Thank you for all that you do!
Thanks man! Do share the video if you can ❤️
So true
How we can know the value of r? If possible make a separate video on Chapman Kolmogorov process proof.
I totally agree.
Absolutely true
You speak in such a clear manner, it's truly amazing. People forget how significant part it is of teaching. Thank you.
Notes for my future review.
05:06
*Chapman-Kolmogorov Theorem*
can be used to calculate the probability of going from state i to state j after n steps.
05:06 = Probability of going from state i to state j in n steps with an intermediate stop in state k after r steps, then summed over all possible values of k.
Probability of going from state i to state j after n steps
= Probability of going from state i to an in-between state * probability of going from that in-between state to state j, summed over all possible in-between states
= sum over all states k of ( probability of going from state i to k after r steps * probability of going from k to j after n-r steps ), i.e. P_ij(n) = sum_k P_ik(r) * P_kj(n-r)
In other words: get the probability of going from state i to state j in n steps through a state k. Do it for all possible states k. Then sum all the probabilities.
But what is r? What is the value of r?
r is the split point: the number of steps after which we look at the intermediate state k. It is one fixed value, the same for every k in the sum.
The exact value of r is not important; the identity holds for any r between 0 and n. Just remember that the probability of going from k to j is for the remaining n-r steps.
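(Not from the video, just a quick NumPy sketch to check the identity numerically; the 3-state transition matrix A is made up for illustration:)

```python
import numpy as np

# A minimal numerical check of the Chapman-Kolmogorov identity.
# The 3-state transition matrix A below is hypothetical.
A = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.0, 0.7],
              [0.5, 0.0, 0.5]])

n, r = 4, 2
P_n = np.linalg.matrix_power(A, n)       # n-step transition probabilities
P_r = np.linalg.matrix_power(A, r)       # r-step
P_nr = np.linalg.matrix_power(A, n - r)  # (n-r)-step

# P_ij(n) = sum over all k of P_ik(r) * P_kj(n-r)
i, j = 0, 2
lhs = P_n[i, j]
rhs = sum(P_r[i, k] * P_nr[k, j] for k in range(3))
assert np.isclose(lhs, rhs)
```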
I've only just discovered this channel and watched the Markov Chains series. They're excellent. They explained the concept and practice clearly and relatively simply. I'll definitely be coming back to see what else is available here. It's a great resource.
I am so glad I stumbled upon this channel. Concepts are extremely clear and these videos have definitely helped me out so far. Thank you!
Thank you so much for the informative video. I am enrolled in a machine learning course with no background in calculus or linear algebra and this video is literally a life saver
Video 10/10
(animations, sound quality, content, focused and on point)
Thanks!
A single word is enough: "Awesome". Cleared all the doubts I had. Please make more videos in the AI/ML series.
I wasn't convinced of the quality of the vids, but after this, the channel deserves a subscribe.
Haven't been in contact with books/studies for at least 13 years, but your power of illustration is amazing enough that I understand what's going on.👍👍
Glad to hear that man! :D
Your presentation and teaching quality give a positive vibe, especially your voice. Moreover, it's great to see an Indian teaching in such a beautiful way. Highly recommended channel for anyone who wishes to learn advanced probability theory. You have great smoothness in your teaching. Thank you Normalized Nerd
Please reduce the bell sound in the beginning. It is too loud compared to the voice, and at that moment we get an earful
Thanks for the feedback
And on the other hand, the voice during the illustrations is too low
It has been so intuitive with the visualization and the explanation of your teaching! Thanks and keep up the good work!
this is just the most SUPER COOL explanation of the MARKOV MODEL I have ever seen!!!!!!!!!! Thanks so much man!
ah yes! I've been waiting for this! Fantastic. Thank you so much!
You're welcome! Keep supporting...
These videos are gold !!!!!!
Keep supporting :D
If there was a video like yours for every topic I have to learn at university I wouldn't be so frustrated with learning.
You are the best in the game.
Thank you so much for the videos. You explained Markov Chain a lot better than my professor. You have mentioned aperiodicity in this video but you have not talked about it in later videos.
Maybe I will in a future video :D
Channels like these made me think "Math is Wonderful"!
Outstanding series of explanations
intuitive example and clear explanation thank you
Amazing Content :) For anyone who, like me, wondered why getting from 0 to 2 through 1 in 2 steps is the product of getting from 0 to 1 and from 1 to 2: it follows from the Chapman-Kolmogorov Thm explained around 5:54 :)
This is the best HMM series on YouTube. Crystal clear explanation, and the music in between the video is just superb... It would be great if you could send me the link to the music. Thanks in advance.
this is so helpful. I'll be really excited when the next video comes out. Top quality.
:D keep supporting
Amazing video bro! Thanks, from Delhi.
This is explained very badly by my own teacher; you saved me, thank you! Please keep doing what you do
love from China, good video
You deserve the likes bro
Thanks
Great explanations! Thank you for making it so followable.
Such a simple and clear explanation of a complicated concept.. Fantastic.
Thanks a lot!
Beautifully explained!
This is a work of art!
Thank you. I am doing a course in probability models and I have never seen Markov Chains. The textbook examples aren't the most intuitive so it has been a bit frustrating. This helps a lot.
That was the goal! Thanks man.
Awesome series of videos, much appreciated
thanks a lot. you are a blessing. waiting for the aperiodicity video
People from the future says thanks man
This video is incredibly clear! Thank you!
Very nicely explained. Thank you
Thanks chad, your videos really help me out. Keep it up bro...
Loved your narrative. I agree everything was Cool and Elegant 😎. Great video series on Markov. Gracias amigo 🤙.
Thanks a lot man! Keep supporting :D
looking forward to another video about aperiodicity!
awesome presentation.....
Hope to see more content from you.....
Good going !!
You really normalise a nerd !!!
this was great
You earned my subscription. great videos till part 3
Let's see if you continue the great work in the next parts
Amazing video! Thank you!
Hello, future person here. I really like your videos regarding Markov chains :) I was wondering if it would be possible to also get the mathematical definitions that go along with the videos. Will you also be covering Markov decision processes and going into detail regarding the optimal value function/policy? It would also be really interesting to have a parallel series on continuous-state (and continuous-action, if we are talking about decision processes) Markov chains, as I don't know any good sources that cover them!
Hmm interesting... Markov decision processes and their use in RL are a broad topic. Currently, I don't possess a very good understanding of them. Maybe someday I will.
@@NormalizedNerd Same here, so I was hoping you did :P Anyhow, I'm eager for the next video!
Very clear and informative, concise and to the point.
Please add more videos about the applications of Chapman-Kolmogorov in various applied fields.
What if we need to find 3-step transition probabilities like P11(3)? How should we proceed using the Chapman-Kolmogorov Theorem?
@Itachi Uchiha
Assuming num of states = 3.
Suppose you have taken r=1, so
P11(3) = P11(1) * P11(2) + P12(1) * P21(2) + P13(1) * P31(2)
Again applying CK theorem on 2 step transitions,
P11(2) = P11(1) * P11(1) + P12(1) * P21(1) + P13(1) * P31(1)
P21(2) = P21(1) * P11(1) + P22(1) * P21(1) + P23(1) * P31(1)
P31(2) = P31(1) * P11(1) + P32(1) * P21(1) + P33(1) * P31(1)
Similarly, you can use any value of r
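(A quick NumPy check of the r=1 expansion above; the 3-state transition matrix is made up, and the code uses 0-based indices, so state 1 above is index 0 here:)

```python
import numpy as np

# Hypothetical 3-state transition matrix (any row-stochastic matrix works).
A = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2]])

P2 = A @ A      # 2-step transition probabilities
P3 = A @ A @ A  # 3-step

# r = 1:  P11(3) = P11(1)*P11(2) + P12(1)*P21(2) + P13(1)*P31(2)
p11_3 = A[0, 0] * P2[0, 0] + A[0, 1] * P2[1, 0] + A[0, 2] * P2[2, 0]
assert np.isclose(p11_3, P3[0, 0])
```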
Good Stuff bro!! Keep up the good work. Your explanation and the video edit is damn cool
Thanks a lot :D
Can you also do a video on QR and Cholesky decomposition, determinants, and eigenvalues/eigenvectors
Nice suggestions...
It is a wonderful and useful lecture series. Can you share some reading material associated with this lecture series?
Great illustration man! Btw, what is the music that's played when we are multiplying A infinitely?
When you brought up the Chapman-Kolmogorov theorem proof and said “you can go through it”, it felt slightly off-putting, because I couldn't. I mean, everything else is so clearly stated, and it looks like I should've understood the theorem proof too, but I failed to. I dunno whether I'd feel better if the proof was explained, or if I knew what exactly the proof notation means, or if it was in the description.
Anyway, your channel is great, you're so good at breaking complicated stuff down to simple stuff, I just wanted to give some feedback
Thanks for your honest feedback. Well, I didn't explain the derivation because that kinda bores people. Here's a tip to understand the derivation: take any two states i and j, and set n=4. Now try to compute P_ij(n) using r=0, 1, 2, 3. You'll see how things work.
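(Trying the tip above in NumPy, with a made-up 3-state chain: every choice of the split point r gives the same n-step probability:)

```python
import numpy as np

# Hypothetical 3-state transition matrix.
A = np.array([[0.0, 0.5, 0.5],
              [0.25, 0.5, 0.25],
              [0.4, 0.2, 0.4]])

i, j, n = 0, 2, 4
P_n = np.linalg.matrix_power(A, n)
for r in (0, 1, 2, 3):
    P_r = np.linalg.matrix_power(A, r)       # r = 0 gives the identity
    P_nr = np.linalg.matrix_power(A, n - r)
    val = sum(P_r[i, k] * P_nr[k, j] for k in range(3))
    assert np.isclose(val, P_n[i, j])        # same answer for every r
```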
Please talk about the use of Markov Chains in Monte Carlo Method
Very nice visualization of concept! what software did you use to make the visualization and animation?
Fantastic Explanation...
Thanks!
Your presentation is fabulous.
Make more such videos related to ML, AI, DS...
Thanks a lot for this video. It helped me a lot in clearing all my doubts. ❤️
Very welcome...Sure I'll 😍
Great explanation! The yellow highlighting color you use on the Markov chain is a little tough to see, though.
Thanks for pointing this out.
Hello 👋
Please kindly do a video on limiting distribution
Great video. Loved the Grant Sanderson style tutorials :)
Thanks a lot man! :D
5:55 an alternative proof can be derived from matrix multiplication.
Nice job here!
Great videos! Could you please clarify whether we just use the pi values we found in the first video to find A^infinity? I recall that in the first video we found pi = [0.3591, 0.2145, 0.43564], which is not the same as [0.444, 0.333, 0.222] (the values in A^infinity). Thank you!
Just wanted to say there's a tiny error at ~4:30. It's stated that P_{ij}(n) = A_{ij}^n; that's not exactly correct in index notation, because that's the equivalent of raising each individual element to a power, not the actual matrix power. For n=2, P_{ij}(2) = A_{ik}A_{kj}; for n=3, P_{ij}(3) = A_{ik}A_{kl}A_{lj}. It's a little more convoluted, but it works the way you want with the addition of parentheses: P_{ij}(n) = (A^n)_{ij}
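(In NumPy terms, with a hypothetical 2-state matrix: `A ** n` raises each element to the n-th power, while `np.linalg.matrix_power(A, n)` is the true matrix power (A^n)_{ij}:)

```python
import numpy as np

# Hypothetical 2-state transition matrix.
A = np.array([[0.9, 0.1],
              [0.5, 0.5]])

elementwise = A ** 2                     # NOT the 2-step probabilities
two_step = np.linalg.matrix_power(A, 2)  # equals A @ A

assert np.allclose(two_step, A @ A)
assert not np.allclose(elementwise, two_step)
assert np.allclose(two_step.sum(axis=1), 1.0)  # rows still sum to 1
```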
Thanks a ton!! You are the best!
:D :D :D
Good job bro !
Great videos!
Would you consider making videos on queueing theory for stochastic models, please?
Nice channel; I've been waiting for this excellent explanation. Next, how about addressing the Attention Mechanism? Thank you so much for your excellent channel.
Nice suggestion!
hey thanks for the video.
Thank you!
Super cool!!
ty for the content!
I think 3b1b will be proud that he has empowered you to make such great videos
Thanks a lot, mate! I can't thank him enough for making and publishing manim.
Tqu😍
Can you make videos on an intro to stochastic processes
suggestion noted!
hello person in the past !!! 😀😃😃
great video!! thanks a lot !
Glad it helped!
I think I saw Bayes' Theorem? Where did that bad boy pop out from?
P.S Just started watching your videos... Awesome Work!
5:54
Hi, what is the difference between the Kolmogorov equation and n-step transitions?
What are you referring to as the 'n step transition'?
Great content😅😅😅
Amazing video. It's like watching 3blue1brown :)
Glad you think so!
Hi! Thanks for the video. I am wondering if the n-step transition matrix is helpful for building a 2nd-order Markov chain?
Thx
Because of your explanations on matrices, during my exam, instead of solving for all the pis of the transition matrix, I just made my calculator raise it to the 100th power to answer correctly 😂😂😂
Well…if they allow calculators then why not! 🤭
@@NormalizedNerd it was pretty cool to have a "trick" up my sleeve not taught in class so thank you!
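(The same "trick" sketched in NumPy, with a hypothetical regular, i.e. irreducible and aperiodic, 3-state chain: raise the transition matrix to a high power and read the stationary distribution off any row:)

```python
import numpy as np

# Hypothetical regular 3-state transition matrix.
A = np.array([[0.2, 0.6, 0.2],
              [0.3, 0.0, 0.7],
              [0.5, 0.0, 0.5]])

A100 = np.linalg.matrix_power(A, 100)
pi = A100[0]                    # every row has converged to the same vector
assert np.allclose(A100[1], pi) and np.allclose(A100[2], pi)
assert np.isclose(pi.sum(), 1.0)
assert np.allclose(pi @ A, pi)  # pi is stationary: pi A = pi
```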
Hello, welcome to people from the future 😊😊😊
yeeesssss
I'm so here for indian 3b1b
😍
Dude! who the fuck are you teaching me wayyyy better than my prof in uni? How dare you ^^
hi
The content is good, but you probably should put your mouth closer to the microphone because I turn up the volume to hear your voice but then the ad in the video blows up my headphones.
Thanks for the feedback!
Ur accent sounds familiar
Indian?
Yeah :)
@@NormalizedNerd I'm from Bangladesh
@@WeedL0ver Great. আমি বাঙালি :D
Lemme sub first xD
@@NormalizedNerd do u have discord? I found this video on reddit
Hello person in the past😊
Sir I need your email