Best video on Markov Chains. So easy to understand and no unnecessary analogies. Great job!
THANK YOU! God, i finally understood how to make the damn matrix. My professor is so bad at explaining things and he just goes by the sum version, and doesn't explain things.
This is such a life savior !
OMG! This is the best video on Markov chains. I just spent 30 minutes reading articles on Medium, Brilliant, Wikipedia, etc. and couldn't understand what they meant at all. But 4 minutes into this video, I got it!
So crisp. So clean. So clear.
Thank you!
I've been watching math videos for a few years and I have to say that your channel is the best. You just teach in an extremely organized and interesting way. Please keep it up!
Thank you so much!
What a video man, what an explanation. I literally understood the concept in one go. Keep it up !!!!!!
Dr Trefor, you're a blessing. Thank you for such clear explanations. They're liquid gold.
Dude, you are a magician, the way you explain it! It seems so easy and makes so much sense, thank you so much! Please do more parts!
don't know how to thank you sir
this deserves to be paid for
really great job and pure gold
Good, very good. Some people on YouTube are just afraid of writing math whenever they are teaching, and just mystify the subject. This is good math indeed.
BEST EXPLANATION EVER, I'M COMING BACK TO SEE IF YOU CAN EXPLAIN THE HARDER STUFF THIS WELL
Simply excellent explanation. In 6 minutes you made me understand what I had tried to study for a week.
Generally speaking, in the literature the rows are "from state A" and the columns are "to state B" (so transpose his matrix). It would also have been nice to see the even simpler form of P using eigenvalues and eigenvectors, P = AD(A^-1), to better show how this generalizes transitions and gives the rate at which the Markov chain converges.
Was wondering this lol
The explanations are easy to understand and the video length is at the sweet spot. Great job!
Looking forward to the rest of the series.
Thank you, glad you're enjoying!
Your explanation is much better than Khan Academy's, let's say. So detailed and so simple to understand.
Thank you so much!
His video is completely wrong about the matrix positioning
Give this man an award!
Wow, I was writing up my thesis on a TMMC application to my little chemical adsorption model and couldn't understand the maths behind it properly. You saved my life.
best explanation I could find on youtube
Thank you!
Crystal clear explanation. Direct and easy to understand. Thank You!
Wow. Thank you very much. What a way to make this look so easy. I understood this concept for the first time in my life.
that's what I call a straightforward explanation. Thanks a lot!
This guy saving my linear algebra grades
also i just realized that markov chains look like finite state machines
Clear explanation. Well poised and articulated. Makes it interesting, even without illustrating a real-life practical example in the video. Also, a true desire to teach.
You have shed light for people like me who suffered a lot through a 90-minute college class.
So we can apply eigendecomposition to simplify the matrix exponentiation! Thanks Trefor!
Absolutely! That was beyond the scope of this video, but would definitely be the next thing to do.
i have an exam today on this topic and you clearly explained it to me
I started out so confused, and this guy cleared it up with a 6-minute crash course. My man.
Our Markovian hero, thanx
Incredibly good explanation of Markov Chains. Subscribed!
Welcome aboard!
Thank you for this very practical video, I was immediately able to apply this concept. I didn't immediately understand why multiplying the transition matrix with the current state vector yields the next state vector, but after some further consideration of what this multiplication actually does, it is quite clear why/how that works.
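For anyone else pausing on that step, here's a minimal NumPy sketch. The matrix entries are an assumption, chosen so they reproduce the 0.6/0.4 → 0.66/0.34 numbers quoted elsewhere in this thread, not read off the video:

```python
import numpy as np

# Assumed column-stochastic matrix: entry P[i, j] is the probability
# of moving TO state i FROM state j, so each column sums to 1.
P = np.array([[0.9, 0.3],
              [0.1, 0.7]])

s1 = np.array([0.6, 0.4])  # current distribution over states A and B

# Each entry of P @ s1 is a law-of-total-probability sum:
# P(next = A) = P(A <- A) * P(A) + P(A <- B) * P(B), likewise for B.
s2 = P @ s1
print(s2)  # [0.66 0.34]
```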
This just blew my mind: it made me realize that the final convergent state of a Markov chain is dictated by the transition matrix's eigenvector corresponding to its largest eigenvalue, because the repeated multiplication is essentially the power method for finding the largest eigenvalue/eigenvector.
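That connection can be checked numerically. A sketch with an assumed 2-state column-stochastic matrix (repeated multiplication is exactly the power method, and a stochastic matrix's largest eigenvalue is 1):

```python
import numpy as np

P = np.array([[0.9, 0.3],
              [0.1, 0.7]])  # assumed column-stochastic example

# Repeated multiplication by P is the power method: the iterates
# converge to the eigenvector of the largest eigenvalue (here, 1).
s = np.array([0.6, 0.4])
for _ in range(100):
    s = P @ s

vals, vecs = np.linalg.eig(P)
v = vecs[:, np.argmax(vals.real)].real
v = v / v.sum()  # rescale the eigenvector into a probability vector

print(s, v)  # both approach the steady state [0.75 0.25]
```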
Simple and comprehensive, thank you
I cried.
This was very good
Amazing video sir... Thank you for the video. Love from India
Thanks for the lucid explanation!
Lovely. I think even Markov would not be able to explain like that !!! Liked and Subscribed!!!
Thanks for the sub!
That was a wonderful explanation of the Markov chain, thank you
I liked your explanation it was simple and clear, thank you so much.
Simple and comprehensive.Thank you soooooo much
i am safwan, good video👍🏻🙏🏻
Also, the diagonalization of a general two-state transition matrix is quite nice, so taking a high power of one is not so bad
Thank you Dr. Trefor Bazett! Thank you!
You, Sir, are a Superhero.❤
Wow! Interesting Topic! Thank You for covering something wonderful!
Glad you enjoyed it!
Thank you for your video, it is well explained. But at 3:19, isn't the matrix supposed to be the other way around? I mean, shouldn't the 0.25 be in the place of the 0.4? Because the rows give the directions, not the columns?
yes, you are right
I really liked your easy explanation. Thank you.
Brilliant to say the least
Spot on delivery Dr, many thanks
Thank you, I think I will be able to ace the CS 70 final exam at Berkeley.
Why would someone dislike your videos? They must be in a "dislike" Markov state. I wonder when they will transition, Dr. Trefor Bazett.
You just saved me !
Thanks
I am impressed, wayyyy too good. Liked and Subscribed
Awesome , cleared my concept , Thank you !
Thank you for the lecture. It's easy to understand. Do you have any plans for nonlinear control theory (obviously in an easy way, like you taught now)?
Absolutely clear and concise, thank you!
It's worth noting, however, that computing the P^n matrix is very computationally expensive. Is there a better way to solve for P^n without having to do the power?
Very well explained sir! Thank you.
Great video! There's so much more you can talk about concerning Markov chains; this is just the beginning! Like how they can limit to some stationary matrix under certain conditions on the transition matrix P, or even easier ways to calculate P^n: if you decompose it as P = U D U^-1, where U is the matrix of eigenvectors and D is the diagonal matrix of eigenvalues, then P^n = U D^n U^-1, where D^n simply has the eigenvalues^n along its diagonal. They are very interesting indeed; you have your work laid out for you! XD
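A quick check of that identity in NumPy, with an assumed 2×2 transition matrix (real, distinct eigenvalues, so it diagonalizes cleanly):

```python
import numpy as np

P = np.array([[0.9, 0.3],
              [0.1, 0.7]])  # assumed example with eigenvalues 1 and 0.6

# P = U D U^-1  =>  P^n = U D^n U^-1, and D^n only needs each
# eigenvalue raised to the n-th power -- no repeated matrix products.
evals, U = np.linalg.eig(P)
n = 10
Pn = U @ np.diag(evals ** n) @ np.linalg.inv(U)

print(np.allclose(Pn, np.linalg.matrix_power(P, n)))  # True
```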
Totally! I am thinking of doing some follow-ups; we are just scratching the surface here.
Awesome video!
wow it just so happens to be that the lecture today included transition matrices! what luck!
nice timing!
GREAT EXPLANATION!
Nice! Even I understood that.
Brilliant explanation thank you :)
Here's a specific question. Can you help solve this?
LRA: Please calculate the 5-year LRA (Long Run Average) PiT transition matrices for each rating class.
Rating R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 Default
R1 77.49% 13.10% 3.20% 2.10% 1.02% 0.96% 0.75% 0.64% 0.52% 0.21% 0.01%
R2 5.00% 70.44% 4.10% 4.04% 3.20% 3.10% 2.90% 2.85% 1.80% 1.46% 1.11%
R3 4.12% 5.12% 72.00% 5.14% 3.11% 2.80% 1.70% 1.66% 1.55% 1.42% 1.38%
R4 2.14% 2.80% 3.81% 72.96% 3.45% 3.42% 2.92% 2.60% 2.00% 1.99% 1.91%
R5 1.20% 1.36% 1.51% 1.72% 76.14% 4.10% 3.20% 3.13% 2.99% 2.35% 2.29%
R6 0.11% 0.19% 0.21% 0.28% 4.35% 73.88% 7.19% 4.42% 3.27% 3.10% 3.00%
R7 0.10% 0.21% 0.31% 0.36% 0.98% 1.55% 74.68% 8.88% 5.28% 4.02% 3.63%
R8 0.13% 0.24% 0.38% 0.48% 1.20% 1.56% 5.45% 71.23% 7.26% 6.10% 5.97%
R9 0.12% 0.23% 0.36% 0.44% 1.21% 1.54% 3.20% 4.11% 65.67% 11.91% 11.21%
R10 0.10% 0.22% 0.34% 0.46% 1.20% 1.55% 2.60% 2.72% 4.32% 70.28% 16.21%
Default 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 100.00%
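One possible sketch, under the assumption that the 5-year matrix is simply the one-year row-stochastic matrix raised to the 5th power. Whether that homogeneous-chain assumption matches the intended LRA/PiT methodology is a modeling question, not a given:

```python
import numpy as np

# One-year rating transition matrix from the question (in percent),
# rows = "from" rating, columns = "to" rating; each row sums to ~100.
P = np.array([
    [77.49, 13.10,  3.20,  2.10,  1.02,  0.96,  0.75,  0.64,  0.52,  0.21,   0.01],
    [ 5.00, 70.44,  4.10,  4.04,  3.20,  3.10,  2.90,  2.85,  1.80,  1.46,   1.11],
    [ 4.12,  5.12, 72.00,  5.14,  3.11,  2.80,  1.70,  1.66,  1.55,  1.42,   1.38],
    [ 2.14,  2.80,  3.81, 72.96,  3.45,  3.42,  2.92,  2.60,  2.00,  1.99,   1.91],
    [ 1.20,  1.36,  1.51,  1.72, 76.14,  4.10,  3.20,  3.13,  2.99,  2.35,   2.29],
    [ 0.11,  0.19,  0.21,  0.28,  4.35, 73.88,  7.19,  4.42,  3.27,  3.10,   3.00],
    [ 0.10,  0.21,  0.31,  0.36,  0.98,  1.55, 74.68,  8.88,  5.28,  4.02,   3.63],
    [ 0.13,  0.24,  0.38,  0.48,  1.20,  1.56,  5.45, 71.23,  7.26,  6.10,   5.97],
    [ 0.12,  0.23,  0.36,  0.44,  1.21,  1.54,  3.20,  4.11, 65.67, 11.91,  11.21],
    [ 0.10,  0.22,  0.34,  0.46,  1.20,  1.55,  2.60,  2.72,  4.32, 70.28,  16.21],
    [ 0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00, 100.00],
]) / 100.0

# Five one-year steps: P5[i, j] is the probability of moving from
# rating i to rating j over 5 years; Default remains absorbing.
P5 = np.linalg.matrix_power(P, 5)
print(np.round(P5[0] * 100, 2))  # 5-year transition row for R1, in percent
```

The absorbing Default row is what makes the cumulative default probabilities grow with the horizon.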
I am Almina Khatun, who also comments on your videos sir... I am always first
very good explanation. thank you.
Great video, thanks!! Any chance to follow up on this topic? Perhaps look into Markov Models?
This was absolutely brilliant. This video could also be used to explain quantum spin 1/2; just make a and b stand for spin up and spin down
Thank you for confusing me. Great work 👍
Very nice explanation
If we were to line up the probability distributions to sum to 1 along the rows rather than the columns, that wouldn't work (keeping the vector unchanged). Is that because of how it's defined, due to the notation used?
Indeed, it's just a quirk of the definition. If you wanted to do it your way, you'd have to be multiplying with the vector on the left instead, which would be just as good but not as conventional.
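A small numerical sanity check of that equivalence (the matrix values here are assumed for illustration):

```python
import numpy as np

P_col = np.array([[0.9, 0.3],
                  [0.1, 0.7]])  # video's convention: columns sum to 1
P_row = P_col.T                 # the other convention: rows sum to 1

s = np.array([0.6, 0.4])

# Column convention puts the vector on the right; row convention puts
# it on the left. The resulting next-step distribution is identical.
print(P_col @ s)  # [0.66 0.34]
print(s @ P_row)  # [0.66 0.34]
```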
Great! Very clear and concise. What is the connection of this with Turing machines?
Lovely explanation
Respect!!!!!!✌✌ >>>Legend👏
Your 6 minutes = my professor’s 1 hour
Thank you man! This was so helpful☺️
At first I thought the result wouldn't always add up to 1, but it can easily be shown that if the columns of the P matrix (and of course the one column of the S matrix) add up to 1, the product's column will also add up to 1.
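That argument is one line with the all-ones vector; a sketch with an assumed 2-state matrix:

```python
import numpy as np

P = np.array([[0.9, 0.3],
              [0.1, 0.7]])  # assumed: each column sums to 1
s = np.array([0.6, 0.4])    # sums to 1

ones = np.ones(2)
# "Columns of P sum to 1" is exactly the statement ones @ P == ones,
# so ones @ (P @ s) == (ones @ P) @ s == ones @ s == 1.
print(ones @ P)        # [1. 1.]
print((P @ s).sum())   # 1.0 (up to floating point)
```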
absolutely amazing
Beautiful video Sir..👌👌
This Markov process feels vaguely quantum mechanical to me, the idea of probabilities spreading out over time over multiple states.
I am a bit confused about how we came up with S0. If we had 3 vectors, how would you come up with S0? Watching the previous video helped me understand how S1 was derived, but I cannot understand how S0, the initial state, was derived. Why not 0.5/0.5?
omg this video helps me a lot! thanks a ton
It sounds good, I can apply this to the roulette game! 😅
very clear. nice work.
I did not see a link to the video you referenced introducing matrix multiplication
It would've been extremely helpful if you went through more examples at the end, like s4 or s6 or whatever
Thank you for your videos. If you explain the logic behind it, and not just the matrix/equation structure perspective, it will be much easier to understand. Also, the first video is not on the list.
This is an awesome video; however, I am still confused: is it possible to calculate the transition matrix using only the initial probabilities? Or to calculate the initial probabilities using only the transition matrix?
I didn't understand how the S2 vector was determined.
If S2 is just the states of B, it should just be 0.6 and 0.4, right? How do we get 0.66 and 0.34?
When does a Markov chain converge into steady state?
How many steps does it take to converge?
Memorylessness property explained
I derived this before knowing what it was
At 3:32, I think the rows in the matrix should add up to 1. Am I correct? Thanks!
Columns add up to 1.0. Not the rows.
Thank you, it was useful
Best explanation ever!!
Thank you!!
Does this non-Markovian system turn into a Markovian system if we let n -> infinity?
I think there was something wrong in the video, which is the initial vector: as you write it, it is vertical, but it should be horizontal, S(x y..), and the multiplication should be S * P^n, not like in the video, as the results are not the same.. And thank you for the video.
toast to the second part...
How do you find at what value of n the S vector will have a given value for x1?
Saved me👏
Beautiful.
Incredible 🔥