Good explanation! Just one thing: at 2:20 it's the other way round. The future state comes first, then the current state, so a_(ij) = P(q_(t+1) = S_j | q_t = S_i).
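To make the rows-vs-columns convention concrete, here's a minimal Python sketch. The snowy row uses the 75/5/20 numbers quoted in another comment below; the state names and the other rows are made-up values for illustration, not from the video:

```python
import numpy as np

# Convention a_ij = P(q_{t+1} = S_j | q_t = S_i):
# row i = current state, column j = next state, so each ROW sums to 1.
states = ["sunny", "rainy", "snowy"]
A = np.array([
    [0.80, 0.15, 0.05],  # from sunny: P(sunny), P(rainy), P(snowy) -- invented
    [0.38, 0.60, 0.02],  # from rainy -- invented
    [0.75, 0.05, 0.20],  # from snowy -- numbers quoted in a comment below
])
assert np.allclose(A.sum(axis=1), 1.0)  # rows, not columns, sum to 1

i, j = states.index("snowy"), states.index("sunny")
print(f"P(next = sunny | current = snowy) = {A[i, j]}")  # 0.75
```

So to read the matrix you pick the row for today's state and the column for tomorrow's.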
Yes, I was thinking the same!
Same thought, I'm not crazy! :)
Thanks for pointing that out... I was going crazy with rows vs. columns for a few minutes.
Well spotted
+1
And also your lecture is great!
Oh my god, that was amazing
What is the address for the postcard? :D
Wow, Now that's a Markov Model :)
Awesome explanation
Great Video.
Thumbs up!
Nice explanation.
Superb explanation...
Really nice. Can you explain HMMs in speech recognition, please? Thank you.
Where'd you get the initial state distribution from?? (3:06)
It mostly depends on today's state, but he just assumed those values.
How do you collect the data for the transition probabilities and the initial state distribution in real life?
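One common answer: count transitions in historical data and normalize (a maximum-likelihood estimate). Here's a hedged sketch; the `history` sequence is invented purely for illustration, and the crude initial-distribution estimate is just one option among many:

```python
from collections import Counter, defaultdict

# Invented toy sequence of observed daily weather states.
history = ["sunny", "sunny", "rainy", "snowy", "sunny", "sunny",
           "rainy", "rainy", "sunny", "snowy", "snowy", "sunny"]

# Transition probabilities:
# P(next | current) = count(current -> next) / count(current -> anything)
counts = defaultdict(Counter)
for cur, nxt in zip(history, history[1:]):
    counts[cur][nxt] += 1
A = {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
     for cur, nxts in counts.items()}

# Initial distribution: e.g. the frequency of each state at the start of
# many sequences (here, crudely, the overall state frequencies).
totals = Counter(history)
pi = {s: c / len(history) for s, c in totals.items()}

print(A["sunny"])  # estimated P(next state | current = sunny)
print(pi)          # rough estimate of the initial state distribution
```

With many recorded sequences you'd count only each sequence's first state for pi instead of the overall frequencies.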
How did you calculate the initial state distribution?
It was provided, I guess.
You got me.
I don't understand the initial state distribution part; how did you calculate that one? If you chose snowy as the first state, then there's a 75% chance of sunny, a 5% chance of rainy, and a 20% chance of snowy. Correct me if I'm wrong.
@Konstantinos Mechteridis This could be based on a prior.
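A small sketch to separate the two things being mixed up here: the initial distribution pi picks the first state, while the snowy transition row (the 75/5/20 quoted above) only governs what comes after a snowy day. The value of pi below is an assumption for illustration, as are the non-snowy rows:

```python
import random

states = ["sunny", "rainy", "snowy"]
pi = [0.6, 0.3, 0.1]                      # assumed initial distribution, not from the video
A = {"sunny": [0.80, 0.15, 0.05],         # invented row
     "rainy": [0.38, 0.60, 0.02],         # invented row
     "snowy": [0.75, 0.05, 0.20]}         # row quoted in the comment above

q = random.choices(states, weights=pi)[0]   # FIRST state: drawn from pi
sequence = [q]
for _ in range(6):                          # LATER states: drawn from A's rows
    q = random.choices(states, weights=A[q])[0]
    sequence.append(q)
print(sequence)
```

So even if the first sampled state happens to be snowy, the 75/5/20 numbers describe day 2 given day 1, not how day 1 was chosen.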