I really like how you are making an effort to explain the intuition behind the concepts by explaining the meanings within the equations. Only a few ppl do that.
Thank you so much for appreciating the effort :)
Really a great explanation. I especially liked the part where you related the formulation of the Gaussian curve to the iso-contours.
Thank you 😊
I'm an undergrad at Berkeley currently taking ML and you just saved my ass on this midterm
:D
Love the explanation! Thank you!!
Some input on how the eigenvalues of the covariance matrix relate to the ellipses would give better intuition.
That’s a good point to discuss. I missed that in the video, I suppose.
Let’s consider a diagonal covariance matrix (i.e. the off-diagonal values are zero). For this matrix the eigenvalues are just the variances of the individual variables. If it’s 2x2, we get 2 eigenvalues and corresponding eigenvectors. The eigenvectors act as the 2 principal components, pointing along the directions of variance (i.e. the major and minor axes), and the spread along each axis is given by the respective eigenvalue. A diagonal covariance means an axis-aligned ellipse in 2D (an ellipsoid, etc., in higher dimensions). For a non-diagonal matrix the same concept holds, but the ellipse becomes tilted (non-axis-aligned) because of the covariance between variables, and the eigenvalues must be computed explicitly rather than read off the diagonal.
At @11.55 I think the whole term is supposed to be r1 squared and not the square root of r1.
Btw wonderful explanation 👍
what a great video
Great video thank you so much sir
Glad you liked it.
@@TechVizTheDataScienceGuy Sir, I'm doing an MTech in a machine learning related field at IIT Delhi... your videos are very helpful🙏
Nice content
nice explanation
Thanks Prabir.
nice
👌
👍👍