Sir, what a lesson you have taught, I loved it. If all teachers were like this, how enjoyable studying would be. Thank you
God, or what! Thank you sir, I can't express how amazing it is. 🙏🙏
Really good explanation, thank you!
At 15:02: should the matrix product be written as X transpose times P, giving us X hat transpose?
excellent derivation of why we use covariance matrix for PCA
Good description of PCA
The covariance matrix is given by XX', but the lecture says X'X. So I think this interpretation is incorrect.
I did further reading and found that X'X is also a covariance matrix. So the interpretation makes complete sense to me now.
it actually depends on how you define X.
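To see why it depends on the convention: a minimal NumPy sketch (the data here is made up for illustration). If samples are rows of X and features are columns, then after centering the columns, (1/m)·X'X is the feature-by-feature covariance matrix, matching what `np.cov` computes; with samples as columns, XX' plays the same role instead.

```python
import numpy as np

# Hypothetical data: m = 5 samples (rows), n = 3 features (columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# Center each column so every feature has zero mean.
Xc = X - X.mean(axis=0)

m = Xc.shape[0]
C = Xc.T @ Xc / m  # n x n covariance matrix: the X'X convention

# np.cov with rowvar=False (features in columns) and bias=True
# (divide by m, not m-1) uses the same convention.
assert np.allclose(C, np.cov(Xc, rowvar=False, bias=True))
```

So neither XX' nor X'X is "the" covariance matrix on its own; it depends on whether samples sit in rows or columns of X.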
In the covariance matrix of the transformed data (X hat), why should the covariance be zero? Anyone?
Consider two columns y and z (after transforming the original data). These columns should be uncorrelated, i.e. the covariance between them should be zero. Check this same lecture starting from 6:50, you will understand it.
because we want the new features to be uncorrelated
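A small NumPy sketch of that point (synthetic data, variable names my own): projecting the centered data onto the eigenvectors of its covariance matrix, which is the PCA transform discussed in the lecture, produces transformed features whose covariance matrix is diagonal, so every pairwise covariance is (numerically) zero.

```python
import numpy as np

# Synthetic data with correlated features: 200 samples, 3 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))
Xc = X - X.mean(axis=0)                 # center the columns

C = Xc.T @ Xc / Xc.shape[0]             # covariance of the original data
eigvals, P = np.linalg.eigh(C)          # columns of P are eigenvectors of C

X_hat = Xc @ P                          # transformed (rotated) data
C_hat = X_hat.T @ X_hat / X_hat.shape[0]

# C_hat = P' C P = diag(eigvals): the off-diagonal covariances vanish,
# i.e. the new features are uncorrelated.
assert np.allclose(C_hat, np.diag(eigvals))
```

The diagonal entries of `C_hat` are the eigenvalues, i.e. the variances along the principal components.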
What if the mean of the columns is not zero?
I think you need to use x_ki − mean(x_k) = x_ki − (1/m)(x_k1 + x_k2 + ... + x_km), and then C_kk' = (1/m) Σ_i (x_ki − u_k)(x_k'i − u_k').
Please watch the video at 19:19.
It will always be zero, because you standardize the data.
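A tiny worked example of the centering step in the formula above (the numbers are made up): subtract each column's mean u_k before forming the covariance, so the formula works even when the raw columns are not zero-mean.

```python
import numpy as np

# Hypothetical feature matrix whose columns do NOT have zero mean.
X = np.array([[2.0, 10.0],
              [4.0, 14.0],
              [6.0, 18.0]])

u = X.mean(axis=0)   # column means u_k = [4, 14]
Xc = X - u           # x_ki - u_k, the centering step
m = X.shape[0]

# C_kk' = (1/m) * sum_i (x_ki - u_k)(x_k'i - u_k')
C = Xc.T @ Xc / m    # → [[8/3, 16/3], [16/3, 32/3]]
```

After centering, the same X'X recipe from the lecture applies unchanged.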
16:18 - calculations look off