Deep Learning (CS7015): Lec 6.4 Principal Component Analysis and its Interpretations

  • Published: 7 Nov 2024

Comments • 17

  • @shivampratap8863
    @shivampratap8863 2 years ago +3

    Sir, what a way to teach; I enjoyed it so much. If all teachers were like this, what a joy studying would be. Thank you

  • @Kishan31468
    @Kishan31468 3 years ago +3

    God or what! Thank you sir, I can't express how amazing it is. 🙏🙏

  • @videojonkie
    @videojonkie 7 months ago

    Really good explanation, thank you!

  • @karanwadekar3392
    @karanwadekar3392 4 months ago

    At 15:02, should the matrix product be written as X transpose times P, giving us X hat transpose?
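
A hedged reading of the question above, assuming the data points are stored as the columns of X and the principal directions as the columns of P (the opposite convention is equally common, and neither is confirmed here): the two ways of writing the projection are transposes of one another,

```latex
\hat{X} = P^{\top} X
\qquad\Longleftrightarrow\qquad
\hat{X}^{\top} = X^{\top} P
```

so "X transpose times P" is what one gets by transposing the projected matrix under that convention. If the data points are instead the rows of X, the same projection is simply written as X times P.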

  • @raghavmoar3211
    @raghavmoar3211 4 years ago +4

    Excellent derivation of why we use the covariance matrix for PCA

  • @Recordingization
    @Recordingization 4 years ago +2

    Good description of PCA

  • @kishorab
    @kishorab 3 years ago +1

    The covariance matrix is given by XX', but the lecture says X'X, so I think this interpretation is incorrect.

    • @kishorab
      @kishorab 3 years ago +5

      I did further reading and found that X'X is also a covariance matrix, so the interpretation makes complete sense to me now.

    • @shubhamparida2584
      @shubhamparida2584 2 years ago +1

      It actually depends on how you define X, i.e. whether the data points are its rows or its columns.
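
A minimal NumPy sketch of the point made in this thread, assuming the standard biased estimator on mean-centered data (all names below are illustrative): whether the covariance matrix is written X'X or XX' depends only on whether the data points sit in the rows or the columns of X.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3                        # n data points, d features

X_rows = rng.normal(size=(n, d))     # convention 1: each ROW is a data point
X_rows -= X_rows.mean(axis=0)        # center each feature

# With rows as data points, the d x d covariance estimate is (1/n) * X'X.
C_from_XtX = X_rows.T @ X_rows / n

X_cols = X_rows.T                    # convention 2: each COLUMN is a data point
# With columns as data points, the same d x d matrix is (1/n) * XX'.
C_from_XXt = X_cols @ X_cols.T / n

print(np.allclose(C_from_XtX, C_from_XXt))                               # True
print(np.allclose(C_from_XtX, np.cov(X_rows, rowvar=False, bias=True)))  # True
```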

  • @ruturajjadhav8905
    @ruturajjadhav8905 3 years ago +3

    In the covariance matrix of the transformed data (X̂), why should the covariance (off-diagonal entries) be zero? Anyone?

    • @pratikkumarbulani8903
      @pratikkumarbulani8903 3 years ago +2

      Consider some columns y and z (after transforming the original data). These columns should be uncorrelated, i.e. the covariance between them should be zero. Check this same lecture starting from 6:50 and you will understand it.

    • @ankitgupta8797
      @ankitgupta8797 3 years ago +1

      Because we want the new features to be uncorrelated.
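
A short sketch of why those off-diagonal entries vanish, assuming the usual eigendecomposition formulation of PCA (the data and names below are illustrative, not the lecture's exact notation): projecting onto the eigenvectors of the covariance matrix diagonalizes it, so the covariance between any two new features is numerically zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Correlated 2-D data: the two original features have non-zero covariance.
X = rng.normal(size=(n, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])
X -= X.mean(axis=0)

C = X.T @ X / n                     # covariance of the original features
eigvals, P = np.linalg.eigh(C)      # columns of P: principal directions

X_hat = X @ P                       # transformed data
C_hat = X_hat.T @ X_hat / n         # covariance of the transformed features

print(np.round(C, 3))               # off-diagonal entry is non-zero
print(np.round(C_hat, 3))           # (numerically) diagonal: uncorrelated features
```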

  • @utkarshsrivastava6021
    @utkarshsrivastava6021 5 years ago +1

    What if the means of the columns are not zero?

    • @Recordingization
      @Recordingization 4 years ago

      I think you need to center the data first, $x_{ki} - \mu_k = x_{ki} - \frac{1}{m}\left(x_{k1} + x_{k2} + \dots + x_{km}\right)$, and then $C_{kk'} = \frac{1}{m}\sum_{i}\left(x_{ki} - \mu_k\right)\left(x_{k'i} - \mu_{k'}\right)$.

    • @Recordingization
      @Recordingization 4 years ago

      Please watch the video at 19:19.

    • @ruturajjadhav8905
      @ruturajjadhav8905 3 years ago

      It will always be zero, because you standardize (mean-center) the data first.
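
A minimal sketch of the answer given in this thread, assuming each column of X holds one feature and the biased 1/m estimator: subtract each column's mean first, and the resulting covariance matches NumPy's built-in estimate even when the raw column means are far from zero.

```python
import numpy as np

rng = np.random.default_rng(2)
m, d = 200, 3

# Raw data whose column means are deliberately far from zero.
X = rng.normal(loc=5.0, scale=2.0, size=(m, d))

mu = X.mean(axis=0)                 # per-feature means mu_k
Xc = X - mu                         # centering: x_ki - mu_k

# C_kk' = (1/m) * sum_i (x_ki - mu_k) (x_k'i - mu_k')
C = Xc.T @ Xc / m
print(np.allclose(C, np.cov(X, rowvar=False, bias=True)))   # True
```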

  • @RahulMadhavan
    @RahulMadhavan 4 years ago +2

    16:18 - the calculations look off