The professor's lectures said feature reduction can be of two types: feature selection and feature extraction. This tutorial refers to feature extraction as feature reduction, doesn't it?
Yes, here they mean the same thing. Both aim to reduce the number of features that are redundant or highly correlated with one another; in other words, we shrink the feature space by extracting the valuable features from it.
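For example, here is a minimal sketch of feature extraction with PCA, a common choice for this (the data below is made up purely for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 100 samples with 5 partially correlated features
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(100, 3))])

# Extract 2 new features (principal components) from the 5 original ones
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                 # (100, 2): the reduced feature space
print(pca.explained_variance_ratio_)  # variance each extracted feature retains
```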
Finally understood k-nearest neighbors. Thanks!
In the eigenvalue and eigenvector calculation, shouldn't we derive the covariance matrix before solving |A - λI| = 0?
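Yes, in PCA the matrix A in |A - λI| = 0 is the covariance matrix of the centered data, so it has to be computed first. A minimal NumPy sketch of that order of operations (the data matrix X is made up for illustration):

```python
import numpy as np

# Hypothetical data matrix: 5 samples, 3 features (for illustration only)
X = np.array([[2.5, 2.4, 1.1],
              [0.5, 0.7, 0.9],
              [2.2, 2.9, 1.0],
              [1.9, 2.2, 1.3],
              [3.1, 3.0, 0.8]])

# Step 1: center the data and derive the covariance matrix A
X_centered = X - X.mean(axis=0)
A = np.cov(X_centered, rowvar=False)  # 3x3 covariance matrix

# Step 2: only now solve |A - lambda*I| = 0, i.e. eigen-decompose A
eigenvalues, eigenvectors = np.linalg.eigh(A)  # eigh, since A is symmetric

# Sort by descending eigenvalue so the first component explains the most variance
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
print(eigenvalues)
```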
33:02 The K value should always be an odd number; with two classes, an odd K avoids tied votes among the neighbors.
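A minimal scikit-learn sketch of this, assuming a binary classification task (the toy data is generated just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy binary-classification data (made up for illustration)
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Odd K (e.g. 5): with two classes, the neighbor vote can never tie
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```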
Where can we get the lecture material? The GitHub link is not working.
Thanks, sir.