Finally, videos about ML explained by somebody whose English I can understand.
Nice explanation. Can you do more videos on other ML algorithms?
This is the best video I have ever watched regarding KNNs. I just have one question though: what exactly did you mean when you said 'learned feature scaling'?
Thank you Haswanth! Not to be too coy, but we walk through the details of this in the course (e2eml.school/221). I didn't include it in the video because 1) it's a pretty specialized rabbit hole and 2) there is no standard way to do it that I'm aware of. In the course we pull together a workable method that resembles Powell's Method for optimization, which boils down to iteratively making small changes to the weights and keeping the changes that result in an improvement.
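To make the idea concrete, here is a minimal sketch of that kind of iterative weight search: randomly nudge one feature weight at a time and keep any change that doesn't hurt validation accuracy. This is an illustration in the spirit of the description above, not the course's exact method; all function and variable names here are my own.

```python
import numpy as np

def knn_accuracy(weights, X_train, y_train, X_val, y_val, k=3):
    """Validation accuracy of KNN using a weighted Euclidean distance:
    each feature axis is stretched by its weight before measuring distance."""
    Xw_train = X_train * weights
    Xw_val = X_val * weights
    correct = 0
    for x, y in zip(Xw_val, y_val):
        dists = np.linalg.norm(Xw_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]  # labels of k closest points
        if np.bincount(nearest).argmax() == y:    # majority vote
            correct += 1
    return correct / len(y_val)

def tune_weights(X_train, y_train, X_val, y_val, n_iters=100, step=0.1, seed=0):
    """Coordinate-search weight tuner (loosely Powell's-Method-flavored):
    make small random changes to one weight and keep improvements."""
    rng = np.random.default_rng(seed)
    weights = np.ones(X_train.shape[1])
    best = knn_accuracy(weights, X_train, y_train, X_val, y_val)
    for _ in range(n_iters):
        i = rng.integers(X_train.shape[1])
        trial = weights.copy()
        trial[i] = max(0.0, trial[i] + rng.choice([-step, step]))
        score = knn_accuracy(trial, X_train, y_train, X_val, y_val)
        if score >= best:  # keep changes that help (or at least don't hurt)
            weights, best = trial, score
    return weights, best
```

Because the accuracy can only stay the same or improve at each accepted step, the tuned weights never score worse on the validation set than uniform weights.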
Very clearly explained. And the examples were very well and appropriately chosen. Thanks Brandon!!
Thanks Waseem!
Very very useful
Thank you
Great video, Love the comparison to GPT ;)
Hot damn this video is soo goood
Nice. Thank you.
WOW, you are LDS!
Maybe it's not trainable in the conventional sense, but you still gotta tune the hyperparameters to get more accurate results, which could be interpreted as a form of training.
A form of tuning
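A quick sketch of what that tuning looks like in practice: pick the k with the best accuracy on a held-out validation set. This is a generic illustration, with hypothetical names, of the hyperparameter search the comment above describes.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """Majority-vote KNN prediction for each query point."""
    preds = []
    for x in X_query:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

def choose_k(X_train, y_train, X_val, y_val, candidates=(1, 3, 5, 7)):
    """Return the candidate k with the highest validation accuracy."""
    scores = {k: (knn_predict(X_train, y_train, X_val, k) == y_val).mean()
              for k in candidates}
    return max(scores, key=scores.get)
```

The model itself never "learns" weights, but the chosen k is still fit to data, which is why calling this a form of training is defensible.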
How do you feature-scale categorical variables?
If you first convert the categorical feature to a one-hot representation (say, 0 for indented and 1 for rounded) then you can choose a scaling factor to multiply that by. That's a trick we step through in detail in Course 221 (e2eml.school/221).
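A minimal sketch of that trick: encode the two-valued categorical feature as 0/1, then multiply by a scaling factor that controls how much the category matters in the distance calculation. The "indented"/"rounded" values come from the comment above; the scale value here is just illustrative.

```python
import numpy as np

def encode_shape(shapes, scale=2.5):
    """Map 'indented' to 0 and 'rounded' to 1, then apply a scaling
    factor so the category carries a chosen weight in KNN distance."""
    mapping = {"indented": 0.0, "rounded": 1.0}
    return np.array([mapping[s] for s in shapes]) * scale

feature = encode_shape(["indented", "rounded", "rounded"])
# With scale=2.5, two points that differ in shape sit 2.5 apart on
# this axis; raising or lowering the scale tunes the category's
# influence relative to the numeric features.
```

For a categorical feature with more than two values, the same idea applies with a full one-hot vector per value, each column getting its own scaling factor.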