How k-nearest neighbors works

  • Published: 1 Oct 2024

Comments • 18

  • @3rdman99 · 1 year ago +3

    Finally videos about ML explained by somebody whose English I can understand.

  • @youtubecurious9000 · 3 years ago +6

    Nice explanation. Can you do more videos on other ML algorithms?

  • @haswanthaekula7656 · 3 years ago +6

    This is the best video I have ever watched on KNNs. I just have one question, though: what exactly did you mean by 'learned feature scaling'?

    • @BrandonRohrer · 3 years ago +2

      Thank you Haswanth! Not to be too coy, but we walk through the details of this in the course (e2eml.school/221). I didn't include it in the video because 1) it's a pretty specialized rabbit hole and 2) there is no standard way to do it that I'm aware of. In the course we pull together a workable method that resembles Powell's Method for optimization, which boils down to iteratively making small changes to the weights and keeping the changes that result in an improvement.
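
      The iterative search Brandon describes — nudge each feature weight, keep the nudge only if validation accuracy improves — can be sketched roughly like this. This is a minimal illustration on synthetic data; the dataset, weights, and step sizes are all invented here, and it is not the exact method from Course 221.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query point by majority vote of its k nearest neighbors."""
    d = np.linalg.norm(X_train[None, :, :] - X_query[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

def val_accuracy(weights, X_tr, y_tr, X_val, y_val):
    """Accuracy on held-out data after scaling every feature by its weight."""
    return float((knn_predict(X_tr * weights, y_tr, X_val * weights) == y_val).mean())

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
# Feature 0 carries the label; feature 1 is pure noise on a larger scale,
# so unweighted Euclidean distance is dominated by the useless feature.
X = np.column_stack([y + 0.3 * rng.normal(size=n), 3.0 * rng.normal(size=n)])
X_tr, y_tr, X_val, y_val = X[:100], y[:100], X[100:], y[100:]

weights = np.ones(2)
best = val_accuracy(weights, X_tr, y_tr, X_val, y_val)
for _ in range(30):
    for i in range(2):
        for step in (2.0, 0.5):        # try scaling each weight up and down
            trial = weights.copy()
            trial[i] *= step
            score = val_accuracy(trial, X_tr, y_tr, X_val, y_val)
            if score > best:           # keep only changes that help
                weights, best = trial, score
print("learned weights:", weights, "validation accuracy:", best)
```

      The search drives the noisy feature's weight down relative to the informative one, which is the point of learning the scaling rather than guessing it.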

  • @waseemahmed4995 · 1 year ago +2

    Very clearly explained, and the examples were very well chosen. Thanks Brandon!!

  • @zainulabideen_1 · 1 year ago +1

    Very very useful

  • @zainulabideen_1 · 1 year ago +1

    Thank you

  • @PanagiotisFoufoutis · 3 years ago +1

    Great video, love the comparison to GPT ;)

  • @victoraguirre7486 · 7 months ago

    Hot damn this video is soo goood

  • @RajaSekharaReddyKaluri · 3 years ago +1

    Nice. Thank you.

  • @Justin-zw1hx · 1 year ago

    WOW, you are LDS!

  • @tiagotiagot · 2 years ago

    Maybe it's not trainable in the conventional sense, but you still have to tune the hyperparameters to get more accurate results, which could be interpreted as a form of training.

    • @w花b · 2 years ago +1

      A form of tuning
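
      For kNN, the tuning discussed above mostly means picking k. A minimal sketch of choosing k by validation accuracy — the data, the candidate values, and the helper function are all illustrative, not from the video:

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_q, k):
    # Majority vote among the k nearest training points (Euclidean distance).
    d = np.linalg.norm(X_tr[None, :, :] - X_q[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_tr[nearest].mean(axis=1) > 0.5).astype(int)

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)
X = y[:, None] + 0.5 * rng.normal(size=(n, 2))  # two noisy copies of the label
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

# No training step at all: just score each candidate k on held-out data
# and keep the one with the best validation accuracy.
scores = {k: float((knn_predict(X_tr, y_tr, X_val, k) == y_val).mean())
          for k in (1, 3, 5, 7, 9, 15)}
best_k = max(scores, key=scores.get)
print("best k:", best_k, "accuracy:", scores[best_k])
```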

  • @RajaSekharaReddyKaluri · 3 years ago +1

    How do you feature-scale categorical variables?

    • @BrandonRohrer · 3 years ago

      If you first convert the categorical feature to a one-hot representation (say, 0 for indented and 1 for rounded) then you can choose a scaling factor to multiply that by. That's a trick we step through in detail in Course 221 (e2eml.school/221).
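
      As a tiny sketch of that reply — the feature name, the values, and the 2.5 weight are all invented for illustration:

```python
import numpy as np

# Hypothetical categorical feature: leaf edge shape.
edges = np.array(["indented", "rounded", "rounded", "indented"])

# Binary one-hot encoding, matching the reply: 0 for indented, 1 for rounded.
encoded = (edges == "rounded").astype(float)

# The scaling factor sets how much this feature counts in the distance
# calculation relative to the numeric features. 2.5 is an arbitrary example,
# not a recommended value.
edge_weight = 2.5
scaled = edge_weight * encoded
print(scaled)
```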