k-Nearest Neighbour

  • Published: 26 Jul 2016

Comments • 44

  • @m00ndr0p
    @m00ndr0p 5 years ago +1

    Thank you for this great video. It has helped me immensely!

  • @princejainist
    @princejainist 5 years ago +17

    NPTEL is serving the nation in the truest sense; we should learn from them.

  • @mayanksj
    @mayanksj 6 years ago +16

    Machine Learning by Prof. Sudeshna Sarkar
    Basics
    1. Foundations of Machine Learning (ruclips.net/video/BRMS3T11Cdw/видео.html)
    2. Different Types of Learning (ruclips.net/video/EWmCkVfPnJ8/видео.html)
    3. Hypothesis Space and Inductive Bias (ruclips.net/video/dYMCwxgl3vk/видео.html)
    4. Evaluation and Cross-Validation (ruclips.net/video/nYCAH8b5AQ0/видео.html)
    5. Linear Regression (ruclips.net/video/8PJ24SrQqy8/видео.html)
    6. Introduction to Decision Trees (ruclips.net/video/FuJVLsZYkuE/видео.html)
    7. Learning Decision Trees (ruclips.net/video/7SSAA1CE8Ng/видео.html)
    8. Overfitting (ruclips.net/video/y6SpA2Wuyt8/видео.html)
    9. Python Exercise on Decision Tree and Linear Regression (ruclips.net/video/lIBPIhB02_8/видео.html)
    Recommendations and Similarity
    10. k-Nearest Neighbours (ruclips.net/video/PNglugooJUQ/видео.html)
    11. Feature Selection (ruclips.net/video/KTzXVnRlnw4/видео.html)
    12. Feature Extraction (ruclips.net/video/FwbXHY8KCUw/видео.html)
    13. Collaborative Filtering (ruclips.net/video/RVJV8VGa1ZY/видео.html)
    14. Python Exercise on kNN and PCA (ruclips.net/video/40B8D9OWUf0/видео.html)
    Bayes
    16. Bayesian Learning (ruclips.net/video/E3l26bTdtxI/видео.html)
    17. Naive Bayes (ruclips.net/video/5WCkrDI7VCs/видео.html)
    18. Bayesian Network (ruclips.net/video/480a_2jRdK0/видео.html)
    19. Python Exercise on Naive Bayes (ruclips.net/video/XkU09vE56Sg/видео.html)
    Logistic Regression and SVM
    20. Logistic Regression (ruclips.net/video/CE03E80wbRE/видео.html)
    21. Introduction to Support Vector Machine (ruclips.net/video/gidJbK1gXmA/видео.html)
    22. The Dual Formulation (ruclips.net/video/YOsrYl1JRrc/видео.html)
    23. SVM Maximum Margin with Noise (ruclips.net/video/WLhvjpoCPiY/видео.html)
    24. Nonlinear SVM and Kernel Function (ruclips.net/video/GcCG0PPV6cg/видео.html)
    25. SVM Solution to the Dual Problem (ruclips.net/video/Z0CtYBPR5sA/видео.html)
    26. Python Exercise on SVM (ruclips.net/video/w781X47Esj8/видео.html)
    Neural Networks
    27. Introduction to Neural Networks (ruclips.net/video/zGQjh_JQZ7A/видео.html)
    28. Multilayer Neural Network (ruclips.net/video/hxpGzAb-pyc/видео.html)
    29. Neural Network and Backpropagation Algorithm (ruclips.net/video/T6WLIbOnkvQ/видео.html)
    30. Deep Neural Network (ruclips.net/video/pLPr4nJad4A/видео.html)
    31. Python Exercise on Neural Networks (ruclips.net/video/kTbY20xlrbA/видео.html)
    Computational Learning Theory
    32. Introduction to Computational Learning Theory (ruclips.net/video/8hJ9V9-f2J8/видео.html)
    33. Sample Complexity: Finite Hypothesis Space (ruclips.net/video/nm4dYYP-SJs/видео.html)
    34. VC Dimension (ruclips.net/video/PVhhLKodQ7c/видео.html)
    35. Introduction to Ensembles (ruclips.net/video/nelJ3svz0_o/видео.html)
    36. Bagging and Boosting (ruclips.net/video/MRD67WgWonA/видео.html)
    Clustering
    37. Introduction to Clustering (ruclips.net/video/CwjLMV52tzI/видео.html)
    38. K-means Clustering (ruclips.net/video/qg_M37WGKG8/видео.html)
    39. Agglomerative Clustering (ruclips.net/video/NCsHRMkDRE4/видео.html)
    40. Python Exercise on K-means Clustering (ruclips.net/video/qs7vES46Rq8/видео.html)
    Tutorial I (ruclips.net/video/uFydF-g-AJs/видео.html)
    Tutorial II (ruclips.net/video/M6HdKRu6Mrc/видео.html)
    Tutorial III (ruclips.net/video/Ui3h7xoE-AQ/видео.html)
    Tutorial IV (ruclips.net/video/3m7UJKxU-T8/видео.html)
    Tutorial VI (ruclips.net/video/b3Vm4zpGcJ4/видео.html)
    Solution to Assignment 1 (ruclips.net/video/qqlAeim0rKY/видео.html)

  • @abhijeetsharma5715
    @abhijeetsharma5715 3 years ago +4

    36:07 In the way it is shown in the slides, I think that if the kernel width is large, we are effectively considering a smaller region (instead of a larger one), since the weights will be more damped/smaller. Correct me if I am wrong.
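
    With the usual Gaussian-kernel weighting w(d) = exp(-d^2 / (2*sigma^2)), a larger width sigma makes the weights decay more slowly, so distant neighbours keep MORE weight and the effective region grows; the reading above would only hold if the slides put the width parameter in the numerator of the exponent. A minimal NumPy sketch of the usual convention (the distances and bandwidths are illustrative, not from the lecture):

    ```python
    import numpy as np

    # Gaussian kernel weight for a neighbour at distance d, bandwidth sigma.
    # (Assumes the common parameterisation; the slides may differ.)
    def kernel_weight(d, sigma):
        return np.exp(-d**2 / (2 * sigma**2))

    d = np.array([0.5, 1.0, 2.0])      # distances to three neighbours
    for sigma in (0.5, 2.0):
        print(sigma, kernel_weight(d, sigma).round(3))
    # sigma = 0.5 -> [0.607 0.135 0.   ]  the distant neighbour is ignored
    # sigma = 2.0 -> [0.969 0.882 0.607]  the distant neighbour still counts
    ```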

  • @BarqKadapavi
    @BarqKadapavi 3 years ago

    Thank you Madam!

  • @deepakkumarshukla
    @deepakkumarshukla 4 years ago

    Thank you, Ma'am!

  • @krzb6725
    @krzb6725 1 year ago

    Thank you, Prof. Sudeshna Sarkar & Anirban Santara!

  • @amrutgirase5821
    @amrutgirase5821 5 years ago +4

    Hi Ma'am,
    You are such a great teacher; your teaching method has helped me a lot.
    Thank you so much for making such great videos.

  • @nitinkulkarni02
    @nitinkulkarni02 6 years ago

    Great Session

  • @skyalrazzaq6160
    @skyalrazzaq6160 6 years ago

    Great Lecture

  • @dhoomketu731
    @dhoomketu731 6 years ago

    Your teaching methodology is simply amazing ma'am.

  • @shubhgajjar8782
    @shubhgajjar8782 9 months ago +1

    Nice teaching

  • @sruthi3408
    @sruthi3408 4 years ago

    A very simple and succinct explanation. Thank you very much, Madam.

  • @Rizwankhan2000
    @Rizwankhan2000 3 years ago

    Lectures on Machine Learning in English / Hindi: ruclips.net/p/PLGeIxG41Dh351Tapkofz0WktplooH5C6s

  • @dhruvsanghvi5562
    @dhruvsanghvi5562 10 months ago

    Great instructor, poor cameraman.

  • @Creative_arts_center
    @Creative_arts_center 2 years ago

    Machine learning made easy with her

  • @zulfiqarali-zq1rg
    @zulfiqarali-zq1rg 4 years ago

    Thank you, dear Ma'am.

  • @jaytube277
    @jaytube277 6 years ago

    Should we always choose an odd value for K? We classify x based on the class to which the majority of its nearest neighbors belong, so if K is even there is a possibility that equal numbers of neighbors belong to multiple classes.

    • @HimanshuKumar-cq8zq
      @HimanshuKumar-cq8zq 5 years ago

      If you have 2 classes, choose an odd k; more generally, k should not be a multiple of the number of classes.
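
      A tiny sketch of the tie problem, with hypothetical neighbour labels (plain Python):

      ```python
      from collections import Counter

      # k = 4, two classes: an even k can split the vote evenly.
      print(Counter(['A', 'A', 'B', 'B']).most_common())  # [('A', 2), ('B', 2)] -- tie
      # k = 3, two classes: an odd k guarantees a strict majority.
      print(Counter(['A', 'A', 'B']).most_common(1))      # [('A', 2)]
      # With three classes an odd k is not enough: k = 3 can still tie 1-1-1,
      # which is why k should not be a multiple of the number of classes.
      print(Counter(['A', 'B', 'C']).most_common())       # all counts equal
      ```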

  • @anugyamishra4893
    @anugyamishra4893 5 years ago

    I have a question: for a new instance we have only the x value, so basically an (x, 0)-type point, and it will always be nearest to the lowest y at that x, or to some (x2, 0)-type neighbour.

    • @anugyamishra4893
      @anugyamishra4893 5 years ago

      Please let me know if my question is unclear. The main point is: we don't have y, so how is the distance calculated?

    • @HimanshuKumar-cq8zq
      @HimanshuKumar-cq8zq 5 years ago +1

      You are a little confused, and that's totally okay. For a new x (instance), of course we don't have y; that's exactly what we are trying to find. Applying kNN, say with k = 3, we find the 3 nearest points (using Euclidean distance, computed in x-space only), and the majority class among those points tells us what y should be.
      You can also think of it like this: if the 3 nearest neighbours have y = 1, 2, 2, then the majority is 2, so the y for the new instance x will also belong to the majority class, which is 2.
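
      A minimal sketch of exactly this procedure, assuming plain NumPy and a made-up toy dataset (the data and function name are illustrative, not from the lecture):

      ```python
      import numpy as np
      from collections import Counter

      def knn_predict(X_train, y_train, x_new, k=3):
          """Majority vote of the k nearest training points. Distances are
          computed in feature (x) space only -- the y of the new point is
          never needed; it is what we predict."""
          dists = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distances
          nearest = np.argsort(dists)[:k]                  # indices of k closest
          return Counter(y_train[nearest]).most_common(1)[0][0]

      # Toy data: two features, classes 1 and 2.
      X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
      y_train = np.array([1, 1, 2, 2])
      print(knn_predict(X_train, y_train, np.array([5.1, 5.0])))  # -> 2
      ```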

  • @AdityaSingh-mx8lw
    @AdityaSingh-mx8lw 7 years ago +1

    Thank you, Ma'am, you teach very well, but the lectures are not in sequence and some are missing.

  • @ashishshrma
    @ashishshrma 4 years ago +3

    How do we get a training error here, when we're not even training?

    • @volleysmackz5960
      @volleysmackz5960 8 months ago

      I guess you just classify each training point using its k nearest neighbours in the training set itself. So for 1-nearest-neighbour the training error is 0, since each point is closest to itself and the prediction is trivially accurate!
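
      A short sketch of that idea, assuming the 0/1 classification training error and made-up 1-D data:

      ```python
      import numpy as np
      from collections import Counter

      def training_error(X, y, k):
          """Fraction of training points misclassified when each point is
          predicted from its k nearest neighbours in the training set itself
          (the point included, so its own distance of 0 always ranks first)."""
          errors = 0
          for i in range(len(X)):
              d = np.linalg.norm(X - X[i], axis=1)
              nearest = np.argsort(d)[:k]   # for k = 1 this is the point itself
              pred = Counter(y[nearest]).most_common(1)[0][0]
              errors += pred != y[i]
          return errors / len(X)

      X = np.array([[0.0], [0.1], [1.0], [1.1]])
      y = np.array([0, 0, 1, 1])
      print(training_error(X, y, k=1))  # 0.0 -- each point is its own neighbour
      ```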

  • @OmkaarMuley
    @OmkaarMuley 6 years ago +1

    the way she pronounces ALGORITHM is funny! :D :D

    • @705pratik9
      @705pratik9 2 years ago +2

      Yep. Because you don't know how to pronounce it.
      Ma'am is speaking the finest English. Go check out her profile and you'll know how good she is.

    • @OmkaarMuley
      @OmkaarMuley 2 years ago

      @@705pratik9 okay!

    • @705pratik9
      @705pratik9 2 years ago

      @@OmkaarMuley Yep take care man. Always wear a mask.

  • @rajnishkumarrobin7055
    @rajnishkumarrobin7055 6 years ago +2

    What does "classes are spherical" mean?

    • @kingofshorekishore
      @kingofshorekishore 3 years ago +2

      It means the distribution of the data, in this case our training data, has a spherical form: each class is spread roughly equally in every direction around its centre in the x-y plane.
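
      A hypothetical illustration of the difference, using NumPy to generate the blobs (all parameters are made up):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # "Spherical" class: an isotropic Gaussian blob -- the same spread in
      # every direction around its centre, so Euclidean distance to neighbours
      # reflects class membership well. This is the setting where plain kNN shines.
      spherical = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))

      # Non-spherical class: elongated along one axis. Here raw Euclidean
      # distance over-weights the stretched feature, so kNN needs feature
      # scaling or a different distance metric.
      elongated = rng.normal(loc=[6.0, 0.0], scale=[5.0, 0.2], size=(100, 2))
      ```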

  • @letslearnjava1753
    @letslearnjava1753 4 years ago

    Plot of decision boundaries using MATLAB:
    ruclips.net/video/uql5RbM9GHI/видео.html

  • @jaggis4914
    @jaggis4914 5 years ago +2

    You got even the heading of the video wrong, Prof. The algorithm is called k-Nearest Neighbors!

  • @prasanthvarmac3120
    @prasanthvarmac3120 4 years ago

    Textbook or PDF: can you send me a link for downloading your PDF materials, please, Ma'am? They would be useful for my project work.

    • @nishah4058
      @nishah4058 2 years ago

      You can download these lecture transcripts from the NPTEL site.

  • @swathikumari7073
    @swathikumari7073 6 years ago +1

    Please try to give numbering to the lectures.

    • @shubhampatial2278
      @shubhampatial2278 6 years ago +1

      Okay, Swathi.

    • @swathikumari7073
      @swathikumari7073 6 years ago

      Are these videos published by you?

    • @shubhampatial2278
      @shubhampatial2278 6 years ago

      Swathi, what do you mean by numbering?
      You mean like the alphabet, the way kids start learning in order?
      I'm sorry, please don't mind, but that's how it is, and the videos are not published by me.

    • @swathikumari7073
      @swathikumari7073 6 years ago +2

      It does not mean that... I mean, try to publish them in an appropriate order... it is easier for others to understand that way... thank you.

    • @swathikumari7073
      @swathikumari7073 6 years ago +1

      No thanks