Complete Unsupervised Machine Learning Tutorials In Hindi- K Means,DBSCAN, Hierarchical Clustering

  • Published: 1 Oct 2024

Comments • 55

  • @krishnaikhindi
    @krishnaikhindi  1 year ago +25

    Don't forget to subscribe to the channel for more such videos :)

    • @umaprasadsutar2658
      @umaprasadsutar2658 1 year ago +2

      Hello Sir, I wanted some help on ONA, where or which course is available to learn ONA.

    • @fahadshaikh3786
      @fahadshaikh3786 1 year ago +3

      Sir, please complete the statistics playlist

    • @naturalworld2863
      @naturalworld2863 1 year ago +3

      Sir, I want to join the PW 3500-rupee data science course, but this course is not visible on the PW app

    • @shaikmubashira5836
      @shaikmubashira5836 1 year ago +1

      Sir, please complete the stats playlist 😕

    • @flixgpt
      @flixgpt 7 months ago

      How can a(i)/b(i) be small when a(i) is big? Please don't put up a wrong explanation; it misleads and creates a lot of confusion. I would request you to please go through it and confirm, as the understanding gets completely twisted.

  • @sakshamarya2052
    @sakshamarya2052 1 year ago +25

    Correction at 1:08:06:
    only if b(i) > a(i) is it a good cluster

    • @anshikasharma6846
      @anshikasharma6846 11 months ago +1

      You are right.

    • @nightowl5136
      @nightowl5136 8 months ago +1

      Yes, I got confused because of this line 😅

    • @samarthparekh
      @samarthparekh 2 months ago

      Yeah bro, good point, I got confused as well
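The correction in this thread can be checked numerically. Below is a minimal, dependency-free sketch of the silhouette formula s(i) = (b(i) - a(i)) / max(a(i), b(i)); the `a_i`/`b_i` values are made up purely for illustration.

```python
def silhouette(a_i, b_i):
    """Silhouette coefficient for one point:
    a_i = mean distance to points in its own cluster,
    b_i = mean distance to points in the nearest other cluster."""
    return (b_i - a_i) / max(a_i, b_i)

# Good clustering: the point is much closer to its own cluster (b > a).
print(silhouette(a_i=0.5, b_i=4.0))  # 0.875, close to +1

# Poor clustering: the point is closer to another cluster (a > b).
print(silhouette(a_i=4.0, b_i=0.5))  # -0.875, close to -1
```

So a score near +1 requires b(i) > a(i), exactly as the correction says.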

  • @ArjunPaudel-q6o
    @ArjunPaudel-q6o 11 months ago +15

    The k-means++ technique is used to avoid the random initialization trap, or in other words, to initialize the centroids very far apart from each other so that all the data are grouped properly and the clustering becomes much more accurate.
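The seeding idea described above can be sketched in plain Python: each new centroid is sampled with probability proportional to the squared distance to the nearest centroid already chosen, which tends to spread the initial centroids far apart. This is an illustrative sketch, not scikit-learn's implementation (there you would simply pass `init='k-means++'` to `KMeans`); the point list is made up.

```python
import random

def kmeans_pp_init(points, k, rng=random.Random(0)):
    """Pick k initial centroids: the first uniformly at random, then each
    subsequent one with probability proportional to the squared distance
    to its nearest already-chosen centroid (D^2 weighting)."""
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        # Squared distance from each point to its nearest centroid so far.
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids)
              for p in points]
        # Weighted sampling: far-away points are far more likely to be picked.
        centroids.append(rng.choices(points, weights=d2, k=1)[0])
    return centroids

# Two well-separated blobs: k-means++ almost always seeds one centroid per blob.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(kmeans_pp_init(pts, k=2))
```

Because an already-chosen point has squared distance 0 to itself, it can never be picked twice, and the second seed is overwhelmingly likely to land in the other blob.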

  • @atifghori6529
    @atifghori6529 7 months ago +2

    Hello Mr Krish,
    I believe you made a mistake in the silhouette score section. If a(i) >>> b(i), this would mean that the intra-cluster distance is greater than the inter-cluster distance, which would further imply that the clustering is poor. In the video you have taken that as a case of good clustering. Same is true for a(i)

  • @bharatsoni9734
    @bharatsoni9734 2 months ago

    Seriously bhai!!! Studying is fun when teachers like this teach... the useless college teachers just waste their time chatting with the female faculty...

  • @sohanamitarathod928
    @sohanamitarathod928 1 year ago +3

    Hi Krish. Please share more transition stories 🥲
    Also emphasize the hiring process more. Many companies probably have live coding rounds. Please throw more light on this 🙏🏻

  • @letsrockwithnavin11
    @letsrockwithnavin11 4 months ago +1

    Time 1:08: if b(i) > a(i), then our cluster is good.
    This playlist is very helpful...
    Thanks Sir ❤❤❤❤❤❤❤

  • @GauravKumar-ot5kb
    @GauravKumar-ot5kb 2 months ago +1

    Mistake: a(i) < b(i) means a good cluster, but the opposite is said in the video at timestamp 1:05:00

  • @mdmusaddique_cse7458
    @mdmusaddique_cse7458 1 year ago +4

    As always, super crisp and dope explanations!

  • @hasiiabbasi
    @hasiiabbasi 1 year ago +2

    Sir please complete statistics playlist in Hindi

  • @rishabhchoudhary0
    @rishabhchoudhary0 3 months ago

    Can I get this notepad PDF???

  • @subhashishmitra3409
    @subhashishmitra3409 7 months ago +1

    Fantastic explanation. No deviated talks, just the crux. Love the way you explain. Great going !!!

  • @faizannaviwala163
    @faizannaviwala163 2 months ago

    Notes of the last few videos are missing

  • @Tech_Enthusiasts_Shubham
    @Tech_Enthusiasts_Shubham 1 year ago +1

    Sir, when will end-to-end data science projects come, with DVC and MLOps included? I really need it, sir. If possible, please bring it as soon as possible.

  • @letsrockwithnavin11
    @letsrockwithnavin11 4 months ago

    Time 1:16: when our centroids are initialized very near each other, our model isn't able to make good clusters. To overcome this problem we use K-Means++ initialization; after applying it, the centroids are located FAR from each other.
    Thanks Sir 💓💓💓

  • @bharatsoni9734
    @bharatsoni9734 2 months ago

    Dendrograms : A Dendrogram is a tree-like diagram used to visualize the relationship among clusters.
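The definition above can be made concrete with SciPy, assuming it is available: `linkage` performs the agglomerative merges and `dendrogram` builds the tree-like diagram (here with `no_plot=True` so no plotting backend is needed; the six sample points are made up).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Six 2-D points forming two obvious groups.
X = np.array([[0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10]])

# Ward linkage: each merge minimises the increase in within-cluster variance.
Z = linkage(X, method='ward')

# Each row of Z records one merge: (cluster_a, cluster_b, distance, new_size).
print(Z)

# The tree structure behind the diagram (no_plot=True skips matplotlib).
tree = dendrogram(Z, no_plot=True)
print(tree['ivl'])  # leaf order along the x-axis
```

With n points there are always n-1 merges, so `Z` has n-1 rows; cutting the tree at a chosen height yields the clusters.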

  • @ajitabh99935
    @ajitabh99935 1 month ago

    Great work you are doing, Krish. Please tell me about the whiteboard app that you use; it will be a great help

  • @shreyanbasuray1033
    @shreyanbasuray1033 1 month ago

    For -
    ` from sklearn.cluster import AgglomerativeClustering
    cluster=AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='ward')
    cluster.fit(pca_scaled) `
    You might now get this error -
    {
    "name": "TypeError",
    "message": "AgglomerativeClustering.__init__() got an unexpected keyword argument 'affinity'",
    "stack": "---------------------------------------------------------------------------
    TypeError Traceback (most recent call last)
    Cell In[16], line 2
    1 from sklearn.cluster import AgglomerativeClustering
    ----> 2 cluster=AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='ward')
    3 cluster.fit(pca_scaled)
    TypeError: AgglomerativeClustering.__init__() got an unexpected keyword argument 'affinity'"
    }
    Resolution -
    from sklearn.cluster import AgglomerativeClustering
    cluster = AgglomerativeClustering(n_clusters=2, linkage='ward')
    cluster.fit(pca_scaled)
    the 'ward' linkage implicitly uses Euclidean distance now (in newer scikit-learn versions the 'affinity' argument has been replaced by 'metric')

  • @rohitrohit6883
    @rohitrohit6883 11 months ago +1

    Hi, I am unable to get the PPT or notes of the classes.

  • @subhashishmitra3409
    @subhashishmitra3409 6 months ago

    Hello Krish, if possible can you record some sessions on the Hidden Markov Model and the Expectation-Maximization algorithm?

  • @ShivamVisnoi
    @ShivamVisnoi 4 months ago

    Sir, please share the board PDF as you shared in the ML playlist; it will be very helpful for us

  • @ChinmayMishra-g2f
    @ChinmayMishra-g2f 10 months ago

    score=silhouette_score(x_train,labels=kmeans.labels_)
    this line throws an error. Please guide
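The exact cause depends on the error message, but a common issue is computing the score before the model is fitted, or passing data whose rows don't match the labels. A minimal working sketch, assuming `x_train` is a 2-D feature array (a made-up two-blob array stands in for it here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy stand-in for x_train: two well-separated blobs in 2-D.
rng = np.random.default_rng(0)
x_train = np.vstack([rng.normal(0, 0.5, (20, 2)),
                     rng.normal(5, 0.5, (20, 2))])

# Fit first, so kmeans.labels_ exists and has one label per row of x_train.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x_train)

score = silhouette_score(x_train, labels=kmeans.labels_)
print(score)  # high (near 1) for well-separated blobs
```

If an error persists, check that `x_train` is the same array the model was fitted on and that it is a plain 2-D numeric array.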

  • @sagarchhoker8808
    @sagarchhoker8808 7 months ago +3

    Sir, at 38:23 you said that agglomerative is top-down, but it is bottom-up; similarly, divisive is top-down.
    Sir, thank you for the informative sessions.

  • @sagarchaudhari2343
    @sagarchaudhari2343 4 months ago

    Thank you very much, sir. I am from a non-IT background but got all the concepts a little bit; the way you are teaching is superb

  • @slayofficial1136
    @slayofficial1136 3 months ago

    K-means for large datasets and hierarchical for small datasets

  • @avadheshbhoot
    @avadheshbhoot 2 months ago

    Thank you for the awesome free content on clustering.

  • @soniyachaudhary6363
    @soniyachaudhary6363 7 months ago

    Sir, your teaching style is very good; my doubts get cleared only through your videos. Where can I get the notes for these lectures, sir?

  • @muskaanchawla571
    @muskaanchawla571 11 months ago

    Can you please share the whiteboard you use

  • @ambreprathamesh7655
    @ambreprathamesh7655 8 months ago

    But how to find the MinPts value..?

  • @anshikasharma6846
      @anshikasharma6846 11 months ago

    By using k-means ++ we can initialise centroids which are far from each other, so that we can overcome Random Initialisation Trap.

  • @gaurigupta4766
    @gaurigupta4766 10 months ago

    Sir, please make a complete ML playlist for college exams, covering ARM and all other topics. Your way of teaching is excellent!!

  • @niku237
    @niku237 1 year ago

    Sir, kindly provide lectures on ANN also

  • @deepaklonare9497
    @deepaklonare9497 8 months ago

    For K=3, how will perpendicular lines divide the clusters? There will be 6 clusters, right?

    • @likithb3726
      @likithb3726 7 months ago +1

      No bro, when dividing it works like the one-vs-all method: consider the 1st centroid; one side will belong to that centroid's cluster and the other side will belong to the other groups
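The point in this reply can be verified directly: assigning every point to its nearest centroid partitions the data into exactly K regions (the Voronoi regions of the centroids), not 2K. A small dependency-free sketch with made-up centroids and points:

```python
def assign(points, centroids):
    """Label each point with the index of its nearest centroid."""
    def d2(p, c):
        # Squared Euclidean distance in 2-D.
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    return [min(range(len(centroids)), key=lambda j: d2(p, centroids[j]))
            for p in points]

centroids = [(0, 0), (5, 0), (0, 5)]          # K = 3
pts = [(0.2, 0.1), (4.8, 0.3), (0.1, 5.2),
       (1, 1), (4, 1), (1, 4)]

labels = assign(pts, centroids)
print(labels)            # [0, 1, 2, 0, 1, 2]
print(len(set(labels)))  # exactly 3 regions, one per centroid
```

However many pairwise boundary lines are drawn, each point falls in exactly one region, so K centroids always give K clusters.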

  • @vikashprasad1309
    @vikashprasad1309 1 year ago +1

    Sir, you're amazing..!!!!

  • @shruti9731
    @shruti9731 8 months ago

    Amazing video. Thank you for teaching us with dedication.

  • @sabbiruddinakash7181
    @sabbiruddinakash7181 2 months ago

    Thank you so much sir.

  • @abhinavrajsaxena789
    @abhinavrajsaxena789 11 months ago

    Please use yellow color instead of red

  • @ashishdeshmukh3785
    @ashishdeshmukh3785 1 year ago +2

    This is hardcore!

  • @RahulGupta-sj8fn
    @RahulGupta-sj8fn 1 year ago

    Have these slides been uploaded to GitHub?

  • @navjeetkaur3014
    @navjeetkaur3014 4 months ago

    how do you define small and large dataset ?

    • @navjeetkaur3014
      @navjeetkaur3014 4 months ago

      @krishnaikhindi please help me understand

  • @vishalthakur4920
    @vishalthakur4920 1 year ago +1

    Sir, please make a playlist on Django

  • @deepakchauhan2508
    @deepakchauhan2508 9 months ago

    It is very helpful for understanding unsupervised learning methods