Don't forget to subscribe to the channel for more such videos :)
Hello Sir, I wanted some help on ONA; where, or in which course, can I learn ONA?
Sir, please complete the statistics playlist.
Sir, I want to join the PW 3500-rupee data science course, but on the PW app this course is not visible.
Sir, please complete the stats playlist 😕
a(i)/b(i): how can it be small when a(i) is big? Please don't put up a wrong explanation; it misleads and creates a lot of confusion. I would request you to please go through it and confirm, as the understanding gets completely twisted.
correction = 1:08:06
Only if b(i) > a(i) is it a good cluster.
You are right.
Yes I got confused because of this line😅
Yeah bro, good point; I got confused also.
The K-means++ technique is used to avoid the random initialization trap, or in other words, to initialize the centroids very far apart from each other so that all the data are grouped properly and the clustering becomes much more accurate.
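As a quick illustration, scikit-learn's `KMeans` uses this initialization by default via `init='k-means++'`; a minimal sketch on made-up toy data:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data: three well-separated blobs (made-up example data)
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=42)

# init='k-means++' spreads the initial centroids far apart,
# avoiding the random-initialization trap described above.
kmeans = KMeans(n_clusters=3, init='k-means++', n_init=10, random_state=42)
kmeans.fit(X)
print(kmeans.cluster_centers_.shape)  # (3, 2)
```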
Hello Mr Krish,
I believe you made a mistake in the silhouette score section. If a(i) >>> b(i), this would mean that the intra-cluster distance is greater than the inter-cluster distance, which would further imply that the clustering done is poor. In the video you have taken that as a case of good clustering.
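The correction matches the silhouette formula s(i) = (b(i) - a(i)) / max(a(i), b(i)): when b(i) >> a(i) the score approaches +1 (good clustering), and when a(i) >> b(i) it approaches -1 (poor clustering). A quick numeric sketch with scikit-learn on made-up toy data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Toy data: three well-separated blobs (made up for illustration)
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

# Good clustering: b(i) >> a(i) for most points, so the mean score is high.
good = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Deliberately bad clustering: random labels, so a(i) is roughly b(i)
# and the mean score hovers near 0.
rng = np.random.default_rng(0)
bad = rng.integers(0, 3, size=len(X))

print(silhouette_score(X, good))  # high, close to 1
print(silhouette_score(X, bad))   # near 0
```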
Seriously, bhai!!!! When someone like this teaches, studying is actually fun... our useless college teachers just waste their time chatting with the female faculty.
Hi Krish. Please share more transition stories 🥲
Also, emphasize the hiring process more. Many companies probably have live coding rounds. Please throw more light on this 🙏🏻
Time --> 1:08, if b(i) > a(i), then our cluster is good.
this playlist is Very Helpful...
Thanks Sir ❤❤❤❤❤❤❤
Mistake: a(i) < b(i) means a good cluster, but the opposite is said in the video at timestamp 1:05:00.
As always, super crisp and dope explanations!
Sir please complete statistics playlist in Hindi
Can I get this notepad pdf???
Fantastic explanation. No deviated talks, just the crux. Love the way you explain. Great going !!!
Notes for the last few videos are missing.
Sir, when will end-to-end data science projects come, with DVC and MLOps included? I really need it, sir. If possible, please bring it as soon as possible.
Time: 1:16 --> when our centroids are located very near each other, our model can't make good clusters; to overcome this problem we use K-Means++ initialization --> after applying this, our centroids are located FAR from each other.
Thanks Sir 💓💓💓
Dendrograms : A Dendrogram is a tree-like diagram used to visualize the relationship among clusters.
Great work you are doing, Krish. Please tell me about the whiteboard app that you use; it will be a great help.
For -
```python
from sklearn.cluster import AgglomerativeClustering
cluster = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='ward')
cluster.fit(pca_scaled)
```
you might now get this error -
```
TypeError: AgglomerativeClustering.__init__() got an unexpected keyword argument 'affinity'
```
Resolution - the `affinity` parameter was deprecated in scikit-learn 1.2 and removed in 1.4 in favour of `metric`, so either pass `metric='euclidean'` or simply drop it, since ward linkage implicitly uses Euclidean distance now:
```python
from sklearn.cluster import AgglomerativeClustering
cluster = AgglomerativeClustering(n_clusters=2, linkage='ward')
cluster.fit(pca_scaled)
```
Hi, I am unable to get the PPT or notes of the classes.
Hello Krish, if possible can you record some sessions on the Hidden Markov Model and the Expectation-Maximization algorithm?
Sir, please share the board PDF as you shared in the ML playlist; it will be very helpful for us.
score=silhouette_score(x_train,labels=kmeans.labels_)
this line throws an error. Please guide
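Without the traceback it's hard to say what went wrong, but common causes are a missing import or calling `silhouette_score` before the model has been fitted. A minimal working sketch (the names `x_train` and `kmeans` are assumed from the comment; the data is made up):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score  # must come from sklearn.metrics

# Toy stand-in for x_train (made-up data)
x_train, _ = make_blobs(n_samples=200, centers=3, random_state=1)

# Fit first, so kmeans.labels_ exists; it must contain at least 2 distinct labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(x_train)
score = silhouette_score(x_train, labels=kmeans.labels_)
print(score)  # a value in [-1, 1]
```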
Sir, at 38:23 you said that agglomerative is top-down, but it's bottom-up; similarly, divisive is top-down.
Thank you for the informative sessions, sir.
Thank you very much, sir. I am from a non-IT background but I still get all the concepts; the way you are teaching is superb.
K-means for large datasets and hierarchical for small datasets.
Thank you for the awesome free content on clustering.
Sir, your teaching style is very good; my doubts get cleared only from your videos. Where can I get the notes for these lectures, sir?
Can you please share the whiteboard you use
But how to find min pts value..?
By using k-means++ we can initialise centroids that are far from each other, so that we can overcome the Random Initialisation Trap.
Sir, please make a complete ML playlist for college exams, covering ARM and all the other topics. Your way of teaching is excellent!!
Sir, kindly provide lectures on ANN also.
For K=3, how will perpendicular lines divide the clusters? There will be 6 clusters, right?
No bro, when dividing it works like a one-vs-all method: considering the first K value, one side will belong to the cluster of this point and the other side will belong to the other groups.
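The reply above can be checked numerically: each point is simply assigned to its nearest centroid, so K centroids partition the plane into K regions, not 6. A small sketch with made-up centroids:

```python
import numpy as np

# Three made-up centroids for K = 3
centroids = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 5.0]])

# A grid of points covering the plane around the centroids
xs, ys = np.meshgrid(np.linspace(-2, 7, 50), np.linspace(-2, 7, 50))
points = np.column_stack([xs.ravel(), ys.ravel()])

# Assign each point to its nearest centroid (the k-means assignment step)
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)

# The perpendicular bisectors carve the plane into exactly K regions
print(len(np.unique(labels)))  # 3
```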
sir ur amazing..!!!!
Amazing video. Thank you for teaching us with dedication.
Thank you so much sir.
Please use yellow colour instead of red.
It's hard.
Stay hard
Have you uploaded these slides to GitHub?
how do you define small and large dataset ?
@krishnaikhindi please help me understand
Sir, please make a playlist on Django.
It is very helpful to understand unsupervised learning methods