Hello Guys,
Finally, iNeuron is happy to announce the Full Stack Data Scientist Bootcamp with Job Guarantee Program, starting from 7th May 2022. Classes run from 10 am to 1 pm, followed by a doubt-clearing session from 1 pm to 3 pm, every Saturday and Sunday. This time we are keeping a 2-hour doubt-clearing session after the class.
All the live sessions will be recorded and available for lifetime access. Prerecorded videos are also available for everyone.
You can check the detailed syllabus and all information below
courses.ineuron.ai/Full-Stack-Data-Science-Bootcamp
Use Krish10 for an additional 10% discount.
EMI options are also available.
Call our team directly in case of any queries:
8788503778
6260726925
9538303385
8660034247
9880055539
Sir, I would love to take your course, but I am a working professional and it is very hard to follow.
Sir your Hindi playlist is one of the best
liked your presentation👌👌👌👌
Thumbnail be like: get lost ❤️day . 😁
It's😂
❤ + ⛅
😂😂
Where is the practical implementation of the Naive Bayes algorithm?
How is the graph plotted for 3 features, like size, rooms, and price?
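With exactly three features the data can still be drawn as a 3-D scatter plot; beyond that, KNN works the same way numerically even though we can no longer plot it directly. A minimal sketch using matplotlib (the feature values below are made up for illustration):

```python
# Hypothetical illustration: 3 features (size, rooms, price) live in 3-D
# space, so a 3-D scatter plot can still be drawn.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

size  = [1000, 1500, 2000, 1200]   # made-up sample values
rooms = [2, 3, 4, 2]
price = [50, 75, 100, 60]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # 3-D axes
ax.scatter(size, rooms, price)
ax.set_xlabel("size")
ax.set_ylabel("rooms")
ax.set_zlabel("price")
fig.savefig("knn_3d_scatter.png")
```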
Is this playlist sufficient to crack online tests and interview? @krishnaikhindi
Please start Deep learning 7 days live session at the earliest.
Please start deep learning and computer vision 7 days lecture
Sir, what happens when 1's and 0's have the same number of data points under KNN, i.e. 3 for the 1's and 3 for the 0's? Then what would be the output?
This understanding comes with practical or domain knowledge. An odd value of K is taken in that case, e.g. K = 3 or K = 5.
same doubt
@@h44r96 I did not get this.
Can you please elaborate on your answer?
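To elaborate on the point above with a sketch (assuming a plain, unweighted majority vote): with two classes, an odd k can never split evenly, so ties are avoided by construction, while an even k can deadlock.

```python
# Sketch: plain majority vote over the labels of the k nearest neighbors.
from collections import Counter

def majority_vote(labels):
    counts = Counter(labels).most_common()
    # With two equally common labels, the vote is a tie.
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: undefined without an extra tie-breaking rule
    return counts[0][0]

print(majority_vote([1, 1, 0]))     # odd k=3: clear winner -> 1
print(majority_vote([1, 1, 0, 0]))  # even k=4: tie -> None
```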
Sir,
where is the implementation part? -,-
Can someone share the Python implementation from this video with me?
Noice
Please update the material link, as it is showing a 404 error.
Sir, you didn't upload the practical implementation of KNN.
Thank you sir, clearly understood.
🙂
Please upload same playlist as you uploaded on your second channel. And also, tell us how to follow your playlist.
It was super simplified. Understood well
Thanks for showing the visualization for KNN method.
Have a request: can you share similar examples when there are 3 or more input variables?
Thanks again
@krishnaikhindi
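With more than 3 input variables a direct plot is impossible, so one common trick is to project the data down to 2-D for visualization. A sketch, assuming scikit-learn is available (it is not necessarily what the video uses), on the 4-feature Iris dataset:

```python
# Sketch: project 4 input variables to 2-D with PCA so the data can be
# drawn as an ordinary 2-D scatter plot. KNN itself still runs on the
# original 4-D features; the projection is only for visualization.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)        # X has 4 input variables
X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)                          # 2 columns: ready to scatter-plot
```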
Nice way of explaining, as well as presenting. Can you share what gadgets you are using for writing, please 🙏
I couldn't understand...
KNN can be used in both classification and regression problem statements, with Euclidean distance and Manhattan distance as the common distance metrics.
In regression, the prediction is the average of the K (a hyperparameter) nearest data points, whereas in a classification problem it is the category with the maximum count among the K nearest neighbours.
Limitations:
It creates problems on huge datasets.
Sensitive to outliers.
Sensitive to missing values.
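The summary above can be sketched from scratch (the data points below are made up, not from the video): classification takes the majority label among the k nearest points, regression takes the mean of their targets, and either Euclidean or Manhattan distance can be plugged in.

```python
# Minimal from-scratch KNN sketch covering both tasks.
import math
from collections import Counter

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(X, y, query, k=3, dist=euclidean, task="classification"):
    # Indices of the k training points closest to the query.
    neighbors = sorted(range(len(X)), key=lambda i: dist(X[i], query))[:k]
    if task == "classification":
        # Majority vote among neighbor labels.
        return Counter(y[i] for i in neighbors).most_common(1)[0][0]
    # Regression: average of the neighbors' target values.
    return sum(y[i] for i in neighbors) / k

X = [(1, 1), (2, 2), (8, 8), (9, 9)]
print(knn_predict(X, [0, 0, 1, 1], (1.5, 1.5), k=3))  # majority -> 0
print(knn_predict(X, [10.0, 20.0, 80.0, 90.0], (1.5, 1.5),
                  k=2, task="regression"))            # (10+20)/2 -> 15.0
```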
🎉🎉🎉🎉🎉🎉🎉
Thank you sir ❤😘❤
Thanks a lot ❤
Can I find an English version of this?
Thanks
Best explanation in simple way on RUclips... Great work Krish sir, your Hindi & English both channels are awesome. 👏🙌
Lovely
Sir, regarding KNN: if there are more than 2 classes, like 0, 1, 2, 3, 4, does the same principle apply?
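Yes, the same majority-vote principle extends to any number of classes: the prediction is simply whichever label is most common among the k nearest neighbors. A tiny sketch (the neighbor labels below are made up):

```python
# Sketch: multiclass KNN vote -- no change to the principle, the predicted
# class is the most frequent label among the k nearest points.
from collections import Counter

nearest_labels = [2, 4, 2, 0, 2]   # labels of the k=5 nearest neighbors
prediction = Counter(nearest_labels).most_common(1)[0][0]
print(prediction)                  # class 2 has the most votes
```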
thank you so much! Very helpful
Sir, I have joined your program, but I did not like the teaching method in the live class, so I have stopped joining it. I watch all your videos on YouTube and that's how I learn. I can't wait to join your live class.
But sir, how do we find the value of k for our problem, since the output will change as the value of k changes?
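One common answer (a sketch assuming scikit-learn, not necessarily the video's approach) is to treat k as a hyperparameter and pick it by cross-validation: try a range of k values and keep the one with the best average validation accuracy.

```python
# Sketch: choose k for KNN via 5-fold cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": list(range(1, 21))},  # candidate k values
    cv=5,                                            # 5-fold CV
)
search.fit(X, y)
print(search.best_params_)   # the k with the best cross-validated accuracy
```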
Thankyou So Much Sir!!!
Thank You so much!
Hello Dear Sir, How can I get your contact details?
Best explanation with clear and concise way
I have a question.
In a classifier, if the count for both categories is the same, which one do we select? For example, if the number of 1s is 4 and the number of 0s is also 4, which value will be predicted?
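Besides choosing an odd k, another common tie-breaker (a sketch, not necessarily what the video teaches) is distance-weighted voting: each neighbor's vote counts as 1/distance, so with a 4-vs-4 split the closer neighbors decide the class.

```python
# Sketch: distance-weighted KNN vote as a tie-breaking rule.
def weighted_vote(neighbors):
    """neighbors: list of (distance, label) pairs for the k nearest points."""
    scores = {}
    for d, label in neighbors:
        # Closer neighbors contribute larger weights (epsilon avoids 1/0).
        scores[label] = scores.get(label, 0.0) + 1.0 / (d + 1e-9)
    return max(scores, key=scores.get)

# 4 votes for each class, but class 1's neighbors are closer -> 1 wins.
nbrs = [(0.5, 1), (0.6, 1), (0.7, 1), (0.8, 1),
        (2.0, 0), (2.1, 0), (2.2, 0), (2.3, 0)]
print(weighted_vote(nbrs))
```

For reference, scikit-learn's `KNeighborsClassifier` supports this style of voting via `weights="distance"`.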
👌
Thank you very much sir.
Can a test data point come in as an outlier?
Great tutorial...
great explanation