Thank you for the video and breaking down the process very neatly. Got a very good understanding and also the resource links that you provided are excellent !!
Glad it was helpful!
Great video .. the explanation was perfectly paced and the resources are very helpful!
Thank You!
Happy New Year!! More such videos coming up this year
Happy New Year too, Respected Sir. Thanks for the great guidance through this tutorial. Would you please explain the implementation with the Kaggle Sentiment140 dataset for PCA, TF-IDF, BoW, precision, recall, and F1 with DNN and LSTM?
And also TF-IDF, BoW, LDA, and word cloud for the Kaggle SMILE dataset with LDA and KNN??
Sir sent you linkedin Connection request, please accept
Thank you. You are a good logical presenter. Suggest getting another microphone to further improve your videos.
Please have a look at my recent videos. I have a better microphone now.
Remember to keep iterating on the steps above to improve your model's performance. Experiment with different techniques and models to find the best combination for your specific task. Finally, it's important to handle offensive or profane language in the dataset. You can either filter out such tweets or apply techniques like profanity filtering or sentiment analysis to classify and handle them appropriately. Good luck with your competition, and feel free to ask if you have any specific questions.
First of all, when the training and test sets are concatenated and tweet counts by keyword are computed, it can be seen from the id feature that every keyword was stratified while creating the training and test sets. We can replicate the same split for cross-validation.
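The keyword-stratified split described above could be replicated for cross-validation with scikit-learn's `StratifiedKFold`, stratifying on the keyword column. This is a sketch under assumptions: the column names (`keyword`, `text`) and the toy data are illustrative, not the actual competition files.

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for the concatenated tweet data (real columns assumed, data invented).
df = pd.DataFrame({
    "keyword": ["fire", "fire", "flood", "flood", "quake", "quake"],
    "text":    ["t1",   "t2",   "t3",    "t4",    "t5",    "t6"],
})

# Stratify on keyword so every fold preserves the per-keyword proportions,
# mirroring how the train/test split appears to have been made.
skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)
for fold, (trn_idx, val_idx) in enumerate(skf.split(df, df["keyword"])):
    print(fold, sorted(df.loc[val_idx, "keyword"]))
```

With two tweets per keyword and two folds, each validation fold receives exactly one tweet from every keyword, so the keyword distribution is identical across folds.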
How much time does the kernel actually take to run? Mine's taking a lot.
I don’t remember. This was done a long time ago.