BERT Text Classification | Kaggle NLP Disaster Tweets | TensorFlow

  • Published: 22 Dec 2024
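
The video walks through fine-tuning BERT for the Kaggle "Natural Language Processing with Disaster Tweets" competition in TensorFlow. For rough orientation, here is a minimal sketch of that general approach using public TensorFlow Hub BERT modules; the module URLs, classifier head, and hyperparameters below are common defaults, not necessarily the exact choices made in the video:

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers the ops the preprocessor needs

# Public TF Hub modules for BERT preprocessing and encoding (assumed choice).
preprocessor = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=True)

# Binary classifier head on top of the pooled [CLS] representation.
text_in = tf.keras.layers.Input(shape=(), dtype=tf.string, name="tweet")
pooled = encoder(preprocessor(text_in))["pooled_output"]
x = tf.keras.layers.Dropout(0.1)(pooled)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # disaster vs. not

model = tf.keras.Model(text_in, out)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_texts, train_labels, validation_split=0.2, epochs=3)
```

A small learning rate (around 2e-5) and only a few epochs are the usual regime when fine-tuning BERT end to end.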

Comments • 13

  • @abhilashatutube 3 years ago +3

    Thank you for the video and for breaking down the process very neatly. I got a very good understanding, and the resource links you provided are excellent!

  • @AkshayRaman97 2 years ago

    Great video! The explanation was perfectly paced and the resources are very helpful!

  • @RitheshSreenivasan 4 years ago +1

    Happy New Year!! More such videos coming up this year.

    • @Nasirbcs2006 4 years ago

      Happy New Year to you too, respected Sir. Thanks for the great guidance through this tutorial. Would you please explain the implementation with the Kaggle Sentiment140 dataset for PCA, TF-IDF, BOW, precision, recall, and F1 with a DNN and an LSTM?
      And also TF-IDF, BOW, LDA, and word clouds for the Kaggle SMILE dataset with LDA and KNN?

    • @Nasirbcs2006 4 years ago

      Sir, I sent you a LinkedIn connection request, please accept.

  • @moopoo123 2 years ago

    Thank you. You are a good, logical presenter. I suggest getting another microphone to further improve your videos.

    • @RitheshSreenivasan 2 years ago

      Please have a look at my recent videos. I have a better microphone now.

  • @haiacsac3522 1 year ago

    Remember to keep iterating on the steps above to improve your model's performance. Experiment with different techniques and models to find the best combination for your specific task. Finally, it's important to handle offensive or profane language in the dataset. You can either filter out such tweets or apply techniques like profanity filtering or sentiment analysis to classify and handle them appropriately (a minimal filtering sketch follows this comment). Good luck with your competition, and feel free to ask if you have any specific questions.
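
    For reference, here is a minimal sketch of the wordlist-based filtering mentioned in the comment above, assuming pandas, the competition's train.csv with a "text" column, and a placeholder PROFANE set (in practice you would use a curated list or a package such as better_profanity):

    ```python
    import pandas as pd

    # Placeholder wordlist; swap in a real, curated list for actual coverage.
    PROFANE = {"badword1", "badword2"}

    def is_profane(text: str) -> bool:
        """Crude token-level check against the wordlist."""
        return any(tok.strip(".,!?") in PROFANE
                   for tok in str(text).lower().split())

    df = pd.read_csv("train.csv")            # competition file (assumed path)
    mask = df["text"].apply(is_profane)
    print(f"flagged {mask.sum()} of {len(df)} tweets")

    df_filtered = df[~mask]                  # option 1: drop flagged tweets
    df["has_profanity"] = mask               # option 2: keep but mark for handling
    ```

    Whether to drop or merely flag such tweets is a modeling decision; dropping shrinks the training set, while flagging keeps the signal available as a feature.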

    • @nhamientayds 1 year ago

      First of all, when the training and test sets are concatenated and tweet counts by keyword are computed, that conclusion can be drawn by looking at the id feature. This means every keyword was stratified while creating the training and test sets. We can replicate the same split for cross-validation (sketched below).
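
      For reference, a minimal sketch of this check and of replicating the split for cross-validation, assuming pandas and scikit-learn and the standard competition files train.csv / test.csv with id and keyword columns:

      ```python
      import pandas as pd
      from sklearn.model_selection import StratifiedKFold

      train = pd.read_csv("train.csv")   # assumed file paths
      test = pd.read_csv("test.csv")

      # Concatenate and count tweets per keyword in each split; the matching
      # per-keyword proportions suggest the organizers stratified on keyword.
      both = pd.concat([train.assign(split="train"), test.assign(split="test")])
      counts = both.groupby(["keyword", "split"]).size().unstack(fill_value=0)
      print(counts.head())

      # Replicate the same scheme: stratify cross-validation folds on keyword
      # (each keyword needs at least n_splits rows for this to work).
      skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
      keyword = train["keyword"].fillna("no_keyword")
      for fold, (tr_idx, va_idx) in enumerate(skf.split(train, keyword)):
          print(f"fold {fold}: {len(tr_idx)} train / {len(va_idx)} val rows")
      ```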

  • @harshwardhan5635
    @harshwardhan5635 3 года назад

    How much time does the kernel actually take to run? Mine's taking a lot.