Thank you so much, sir... I literally fell in love with the BERT model. You really have great teaching skills; only an awesome teacher can teach ML in such an interesting, easy, and detailed way.... ❤🥰
Thanks for the lovely feedback Rakesh ❤️
Thank you for this video. It's very enlightening. However, I have a question. This is fake news detection based solely on linguistic cues. The research I'm working on goes beyond predicting from linguistic cues; source credibility analysis and contextual analysis are involved too. How can I develop a fake news prediction model that will effectively detect whether news is fake or real based on an extensive dataset such as the Twitter Truth Seeker dataset?
Thanks for this wonderful project. Please clarify one point about transfer learning: have we used it here? If yes, where and how? Thanks.
Man, what a video, loved it... The way you explain, and your patience, is outstanding. Keep on dropping videos!
Thanks for the video! How can we save the fine-tuned model so that we can run it next time without going through fine-tuning the model again?
Hey Hooman 😊, I'm sure you are a dog lover!!
Within the Fake News Detection code file, under the 'Model Training' section, the # Train & Predict script has this line:
torch.save(model.state_dict(), 'c2_new_model_weights.pt')
With this line of code, the Model weights get stored in your Working Directory. For loading these saved weights, you may use (also shown in the tutorial at 15:24):
# load weights of best model
path = 'c2_new_model_weights.pt'
model.load_state_dict(torch.load(path))
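Put together, the save/load round trip looks like the sketch below. It uses a tiny stand-in `nn.Linear` model, since the actual BERT classifier is defined in the notebook; the mechanics are identical for any `nn.Module`.

```python
import torch
import torch.nn as nn

# Stand-in for the fine-tuned BERT classifier from the tutorial;
# the save/load mechanics are the same for any nn.Module.
model = nn.Linear(10, 2)

# Save only the learned weights (the state_dict), not the whole object
torch.save(model.state_dict(), 'c2_new_model_weights.pt')

# In a later session: rebuild the same architecture, then load the weights
restored = nn.Linear(10, 2)
restored.load_state_dict(torch.load('c2_new_model_weights.pt'))
restored.eval()  # inference mode: disables dropout, etc.
```

Note that `load_state_dict` only fills in weights; you must first construct a model with the exact same architecture as the one you saved.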
Very insightful. Thank you so much for this! Hope we can have an NLP deep learning project for text classification.
Absolutely!
Hi sir, your videos are just awesome. I used this code for sentiment analysis to detect spam email.
Great 👍 Glad that you find it useful.
Thank you for sharing such great content with great explanation. 🙌 Keep posting!
Glad that you find it useful!
I followed the notebook and trained my model for 30 epochs but in every epoch training loss was 0.001 and test loss was 0.003. Any reason that might be the case?
What would change if we want to use the article text instead of the title?
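For what it's worth, the main change is in tokenization: article bodies run far longer than titles, so the padding/truncation length must grow, capped by BERT's 512-token limit. A minimal sketch of the length budgeting (the title length of 15 is an illustrative assumption, not a value from the video):

```python
# Titles fit in a short budget; full article text needs a much larger one,
# but BERT caps sequences at 512 tokens including [CLS] and [SEP].
MAX_LEN_TITLE = 15   # illustrative value for short titles
MAX_LEN_TEXT = 512   # BERT's hard upper limit

def truncate_ids(token_ids, max_len):
    """Trim token ids, keeping room for the [CLS] and [SEP] specials."""
    return token_ids[: max_len - 2]

article = list(range(2000))  # pretend token ids of a long article body
print(len(truncate_ids(article, MAX_LEN_TEXT)))  # 510
```

The rest of the pipeline (model, loss, training loop) stays the same; expect longer training times per epoch, since attention cost grows with sequence length.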
I tried running this notebook and hit a bunch of errors on the sklearn imports. I then tried running it with an older version of sklearn and hit errors again. Any help? I am fairly new to coding.
Can you let me know how to save the model?
Is it possible to use Hugging Face transformers for a project similar to this?
Hey Daniel, how are you mate!
Yes, sure you can. Not just similar problems; you can actually use it for "almost" any problem you may have, be it NLP / CV / Audio. Here's their documentation: huggingface.co/docs/transformers/index
For similar problems, you may do Sentiment Analysis, Movie/Book Recommendation, Text Summarization, Topic Modeling, etc. within NLP.
@@skillcate Thanks for the information. May I ask whether a TensorFlow version of this episode is available?
Please also do a project on fake reviews.
Hi sir, I need your help. I have only 10 days left for my FYP submission and I don't know how to implement BERT on a Roman Urdu dataset; otherwise my FYP will be rejected. Can you please make a video on it?
Let's keep in touch on this..
Sir, I am unable to see the datasets from the link you provided; it is showing "no results found".
Hi sir, I used the same code and ran it. But even with epochs = 2, the code takes a long time to run and doesn't finish. I even tried increasing the number of epochs. No progress. How can I resolve it? :(
Where are you running the code? You are likely training on a CPU machine, where it takes much longer than on a GPU.
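A quick way to check, and to make the notebook use a GPU when Colab provides one; a minimal sketch, where the commented `model`/`batch` lines are hypothetical placeholders for the notebook's own variables:

```python
import torch

# Prefer GPU when available; otherwise fall back to CPU.
# In Colab: Runtime -> Change runtime type -> GPU, then re-run the cell.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Training on: {device}')

# Hypothetical usage: keep the model and each batch on the same device
# model = model.to(device)
# batch = {k: v.to(device) for k, v in batch.items()}
```

If this prints `cpu` on Colab, the GPU runtime has not been enabled; BERT fine-tuning on CPU can easily take hours per epoch.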
I implemented this code for fake reviews detection and got an accuracy of 57%, which was quite unexpected.. :(
Dear Basmah, how many epochs did you train your model for? I think you might have done only 2. If so, train for 2-4 more epochs for your results to improve.
@@skillcate Yes, I changed that, and with epochs = 10, I got an accuracy of 71%. That's the maximum accuracy I could get.
How do I work with a Roman Urdu dataset using BERT?
Dear Rashda, we'll soon be doing a video on this problem statement. Allow us a week's time.
@SKILLCATE Sure, that would be great. Another thing I want to know: Colab offers a 12-hour runtime per session. Does this mean that after 12 hours we cannot run the same notebook for free?
@@rashdakhanzada8058 12-hour runtime per session in Colab means you can use the free version of Colab for up to 12 consecutive hours. Once the time is up, you can start a new session and continue your work. It's like driving a rental car for a specific duration and then getting a new car if you need more time.
@SKILLCATE So now I have to create a new notebook, insert the same code into it, and can work on it for 12 more hours. And I have to do this again and again...
@@rashdakhanzada8058 Not really. The notebook remains as is. Think of it like this: you're training a very heavy deep learning model whose training runs for, let's say, 24 hours. With the free Colab account, you have a runtime limit of 12 hours, so you won't be able to train such heavy models.
But for beginner-level data science work you will face absolutely no problem, as most of your training would take minutes to a couple of hours. 🤗
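A common workaround for the runtime limit is periodic checkpointing, so a fresh session can resume where the old one stopped. A minimal sketch with a stand-in model; in Colab you would point `ckpt_path` (a name chosen here for illustration) at a mounted Google Drive folder so the file survives the session:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                        # stand-in for the real model
optimizer = torch.optim.Adam(model.parameters())
ckpt_path = 'checkpoint.pt'                    # in Colab: a Drive path

# Save everything needed to resume mid-training, not just the weights
torch.save({'epoch': 5,
            'model_state': model.state_dict(),
            'optim_state': optimizer.state_dict()}, ckpt_path)

# In a fresh session (e.g., after the 12-hour limit), resume:
ckpt = torch.load(ckpt_path)
model.load_state_dict(ckpt['model_state'])
optimizer.load_state_dict(ckpt['optim_state'])
start_epoch = ckpt['epoch'] + 1                # continue from the next epoch
```

Saving the optimizer state alongside the model matters for Adam, whose per-parameter moment estimates would otherwise reset on resume.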