great content, this deserves a million views... {'roberta_neg': 0, 'roberta_neu': 0, 'roberta_pos': 100}😀
Haha. Best comment! Pinned.
Pos should be 1, since the maximum value is 1. lol
@@robmulla Please give your WhatsApp number.
Good one!!😅
I don't often leave comments on YouTube, but finally someone explains everything from scratch... I am a JS developer, and it's really cool that you explain every piece of code. That really helped; I was able to understand everything.
Hey! I really appreciate this comment. Thanks so much.
Huge thank you to you!!! I recently participated in an ML hackathon and they had sentiment analysis as one of their problem statements. I had watched your video prior to the competition and used Hugging Face, whereas everyone else used the standard VADER. I ended up getting the highest accuracy and placed first, all in my second year of engineering. Genuinely, can't thank you enough for the information!
Team random_state42
Found you here!
This is so awesome! Thanks for sharing. I posted a screenshot of your comment on Twitter, hope that's ok!
Btw, huge fan of your statistics notes, Mr. Patawar; didn't expect to find you here.
@@bhaumik3118 I also study statistics from Mr. Patawar.
nice man
I like the pace at which you teach this content; it is relaxed and very enjoyable to watch.
Just completed it. I really enjoyed working on it. Your way of teaching is just awesome!
Hey brother, you just provided the best NLP sentiment project. Your channel deserves a million+ subscribers, and I am now one new subscriber helping you get there.
Thank you so much 😀
You are my newly found Python mentor. Good content Rob
Happy to be! There are a lot of good channels out there.
Your videos are like gems to me; I learned a lot, and your use of modules and packages is the cherry on the cake. Currently I'm working as a Jr. Data Scientist at KPMG, but man oh man, you taught me many things. Thank you 😊 🙏
Great to hear you enjoyed the video. Data science is a never ending learning journey for all of us!
Bro, I just need to talk to you. I wanted to ask a few questions regarding the profile you are working on. I have secured a job with Deloitte but want to switch to KPMG (Gurgaon).
Really interesting video. I've been following a lot of your tutorials lately and I must say that I really like the way you explain things, it's so easy to understand and follow along. Thank you!
Thanks so much for the feedback Juan. It's always hard to tell when I'm recording these if they are any good, so it's great to hear that it is helpful to you.
Amazing content man! Your channel and videos deserve a lot more attention. Hope you have an amazing week!!
Thanks so much. I really appreciate the feedback. Please consider sharing the video with anyone else you think might learn from it.
Your channel is a gem, thanks so much for the free course.
Glad you enjoyed it. Thanks for watching!
Thanks for posting the awesome tutorial. Would love to learn more from you.
Thanks for watching and learning!
great content, perhaps the best material I found on sentiment analysis in youtube!!!
Thanks for the compliment Ayush! That means a lot to me.
This may be the best tutorial on any language/library/app I have ever watched. One part, very concise, and well explained. Thank you.
Glad it was helpful! This comment makes me really happy and excited to make more tutorials!
More of an appetite whetter. To make any use of it, I have to learn Python first 😀 But then, that's valuable by itself.
I’m so glad I found this channel!!
Me too!
A terrific video! I am about to start a sentiment analysis project and this is an absolute gem. It makes you want to explore everything there is in the subject.
Hey, can you explain to me what a dataset and a model are?
I'll admit I watched this at two times speed, but those were the best-spent 21 minutes of the day!
Very helpful and well explained!
Thanks for such a wonderful tutorial. I used your shared data on my own with Google Colab and it worked so well. I just had to download a few more libraries for tokenization. Wonderful content and I truly enjoyed it.
Thank you so much for this step-by-step process; it has opened up all sorts of new analysis opportunities for our customer insights. Really well explained and easy to follow.
Good, very good video! You cannot imagine how valuable this kind of video is for someone like me who is trying to transition to data science...
Really a must-watch video. I must say that I really like the way you explain things; it's so easy to understand and follow along. Thank you!
Rob, you are the Best! Thank you for all the quality content you are uploading!
Greetings from Greece!
Thanks so much Pavlos for watching. Sending a 💙 to Greece.
Just did all of that as a thesis by myself without knowing you made a video about it lol; luckily I used a different BERT model from Hugging Face at least. Nice video btw!
Thanks!
00:01 In today's video, we'll explore sentiment analysis on Amazon reviews using traditional and more complex models.
02:26 Importing and reading data for sentiment analysis
07:28 Tokenization and part of speech tagging in NLTK
09:55 Introduction to VADER for sentiment analysis
15:12 Looping through Amazon review data to calculate polarity scores.
17:33 Perform sentiment analysis with NLTK and 🤗 Transformers
22:04 Explains the positive, neutral, and negative sentiments in Amazon reviews
24:24 Transformer-based deep learning models from Hugging Face are easy to use and powerful
28:45 Introduction to sentiment analysis with NLTK and Transformers
31:02 Running sentiment analysis on text using VADER and RoBERTa
35:28 Comparing VADER and RoBERTa sentiment analysis scores using Seaborn's pair plot.
37:45 VADER model is less confident compared to the RoBERTa model
41:59 Hugging Face Transformers makes sentiment analysis simple and efficient
44:09 Explored models and ran sentiment analysis on Amazon reviews.
Crafted by Merlin AI.
I rarely comment on YT videos but this is amazing! +1 subscriber!
That really means a lot to me. Thanks for leaving a comment.
I've just recently found myself interested in Computer Vision and NLP, and I've finally gotten to the right content creators; this video absolutely rocks! And I found it 2 years late. I wonder how far along you are now in this topic. If you ever come back to this comment section, could I ask how you got so experienced in this topic and how you learned to tackle all these problems? Thank you!
Awesome! I am shocked that everything is so efficient and amazing. THANKS!
Glad it was helpful! Share the video with friends.
I find the topic really interesting; the way you explain is well articulated and takes a fundamental approach.
This tutorial was very helpful for me when I was learning sentiment analysis. Love it
Who are you? My savior! I was asked to conduct a sentiment analysis on reviews for my internship. I was doing computer vision at graduate school. New to NLP. Thank God.
I've watched a bunch of ML videos and you are THE TOP! 👍👍👍
I am so happy to have discovered your channel. Many thanks friend.
Great content. I am doing a project in my uni where I need to do sentiment analysis on book reviews. This helped me a lot. Thanks.
A great video! Many thanks for your valuable content.❤
Excellent video. I started coding with ChatGPT, and this adds a new layer of info. Thank you mate :) Subbed
This was a good tutorial. I'm trying to get my feet wet in data analytics and found myself overwhelmed while trying to read the NLTK documentation, so thanks for the structured guidance.
I'm working on analyzing sentiment across a dataset I've gathered myself, so I wasn't following along in Kaggle and hit a hiccup, as AutoModelForSequenceClassification requires PyTorch and I had initialized a Python 3.10 environment. Oopsy poopsy. All the same, you made my headache significantly less daunting. Thank you. :)
Thanks so much. I'm glad it helped you get started with NLTK; it can be a lot easier once you see it in action. Setting up an environment that works with all the packages can also sometimes be frustrating, so I can relate!
Really great, helped me a lot in my project!
Glad it helped. Thanks for watching.
Great video. Your explanations were very clear and concise and easy to follow.
Great tutorial! For anyone facing the error about tensor sizes larger than 514, you need to add truncation=True and max_length=512 as arguments to the tokenizer:

def polarity_scores_roberta(example):
    # Truncate long reviews to the 512-token limit so the encoding never exceeds the model's maximum size
    encoded_text = tokenizer(example, return_tensors='pt', truncation=True, max_length=512)
    output = model(**encoded_text)
    scores = output[0][0].detach().numpy()
    scores = softmax(scores)  # convert raw logits to probabilities
    scores_dict = {
        'roberta_neg': scores[0],
        'roberta_neu': scores[1],
        'roberta_pos': scores[2]
    }
    return scores_dict
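For anyone following along outside the original notebook, here is a minimal, self-contained sketch of the setup this function assumes. The checkpoint name is an assumption based on the video's "RoBERTa base sentiment" step, so swap in whichever model you are actually using:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from scipy.special import softmax  # the softmax used inside polarity_scores_roberta

# Assumed checkpoint: the Twitter-trained RoBERTa sentiment model referenced in the video
MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Hypothetical very long review to show the truncation fix in action
long_review = "This is the worst coffee I have ever ordered. " * 200
print(polarity_scores_roberta(long_review))
# e.g. {'roberta_neg': 0.9..., 'roberta_neu': 0.0..., 'roberta_pos': 0.0...} (exact numbers will vary)
```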
Extremely useful, super easy to understand! Thank you so much for a great and valuable video!!
Really appreciate the feedback. Comments like this make me want to keep making more videos!
Great video, I am starting to understand NLP much more. Thank you so much!
What a video! I love this. Please keep making this content. Greetings
Thank you! Will do, Adem!
I cannot thank you enough, you saved my 6th semester
Thanks for the video. We have a school project to do anything coding-related, and while my classmates are using Scratch, I wanted to do something flashier, and some kind of language analysis seemed the way to go. I'll use this video as inspiration.
I love it! Good luck on your project!
insane
@@techingenius2540 in the membrane?
Just found your channel through Twitter. Great work; I am doing research in sentiment analysis and related to a lot of the video. Cool stuff! I will have to use the pair plot; I typically use a confusion matrix.
Awesome Josiel. Glad you find it helpful. Check out some of my other videos if you have time and share the video with friends!
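If anyone wants to recreate that pair plot from the video, here is roughly what it looks like as a sketch. The dataframe and column names (results_df, the vader_*/roberta_* score columns, and the Score star-rating column) are assumptions about how the merged results are named, so adjust them to your own data:

```python
import seaborn as sns
import matplotlib.pyplot as plt

# results_df is assumed to hold one row per review, with the VADER and RoBERTa
# scores merged alongside the original 1-5 star rating in a 'Score' column.
sns.pairplot(
    data=results_df,
    vars=['vader_neg', 'vader_neu', 'vader_pos',
          'roberta_neg', 'roberta_neu', 'roberta_pos'],
    hue='Score',
    palette='tab10',
)
plt.show()
```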
I found this video immensely helpful, Rob
Thanks
So glad you found it helpful!!
Great work 🎉🎉🎉🎉 Thank you for this amazing video. Your explanation, flow, content, everything is up to the mark 🚩
very good, thank you for your effort and passion!!!
Thank you very much for this video. I'm new to the field of Data Analysis and related disciplines, so this sentiment analysis project is pretty insightful for me.
Glad you found it helpful
This video is incredibly helpful! Thanks!
Thank you for this content! Great quality! Now subscribed!
Thanks so much for watching!
Really good content. Liked and subscribed!
Please make more such videos, that was great. I am a data engineer and want to move to Data Science, so please make videos for guidance also.
Love from India
I will! Hope this video was helpful for you in your journey into data science.
Great resource! Thanks Rob.
Glad you liked it! Thanks for watching.
Thank you so much. This tutorial helped me in my project. Thanks a lot.
I really liked this video a lot, it answered lot of my questions, thanks a lot.
🎯 Key points for quick navigation:
00:00 *🎬 Introduction to Sentiment Analysis*
- Introduction to natural language processing (NLP) and sentiment analysis.
- Overview of the project, including using traditional techniques like VADER and more advanced models like RoBERTa.
- Explanation of the dataset used for sentiment analysis, which consists of Amazon food reviews with ratings.
03:00 *📊 Data Preprocessing and Exploration*
- Importing necessary libraries for data analysis and visualization.
- Reading the dataset and performing basic exploratory data analysis (EDA).
- Downsampling the dataset for quicker analysis and showcasing the structure of the data.
05:05 *📈 Exploring Sentiment Distribution*
- Analyzing the distribution of sentiment scores based on review ratings.
- Visualizing the distribution of sentiment scores across different star ratings using bar plots.
- Observing the relationship between review ratings and sentiment scores.
07:00 *🧠 Introduction to NLTK for Sentiment Analysis*
- Overview of NLTK (Natural Language Toolkit) and its capabilities for text processing.
- Demonstrating tokenization and part-of-speech tagging using NLTK.
- Explaining the process of chunking text into entities using NLTK.
10:48 *📉 Sentiment Analysis with VADER*
- Introduction to VADER (Valence Aware Dictionary and sEntiment Reasoner) for sentiment analysis.
- Understanding how VADER assigns sentiment scores based on individual words.
- Applying VADER sentiment analysis to example sentences and the food review dataset.
23:41 *🔍 Advanced Sentiment Analysis with RoBERTa*
- Introducing RoBERTa, a transformer-based deep learning model for contextual understanding.
- Preprocessing text and encoding it for analysis using RoBERTa's tokenizer.
- Applying the pre-trained RoBERTa model to perform sentiment analysis on text data.
29:05 *📊 Comparing VADER and RoBERTa sentiment analysis models*
- Demonstrated how to print scores from both the VADER and RoBERTa sentiment analysis models.
- Created a scores dictionary for both models to store negative, neutral, and positive scores.
- Illustrated the difference in sentiment analysis results between the VADER and RoBERTa models using a negative review as an example.
35:52 *📈 Comparing sentiment scores across models and reviewing examples*
- Utilized Seaborn's pair plot to compare sentiment scores between the VADER and RoBERTa models.
- Reviewed examples where the sentiment analysis model contradicted the actual review sentiment, showcasing nuances in language understanding.
- Examined instances where both models misinterpreted the sentiment of reviews, highlighting the limitations of bag-of-words approaches like VADER.
42:08 *🤖 Simplifying sentiment analysis with Hugging Face Transformers pipeline*
- Demonstrated how to use Hugging Face Transformers pipeline for sentiment analysis, simplifying the process to just two lines of code.
- Showcased the ease of changing models and tokenizers within the pipeline for different analysis tasks.
- Provided examples of sentiment analysis using the pipeline, showcasing its efficiency and accuracy.
Made with HARPA AI
Very good explanation. Thanks a lot ❤❤
Rob, you are the best. Hands down, mate.
Thank you for your videos. This video will be useful for my project in the future. Instead of using an English dataset, I can train on a Vietnamese dataset!
Crystal clear explanation, thanks my friend
Glad you liked it!
Thanks for great model ideas.
Glad you like them!
Hi bro, I am from India and I like your video; your explanation and English are so understandable. Love you bro ❤❤❤❤
I am a beginner and I understood everythinggg
Thank you! Great content and easy to understand!
Appreciate that!
Great Content, thanks man
Thanks!
Extremely helpful! Thanks a bunch!
Your content is a goldmine
Thank you sir! Share the goldmine with others!
This video was genius and very helpful thank you
How you don't have 100k subs defeats me.
Hah. Thanks Juan. Maybe someday 😊
Dude, at 26:03, while loading the pretrained model from Hugging Face it's throwing an error: "Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on." And my connection is very good.
I have run this around 40 times with a good connection and it's still throwing that error, and I also changed the model from Hugging Face.
Please help me with this.
You might want to check and make sure the source hasn't changed on the Hugging Face site. They might have changed this specific model, and your reference might need to be updated.
Had the same problem. Just solved it. Unlike an average laptop, a Kaggle notebook is not connected to the internet. To get internet access with your Kaggle notebook you need to go through phone verification. Look for the notebook options menu on the right side.
@@dailypolyglot2815 thank you so much!
The Hugging Face model: does it require any preliminary dataset while we are importing it?
This is a great video, thanks a lot.
Glad you like it. Thanks for watching
Great!! I hope you will create more videos!! Thankssss
Thank you, I will. I appreciate you watching.
24:45 The Hugging Face model is not loading properly
Wow. Speechless. Both you and ML.
New viewer and sub!! great work!!!
Great content. Please do more content on models which solve attrition prediction for organizations. It's a very complex subject because it's hard to find ready-made models on such topics. It would be a great help if you could make an attrition prediction model with more than 45-50 variables.
Is there another source than Kaggle where you got that CSV from?
Great content! We need more tutorials on Transformers, please.
Glad you liked it. Anything specific about transformers you would like to see? Hugging Face has so many of them for various NLP tasks.
@@robmulla Please explain the Transformer and BERT architectures. Also a tutorial with a use case from current industry.
Loved what you did, but it would be nice to show how you got the Amazon data as well. Plus, do you have any videos on sentiment analysis for company stocks?
Thank you so much for this video tutorial! I wanted to ask if you created the Amazon review dataset from scratch or was it already pre-made from somewhere else?
Bro, great job, it turned out really nice. We couldn't figure out the Turkish characters, though.
Thanks?!
Love from India ♥️
Thanks! ❤️
Thanks for this video, it was descriptive, well structured and well explained.
I have two questions and I would appreciate it if you could give your opinion and guidance on them.
1. At the end of the day, star reviews and sentiment are giving the same results, so how can we justify going through all this process when we already have a very good indication of user sentiment based on the star reviews?
2. How can we get the strengths and weaknesses of the product based on the reviews using sentiment analysis?
23:42
Step 3. Roberta Pretrained Model.
RoBERTa base sentiment
I am getting a value error. That is we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. But My internet connection is good.
What can I do about it?
Yeah, I'm getting the same problem
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Awesome video. Would be great to see you follow the sentiment analysis with a topic analysis. I’ve seen a few different options out there (LDA, Top2Vec and BERTopic), but would love to see your take on it.
Great suggestion! I'll keep that in mind for future videos.
@@robmulla Looking forward to that!! :)
Awesome teaching quality. Can you create a Coursera course?
Very interesting!!
Thanks!
26:00 If you're getting an error here, get phone verified (check the notebook options on the right) and turn the internet on
Thanks from June 2023😂
@@dhaaraniselvam225 you're welcome from April 2024 :)
Thanks for the video. Very well explained.
Is there any token limit for the transformer-based RoBERTa model?
Excellent explanation and material. Thank you for your efforts in making learning enjoyable. A brief query about reviews that are negative (5 stars) and positive (1 star), where the algorithm is unable to forecast the relevancy score. How would you advise handling these kinds of situations?
Hey Rob, I was trying to execute the code where you extract the model trained on Twitter comments, but I keep getting the error "Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on." even though I am connected to the Internet. Could you please help me out?
Hmm. It might be because Hugging Face is down, I'm not sure; you could post an issue on their GitHub. Good luck.
Facing the same problem!!! Could you please help me out?
Did you find any solution? Same problem for me.
I faced the same issue. It's maybe because of the internet connection; turn on internet access for Kaggle. First you have to verify with your phone number.
One of the best tutorials on VADER and the Hugging Face Transformers I have seen. One question I had: how is the confidence score calculated in the Pipeline model, and is there a way to evaluate the model's performance on these calculations?
Thanks so much for the feedback. Glad you found it helpful. Evaluating the model performance is a bit tricky without ground truth labels. The output of the Pipeline model is essentially the probability the model predicts of each class given the dataset it was trained on. Check out the actual model description on the huggingface site here along with the noted limitations: huggingface.co/distilbert-base-uncased-finetuned-sst-2-english
Specifically this part is interesting:
```
Based on a few experimentations, we observed that this model could produce biased predictions that target underrepresented populations.
For instance, for sentences like This film was filmed in COUNTRY, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this colab, Aurélien Géron made an interesting map plotting these probabilities for each country.
```
@@robmulla FWIW - I reached out to the creator of this and what I was told is that the score is calculated using the activation function after the final layer of the neural net. It is used to determine polarity (and is not a confidence score). The model returns an array with a score for each polarity, and the largest is the prediction. The values will always be positive, regardless of the actual sentiment class tagged to the text. This is unlike VADER's model, which provides a composite polarity score that can be a positive or negative float based on the inferred sentiment (positive, negative, neutral).
@@timdentry9754 thanks for clarifying. Cool that you got a response from the creator!
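For anyone else who wants to see this difference first-hand, here is a small sketch contrasting the two outputs. It assumes NLTK's VADER and the distilbert-base-uncased-finetuned-sst-2-english model linked above; the example sentence and the printed numbers are only illustrative:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from transformers import pipeline

nltk.download('vader_lexicon')  # one-time download of VADER's lexicon

text = "The packaging was awful but the coffee itself is fantastic."

# VADER: lexicon/rule based, returns neg/neu/pos plus a signed 'compound' score in [-1, 1]
sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores(text))
# e.g. {'neg': 0.2..., 'neu': 0.5..., 'pos': 0.3..., 'compound': 0.3...}

# Transformers pipeline: softmax over the final layer, so each score is a positive
# class probability and the predicted label is simply the largest one
sent_pipeline = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(sent_pipeline(text))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```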
@robmulla, great presentation, but I have looked through the videos on your channel and it appears you have not done one on fine-tuning a BERT model with a custom dataset. I am particularly wanting to learn how you would fine-tune a BERT model for multiclass text classification, maybe on Google Colab. I think many of us subscribers would love it. Thanks.
Hey Rob, great content man, it helps big time! I just cannot find what you conveniently find at 26:00. I work with PyCharm, so nothing is that automatic for me. Where can I download those files?
You can download the data via Kaggle! Check the notebook in the description. Hope that helps.
@@robmulla Hey man, for some reason I can find the Amazon food reviews but cannot find this one on the Input tab of your Kaggle Notebook.
what an absolute legend
It is really a wonderful video! I just wonder, @Rob, do we need to do cross-validation? Are there any hyperparameters that we also need to optimise? How do we do cross-validation here in NLP? Just like the normal ML cross-validation process? Should we worry about overfitting/underfitting problems? What would the learning curve look like with and without cross-validation? Thanks
Top-notch 🔥 !!
Thanks Akshat!
Thank you sir. You are my savior