Great content.
If I may, please do consider adding an addendum with the Python implementation of each topic!
Sir, could you please tell me how to calculate the accuracy of this model as well?
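For anyone else wondering about this: one straightforward check is hold-out accuracy -- keep some labeled rows out of training, classify them with the model, and compare predictions to the true labels. A minimal sketch in R (test_df and its text/label columns are illustrative names, not from the video's script):

preds <- sapply(test_df$text, classify_sentiment)  # classify_sentiment() from the video
accuracy <- mean(preds == test_df$label)           # fraction of correct predictions
print(accuracy)
table(Predicted = preds, Actual = test_df$label)   # confusion matrix: shows which class gets over-predicted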
Hi Spencer, how's it going?!
I got a question.
I've started studying Naive Bayes recently, and in some of the formulas the product of the prior probability P(H) and each evidence probability given the hypothesis, P(E|H), is divided by the probability of the evidence, P(E). At 7:50 you show your function and it doesn't have the division part. Any reason for that?
Ahh yes. This is Bayes' rule. The denominator P(E) is the same for every hypothesis, so dividing by it doesn't change which class scores higher -- for classification you can drop it.
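In symbols, with evidence E = (E_1, ..., E_n) and the naive independence assumption across the E_i:

P(H \mid E) = \frac{P(H) \prod_{i=1}^{n} P(E_i \mid H)}{P(E)}

Since P(E) does not depend on H, dividing by it rescales every hypothesis's score by the same constant, so the predicted class is unchanged:

\hat{H} = \arg\max_{H} P(H \mid E) = \arg\max_{H} \, P(H) \prod_{i=1}^{n} P(E_i \mid H)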
Thank you, Spencer. Your videos are super helpful. I have a quick question at 8:33 -- why multiply by the initial init_pos or init_neg? Wouldn't that give higher weight to the positive prediction?
Hi! Glad you liked it!
At 8:33, those are the weights for the positive and negative classes that exist within the dataset -- the score is P(N) * P(X_i | N). See lines 17 and 18 in the R script I posted on GitHub; that's where the variables come from. I hope that helps!
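For anyone following along without the script: the priors are just the class proportions in the training data, and they weight each class's likelihood by how common that class actually is, since the per-word probabilities are computed within each class subset separately. A rough sketch of the idea in R (train_df and its sentiment column are illustrative names, not necessarily the repo's):

init_pos <- mean(train_df$sentiment == "Positive")  # fraction of positive training rows
init_neg <- mean(train_df$sentiment == "Negative")  # fraction of negative training rows
# Each class score is then prior * product of per-word likelihoods, e.g.:
# score_pos <- init_pos * prod(p_word_given_pos[tokens])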
Got it. The priors level off the pos/neg probabilities that were calculated from the skewed class subsets. Thanks.
Love your videos! They're very useful, although it would be great if you could share the code online for the examples 😅
Sure thing! GitHub link: github.com/SpencerPao/Data_Science/tree/main/Naive%20Bayes
@@SpencerPaoHere Thanks man! I learnt a lot from your videos :)
Hey, this video is really useful. However, I have run into some problems:
> classify_sentiment(" My name is Sahil Sharma")
[1] "name" "sahil" "sharma"
[1] "Positive"
> classify_sentiment(" I do not like this Restaurant")
[1] "like" "restaurant"
[1] "Positive"
> classify_sentiment(" I hate the restaurant services")
[1] "hate" "restaurant" "services"
[1] "Positive"
> classify_sentiment(" Services were poor")
[1] "services" "poor"
[1] "Positive"
> classify_sentiment("You're fired")
[1] "fired"
[1] "Negative"
> classify_sentiment("The company is going under loss")
[1] "company" "going" "loss"
[1] "Positive"
See, in the second command I tried "I do not like this Restaurant", which is obviously a negative sentiment, but the classifier is incorrectly saying it's positive.
Could you please guide me on how to improve it?
This seems like a classification issue. There are a variety of ways to potentially improve this. You can start with the following: Are you tagging enough data to classify correctly? Are you tagging the appropriate parts of speech? What type of sentiment model are you using?
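One concrete thing visible in the output above: for "I do not like this Restaurant", the tokens that survive preprocessing are just "like" and "restaurant", so the negation never reaches the model -- "not" looks like it's being dropped with the stopwords. A common fix is to merge a negator into the following token so negated words become their own features. A minimal sketch of that idea in R (illustrative, not the preprocessing from the video):

handle_negation <- function(tokens) {
  # Fold "not X" into the single feature "not_X" so the model can
  # learn separate likelihoods for negated words.
  negators <- c("not", "no", "never")
  out <- character(0)
  i <- 1
  while (i <= length(tokens)) {
    if (tokens[i] %in% negators && i < length(tokens)) {
      out <- c(out, paste0("not_", tokens[i + 1]))
      i <- i + 2  # skip the word we just merged
    } else {
      out <- c(out, tokens[i])
      i <- i + 1
    }
  }
  out
}

handle_negation(c("do", "not", "like", "this", "restaurant"))
# [1] "do"         "not_like"   "this"       "restaurant"

With that in place, "not_like" is counted like any other word, so a phrase like "do not like" can pull the score toward negative instead of positive.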
@@SpencerPaoHere Thank you! I will look over these questions.
I faced the same issue: even when I select data I trained the model on, it shows negative data as positive. Have you come up with a solution, @Sahil Sharma?
@@semilshah8252 No, at that time I dropped the idea of using this model. But I think this is an issue with the training data, or too little of it. So I think you can dig deeper into the training part of the model.
@@sahil_shrma okay thanks