Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
The great thing is that you recommend other people's channels as well. It seems you are just trying to help people learn, no matter whose channel it is.
Respect 🙏🙏
This is the first time I am exploring machine learning and Python; I have never tried to learn Python before.
But your tutorials are just awesome; they make it much easier to learn and understand the concepts.
Great work! ❤👏
Great to see you back with a new tutorial. Your idea of explaining the theory first and then moving on to the practical part is awesome.
Hey flamboyant person. I was expecting to see your comment. How are you?
@@codebasics It's impossible that I wouldn't like and reply to your video. I am doing great. How are you, and how is your health now?
Simplicity is the ultimate sophistication. You are amazing!
Part 2 of this naive bayes tutorial. Email spam detection: ruclips.net/video/nHIUYwN-5rM/видео.html
Thanks a lot, sir, for your great support. I started my data science path through your videos. You are a really great mentor and an altruistic human being; I am proud of you, sir!!
Hey Prakhar, thanks for your kind words and I wish you all the best. I am sure you will become a successful data scientist one day. Good luck :)
@@codebasics Sir, we want a full playlist on deep learning and real-world data science and machine learning projects.
Your playlists worked as revision before my interview. Thank you for your support.
I think you are the best teacher. Thanks to you, I am learning data science. I hope I can find a job in this field. Greetings from Turkey.❤🤩
This is the best tutorial on YouTube. I understand the concepts easily.
I am happy this was helpful to you.
Great intro. For the last line of code in the notebook, I think we should use X, y instead, because now it's time for full model evaluation:
np.mean(cross_val_score(GaussianNB(), X, y, cv=5))
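For anyone trying that, a minimal sketch of the full-data evaluation; the file name and preprocessing here are only placeholders that roughly mirror the notebook:

import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# assumed preprocessing, roughly mirroring the notebook; 'titanic.csv'
# and the column names are placeholders for whatever the notebook uses
df = pd.read_csv("titanic.csv")
X = pd.concat([df[["Pclass", "Age", "Fare"]], pd.get_dummies(df["Sex"])], axis=1)
X["Age"] = X["Age"].fillna(X["Age"].mean())
y = df["Survived"]

# 5-fold cross-validation on the full prepared data, averaged into one score
print(np.mean(cross_val_score(GaussianNB(), X, y, cv=5)))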
That was the best tutorial I have ever watched about Naive Bayes.... Thank you so much ❤
We need more tutorials on deep learning, and please start a new AI series. Your machine learning tutorials are really good.
You are fantastic! If you were a lecturer you would be the one everyone likes!
I found that the RandomForest classifier performs slightly better than the Naive Bayes model. Anyway, love your tutorials, thank you for your hard work :)
You really know how to explain jargon in simple language. Thanks a lot.
You are doing a wonderful job... really learnt a lot from your videos.
Superb, sir, crystal clear explanation. There are so many videos on machine learning, but no one explains the way you do.
Thanks and welcome
Sir, I have one doubt here: since you have created dummies in this project, shouldn't you drop the first dummy column in order to avoid multicollinearity? Please revert with your comments if I am wrong.
I agree, have you figured out the answer?
Oh yeah, the dummy variable trap.
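For reference, a minimal sketch of dropping one of the two dummy columns with pandas (the values are just illustrative). The dummy variable trap mainly matters for linear models, but dropping the redundant column does no harm for Naive Bayes either:

import pandas as pd

df = pd.DataFrame({"Sex": ["male", "female", "female", "male"]})

# drop_first=True keeps only one of the two gender dummies,
# removing the perfectly correlated (redundant) column
dummies = pd.get_dummies(df["Sex"], drop_first=True)
print(dummies)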
This is a really good example and explanation.
What a gift for explanation!!!
Your teaching skills are the best. Please continue this series and cover all topics of ML. If that's not possible, then please provide links so that we can study. There is no channel that teaches ML the way you do. I hope you will come to India and do your dream job, "organic farming".
Oh Deepanshu.. I want to do that and spread awareness about eating right. Anyway, yes, I have a plan to cover many more topics in ML, stay tuned.
You are the best, man. And I'm not being naive.
Good, keep making these AI videos. I liked it; it was good to see the flow of functions in a single video.
Great video, great teaching, great speed, and great coverage of other misc stuff like fillna, drop, and concat. Please make more of these types of videos! Subscribed!
I am happy this was helpful to you.
Very well explained sir! Thanks :-)
Always love the way you teach.
..you are amazing
Glad it was helpful!
Thank you, brother, for the explanation.
Sir, these are the best videos with the best explanations. Thanks a lot for these resources.
Please try to upload more projects, and please explain in more detail when to use which classifier.
Thank you.
Sir, how did you come to know that the data follows a bell curve (Gaussian distribution)?
Hello,
Thanks for the explanation.
I was wondering, don't we need to normalize the data? Let me know your thoughts on this.
Naive Bayes can work without any feature scaling.
Thank you for this very well explained Tutorial.
Glad it was helpful mario!
Generally, people suggest using GaussianNB if we are dealing with continuous features like age & fare, BernoulliNB for binary data like female & male, and CategoricalNB for discrete features like Pclass. So, can GaussianNB take all types of features??
I have a small query here. Why did we not drop either the female or the male column after one-hot encoding to avoid the dummy variable trap?
Please complete the tutorials for deep learning.
@6:20 Why can't we use LabelEncoder instead of pandas dummy variables?
Both can be used; it is just that he is more used to one-hot encoding, which is what pd.get_dummies() does.
Otherwise, the results will be the same for both.
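A quick sketch of the two encodings on a toy gender column (the column names are illustrative); for a binary feature like this, the single integer-coded column carries the same information as either dummy column:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"Sex": ["male", "female", "female", "male"]})

# LabelEncoder: one integer-coded column (0/1 for a binary feature)
df["sex_encoded"] = LabelEncoder().fit_transform(df["Sex"])

# pd.get_dummies: one 0/1 column per category (one-hot encoding)
dummies = pd.get_dummies(df["Sex"])

print(df)
print(dummies)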
Very nice video, sir. Can I also use logistic regression? I think it would give a similar result to Naive Bayes.
Great job, sir, thank you.
Glad it was helpful!
Sir, why is there a target variable????? Isn't this a clustering algorithm, i.e. unsupervised... and a target variable is used in supervised learning?
"target" is just a variable; you can take anything as a variable.
Wonderful video, thank you. Simple but well-explained! It has helped me a lot. =)
Glad it helped!
Just what I was looking for, thanks sir. Oh, you're great, great.....
Thanks, man. I really appreciate the love from all of you :)
@@codebasics As usual, many thanks to you. Please, if you have material for data science and don't mind, send it to me: suliman_allahgabo@yahoo.com
Hi. Thanks so much.
Shouldn't we split the data first and then perform preprocessing, or does it not matter?
Afaik, the order doesn't really matter. If you split first, you'd need to preprocess all the parts individually.
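A rough sketch of the split-first variant, assuming Titanic-style columns as in the video (the file name and columns are placeholders). A common convention is to compute the imputation value on the training part and reuse it on the test part:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("titanic.csv")  # placeholder file name
X = pd.concat([df[["Pclass", "Age", "Fare"]], pd.get_dummies(df["Sex"])], axis=1)
y = df["Survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# impute missing Age after the split: the statistic comes from the training
# part and is applied to both parts, so nothing leaks from the test set
age_mean = X_train["Age"].mean()
X_train = X_train.fillna({"Age": age_mean})
X_test = X_test.fillna({"Age": age_mean})

model = GaussianNB().fit(X_train, y_train)
print(model.score(X_test, y_test))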
Thanks for this video! One thing I don't understand: at 2:33, shouldn't P(queen) be 1/4 and P(diamond) 1/13, as there are 4 queens and 13 diamonds in the deck?
You are right, I think there is an error in the presentation.
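For anyone rechecking the arithmetic, the standard-deck numbers work out like this (independent of what the slide shows):

# a standard 52-card deck has 4 queens, 13 diamonds, and 1 queen of diamonds
p_queen = 4 / 52                 # = 1/13
p_diamond = 13 / 52              # = 1/4
p_queen_given_diamond = 1 / 13   # one queen among the 13 diamonds
p_diamond_given_queen = 1 / 4    # one diamond among the 4 queens

# Bayes' theorem check: P(queen|diamond) = P(diamond|queen) * P(queen) / P(diamond)
assert abs(p_queen_given_diamond - p_diamond_given_queen * p_queen / p_diamond) < 1e-12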
Thanks a lot.. can I ask you, what if I want to show the X_test result after TF-IDF, sir? I have tried with only the X_test code, but the results are not as desired.
Great, thanks for this series... Please, can you do a series on evaluation metrics? I would love to see an explicit explanation of them.
Can we also do "one-hot encoding" instead of dummy variables?
Wow, I am first! 😃😃 #LovePython
oh yup. Hanzo.. you got the "first commenter" award :) ha ha...
super video
Don't we need to drop one dummy column?
Does the dummy variable trap apply only to linear models?
I agree, have you figured out the answer?
Thanks a lot, sir.
Hey @codebasics, your way of teaching is awesome. I have a question here: why is there a target variable in unsupervised learning?
Why do we use train_test_split? We could use cross-validation for better results, couldn't we?
I have a question: why did you not drop either the female or the male column? In your previous tutorials, you said one column should be dropped when converting using dummies. Thanks...
I agree, have you figured out the answer?
Sir, please explain when to use the mean, median, or mode for null values.....
Should I use the mean or the median for NaN values? In one video you told us to use the median, and in this video it is the mean. Which one is the right one to use?
The median is more robust to outliers, so it is generally a better idea.
Features like age are roughly normally distributed, hence the mean can also be safely used.
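A tiny sketch on a made-up Age column: with a strong outlier the median stays near the typical value while the mean gets pulled up:

import pandas as pd

age = pd.Series([22, 25, 30, 28, None, 80, None])

print(age.mean(), age.median())           # 37.0 vs 28.0: the mean is pulled by the outlier (80)
print(age.fillna(age.mean()).tolist())    # NaNs replaced by 37.0
print(age.fillna(age.median()).tolist())  # NaNs replaced by 28.0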
Awesome! But I have a doubt: why have we not normalized certain columns of our dataset?
Sir, please make tutorials on natural language processing (NLP).
Where did you learn NLP from???? (I assume you did something specific to learn NLP.)
Why does my predict_proba return values > 1 and
same issue
How do you decide which features to keep and use in the model, and which to drop? I mean, is there any strategy for handling this situation?
I have a request; if you could make one video on it, that would be very helpful. I want to know, when we write a UDF in Python, how can I check at each and every step whether the function is working or not, before completing the whole UDF?
By saying male is a feature, are you sure you did not confuse it with sex? I thought sex would be the feature, with values such as male and female. Unless you meant male as a feature taking the values yes or no?
Hi, we used the Titanic data earlier with the DecisionTreeClassifier model. I compared the scores: the score is higher for DTC than for Naive Bayes, and we get probabilities with DTC as well. So I just wanted to ask, how do we know which model is best to use in real projects? Please suggest.
For a categorical dataset, how can we decide whether the problem can be solved using the Naive Bayes algorithm or not? Or which algorithm will give high accuracy?
Based on the type of problem, you might end up using one or the other algorithm. You can use GridSearchCV to evaluate the performance of different algorithms with different parameters. Please watch my video in this same series; it is called "Hyperparameter Tuning Using GridSearchCV".
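A minimal sketch of comparing two classifiers that way, assuming X and y are the prepared features and target from the video; the parameter grids are only examples:

from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

candidates = {
    "naive_bayes": (GaussianNB(), {"var_smoothing": [1e-9, 1e-8, 1e-7]}),
    "decision_tree": (DecisionTreeClassifier(), {"max_depth": [3, 5, None]}),
}

# grid-search each model's parameters with 5-fold CV and report the best score
for name, (model, params) in candidates.items():
    search = GridSearchCV(model, params, cv=5)
    search.fit(X, y)  # X, y: prepared feature matrix and target from the notebook
    print(name, search.best_score_, search.best_params_)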
Do you not need to remove one of the dummies, "male" or "female"? It does not make sense to have both of them, since in that dataset whoever is not male is female. To my knowledge that is an essential step.
Yes, you're right, these are perfectly negatively correlated. Including both features would give gender a higher influence on the inference.
Hey, do you know what the fit_prior parameter in Naive Bayes is for? I don't understand it from the documentation, thanks.
Sir, is it necessary to learn the mathematics behind machine learning algorithms, or is an overview of the mathematics enough? Sir, please tell me, because I am very confused.
You need to know some math, but not in great depth, so don't worry too much about it. If you want to become a machine learning engineer or data scientist who solves complex problems, then of course advanced math knowledge is always useful.
@@codebasics Sir, if I want to become a machine learning engineer, is some knowledge of math enough? Please reply, I'll wait.
Sir, please make these in Hindi as well; you explain very well.
Sure, Jatin. Search for "codebasics hindi" on YouTube; I have already uploaded those ML videos in Hindi.
thank you so much
Glad it was helpful!
Sir, why have you used the GaussianNB model instead of a logistic regression model or any other?
Because this tutorial is on Naive Bayes 😊
@@codebasics So can we use any other model for this task of mail classification, or is this model best suited for it?
super
Hi, the lectures are just amazing. Can you please make a tutorial on how to write custom layers in Keras, like we do in a variational autoencoder? Please, man, there is almost no resource on the internet explaining it properly.
Thanks, Gaurav, for the appreciation. I have noted down the topic you suggested and will get to it in the future 👍
Hi, a couple of questions, hope someone can help please:
1) I thought Gaussian NB only takes continuous feature values. But here, there are continuous (e.g. Age) and discrete (e.g. Gender) values. Can I use Gaussian NB if all features are discrete?
2) One-hot encoding splits the Gender data into two columns: Male and Female. These features are related, i.e. mutually exclusive. Does the Gaussian NB algorithm jointly 'process' these two columns as one feature or as two separate features?
Hope someone can enlighten me. Thanks.
Hey, I have the same doubts regarding Gaussian NB. Did you figure it out? It would be really helpful for me :)
May I know the reason for using one-hot encoding rather than label encoding??... Can anyone clear my doubt?
thank you
Just for the ones who might be as stupid as me and were missing the "Survived" column: on Kaggle there are two files, one test and one train file. Take the train file instead :)
Good explanation, but the problem is, in the dataset I'm not able to find Jack and Rose :/
thanks
Aren't the female and male columns 'highly related'?
Exercise solution: github.com/codebasics/py/blob/master/ML/14_naive_bayes/Exercise/14_naive_bayes_exercise.ipynb
Step by step guide on how to learn data science for free: ruclips.net/video/Vn_mmOuQkSA/видео.html
Machine learning tutorials with exercises:
ruclips.net/video/gmvvaobm7eQ/видео.html
Input contains NaN, infinity or a value too large for dtype('float64'). - Can you help with the error?
Use the fillna() method to fill the NaN values, as shown in the video.
Sir, how is your health? I'm waiting for your videos.
Sir, how do we encode multiple variables at once?
hey Jainmiah, my health is improving. The full recovery might still take one complete year but at least I am in a position to upload videos now.
@@codebasics I pray to God for your quick recovery to good health. #LoveCodeBasics and #LoveYouSir.
Can't the Sex column be simplified with label encoding??
Where is the code for this tutorial??
Check the video description for the GitHub link.
Hi, I am getting the following error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
But I checked and there are no NaN values in my dataframe.
inputs.info()
gives me this output:
RangeIndex: 891 entries, 0 to 890
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pclass 891 non-null int64
1 Age 891 non-null float64
2 Fare 891 non-null float64
3 female 891 non-null uint8
4 male 891 non-null uint8
dtypes: float64(2), int64(1), uint8(2)
memory usage: 22.7 KB
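One way to track this down (a sketch; inputs is the dataframe from the comment above, and target is assumed to be the label series passed to fit()): the offending value may be infinity rather than NaN, or it may sit in the target:

import numpy as np

print(inputs.isna().sum())                             # NaNs per feature column
print(np.isinf(inputs.select_dtypes("number")).sum())  # infinities per numeric column
print(target.isna().sum())                             # the error can also come from the target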
Sir, can you upload more videos on ML algorithms like this??
Yes, Ayaz, sure. I plan to upload more videos on this topic.
Hello sir,
please do some videos on natural language processing. I am waiting for this badly.
Wow.... actually that's the topic I am going to cover next. You almost read my mind. I will start that series soon.
@@codebasics Yes... I am also waiting... I started studying data science a few days back... your way of teaching is simply awesome...
Where can I get the dataset?
github.com/codebasics/py/tree/master/ML/14_naive_bayes
I am getting a file-not-found error when I import the dataset using the same code.
If you're using an IDE, you should copy the file into the workspace you're working in.
Boss, can you please make a lecture on reinforcement learning and also one on Q-learning??
could not convert string to float: 'Birnbaum, Mr Jakob' - how do I eliminate this error?
Sir, how can we improve its accuracy?
We aren't able to download your code... it's coming up as invalid.
I have checked; all the URLs are working perfectly. Please check the URL in the description.
We could have just done pd.get_dummies(df, columns=['Sex'])... no need to perform the concat. @6:45
y should be a 1d array, got an array of shape (179, 5) instead.
How is P(diamond|queen) 1/4?
Can somebody explain it to me?
Please help me, I got this error message: "could not convert string to float: 'male'". Can someone explain why this happens?
Use sklearn encoding.
7:04
Better than Andrew Ng
Really? Isn't Andrew Ng famous in artificial intelligence?
How can I get the dataset?