Machine Learning Tutorial Python - 9 Decision Tree
- Published: 22 Jul 2024
- The decision tree algorithm is used to solve classification problems in the machine learning domain. In this tutorial we will solve an employee salary prediction problem using a decision tree. First we will go over some theory and then do a coding practice. At the end I have a very interesting exercise for you to solve.
#MachineLearning #PythonMachineLearning #MachineLearningTutorial #Python #PythonTutorial #PythonTraining #MachineLearningCource #DecisionTree #sklearntutorials #scikitlearntutorials
Code: github.com/codebasics/py/blob...
csv file for exercise: github.com/codebasics/py/blob...
Exercise solution: github.com/codebasics/py/blob...
Topics that are covered in this Video:
0:00 - How to solve classification problem using decision tree algorithm?
0:26 - Theory (Explain rationale behind decision tree using a use case of predicting salary based on department, degree and company that a person is working for)
2:10 - How do you select ordering of features? High vs low information gain and entropy
3:52 - Gini impurity
4:28 - Coding (start)
9:11 - Create sklearn model using DecisionTreeClassifier
13:32 - Exercise (Find out survival rate of titanic ship passengers using decision tree)
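For anyone who wants to see the end-to-end flow before watching, here is a minimal sketch of what the tutorial builds. The tiny dataset below is made up for illustration; the video uses a csv from the GitHub repo, and the column names here are assumptions based on the video's description.

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical mini dataset mirroring the video's salary example
df = pd.DataFrame({
    "company": ["google", "google", "facebook", "abc_pharma"],
    "job": ["sales executive", "business manager", "sales executive", "computer programmer"],
    "degree": ["bachelors", "masters", "masters", "bachelors"],
    "salary_more_than_100k": [0, 1, 1, 0],
})

inputs = df.drop("salary_more_than_100k", axis="columns")
target = df["salary_more_than_100k"]

# Encode each categorical column as integers, one encoder per column
for col in inputs.columns:
    inputs[col] = LabelEncoder().fit_transform(inputs[col])

model = DecisionTreeClassifier()
model.fit(inputs, target)
print(model.score(inputs, target))  # training accuracy on this toy data
```

This is only a sketch of the shape of the workflow, not the video's exact notebook.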
Do you want to learn technology from me? Check codebasics.io/?... for my affordable video courses.
Next Video:
Machine Learning Tutorial Python - 10 Support Vector Machine (SVM): • Machine Learning Tutor...
Popular Playlists:
Data Science Full Course: • Data Science Full Cour...
Data Science Project: • Machine Learning & Dat...
Machine learning tutorials: • Machine Learning Tutor...
Pandas: • Python Pandas Tutorial...
matplotlib: • Matplotlib Tutorial 1 ...
Python: • Why Should You Learn P...
Jupyter Notebook: • What is Jupyter Notebo...
To download the csv files and code for all tutorials: go to github.com/codebasics/py, click the green button to clone or download the entire repository, then go to the relevant folder to access that specific file.
🌎 My Website For Video Courses: codebasics.io/?...
Need help building software or data analytics and AI solutions? My company www.atliq.com/ can help. Click on the Contact button on that website.
#️⃣ Social Media #️⃣
🔗 Discord: / discord
📸 Dhaval's Personal Instagram: / dhavalsays
📸 Codebasics Instagram: / codebasicshub
🔊 Facebook: / codebasicshub
📱 Twitter: / codebasicshub
📝 Linkedin (Personal): / dhavalsays
📝 Linkedin (Codebasics): / codebasics
🔗 Patreon: www.patreon.com/codebasics?fa...
Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
It would be better for us if you could provide the slides, sir. Can you please share the slides as well?
Cool
Hi Dhaval sir, please note I tried to register in Python course. But the link is not working on the site
@codebasics
Hello Sir, Regarding the encoding approach (label encoding) used in the video, I read on the sklearn documentation that it should be used only on the target variable (output "y") and not the input feature ("x"). The documentation stated that for input feature one should use either onehotencoder, ordinalencoder, or dummy variable encoding.
Also, I was expecting that you use onehotencoder(OHE) since the input features (company, job and degree) are nominal and not ordinal variables. Is it best practice to use OHE for nominal variables or it just doesn't matter?
Please could you clarify for me???
Thank you.
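For readers with the same question, here is a small side-by-side sketch of both encodings (the company values are made up). Label encoding produces one integer column, which implicitly imposes an ordering; one-hot encoding via pandas.get_dummies produces one binary column per category:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"company": ["google", "facebook", "abc_pharma", "google"]})

# Label encoding: a single integer column (categories are sorted alphabetically)
df["company_n"] = LabelEncoder().fit_transform(df["company"])

# One-hot encoding: one 0/1 column per category, no implied ordering
dummies = pd.get_dummies(df["company"])
print(df["company_n"].tolist())
print(dummies.columns.tolist())
```

For tree-based models the implied ordering usually matters less than for linear models, which is presumably why the video gets away with label encoding here; for linear or distance-based models, one-hot encoding is the safer default for nominal variables.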
My model got a score of 98.6%. I dropped all the Age Na values which reduced the sample size from 812 to 714. I label-encoded the Sex column and then used a test size of 0.2 with the remainder of 0.8 as the training size. I am all smiles. Thanks @codebasics
Step by step roadmap to learn data science in 6 months: ruclips.net/video/H4YcqULY1-Q/видео.html
Exercise solution: github.com/codebasics/py/blob/master/ML/9_decision_tree/Exercise/9_decision_tree_exercise.ipynb
Complete machine learning tutorial playlist: ruclips.net/video/gmvvaobm7eQ/видео.html
5 FREE data science projects for your resume with code: ruclips.net/video/957fQCm5aDo/видео.html
97.97% accurate
@@bestineouya5716 i also got the same accuracy
Great explanation, Sir. Thanks a lot for your efforts and help. I got 97.76% accuracy. I did not map male and female to 1 and 2; I used them as-is. Is it necessary to do that? Is there any significance to it?
Exercise results ::::: Accuracy : 0.8229665071770335
Actually, I used your csv file as training data and used the test.csv provided on Kaggle as test data,
>> which increased my training data (it would have been less if I had split my data),
>> increased accuracy (as we have more data to train on),
>> and reduced the chances of overfitting compared to using the same data for both training and testing.
Thank you.. for great video
0.98
This is by far the most straight forward and amazing video on decision trees I have come across! Keep making more videos Sir! I am totally hooked to your channel :) :)
Thanks kartikeya for your valuable feedback. 👍
Hi sir, I am a 10th grade student and I am learning ML and in the exercise My model got 81% accuracy😀 sir. Will Make many models while learning and share with you. Thanks for the tutorials sir.
It is OK to learn ML, but make sure you find time for outdoor activities, sports and some fun things. Childhood will never come back; do not waste it in search of some shiny career. If you are so concerned, I would advise focusing on math and statistics at this stage and worrying about ML later.
@@codebasics ok sir. Thanks for guidance!
Hi bro, what are you doing now?
How to fill empty value on age feature
@@codebasics Absolutely correct, it's great to learn new things. But this is not the right age to learn all this. Make more and more memories in childhood. I am 23 and trust me, life is very painful…
This man has actually made learning machine learning easy for everyone, whereas other channels show big mathematical equations and formulas, which makes beginners uncomfortable learning ML.
But thanks to this channel.♥️🥰
very True. Complex concept explained in very understanding way. Hats off really
Played with the test_size a bit and I managed to push out a score of 87% max. Appreciate the tutorial a lot!
For those wondering what 'information gain' is, it is just the measure of decrease of entropy after the dataset is split.
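To make that definition concrete, here is a small sketch computing information gain as the drop in entropy across a hypothetical split (the class counts are invented for illustration):

```python
from math import log2

def entropy(p, n):
    """Entropy (in bits) of a node with p positive and n negative samples."""
    total = p + n
    out = 0.0
    for c in (p, n):
        if c:
            frac = c / total
            out -= frac * log2(frac)
    return out

# Parent node: 5 positive / 5 negative samples -> maximum impurity
parent = entropy(5, 5)  # 1.0 bit

# A split producing children (4,1) and (1,4), each holding half the samples
child = 0.5 * entropy(4, 1) + 0.5 * entropy(1, 4)

info_gain = parent - child
print(round(info_gain, 3))  # 0.278
```

The feature with the highest information gain is the one a tree would prefer to split on first.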
Incredible video! Thank you for sharing your knowledge. I scored 83.15%. I changed the hyperparameter "criterion" to entropy instead of gini and it consistently performed better. Looking forward to seeing how changing other hyperparameters affects accuracy.
That’s the way to go niko, good job working on that exercise
Your videos are absolutely awesome.... Those who want a career transition into DS typically spend more than 3k US dollars on certification, and what they ultimately get is a diploma or a degree certificate in Data Science, not what is actually happening in data science. But when a scholar like you trains us, we come to know what's happening in it.
K Prabhu, thanks for your kind words of appreciation.
Best description of Information gain, your explanation is really the only resource that explains the intuition well
Excellent tutorial. In the exercise I used three different methods to fill Age:
1- Backward fill, forward fill, median of Age
2- Median age of surviving females to fill missing ages of surviving females, median age of surviving males for surviving males, and the same for those who did not survive.
3- Interpolate method.
Using train_test_split with a 0.3 test size, I got a max of 82% accuracy, and I also tried changing gini to entropy for each approach.
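For anyone trying the same exercise, here is a sketch of the imputation styles mentioned above, applied to a made-up Age series with missing values:

```python
import pandas as pd

# Hypothetical Age column with missing values, like the Titanic exercise
age = pd.Series([22.0, None, 38.0, None, 26.0])

filled_median = age.fillna(age.median())  # median imputation
filled_interp = age.interpolate()         # linear interpolation between neighbors
filled_ffill = age.ffill()                # forward fill (propagate last seen value)

print(filled_median.tolist())
print(filled_interp.tolist())
print(filled_ffill.tolist())
```

Which imputation works best is dataset-dependent, which is presumably why scores vary so much between the commenters here.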
Thanks for this tutorial mate, it is the best straight forward DTC tutorial.
Using entropy I got 81% accuracy, and using gini I got 78% accuracy.
That’s the way to go Larry, good job working on that exercise
Thank you so much for the tutorial. Im doing all the exercise.I got an accuracy of 81% on titanic dataset
Sujan that a decent score. Good job 👍👏
Got an accuracy of 78.92
Thanks for the Lovely tutorial !
Hello sir, I received an accuracy of 97.97% for the given exercise. Thank you for the wonderful tutorials, all of them are very helpful and I am performing all exercises that you give at the end of the video.
Thank for these awesome videos. I have been learning a lot through your ML tutorials.
I replaced the missing values in the 'Age' column with the median. My test set was 20% and my accuracy on test data was 99.44%.
how? can u share the solution?
There is high chance that the model is overfitted, it is not generalized
And chances are that your model has already seen your test data; better rerun from the first cell once and check...
Do you have any thing related to sentiment analysis/Text mining/Text analysis? please have a tutorial for the text analytics as the other videos are so good
I also request you to create chats for AUC and also create a model evaluation according to CRISP DM model
Thanks for the video. My model got an accuracy of 83.5%. Glad to be this far with the data science roadmap. Continue with good work sir.
Thanks for the awesome tutorial....
Dropped all na values in Age column which reduced the sample size from 812 to 714 and ran the model couple times, the best accuracy I got was 83.21%
In In [8] you use the "le_company" LabelEncoder object 3 times and never use the 'le_job' and 'le_degree' objects. It still works, so my guess would be that you only need one LabelEncoder object to do the job.
A label encoder basically converts categorical values to numerical ones; since job and degree are categorical, you still need them to be label-encoded. And he did use them; look carefully at the fit_transform() calls.
@@rajubhatt2 he encoded them using company object Only though
Well, here it worked because Sir used fit_transform, but if he had split the data into test and train sets, then he would have used transform on the remaining test set, and for that a different encoder instance would be required for each column.
When I use only one object, my first 2 rows are dropped from the dataset. Why?
Got an accuracy of 97.20%
Dropped all rows whose values were missing.
Thank you, Dhaval sir..
Kuldeep, that is indeed a nice score. good job buddy.
Mine is 98.459%. Likewise I removed all missing data for Age.
@@elvenkim Why are you removing the missing values? It is possible to fill them with the mean or median, depending on whether outliers are present in the Age column.
Hi! Thank you for this playlist
Help me explain this: I used different methods to encode strings from the Sex column (1: LabelEncoder, 2: get_dummies, 3: map), then I filled NaNs with the mean() method and used the same test_size for all three encoding methods, BUT I got different accuracies. Tell me why?
I got 98.4 % in titanic data set . Thank you Sir , you are the best.
Oh wow, good job arnab 👍😊
I don't want to hurt your feelings, but 98.4% is only possible if you are checking the model score on train data instead of test data.
true@@jayrathod2172
@@jayrathod2172 Yes, I was about to say that. It is also possible if you have changed the random state multiple times, so your model has seen all your data and is now overfitted.
Thanks for helping me get my homework done, by God it was a mistake to wait till the last day
oh i couldn't agree more
I got a warning and hit a dead end when label encoding input['Fare']:
"TypeError: Encoders require their input to be uniformly strings or numbers. Got ['float', 'method']"
I tried to solve it by changing to int or string but failed. Can anyone help me?
Very nice video! Can you use anything as target or does it need to be binary? so is it also possible to use the salary for example?
Will never get tired to say thank you at every video I watched but honestly, you're the best! :) Keep posting great videos
I am happy this was helpful to you.
got 97.4% accuracy filled the empty blocks in age with mean.
thanks a lot for perfect tutorial
How to calculate accuracy for the above dataset mentioned in the video
thanks it helped me increase my accuracy
Is there a way to get the mapping after using LabelEncoder? I used LabelEncoder in a larger, different dataset from this one. After I fit the model, I want to try predictions but I don't know which string belongs to which code. Help please.. Thanks.
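One way to recover the mapping is through the encoder's classes_ attribute, which stores the original labels indexed by their encoded value. A small sketch (the labels are invented for illustration):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(["google", "facebook", "abc_pharma"])

# classes_ holds the original labels in encoded order, so zipping them
# with their own transform gives the full label -> code mapping
mapping = dict(zip(le.classes_, le.transform(le.classes_)))
print(mapping)

# Reverse direction: decode integers back to labels
print(list(le.inverse_transform([0, 1, 2])))
```

Note that LabelEncoder assigns codes in sorted label order, not in order of first appearance.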
Hey! Where can I get a dataset for the exercise? I cannot find it anywhere in the description
Thank you so much sir, I really appreciate your tutorial, I learnt a lot
Krijancool, thanks for the comment. By the way your name is really cool 😎
can anyone tell me why didn't we use OneHotEncoding in this example????
does it mean that we need dummy variable only in Regression algorithms???
I also got the same question. I appreciate if somebody help.
Maybe here is the answer: "Still there are algorithms like decision trees and random forests that can work with categorical variables just fine". datascience.stackexchange.com/questions/9443/when-to-use-one-hot-encoding-vs-labelencoder-vs-dictvectorizor
use pandas.get_dummies
Using One hot encoding worsens the accuracy of trees...therefore it's recommended to use label encoding
I filled the null values in Age column to Age.mean(). I also label encoded the 'Sex' column followed by a train_test_split and normalization using Standard Scaler. When I checked my accuracy score, it came out to be just 77%. What did I do wrong?
in the inputs do we have to take all the columns except the target or we can take any number of columns
i have only started to learn about data science using python and i have a question: Why use labelencoder rather than getting dummy variables for the categorical variables? Is it more efficient using labelencoder?
I prefer the .get_dummies()
Hi Sir, Thanks for the great video.
I've a question, why didn't we use one hot encoding here for our categorical variables?
We can but for decision tree it doesn't make much difference that's why I didn't use it
@@codebasics Ohh, OK Sir. Thank you
@@codebasics But then doesn't the model give a higher priority(value) to Facebook than to google on the basis of the number assigned in Label Encoding ...just confused here.
Sir, in the Pclass column the values are present like 1st, 2nd, 3rd... so how do I change these values into integers? When I use LabelEncoder() I get an error.
Great video! Thank you. I have a question: I am working on a logistic regression problem and trying to feature-engineer some of the text columns. Do you think LabelEncoder can serve my purpose?
Thanks again!
Really appreciate your work; learning a lot... I just want to confirm something from the tutorial @7:40: you are using fit_transform with the le_company object for all the other columns and did not use the le_job and le_degree objects. Is that OK, or should we use them? Thank you very much again.
That's just the variable name you can use that way too..
As usual, all your videos are awesome to watch. Thanks for the same :)
can you please say why fit_transform () is used with labelencoder?
Learning quite a lot through these wonderful tutorials, thanks Codebasic.
Great Tutorials keep going but I have a doubt why haven't you used onehotencoder for company here as it is nominal variable? and please make a tutorial on what exactly these parameters are and on random forests
True; one-hot encoding is better than LabelEncoder here, as assigning numbers to categories would result in prediction errors if that feature is chosen, because a higher category number is treated as greater than the others. So in this case, if google = 0 and FB = 1, then FB > Google.
@@swatiamonkar9827 Thank you for the clarification, actually i was trying it with OneHotEncoder and resulted in mis-prediction.
Hurray! Sir i got an accuracy of 97.38% by using interpolate method for Age column.😍✨
Did u use train_test_split method
Good job Prudhvi, that’s a pretty good score. Thanks for working on the exercise
It was easy.. i got 98.3%
Hello! What can be done if we have an overfitting situation? (trainning_set accuracy 0.88 vs testing_set accuracy of 0.4)
A question I have is: aren't we supposed to do one-hot encoding since the variables are not ordinal? Or do decision trees take care of it, since they don't consider the magnitude of features but rather the feature values to determine the rules?
There are some NaN values in the Age column. I filled them through padding. Also, I split my data for testing, and in the end I got an accuracy of 0.8.
Use fillna with median and accuracy will be 0.9777 by normal method
@@piyushtale0001 I used median() for Pclass, Age and Fare but got a score of around 78. How can I improve?
for x in features.columns:
    features[x] = le.fit_transform(features[x])
How do I write the predicted values into a csv file?
For example, for model.predict(test_data), I want the output array in a csv file, submission.csv.
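One common way to do this is to wrap the prediction array in a DataFrame and call to_csv. A sketch with a made-up predictions array (the column name "Survived" is just an assumption from the Titanic exercise):

```python
import numpy as np
import pandas as pd

# Hypothetical predictions array, standing in for model.predict(test_data)
predictions = np.array([0, 1, 1, 0])

# Wrap the array in a DataFrame and write it out without the index column
pd.DataFrame({"Survived": predictions}).to_csv("submission.csv", index=False)

# Read it back to confirm the round trip
print(pd.read_csv("submission.csv")["Survived"].tolist())
```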
Hello sir, at 7:50 a LabelEncoder is created for each of the columns company, job and degree, but when we call fit_transform, why is only le_company used?
For job and degree should we write le_job.fit_transform() and le_degree.fit_transform()?
Am I right? Please answer 😶
Got accuracy as 76.22 %. Tried by tweaking train data & test data but no significant difference. Thank you very much for simple & clear explanation.
How to fill empty value on age feature
Thank you so much for making it very simple. As an ML learner, will do we need to understand the code behind each of these sklearn functions ?
Not necessary. If you know the math and internal details, it can help if you want to write your own customized ML algorithm, but otherwise no.
@@codebasics can you recommend videos for understanding the math behind it, thanks
Your VIdeos are always Awesome!
Can u suggest me some websites where I can find Questions like those in ur Excercises and all?
Hey, honestly I am not aware of any good resource for this. Kaggle.com is there but it is for competition and little more advanced level. Try googling it. Sorry.
Hi, wouldn't a label encoder mean you're assigning some sort of ordering to the values?
I am starting to learn about machine learning and your videos help me so much with understanding.
Why we have not created dummy variables here as we have done in Logistic Regression using OneHotEncoder
In one hot encoding turorial you mentioned its better cos then we dont have encoding which has relation to each other. Please clarify. These videos are teaching me a lot.
Yes same doubt. Have you cleared your doubt? If yes, then please tell.
I think company should be given one hot encoding while job and degree should be label encoded.
thank you for such amazing, well detailed and easy to understand tutorial(s) !
im following your channel exclusively for learning ML, along with kaggle competitions.
also recommending your channel to my peers.
great work..!
PS - i got 75.8% as the score of my model in for the exercise.
any tips to improve the score?
take test_size=0.5 it increases to 78.15%
re execute the test train split function as it generates rows randomly. Then Again fit the model and execute. Continue this for 4-5 time until u get somewhere around 95% accuracy. So this set of data is the most accurate for training the model.
Hello, thank you for creating this content. I am specifically looking for something that will help me with this task: Excel has a lot of raw data of x, y, z, where z = x - 2y. Can I query the dataset if I know z, to find out which combinations of x and y will yield that particular z?
Can we convert these labels with get_dummies instead of a label encoder? Please explain if there is some difference.
I got 74.4% accuracy. it is good to do everything by my own....
That’s the way to go anujack, good job working on that exercise
Why didn't we use dummy column concept here like we did for linear regression?
Since trees can have many levels, the dummy variable concept doesn't work well here, so we try to avoid it.
@@naveedarif6285 How can we do a train/test split on this dataset? Please help.
Sir, I want to ask: is it possible to get multiple target values, e.g. if I have more than one target value for my input? This tree is giving only one.
Thanks a lot sir, I just learnt so many things without getting bored (usually we don't get hands-on practice for these topics); this was super helpful.
Amazing video! But I have some doubts, please help me here:
1. We made three LabelEncoder instances here. Can't we use just one to encode all three?
2. We used label encoding and not OneHotEncoding; however, the latter seems to make more sense, as our model might assume that our variables have some order/precedence.
It would be great if you could clarify my doubts. Thanks!
It is necessary to understand the underlying logic of the algorithm. In regression, the algorithm tries to fit a line or a curve (or a higher-dimensional object in SVM), so the relative value (the order, or where it sits on the axis) matters. In a decision tree, the algorithm is just asking yes/no questions, such as: Is the company Facebook? Does the employee have only a bachelor's degree? So the order is not significant. Therefore a label encoder is valid for a decision tree.
While it would have been possible to lump the label encoders into one, say by using a power of 10 to distinguish them, that would have given too much weight to the highest power of 10 (the algorithm understands numbers, so it is going to ask >/</= questions), but the whole point of using a decision tree was for *the algorithm* to find the precedence of features that gives the quickest prediction. Therefore it is better to have more features (i.e. more label encoders).
Then, if more features is better, one could re-ask why not one-hot encoding, which would give even more columns. Now the issue is the tradeoff of accuracy vs. conciseness. Here there were only 3 companies, but a problem could examine over 100 companies; having a one-hot column for every company would get quite cumbersome.
Exercise result for the titanic dataset: Score: 0.77 (using Decision Tree Classifier)
DUDE CAN U PLS SHARE ME THE CODE.... IM GETTING ACCURACY 1.0
@@cyberversary262 You are giving the entire dataset to be trained on.
Better try a proper split (use a test_size of 0.2-0.3) to get realistic results.
@@prakashdolby2031 dude I have asked this question 3 months ago 😂😂😂
What is the datatype of the target variable? I executed model.fit(inputs_n, target) and it threw this error: ValueError: Unknown label type: 'unknown'. Please help.
ValueError: could not broadcast input array from shape (2,712) into shape (1,712)
I'm getting this error whenever I try to fit (xtrain, ytrain) in the model.
Can anyone please resolve it?
Please can anyone tell me how to increase our model's accuracy, i.e. the score?
Increasing the score is an art as well as a science. If your question is specific to decision trees, try fine-tuning model parameters such as the criterion, tree depth, etc. You can also try some feature engineering and see if it helps.
I tried increasing the training data and the score increased.
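A sketch of the parameter tuning suggested above, using GridSearchCV over criterion and max_depth. This uses sklearn's built-in iris data as a stand-in, not the tutorial's csv:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Try a few of the hyperparameters mentioned in the reply above
params = {"criterion": ["gini", "entropy"], "max_depth": [2, 3, 5, None]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), params, cv=5)
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 3))
```

best_params_ reports the winning combination and best_score_ the mean cross-validated accuracy, which is a more honest estimate than a single train/test split.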
Score is 97.75% for exercise dataset. Filled the null values in Age column with median value
I did the same thing, but i still get accuracy around 79%. Any suggestions?
@@RohithS-ig4hlHey, I got 80% percent accuracy. I got also low accuracy like your.
Hello Sir,
Great tutorial. My model's accuracy for the titanic dataset came out to be 82%. Thank you.
I truly appreciate your effort, sir. please make more videos. Take love from Bangladesh
Thanks so much for these tutorials! These are the best tutorials I've found so far. The code shared by you for examples and exercises are very helpful.
I got a score of 76% for the exercise. How is it possible to get a different score with the same model and the same data? The steps followed are the same too.
train_test_split generates different samples every time, so even when you run your code multiple times it will give a different score. Specify random_state in the train_test_split method, let's say 10; after that, when you run your code you get the same score. This is because your train and test samples are now the same between runs.
@@codebasics Got it. Thanks!
Same, I too got an accuracy of 76% but was aware about the random_state attribute! :)
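A quick sketch showing why random_state makes the split reproducible, using sklearn's built-in iris data as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Without random_state the split differs on every run;
# fixing it makes the split (and hence the score) reproducible
a, _, _, _ = train_test_split(X, y, test_size=0.2, random_state=10)
b, _, _, _ = train_test_split(X, y, test_size=0.2, random_state=10)

print((a == b).all())  # True: identical training sets across runs
```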
Sir, in the exercise you performed map on the Sex column, and I did it using LabelEncoder. I like it when you give us a different approach to perform the same task. And one more question, Sir: instead of the mean, why can't we use the mode on the Age column? ...... BTW my score is 79%.
my score : 0.8044692737430168
Can't download the exercise dataset from GitHub. Getting this error: "ParserError: Error tokenizing data. C error: Expected 1 fields in line 28, saw 384"
Why does this warning show up whenever I try to predict with my model? "D:\Software\INSTALLED\Annaconda\Lib\site-packages\sklearn\base.py:464: UserWarning: X does not have valid feature names, but DecisionTreeClassifier was fitted with feature names
warnings.warn("
Sir, I have a doubt regarding the .score() method of sklearn.tree.DecisionTreeClassifier versus accuracy_score() from sklearn.metrics.
You computed the performance of the model using .score(). What if we compute it using accuracy_score()? Are they identical?
And what if, for a certain classifier, accuracy is not the best metric to measure performance, i.e. the best metric might be precision, recall or something else?
Is it possible to make a prediction given inputs never seen in the dataframe? I mean, what if we give as input a sample of a male, age 44, fare 30, pclass 2, which is not in the current dataset? What will the model predict, and how is this done?
Dhaval sir, just to know: in this model, every time we want to predict for any combination we have to find the assigned code for each category, which becomes difficult. Can you give clarification on this? I mean, how can we make it easier? Thanks.
I am unable to download the CSV file from github
please provide a CSV file link
where can I find machine learning questions for python ?
How can we connect the scatter-plot explanation to this dataset, so that we can see why a decision tree works here?
My model accuracy is 79.32
Thanks for the nice data science series🙏
Sir, why is OneHotEncoding not done for the first example, as all of these are categorical values?
Is array([0.75]) an OK output for model.predict([[3,0,24,7.225]])? I am using titanic.csv as my dataset.
Such a nice explanation , now I dont need to watch any further videos. This video was very satisfactory and convincing !!
After trying, I gave up and opened the exercise solution, ran it exactly as-is in my own Jupyter, and got an error:
"float() argument must be a string or a number, not 'method'"
I think it's because the dataset was updated. Can anyone help me?
Hello,
At timestamp 8:09, should inputs['jobs_n'] not be le_job.fit_transform(inputs['job']), and similarly inputs['degree_n'] = le_degree.fit_transform(inputs['degree'])?
The video used the le_company object for all three encodings. Is it not necessary to create three different objects? If it is, why were those objects not used?
Please explain.
Simple explanation, thank you! On the exercise you gave I got a score of 98.18%... and it's predicting pretty well 👍 Thank you once again.
Best_params_ plz
This is unbelievable. I saw someone use Random Forest, SVM, Gradient Boosting, etc.; the best score on testing data was 84%. With a simple decision tree, the best score would be around 82%, I think.
I didn't get the label encoder part; could you explain it in a comment?
Thanks for this video. I have used train and test csv files of titanic. Cleaned both datasets and implemented Decision Tree Classifier and got a test score of 0.74 ❤️
That’s the way to go anil, good job working on that exercise
Should we drop the NA rows in the exercise? Since the ages are not correlated with each other, in my opinion filling NA with the mean value may affect the accuracy of the final model.
Hi sir, on the Titanic dataset I did not use the train/test split method; I used the entire dataset as training data and am getting a 0.9845 model score with the decision tree algorithm. Is this the correct way to do prediction? My final row count came down to 714 after removing NaN values from the original 891 rows.
Wohooooo once again new video thank you so much sir
Thanks for sharing this awesome video. I have learned more about ML using this.
Great to hear!
Will I get better accuracy if I used one hot encoder instead of label encoder?
Thank you very much for this course! Super helpful. I was able to get an accuracy of 83.24%
can we use get dummies method insted of label encoding?
How to download the csv file?? Or how to read csv file url
At 7:28, why do you have to create multiple LabelEncoder() objects?