Best teacher I have ever seen! He explains things very clearly and covers exactly what matters in a short time!
Thank you!
DataMites is a hidden gem now, but soon it will be a brand in data science. Mark my words.
Thank you 😊
A real pro! Subbed this channel after watching first 3 minutes. Glad to have found it.
Thank you so much.
Thank you, Ashok! This is an outstanding explanation of a complex subject. You make it all feel very intuitive. Awesome stuff - I will look for more DataMites videos in the future!
Hi Donal O'Leary, thanks for your comment, and keep visiting our channel for more updated content.
Amazing in depth explanation! I was exactly searching for this type of explanation.. Thanks for sharing
Glad it was helpful!
extreme clarification really superb teaching skills along with good communications
Hi lalitha priya, thank you for you comment.
Great Ashok.!! Genuinely liked your way of explanation in depth and the solution... Glad i landed on your page...
Thank You..!
Thanks and welcome
Wow sir, I liked your session. Please continue posting such videos.
Thank you so much for the great tutorial.. As someone who does not have even the basic knowledge of python, I could learn many things from you, sir.
Glad it was helpful!
Thanks Ashok, very clear and simple explanation.
Thank You
The best smote tutorial I've seen. Thanks
Glad it was helpful!
DataMites is like a hidden pattern in unsupervised learning. Thank you so much, Ashok ❤️❤️
Thank you!
This is really helpful and thank you again!
Glad it was helpful! Keep Watching!
Nice and informative. Please keep up the good work.
Thank you.
Hello Sir, Thanks for explaining this very clearly.. keep it up....
You're most welcome
Thank you so much, sir! I hope I see more videos
Keep watching.
Hi sir! Thanks a lot for the very simple and clear explanation. Keep going, we expect more videos from you.
Keep watching
Wow, I am very impressed. Well explained, thanks.
You are most welcome
Excellent content, brilliantly presented. Thank you. Subscribed.
Thanks and welcome
Great content sir !! Keep on spreading knowledge
Thank you, Keep watching
Such an amazing topic
Thank You
Great tutorial. Very good explanation sir.
Glad you liked it
Thank you Ashok, clear explanation. But how do we handle imbalanced datasets if we have 4 classes?
For multiclass, the same technique is applied as for 2 classes.
Great video. Thanks
Glad you like it! Keep Supporting
Great video and explanation! Thanks.
You're welcome!
Great video. Great explanation. Thank you
You are welcome!
Nicely explained. Thanks!
You're welcome!
Thank you for your sincere lecture sir
You are most welcome
Amazing in depth explanation
Thank you!
Great job ....made it look very easy
Thank you 👍
Thank you very much
Most welcome! Keep Watching
Nice explanation .. Looking for more NLP related video
Sure
At 8:15 you said it is taking the average of the centroids, which is incorrect. SMOTE is computed over the feature space. It goes like this:
1. Take the feature vector of a minority-class point.
2. Find its k nearest neighbours (k = 5), pick one, and compute the difference between the point and that neighbour.
3. Multiply that difference by a random number drawn between 0 and 1.
4. Add the result to the original point to create the synthesized point.
Hope you got it 😀
SMOTE works by selecting examples that are close in the feature space, drawing a line between the examples in the feature space and drawing a new sample at a point along that line.
Specifically, a random example from the minority class is first chosen. Then k of the nearest neighbours for that example are found (typically k=5). A randomly selected neighbour is chosen and a synthetic example is created at a randomly selected point between the two examples in feature space.
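That interpolation step can be sketched in a few lines of NumPy; the minority point and its neighbour below are invented values, not from the video:

```python
import numpy as np

rng = np.random.default_rng(0)

# A minority-class point and one of its k nearest neighbours
# (illustrative values only).
x_i = np.array([2.0, 3.0])
x_neighbour = np.array([4.0, 5.0])

# gap is drawn uniformly from [0, 1); the synthetic point lies
# on the line segment between the two examples in feature space.
gap = rng.uniform(0.0, 1.0)
x_synthetic = x_i + gap * (x_neighbour - x_i)
print(x_synthetic)
```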
Really appreciate it, sir. Lots of love. 🙏🏼🤗👌😊
Thank you!
Very clear explanation. Thanks
You are welcome!
Thank you so much!!! Really helpful. thanks
Glad it helped!
Wonderfully explained. Thank you.
You are welcome!
Very nicely explained, sir. Thank you.
You are most welcome
Hello sir, the material that you explain is very easy to understand. I want to ask about my project: I have imbalanced data, so I apply SMOTE and then model with KNN, but why does the accuracy go down after SMOTE, from 79% to 78%? Is there something wrong with my data? Can you help explain this? I would be very grateful if you respond to my comment.
Using SMOTE, your model will start detecting more cases of the minority class, which results in increased recall but decreased precision. Accuracy is not a good measure of performance on imbalanced classes: SMOTE gives more weight to the minority class and biases the model toward it, so the model predicts the minority class more accurately while the overall accuracy may decrease.
This was a great lesson. Thanks a lot
You're very welcome!
excellent explanation!
Thank you.
Amazing Explanation!!! Thankyou.
You are welcome!
very good explanation
Keep watching
Best Explanation sir ..............!
Keep watching
Thanks for sharing. Very helpful
Glad it was helpful!
thank you so much man. great thumbs up...
You're welcome!
Thanks, nice explanation
You are welcome
thanks alot for this good tutorial.
You are welcome!
Amazing. Thanks a lot.
You are welcome!
Perfect video..thank you
You are welcome!
great video, thank you!
You are welcome!
Great video! Thanks a lot!!!
Glad you liked it!
Firstly, thank you for sharing. I want to ask something about time series. I have lots of data, but the series have different frequencies, and I wonder how to deal with all of them; assume they have been edited to the same frequency. Also, the data are not normally distributed and are imbalanced, which is why I am asking: if the series share the same frequency, can SMOTE be applied to time series? If not, how should I resample my time series?
Great Ashok. That was a well explained video. I tried the same thing on my data set but my accuracy came down from 94 to 86. What could be the cause?
Hi, we cannot comment until we look into your data and all the approaches that you have taken. One possibility is that your earlier predictions were overfitted.
Thank you for the informative video! In this video, you have used SMOTE to rectify imbalance in target label. What methods can we use to deal with class imbalance in categorical features( input) in order to make the model more robust?
Hi Sasidharan Sathiyamoorthy, it's a property of the input, so if you balance the input it might affect the target variable. Make two models, with and without balancing, and check the performance.
@DataMites sir, if we have 54% cancer patients and 46% non-cancer patients, do we need balancing? If yes, which balancing technique should be selected?
Nice video.
After applying SMOTE, balanced data was obtained, but the balanced data (X_smote, y_smote) was not split 80:20 into train and test sets before re-applying the classification model.
Is it necessary to split the data again, or was the original dataset itself used as the test set?
We have already split the data and then balanced it, so it is not required to split again.
Nicely explained
Thank you so much 🙂
In oversampling, do you have to make the minority class instances equal to the majority class instances? For example, can it be 900 NC and 800 C?
Oversampling increases the samples of the minority class to match the majority class. Undersampling reduces the samples of the majority class to match the minority class.
nice explanation
Thank You!
Thanks so much, bro. I have seen some data scientists use undersampling and oversampling before splitting the dataset into training and testing. In my research paper we used the NearMiss technique to balance the dataset. I got good results using cross-validation splitting and an Extra Trees classifier as the model, also using the same model to select the most important features. My results are ACC 0.97, F1 0.97 and AUC 0.99. Could these results be accepted for publishing?
You achieved good results. However, whether your results are acceptable for publishing depends on several other factors too.
Very good explanation, thanks. But is this code applicable to text data (tweets) or not?
Yes, after converting the text to numerical vectors. Use fit_resample().
Oh! I got it. Don't worry. Thanks
You're welcome
Hi, can I know how did you correct it? i got the same error message
Very informative. I have a question, sir: is it possible to set how much synthetic data SMOTE creates? For example, I want to increase n_samples by 200%; how do I pass this parameter in the Python code?
Your question is not clear. Can you elaborate, please?
Awesome video
Glad you enjoyed it
Thank you very much for this video. I have a precipitation dataset containing 4 columns and 8000 rows, each of them has a lot of zeros and only a few continuous values. I would like to know if I can use smote in this case?
Hi Lavanya Nayak, the GitHub link is provided in the description. Please check it out.
Getting AttributeError: 'SMOTE' object has no attribute 'fit_sample', even though all the package requirements are satisfied. It still shows the error.
Hi, please check imbalanced-learn.org/stable/over_sampling.html for any updates in the imbalanced-learn package.
Excellent!
Thank You!
Great video on SMOTE. Do you have a video on undersampling? Can someone perform both undersampling and oversampling in one line of code??? THANKS.
The other flavour of SMOTE is SMOTETomek, which combines undersampling of the majority class with upsampling of the minority class.
Thank you so much. ^^
You're welcome 😊
what a crystal clear explanation. thank you.
You're very welcome!
While dividing the training and test data, shouldn't you be using "stratify=y", to ensure the test and training sets have equal proportions of the outcome variable?
that would be undersampling
The aim of a machine learning model is generalization from the training set, so that performance on unseen data is good. We don't care what the test data consists of; instead we try to give a more generalized pattern to the algorithm.
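For reference, a sketch of what stratify=y actually does: it preserves the original class ratio in both splits without adding or removing any samples (the dataset below is invented):

```python
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# stratify=y keeps the ~9:1 class ratio in both the train and
# test splits; it does not balance anything.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(Counter(y_tr), Counter(y_te))
```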
Running into a problem: the sklearn 'support' column still looks unbalanced after SMOTE in print(classification_report(y_test, y_pred)). What gives?
The support is the number of samples of the true response that lie in each class. Since SMOTE is applied only to the training data, the test-set support stays imbalanced; that is expected.
Great video, thank you. I am trying to figure out how to use this with a generator flowing from a directory.
Hi insidiousmaximus, thanks for reaching us with your query. Can you please put your query more precisely so that we can help you?
Sir, I followed the recommended procedure, but my Jupyter environment gives an AttributeError: 'SMOTE' object has no attribute '_validate_data'. Can you please help me with this?
You need to upgrade scikit-learn to version 0.23.1.
Hey Ashok, can u make a video on dsste algorithm for removing class imbalance?
Will do in future session.
@@DataMites awesome! Will be waiting.
Which module can be used as an alternative to imblearn in Python, sir (for handling imbalanced datasets)?
For balancing the dataset we have the imblearn module, but there are other ways to deal with an imbalanced dataset.
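One such alternative, as a sketch: many scikit-learn estimators accept class_weight, which reweights the loss instead of resampling the data (the dataset and model below are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# class_weight='balanced' scales each class inversely to its
# frequency, so the minority class counts more in the loss.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print(clf.score(X, y))
```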
How does stratified splitting solve the problem?
you haven't encoded the target variable?
The target variable doesn't require encoding.
Amazing!
Thanks!
Thank you sir...👍
Can you explain which algorithm should be selected for a regression problem? It would help me a lot.
All the best
Can you please explain this part of the code in the label encoder section:
Hi Ishan, please reframe your query.
Sir, can I know how to run a logistic regression on the oversampled dataset?
Hi Inspirit Lashi, you can use SMOGN for preprocessing of your dataset. For more information: proceedings.mlr.press/v74/branco17a/branco17a.pdf
Does the SMOTE algorithm support multi-output classification?
Yes, you can use SMOTE.
I got this error, please help!
Hi, please use fit_resample
subscribed for ur content
Thank you
Hi sir, I have a question: how do we implement those resampling techniques in a neural network? Say we use an embedding layer and work with multiple kinds of data; would a resampling technique make our data lose such information?
You can use a mini-batch SGD optimizer to handle the imbalanced dataset.
Hi sir, what is the data type of the outcome? I think it is object. Did you convert it to float or int?
Hi Terry, thanks for reaching out to us regarding your query. The outcome datatype is a string, and we label-encoded it to an integer.
Thank you for this nice explanation. I was making progress with the code, but when I tried to fit using the command X_train_smote, y_train_smote = smote.fit_sample(X_train.astype('float'), y_train), I got an error saying AttributeError: 'SMOTE' object has no attribute 'fit_sample'. I need urgent help, please. Thank you.
Hi Chinedum Joseph, can you please list the versions of Python and scikit-learn on your system?
Use smote.fit_resample instead of smote.fit_sample.
@ObaidoGeorge Thank you very much for your help.
Hello Sir! Can you please tell me how to generate images using the SMOTE technique? Thanks in advance.
For image generation we have a different method called data augmentation; it creates new synthetic data from existing data.
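The idea can be sketched without any deep-learning library: below, invented NumPy "images" are flipped to create extra synthetic samples. Real pipelines use richer transforms (rotations, shifts, zoom), for example via Keras' image augmentation utilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny invented batch of 4 grayscale images, 8x8 pixels each.
images = rng.random((4, 8, 8))

# Simple augmentations: horizontal and vertical flips triple the
# dataset size without collecting any new images.
augmented = np.concatenate([images, images[:, :, ::-1], images[:, ::-1, :]])
print(augmented.shape)
```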
AttributeError: 'SMOTE' object has no attribute 'fit_sample'
Use smote.fit_resample
I have one doubt. What if the data contains NaN values and you want to do undersampling? If you impute NaN values with the mean, there will be information leakage, because we impute the data before splitting it into train and test sets. Could you please tell me a possible solution in this case?
Hi Ajay Patel, if you have a large dataset, you can certainly drop the NaN values.
@DataMites Sir, I have continuous data coming from sensors. Dropping a few rows would break the pattern.
@patelajay1010 In that case, without knowing the source and significance of your NaN values, we cannot comment.
@@DataMites ok sir. Thank you for your response.
Good
Thanks
After I resample an imbalanced dataset, how can I download the resampled dataset from Colab?
Combine the resampled X and y into a new DataFrame, then convert that DataFrame to a CSV file using to_csv().
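A sketch of that step; the feature values and column names below are invented stand-ins for the arrays SMOTE returns:

```python
import numpy as np
import pandas as pd

# Stand-ins for the resampled arrays from SMOTE.
X_smote = np.array([[1.0, 2.0], [3.0, 4.0]])
y_smote = np.array([0, 1])

# Combine features and target into one DataFrame...
df = pd.DataFrame(X_smote, columns=["feature_1", "feature_2"])
df["target"] = y_smote

# ...and write it to CSV; in Colab the file can then be
# downloaded from the Files sidebar.
df.to_csv("resampled.csv", index=False)
```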
You're awesome.
Thank you.
Any video where we use SMOTE for regression??
Hi Sunny Arora, you can use SMOGN for it. For more information: proceedings.mlr.press/v74/branco17a/branco17a.pdf
@DataMites Thank you. Is SMOGN less commonly used?
Thank you for the informative video! I used your code but got the error "ValueError: could not convert string to float: '5more'". Please tell me how I can resolve this error. Thanks in advance :)
We would have to look into your code, but please check whether you have converted all the categorical values in your dataset to numerical values.
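The '5more' value looks like a string category (it appears in the UCI car-evaluation data, for instance). A hedged sketch of one fix — encoding string categories to numbers before SMOTE — on invented rows:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Invented rows mimicking a column with values like '5more'.
df = pd.DataFrame({"doors": ["2", "3", "4", "5more", "2"],
                   "persons": ["2", "4", "more", "4", "2"]})

# Encode string categories to numbers so SMOTE's distance
# computations can operate on them.
encoded = OrdinalEncoder().fit_transform(df)
print(encoded)
```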
@@DataMites sir I am predicting heart disease and out of my sample 54% people have heart disease and rest 46% don't have so which method I should use for balancing?
Getting an error: ValueError: Unknown label type: 'continuous-multioutput'
It can be due to multiple reasons, for example using logistic regression for classification with more than 2 classes, or using a classifier when the target variable is continuous.
Cannot install imblearn. Kindly help me with this
Once you install imblearn, restart the kernel. If it doesn't work, try one of these commands: "!pip install delayed" or "pip install --user imblearn".
Hi, has anyone applied this concept to an image dataset? Please let me know.
For image data you can use a method called data augmentation; it creates new synthetic data from existing data.
thanks.
Welcome!