Hello Prof, how do we handle an imbalanced dataset in multi-label text classification?
It would have been nice to demonstrate the impact these resampling methods have on the test metrics of some benchmark model (especially one that can use class weights in the loss function). In my experience, resampling can sometimes make a model perform worse and it can be better to use models with class-weighted loss functions.
Hi professor, I am trying to do binary classification on advertising conversions using a Markov chain, but I'm not sure how I should implement it. Do you have any suggestions?
Great example. Perhaps you could make another video showing the oversampling on training data. Lots of people (myself included) start doing the oversampling on the whole dataset, which leads to data leakage... which is a mistake.
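For anyone wondering what the leakage-free order looks like in practice, here is a minimal sketch: split first, then oversample only the training portion. This uses plain pandas on a toy dataset; the column names and sizes are made up for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset: 80 samples of class 0, 20 of class 1
df = pd.DataFrame({"x": range(100), "y": [0] * 80 + [1] * 20})

# 1) Split first, so test rows can never be duplicated into training
train, test = train_test_split(df, test_size=0.25, stratify=df["y"], random_state=0)

# 2) Oversample the minority class inside the training set only
minority = train[train["y"] == 1]
majority = train[train["y"] == 0]
extra = minority.sample(len(majority) - len(minority), replace=True, random_state=0)
train_balanced = pd.concat([train, extra])

print(train_balanced["y"].value_counts().to_dict())  # {0: 60, 1: 60}
print(test["y"].value_counts().to_dict())            # test set untouched: {0: 20, 1: 5}
```

If you oversample before splitting, copies of the same minority row can land in both train and test, which inflates the test metrics.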
Thanks very much for this comment.
Really helpful comment, thank you
Thanks for this quick guide to overcoming the imbalance issue. I'd like to know: before applying these oversampling or undersampling techniques, do I need to standardize my dataset, or can I go with the data in its original form?
Ooo awesome tutorial! Love how clear it is
Thank you! Cheers!
Thanks a lot. Very precise and easy to understand.
It's helpful for me and many more. Great tutorial, Chanin. Thank you so much for sharing with us.
Happy to hear that! Thanks Thinam!
Great tutorial, Sir. When you split the data into X and Y and performed the resampling method, how can you concatenate them back together later?
Great job, professor! Thank you so much for this clear video. By the way, do you think that after applying oversampling, for example, and training a model (like XGBoost) on the data, it would be worthwhile to use the Matthews Correlation Coefficient as a KPI to measure the model's performance? Or do you think it is not necessary? Thank you 🙏🏽
Yes, definitely. MCC is a great way to measure the performance of classification models; a plus is that it is also more robust to imbalanced data than accuracy is.
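To see why MCC is more robust than accuracy, here is a small sketch with scikit-learn; the labels are a made-up toy example of a model that only ever predicts the majority class.

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Imbalanced ground truth: 9 negatives, 1 positive
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
# A useless model that always predicts the majority class
y_pred = [0] * 10

print(accuracy_score(y_true, y_pred))     # 0.9 -- looks great
print(matthews_corrcoef(y_true, y_pred))  # 0.0 -- reveals the model learned nothing
```

Accuracy rewards the majority-class guesser, while MCC stays at 0 because the predictions carry no information about the minority class.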
Thank you so much; this is exactly what I was looking for. I struggled with this step in R for many months. I understand that randomly sampling the majority class to mix with the minority class just once, and then developing the model, would create a poor model. So my questions are: 1. How many times should I randomly sample? 2. Does the distribution of the majority and minority classes affect how many times we have to sample? Could you please share your thoughts?
Great tutorial as usual. Thanks for sharing, Professor!
Glad you liked it!
Prof, thank you for the nice video. But I want to ask: how do we display the balanced data after applying SMOTE?
Thank you Prof. Very helpful
Can you explain how logistic regression behaves with an imbalanced dataset?
At which step should we fix the imbalance: before splitting the data, or after splitting, on the train set only?
Should we calculate the molecular descriptors and then balance the data?
What are the side effects of using synthetic data to handle imbalance when building models? And if we have a lot of data, should we oversample or undersample? Thank you, prof.
Why do undersampling instead of just slicing the dataset to take the same number of samples from each class?
I have a question. I have a lot of negative samples, which are unlabeled, and their number is much bigger than the labeled data. I must include them. In this situation, how do I handle this kind of imbalance?
Awesome, you explained every line of code; very helpful for a novice in understanding the notebook.
In my data science course, we used the stratify parameter of train_test_split() from sklearn; how do the two approaches differ?
That's a great question! Thanks for bringing it up. Stratification maintains the ratio of the classes so that the train/test splits have roughly the same class ratio (it does nothing about the class imbalance). Data balancing, on the other hand, will either bring up the minority class or bring down the majority class so that both end up the same size.
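To make the distinction concrete, here is a small sketch: stratify preserves the original 4:1 class ratio in both splits rather than changing it. The toy data and column names are made up for illustration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset: 80 samples of class 0, 20 of class 1 (ratio 4:1)
df = pd.DataFrame({"x": range(100), "y": [0] * 80 + [1] * 20})

# Stratified split: both train and test keep the original 4:1 ratio
X_train, X_test, y_train, y_test = train_test_split(
    df[["x"]], df["y"], test_size=0.25, stratify=df["y"], random_state=0
)

print(y_train.value_counts().to_dict())  # {0: 60, 1: 15} -- still 4:1
print(y_test.value_counts().to_dict())   # {0: 20, 1: 5}  -- still 4:1
```

Balancing would instead change those counts to be equal (e.g. 20 and 20 after undersampling), which stratification never does.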
I think there are scenarios where we can use these techniques differently. Can you tell us the different scenarios where we would perform oversampling, undersampling, or random sampling?
I've been following your channel since the collab with Ken Jee without realizing your name. Now you're inspiring me to pursue Data science even more! Thank you krub Ajarn Chanin! 🙏😂
do we not need to split the data into test and train before balancing?
Thank you for the explanation! What is your opinion on creating decoys, that is, artificial data derived from the least represented class, for balancing? Do you know if this functionality is available in some library?
Hello sir, I calculated the descriptors for about 10k ligands, used recursive feature elimination, and then built SVM and KNN models, but my accuracy is low (0.82 and 0.83). How can I improve the accuracy? (By low I mean that a published paper on the same enzyme reports an accuracy of 0.88.) I tried correlation analysis and dropped the negatively correlated columns, but it's not working. Need your help, please.
Hi, there's no sure path to achieving high model performance. Several factors come into play (descriptor type, feature selection approach, learning algorithm, parameter optimization, data splitting, etc.), and exploring them is part of the research. I would recommend trying to address the different factors mentioned above. Hope this helps.
Great video. Thanks for sharing!!
It’s my pleasure, thank you 😊
How to know if we should use oversampling or undersampling?
Helpful, thx
This is a clear and simple guide to get started, thanks for sharing! About your last question, I am curious what would be your answer, which approach do you prefer from your experience?
Hi, I prefer undersampling
@@DataProfessor Could you please tell some reasons why?
@@michellpayano5051 I prefer to use actual data and thus undersampling. Oversampling introduces artificial data upon balancing.
@@DataProfessorI understand , thank you!!
What do we do if there are more than 2 classes which are imbalanced?
Thanks Data Professor; may I also know if this method is applicable to imbalance datasets in text classification model? Thanks
Yes, this is applicable to imbalanced classes in a classification model.
@@DataProfessor thanks for your reply professor 👍🏻
I came here through the notification, thanks Professor. We will wait for new and interesting videos.
Awesome, glad to hear and thanks for supporting the channel!
Thanks for the lesson, professor! I'd like to ask one question if you don't mind. Should we always over/undersample to 1:1 ratio? I guess, in case the initial ratio of majority and minority classes is 99:1, it can cause some problems while modelling.
Hi, the practice of addressing data balancing across a wide range of scenarios is a topic for research and experimentation. It might be worthwhile to check out published papers on the topic for various use cases. Please feel free to share what you find.
Thank you for your response! I will definitely research on this topic :D
Nice video. I just want to know: how can I split this to get training data and testing data?
Hi, once the data is balanced, you can take the balanced data to perform data splitting to train and test data using the train_test_split function.
Oh, I seem to be the first guy here. As a rookie DS, I have to deal with the imbalanced dataset, too. My curiosity is we should perform undersampling or oversampling within the pipeline of cross-validation (say, K-fold cv) or should we do it before cross validation?
Hi, You can apply this prior to CV.
Great tutorial, Prof! I could see how someone would use this on a test dataset; does it have other use cases? Thanks a lot!
Hi, thanks for watching Ibraheem. Actually, we could use it in the training set in order to obtain a balanced model.
Professor can you share your contact details
I think that in this case oversampling would be the right approach due to the low number of compounds. Is this correct?
Both are valid approaches, it is subjective, depending on the practitioner. Personally, I like to use undersampling.
@@DataProfessor Undersampling should only be done when the data is in the thousands or millions; otherwise the accuracy will suffer.
Hi! I have a doubt: should we prefer undersampling or oversampling?
Hi, both are valid approaches and depends on the practitioner. Personally, I like to use undersampling.
Thanks a lot. I am looking forward to your explanation of protein-ligand interactions through AI.
Hey @amar, I am also working on machine learning-based virtual screening and have almost completed the ML models for VS. If you have any publications on it, I need some help, thanks.
@@muhammaddanial4549 hi i am on the beginning
I want to thank you for the Bioinformatics Project from Scratch. I managed to apply it to AChE and I am willing to apply it to other targets. Thanks so much, and waiting for other models 😁
Fantastic! Glad to hear that.
@@DataProfessor Can you do a tutorial on how to implement Neural networks on drug discovery?
@sherif Arafa Can I get the link to those from-scratch projects?
I am also working on AChE and BChE
@@muhammaddanial4549 Awesome, sure the link is here ruclips.net/p/PLtqF5YXg7GLlQJUv9XJ3RWdd5VYGwBHrP
Thank you so much
You're welcome!
The data is missing; the input link is not working.
hi, how should i save this in the form of csv file
You can use the to_csv function from pandas.
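For example, assuming the balanced data lives in a DataFrame (the variable and file names below are arbitrary):

```python
import pandas as pd

# Stand-in for the balanced dataset
df = pd.DataFrame({"x": [1, 2, 3], "y": [0, 1, 0]})

# index=False drops the row-number column from the file
df.to_csv("balanced_data.csv", index=False)

# Read it back to confirm the round trip
restored = pd.read_csv("balanced_data.csv")
print(restored.equals(df))  # True
```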
@@DataProfessor When I handle my dataset using undersampling, my accuracy decreases by 20 percent. What should I do?
Thanks for the video but where is the notebook?
Thanks for the reminder, the link is now in the video description.
You are a great professor!! Thanks a lot
Thank you! 😃
I love you, strange sir, my model took off.
How do I balance between more than two categories? For example, diabetic retinopathy classification has 5 categories to balance, not two.
There is a flaw: we should apply the method on the train set only, not on all the data.
One thing that hasn't been talked about: why is imbalance an issue?
Yes, you're right. Here goes. Imagine we have a dataset of 1000 samples: 800 belong to class A and 200 belong to class B. As class A has four times as many samples as class B, there is a high possibility that the model will be biased towards class A. To avoid such a scenario, we can either perform undersampling, where the 800 are reduced to 200, or oversampling, where the 200 are resampled up to 800. In both cases, the classes end up balanced.
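A minimal sketch of both options on that 800/200 example, using pandas sample (with replacement for oversampling); the feature column is made up for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy dataset: 800 samples of class A, 200 of class B
df = pd.DataFrame({
    "feature": rng.normal(size=1000),
    "label": ["A"] * 800 + ["B"] * 200,
})

majority = df[df["label"] == "A"]
minority = df[df["label"] == "B"]

# Undersampling: draw 200 of the 800 class-A rows (without replacement)
under = pd.concat([majority.sample(len(minority), random_state=42), minority])

# Oversampling: resample class B with replacement from 200 up to 800
over = pd.concat([majority, minority.sample(len(majority), replace=True, random_state=42)])

print(under["label"].value_counts().to_dict())  # {'A': 200, 'B': 200}
print(over["label"].value_counts().to_dict())   # {'A': 800, 'B': 800}
```

Note the trade-off visible even in this sketch: undersampling discards 600 real class-A rows, while oversampling duplicates class-B rows rather than adding new information.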
@@DataProfessor Will it be "biased"? Only if you use accuracy as the measure.
Use AUC to measure instead.
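For anyone who wants to try that, scikit-learn's roc_auc_score works on predicted probabilities; the labels and scores below are a made-up toy example.

```python
from sklearn.metrics import roc_auc_score

# Imbalanced toy labels: 4 negatives, 2 positives
y_true = [0, 0, 0, 0, 1, 1]
# Predicted probabilities from some hypothetical model
y_score = [0.1, 0.2, 0.3, 0.8, 0.4, 0.9]

# AUC = fraction of (positive, negative) pairs ranked correctly: 7 of 8
print(roc_auc_score(y_true, y_score))  # 0.875
```

Unlike accuracy, this score depends only on how the model ranks positives against negatives, not on a classification threshold, so a majority-class guesser cannot inflate it.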
Oversampling