Credit Card Fraud Detection - Dealing with Imbalanced Datasets in Machine Learning
- Published: Oct 6, 2024
- Error: The neural net predictions function uses shallow_nn every time instead of the model passed in, sorry about that! This changes the results a bit, but the main point is choosing and creating a model, which this doesn't impact.
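For reference, a minimal sketch of what the corrected helper might look like (the exact function and variable names in the Colab notebook may differ; the point is simply to call the model that was passed in rather than the hard-coded shallow_nn):

```python
def neural_net_predictions(model, x):
    """Return 0/1 class predictions from a Keras model's sigmoid output."""
    # Use the model that was passed in, not a hard-coded shallow_nn reference.
    return (model.predict(x).flatten() > 0.5).astype(int)

# Usage (placeholder names for a balanced-data model and validation split):
# preds = neural_net_predictions(shallow_nn_b, x_val_b)
```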
The Code: colab.research...
Kaggle dataset (ensure you make an account!): www.kaggle.com...
Learn Python, SQL, & Data Science for free at mlnow.ai/ :)
Subscribe if you enjoyed the video!
Best Courses for Analytics:
---------------------------------------------------------------------------------------------------------
IBM Data Science (Python): bit.ly/3Rn00ZA
Google Analytics (R): bit.ly/3cPikLQ
SQL Basics: bit.ly/3Bd9nFu
Best Courses for Programming:
---------------------------------------------------------------------------------------------------------
Data Science in R: bit.ly/3RhvfFp
Python for Everybody: bit.ly/3ARQ1Ei
Data Structures & Algorithms: bit.ly/3CYR6wR
Best Courses for Machine Learning:
---------------------------------------------------------------------------------------------------------
Math Prerequisites: bit.ly/3ASUtTi
Machine Learning: bit.ly/3d1QATT
Deep Learning: bit.ly/3KPfint
ML Ops: bit.ly/3AWRrxE
Best Courses for Statistics:
---------------------------------------------------------------------------------------------------------
Introduction to Statistics: bit.ly/3QkEgvM
Statistics with Python: bit.ly/3BfwejF
Statistics with R: bit.ly/3QkicBJ
Best Courses for Big Data:
---------------------------------------------------------------------------------------------------------
Google Cloud Data Engineering: bit.ly/3RjHJw6
AWS Data Science: bit.ly/3TKnoBS
Big Data Specialization: bit.ly/3ANqSut
More Courses:
---------------------------------------------------------------------------------------------------------
Tableau: bit.ly/3q966AN
Excel: bit.ly/3RBxind
Computer Vision: bit.ly/3esxVS5
Natural Language Processing: bit.ly/3edXAgW
IBM Dev Ops: bit.ly/3RlVKt2
IBM Full Stack Cloud: bit.ly/3x0pOm6
Object Oriented Programming (Java): bit.ly/3Bfjn0K
TensorFlow Advanced Techniques: bit.ly/3BePQV2
TensorFlow Data and Deployment: bit.ly/3BbC5Xb
Generative Adversarial Networks / GANs (PyTorch): bit.ly/3RHQiRj
Take my courses at mlnow.ai/!
Stunning, bro. Just a clear-cut explanation, not wasting a single minute. It's a gold mine of information.
best video on a project explained step by step
Thank you for the very kind words! Glad it was helpful 😀
Thank you for your amazing efforts! I don't have much experience building different models, so this video helped me a lot! Btw, I tried increasing max_depth to 6 in the random forest model, and it improved the model's performance more than I expected. Thanks again!
Interesting! Yeah it's surprisingly easy to mess around with models. That's great about the max_depth! And you're very welcome :)
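For anyone who wants to try this, a minimal sketch (the x_train_b/y_train_b/x_val_b/y_val_b names are assumptions matching the video's balanced training and validation splits):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# max_depth=6, as the commenter suggests, instead of a very shallow tree depth.
rf_b = RandomForestClassifier(max_depth=6, n_jobs=-1, random_state=42)
rf_b.fit(x_train_b, y_train_b)

print(classification_report(y_val_b, rf_b.predict(x_val_b),
                            target_names=['not_fraud', 'fraud']))
```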
One thing worth mentioning would be the data wrangling part. It's often a good idea to check feature relevance and feature importance. Funnily enough, the transaction amount and time were not among the features with a substantial impact on whether a transaction was classified as fraudulent.
Dropping them not only reduces bias in the data frame, it can also substantially increase the model's computation speed! (Mine got a 36% boost in speed while losing only 0.01 points in F1 score and 0.02 in precision.)
Another thing would be to write a function that automatically fits each of the models on the training data and evaluates it on the validation data (see the sketch after this comment). It would substantially help with the cleanliness and readability of the project.
I would also consider hyperparameter tuning and pipelining everything together to make it a robust project. However, great video and a great demonstration of how to check each model and measure their suitability for the problem at hand.
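Both suggestions are easy to sketch. Below is a minimal, hypothetical version (the x_train_b/x_val_b style variables and feature_names are assumptions in the video's style, not the author's exact code): a feature-importance check to justify dropping weak columns, and one helper that fits and scores each model in a loop.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# 1) Feature relevance: tree-based importances on the balanced training set.
#    feature_names is assumed to be the list of predictor columns (V1..V28, Time, Amount).
rf = RandomForestClassifier(random_state=42).fit(x_train_b, y_train_b)
print(pd.Series(rf.feature_importances_, index=feature_names).sort_values())
# Columns with near-zero importance are candidates to drop before retraining.

# 2) Fit-and-evaluate helper: one loop instead of repeating the same cells per model.
def fit_and_report(models, x_train, y_train, x_val, y_val):
    for name, model in models.items():
        model.fit(x_train, y_train)
        print(f"--- {name} ---")
        print(classification_report(y_val, model.predict(x_val),
                                     target_names=['not_fraud', 'fraud']))

fit_and_report(
    {
        'logistic_regression': LogisticRegression(),
        'random_forest': RandomForestClassifier(max_depth=6, random_state=42),
        'gradient_boosting': GradientBoostingClassifier(),
    },
    x_train_b, y_train_b, x_val_b, y_val_b,
)
```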
Please, I have a project on this topic. Could you please help me? I don't know what to do.
I'll be trying this soon, thanks Greg
No problem Krish! 😊😊
Great video on classification. Good luck with the channel!
Thanks so much Petar! I appreciate that 😊
After training the model on the balanced population, please also report the model's performance on the original, imbalanced population.
Thanks man. I'm going to try this one. It's really helpful. 🙏😍
Enjoy! You're very welcome 😊
Great video. I was just wondering if taking a slice from the original dataset to use as a test set would be a more consistent way to evaluate the resampling procedure, because in production the model still has to deal with imbalanced data.
Yes, I agree. I've tried a slice of the original data as the test set, and the results look completely different.
Hope to see more of this kind in the coming days!!
With an account name of "Machine Learning" I would expect nothing less! 😂 And absolutely ☺️
Really like your video!
One thing though: when you downsample the data, shouldn't you still validate/test on data with the original class ratio?
In your case, you are basically assuming the test data also has a 50/50 split, which in reality will never be the case.
Great video and explanation! Thanks!
You're very welcome!
What is your opinion on doing oversampling (SMOTE) on the minority class?
Definitely a solid option.
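A minimal sketch using imbalanced-learn, assuming x_train/y_train are the original (imbalanced) training arrays; SMOTE should only ever be applied to the training split, never to validation or test data:

```python
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Oversample only the training split; the validation set keeps its real class ratio.
x_train_sm, y_train_sm = SMOTE(random_state=42).fit_resample(x_train, y_train)

clf = LogisticRegression(max_iter=1000).fit(x_train_sm, y_train_sm)
print(classification_report(y_val, clf.predict(x_val),
                            target_names=['not_fraud', 'fraud']))
```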
Nice video, however, it is not completely clear to me how the undersampling relates to the overall problem. In the end, you have to provide the client (the bank) with a model capable of detecting fraud. Let's suppose we give them the model trained on the rebalanced dataset. Since frauds are unbalanced by nature, then they will end up using the model trained on a balanced dataset on a test set that is actually unbalanced. Isn't this causing issues? Isn't the prediction biased toward the fraud? Aren't we predicting way too many frauds?
To be more specific, I think you can try balancing the training set but you cannot balance the test set because, in the end, in the real scenario, the new data to be predicted will be always unbalanced.
It's not practical to evaluate the model on a balanced evaluation/test set, since that ignores the real fraud prevalence. The data's true distribution is sacred.
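One way to act on this point, as a sketch: x_test/y_test are assumed to be an untouched, still-imbalanced slice of the original data, and rf_b a model trained on the balanced set.

```python
from sklearn.metrics import classification_report, precision_recall_curve, auc

# Score the balanced-trained model on a holdout that keeps the real ~0.17% fraud rate.
y_pred = rf_b.predict(x_test)
print(classification_report(y_test, y_pred, target_names=['not_fraud', 'fraud']))

# Precision-recall AUC is usually more informative than accuracy on imbalanced data.
precision, recall, _ = precision_recall_curve(y_test, rf_b.predict_proba(x_test)[:, 1])
print('PR AUC:', auc(recall, precision))
```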
Great video ❤❤ looking forward for more videos like this..
Thank you!! Absolutely 😊
I just want to know whether it only gives the accuracy details or actually detects whether a card transaction is fraudulent or not.
Thanks greg!!
Is it okay to do projects by following tutorial videos? And when should we do them on our own?
Absolutely! Go ahead. You can do it on your own when you feel like you've got the general hang of things, if that makes sense.
That's amaaazzzing!!
I'm getting errors on the test, train, and val runs for the numpy part.
how do you balance test set when you don't have labels in real life?
12:51 Shouldn't the shape of y_train be (240000, 1), since it consists of exactly one column?
(240000,) and (240000,1) are very close to the same thing. I'm not sure if they both work or not
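They behave slightly differently; a quick sketch of the difference and how to convert between them:

```python
import numpy as np

y_flat = np.zeros(240000)       # shape (240000,)  - 1-D vector, what scikit-learn expects
y_col = y_flat.reshape(-1, 1)   # shape (240000, 1) - 2-D column, what Keras layers often emit

print(y_flat.shape, y_col.shape)
print(y_col.ravel().shape)      # flatten back to (240000,) before passing to sklearn metrics
```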
Aren't we supposed to test on the original data instead of the balanced one?
Well, I have the same question, but every notebook I've seen for this dataset with a high F1 score did the same thing he did. After a lot of research, I found that if you have highly imbalanced data like this, it is okay to test on the undersampled data. If you know anything else,
please share it.
Hey Greg, thank you for the video, but I have a question. At first we had a dataset with 280,000 rows and 30 columns, but towards the end of the video we reduced it to only 984 rows. Doesn't this make the model worse because it's trained on less data?
Or was the real problem that we were getting bad results at first because we had so much not_fraud data compared to fraud data?
In the predict function, you're taking model as an input argument but returning shallow_nn's predictions. Is that correct? Or should it be model.predict()? 28:31
Probably that’s why the values are exactly the same at 51:51
can i try train_test_split function from sklearn to split data into train and test set?
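For anyone wondering, sklearn's train_test_split works fine here; a minimal sketch using the Kaggle dataset's 'Class' column as the label (the other names are placeholders), with stratify to keep the fraud ratio consistent across splits:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('creditcard.csv')
X, y = df.drop(columns=['Class']), df['Class']

# stratify=y keeps the (tiny) fraud proportion identical across the splits.
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

print(y_train.mean(), y_test.mean())  # both close to the original ~0.17% fraud rate
```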
Sweet. This is going to my github!!
I sure hope so!
Awesome 👏🥳
Thank you! 😊
Awesome
Are you not leaking targets if you normalize before splitting the data?
If I am, it isn't really a big deal
@@GregHogg it isn't a big deal in most cases probably, but with time series data you are leaking future information that the model will not have during inference, such as changes in trend 📈 in future data points
@@MatTheBene For time series it would be more concerning yes
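To avoid this kind of leakage, fit the scaler on the training split only and reuse it for validation/test; a minimal sketch (StandardScaler is just one choice of scaler, and the variable names are placeholders):

```python
from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training split only, then reuse the same transform everywhere.
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_val_scaled = scaler.transform(x_val)    # transform only: nothing leaks from val/test
x_test_scaled = scaler.transform(x_test)
```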
thanks
Hi, thank you a lot for making this video, I learned a lot from it. I have a question about 52:05: the line that prints rf.predict(x_val_b), shouldn't that be rf_b.predict(x_val_b) instead? And for GBC later on too, it should use gbc_b.predict, right?
I thought that as well. Not entirely sure why he hadn't changed those, when in the neural_net_predictions function he did use shallow_nn_b.
I have the same question. The inference and final choice of model may differ with that change.
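For reference, the fix the thread is describing would look something like this (a sketch, with rf_b/gbc_b assumed to be the names of the models trained on the balanced data):

```python
from sklearn.metrics import classification_report

# Evaluate the balanced-data models on the balanced validation split.
print(classification_report(y_val_b, rf_b.predict(x_val_b),
                            target_names=['not_fraud', 'fraud']))
print(classification_report(y_val_b, gbc_b.predict(x_val_b),
                            target_names=['not_fraud', 'fraud']))
```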