How to solve Santander Kaggle Transaction Competition [Top 1% Solution, No Ensemble]
- Published: 15 Aug 2024
- ❤️ Support the channel ❤️
/ @aladdinpersson
Paid Courses I recommend for learning (affiliate links, no extra cost for you):
⭐ Machine Learning Specialization bit.ly/3hjTBBt
⭐ Deep Learning Specialization bit.ly/3YcUkoI
📘 MLOps Specialization bit.ly/3wibaWy
📘 GAN Specialization bit.ly/3FmnZDl
📘 NLP Specialization bit.ly/3GXoQuP
✨ Free Resources that are great:
NLP: web.stanford.e...
CV: cs231n.stanford...
Deployment: fullstackdeepl...
FastAI: www.fast.ai/
💻 My Deep Learning Setup and Recording Setup:
www.amazon.com...
GitHub Repository:
github.com/ala...
✅ One-Time Donations:
Paypal: bit.ly/3buoRYH
▶️ You Can Connect with me on:
Twitter - / aladdinpersson
LinkedIn - / aladdin-persson-a95384153
Github - github.com/ala...
Timestamps:
0:00 - Introduction to competition
1:50 - Get data
8:22 - Simple NN baseline
20:50 - First results 0.86 score
21:33 - Understanding the data
23:40 - Modifying our NN
28:10 - Improvement to baseline
29:00 - Feature engineering
44:54 - Modifying our NN v2
50:45 - Final result and submission
56:50 - Ending
These videos are so valuable. I love how we move from an idea - let's look at correlation - to updating the model, to adding new features. Please please please do more kaggle competitions!
Thank you so much - subscribed!
Thank you so much for sharing your knowledge step by step, your channel is a hidden gem !
Thanks for this video; hoping for more videos like this!
Really great to hear your thought process on this one!
Can someone elaborate more on the idea of turning each feature's values to some kind of "embedding" after knowing the fact that the columns seem to be not correlated? What is the logic behind this? And how does that explain the decent improvement we saw? I'm genuinely intrigued
I asked myself the exact same thing!
Would be nice if someone could explain the thoughts about this improvement :)
The intuition is that each feature can be broken down into common "ingredients", which is what the "embedding" layer does. Since the features are uncorrelated, each of them is a different combination of those common ingredients. Operating on the basic ingredients, rather than the raw scalars, lets the NN map the input space to the output space more efficiently.
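To make the "embedding" idea above concrete, here is a minimal NumPy sketch of the general technique: each scalar feature gets its own small learned vector (and bias), so every sample becomes a (n_features, emb_dim) stack of per-feature embeddings, and a shared head scores each feature independently before the scores are summed. All weights and shapes here are illustrative placeholders, not the exact architecture from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, emb_dim = 200, 16
x = rng.normal(size=(4, n_features))            # a batch of 4 samples

# One learned vector (and bias) per feature: scalar -> emb_dim vector.
W = rng.normal(size=(n_features, emb_dim)) * 0.1
b = rng.normal(size=(n_features, emb_dim)) * 0.1

# "Embed" each scalar feature independently via broadcasting.
emb = x[:, :, None] * W[None, :, :] + b[None, :, :]   # (batch, n_features, emb_dim)

# Because the features are assumed independent, a shared head can score
# each feature's embedding separately and the scores can simply be summed.
v = rng.normal(size=(emb_dim,)) * 0.1
logits = emb @ v                                # (batch, n_features)
score = logits.sum(axis=1)                      # (batch,)
print(emb.shape, score.shape)
```

The key point is that weights are shared across features only in the head, while the embedding vectors are feature-specific, which matches the "different combinations of common ingredients" intuition.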
Daily videos 😊🔥🔥
Adding the uniqueness feature is a clever trick, especially useful here. Very well done!
Thank you!
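The "uniqueness feature" mentioned above can be sketched as follows: for each column, flag whether a value occurs exactly once in that column. This is a hedged illustration of the general idea (the function name `uniqueness_flags` and the toy data are my own, not from the video); in the actual competition such counts are typically computed over the combined train and real test rows.

```python
import numpy as np

def uniqueness_flags(col):
    """Return 1.0 where the value occurs exactly once in the column, else 0.0."""
    # counts[inverse] gives, for every row, how often its value appears.
    vals, inverse, counts = np.unique(col, return_inverse=True, return_counts=True)
    return (counts[inverse] == 1).astype(np.float64)

col = np.array([0.3, 1.7, 0.3, 2.5, 1.7, 4.2])
print(uniqueness_flags(col))   # [0. 0. 0. 1. 0. 1.]
```

Appending one such flag column per original feature roughly doubles the input width but gives the network a direct signal about which values are rare.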
Very Elegant Solution Bro , Keep Up the good work
Hey, I was just wondering: do neural networks generally result in a better score than traditional ML models?
you just earned a new sub :D more walkthrough videos please!
STRAIGHT FIRE MY BRO! LOVING THESE VIDS!
👊 👊
Great video, man!
I'm new to pytorch. Could you post on the comments this NN built on tensorflow? That would be awesome.
thanks for sharing
May I ask you a question related to Dynamic U-net (unet + resnet34 as a backbone?)
Fantastic showcase😄.
Karpathy constant 3e-4😂
hahah it's true! :D
Isn't it unrealistic to solve this real-world problem with this kind of feature engineering? I mean, when you need to classify each sample representing a specific customer's transaction habits in order to predict whether they will make a similar one, you won't be able to determine at inference time whether each feature value is unique. Am I wrong? What am I not getting?
What font are you using?
Great video! What resources did you use to get good at PyTorch?
The official tutorials I think are pretty good! I've learned things mostly from starting with very simple projects (like training a CNN on mnist) and figuring out how to solve things related to it. Like, how do I load my own dataset? What are the best architectures to use? What data augmentation can improve performance (and how do I do this). You get the idea :)
thanks
More kaggle videos please
You got it, working on something right now but I'll get back to kaggles
@@AladdinPersson thank you so much !