A Critical Skill People Learn Too LATE: Learning Curves In Machine Learning.

  • Published: 14 Jan 2025

Comments • 73

  • @underfitted
    @underfitted  2 years ago +15

    Hey everyone! Hope you enjoyed the video and it's helpful for you! Make sure to SMASH that like button and Subscribe to help the channel grow. The next video will be sickkkkk! See you then.

  • @chuckcheddar461
    @chuckcheddar461 11 months ago +25

    My left ear really loved this video

  • @spaicersoda7165
    @spaicersoda7165 1 year ago +15

    My left ear loved this video.

    • @victorhplus
      @victorhplus 5 months ago

      thanks bro, I thought my headphones were the problem

  • @kozaTG
    @kozaTG 8 months ago +5

    if you can't explain it to a 6 year old, you don't understand it yourself. brilliant video

  • @kraeftigerkanacke
    @kraeftigerkanacke 1 year ago +3

    I have read a lot of articles and watched a few tutorials. But THIS is the perfect explanation for beginners in the ML field. Thank you very much.

  • @orbitinggeek4000
    @orbitinggeek4000 2 years ago +12

    What an amazing explanation! It shows how well you yourself understood it - so glad I found you on Twitter!!

    • @underfitted
      @underfitted  2 years ago +1

      Thanks! Really appreciate it!

  • @chidubem31
    @chidubem31 2 years ago +4

    Straight to the point. I honestly like how you talk more about theory and analysis rather than code.

  • @agenticmark
    @agenticmark 1 year ago

    That was easily the best explanation of learning curves. I have seen each of those, except the perfect curve, but I will keep trying!

  • @VelazquezJFP
    @VelazquezJFP 1 year ago

    I believe you have taught so many artificial brains that you know how to get information into even the slowest human brains out there. You repeat the fundamentals with a different tone, put letters on screen, and give it time to absorb, ensuring there is no overfit or underfit in my learning today.
    Great job and you got a new subscriber.

  • @mohammadmassri2394
    @mohammadmassri2394 1 year ago

    Best explanation ever on YouTube! Keep it up man!

  • @jacemi
    @jacemi 2 years ago +3

    Excellent analogy 👏 Thank you very much Santiago!! Your videos are so cool and to the point!!

    • @underfitted
      @underfitted  2 years ago +1

      Glad you like them, Javier! Really appreciate your comment.

  • @JordiRosell
    @JordiRosell 2 years ago +2

    Thanks for this video. I would say validation set, cross-validation sets, or resamples instead of testing. But the main ideas are the same. I only use the holdout set once, for the final fit, and some people could misinterpret some concepts here.

    • @underfitted
      @underfitted  2 years ago +3

      Absolutely! This is exactly what a validation set looks like. I wanted to keep it simple and talk about only 2 sets for the sake of the video, but you are correct.
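
      A minimal sketch of the three-way split this thread describes, assuming scikit-learn (the dataset, names, and split ratios are illustrative only):

      ```python
      # Synthetic data just for the example.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=1000, random_state=42)

      # Carve out a holdout (test) set first; it is touched only once, at the end.
      X_trainval, X_test, y_trainval, y_test = train_test_split(
          X, y, test_size=0.2, random_state=42
      )

      # Split the remainder into training and validation sets; the learning
      # curves are drawn from the training and validation losses.
      X_train, X_val, y_train, y_val = train_test_split(
          X_trainval, y_trainval, test_size=0.25, random_state=42
      )
      ```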

  • @AB-cd5gd
    @AB-cd5gd 6 days ago

    Best explanation on all of YouTube

  • @paulallen1597
    @paulallen1597 2 years ago +2

    Superb explanation of what the problem is and how to approach solving it.

    • @underfitted
      @underfitted  2 years ago

      Glad it was helpful, Paul! Really appreciate your comment.

  • @tehsupervik
    @tehsupervik 2 years ago +3

    Great quality of information and really precise. So helpful for a beginner like me

    • @underfitted
      @underfitted  2 years ago

      Thanks, Victor! I'm glad this is helpful!

  • @Singasongwithme2004
    @Singasongwithme2004 5 months ago

    Teaching style is so unique and so good 👍

  • @msfasha
    @msfasha 1 year ago

    Great explanation with lots of illustrations, simply a very good job, keep going.

  • @FrocketGaming
    @FrocketGaming 1 year ago

    This was very helpful, but how do we define 'high' and 'low' loss? It's relative, I assume, but is there some rule of thumb?

  • @muzammilrizvi6424
    @muzammilrizvi6424 1 year ago

    Can't describe how helpful and beautiful this video is, simply Amazing.

  • @Fantalic
    @Fantalic 7 months ago

    this is so good. so much important information in such a short time. big thanks. :)

  • @ScottSavageTechnoScavenger
    @ScottSavageTechnoScavenger 2 years ago

    YES!!!! You just solved a problem I ran into years ago!

  • @bellion166
    @bellion166 1 year ago

    Thank you! That helped a lot!!

  • @peacefulmusic3908
    @peacefulmusic3908 1 month ago

    Why does my loss curve start very low in the early epochs (at epoch 02: loss 0.007), but the evaluation metrics (MAE, RMSE) are very bad (0.94; 1.12)? I'm doing a regression problem, please help me!

  • @dilshanpieris9439
    @dilshanpieris9439 1 year ago

    understood every bit of it, well done brother ❤❤

  • @rewiredbyadhd
    @rewiredbyadhd 2 years ago +1

    This is one of the best Machine Learning channels I've seen. Thanks Santiago 🙏🏻 You have a new subscriber. I came here from Twitter, your content there is super good❤️🙏🏻
    Please keep making explanatory videos with simple language, so anyone can understand.
    Thank you again🙏🏻

  • @NavyaVedachala
    @NavyaVedachala 9 months ago

    This was so helpful! Thank you

  • @starreachsocietybw
    @starreachsocietybw 10 months ago

    Best Explanation Ever!!!!!!!!!!!!!

  • @burhanrashidhussein6037
    @burhanrashidhussein6037 2 years ago +1

    Qn. Can we use the concept of overfitting to understand how good our training data is? E.g., what if we cannot overfit our training data? Can we say our data is not good enough to train the model?

    • @underfitted
      @underfitted  2 years ago +2

      If you can't overfit your model, you might have a problem with the data, yes (it could also be a problem with the model itself, of course).

    • @hiankun
      @hiankun 2 years ago

      @@underfitted AFAIK, if a model is too simple then we cannot overfit it with the dataset. Right?
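
      A minimal sketch of the sanity check this thread describes, assuming scikit-learn (all names and numbers are illustrative): a high-capacity model should fit a tiny subset of the data almost perfectly, and if it can't, suspect the data or, as noted above, a model that is too simple:

      ```python
      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=1000, random_state=0)

      # Take a tiny subset and fit an unconstrained, high-capacity model to it.
      X_small, y_small = X[:50], y[:50]
      model = DecisionTreeClassifier(random_state=0)  # no depth limit
      model.fit(X_small, y_small)

      # Training accuracy should be close to 1.0 here; a much lower score is
      # a red flag for the data (e.g. noisy labels) or the model's capacity.
      print("train accuracy on tiny subset:", model.score(X_small, y_small))
      ```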

  • @atakanbilgili4373
    @atakanbilgili4373 1 year ago +1

    Great explanations, especially useful when you work on your own dataset rather than the Kaggle ones.

    • @jamespaz4333
      @jamespaz4333 1 year ago

      One of the things that Kaggle taught me is how to avoid over-fitting.

  • @Ninja1Dark159
    @Ninja1Dark159 10 months ago

    Q. If my problem is a binary classification problem and I'm using a built-in model like a decision tree or random forest, how do I get the loss function? And if that is not possible or not logical, can drawing a learning curve with accuracy on the y-axis and the number of training samples on the x-axis still identify overfitting and underfitting?
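
    A minimal sketch of the accuracy-based learning curve the question describes, assuming scikit-learn (dataset and parameters are illustrative); for a loss-like curve with tree models, scoring="neg_log_loss" could be used instead:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=2000, random_state=0)

    # Accuracy vs. number of training samples, cross-validated.
    sizes, train_scores, val_scores = learning_curve(
        RandomForestClassifier(random_state=0), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy",
    )

    # A large, persistent gap between the curves suggests overfitting;
    # both curves plateauing at a low score suggests underfitting.
    for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
        print(f"n={n:4d}  train={tr:.3f}  validation={va:.3f}")
    ```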

  • @mtomazza
    @mtomazza 2 years ago +1

    Absolutely brilliant!

  • @otakusil69
    @otakusil69 8 months ago

    Short-Sharp-Understandable

  • @junaidmohammad1967
    @junaidmohammad1967 2 years ago +1

    what does it mean when test loss is lower than training loss?

    • @underfitted
      @underfitted  2 years ago +3

      It usually means the test set is too easy for the model. You might have too few test samples, some samples might be in both the training and testing sets, or the test samples might be too simple for the model.
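
      A minimal sketch of one way to check the overlap case mentioned above, assuming NumPy arrays (the helper name and data are illustrative):

      ```python
      import numpy as np

      def count_leaked_rows(X_train: np.ndarray, X_test: np.ndarray) -> int:
          """Count test rows that also appear verbatim in the training set."""
          train_rows = {row.tobytes() for row in X_train}
          return sum(row.tobytes() in train_rows for row in X_test)

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(100, 4))
      X_test = np.vstack([X_train[:5], rng.normal(size=(20, 4))])  # 5 rows leak
      print(count_leaked_rows(X_train, X_test))  # -> 5
      ```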

  • @Malikk-em6ix
    @Malikk-em6ix 2 years ago

    Great Insights. Very helpful, Thank you.

  • @artistoryartdesign
    @artistoryartdesign 1 year ago

    Like this so much 👍👍👍

  • @AB-cd5gd
    @AB-cd5gd 6 days ago

    Thanks

  • @Jobic-10
    @Jobic-10 1 year ago

    This is too good❤❤

  • @Param3021
    @Param3021 2 years ago +1

    That beat 💓

  • @allgasnobrakeseul
    @allgasnobrakeseul 2 months ago

    holy shit u just explained this so well

  • @cbr_n
    @cbr_n 2 years ago

    Hey! Solid video, I have a slight recommendation: balance the audio out a bit, I think L is 10-20% louder than R.

    • @underfitted
      @underfitted  2 years ago

      Thanks, CB! Yeah, I’ve been trying to improve the audio for a few videos now. I think the last video came out better? Thanks for the feedback!

  • @manasmhatre8200
    @manasmhatre8200 11 months ago

    Please tell me what the max difference between validation loss and training loss should be so that the model is not overfitting. My model shows a training loss of 1.46 x 10^(-5) and a validation loss of 0.018. So is the model overfitting? Anyone, please reply.

  • @meryemamaouche8158
    @meryemamaouche8158 1 year ago

    great explanation

  • @okotpascal
    @okotpascal 2 years ago +1

    oh no,... someone must have complained about the sound right?🤨... I'm going to miss the loud videos, I loved them louder. It always woke up my attention😒😒

    • @underfitted
      @underfitted  2 years ago +3

      They did! But now they aren't clipping, which was a problem.

  • @alaeldinabdulmajid6576
    @alaeldinabdulmajid6576 6 months ago

    Oriented - great job👍

  • @almerandomendezjr.4742
    @almerandomendezjr.4742 1 year ago

    amazing explanation thanks

  • @lokeshsharma4177
    @lokeshsharma4177 8 months ago

    YAA - You Are Awesome

  • @alirezanorouzi8924
    @alirezanorouzi8924 2 years ago

    hey man, I don't understand ROC curves in logreg

  • @johnpan4789
    @johnpan4789 2 years ago

    So, top suggestion for an ML book?

  • @Anonyms-rt5fb
    @Anonyms-rt5fb 1 month ago

    A-mazing Video

  • @TheDarkestForce
    @TheDarkestForce 6 months ago

    Anyone else like me who has a decent set of speakers is probably greatly annoyed and distracted by the bass thumps you put in the background. I have no idea why you would put that in the audio stream. The video explanation was well done, but I couldn't continue watching for the reason I mentioned.

    • @underfitted
      @underfitted  6 months ago

      That’s the type of thing you do when you are learning.

  • @jasbarlegaspina1220
    @jasbarlegaspina1220 2 years ago

    The best 17 minutes I've seen in years.

  • @salmanqafarov9556
    @salmanqafarov9556 1 year ago

    Indian dominance is so high in IT now, white guys make educational videos in an Indian accent.

  • @ahmadbodayr7203
    @ahmadbodayr7203 1 year ago +2

    read about Islam man