Regularization in a Neural Network | Dealing with overfitting

  • Published: 1 Oct 2024

Comments • 79

  • @mmostafa95
    @mmostafa95 2 years ago +47

    I swear this playlist is one of the best resources I have ever seen on these topics. Great explanation. Please continue to upload more of this great content. Much thanks for your time and outstanding effort.

    • @AssemblyAI
      @AssemblyAI  2 years ago +1

      That's great! Glad to hear you liked it!

  • @syedmohammed786y
    @syedmohammed786y 1 month ago +1

    Your explanation is just amazing! ...

  • @AlexKashie
    @AlexKashie 1 year ago +4

    Wow, so useful, thank you for the amazing content. You can feel the confidence of the lecturer and her explanations are very clear. Watching the whole playlist.

  • @mmacaulay
    @mmacaulay 1 year ago +5

    Another absolutely fantastic, accessible teaching resource on a complex machine learning concept. I don't think there are any resources out there that can match the quality, accessibility and clarity this resource provides.

  • @TimUnknown-h5q
    @TimUnknown-h5q 1 year ago +6

    Just wanted to say that these videos are really well done and the speaker really knows what she's talking about. I am doing my PhD right now in mechanical engineering, using deep learning to model a production process (steel), and your videos really helped me get a much better grip on what to tune and do with my model. Highly appreciated, thx a lot :)!

    • @AssemblyAI
      @AssemblyAI  1 year ago

      You are very welcome Tim and thank you for the support! - Mısra

  • @harshitvijay197
    @harshitvijay197 5 months ago +1

    damn, this whole series is like a gold mine ... I was suspicious of how such a well-known topic could be covered in so little time ... I thought the videos might not be good, but I'm happy to be proven wrong. THESE ARE GOLD ... thank you @AssemblyAI & thank you very much Ma'am for helping.

  • @kuretaxyz
    @kuretaxyz 2 years ago +2

    Great video! Also you sound like you are from Turkey. Am I correct?

    • @AssemblyAI
      @AssemblyAI  2 years ago +3

      Yes, that is correct :)

  • @donmiguel4848
    @donmiguel4848 6 months ago

    Variance is a mathematical term used in probability theory and stochastics to measure how data spreads around a mean. You are confusing your audience by abusing this term in a different manner when you claim that variance is the rate at which the prediction changes per change in the training data. Don't do that!
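
    For reference, the two senses of "variance" at play here can be written side by side (standard definitions, not quoted from the video): in probability theory it is the spread of a random variable around its mean, while in the bias-variance trade-off it is the spread of a model's prediction across different training sets D.

      Var(X) = E[(X - E[X])^2]                                % variance of a random variable X

      Var_D[f_D(x)] = E_D[(f_D(x) - E_D[f_D(x)])^2]           % "variance" of a model: spread of its
                                                              % prediction at x across training sets D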

  • @abd_sam
    @abd_sam 2 years ago +1

    I didn't understand why we need the 'keep probability'.
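
    For the question above: the keep probability is the chance that a given neuron's activation survives dropout during training, and whatever is kept gets scaled up so the expected activation stays the same. A minimal NumPy sketch of inverted dropout (values and names are illustrative, not code from the video):

      import numpy as np

      def dropout_forward(activations, keep_prob=0.8, training=True):
          # Inverted dropout: randomly zero activations and rescale the survivors.
          if not training:
              return activations          # at inference time every neuron is kept
          # Each unit is kept with probability keep_prob and dropped otherwise.
          mask = np.random.rand(*activations.shape) < keep_prob
          # Dividing by keep_prob keeps the expected activation unchanged,
          # so nothing extra has to be done at test time.
          return activations * mask / keep_prob

      # Example: a batch of 4 samples with 5 hidden activations each.
      h = np.random.randn(4, 5)
      h_dropped = dropout_forward(h, keep_prob=0.8)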

  • @malithdesilva7799
    @malithdesilva7799 1 year ago +2

    Great playlist, the content is on point for each topic in minimal time. Please keep up the outstanding work 🤘 and thanks for the content.

    • @AssemblyAI
      @AssemblyAI  1 year ago

      You are very welcome! - Mısra

  • @AlexXPandian
    @AlexXPandian 5 months ago +1

    Really to the point and excellently delivered.

  • @thecarradioandarongtong4631
    @thecarradioandarongtong4631 5 months ago

    Beauty with Brains 💐

  • @phone2134
    @phone2134 1 year ago

    Forget about regularisation,
    I just came here to look at the beautiful lady ❤❤❤

  • @EmanAlsayoud
    @EmanAlsayoud 24 days ago

    wow, thank you!!

  • @MrPioneer7
    @MrPioneer7 4 months ago

    Overfitting happens frequently in my programs. I tried reducing the number of input parameters, but I know that is not a good solution. I was familiar with L1 and L2 regularisation; this tutorial helped me better understand them and the other common methods. I tried to decrease both the train and test errors with regularisation but was not successful. I hope to do it soon 🙂 Thanks for your illustrative explanations.

  • @kk008
    @kk008 1 year ago

    Can I use this L1 regularization to overcome the maximized mutual information problem?

  • @ferdaozdemir
    @ferdaozdemir 11 months ago

    I liked this video very much. You explained all these techniques very well, in my opinion. Thank you.

  • @techyink5344
    @techyink5344 6 months ago

    ty

  • @jacobyoung6876
    @jacobyoung6876 2 years ago +1

    Great job. The explanation is very clear and easy to understand.

  • @jabessomane7282
    @jabessomane7282 9 months ago

    This is very, very helpful. Great explanation. Thank you.

  • @brahimmatougui1195
    @brahimmatougui1195 2 years ago +1

    You said that L1 regularisation encourages weights to be 0.0, and this could lead to some outputs not being considered. Is this the same behaviour as dropout?

    • @AssemblyAI
      @AssemblyAI  2 years ago +2

      It is a similar behaviour to dropout, yes! Both L1 and dropout can make a network sparse (not all neurons are connected to all other neurons). The way they achieve it is still different, though (see the short sketch after this thread).

    • @brahimmatougui1195
      @brahimmatougui1195 2 years ago +2

      @@AssemblyAI thank you so much for your prompt answer

    • @AssemblyAI
      @AssemblyAI  2 years ago +2

      @@brahimmatougui1195 You are very welcome! -Mısra
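
      To make the contrast above concrete, here is a minimal PyTorch-style sketch (layer sizes, the l1_lambda value, and the helper name are illustrative assumptions, not code from the video): the L1 penalty is added to the loss and drives many weights toward exactly zero, while dropout randomly zeroes activations during training.

        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(20, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),   # dropout: each hidden activation is zeroed with probability 0.5
            nn.Linear(64, 1),
        )
        criterion = nn.MSELoss()
        l1_lambda = 1e-4         # strength of the L1 penalty (illustrative value)

        def loss_with_l1(inputs, targets):
            data_loss = criterion(model(inputs), targets)
            # L1 penalty: sum of absolute parameter values; in practice biases are often excluded.
            l1_penalty = sum(p.abs().sum() for p in model.parameters())
            return data_loss + l1_lambda * l1_penalty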

  • @bellion166
    @bellion166 10 months ago

    Thank you! This gave a good intro before I started reading Ian Goodfellow.

  • @radethebookreader5312
    @radethebookreader5312 11 months ago

    I'm a Research scholar from India, your videos are just awesome 👍

  • @Ali-Aljufairi
    @Ali-Aljufairi 5 months ago

    Your videos are so good, keep up the good work. I have read and watched a lot of content explaining this, and yours is the best.

  • @aryanmalewar7789
    @aryanmalewar7789 2 years ago +1

    Very clear and precise explanation. Thanks :)

  • @OrcaRiderTV
    @OrcaRiderTV 1 year ago

    What if the input features have multiple dimensions, i.e. age and height? Can we still use batch norm as the first layer to normalize the input data?
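
    On the question above, a minimal sketch (feature values and layer sizes are made up for illustration): BatchNorm1d(num_features) keeps one running mean/variance per input feature, so it normalizes each feature separately across the batch and works for multi-dimensional inputs such as (age, height).

      import torch
      import torch.nn as nn

      # Hypothetical batch of 4 people, each described by 2 features: (age, height in cm).
      x = torch.tensor([[25., 170.],
                        [40., 182.],
                        [31., 165.],
                        [57., 175.]])

      # BatchNorm1d(2) normalizes age and height separately, so each feature ends up with
      # roughly zero mean and unit variance across the batch before the first linear layer.
      model = nn.Sequential(
          nn.BatchNorm1d(2),
          nn.Linear(2, 16),
          nn.ReLU(),
          nn.Linear(16, 1),
      )
      out = model(x)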

  • @qandos-nour
    @qandos-nour 1 year ago

    Wow, very clear.
    Thanks, you helped me.

  • @deltamico
    @deltamico 1 year ago

    With L2 regularisation, do we add the squared weights of the whole network to the final loss function, or, while doing backprop, only the squared input weights of a node to that node's loss?
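
    In common implementations it is the former: a single penalty over all the squared weights of the network is added once to the total loss, and backprop then simply gives each weight w an extra gradient of 2 * lambda * w. A minimal sketch (layer sizes and the lambda value are illustrative assumptions):

      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
      criterion = nn.MSELoss()
      l2_lambda = 1e-3                      # illustrative regularization strength

      x, y = torch.randn(16, 10), torch.randn(16, 1)
      data_loss = criterion(model(x), y)
      # One L2 term over *all* parameters, added once to the total loss
      # (in practice biases are often left out of the penalty).
      l2_penalty = sum((p ** 2).sum() for p in model.parameters())
      total_loss = data_loss + l2_lambda * l2_penalty
      total_loss.backward()                 # each weight w receives an extra 2 * l2_lambda * w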

  • @Eyetrauma
    @Eyetrauma 11 months ago

    Thank you for this explanation. Like many, I'd imagine, I've bumped into these concepts predominantly via my use of SD. It's nice having an overview of what's being conveyed so I can understand what's happening without getting too bogged down in the minutiae.

  • @akshaygs4048
    @akshaygs4048 7 months ago

    Amazing video. Very good content.

  • @narendrapratapsinghparmar91
    @narendrapratapsinghparmar91 8 months ago

    Thanks for this informative lecture

  • @rohitkulkarni5590
    @rohitkulkarni5590 1 year ago

    This is an amazing series with concepts well explained. A lot of the other videos dwell on mathematical formulas without explaining the concepts.

  • @igeolumuyiwab.7980
    @igeolumuyiwab.7980 2 months ago

    I love your teaching. Keep it up

  • @DK-ox7ze
    @DK-ox7ze 8 months ago

    I don't understand the purpose of regularization. The sole purpose of weights is to quantify the importance of a feature (strength of connection), so it's very much possible that one weight has a much larger value than the others because it's more important to the desired outcome in real life. But if you regularize, then that weight loses its value and might therefore result in an incorrect prediction.

    • @anozatix1022
      @anozatix1022 5 months ago

      That's the point of regularization. It is used when your model is overfitting, as stated in the video. If the model is performing decently without regularizers then you probably shouldn't use them, as that could result in underfitting.

  • @akashpatel1575
    @akashpatel1575 1 year ago

    you might have included batch size too

  • @arminkashani5695
    @arminkashani5695 2 years ago +1

    Brief yet very clear and informative. Thank you.

    • @AssemblyAI
      @AssemblyAI  2 years ago

      You are very welcome Armin! - Mısra

  • @mrbroos2843
    @mrbroos2843 2 years ago +1

    the best video with a clear explanation

  • @sourabhbhattacharyaa4137
    @sourabhbhattacharyaa4137 2 years ago

    Awesome stuff from thy side... Danke schön... Can you give the link to the playlist containing these lectures?

    • @AssemblyAI
      @AssemblyAI  2 years ago

      Here it is: ruclips.net/video/dccdadl90vs/видео.html

  • @deependu__
    @deependu__ 11 months ago

    Thanks for the clear explanation.

  • @suryanshdey4773
    @suryanshdey4773 1 year ago

    I don't understand why we don't simply reduce the number of layers and neurons in a neural network to get rid of overfitting.

    • @michacz3230
      @michacz3230 1 year ago

      That's just one of the ways. You can also try to reduce the size of the model, as you said, or try data augmentation.

  • @PurtiRS
    @PurtiRS 1 year ago

    So Good, SO Good, Oh My God! Thank you soooo much!

  • @swapnildoke8777
    @swapnildoke8777 6 months ago

    so nice and simple

  • @mallikarjunpidaparthi
    @mallikarjunpidaparthi 2 years ago +1

    Thanks.

  • @emadbagheri2755
    @emadbagheri2755 1 year ago

    great

  • @annajohn7890
    @annajohn7890 9 months ago

    Absolutely clear explanation

    • @AssemblyAI
      @AssemblyAI  9 months ago

      Glad it was helpful!

  • @marijatosic217
    @marijatosic217 2 years ago

    Amazing! 😍😍

  • @skhapijulhossen6499
    @skhapijulhossen6499 1 year ago

    This playlist is a treasure for me.

  • @its_me_hb
    @its_me_hb 1 year ago

    I really loved it

  • @adarsh7604
    @adarsh7604 2 years ago

    Brief, Concise and Precise.

  • @Danielamir1998
    @Danielamir1998 1 year ago

    Brilliant video

  • @FirstLast-tx7cw
    @FirstLast-tx7cw 1 year ago +2

    I had tears in my eyes. absolute gem of a video.

  • @ryanoconnor160
    @ryanoconnor160 2 years ago

    Great content!

  • @_Who_u_are
    @_Who_u_are 10 months ago

    plz speak slowly

  • @dwfischer
    @dwfischer 1 year ago

    Great video. I’m currently making flash cards and this was a great resource.