Batch Normalization (“batch norm”) explained

  • Published: 21 Sep 2024

Comments • 262

  • @deeplizard
    @deeplizard  6 years ago +18

    Machine Learning / Deep Learning Tutorials for Programmers playlist: ruclips.net/p/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
    Keras Machine Learning / Deep Learning Tutorial playlist: ruclips.net/p/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL

    • @rey1242
      @rey1242 6 years ago

      I already asked this on another video, but just to cover as much ground as possible:
      Could I normalize the weights to have mean 0 and variance 1 at weight initialization?
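
The standardization this question asks about can be tried directly with NumPy (an illustrative sketch, not something from the video; note it is distinct from batch norm, which standardizes activations per mini-batch during training, and from common initializers such as Glorot, which instead scale the variance to the layer size):

```python
import numpy as np

rng = np.random.default_rng(0)
# A freshly drawn weight matrix (shape chosen arbitrarily for illustration)
w = rng.uniform(-0.5, 0.5, size=(784, 128))

# Standardize the draw to mean 0 and variance 1, as the question proposes
w_std = (w - w.mean()) / w.std()
```

After this rescaling, `w_std` has mean ~0 and variance ~1 by construction; whether that helps training is exactly the kind of question the video's discussion of weight scales speaks to.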

    • @coolbits2235
      @coolbits2235 5 years ago +4

      I am in debt to you for teaching me so much in one day. I would have kissed your hand in gratitude if you were in front of me. NN are such a convoluted mess but you have made things easier.

    • @Itsme-wt2gu
      @Itsme-wt2gu 1 year ago

      Can we make a game where AIs have their own lives, and we live as their family and social system with our friends?

  • @kareemjeiroudi1964
    @kareemjeiroudi1964 6 years ago +95

    I'm deeply impressed by the quality of your videos. Allow me to say that these are, by far, the most helpful video tutorials on neural networks. I seriously appreciate the time you spend researching this information and then putting it together in such a concise, pleasant way that's also easy to comprehend. Trust me, without you I wouldn't have been able to understand what changes these parameters make in the network. That's why, thank you very, very much for both the time and the effort you put into this! And please, please keep making more tutorials.
    Also, I'd like to remark that the topics of these videos are sequential, so if you're following the playlist from the very beginning you'll absolutely be able to make sense of everything in the videos, regardless of what your prior knowledge of neural networks is. Besides, the Keras playlist is complementary and adds a lot to the learning experience.
    All in all, this is - in one word - "professional work".

    • @deeplizard
      @deeplizard  6 years ago +16

      Wow kareem, thank you so much for leaving such a thoughtful comment! I'm very happy to hear the value you're getting from this series, and we're really glad to have you here!

    • @vdev6797
      @vdev6797 4 years ago

      i don't allow you to say..!!

    • @WahranRai
      @WahranRai 2 years ago +1

      That was the purpose of these *deep learning* videos: to be *deeply* impressed by the *learning* you get

  • @PatriceFERLET
    @PatriceFERLET 4 years ago +25

    I spent several days reading article after article trying to understand what Batch Norm really does, and then I found your video. Perfectly explained, thanks a lot!

  • @dr.hafizurrahman9374
    @dr.hafizurrahman9374 5 years ago +24

    God bless you, my dear teacher. I saw in every lesson that you put the whole ocean in a small jar. This is a unique quality, and very few teachers have it.

  • @tamoorkhan3262
    @tamoorkhan3262 3 years ago +6

    One of the few YouTube series I have completed in my life. Instead of beating around the bush, you kept it to the point, with tons of info in just a few minutes. Hope to see more such series.

  • @woudjee2
    @woudjee2 1 year ago +1

    Literally watched all 38 videos in one go. Thank you so much!

  • @pranaysingh3950
    @pranaysingh3950 2 years ago +2

    This is the video I'd been scouring the internet for like a beggar, to help me understand steps 2 and 3 of batch norm. Here it was, finally! Thank you so much for doing great work. I really, really appreciate it. Such a simple, calm, and informative explanation of a very important topic.

    • @shraddhadevi8964
      @shraddhadevi8964 2 years ago

      Oh brother, I've found a treasure 💰💰💰

  • @deepcodes
    @deepcodes 4 years ago +3

    Finally completed the deep learning series. Thank you for such amazing videos and blogs, given away free on RUclips. It's great quality!!!

  • @tanfortyfive
    @tanfortyfive 3 years ago +10

    Top-notch, I finished it all. Kudos to the deeplizard team, love you all, love you Mandy; your sweet voice keeps us going.

  • @nasiksami2351
    @nasiksami2351 3 years ago +4

    THANK YOU SO MUCH FOR THIS AMAZING PLAYLIST! One of the best channels for learning deep learning. Absolutely loved your content. It was explained in the easiest possible way and awesome graphical illustrations. You really worked hard on the editing! Thanks again!

  • @linknero1
    @linknero1 4 years ago +9

    Thanks, I'm writing my thesis thanks to your explanations!

  • @smithflores6968
    @smithflores6968 2 years ago +1

    I found pure gold! Great video! I understood it perfectly!

  • @abdelrahmansalem6233
    @abdelrahmansalem6233 2 years ago +1

    This is one of the most comprehensive videos I have ever watched.
    Really, thank you, and I'm looking forward to the advanced concepts.

  • @vikasshetty6725
    @vikasshetty6725 4 years ago +1

    Worth watching all the videos because of the content delivery and quality. Big thumbs up for the entire team!

  • @robinkerstens516
    @robinkerstens516 3 years ago +2

    Just like all the other comments: I have just finished your video series and I am impressed by the quality of the explanations. Many videos go into tiny details way too fast, before making sure that everyone at least understands the terms. Kudos! I hope you make many more.

    • @deeplizard
      @deeplizard  3 years ago +1

      Thank you Robin! Much more content available on deeplizard.com :)

  • @FernandoWittmann
    @FernandoWittmann 5 years ago +26

    Great video! But from my understanding, only g and b are trainable. At 4:23, it is mentioned that the mean and std are trainable parameters as well ("these four parameters ... are all trainable").

    • @deeplizard
      @deeplizard  5 years ago +4

      Thanks Fernando, you’re right! The blog for this video has the correction :)
      deeplizard.com/learn/video/dXB-KQYkzNU

    • @davidireland724
      @davidireland724 4 years ago +5

      Came looking for this comment! Thanks for stopping me losing my mind trying to reconcile this explanation with the paper.
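
To make the correction in this thread concrete: in batch norm, the mean and standard deviation are *computed* from each mini-batch, while only the scale g (often written gamma) and shift b (beta) are learned. A minimal NumPy sketch of the forward pass (illustrative, not the video's or the paper's code):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """One batch-norm step: mu and var come from the batch (computed,
    not trained); gamma and beta are the only trainable parameters."""
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # standardized activations
    return gamma * x_hat + beta             # learned rescale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # batch of 64, 4 features
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
# With gamma=1 and beta=0, each output feature has ~mean 0 and ~std 1
```

During training, gradients flow only into `gamma` and `beta`; `mu` and `var` are re-derived from every batch (with running averages kept for inference).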

  • @pamodadilranga
    @pamodadilranga 3 years ago

    Thank you very, very much. I'm posting this comment in 2020, under house quarantine. I needed to learn about deep learning for my internship, and thanks to this playlist, I now have a good knowledge of the fundamental theories of neural networks.

  • @parthbhardwaj2262
    @parthbhardwaj2262 4 years ago +2

    I am really fascinated by the hard work that brings such quality to your videos! I would be really happy if you could make as much more material as possible. Channels like yours keep the spirits of students like us really high! Just one word to sum it up....... OUTSTANDING !!

  • @stwk8
    @stwk8 2 years ago +1

    Thank you, deeplizard!
    The Machine Learning & Deep Learning Fundamentals playlist made understanding the concepts of ML super easy.
    Thank you so much :D

  • @JoeSmith-kn5wo
    @JoeSmith-kn5wo 1 year ago +1

    Great playlist!! I went through the entire deep learning playlist, and have to say it's probably one of the best at explaining deep learning in a simple way. Thanks for sharing your knowledge!! 👍

  • @aashwinsharma1859
    @aashwinsharma1859 3 years ago +1

    Completed the whole playlist. Now I am confident about the basics of neural networks. Thanks a lot for the great series!!

  • @PritishMishra
    @PritishMishra 3 years ago

    Hurray, completed the series (the only series on RUclips that I have watched from the first video to the last without skipping a second). Amazing job, deeplizard team. Highly appreciated!
    Now I am going to watch the Keras playlist, then the PyTorch series, and then reinforcement learning.

    • @deeplizard
      @deeplizard  3 years ago +1

      Congratulations! 🎉 Keep up the great work as you progress to the next courses!

  • @yelchinyang148
    @yelchinyang148 5 years ago +2

    This online tutorial is very useful and helped me understand the batch normalization concept in detail, which had confused me for a long time. Thanks very much for sharing.

  • @aniketbhatia1163
    @aniketbhatia1163 5 years ago +4

    These are among the best tutorial videos I could find. The explanations are extremely lucid and easy to understand. I really hope you expand your pool of videos to include other topics such as RNNs. You could also dedicate some videos to hyperplane classifiers, SVMs, RL, and even some optimization methods. All in all, this set of videos is just amazing!

  • @tss109
    @tss109 2 years ago +1

    Wow. Such a nice explanation. Thank you!

  • @smartguy3043
    @smartguy3043 4 years ago +2

    This is the best intro to deep learning I have seen anywhere, be it a textbook or a video lecture series. You have definitely put serious effort and thought into breaking down this dense topic into bite-size tutorials with a logical chain of thought that is easy to follow. Thanks a lot :)

  • @senduranravikumar3554
    @senduranravikumar3554 3 years ago +1

    Thank you so much, Mandy... I have gone through all the videos... 😍😍😍

  • @lingjiefeng3196
    @lingjiefeng3196 5 years ago +2

    I love your tutorial. The illustration is just so concise and easy to understand. Thank you for all your effort in making these videos!

  • @al-farabinagashbayev5403
    @al-farabinagashbayev5403 4 years ago +1

    I think every machine learning specialist, even an experienced one, will find something new for themselves in your course. :) Great course, thanks a lot!

  • @ranitbarman6471
    @ranitbarman6471 1 year ago +1

    Cleared up the concept. Thanks!

  • @sciences_rainbow6292
    @sciences_rainbow6292 3 years ago +1

    I completed this series of videos; can't wait to watch more on your playlist!

    • @deeplizard
      @deeplizard  3 years ago

      Awesome job! See all of our deep learning content on deeplizard.com :)

  • @silentai3826
    @silentai3826 3 years ago +1

    Wow, this is awesome. Kudos to you! Perfect explanation. I was trying to understand batch norm from various websites and articles; this was much better than any of them. Thanks!

  • @ahmadnurokhim4168
    @ahmadnurokhim4168 2 years ago +1

    Great quality content, subscribed 🔥

  • @hamzawi2752
    @hamzawi2752 4 years ago +1

    Excellent! I hope you continue this series. Your explanation is so clear.

  • @CosmiaNebula
    @CosmiaNebula 4 years ago +2

    0:10 intro
    0:30 normalize and standardize
    1:25 why normalize
    3:05 problem of large weights, and batch normalization
    5:46 Keras

    • @deeplizard
      @deeplizard  4 years ago

      Thank you for your contribution of the timestamps for several videos! Will review soon for publishing :)
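
Since the timestamps above distinguish "normalize and standardize", the difference can be stated in two lines of NumPy (a generic illustration, not the video's code):

```python
import numpy as np

data = np.array([2.0, 4.0, 6.0, 8.0])

# Normalization (min-max): rescale values into the range [0, 1]
normalized = (data - data.min()) / (data.max() - data.min())

# Standardization (z-score): rescale to mean 0, standard deviation 1
standardized = (data - data.mean()) / data.std()
```

Batch norm applies the second transform, the z-score, to activations per mini-batch, then follows it with the learned scale and shift.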

  • @pallavbakshi612
    @pallavbakshi612 6 years ago +3

    Wow, thanks for putting this up. You deserve every like and every subscribe. Great job.

  • @baqirhusain5652
    @baqirhusain5652 2 years ago +1

    Beautiful!! Super clear!

  • @qusayhamad7243
    @qusayhamad7243 3 years ago +1

    Thank you, really, you are the best teacher in the world. I appreciate your efforts.

    • @deeplizard
      @deeplizard  3 years ago +1

      Happy to hear the value you're getting from the content, qusay!

    • @qusayhamad7243
      @qusayhamad7243 3 years ago

      @@deeplizard I am so happy for your reply to my comment ^_^

  • @rowidaalharbi6861
    @rowidaalharbi6861 3 years ago +1

    Thank you so much for your explanations! I'm writing my PhD thesis and your tutorial helped me a lot :)

  • @thepresistence5935
    @thepresistence5935 2 years ago +1

    Wonderful explanation

  • @rob21dog
    @rob21dog 4 years ago +1

    Thanks for all of your hard work in putting this series together. I just finished this last video & I can say that with your help I am much further ahead in understanding deep learning. God bless!

  • @karelhof319
    @karelhof319 5 years ago +3

    Finding this channel has been a great help for my studies!

  • @tymothylim6550
    @tymothylim6550 3 years ago +1

    Thank you very much for this whole series! It was really enjoyable to watch and I learnt a lot!

  • @adwaitnaik4003
    @adwaitnaik4003 4 years ago +1

    A simple and lucid explanation. Loved it. Thanks!

  • @jonathanmeyer4842
    @jonathanmeyer4842 6 years ago +13

    Nice tutorial; clear, professional voice and animations!
    Looking forward to more deep learning videos :)
    (I'm aware of your Keras tutorial series and I'm going to watch it right now!)

    • @deeplizard
      @deeplizard  6 years ago +2

      Thank you, Jonathan! I'm glad you're liking the videos so far!

  • @punitdesai4779
    @punitdesai4779 3 years ago +1

    Very well explained!

  • @HasanKarakus
    @HasanKarakus 1 year ago +1

    The best explanation I've ever watched

  • @jerseymec
    @jerseymec 5 years ago +1

    Thanks for the amazing series! I really enjoyed your videos! Keep up the good work! Hope to see more complex networks made simple by you!

  • @gurpriyakaur2109
    @gurpriyakaur2109 3 years ago +1

    Amazing explanation!

  • @khalilturki8187
    @khalilturki8187 3 years ago +1

    Nice short video and a great way of explaining!
    I will follow this channel and watch more videos!
    Keep up the great work!

  • @orcuncetintas2258
    @orcuncetintas2258 4 years ago +1

    Great video, very clear and understandable. However, I want to point out some mistakes. In batch norm, only b and g are trainable, not m and s. Moreover, batch norm is applied after fully connected/convolutional layers but before activation functions. Therefore, it doesn't normalize the output of the activation function; it normalizes the input to the activation function.

  • @yashgupta417
    @yashgupta417 4 years ago +1

    Very well explained

  • @rupjitchakraborty8012
    @rupjitchakraborty8012 4 years ago +3

    Loved your video. I am going to complete this series. Could you cover RNNs, LSTMs, and GRUs, and also complete the video series? I am looking forward to this as I start and finish the series.

  • @anshumaandash137
    @anshumaandash137 4 years ago

    Nice explanation. However, there is a small mistake you can correct. We batch normalize the outputs of a layer (conv or linear) before squashing them through the activation function. That way, the activations never overshoot or undershoot, leading to a stable output and easier convergence. This also allows us to use bigger learning rates.
    I hope that helps.

  • @farzadimanpour2751
    @farzadimanpour2751 3 years ago +1

    The best tutorial that I've ever seen. Thanks!

  • @ericdu6576
    @ericdu6576 1 year ago +1

    AMAZING SERIES

  • @OKJazzBro
    @OKJazzBro 1 year ago

    Batch norm according to the paper is actually applied before the activation function, not after. For this reason, the authors even recommend dropping the bias parameter of the layer itself, because batch norm comes with a learnable bias term. The output of batch norm then goes to the activation function.
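
The point about dropping the layer bias can be checked numerically: because batch norm subtracts the batch mean, any constant per-feature bias added before it cancels out exactly (a small NumPy check, not code from the paper):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Standardize each feature over the batch dimension
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(1)
z = rng.normal(size=(32, 3))         # pre-activation outputs of a layer
bias = np.array([10.0, -2.0, 0.5])   # a per-feature bias term

# Mean subtraction cancels the constant bias, so both results match,
# which is why the layer's own bias is redundant before batch norm
assert np.allclose(batch_norm(z), batch_norm(z + bias))
```

Batch norm's own learnable shift (beta) then plays the role the bias would have played.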

  • @julianarotsen6521
    @julianarotsen6521 5 years ago +1

    Thanks for the amazing explanation!! By far the best tutorial video I've seen!

  • @ejkitchen
    @ejkitchen 3 years ago +1

    Great content. Like many others have said, one of the best series on ML out there.

  • @alphadiallo9324
    @alphadiallo9324 3 years ago +1

    That was very helpful, thanks

  • @rogeriogouvea7278
    @rogeriogouvea7278 1 year ago

    These videos are SO helpful, thank you

  • @marioandresheviacavieres1923
    @marioandresheviacavieres1923 2 years ago +1

    I'm deeply thankful 🤓

  • @aravindvenkateswaran5294
    @aravindvenkateswaran5294 3 years ago

    I have successfully binged this playlist (across 2 weeks) and found it really helpful! Thank you for all you do, and keep up the good work. Hope to watch more videos getting added here or elsewhere on the channel. Lots of love :)

    • @deeplizard
      @deeplizard  3 years ago +1

      Thank you, and great work! Check out the homepage of deeplizard.com to see all other DL courses and the order in which to take them after this one!

  • @shaelanderchauhan1963
    @shaelanderchauhan1963 2 years ago +1

    Just wow! Amazing content. Please make a series explaining research papers.

  • @simonbernard4216
    @simonbernard4216 5 years ago +1

    Just woaaa..! Please keep making these videos; it's by far the best explanation I've found here.

  • @robertc6343
    @robertc6343 3 years ago +1

    Ohhh, what a wonderful narrative. I really like the way you explained it. Thank you, and I've just subscribed to your channel 👍🏻

  • @fritz-c
    @fritz-c 4 years ago +1

    I spotted a slight issue in the article for this video.
    At the end of the article, it says "I’ll see ya in the next one!", with a link to the Zero Padding article, but by that point that article has already been covered.
    I really enjoy your courses so far, by the way. I've stopped and started a few times with studying ML in the past, but this has been a pleasure to go through.

    • @deeplizard
      @deeplizard  4 years ago

      Fixed, thanks Chris! :D
      I've rearranged the course order since the initial posting of these videos/blogs, so I removed the hyperlink.

  • @richarda1630
    @richarda1630 3 years ago +1

    Just wanted to say kudos and thanks so much for your awesome series :D I have learned so much! Now I'm off to your Keras w/TF series :)

    • @deeplizard
      @deeplizard  3 years ago

      Great job getting through this course!

    • @richarda1630
      @richarda1630 3 years ago

      @@deeplizard Thanks! moving to your Deep Learning and Keras series next :)

  • @travel_with_rahullanannd
    @travel_with_rahullanannd 4 years ago +1

    I really enjoyed learning with your videos. Could you please create videos on RNNs?

  • @Yadunandankini
    @Yadunandankini 6 years ago +3

    Great video. Precise and concise. Thanks!

  • @GS-kj5pc
    @GS-kj5pc 2 years ago

    Excellent series!

  • @gaurav_gandhi
    @gaurav_gandhi 5 years ago +1

    Clearly explained, good animation, covered most areas. Thanks

  • @entertainment8067
    @entertainment8067 2 years ago +1

    I watched this complete deep learning playlist. It was amazing. My suggestion is to please add some videos about RNNs, and also make separate playlists about supervised learning, unsupervised learning, imitation learning, and deep reinforcement learning. Thank you, ma'am.

    • @deeplizard
      @deeplizard  2 years ago

      You're welcome, I'm glad you enjoyed it! We have some of the topics you've suggested already available in other courses. Check them out here:
      deeplizard.com/courses

  • @kavithavinoth4557
    @kavithavinoth4557 4 years ago +1

    Great series.
    Amazing teaching skills you have, madam.
    Thank you!

  • @from-chimp-to-champ
    @from-chimp-to-champ 2 years ago

    As always, very well done and clear, thank you!!

  • @karatugba
    @karatugba 6 months ago +1

    I'm sorry that this is the last video in the playlist. I want more 😢

  • @prasaddalvi3017
    @prasaddalvi3017 4 years ago

    This is a really good set of videos on neural networks. I liked it a lot and enjoyed watching it. Great work. Just one thing I would like to suggest: you have explained backpropagation really well, better than most that I have seen, but it would be really helpful for understanding it even better if you could add a small numerical problem with a backpropagation calculation.

  • @fanusgebrehiwet6286
    @fanusgebrehiwet6286 4 years ago

    Gentle and to the point. Thank you.

  • @diogo9610
    @diogo9610 5 years ago

    Wonderful work. Thank you for setting up all this content.

  • @pranavdhage691
    @pranavdhage691 4 years ago

    Awesome... I am going to watch the whole playlist...

  • @arohawrami8132
    @arohawrami8132 9 months ago +1

    Thanks a lot.

  • @yuriihalychanskyi8764
    @yuriihalychanskyi8764 4 years ago +1

    Thanks for the video. So do we have to normalize the data before feeding it to the model, or does batch normalization do that itself in the model?
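
A common way to frame the question above (a reader's note, not the video's official answer): input standardization is a one-time preprocessing step applied before training, while batch norm repeats the same idea inside the network on hidden activations, which preprocessing cannot reach. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.normal(loc=100.0, scale=25.0, size=(128, 2))  # raw input features

# Preprocessing: standardize the inputs once, before training
x = (raw - raw.mean(axis=0)) / raw.std(axis=0)

# After a layer, hidden activations drift away from mean 0 / std 1 again;
# that is what an in-network batch norm layer would re-standardize
w = rng.normal(size=(2, 3))  # a hypothetical weight matrix for illustration
hidden = x @ w
```

So the two are complementary: standardizing inputs remains good practice, and batch norm handles the layers further in.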

  • @nikhillahoti7628
    @nikhillahoti7628 5 years ago +1

    This is a gem! Thank you very much!!!

  • @amirraad4437
    @amirraad4437 1 year ago +1

    Thank you so much for your great work ❤

  • @mukulverma8404
    @mukulverma8404 4 years ago +1

    Very good explanation. Watched this whole playlist. Thanks for making understanding DL so easy and fun. Moreover, your funny stuff made me laugh.

  • @roxanamoghbel9147
    @roxanamoghbel9147 3 years ago +1

    so helpful!

  • @Hi-zlv
    @Hi-zlv 3 years ago

    I finished all 38 videos. Great, great, great explanation!
    Can you also do some sample projects?

    • @deeplizard
      @deeplizard  3 years ago +1

      Great job finishing the course, Zehra! Many projects are included in our other various deep learning courses. Check out all the courses listed on the home page of deeplizard.com. We give the recommended order for which to take the courses there as well.

    • @Hi-zlv
      @Hi-zlv 3 years ago

      @@deeplizard Sure! I will check the website. I also recommend it to my friends. Thank you, Mandy!

  • @UtaShirokage
    @UtaShirokage 4 years ago

    Amazing and concise video, thank you!

  • @akhtarzaman7864
    @akhtarzaman7864 6 years ago +1

    Thank you for the amazing explanation!

  • @oriabnu1
    @oriabnu1 4 years ago

    I have seen all your videos. I am a Ph.D. student and have truly learned many things from you. If you have time, please teach how a variational autoencoder can be used in a CNN.

  • @FuryOnStage
    @FuryOnStage 6 years ago +1

    This was an amazing explanation. Thank you.

  • @g.jignacio
    @g.jignacio 4 years ago +1

    Once again you did it! You did it!

  • @peteabc1
    @peteabc1 6 years ago +7

    Ahh, explained in human language.. thank you :). What I don't understand is where to insert those layers? Intuition tells me just everywhere, right?
    Btw, the scale problem and such equations are called stiff equations (the NN is an equation solved using numerical methods). But another problem is denormals (numbers close to 0), which can cause 200-300x slowdowns even on modern CPUs.

    • @deeplizard
      @deeplizard  6 years ago +4

      Thanks, peteabc1!
      Yeah, you would want to insert batch norm after your "typical" layers, like dense, conv, etc. that are followed by an activation. From my experience, determining when/where to add batch norm involves testing and analyzing my training results after adding or removing more batch norm layers. But yes, you certainly can add a batch norm layer after _all_ of these typical layers and observe how your model performs.
      Also, thanks for the stiff equations info!

  • @mustafacannacak9279
    @mustafacannacak9279 3 years ago +1

    Love your channel

  • @bl7395
    @bl7395 4 years ago

    @deeplizard please do a series on transfer learning, or more in-depth teaching on NLP/CV :)

  • @sanaullahaq2422
    @sanaullahaq2422 3 years ago

    {
      "question": "What kind of parameters are g and b?",
      "choices": [
        "Learnable Parameters",
        "Hyperparameters",
        "g learnable and b hyperparameter",
        "g hyperparameter and b learnable"
      ],
      "answer": "Learnable Parameters",
      "creator": "Sanaulla Haq",
      "creationDate": "2021-07-19T09:27:20.333Z"
    }

  • @ogsconnect1312
    @ogsconnect1312 5 years ago +1

    Thanks

  • @rapunziao2929
    @rapunziao2929 6 years ago

    I started to fall in love with the voice

  • @parismollo7016
    @parismollo7016 4 years ago +1

    I haven't watched the video yet, but I know it's good.

  • @ArgumentumAdHominem
    @ArgumentumAdHominem 8 months ago

    Could you please clarify why one would normalize a ReLU layer, as shown in the example? In the discussion you suggest that it is to prevent cascading effects due to overly large weights. However, later you state that BN normalizes the output of the unit. How are the two related? Further, why normalize a ReLU unit that is already bound to [0, 1]?