Thanks for doing these videos. :)
As someone who is familiar with Batch Normalization, I was personally missing a few important pieces of information, which is why I'm adding them here for the community:
- The normalization happens over the batch dimension (in contrast to other variants such as layer normalization, where we normalize over the layer dimension), meaning that we normalize each feature over the mini-batch
- which is why it does not work well for small batch sizes (you usually want 16+)
- another advantage of the scale and offset parameters is that they allow the network to undo the BN, meaning that BN can't make your result worse
- at test time, with e.g. only one sample, we can't compute the mean and std since we don't have a batch; this is why we use running statistics of the mean and variance calculated during training (see the sketch below)
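To make these points concrete, here is a minimal NumPy sketch of the forward pass (the function name, arguments, and momentum value are just illustrative, not the video's code):

```python
import numpy as np

def batch_norm(x, gamma, beta, running_mean, running_var,
               training=True, momentum=0.9, eps=1e-5):
    """x has shape (batch, features); stats are computed over the batch axis."""
    if training:
        mean = x.mean(axis=0)   # per-feature mean across the mini-batch
        var = x.var(axis=0)     # per-feature variance across the mini-batch
        # keep running statistics for use at test time
        running_mean[:] = momentum * running_mean + (1 - momentum) * mean
        running_var[:] = momentum * running_var + (1 - momentum) * var
    else:
        # test time: no meaningful batch, so reuse the training statistics
        mean, var = running_mean, running_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta  # learnable scale/offset can undo the BN

# usage: 4 samples, 3 features
x = np.random.randn(4, 3) * 10 + 5
gamma, beta = np.ones(3), np.zeros(3)
rm, rv = np.zeros(3), np.ones(3)
y = batch_norm(x, gamma, beta, rm, rv, training=True)
```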
Thank you for the additional information Ludwig!
Thank you for clarifying this!
Thanks for doing these videos. :)
N.B.: If the original data does not follow a normal distribution, it will not become normal simply by standardizing it. Therefore, it is incorrect to suggest that standardization automatically produces a symmetric or normal distribution.
Best explanation of Normalization and Standardization... Thank you
Congratulations, the channel with the best explanations!
The hint to omit the bias when a batch-norm layer follows is very good; the information that batch norm can also be used while omitting the learnable scaling and offset would be helpful too, because that functionality is also computationally very expensive and not the core feature of batch norm.
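Since the video uses Keras, here is what both points could look like there (layer sizes and input shape are made up for illustration):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Omit the Dense bias (BN's offset makes it redundant) and, optionally,
# drop BN's learnable offset/scale via center=False / scale=False.
model = keras.Sequential([
    layers.Dense(64, use_bias=False, input_shape=(20,)),
    layers.BatchNormalization(center=False, scale=False),
    layers.Activation("relu"),
    layers.Dense(10, activation="softmax"),
])
```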
Nicely Explained ! I liked the part where we start from definitions.
Thank you, Please keep doing this kind of videos, your explanation is simple and clear
Great to hear, thank you!
I have watched this video over and over; you explained it very well, though I am not a fan of Keras. A from-scratch implementation would have been more helpful.
Hey Mayank, thank you! I'm glad to hear it was helpful. -Mısra
You earned a new subscriber >> thank you so much
Simple and very useful. Thank you for this great content
First, thanks for an amazing video; it answered many questions. It felt like you predicted my questions during the video and gave the answers right after!
I have one question: if we don't normalize the input and use BatchNormalisation, wouldn't it behave completely differently?
For example, say we feed training images with luminance values of 0-200, but in real-world inference or during validation we use other images that have full-scale luminance values of 0-255.
Given that we know the range of our luminance values during modeling, wouldn't it be better to use pre-normalization, since batch normalization will behave incorrectly during the real-world/validation process?
P.S. To avoid any confusion about the data and why we didn't feed 0-255 before: say we have grayscale images, and we don't know if they're all in range, or what we'll have during validation; basically a random split.
Standardization does not change the overall shape of your distribution; it just translates and scales it to have mean 0 and std 1. It will give a normal distribution if and only if your distribution was already normal. If your distribution was uniform, or Poisson, or whatever, it will remain like that.
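This is easy to verify numerically; a quick sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy import stats

# Standardizing uniform data gives mean 0 / std 1, but the shape
# (measured here by skewness and excess kurtosis) is unchanged.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100_000)   # uniform, not normal
z = (x - x.mean()) / x.std()           # standardize

print(z.mean(), z.std())                     # ~0, ~1
print(stats.skew(x), stats.skew(z))          # both ~0 (uniform is symmetric)
print(stats.kurtosis(x), stats.kurtosis(z))  # both ~ -1.2, far from a normal's 0
```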
Best explanation, thank you
Thanks for the info, it was super easy to understand and clear. (:
Great to hear :)
Superb explanation of one of the important interview questions.
Great work!! 👍🏻👍🏻👍🏻
Thanks for the video.
You're very welcome!
Thank you 👍🏻👍🏻👍🏻
The explanation is superb
Thank you....That was awesome...
Very clear explanation!
great work! thank you.
I got the impression that this video is about normalization. There is nothing about batches or about what BATCH normalization means.
Nice explanation, thank you
Amazing, thanks a lot.
This is really nice. Please keep up the good work, the world needs it. If possible, could you also share the notebook?
Lots of love from India!
Thank you!
Your explanation is very good; keep making more videos on data science concepts
Thank you Sai!
What I learned from other videos is that all of this is applied across the samples in each batch, for each weight, not across all weights. I hope I got it right…
This is great!... This video needs a much higher view count. What is going on @youtube???? You need to work on your ranking algorithm.
Hahah that's great to hear that you like the video Aritra!!
Great Explanation!
Thank you!
perfect explanation 😍
Thank you :)
Thanks
No one really knows why BN works at the moment. The best intuition we have is that it counteracts the internal covariate shift problem during training.
I think you just answered your own question. It's to help keep features and activation values within a finite range, thus avoiding exploding and vanishing gradients. Having said that, though, isn't that what the sigmoid and tanh activation functions are supposed to do?
@@codematrix Well, you might end up getting dead neurons without batch normalization
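On the sigmoid/tanh point above: squashing does keep values in a finite range, but the gradient of these functions vanishes for large inputs, which is one reason squashing alone isn't enough. A quick NumPy check (illustrative only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Sigmoid squashes into (0, 1), but its derivative dies out at the tails.
x = np.array([-10., -2., 0., 2., 10.])
grad = sigmoid(x) * (1.0 - sigmoid(x))  # derivative of sigmoid
print(grad)  # ~[4.5e-05, 0.105, 0.25, 0.105, 4.5e-05] -> vanishing gradients
```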
Thanks
Why do we need this activation function?
Thanks sis, you explained it very well
Marvelous.
Thank you! - Mısra
great!
Thanks a lot.
You're very welcome!
nice
Thanks
So if we have three columns, age, weight, and height, all on different scales, we don't have to scale them separately; instead we can use batch normalization to bring them to the same scale
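Roughly, yes; in Keras that idea would look something like this sketch (the column values and layer sizes are made up):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# age, weight (kg), height (cm): three features on very different scales
X = np.array([[25., 70., 175.],
              [40., 90., 160.],
              [31., 55., 182.],
              [55., 82., 168.]], dtype="float32")

model = keras.Sequential([
    keras.Input(shape=(3,)),
    layers.BatchNormalization(),  # per-feature mean 0 / var 1, per mini-batch
    layers.Dense(1),
])
# caveat: keras.layers.Normalization().adapt(X) instead uses fixed statistics
# from the whole training set, which is usually safer for raw inputs.
```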
Why does she begin by making the distinction between normalization and standardization, and then @4:30 describe standardization with mean=0 and var=1 under the name "batch normalization"? Does anyone understand this approach?
ty
You're welcome :)
Batch normalization works on all samples in a batch, but only one feature at a time, I thought?
2:22 These are two separate features, right? The number of phones, and the amount of money withdrawn from an ATM.
Love...
worth it
Are you Turkish? We are so curious :)
There is a difference between squeezing the data between 0 and 1 and, on the other hand, pushing the mean to 0 and squeezing the variance to 1. The lady does not seem to understand what the difference is @10:20! The normalization layer can only use the statistics of the actual 28x28 image, but the statistics of the other 60,000 images are all individual, so they are all normalized based on different metrics, in contrast to just squeezing the data between 0 and 1 by dividing by 255. She does not understand what she is doing and misuses the workflow. The network has to work for all input images, so the input layer has to be adjusted to the average statistics of all images, if at all. Her initial /255 method is way better for the MNIST data than the lazy NormalizationLayer abuse she is advising, which is way more computationally expensive on top of that.
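For reference, the two options being compared look roughly like this in tf.keras (a sketch, not the video's exact code):

```python
import numpy as np
from tensorflow import keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")

# option 1: plain scaling into [0, 1]
x_scaled = x_train / 255.0

# option 2: a Normalization layer adapted on the WHOLE training set,
# so every image is standardized with the same global statistics
norm = keras.layers.Normalization(axis=None)  # one global mean/variance
norm.adapt(x_train)
x_standardized = norm(x_train)
```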