This is terrific. Practical and carefully described too.
Thanks, Arun !
Awesome thanks a bunch, some high quality content here!
Thanks a ton, really good advice
At 3:44: "the best option is to execute a short benchmark...". What does "a short benchmark" mean? I am not a native English speaker, would you explain it for me? Thanks!
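In that context, a "short benchmark" just means a quick timing experiment: run a handful of training iterations under each candidate setting (batch size, number of DataLoader workers, and so on) and keep whichever runs fastest. A minimal sketch, using a toy model and synthetic data rather than anything from the video:

```python
# A "short benchmark": time a few forward/backward passes per candidate
# setting and compare. Toy model and random data below are placeholders;
# substitute your own model and inputs.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 64 * 64, 10)).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def time_n_iters(batch_size, n_iters=20):
    """Average time of n_iters training iterations at a given batch size."""
    x = torch.randn(batch_size, 3, 64, 64, device=device)
    y = torch.randint(0, 10, (batch_size,), device=device)
    # Warm-up pass so one-time CUDA initialization is not measured.
    loss_fn(model(x), y).backward()
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iters):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()  # wait for GPU work before stopping the clock
    return (time.time() - start) / n_iters

for bs in (16, 32, 64):
    print(f"batch_size={bs}: {time_n_iters(bs) * 1000:.1f} ms/iter")
```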
I hope there will be a video on how to do performance tuning and avoid out-of-memory errors on Colab Pro.
Very useful! Thanks for sharing.
It is difficult to increase the batch size because I always get the out-of-memory error. I use Colab Pro to train on roughly 680x480 images for image segmentation or colorization, but I often have to decrease the batch size to 4 or 2 because of the out-of-memory error.
I think this just means that the resources given in Colab are not enough for what you are trying to do. Segmentation is usually very resource-intensive.
@@konataizumi5829 It's 25 GB with a V100; I think that is enough. And I often see this OOM error on the forums.
@@jonathansum9084 Is the 25 GB GPU RAM or CPU RAM? If I'm not wrong, it's CPU RAM. The GPU RAM was 16 GB when I was using a P100, even though I had the high-RAM instance with 25 GB. I often had to greatly decrease my batch sizes when using 440x440 images or bigger to avoid OOM, which was a shame. I'm not sure whether the VMs in Colab Pro come with some memory overhead. I heard that the I/O is slow, but I'm not sure how that affects memory issues.
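A common workaround for the OOM problem discussed above (a general technique, not something specific to this video) is gradient accumulation: run several small forward/backward passes and call optimizer.step() only once, so the effective batch size grows without using more GPU memory. A rough sketch with a toy model and synthetic data:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Toy data; replace with your real DataLoader. The per-step batch (2) stays
# small enough to fit in memory; gradients accumulate over 8 steps, so the
# effective batch size is 16.
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=2)
accum_steps = 8  # hypothetical value; tune for your model

optimizer.zero_grad()
for step, (images, targets) in enumerate(loader):
    images, targets = images.to(device), targets.to(device)
    loss = loss_fn(model(images), targets)
    # Divide so the accumulated gradient equals the mean over the big batch.
    (loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

One caveat: BatchNorm statistics are still computed per small micro-batch, so this is not fully equivalent to training with a genuinely larger batch.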
Note: except with recent large-batch optimizers like LAMB, increasing the batch size tends to lead to poorer generalization performance.
So has using LAMB mitigated this problem for you? Or in general?
Hello?
This is extremely relative and problem-specific, both in terms of the batch size and the problem you are trying to solve.
Thank you
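For anyone wondering what using the LAMB optimizer mentioned above actually looks like: it is not in core PyTorch, but implementations exist in NVIDIA apex (FusedLAMB) and in the third-party torch-optimizer package. A minimal sketch, assuming torch-optimizer is installed (pip install torch_optimizer):

```python
# LAMB is not shipped with core PyTorch; this assumes the third-party
# `torch-optimizer` package. NVIDIA apex provides FusedLAMB as an alternative.
import torch
import torch.nn as nn
import torch_optimizer

model = nn.Linear(128, 10)
# Drop-in replacement for Adam/SGD, intended for large-batch training.
optimizer = torch_optimizer.Lamb(model.parameters(), lr=1e-3, weight_decay=0.01)

x = torch.randn(4096, 128)           # a deliberately large batch
y = torch.randint(0, 10, (4096,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```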
What if the BatchNorm layer is after the ReLU? (i.e. Conv -> ReLU -> BatchNorm). Is it okay mathematically to turn off the Conv bias in this case?
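To make the question concrete: the usual advice relies on BatchNorm directly following the conv, because BN's per-channel mean subtraction removes any constant bias, so bias=False is exactly equivalent. With a ReLU in between, the bias shifts the pre-activations and changes which values ReLU zeroes out, so dropping it is no longer an exact no-op. A small sketch of both orderings:

```python
import torch.nn as nn

# Ordering the advice covers: BatchNorm immediately after the conv.
# The conv bias would be cancelled by BN's mean subtraction, so omit it.
conv_bn_relu = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

# Ordering asked about: ReLU sits between the conv and BN. The bias now
# affects which pre-activations get clipped to zero, so removing it is not
# mathematically equivalent (though BN's affine parameters may partly
# compensate during training).
conv_relu_bn = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),  # bias kept
    nn.ReLU(inplace=True),
    nn.BatchNorm2d(64),
)
```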
The mixed-precision (AMP) functionality from apex has been part of mainline PyTorch for quite some time now, as torch.cuda.amp.
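A minimal sketch of that native torch.cuda.amp API (requires a CUDA device; the toy model and data are just placeholders):

```python
import torch
import torch.nn as nn

# Native automatic mixed precision, upstreamed from apex AMP.
device = "cuda"  # autocast/GradScaler below are CUDA-specific
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(32, 3, 32, 32, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():          # run the forward pass in mixed precision
    loss = nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()            # scale the loss to avoid fp16 underflow
scaler.step(optimizer)
scaler.update()
```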
At 10:11: if it really speeds things up and does the same thing, why don't they change it? :)