For the input image (TB1), how does a 224x224 image, after applying 7x7 filters with stride 2, result in a 112x112 output? From my prior knowledge, the operations should be as follows: the image size should be 227x227; applying zero padding of 1 and then convolving it with a 7x7 filter at stride 2 would result in a 112x112 output feature map.
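For reference, the standard convolution output-size formula reconciles both views. A quick sketch (the padding of 3 is an assumption based on the usual ResNet/DenseNet stem convolution; it is not stated in the video):

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of a 2D convolution along one dimension."""
    return (size + 2 * padding - kernel) // stride + 1

# 224x224 input, 7x7 kernel, stride 2, padding 3 (typical DenseNet stem)
print(conv_out(224, kernel=7, stride=2, padding=3))  # -> 112

# The alternative above: a 227x227 input with padding 1 also gives 112
print(conv_out(227, kernel=7, stride=2, padding=1))  # -> 112
```

So both readings land on 112x112; the 224x224 version just needs padding 3 rather than padding 1.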
First of all, impressive work done here. Secondly, I have a doubt: as explained in the video about the Transition Block, after applying conv2d with filter_size = (1,1) and strides = (1,1), why did the input size get halved?
Awesome, understood most of it. This is the video I needed.
Very informative, thank you!
Where can I find other ImageNet models explained, such as Xception and InceptionResNetV2? Please.
It's a typo, brother; it should be 56*56*128 after the conv2d step.
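To make the shapes concrete, here is a sketch of the shape bookkeeping through a DenseNet transition block (the 56x56x256 input and the 0.5 channel-compression factor are assumptions based on DenseNet-121, not stated in the thread). The 1x1 conv with stride (1,1) only changes the channel count; it is the 2x2 average pooling with stride 2 that halves the spatial size:

```python
def transition_shapes(h, w, c, compression=0.5):
    """Trace shapes through a DenseNet transition block:
    a 1x1 conv (channel compression) followed by 2x2 average pooling, stride 2."""
    after_conv = (h, w, int(c * compression))     # 1x1 conv, stride 1: spatial size unchanged
    after_pool = (h // 2, w // 2, after_conv[2])  # 2x2 avg pool, stride 2: halves H and W
    return after_conv, after_pool

conv_shape, pool_shape = transition_shapes(56, 56, 256)
print(conv_shape)  # -> (56, 56, 128): the 56*56*128 mentioned above
print(pool_shape)  # -> (28, 28, 128): the halving happens at the pooling step
```

So the halving the video shows comes from the pooling layer inside the transition block, not from the 1x1 convolution itself.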