CNN Architecture Part 5 (DenseNet)

  • Published: 2 Dec 2024

Comments • 6

  • @akashkewar
    @akashkewar 4 years ago +1

    For the input image (TB1), how does an image of size 224×224, after applying 7×7 filters with a stride of 2, result in a 112×112 output? From my prior knowledge, the operation should be as follows:
    the image size should be 227×227; applying zero padding of 1 and then convolving with a 7×7 filter at stride 2 would result in a 112×112 output feature map.
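The arithmetic in the question above can be checked with the standard convolution output-size formula. A minimal sketch: the padding value of 3 for the 7×7 stem is an assumption (the common DenseNet/ResNet convention), not something stated in the comment.

```python
def conv_out(size, kernel, stride, padding):
    """Standard convolution output-size formula: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Assumed DenseNet stem convention: 224x224 input, 7x7 kernel, stride 2, padding 3.
print(conv_out(224, 7, 2, 3))  # 112

# The commenter's alternative: 227x227 input with zero padding of 1 also gives 112.
print(conv_out(227, 7, 2, 1))  # 112
```

Both conventions land on 112×112, which is why different write-ups of the same architecture quote different input sizes.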

  • @aniqatiq731
    @aniqatiq731 5 years ago

    Awesome, understood most of the things. This is the video I needed.

  • @flying.mp3
    @flying.mp3 2 years ago

    Very informative, thank you.

  • @parikshitagarwal3901
    @parikshitagarwal3901 5 years ago

    First of all, impressive work done here.
    Secondly, I have a doubt: as explained in the video about the Transition Block, after applying conv2d with filter_size = (1, 1) and strides = (1, 1), why did the input size get halved?

    • @portgasdace8961
      @portgasdace8961 5 years ago

      Where can I find other ImageNet models explained, such as Xception and InceptionResNetV2? Please.

    • @luckychauhan7348
      @luckychauhan7348 5 years ago

      It's a typo, brother; it should be 56×56×128 after the conv2d step.
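The confusion in this thread comes from two separate operations inside a DenseNet transition block: the 1×1 convolution only compresses channels (spatial size is unchanged), and the subsequent 2×2 average pooling with stride 2 is what halves the height and width. A minimal shape-bookkeeping sketch, assuming DenseNet's standard compression factor θ = 0.5:

```python
def transition_block_shape(h, w, channels, theta=0.5):
    """Shape bookkeeping for a DenseNet transition block:
    a 1x1 conv compresses channels by factor theta (spatial dims unchanged),
    then 2x2 average pooling with stride 2 halves height and width."""
    channels = int(channels * theta)  # 1x1 conv: only the channel count changes
    h, w = h // 2, w // 2             # 2x2 avg pool, stride 2: spatial dims halve
    return h, w, channels

# 56x56x256 -> 1x1 conv -> 56x56x128 -> 2x2 avg pool -> 28x28x128
print(transition_block_shape(56, 56, 256))  # (28, 28, 128)
```

So the 56×56×128 figure mentioned in the reply is the intermediate shape after the 1×1 conv; the halved spatial size appears only after the pooling step.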