Transposed Convolutions Explained: A Fast 8-Minute Explanation | Computer Vision

  • Published: 2 Jan 2025

Comments • 13

  • @tdv8686 · 9 months ago · +1

    Very clear and informative video; it's a shame that not many people have seen this masterpiece. For me, it's the best explanation of the vectorization of convolution layers, even though the main topic of the video is the transposed convolution.

  • @Phazz88 · 8 months ago · +1

    Great explanation, simple and to the point! It really bothered me that most other explanations skip the linear algebra, leaving me in the dark as to why it is called a transposed convolution. Also, thanks for having the written guide in the description.

  • @lbognini · 2 months ago · +1

    Great video!
    If I may, what is presented on screen and what is said need to be better synchronized (e.g. at 03:25). This is important to make it easy for us to follow, especially when there are lots of numbers, sizes, and dimensions to grasp at once.

  • @AdvaithKrishna-ob1ud · 7 months ago

    Short, clear and precise! Great work :)

  • @thesoul2086 · 5 months ago

    Best explanation on the internet.

  • @msbrdmr · 8 months ago

    Thank you so much for such a clear and explanatory video.

  • @Vanadium404 · 1 year ago

    Hello Sir!
    Could you please recommend detailed research papers like this on GANs and CNNs?
    Thanks

  • @sourabhverma9034 · 6 months ago

    It would be more space-efficient if you built the convolution matrix from patches of the input instead of from the kernels. The way it is, if the input size is really high (100x100 or 256x256), your matrix would contain a lot of zeros, increasing both the size and the time required for the matmul.
    Instead, creating patches of the input of size (f x f x nc), flattening them into shape (#strides, f x f x nc), and then taking the dot product is more space- and time-efficient.
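
    A minimal NumPy sketch of this im2col-style idea (function name, shapes, and values are illustrative assumptions, not from the video): extract every f x f patch, flatten the patches into rows, and compute the convolution as one dense matmul, instead of multiplying by a large, mostly-zero convolution matrix built from the kernel.

      import numpy as np

      def conv2d_im2col(x, k, stride=1):
          """Valid 2D convolution (single channel) via patch extraction + matmul."""
          h, w = x.shape
          f = k.shape[0]
          out_h = (h - f) // stride + 1
          out_w = (w - f) // stride + 1
          # Gather every f x f patch into one row: shape (#positions, f*f)
          patches = np.array([
              x[i*stride:i*stride+f, j*stride:j*stride+f].ravel()
              for i in range(out_h) for j in range(out_w)
          ])
          # One small dense matmul replaces the sparse (out_h*out_w, h*w) matrix
          return (patches @ k.ravel()).reshape(out_h, out_w)

      x = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 input
      k = np.ones((3, 3))                           # toy 3x3 kernel
      print(conv2d_im2col(x, k))                    # 2x2 output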

  • @msbrdmr · 8 months ago

    In CNNs the filters have weights. How do we reverse the convolution operation and call it a transposed convolution? I mean, how do we use it in backpropagation to update the layers' weights? Shouldn't it return the initial input that we applied the convolution to?

    • @JohannesFrey · 8 months ago

      I think I said it in the video, but a transposed convolution is basically the backpropagation of a "normal" convolution, and therefore a "normal" convolution is basically the backpropagation of a transposed convolution (a short code sketch demonstrating this follows after the thread).

    • @msbrdmr · 8 months ago

      @JohannesFrey Absolutely. I had mistaken it for the backpropagation of the CNN itself.
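
    To make the exchange above concrete, here is a minimal PyTorch sketch (shapes and values are illustrative assumptions): backpropagating a gradient through F.conv2d gives the same result as running F.conv_transpose2d on that gradient with the same weights.

      import torch
      import torch.nn.functional as F

      w = torch.randn(1, 1, 3, 3)      # one 3x3 filter
      g = torch.randn(1, 1, 6, 6)      # gradient arriving at the 6x6 conv output

      # Backward pass of a "normal" convolution w.r.t. its input...
      x = torch.randn(1, 1, 8, 8, requires_grad=True)
      F.conv2d(x, w).backward(g)

      # ...matches a transposed convolution applied to that gradient.
      print(torch.allclose(x.grad, F.conv_transpose2d(g, w), atol=1e-6))  # True

    This is also where the name comes from: written as a matrix product y = Cx, the convolution's backward pass multiplies by C^T, and the transposed convolution implements exactly that multiplication.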