PyTorch - The Basics of Transfer Learning with TorchVision and AlexNet

  • Published: 28 Jan 2025

Comments • 22

  • @vivian_who
    @vivian_who 3 years ago +7

    Thank you very much for this! I am currently doing my undergrad thesis in PyTorch and freaking out. Your explanation is quite clear and helpful.
    Keep going ^^

  • @observor-ds3ro
    @observor-ds3ro 7 months ago

    That was excellent! A great help for me; you described it as clearly and cleanly as possible.

  • @user-ot6yk6ie2f
    @user-ot6yk6ie2f 1 year ago

    I can save this model as usual (using AlexNet) and use it with other models in OpenCV, right?

  • @shinchannohara3927
    @shinchannohara3927 11 months ago

    Will the same code with num_out set to 200 work for 200-class classification with the same great accuracy?

  • @tudoronrec
    @tudoronrec 1 year ago

    Thank you for the information!

  • @mightylearning
    @mightylearning 1 year ago

    When I train a VGG model with 38 classes, I get this error when I use the summary tool: RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[2, 38, 224, 224] to have 3 channels, but got 38 channels instead. How do I solve it?

  • @danielac520
    @danielac520 3 years ago +1

    Hi! What is the difference between freezing the layers and using model.eval()?

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Hi. In eval mode you notify all the layers that you are not in training mode, which also affects e.g. dropout and batchnorm layers.
      With freezing (no_grad), you avoid computing the gradient for a given number of layers; this is often used during training.
      But indeed the goal of the two looks somewhat the same: do not compute the gradient.

    • @danielac520
      @danielac520 3 years ago +2

      @@DennisMadsen Thanks for your answer! I understand. Anyway, computing the gradient does not imply updating the parameters, right?

    • @DennisMadsen
      @DennisMadsen  3 года назад +1

      @@danielac520 Glad it was helpful. And true. The parameters are not updated when the gradients are. You would then need to update them with something like: optimizer.step()
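A tiny example of that separation: backward() fills param.grad, and only optimizer.step() changes the weight (values here follow from lr=0.1 and a loss of 2*w).

```python
import torch

w = torch.nn.Parameter(torch.tensor([1.0]))
optimizer = torch.optim.SGD([w], lr=0.1)

loss = (w * 2).sum()
loss.backward()
print(w.grad)   # tensor([2.]) -- gradient computed
print(w.data)   # tensor([1.]) -- weight unchanged so far

optimizer.step()  # w <- w - lr * grad = 1.0 - 0.1 * 2.0
print(w.data)   # tensor([0.8000]) -- updated only after step()
```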

    • @danielac520
      @danielac520 3 years ago +1

      @@DennisMadsen Got it :)

  • @murtazajabalpurwala8124
    @murtazajabalpurwala8124 3 years ago +1

    Hi, thanks for the video, appreciate it, but I believe this tutorial was more suited to an intermediate-to-advanced level. I still had many concepts to dig into, and I thought you were skipping over many things that were still new to me. Maybe you can make another video where you guide through the data loading and training process in more detail. Thanks again

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Noted. Thanks for the input Murtaza :)

  • @kvdiatpune8753
    @kvdiatpune8753 1 year ago

    Thanks, nicely explained

  • @vikramrs4191
    @vikramrs4191 2 years ago

    Is there an example of how we can use our own trained models for transfer learning on other images in the Keras library?

  • @nagamadhubabuvikkurthi5695
    @nagamadhubabuvikkurthi5695 2 years ago

    Please tell me how I can build a confusion matrix from this.
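One straightforward way, sketched with dummy tensors standing in for the predictions and labels collected during an evaluation loop (the class count and the data here are illustrative): accumulate a num_classes x num_classes matrix where rows are true labels and columns are predictions.

```python
import torch

def confusion_matrix(preds, labels, num_classes):
    # Each (true, predicted) pair indexes one cell of the matrix
    cm = torch.zeros(num_classes, num_classes, dtype=torch.long)
    for t, p in zip(labels, preds):
        cm[t, p] += 1
    return cm

# Dummy data; in practice, gather these from the test DataLoader
# with preds = model(images).argmax(dim=1)
labels = torch.tensor([0, 1, 2, 2, 0])
preds  = torch.tensor([0, 1, 1, 2, 0])
cm = confusion_matrix(preds, labels, num_classes=3)
print(cm)
# tensor([[2, 0, 0],
#         [0, 1, 0],
#         [0, 1, 1]])
```

The diagonal holds the correct predictions, so accuracy is `cm.diag().sum() / cm.sum()`.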

  • @tycstahX
    @tycstahX 3 years ago

    Great stuff!

  • @muhammadzubairbaloch3224
    @muhammadzubairbaloch3224 4 years ago +1

    Sir, please make a lecture on GANs

    • @DennisMadsen
      @DennisMadsen  4 years ago

      Hereby put on my video list, Muhammad. Thanks a lot for the suggestion!

  • @BudgiePanic
    @BudgiePanic 1 year ago

    nice

  • @VaishnaviRoyM23EEI007
    @VaishnaviRoyM23EEI007 1 year ago

    unable to train my data