PyTorch - Linear Regression implementation

  • Published: 6 Sep 2024
  • Basic usage of PyTorch: from simple low-level use of Autograd to building up neural networks with the torch.nn module. In this video we look at how to implement a simple linear regression algorithm as a neural network.
    Notebooks: github.com/mad...
    PyTorch playlist: • PyTorch - The Basics
    Deep Learning introduction playlist: • Deep Learning: Part1 -...
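
The description above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the notebook from the video; the synthetic data (y = 2x + 1) and all hyperparameters are assumptions chosen for the example:

```python
import torch
import torch.nn as nn

# Hypothetical synthetic data: y = 2x + 1 plus a little noise
torch.manual_seed(0)
X = torch.rand(100, 1)
y = 2 * X + 1 + 0.01 * torch.randn(100, 1)

model = nn.Linear(1, 1)                 # one weight, one bias
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(500):
    optimizer.zero_grad()               # clear gradients from the previous step
    loss = loss_fn(model(X), y)         # forward pass + loss
    loss.backward()                     # backprop: populate .grad on parameters
    optimizer.step()                    # gradient descent update

print(model.weight.item(), model.bias.item())  # should approach 2.0 and 1.0
```

The same loop works unchanged for deeper networks: only the `model` definition needs to grow.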

Comments • 5

  • @kitanomegumi1402
    @kitanomegumi1402 2 years ago

    great video! thank you

  • @AlexeyMatushevsky
    @AlexeyMatushevsky 3 years ago +1

    Very nice video and well put together. I have a question: should we always call optimizer.zero_grad() when training the model? Shouldn't torch do it for us?

    • @DennisMadsen
      @DennisMadsen  3 years ago +2

      Hi Alexey. Yes, the gradients should be zeroed in every iteration; otherwise you get the accumulated gradient. Accumulation is useful for some applications, so PyTorch decided to make the user aware of it and require a manual call to zero_grad() in every iteration.

    • @AlexeyMatushevsky
      @AlexeyMatushevsky 3 years ago

      @@DennisMadsen thank you for the explanation!
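
The accumulation behaviour described in the reply above can be demonstrated directly. This is a minimal sketch with an assumed scalar parameter w; `.grad.zero_()` stands in for what `optimizer.zero_grad()` does across all parameters:

```python
import torch

w = torch.tensor(3.0, requires_grad=True)

# First backward pass: d(w^2)/dw = 2w = 6
(w ** 2).backward()
print(w.grad)        # tensor(6.)

# Without zeroing, a second backward pass ADDS to the stored gradient
(w ** 2).backward()
print(w.grad)        # tensor(12.)

# Zeroing restores a clean slate, as optimizer.zero_grad() does each iteration
w.grad.zero_()
(w ** 2).backward()
print(w.grad)        # tensor(6.)
```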

  • @anumhassan1422
    @anumhassan1422 1 year ago

    What if the weights start to go negative as the iterations increase?