PyTorch Tutorial 07 - Linear Regression

  • Published: 27 Jan 2025

Comments • 75

  • @alexlang178
    @alexlang178 1 year ago +5

    Dear Patrick
    your lectures are awesome! What a great way to get a first grip on the subject without reading through difficult manuals or reading a big book. Fab!

  • @michaeltsang1748
    @michaeltsang1748 1 year ago +3

    I've been binge watching your vids, and they have been helpful for me. I'm trying to get a software developer job, and I want to put pytorch on my belt. So, thanks for these vids again. Your information is straight to the point, accurate and easy to follow.

  • @psy_duck8221
    @psy_duck8221 4 years ago +15

    I love your strong German accent. I study at a German university and you remind me of my professor. Thank you very much

    • @patloeber
      @patloeber  4 years ago +3

      Haha thanks :D Which university is it?

    • @andromachirozaki5753
      @andromachirozaki5753 4 years ago +1

      @@patloeber same!

    • @yuezhang4659
      @yuezhang4659 3 years ago

      @@patloeber yeah, actually it's much clearer than some British people

    • @Nermalton77
      @Nermalton77 3 years ago +1

      'tensoah' 'optimizah'

    • @tobi9668
      @tobi9668 3 years ago +1

      12:00 and yes. Thanks for the tutorials, and your English is also very good.

  • @soerengebbert
    @soerengebbert 1 year ago

    First of all, thank you very much for your excellent tutorials. However, it would make a lot of sense didactically to show the intermediate results, in order to give the viewer a deeper understanding.

  • @davidkhassias4876
    @davidkhassias4876 4 years ago +7

    All your series are great! Thanks a lot!

  • @naveedmazhar7260
    @naveedmazhar7260 4 years ago +2

    All his lectures are so good. I really liked his work

    • @patloeber
      @patloeber  4 years ago

      Thanks :)

    • @naveedmazhar7260
      @naveedmazhar7260 4 years ago

      @@patloeber Thanks to you, sir. I found it very easy because of you; otherwise I always thought of PyTorch as very difficult.

    • @patloeber
      @patloeber  4 years ago

      @@naveedmazhar7260 I'm glad to hear that!

  • @Ftur-57-fetr
    @Ftur-57-fetr 4 years ago +3

    Great job, man!

  • @mohammadkarimi2595
    @mohammadkarimi2595 2 years ago

    Thank you very much.
    I love your German accent, and your tutorial helped me learn PyTorch.

  • @swethanandyala
    @swethanandyala 9 months ago

    Great Job sir! Thank you so much for your informative sessions

  • @jyotipch
    @jyotipch 2 years ago +6

    Question: At the 100th epoch, the loss is 567. Without looking at the plot, how do I know whether this loss is good enough? Because in the previous examples, the losses were near zero.

    • @RafaelusOptimus
      @RafaelusOptimus 1 year ago +1

      Hello,
      I'm not sure if the original person who posted the question will read this, but it may be useful for others:
      The expected loss is proportional to your number of samples and your noise.
      Here, the data is scattered around your trendline, so the loss (which is proportional to the average of the distance between the line and the data points) is never going to be zero. If you have enough data points, your loss will be on the order of magnitude of ~noise^2.
      In Patrick's example, it's still a bit too high (you can kind of see that the slope of his line doesn't really match the data). If he had set his n_samples to millions and the epochs to 200 or 300, he'd have fallen within the noise^2 range and the line would cross the data points right in the middle.
      The size of the loss will depend on your problem, the amount of data points you have, and the noise that data has. There's probably a statistics lesson to be had here, but I'm too lazy to look for it on Wikipedia; it probably has something to do with Bayesian statistics.
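The scaling claimed in this reply (with enough samples, the loss settles around noise²) is easy to check numerically. A minimal sketch with NumPy; the slope and noise values are illustrative choices of mine, not taken from the video:

```python
import numpy as np

rng = np.random.default_rng(0)
noise_std = 20.0       # standard deviation of the noise around the trendline
n_samples = 100_000

x = rng.uniform(-1.0, 1.0, n_samples)
y = 3.0 * x + noise_std * rng.standard_normal(n_samples)

# Even the *true* line y = 3x cannot beat the noise floor:
# its mean squared error converges to noise_std**2 as n_samples grows.
mse_of_true_line = np.mean((y - 3.0 * x) ** 2)
print(mse_of_true_line)  # close to noise_std**2 = 400
```

No fitted model can do better than this in expectation, which is why a loss of a few hundred can be perfectly fine when the data is that noisy.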

  • @darkchoclate
    @darkchoclate 2 years ago +1

    It isn't clear what the sklearn dataset is for, what that function does, or what all of its parameters do. Can you please explain?

  • @ridael-mehdawe4681
    @ridael-mehdawe4681 5 years ago +4

    thank you so much, such a detailed lecture, appreciated

  • @harshapatankar484
    @harshapatankar484 4 years ago +2

    Very good programming demo.

  • @bijjalanaganithin3798
    @bijjalanaganithin3798 4 years ago +2

    Thank You for the excellent tutorial series.
    I am new to PyTorch and I am a little confused at 6:07: MSELoss is a class, so criterion will be an object, but you called it a callable function and used this object as a function to compute the loss. How is it possible to use an object of a class as a function? Can you please explain or point me to some resources?
    Thank You

    • @patloeber
      @patloeber  4 years ago +4

      Every object in Python can be made callable (such that it behaves like a function) by implementing the __call__() method: docs.python.org/3/reference/datamodel.html#object.__call__

    • @bijjalanaganithin3798
      @bijjalanaganithin3798 4 years ago

      @@patloeber Thank You so much
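The `__call__` mechanism Patrick points to can be shown without PyTorch at all. `ToyMSELoss` below is a made-up class for illustration, not part of any library:

```python
class ToyMSELoss:
    """A plain object becomes callable once it defines __call__."""

    def __call__(self, prediction, target):
        squared = [(p - t) ** 2 for p, t in zip(prediction, target)]
        return sum(squared) / len(squared)

criterion = ToyMSELoss()                   # criterion is an object...
loss = criterion([2.0, 4.0], [1.0, 5.0])   # ...used exactly like a function
print(loss)  # 1.0
```

nn.MSELoss works the same way: instantiating it gives an object, and calling that object runs the loss computation.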

  • @lucenHan
    @lucenHan 3 years ago +1

    thank you so much, but I can't import datasets. Could you help me?

  • @priyanshumohanty5261
    @priyanshumohanty5261 1 year ago

    Can someone explain why we do loss.item() here instead of simply loss, as done in previous tutorials?
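This question went unanswered in the thread; a short sketch of the difference as I understand it (assuming a recent PyTorch): `loss` is a 0-dimensional tensor that still carries autograd bookkeeping, while `.item()` extracts a plain Python float, which is what you want for printing or logging:

```python
import torch

criterion = torch.nn.MSELoss()
loss = criterion(torch.tensor([2.0]), torch.tensor([0.0]))

print(loss)         # tensor(4.) - still a 0-dim tensor, with its wrapper
print(loss.item())  # 4.0 - a plain Python float, handy for logging
```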

  • @HoangNguyen-be4vy
    @HoangNguyen-be4vy 4 years ago +1

    Why do we have to reshape the y (line 18) but not the X also?

    • @patloeber
      @patloeber  4 years ago +1

      Because our loss function (MSELoss) expects y in a certain shape with one column. The shape of X is not fixed; it can vary depending on the number of samples and the number of features.

    • @HoangNguyen-be4vy
      @HoangNguyen-be4vy 4 years ago

      @@patloeber thank you sir
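The shape mismatch is easy to see directly. A minimal sketch (assuming a recent PyTorch; the values are placeholders):

```python
import torch

y = torch.tensor([5.0, 9.0, 13.0, 17.0])   # as it comes from numpy: shape (4,)
print(y.shape)                              # torch.Size([4])

y = y.view(y.shape[0], 1)                   # one column per sample: shape (4, 1)
print(y.shape)                              # torch.Size([4, 1])

# This now matches the (n_samples, 1) output of nn.Linear(1, 1),
# so MSELoss compares element by element instead of broadcasting.
```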

  • @computerscience8532
    @computerscience8532 4 years ago +3

    Thank you so much for presenting a complex concept in an easy way. I am learning PyTorch from your tutorial. Please extend to seq2seq models and also make an example of language translation in the RNN module. Thank you again!

  • @Aditya_Kumar_12_pass
    @Aditya_Kumar_12_pass 3 years ago

    Okay, so a quick question:
    when I use PyTorch, I get very high accuracy on my new data.
    When I use sklearn, I also get very high accuracy, but it takes less time.
    Why does that happen? Isn't sklearn doing the same thing we did? 🤔

  • @itsadira007
    @itsadira007 4 years ago

    10:53 why do you need to call detach() at line #50 but not at line #34?

    • @patloeber
      @patloeber  4 years ago +3

      detach stops a tensor from tracking history. During training we still want the tracking, because we have to apply backpropagation (so in line 34 we want the gradient). After training, in line 50, we don't need this anymore and can call detach.
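The two situations Patrick describes can be sketched in a few lines (assuming a recent PyTorch; model and data here are stand-ins, not the video's script):

```python
import torch

model = torch.nn.Linear(1, 1)
X = torch.rand(5, 1)

# During training: keep gradient tracking so loss.backward() can reach the weights.
y_pred = model(X)
print(y_pred.requires_grad)  # True

# After training: detach first - .numpy() refuses tensors that require grad.
y_final = model(X).detach().numpy()
print(y_final.shape)  # (5, 1)
```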

  • @ravivarma5703
    @ravivarma5703 4 years ago +4

    Hi Bro,
    I couldn't stop thanking you again and again... It was such an amazing explanation.
    Can we connect on LinkedIn or on any other platform as well?

    • @patloeber
      @patloeber  4 years ago +1

      you are correct. This was just for demo purposes. In later tutorials I use training and testing datasets.

    • @patloeber
      @patloeber  4 years ago +2

      you can connect on twitter :) link is in the description below the video

    • @ravivarma5703
      @ravivarma5703 4 years ago +1

      Python Engineer Done, let's connect on Twitter, thanks a lot

  • @omererylmaz3619
    @omererylmaz3619 2 years ago

    Thank you!

  • @avivamazurek1744
    @avivamazurek1744 2 years ago

    I'm confused about why there is no train/test split? How do we test the data?

    • @avivamazurek1744
      @avivamazurek1744 2 years ago

      the model*

    • @gabrieleliuzzo7859
      @gabrieleliuzzo7859 4 months ago

      well... I guess maybe it's due to the fact that we already know the data is proportional... so the plot is enough to see that the model is actually correct: it shows a linear equation with the correct slope

  • @arpansrivastava6405
    @arpansrivastava6405 2 years ago

    Please explain why the output_size is 1.

  • @vl4416
    @vl4416 2 years ago

    Hello! Why don't we iterate over n_samples in the training loop?

  • @alirezamohseni5045
    @alirezamohseni5045 11 months ago

    it was useful, thank you

  • @lakeguy65616
    @lakeguy65616 4 years ago

    another great video!
    My plot doesn't show a blue line, but rather every red dot is connected by a blue line. What am I doing wrong?
    #plot # detach X keeps the gradients of x from being updated
    predicted = model(X).detach().numpy()
    plt.plot(x_numpy,y_numpy,"ro")
    plt.plot(x_numpy,y_numpy,"b")
    plt.show()
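The second plt.plot call in the snippet above is the culprit: it draws y_numpy against x_numpy again, so the "line" just connects the scattered points, and the computed predicted is never plotted. A corrected sketch, with stand-in data since the original script's variables aren't shown here:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display window
import matplotlib.pyplot as plt
import numpy as np
import torch

# Stand-in data shaped like the video's example: 100*1 noisy linear data.
x_numpy = np.linspace(-1, 1, 50, dtype=np.float32).reshape(-1, 1)
y_numpy = 3 * x_numpy + np.random.randn(50, 1).astype(np.float32)
X = torch.from_numpy(x_numpy)

model = torch.nn.Linear(1, 1)
predicted = model(X).detach().numpy()  # detach: no gradients needed for plotting

plt.plot(x_numpy, y_numpy, "ro")    # red dots: the data
plt.plot(x_numpy, predicted, "b")   # blue line: the predictions, not y_numpy again
plt.savefig("fit.png")
```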

  • @assalahzaki3609
    @assalahzaki3609 4 years ago

    Hi, how can I get this interface? Please reply with details

  • @shuaili5656
    @shuaili5656 4 years ago

    hello, why is the model input in the for loop X, which has a 100*1 shape? Didn't we just define model = nn.Linear(1,1), which means the input is 1-dim and the output is 1-dim too?

    • @patloeber
      @patloeber  4 years ago +2

      We have 100 samples and each sample has input_dim = [1]. We only need to define this in our Linear layer.

    • @shuaili5656
      @shuaili5656 3 years ago

      @@patloeber thank u !
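In other words, the first dimension is the batch of samples and only the feature dimension has to match the layer; a minimal sketch:

```python
import torch

model = torch.nn.Linear(1, 1)   # 1 input feature -> 1 output feature
X = torch.rand(100, 1)          # 100 samples, each with a single feature

y = model(X)
print(y.shape)  # torch.Size([100, 1]): the layer maps every sample independently
```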

  • @LeafBalm
    @LeafBalm 2 years ago

    Hello. I just have a question, and forgive me as I am still a beginner in ML. What if I have a dataset that contains x, y1, y2, where x is the independent variable and y1 and y2 are the actual values? So basically, the graph shows two plots, (x, y1) and (x, y2). Also, the graph shows a nonlinear trend (goes up and down). Can I still apply this method? Or is there a built-in nonlinear model in torch?

  • @davidcordova1773
    @davidcordova1773 3 years ago

    AMAZING, THANK U

  • @Footballistaas
    @Footballistaas 3 months ago

    what is numpy useful for?

  • @MASTAN005
    @MASTAN005 4 years ago +2

    super cool :D

  • @fahadaslam820
    @fahadaslam820 4 years ago

    Thank you man !
    Best course (Y)

    • @patloeber
      @patloeber  4 years ago +2

      Thanks for watching!

  • @МихаилПоликарпов-ф4м

    Can you explain what features and samples mean?

    • @patloeber
      @patloeber  4 years ago +1

      Samples are the items or observations we can use for the training. For example in the iris dataset, we have 150 samples. For each sample, we have a vector of different features that describe the sample. Here we have the 4 features: sepal width / sepal length / petal width / petal length.

    • @МихаилПоликарпов-ф4м
      @МихаилПоликарпов-ф4м 4 years ago

      @@patloeber thank you!!!
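Patrick's iris explanation maps directly onto the (n_samples, n_features) array layout; a tiny sketch with made-up iris-style rows (not the real dataset):

```python
import numpy as np

# Data is arranged as (n_samples, n_features):
# each row is one sample, each column one feature.
X = np.array([
    [5.1, 3.5, 1.4, 0.2],   # sample 1: four iris-style measurements
    [4.9, 3.0, 1.4, 0.2],   # sample 2
    [6.2, 3.4, 5.4, 2.3],   # sample 3
])
n_samples, n_features = X.shape
print(n_samples, n_features)  # 3 4
```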

  • @cybermanithan7514
    @cybermanithan7514 3 years ago

    thanks a lot, you helped me a lot

  • @vincecarter7500
    @vincecarter7500 4 years ago

    how come there isn't a def forward function in the model?

    • @ferdaus57
      @ferdaus57 4 years ago +1

      Please go to his previous video of this series for your answer

  • @TechnGizmos
    @TechnGizmos 4 years ago

    Thank you very much for making this series.
    I have one doubt though: why is the value of output_size equal to 1 here? In your previous video, it was set equal to n_features... wasn't that example also based on linear regression? How do I determine the value of output_size while designing my own models?

    • @patloeber
      @patloeber  4 years ago +2

      output_size is the number of different classes/labels you want to predict. For normal linear regression this is always 1. In the previous tutorial n_features=1 because of the toy dataset I used.

    • @TechnGizmos
      @TechnGizmos 4 years ago

      Got you. Thanks a bunch :-)

  • @akileshtangella9333
    @akileshtangella9333 4 years ago

    why does using float32 prevent problems?

    • @patloeber
      @patloeber  4 years ago +2

      Otherwise you get this runtime error: "RuntimeError: Expected object of scalar type Float but got scalar type Double". At least with the PyTorch version I was using at the time of the video. I guess the linear layer needs the tensors to be Float.
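The dtype mismatch Patrick mentions comes from numpy defaulting to float64 while nn.Linear's weights are float32; a minimal sketch of the cast (assuming a recent PyTorch):

```python
import numpy as np
import torch

x64 = np.array([[1.0], [2.0], [3.0]])   # numpy defaults to float64 ("Double")
t64 = torch.from_numpy(x64)
print(t64.dtype)                        # torch.float64

# Cast to float32 before the forward pass: nn.Linear weights are float32.
t32 = torch.from_numpy(x64.astype(np.float32))
out = torch.nn.Linear(1, 1)(t32)        # feeding t64 here would raise instead
print(out.dtype)                        # torch.float32
```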

  • @youcefsb4708
    @youcefsb4708 4 years ago +1

    Thank you for this excellent series.
    I am a bit confused about converting model(X) at line 50. Shouldn't we convert y_predicted instead, which contains the predicted labels? And does calling model(X) there result in an extra forward pass? Why not just use y_predicted.detach().numpy()? Thanks.

    • @patloeber
      @patloeber  4 years ago +2

      Yes that’s an extra forward pass which we do with our final model after the training. I could have also named this last variable y_predicted to be consistent...