Linear Regression in Python from Scratch | Simply Explained

  • Published: 7 Sep 2024

Comments • 70

  • @saiprathapreddykarru576
    @saiprathapreddykarru576 3 years ago +7

    I was searching for one resource where I could learn about machine learning, and after watching this video I found a great one.
    The explanation was good, but please explain while writing the code; don't skip over the coding part.
    Please keep coming regularly with great content like this. All the very best to this channel.

    • @CodingLane
      @CodingLane  3 years ago +1

      Okay... thank you very much for your feedback. I will not skip the code-writing in my next videos.
      And sure, I am coming up with content regularly.

  • @bharathmuthuswamyparan585
    @bharathmuthuswamyparan585 3 years ago +6

    I've been searching for exactly this implementation. You're the only one who implemented it. Thanks, it was really helpful.

    • @CodingLane
      @CodingLane  3 years ago +1

      Thank you so much! It's really good to hear that... I am glad it was helpful to you 😇

  • @santhoshmanoharan8969
    @santhoshmanoharan8969 2 years ago +5

    In X, why do we have the first column as ones?

  • @kgtw5506
    @kgtw5506 8 months ago

    Tried multiple vids for understanding this concept... finally understood after this vid...
    Great work...
    Subscribed

  • @varshafegade4688
    @varshafegade4688 2 years ago +2

    When we use gradient descent, we consider the partial derivatives with respect to theta 0 and theta 1.

  • @butex_43
    @butex_43 3 months ago

    Thanks for the short video. However, as a beginner I needed some other videos and additional searching.

  • @mishrajit
    @mishrajit 1 year ago +1

    It would be great if the dataset used here were available for absolute beginners. This is one of the easiest explanations for understanding the cost function and why the gradient descent algorithm is used to minimize the cost.
    A lot of minute details related to numpy and the algorithms are given here, which helps a novice feel like a champ.

    • @CodingLane
      @CodingLane  1 year ago

      Thank you so much. Highly appreciate how you observed the details. Glad it was useful 🙂

  • @varunpatil5270
    @varunpatil5270 1 month ago +2

    Why do you run through the written code so fast? Some people may find it difficult to understand.

    • @CodingLane
      @CodingLane  1 month ago

      Umm... I only fast-forward the code that is not related to the concept I am teaching. I will keep this comment in mind from next time onwards, though.

  • @praveensevenhills
    @praveensevenhills 3 years ago +4

    Hi bro, you have done an excellent video. Please do more, all the best.

    • @CodingLane
      @CodingLane  3 years ago

      Hi Praveen! Thank you so much! This really means a lot to me!!

  • @malikhamza9286
    @malikhamza9286 3 years ago +3

    Thank you so much for the amazing stuff. Looking forward to more of your videos.

    • @CodingLane
      @CodingLane  3 years ago

      Sure Malik! Next video coming tomorrow.

  • @pemadechen9901
    @pemadechen9901 4 months ago

    One thing about the code:
    please resize Y only after adding X = np.vstack((np.ones((X.size, )), X)).T (in the beginning phase of the code), as in the sketch below.
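
    A minimal sketch of that suggested ordering (the data values are made up; the reshape of Y is assumed to be a simple column-vector reshape):

      import numpy as np

      X = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical feature values
      Y = np.array([2.1, 3.9, 6.2, 8.1])   # hypothetical targets

      # First stack the column of ones (the intercept column) onto X ...
      X = np.vstack((np.ones((X.size, )), X)).T   # shape (m, 2)

      # ... and only then reshape Y into a column vector.
      Y = Y.reshape(Y.size, 1)                    # shape (m, 1)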

  • @nihaanthreddy6206
    @nihaanthreddy6206 2 years ago +1

    Frankly, you should have 1M subs, bro!!

    • @CodingLane
      @CodingLane  2 years ago

      Hahaha… thank you so much 😄 Happy to hear this!

  • @rickmoni4598
    @rickmoni4598 3 years ago +3

    It's a great explanation. In machine learning, I still wonder what we can get from linear regression analysis. Could you explain what information or advantages we get from LR analysis?

    • @SuperDaxos
      @SuperDaxos 3 years ago

      It gives a sense of how predictive certain trends might be. Linear regression is a way of predicting response variables from explanatory variables. If you know how much Y changes for a given amount of X, then you can predict, for example, next year's earnings based on how much you advertise, etc.
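
      A minimal sketch of that idea (the spend and earnings numbers are made up):

        import numpy as np

        ad_spend = np.array([10, 20, 30, 40, 50], dtype=float)   # hypothetical explanatory variable
        earnings = np.array([25, 44, 62, 85, 101], dtype=float)  # hypothetical response variable

        # Fit a straight line by least squares (degree-1 polynomial).
        slope, intercept = np.polyfit(ad_spend, earnings, 1)

        # Predict earnings for a planned spend of 60.
        print(slope * 60 + intercept)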

  • @mohammedzia1015
    @mohammedzia1015 2 years ago +1

    Hi, thanks for a nice tutorial. One thing I did not understand is why you are taking 1's as the first column in the X variable?

  • @ayushtiwari1774
    @ayushtiwari1774 1 year ago +1

    Well explained, JP. I understood it now, thanks.

  • @KunaalNaik
    @KunaalNaik 2 years ago +1

    Nicely done bro! Simply Explained indeed! I would love to host you on my Channel sometime :)

    • @CodingLane
      @CodingLane  1 year ago

      Hi Kunaal, your content and channel are great! I would love to collab with you too!

  • @RahulKumar-rf7yc
    @RahulKumar-rf7yc 3 years ago +3

    Nicely explained sir💙💙

    • @CodingLane
      @CodingLane  3 years ago

      Thank you so much Rahul! Means a lot to me!!

  • @samuelakwantui3124
    @samuelakwantui3124 3 months ago

    This is great… thanks

  • @___daniel___
    @___daniel___ 3 years ago +2

    Very Good videos, thank you!

    • @CodingLane
      @CodingLane  3 years ago

      Thank you... I appreciate your comment!

  • @uchindamiphiri1381
    @uchindamiphiri1381 8 months ago

    Why were you appending a 1 when choosing a random number to show the approximate prices?

  • @conamx5728
    @conamx5728 1 year ago

    Good job. I have seen the simple and multiple linear regression videos. One question: why is your implementation of both models not working with learning rates 0.01 and 0.001?
    [I think these are the most commonly used values in regression.]

  • @samitaadhikari3182
    @samitaadhikari3182 3 years ago +2

    Thank you so much for the explanation, sir, but I'm confused about why theta is initialized to zero. What if we take other values?

    • @CodingLane
      @CodingLane  3 years ago +4

      Thank you very much for your question. Your question made me learn something new as well.
      Here is my answer:
      We can initialize theta with any random values, but these values must be close to zero. In other words, the modulus of theta should be small enough that it is close to zero ( |theta| ~ 0 ).
      If |theta| is too big, then |y_pred| becomes too big, and (y_pred - Y)^2 (in the cost function) becomes almost equal to y_pred^2, i.e. (y_pred - Y)^2 ≈ y_pred^2. Thus the model will not learn.
      There is no exact way of knowing how small |theta| should be to train our model well.
      So when we initialize theta to zeros, it adapts itself in such a way that y_pred becomes similar to Y, by minimizing the difference (y_pred - Y)^2. That's why we initialize theta with zeros rather than with large values, which can cause the model not to train at all.
      I hope it helps!
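
      A minimal sketch of that point, assuming a gradient-descent loop like the one in the video (all names and numbers here are illustrative):

        import numpy as np

        m = 100
        X = np.hstack((np.ones((m, 1)), np.random.rand(m, 1)))   # intercept column + one feature
        Y = X @ np.array([[3.0], [5.0]])                          # hypothetical "true" relationship

        theta = np.zeros((2, 1))   # start near zero; small random values like 0.01 * np.random.randn(2, 1) also work
        # theta = 1000 * np.random.randn(2, 1)   # a huge |theta| gives (y_pred - Y)^2 ≈ y_pred^2 initially, the issue described above

        learning_rate = 0.1
        for _ in range(1000):
            y_pred = X @ theta
            d_theta = (1 / m) * (X.T @ (y_pred - Y))   # gradient w.r.t. both theta_0 and theta_1
            theta = theta - learning_rate * d_theta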

    • @samitaadhikari3182
      @samitaadhikari3182 3 years ago +1

      @@CodingLane thank you so much for the answer

  • @shubhamraj310
    @shubhamraj310 3 years ago +2

    bro , awsm it izzz😁😁

  • @v1hana350
    @v1hana350 2 years ago +1

    Can you make a video about Python Decorators in complete detail and their uses in real-life applications?

    • @CodingLane
      @CodingLane  2 years ago +1

      Hi Hana... I won't be able to make that video, because it is not directly related to machine learning.
      You can refer to this link - www.geeksforgeeks.org/decorators-in-python/ . I find GeeksforGeeks articles very intuitive and easy to understand. Hope it helps.

    • @v1hana350
      @v1hana350 2 years ago

      @@CodingLane I will check it out 👍

  • @sakibhasan7857
    @sakibhasan7857 3 years ago +3

    Please do some math videos

    • @CodingLane
      @CodingLane  3 years ago +1

      For sure Sakib! More mathematical explanations of other machine learning models are coming!!

    • @sakibhasan7857
      @sakibhasan7857 3 years ago +1

      @@CodingLane I am excited 😙😙

  • @shreyasmagajikondi7838
    @shreyasmagajikondi7838 1 year ago +1

    How do I create that dataset?

    • @CodingLane
      @CodingLane  1 year ago

      Hi, you can create a dummy dataset by just writing the data into a .txt or, preferably, a .csv file.
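
      For example, a minimal sketch that writes a two-column dummy dataset to a .csv file (the file name and numbers are made up):

        import numpy as np

        x = np.linspace(0, 10, 50)                   # hypothetical feature values
        y = 3.0 * x + 5.0 + np.random.randn(50)      # roughly linear targets with noise

        # Save as two comma-separated columns: feature, target.
        np.savetxt("data.csv", np.column_stack((x, y)), delimiter=",", fmt="%.4f")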

  • @varshafegade4688
    @varshafegade4688 2 years ago +1

    Why have you not taken the partial derivatives w.r.t. theta 0 and theta 1 and updated both?

    • @CodingLane
      @CodingLane  2 years ago

      Hi... I have taken the partial derivatives you are talking about. You can refer to my gradient descent video on linear regression. In that video, I derived an expression after computing the partial derivatives, and I am using that expression here.

    • @varshafegade4688
      @varshafegade4688 2 years ago

      @@CodingLane You have taken only one derivative, not the other.

    • @CodingLane
      @CodingLane  2 years ago

      @@varshafegade4688 I have taken the derivative w.r.t. both theta 0 and theta 1. The equation uses the theta matrix, which contains both theta 0 and theta 1, so the derivative is taken for both, but it is represented in one single equation using matrix multiplication (see the sketch below).
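
      A minimal sketch of what that single matrix equation expands to, assuming X already has the column of ones and theta = [theta_0, theta_1] (all values here are made up):

        import numpy as np

        m = 5
        X = np.hstack((np.ones((m, 1)), np.arange(1, m + 1).reshape(m, 1)))  # ones column + feature
        Y = np.array([[2.0], [4.1], [6.2], [7.9], [10.1]])                    # hypothetical targets
        theta = np.zeros((2, 1))

        y_pred = X @ theta
        d_theta = (1 / m) * (X.T @ (y_pred - Y))   # shape (2, 1): both partial derivatives at once

        # Row 0 is the derivative w.r.t. theta_0 (ones column), row 1 is w.r.t. theta_1 (feature column).
        d_theta_0 = (1 / m) * np.sum(y_pred - Y)
        d_theta_1 = (1 / m) * np.sum(X[:, 1:2] * (y_pred - Y))
        print(np.allclose(d_theta, [[d_theta_0], [d_theta_1]]))   # True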

    • @varshafegade4688
      @varshafegade4688 2 years ago

      @@CodingLane Thank you, now I understand.

  • @cspython1314
    @cspython1314 2 years ago +1

    Why doesn't d_theta have np.sum?

    • @CodingLane
      @CodingLane  2 years ago

      The summation is implicit in the matrix multiplication (see the sketch below).
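
      A minimal sketch of that equivalence (the arrays are made up):

        import numpy as np

        X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])   # (m, 2): ones column + feature
        err = np.array([[0.5], [-1.0], [2.0]])               # (m, 1): y_pred - Y
        m = X.shape[0]

        # The matrix product already sums over the m training examples ...
        d_theta = (1 / m) * np.dot(X.T, err)                  # shape (2, 1)

        # ... which is the same as summing x_i * err_i explicitly.
        explicit = (1 / m) * sum(X[i].reshape(2, 1) * err[i] for i in range(m))
        print(np.allclose(d_theta, explicit))                 # True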

  • @sumaiyahashim3535
    @sumaiyahashim3535 3 years ago +1

    How can we identify diminishing returns from this analysis?

    • @CodingLane
      @CodingLane  3 years ago +1

      Hi Sumaiya,
      Sorry, but I didn’t get your question. Can you please elaborate?

    • @sumaiyahashim3535
      @sumaiyahashim3535 3 years ago +1

      Sure, I'm trying to do a linear regression to forecast revenue based on what we spend. I'd like to understand at what point spend stops affecting the revenue. Would you know how to obtain that from this regression analysis?

    • @CodingLane
      @CodingLane  3 years ago

      @@sumaiyahashim3535 Sure... I will need to look at the problem statement.
      Can you mail me your contact details (WhatsApp number with country code or any social media handle), along with more information/resources/links on the problem you are trying to solve, at codeboosterjp@gmail.com?
      I will let you know if I can help you any further from there.

  • @varshafegade4688
    @varshafegade4688 2 years ago +1

    Please add a video on KNN.

    • @CodingLane
      @CodingLane  2 years ago

      Thank you for the suggestion. I will make a video on it.

  • @divyamsaxena295
    @divyamsaxena295 3 years ago +1

    Everything is good, but if you don't explain the code side by side, it's not as beneficial as it should be.

    • @CodingLane
      @CodingLane  3 years ago +1

      Thanks for the suggestion Divyam! From the next videos onward, I will explain the code side by side and only skip the redundant explanations.

  • @mujtaba5912
    @mujtaba5912 2 years ago +1

    Can you please provide the dataset.txt file?

    • @CodingLane
      @CodingLane  1 year ago

      Hi, it was provided in the PDF notes. But here is the link again: github.com/Jaimin09/Beginners-Machine-Learning-Explained-Simply/blob/master/Assets/data.txt

  • @shubhambala809
    @shubhambala809 3 years ago +1

    Why do you write all this code? Why not just import a regression model?

    • @CodingLane
      @CodingLane  3 years ago +3

      Thanks for your question. Writing the complete model helps us understand how machine learning models work under the hood. This way, we can later develop new, complex models that are not directly available to import.
      There are many complex deep learning models in computer vision and natural language processing, and we need to understand these basic models to understand those complicated ones.
      But yeah, when we just want to use these simple regression models, we have the option to directly import and use them (see the sketch below). I will also make videos on how to do so in the future.
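
      For instance, a minimal sketch using scikit-learn's ready-made model (assuming scikit-learn is installed; the numbers are made up):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        X = np.array([[1.0], [2.0], [3.0], [4.0]])   # hypothetical feature column
        y = np.array([2.0, 4.1, 6.2, 7.9])           # hypothetical targets

        model = LinearRegression()    # fits the intercept itself, so no column of ones is needed
        model.fit(X, y)
        print(model.intercept_, model.coef_, model.predict([[5.0]]))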

  • @shreyashkadam3334
    @shreyashkadam3334 3 years ago +2

    🙌🙌🙌

    • @CodingLane
      @CodingLane  3 years ago

      Thanks a lot Shreyash!! 🙌🏻

  • @cspython1314
    @cspython1314 2 years ago +1

    d_theta = 1/m * np.sum(np.dot(x.T, y_pred - y))