Simple Linear Regression | Mathematical Formulation | Coding from Scratch

  • Published: 20 Dec 2024

Comments • 180

  • @a1x45h
    @a1x45h 3 years ago +102

    Finally, I understand the model. Your explanation is clearer to me than even Andrew Ng's. I am not kidding. Thank you Nitish. :)

    • @tusharsub1000
      @tusharsub1000 3 years ago +7

      Yes, I feel the same...

    • @iamsomeone54
      @iamsomeone54 1 year ago +1

      real

    • @arghadeepmisra7865
      @arghadeepmisra7865 1 year ago +3

      @@iamsomeone54 I came here precisely because I was not feeling comfortable with Andrew Ng.

    • @SumanSadhukhan-md4dq
      @SumanSadhukhan-md4dq 7 months ago

      Can anyone explain the difference between the loss and cost functions?

    • @JyotirmoyGupta-x5r
      @JyotirmoyGupta-x5r 5 months ago +4

      @@SumanSadhukhan-md4dq The loss function is the error expression for a single data point. When we sum up the errors over all the data points, we get the cost function. So, basically, the cost function is the sum of the loss functions of all the data points. Hope this makes sense to you.
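That distinction can be made concrete in a few lines of Python (the data and function names here are illustrative, not from the video):

```python
# Loss: the error for ONE data point; cost: the per-point losses aggregated
# over the whole dataset.
def loss(y_true, y_pred):
    """Squared error for a single data point."""
    return (y_true - y_pred) ** 2

def cost(ys_true, ys_pred):
    """Cost function: sum of per-point losses over all data points."""
    return sum(loss(yt, yp) for yt, yp in zip(ys_true, ys_pred))

ys_true = [3.0, 5.0, 7.0]
ys_pred = [2.5, 5.5, 6.0]
print(loss(ys_true[0], ys_pred[0]))  # 0.25 — loss at one point
print(cost(ys_true, ys_pred))        # 0.25 + 0.25 + 1.0 = 1.5
```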

  • @yusufkhanvlogs9581
    @yusufkhanvlogs9581 3 years ago +86

    You deserve more respect, bro... I really admire you. No one gives interpretation and inferences like you.
    And please make videos on Neural Networks and NLP.

  • @boringclasses8765
    @boringclasses8765 3 years ago +29

    You are the best mentor/teacher I have ever come across for data science...

  • @jonmasson3311
    @jonmasson3311 2 years ago +14

    It takes patience, endurance and talent to be a teacher, but you make it look so easy day after day. I hope you know just how much we all appreciate you.

  • @ritwikgupta7540
    @ritwikgupta7540 1 year ago +14

    Still halfway through the video, but I feel it necessary to comment here. This is by far the best explanation of the loss function and the intuition leading up to it. I have enrolled in paid courses to learn this, and let me tell you, the quality of explanation in this 53-minute video is better than any other resource out there. Kudos to you... keep shining Nitish!!

  • @tusharsub1000
    @tusharsub1000 3 years ago +24

    Awesome, man... India is moving towards AI/ML, and on that journey you are a blessing. Anybody can learn AI/ML from your videos...

  • @Salman_Shaikh43622
    @Salman_Shaikh43622 9 months ago +2

    Dear Nitish, you are the best DS mentor. Thanks for creating this channel for those who seek a job in DS.

  • @shortflicks83
    @shortflicks83 1 month ago +1

    I tried to learn from many places but was not able to learn it well, even from Krish Naik. I love the way you teach, bro; it's awesome to learn from you.

  • @numanakhtersiddiqui6573
    @numanakhtersiddiqui6573 9 months ago +1

    Best tutor for ML on YouTube. No one teaches with this much depth.

  • @saptarshisanyal6738
    @saptarshisanyal6738 2 years ago +5

    As of today this video has 8k views, but I am sure in the next year it will cross 1 lakh views. This is the best explanation of the cost function derivation.

  • @HealthyEveryday-to5dt
    @HealthyEveryday-to5dt 1 year ago

    I have never seen a teacher who teaches maths so easily. I mean, hats off to you, sir. Every student deserves teaching like yours.

  • @anonymousxyz3856
    @anonymousxyz3856 1 year ago +3

    You are amazing!! Just the way we want lectures to be!! When you take care of uppercase and lowercase characters while writing terms like X_train and y_train, it reflects your in-depth understanding of the subject and the notation one should follow. You will reach great heights.

  • @SimranCE-
    @SimranCE- 2 years ago +2

    Finally I found the best teacher, who will help me in my ML journey.

  • @sarajaved5552
    @sarajaved5552 7 months ago +1

    Hats off to you, sir. Apart from immense knowledge, your patience and ability to simplify abstract topics are outstanding. You are a gifted teacher. Thank you is not enough.

  • @srikrithibhat1999
    @srikrithibhat1999 6 months ago +11

    15:54 Actually it should be yi^, because that line has been drawn by the model, so it will be a predicted value, right? And the point above should be yi. I think you said it the other way around. Could you please reply to this? And all your sessions are awesome. Thank you so much for such a great explanation.

    • @automationwithwasi
      @automationwithwasi 4 months ago +2

      Hi,
      I also observed the same thing; I was checking for any existing comments and found this.

    • @Akarsh2022
      @Akarsh2022 3 months ago

      I think he just accidentally marked y as y hat on the graph, but everything else is correct, because he referred to y hat as the one predicted by the model and y as the actual result. So in the formula it makes sense that we're subtracting the predicted value from the actual value.
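Whichever point ends up carrying the hat on the board, the squared loss term is unaffected, since (yi - yi^)² = (yi^ - yi)². A small sketch (the line and the point are made up for illustration):

```python
# Residual d_i = y_i - y_hat_i: actual value minus the model's prediction.
m, b = 2.0, 1.0          # assumed fitted line: y_hat = m*x + b
x_i, y_i = 3.0, 8.5      # an observed (actual) data point
y_hat_i = m * x_i + b    # prediction on the line: 7.0

d = y_i - y_hat_i        # residual: 1.5 (the point sits above the line)
# Squaring makes the labelling order irrelevant for the loss:
assert d ** 2 == (y_hat_i - y_i) ** 2
print(d, d ** 2)  # 1.5 2.25
```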

  • @mohammedshahid6577
    @mohammedshahid6577 1 month ago +1

    Best explanation that exists, hands down!

  • @saptarshisanyal6738
    @saptarshisanyal6738 2 years ago +1

    You are a boon for us. As a token of gratitude, I don't skip the ads in your videos.

  • @SAADKHAN-m5m
    @SAADKHAN-m5m 10 months ago

    Machine learning is fun right now, but time-consuming too. All thanks to you, sir.

  • @paruchuriradhakrishna7759
    @paruchuriradhakrishna7759 3 years ago +2

    Awesome video. I couldn't get this much detailed explanation even from great YouTube data science teachers.

  • @manandesai6404
    @manandesai6404 4 months ago

    I took one course which has over 1M students and still found it rubbish; they did not go deep at all. And here is this guy with just 220k subscribers (at the time of commenting) with god-level teaching skills, going very deep into the what and the how. You deserve more than what you are getting. Hopefully people will find this channel and you'll get more recognition for what you are doing.
    You are just amazing 🔥

  • @VarunMalik-mo6mr
    @VarunMalik-mo6mr 11 months ago

    You're helping so many lives, god bless you!!

  • @imsruthi
    @imsruthi 3 months ago

    SPEECHLESS!!!! Watching your videos is the best decision I've ever made.

  • @warrior3357
    @warrior3357 8 months ago

    Sir, you are really... super.
    Respect++ from NIT Raipur

  • @techtonic3652
    @techtonic3652 6 months ago

    You are awesome at explaining. I have never learned anything as simply as you explained it; you made complex things very simple for me. You are awesome.

  • @Aquib_Aftab_7903
    @Aquib_Aftab_7903 5 months ago +2

    Loved your explanation, sir. Really got a taste of calculus after so long.

  • @chiragjain5630
    @chiragjain5630 7 months ago +2

    Thank you, sir, for all this beautiful content. There is a small mistake at 16:00: the predicted and actual values are named incorrectly.

  • @BhanupratapApat
    @BhanupratapApat 8 months ago

    The most underrated data science channel on YouTube.

  • @bansarigaloria6149
    @bansarigaloria6149 7 days ago

    This playlist is a blessing. Thank you Sir🙏

  • @preetiughrejiya3045
    @preetiughrejiya3045 1 year ago

    The way you have cleared my concepts has given me more confidence to talk about ML models and their fine-tuning.
    Really very intuitive understanding.

  • @bakkaabhilash5393
    @bakkaabhilash5393 1 year ago

    Wow.....! Satisfied, with full clarity. Thank you sir.

  • @narendraparmar1631
    @narendraparmar1631 1 year ago

    One of the best explanations I've seen.
    Thanks a lot sir😀

  • @kalyanikadu8996
    @kalyanikadu8996 1 year ago

    Such an amazing teacher with such incredible content and explanation.
    Very grateful to you Nitish!
    A very big thank you to you.

  • @ramanmaini1931
    @ramanmaini1931 1 year ago

    Really great service to society. God bless you. May you get everything in life.

  • @infinity2creation551
    @infinity2creation551 2 years ago +2

    Sir, you have written (y hat) the other way around: as written, (y hat) is the actual value and (yi) is what the model predicted. You are saying it correctly but have probably written it the wrong way around. (yi) should be on top and (y^) below, because the lower point is the one the model predicted.
    So di = (yi - y^), with yi above and y^ below. 15:30
    Please reply

    • @niketasengar9191
      @niketasengar9191 8 months ago +1

      Thank god someone noticed, I was sooooooo confused T_T

    • @infinity2creation551
      @infinity2creation551 8 months ago

      @@niketasengar9191 Yeah, I still haven't found a solution! Did you?

  • @rajaneeshray7502
    @rajaneeshray7502 1 year ago

    Bhai, what a teaching style. Superb.

  • @arjunsingh-dn2fo
    @arjunsingh-dn2fo 7 months ago

    Sir, you are such a genius. I have never seen anyone like you...🙇

  • @sumitwarkari4455
    @sumitwarkari4455 9 months ago

    Very good explanation of the linear regression algorithm. You covered the math behind it. I never thought I could learn how the algorithm works in the backend. Thanks for the explanation.

  • @sudipgautam7471
    @sudipgautam7471 6 months ago

    @ 16:00 there is a small mistake:
    if the predicted value is represented by the hat symbol and the actual value without it,
    then y hat must be below and y must be above on the y-axis.

  • @Rushidanidhariya
    @Rushidanidhariya 3 months ago

    Great explanation. Concepts clear. The mathematical intuition was too good.

  • @parthraghuwanshi2929
    @parthraghuwanshi2929 1 year ago

    Selfless service. Once I get a placement I will surely do something to grow this channel.

  • @rajasekharreddy3629
    @rajasekharreddy3629 2 years ago

    No words to say after watching your videos, just love you brother...

  • @Amanansari003
    @Amanansari003 1 year ago

    16:00 In my opinion the predicted value should lie on the line, and the actual value is the one we plotted. So the dot drawn on the line at (x, y) should be y^, and the actual point above the line should be yi, because that is the actual data, and what was predicted lies on the line. Only then does yi - y^ (actual value - predicted value) give the equation. I think you mistakenly labelled the actual point as y^ at 15:55.

  • @JACKSPARROW-ch7jl
    @JACKSPARROW-ch7jl 1 year ago

    You are the best ever, Nitish bhai. Hats off to you.

  • @cs_soldier5292
    @cs_soldier5292 1 year ago +1

    No one can explain it more simply than this.

  • @smiteshtamboli1051
    @smiteshtamboli1051 2 years ago

    Awesome videos. You are the best teacher for data science.

  • @shivashu1360
    @shivashu1360 4 months ago

    Believe me, your videos are great; I can say better than Andrew Ng's. Thank you for all these videos.

  • @jayanthAILab
    @jayanthAILab 1 month ago

    Loved it!!! No one can explain like you sir! ❤❤❤❤

  • @SumanSadhukhan-md4dq
    @SumanSadhukhan-md4dq 7 months ago

    Wow...
    I have full clarity now😊

  • @gamingmonster8148
    @gamingmonster8148 1 year ago

    You are great, bro. What a way of teaching, incredible.
    ..... No doubt knowledge has no boundaries. Lots of love from Pakistan❣

  • @sayanNITdgp2025
    @sayanNITdgp2025 2 years ago

    You have created a gold mine...

  • @abhishekganapure6456
    @abhishekganapure6456 9 months ago

    I rarely subscribe to any channel on here, so I subscribed to yours. Keep up the good work to the advanced level and beyond.

  • @chitrakshi2587
    @chitrakshi2587 5 months ago

    Next-level explanation....... thanks for sharing🫡🙂‍↕️

  • @tanmayshinde7853
    @tanmayshinde7853 3 years ago +2

    A hidden gem on YouTube.

  • @priyanujmisra4245
    @priyanujmisra4245 1 year ago

    You deserve a lot of respect. Thank you for the effort!!!!!!

  • @zerotohero1002
    @zerotohero1002 2 years ago +1

    You deserve millions of views.

  • @aayushrimall
    @aayushrimall 3 months ago +1

    16:00 I feel like he said it the opposite way. Am I wrong, or did he get confused?

  • @JitendraVyas-pt3ct
    @JitendraVyas-pt3ct 6 months ago

    What a great explanation 👏🔥
    Hats off 🫡🙇‍♂️

  • @harishvishwakarma9072
    @harishvishwakarma9072 10 months ago

    It was a very lovely video, sir. Thank you.

  • @gauravmalik3911
    @gauravmalik3911 2 years ago +1

    At 16:20, isn't yi hat the prediction that lies on the line, not the actual y?

    • @akzork
      @akzork 2 years ago

      true

  • @AyushPatel
    @AyushPatel 3 years ago +3

    Thanks, sir, for the video🔥🔥🔥🔥🔥🔥🔥🔥🔥 Hats off to you.

  • @rish725
    @rish725 3 years ago +1

    That was exceptionally good... Thank you for this amazing explainer.

  • @jaikumarpritmani6596
    @jaikumarpritmani6596 1 year ago

    Thank you so much, sir, for adding so much value...

  • @pankajitm
    @pankajitm 2 years ago

    Brother, you nailed the explanation. Best tutorial.

  • @sumeersaifi6354
    @sumeersaifi6354 2 years ago

    At 19:11 you have taken y - y^, and y^ is mx + b, so why have you written (y - mx - b)? Why is there a minus sign between mx and b?

    • @anshulsharma7080
      @anshulsharma7080 1 year ago

      The minus sign, when distributed over (mx + b), turns it into -mx - b, so y - (mx + b) = y - mx - b.
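A quick numeric spot-check of the algebra being asked about: substituting y^ = mx + b into y - y^ and distributing the minus sign (the sample values are arbitrary):

```python
# y - (m*x + b) and y - m*x - b are the same expression:
# the minus sign distributes over both terms inside the parentheses.
def with_parens(y, m, x, b):
    return y - (m * x + b)

def distributed(y, m, x, b):
    return y - m * x - b

for y, m, x, b in [(5.0, 2.0, 1.5, 0.5), (-3.0, 0.7, 4.0, -1.2)]:
    assert abs(with_parens(y, m, x, b) - distributed(y, m, x, b)) < 1e-12
print("y - (m*x + b) == y - m*x - b")
```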

  • @infinity2creation551
    @infinity2creation551 2 years ago +1

    I hope you always keep this premium content free ❤️☺️☺️☺️❤️......❤️

  • @sahaji-bi7qi
    @sahaji-bi7qi 6 months ago

    Great learning, bro.
    It was a lot of fun.
    Love you, bhai.

  • @sober_22
    @sober_22 1 year ago

    The best explanation ever!

  • @muhammadmustafa3158
    @muhammadmustafa3158 3 years ago +1

    Amazing, sir!!!!
    Love your videos.

  • @RahulKumar-ko8pu
    @RahulKumar-ko8pu 3 years ago

    The way you teach is awesome.

  • @descendantsoftheheroes_660
    @descendantsoftheheroes_660 2 years ago

    Thanks sir, God bless you... keep it up.

  • @FindMultiBagger
    @FindMultiBagger 2 years ago

    Where is your Patreon? You deserve lots of love and respect. Thank you for everything :) GBU

  • @milindtakate5987
    @milindtakate5987 3 years ago +5

    Clear and concise explanation!!
    Would it be possible to create a lecture series on in-depth Python? I have checked the one uploaded by you, but it has some missing lectures, and in some lectures the board is not visible. Thanks in advance!

  • @sameerpandey5561
    @sameerpandey5561 3 years ago

    Clean and crisp explanation.

  • @balrajprajesh6473
    @balrajprajesh6473 2 years ago

    Best teacher ever!

  • @arvindsinghrawat538
    @arvindsinghrawat538 2 years ago

    Sir ji, you are great!! :)

  • @namansethi1767
    @namansethi1767 3 years ago

    Thanks, sir, for this explanation of linear regression.

  • @MrGaurav331
    @MrGaurav331 3 years ago +1

    Sir, your videos are amazing.

  • @pkdudejunction2542
    @pkdudejunction2542 2 years ago

    Watching it like a movie... thank you sir... 🙏🏻

  • @samuelnikhade5612
    @samuelnikhade5612 1 year ago

    Awesome, sir ji, you are great.

  • @shivoham5939
    @shivoham5939 2 years ago

    31:02 If the first derivative is zero, it can be either a maximum or a minimum. So when we solve our equation this way, how do we confirm whether it is a maximum or a minimum?
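For the squared-error cost the second derivative settles this. With respect to the intercept, E(b) = Σ(yi - m·xi - b)² has d²E/db² = 2n > 0, so the stationary point is always a minimum. A numeric sketch with made-up data (the slope is held fixed for simplicity):

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]
m = 2.0  # hold the slope fixed and vary only the intercept b

def E(b):
    """Squared-error cost as a function of the intercept alone."""
    return sum((y - m * x - b) ** 2 for x, y in zip(xs, ys))

# Setting dE/db = -2 * sum(y - m*x - b) to zero gives the stationary point:
n = len(xs)
b_star = sum(y - m * x for x, y in zip(xs, ys)) / n

# d2E/db2 = 2n, which is positive everywhere, so b_star is a minimum:
assert 2 * n > 0
assert E(b_star) <= E(b_star + 0.1) and E(b_star) <= E(b_star - 0.1)
print(round(b_star, 6))  # 0.075
```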

  • @bajisk6769
    @bajisk6769 2 years ago +1

    Your course is really great. Can you cover time series as well?

  • @coding_world_live9
    @coding_world_live9 2 years ago

    Man, you won my heart❣️🔥

  • @akshaykanavaje474
    @akshaykanavaje474 3 years ago +1

    That's what we call learning algorithms from scratch. Sir, can you tell us which book you used to learn this? Just love your content.

  • @dilipgyawali1776
    @dilipgyawali1776 2 years ago

    You are the best......... really amazing.

  • @arjunshankar1673
    @arjunshankar1673 2 years ago

    After getting the value of m at 39:27, I think (xi - xbar) can be cancelled?
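The (xi - xbar) factors sit inside two different sums — Σ(xi - xbar)(yi - ybar) in the numerator and Σ(xi - xbar)² in the denominator — so they cannot be cancelled term by term. In fact Σ(xi - xbar) = 0, so "cancelling" would leave 0/0. A from-scratch sketch with made-up data, cross-checked against numpy's least-squares fit:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# OLS closed form: each sum must be computed whole; nothing cancels.
m = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b = y_bar - m * x_bar

# Cross-check against numpy's degree-1 polynomial fit.
m_np, b_np = np.polyfit(x, y, 1)
assert abs(m - m_np) < 1e-8 and abs(b - b_np) < 1e-8

# "Cancelling" (x_i - x_bar) would give sum(y - y_bar)/sum(x - x_bar) = 0/0,
# because deviations from the mean always sum to zero:
assert abs(np.sum(x - x_bar)) < 1e-9 and abs(np.sum(y - y_bar)) < 1e-9
```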

  • @deeprajmazumder6261
    @deeprajmazumder6261 2 years ago +1

    Hi sir, you said that in OLS we don't use an approximation process like differentiation etc., but while explaining OLS you went on to explain gradient descent and then proved the OLS formula. I have a doubt about that, sir.
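On the OLS-vs-gradient-descent doubt: both start from the same derivatives of the cost. OLS sets them to zero and solves exactly in one step (no iteration), while gradient descent walks down the same gradients in small steps; with enough steps the two agree. An illustrative sketch (the data and learning rate here are assumed, not from the video):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.9, 5.1, 7.0])
n = len(x)

# OLS: solve dE/dm = 0 and dE/db = 0 exactly (closed form, one shot).
m_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b_ols = y.mean() - m_ols * x.mean()

# Gradient descent: step along the same derivatives iteratively.
m, b, lr = 0.0, 0.0, 0.05
for _ in range(20000):
    y_hat = m * x + b
    m -= lr * (-2 / n) * np.sum((y - y_hat) * x)
    b -= lr * (-2 / n) * np.sum(y - y_hat)

# Both routes land on the same line.
assert abs(m - m_ols) < 1e-6 and abs(b - b_ols) < 1e-6
```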

  • @balrajprajesh6473
    @balrajprajesh6473 2 years ago

    Wish I could like the video twice!

  • @ParthivShah
    @ParthivShah 9 months ago +1

    Thank you, sir.

  • @pubbrbebegg969
    @pubbrbebegg969 1 year ago

    @12:35 Can anyone explain what "penalize" means here?

  • @1981Praveer
    @1981Praveer 2 years ago

    Great video, clear explanation.

  • @pawanmishra6571
    @pawanmishra6571 1 year ago

    All doubts cleared, thanks bro.

  • @1981Praveer
    @1981Praveer 2 years ago

    @16:22 - di should be y^ - yi, right?

  • @mr.deep.
    @mr.deep. 2 years ago

    Best video for LR maths.

  • @aryanraykar2124
    @aryanraykar2124 3 years ago +2

    I have one question: in interviews, do they ask how the formula is derived?

    • @campusx-official
      @campusx-official  3 years ago +4

      No, but understanding it gives you conceptual clarity about the topic.

  • @Kishore6413
    @Kishore6413 1 year ago

    In the error function, along with m and b, y also depends on x, right?

  • @ameerrace2284
    @ameerrace2284 3 years ago +1

    Will you be working on the SGD regressor in upcoming videos?

  • @stardusthere
    @stardusthere 1 month ago

    Lovely explanation.

  • @rk_dixit
    @rk_dixit 4 months ago

    I completely understand this concept, but I have a doubt: should I make notes for the derivation portion? Will I need to revise this formulation part in the future? Please reply if anyone knows.

  • @meethirpara4707
    @meethirpara4707 2 years ago

    Your explanation is mind-blowing, but my question is:
    when we square all the data points, the line will be formed according to the squared data, but our actual data is not squared. How will it adjust to our real data?

  • @aakiffpanjwani1089
    @aakiffpanjwani1089 9 months ago

    Sir, you are a legend.