Linear Regression

  • Published: 28 Sep 2024

Comments • 102

  • @Creative_arts_center
    @Creative_arts_center 2 years ago +12

    The best professor in machine learning. I like her teaching. I have followed her since 2010 and collected her lectures on CDs. I like her very much.

  • @Nikhil-lq1kb
    @Nikhil-lq1kb 2 years ago +31

    Error at 15:32 (see the sketch after this thread):
    Multiple linear regression: Y = B0 + B1*x1 + B2*x2 + ... + Bp*xp + E
    Polynomial regression: Y = B0 + B1*x^1 + B2*x^2 + ... + Bp*x^p + E

    • @kasyapvelampalli2811
      @kasyapvelampalli2811 1 year ago +3

      Right, I was confused here too! Linear regression must always have degree 1, as opposed to what was taught in the lecture; the equation cannot have a polynomial degree of p.

    • @ram-pc4wk
      @ram-pc4wk 1 year ago

      No, linearity refers to the coefficients on the x terms in this case, not to the x terms themselves.

    • @narmadaa2106
      @narmadaa2106 8 months ago

      Yes, you are right.
      It's polynomial regression.

    • @narmadaa2106
      @narmadaa2106 8 months ago +1

      If the degree of x is more than 1, it represents non-linearity.
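
    To make the distinction in this thread concrete, here is a minimal sketch (not from the lecture; the synthetic data and variable names are this editor's assumptions) fitting both model forms by ordinary least squares:

        # Multiple linear regression uses several distinct features x1..xp;
        # polynomial regression uses powers of a single x. Both are linear
        # in the coefficients, so least squares solves either one.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100

        # Multiple linear regression: y = b0 + b1*x1 + b2*x2 + noise
        X = rng.normal(size=(n, 2))                      # two distinct features
        y_multi = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * rng.normal(size=n)
        A_multi = np.column_stack([np.ones(n), X])       # design matrix [1, x1, x2]
        b_multi, *_ = np.linalg.lstsq(A_multi, y_multi, rcond=None)
        print("multiple linear:", b_multi)               # approx [1, 2, -3]

        # Polynomial regression: y = b0 + b1*x + b2*x^2 + noise (one feature)
        x = rng.uniform(-1, 1, size=n)
        y_poly = 1.0 + 2.0 * x - 3.0 * x**2 + 0.1 * rng.normal(size=n)
        A_poly = np.column_stack([np.ones(n), x, x**2])  # design matrix [1, x, x^2]
        b_poly, *_ = np.linalg.lstsq(A_poly, y_poly, rcond=None)
        print("polynomial:", b_poly)                     # approx [1, 2, -3]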

  • @shudeshna66
    @shudeshna66 7 years ago +4

    Having 1/2 as a multiplicative factor does not change the solution as what minimizes z also minimizes 1/2 z. 1/2 is usually added so that the derivative formula has a constant coefficient of 1.
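
    As a worked version of this point (a sketch in the lecture's J(θ) notation, assuming the usual squared-error objective):

        J(\theta) = \frac{1}{2} \sum_{i=1}^{n} \big( h_\theta(x^{(i)}) - y^{(i)} \big)^2,
        \qquad
        \frac{\partial J}{\partial \theta_j} = \sum_{i=1}^{n} \big( h_\theta(x^{(i)}) - y^{(i)} \big) \, x_j^{(i)}

    The 2 produced by differentiating the square cancels the 1/2, leaving a coefficient of exactly 1.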

  • @RohitKumar-jh1km
    @RohitKumar-jh1km 7 years ago +64

    You explain in detail the things that don't require explanation, and the things that do require explanation you skip as if they needed none.

    • @dipanjanbiswas4924
      @dipanjanbiswas4924 6 years ago +3

      They copied from Andrew Ng's lectures.

    • @akhandbha
      @akhandbha 6 years ago

      How do you know?

    • @shivaniamehta9851
      @shivaniamehta9851 4 years ago

      You can see this for clarification:
      medium.com/@nicolabernini_63880/ml-what-is-the-difference-between-gradient-descent-and-stochastic-gradient-descent-be79ab450ef0

    • @subashchandrapakhrin3537
      @subashchandrapakhrin3537 4 years ago +1

      @@dipanjanbiswas4924 Is Andrew Ng the father of ML, or the people who write the papers?

    • @dipanjanbiswas4924
      @dipanjanbiswas4924 4 years ago

      @@subashchandrapakhrin3537 You can say that.

  • @lampfall7915
    @lampfall7915 2 years ago +2

    She is a wonderful teacher; respect to you.

  • @roseb2105
    @roseb2105 5 years ago +7

    Maybe I am missing something here, but are these lessons meant to be a review, or just an overview of what will be taught? Because it's hard to understand this on a first pass without more examples.

  • @siddharthGupta632
    @siddharthGupta632 6 years ago +13

    Why have you written the polynomial regression equation in place of multiple linear regression? This seems like a bad lecture. Not expected from IIT.

    • @sunny10528
      @sunny10528 4 years ago

      Yes, I too got stuck at this point in the lecture and started doubting my own knowledge.

  • @avinashdwivedi2015
    @avinashdwivedi2015 2 years ago +6

    I was good at linear regression, and after watching this lecture I forgot everything about regression. lol, ironic.

  • @itsdurgeshray
    @itsdurgeshray 13 days ago

    CORRECTION at 16:24:
    The equation should not have rising powers of x; that form is polynomial regression.

  • @tolifeandlearning3919
    @tolifeandlearning3919 2 years ago +2

    Great lecture

  • @jamesmathew8291
    @jamesmathew8291 1 year ago

    Excellently covered topic. Which textbook do you reference, ma'am?

  • @wreckedinsect5069
    @wreckedinsect5069 3 years ago

    My professor lectured for three fucking hours and I understood nothing except that linear means straight. Here, in half an hour, I am genuinely ready for the exam. Thanks!

  • @chandureddim4327
    @chandureddim4327 1 month ago

    Can anybody help me understand why we need to assume that the errors are independent of each other, have zero mean and some fixed standard deviation, and are normally distributed? Please.
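
    A hedged note on this question (standard regression theory, summarized here rather than taken from the lecture): zero mean and independence make the least-squares estimate unbiased and well behaved, constant variance means every observation is equally informative, and normality makes least squares the maximum-likelihood estimate. Assuming i.i.d. errors ε ~ N(0, σ²), the negative log-likelihood is

        -\log L(\theta) = \frac{n}{2} \log(2\pi\sigma^2) + \frac{1}{2\sigma^2} \sum_{i=1}^{n} \big( y^{(i)} - \theta^\top x^{(i)} \big)^2

    so maximizing the likelihood over θ is exactly minimizing the sum of squared errors.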

  • @santoshkumargoutam4791
    @santoshkumargoutam4791 3 years ago

    Ma'am: excellent concept clarification.

  • @jivanmainali1742
    @jivanmainali1742 3 years ago +2

    Why is the objective function 1/2 of the sum of squared errors? If we have n data points it should be an average, so I'd guess 1/n of the sum of squared errors. (See the sketch after this thread.)

    • @AkashCherukuri
      @AkashCherukuri 3 years ago

      It's to make the mathematics easier, since you differentiate the function later. (The 1/2 cancels with the 2 you get from differentiation, making the equations a lot cleaner.)
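
    A minimal sketch (synthetic data; the setup is this editor's assumption, not the lecture's) showing that scaling the squared-error objective by any positive constant, whether 1/2, 1/n, or 1, leaves the minimizing parameter unchanged; only the objective's value changes:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, size=50)
        y = 3.0 * x + rng.normal(scale=0.1, size=50)

        thetas = np.linspace(0, 6, 601)                  # candidate slopes
        # Sum of squared errors for each candidate slope.
        sse = ((y[None, :] - thetas[:, None] * x[None, :]) ** 2).sum(axis=1)

        for scale in (0.5, 1.0 / len(x), 1.0):
            print(scale, thetas[np.argmin(scale * sse)]) # same argmin every time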

  • @roseb2105
    @roseb2105 5 years ago +3

    I'm very confused and lost with these lectures.

  • @SHIVAMGUPTA-wb5mw
    @SHIVAMGUPTA-wb5mw 4 years ago

    We started with the question of how to find the parameters, but never actually discussed it...

    • @ashwinprasad5180
      @ashwinprasad5180 3 years ago

      That is what the gradient descent algorithm she wrote at the end does: it finds the parameters that reduce the loss function. (A minimal sketch follows this thread.)
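
    For anyone who wants the missing step spelled out, here is a minimal batch gradient descent sketch for one-feature linear regression (plain NumPy; the data, learning rate, and iteration count are this editor's assumptions for illustration):

        # Fit y ~ theta0 + theta1*x by minimizing
        # J(theta) = 1/2 * sum((theta0 + theta1*x - y)^2).
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 1, size=100)
        y = 4.0 + 2.5 * x + rng.normal(scale=0.1, size=100)

        theta = np.zeros(2)      # [theta0, theta1], start at zero
        alpha = 0.01             # learning rate

        for _ in range(5000):
            err = theta[0] + theta[1] * x - y               # prediction errors
            grad = np.array([err.sum(), (err * x).sum()])   # dJ/dtheta0, dJ/dtheta1
            theta -= alpha * grad                           # step against the gradient

        print(theta)             # approx [4.0, 2.5]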

  • @SwaroopSinghDeval
    @SwaroopSinghDeval 7 years ago +7

    The equation of multi-variable linear regression is wrong.

    • @solarstryker
      @solarstryker 7 years ago +1

      Swaroop Singh Deval Yeah, I think she misinterpreted the subscripts as powers.

    • @premshankar5967
      @premshankar5967 6 years ago

      Yes, exactly.

    • @vaibhavagrawal1856
      @vaibhavagrawal1856 5 years ago +2

      You are confusing multivariate linear regression with polynomial regression. The equation given here is not wrong; it is a special case of the multivariate form in which every feature is a power of the first variable. This is called polynomial regression.

  • @rohitranjan5218
    @rohitranjan5218 3 years ago

    How is she explaining a non-linear equation as a linear one? The equation should be linear, but she has ended up with a non-linear one: the subscript notation has been written as a power. Minutes 16-19 of the video.

  • @saurabhchoudhary4572
    @saurabhchoudhary4572 6 years ago +4

    Ma'am, please review your lectures before publishing: a poor explanation and an incorrect equation for multiple linear regression.

    • @vaibhavagrawal1856
      @vaibhavagrawal1856 5 years ago

      You are confusing multivariate linear regression with polynomial regression. The equation given here is not wrong; it is a special case of the multivariate form in which every feature is a power of the first variable. This is called polynomial regression.

  • @JMD_coding
    @JMD_coding 3 years ago

    Will we get a certificate after completing all the videos?

  • @manyamittal6767
    @manyamittal6767 6 years ago

    Maybe split this lecture into two. It got really rushed at the end.

  • @shashu1999
    @shashu1999 6 years ago +1

    Copied the J(theta) formula from Andrew Ng's module and didn't update the variables.

  • @tararawat2955
    @tararawat2955 7 years ago +1

    Things are not being clearly explained; it's really unclear and confusing... at the very least, the example should be worked through completely to make the concept understandable.

  • @JMD_coding
    @JMD_coding 3 years ago

    Ma'am, can I get a certificate?

  • @wolfisraging
    @wolfisraging 6 years ago +6

    Worst explanation of gradient descent in the world

  • @TheUnblameable22
    @TheUnblameable22 4 years ago

    Surprised to see that, even writing from a chit, the basic equation itself is wrong. The multiple linear regression equation is wrongly written, and the assumptions are just copied, not explained.

  • @SubhamCreative.613kviews
    @SubhamCreative.613kviews 6 years ago

    NPTEL teaches us very badly...

  • @navedahmad5851
    @navedahmad5851 7 years ago

    A proper explanation should be provided; the teacher is just rushing through without explaining the concepts. This is not good.

  • @sandeepkushwaha9790
    @sandeepkushwaha9790 6 years ago

    Now I'm more confused; the explanation is not good. Can anyone share good videos on linear regression with gradient descent?

    • @a.yashwanth
      @a.yashwanth 6 years ago

      Coursera's Machine Learning by Stanford is good.

  • @koppuprasanthkumar9211
    @koppuprasanthkumar9211 4 years ago

    Is this what NPTEL is? A waste of time. Whatever concepts need extra time, you just skip. I won't watch NPTEL from now on. Why you run these NPTEL certifications, I don't know. Overall, the title is not justified at all: we still don't know how to learn the straight line using linear regression. Don't watch and waste your time; find other resources.

  • @SandeepSharmaRhythmNGroove
    @SandeepSharmaRhythmNGroove 6 years ago

    Not a good explanation at all.

  • @mayanksj
    @mayanksj 6 years ago +13

    Machine Learning by Prof. Sudeshna Sarkar
    Basics
    1. Foundations of Machine Learning (ruclips.net/video/BRMS3T11Cdw/видео.html)
    2. Different Types of Learning (ruclips.net/video/EWmCkVfPnJ8/видео.html)
    3. Hypothesis Space and Inductive Bias (ruclips.net/video/dYMCwxgl3vk/видео.html)
    4. Evaluation and Cross-Validation (ruclips.net/video/nYCAH8b5AQ0/видео.html)
    5. Linear Regression (ruclips.net/video/8PJ24SrQqy8/видео.html)
    6. Introduction to Decision Trees (ruclips.net/video/FuJVLsZYkuE/видео.html)
    7. Learning Decision Trees (ruclips.net/video/7SSAA1CE8Ng/видео.html)
    8. Overfitting (ruclips.net/video/y6SpA2Wuyt8/видео.html)
    9. Python Exercise on Decision Tree and Linear Regression (ruclips.net/video/lIBPIhB02_8/видео.html)
    Recommendations and Similarity
    10. k-Nearest Neighbours (ruclips.net/video/PNglugooJUQ/видео.html)
    11. Feature Selection (ruclips.net/video/KTzXVnRlnw4/видео.html )
    12. Feature Extraction (ruclips.net/video/FwbXHY8KCUw/видео.html)
    13. Collaborative Filtering (ruclips.net/video/RVJV8VGa1ZY/видео.html)
    14. Python Exercise on kNN and PCA (ruclips.net/video/40B8D9OWUf0/видео.html)
    Bayes
    16. Bayesian Learning (ruclips.net/video/E3l26bTdtxI/видео.html)
    17. Naive Bayes (ruclips.net/video/5WCkrDI7VCs/видео.html)
    18. Bayesian Network (ruclips.net/video/480a_2jRdK0/видео.html)
    19. Python Exercise on Naive Bayes (ruclips.net/video/XkU09vE56Sg/видео.html)
    Logistic Regression and SVM
    20. Logistic Regression (ruclips.net/video/CE03E80wbRE/видео.html)
    21. Introduction to Support Vector Machine (ruclips.net/video/gidJbK1gXmA/видео.html)
    22. The Dual Formulation (ruclips.net/video/YOsrYl1JRrc/видео.html)
    23. SVM Maximum Margin with Noise (ruclips.net/video/WLhvjpoCPiY/видео.html)
    24. Nonlinear SVM and Kernel Function (ruclips.net/video/GcCG0PPV6cg/видео.html)
    25. SVM Solution to the Dual Problem (ruclips.net/video/Z0CtYBPR5sA/видео.html)
    26. Python Exercise on SVM (ruclips.net/video/w781X47Esj8/видео.html)
    Neural Networks
    27. Introduction to Neural Networks (ruclips.net/video/zGQjh_JQZ7A/видео.html)
    28. Multilayer Neural Network (ruclips.net/video/hxpGzAb-pyc/видео.html)
    29. Neural Network and Backpropagation Algorithm (ruclips.net/video/T6WLIbOnkvQ/видео.html)
    30. Deep Neural Network (ruclips.net/video/pLPr4nJad4A/видео.html)
    31. Python Exercise on Neural Networks (ruclips.net/video/kTbY20xlrbA/видео.html)
    Computational Learning Theory
    32. Introduction to Computational Learning Theory (ruclips.net/video/8hJ9V9-f2J8/видео.html)
    33. Sample Complexity: Finite Hypothesis Space (ruclips.net/video/nm4dYYP-SJs/видео.html)
    34. VC Dimension (ruclips.net/video/PVhhLKodQ7c/видео.html)
    35. Introduction to Ensembles (ruclips.net/video/nelJ3svz0_o/видео.html)
    36. Bagging and Boosting (ruclips.net/video/MRD67WgWonA/видео.html)
    Clustering
    37. Introduction to Clustering (ruclips.net/video/CwjLMV52tzI/видео.html)
    38. K-means Clustering (ruclips.net/video/qg_M37WGKG8/видео.html)
    39. Agglomerative Clustering (ruclips.net/video/NCsHRMkDRE4/видео.html)
    40. Python Exercise on K-means Clustering (ruclips.net/video/qs7vES46Rq8/видео.html)
    Tutorial I (ruclips.net/video/uFydF-g-AJs/видео.html)
    Tutorial II (ruclips.net/video/M6HdKRu6Mrc/видео.html )
    Tutorial III (ruclips.net/video/Ui3h7xoE-AQ/видео.html)
    Tutorial IV (ruclips.net/video/3m7UJKxU-T8/видео.html)
    Tutorial VI (ruclips.net/video/b3Vm4zpGcJ4/видео.html)
    Solution to Assignment 1 (ruclips.net/video/qqlAeim0rKY/видео.html)

  • @sunderrajan6172
    @sunderrajan6172 7 years ago +8

    Kind of a confusing lecture, switching from the single-variable regression example to multi-variable. All the explanation is rushed; I was hoping the examples would be well explained. The 1/2 in the equation, is it for half theta? I hear it is not important. Compared with Stanford or MIT online lectures, a lot of improvement is needed.

  • @sauravprasad1996
    @sauravprasad1996 7 years ago +4

    Skipped directly to the LMS algorithm without clearly explaining how to learn the parameters. Poor explanations!

  • @getfitwithakhil
    @getfitwithakhil 6 years ago +2

    Ma'am, you rushed towards the end of the lecture. The theory is more important, as we have computers to do most of the calculations.

  • @HA-bj5ck
    @HA-bj5ck 1 year ago +3

    Very well explained... This is GOLD ❤

  • @rishabhpansari9963
    @rishabhpansari9963 5 years ago +4

    I think LMS is least mean square.

    • @mrm371
      @mrm371 2 months ago

      Scope

  • @ritikraushan7392
    @ritikraushan7392 2 years ago +1

    I didn't understand anything.

  • @onataghoghoatikpe5989
    @onataghoghoatikpe5989 4 years ago +3

    I am enjoying your courses. Thanks!

  • @viral_baba
    @viral_baba 6 years ago +2

    Hello Prof,
    The equations written on the blackboard are for polynomial regression, but the slides contain the equations of multivariate regression. Is this a mistake? If it is, please mention it in an annotation. If anyone knows the answer to my query, please respond.
    Thanks

  • @JMD_coding
    @JMD_coding 3 years ago +1

    Sir/ma'am, after completing this course can I get a certificate?
    Please reply.

  • @anushamathur2019
    @anushamathur2019 3 years ago +5

    Polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth-degree polynomial in x:
    y = b0 + b1x + b2x^2 + ...
    Yet you are calling those multiple independent variables, which corresponds to multiple linear regression, not polynomial regression.

    • @ashwinprasad5180
      @ashwinprasad5180 3 years ago

      This is indeed a mistake, I presume. It should be y = b0 + b1x1 + b2x2 + ... + bpxp instead of raising to powers.

    • @shreyxsh5054
      @shreyxsh5054 3 years ago +2

      @@ashwinprasad5180 Yes! It took me a day. I thought IIT KGP teachers must be right... then I found Andrew Ng's lecture and now it's all sorted. Thanks!

  • @rajasekharareddy6246
    @rajasekharareddy6246 6 years ago +5

    To understand this video, I think people must know linear algebra. Only then can they understand this concept.

    • @AkashCherukuri
      @AkashCherukuri 3 years ago

      The name is *Linear* Regression, my man.

  • @saptarshisanyal4869
    @saptarshisanyal4869 3 years ago

    Sorry to say this, ma'am, but this is a wrong explanation of the gradient descent algorithm and the cost function. The tutorial was good for the first 25 minutes; after that it was total confusion.

  • @debarpitosinha1162
    @debarpitosinha1162 1 year ago

    Error in the multiple linear regression formula; it should be y = b0 + b1x1 + b2x2 + ... + bpxp

  • @Tchknow879
    @Tchknow879 1 year ago

    Ma'am, you teach awesomely, but one suggestion: could you improve your blackboard and your camera so we can see clearly?

  • @mahipalmahato7648
    @mahipalmahato7648 1 year ago

    7:25

  • @ishankulkarni3542
    @ishankulkarni3542 5 years ago

    I can't understand it... madam is just re-explaining what is already in the PPT.

  • @sujitfulse8846
    @sujitfulse8846 7 years ago

    Please explain the concepts completely; do not leave them halfway.

  • @Man0fSteell
    @Man0fSteell 7 years ago +11

    There are RUclips channels that provide better lectures and simpler explanations than these IIT professors.
    Too bad our Indian quality of education/teaching (or whatever you want to call it) needs to improve a lot!! :(

  • @hiraksenroy691
    @hiraksenroy691 6 years ago

    Easy to interpret for those with a statistics background.

  • @abhyunnati8589
    @abhyunnati8589 1 year ago

    Superb

  • @ankursaxena4942
    @ankursaxena4942 4 years ago

    Nice video on how to use #Linear_Regression in #Machine_Learning.

    • @harisankar6104
      @harisankar6104 4 years ago

      Bro, please help me: at 16:09 she gives the equation for Y with power terms for p independent variables, like in polynomial regression, but in the last section she says that for multiple variables the equation is a linear function, as in multivariable regression. Are these p independent variables not multiple variables?

  • @sachinsd4663
    @sachinsd4663 6 years ago +4

    28:12 wtf was that? It sounded alien-like and hilarious 😂

    • @mitrabb4812
      @mitrabb4812 6 years ago

      Bro, I was searching for this comment, lmao!!!!

    • @sachinsd4663
      @sachinsd4663 6 years ago

      @@mitrabb4812 Dude, I am glad someone noticed that. It is insane.

    • @mitrabb4812
      @mitrabb4812 6 years ago

      Yeah man, big LOL.

  • @harshitsingh480
    @harshitsingh480 5 years ago +1

    LMS stands for least mean square, not least minimum slope.

    • @harshitsingh480
      @harshitsingh480 5 years ago

      Sorry for the comma in between; it's "mean", not "m,ean".

  • @madsudan9227
    @madsudan9227 6 years ago +1

    Gives a brief overview. Thanks for your efforts!

  • @suddhasheel
    @suddhasheel 7 years ago

    Sorry to say this, but these are poor explanations by IIT standards. LMS, batch gradient descent, and stochastic gradient descent would require more explanation.

  • @s_sasmal
    @s_sasmal 6 years ago +4

    Can't imagine that kids prepare from their 8th standard to get into IIT,
    and after getting in, this is the kind of lecture they get.

  • @theperson66
    @theperson66 7 months ago

    The best professor!! I love your classes, thank you for your hard work.

  • @harisankar6104
    @harisankar6104 4 years ago

    Please help me: at 16:09 she gives the equation for Y with power terms for p independent variables, like in polynomial regression, but in the last section she says that for multiple variables the equation is a linear function, as in multivariable regression. Are these p independent variables not multiple variables?

  • @pankajkumarbarman765
    @pankajkumarbarman765 1 year ago

    Great lecture, ma'am. Thank you so much, and happy Teachers' Day. Please accept my respects.

  • @regretsonly44
    @regretsonly44 1 year ago

    Queen 👑
    Amazing explanation