Ordinary Least Squares Regression

Comments • 31

  • @gueyenono
    @gueyenono 7 years ago +35

    Never in my life have I made the connection between squaring a value and the area of a square. Explaining least squares this way is so smart! Great job!

  • @kuei-hsienchu7010
    @kuei-hsienchu7010 8 years ago +20

    Halfway through the video I'm like, holy shit. This is super intuitive!!!! Thanks a lot!!

  • @SnyderRV
    @SnyderRV 7 years ago +3

    That's a really good intuitive explanation! Good job! :)

  • @haoxiong7077
    @haoxiong7077 8 years ago

    Very informative and concise! Thank you for your hard work!

  • @zuhrymousa12345
    @zuhrymousa12345 6 years ago

    Excellent explanation, thank you so much Ritvik. You have a real talent for teaching.

  • @Justin-zw1hx
    @Justin-zw1hx 1 year ago +1

    this was 7 years ago, and I can still tell this is a top-quality tutorial even though the video quality was not that good

  • @hameddadgour
    @hameddadgour 1 year ago

    Great presentation!

  • @unknownvariablex7
    @unknownvariablex7 3 months ago +1

    loved it man, this is the stuff I was looking for. thanks a lot 🙏🙏

  • @happy_labs
    @happy_labs 7 years ago +1

    Very helpful, thanks so much for making this series. Only thing that threw me was your Qs look a lot like Zs!

  • @jeevanraajan3238
    @jeevanraajan3238 6 years ago

    Amazing!! Ur the GOD !!

  • @alimuqaibel7619
    @alimuqaibel7619 6 years ago

    Thanks for the clear explanation

  • @pavelkonovalov8931
    @pavelkonovalov8931 7 years ago +6

    Greetings sir, I would like to know whether it is possible to make a video tutorial on time series statistics and its models (such as AR, MA, ARIMA, ARMAX, and so on). It would be great. So far I have liked your videos on shrinkage methods (the best ones I have seen so far). Cheers from Russia

  • @donaldputnam8413
    @donaldputnam8413 3 months ago

    Squares are used to equalize negative error with positive error, not for the “area”.

  • @babakparvizi2425
    @babakparvizi2425 6 years ago

    Great explanation; it all starts with a good geometric visual and example, similar to how 3blue1brown does it.

  • @herryfrd2740
    @herryfrd2740 7 years ago

    Crazy helpful!

  • @udaymhaskar
    @udaymhaskar 7 years ago

    Best explanation.

  • @Justin-zw1hx
    @Justin-zw1hx 1 year ago

    and this is basically the so-called machine learning concept: minimizing the sum of squared errors, which means minimizing the loss/cost function. Machine learning is just statistics and optimization.
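    The connection this comment draws, that OLS is just minimizing a squared-error loss, can be sketched with plain gradient descent. The data, learning rate, and iteration count below are made-up for illustration and are not from the video:

    ```python
    import numpy as np

    # Toy data (illustrative values only)
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])
    X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept

    # Closed-form OLS solution for comparison
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Gradient descent on the squared-error loss L(b) = ||y - X b||^2
    beta = np.zeros(2)
    lr = 0.01
    for _ in range(20000):
        grad = -2 * X.T @ (y - X @ beta)  # gradient of the loss w.r.t. b
        beta -= lr * grad

    print(np.allclose(beta, beta_ols, atol=1e-4))  # True
    ```

    Minimizing the squared loss by iterative optimization lands on the same coefficients the OLS normal equations give in closed form, which is the point the comment is making.
    
    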

  • @aliteshnizi672
    @aliteshnizi672 6 years ago

    A great one.

  • @angelali1846
    @angelali1846 1 year ago

    Very useful video, much appreciated. May I know why OLS cannot be used to solve for Moving Average coefficients?

  • @jake_runs_the_world
    @jake_runs_the_world 2 years ago

    holy shit, this is brilliant wow

  • @utkarshasthana3853
    @utkarshasthana3853 7 years ago +1

    Could you put a video out for MLE regression?

  • @mullerqenawy7715
    @mullerqenawy7715 4 years ago

    Amazing

  • @factnewschannel1245
    @factnewschannel1245 8 years ago

    and how does it become equal?

  • @PratikPrajapati84
    @PratikPrajapati84 5 years ago

    perfect.

  • @ujjwalsharma7283
    @ujjwalsharma7283 6 years ago

    Very well explained, sir. Now I hope I will be able to pass my exam.

  • @arunbm123
    @arunbm123 7 years ago

    nice tutorial

  • @ranjeetapegu9015
    @ranjeetapegu9015 7 years ago

    cool

  • @ManishKumar-bb5ql
    @ManishKumar-bb5ql 6 years ago

    Why not write R^2 = SSE/SST instead of R^2 = 1 - SSR/SST? Both represent the same thing: the fraction of total variation explained by the model.

    • @bhargavasavi
      @bhargavasavi 4 years ago

      As Sir explained at the end, this is only true in OLS when we have a linear model. If we want to calculate R^2 for a quadratic model, we cannot assume that SST = SSR + SSE, so we cannot generalize R^2 as SSE/SST; 1 - SSR/SST is the definition that carries over.
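      As a numerical check of the two formulas discussed in this thread, here is a small sketch using the thread's naming (SST = total, SSR = residual, SSE = explained sum of squares). The data values are made up for illustration. For linear OLS with an intercept, the decomposition SST = SSR + SSE holds, so the two formulas agree:

      ```python
      import numpy as np

      # Toy data (illustrative values only)
      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

      # OLS fit with an intercept
      X = np.column_stack([np.ones_like(x), x])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      y_hat = X @ beta

      # The thread's naming: SST = total, SSR = residual, SSE = explained
      sst = np.sum((y - y.mean()) ** 2)
      ssr = np.sum((y - y_hat) ** 2)
      sse = np.sum((y_hat - y.mean()) ** 2)

      # For linear OLS with an intercept the decomposition holds,
      # so both R^2 formulas give the same number.
      print(np.isclose(sst, ssr + sse))            # True
      print(np.isclose(1 - ssr / sst, sse / sst))  # True
      ```

      For fits where the decomposition does not hold, only 1 - SSR/SST keeps its interpretation, which is the reply's point.
      
      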