Derivation of OLS coefficients

  • Published: 10 Aug 2015
  • The simple maths of OLS regression coefficients for the simple (one-regressor) case. (A short code sketch of the resulting formulas follows below.)
    This video screencast was created with Doceri on an iPad. Doceri is free in the iTunes app store. Learn more at www.doceri.com
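
    For readers who want the punchline as code, here is a minimal sketch of the one-regressor OLS formulas the video derives, written in Python. The function name ols_simple and the variable names x, y, b0, b1 are illustrative, not from the video.

        # Minimal sketch of the one-regressor OLS formulas:
        #   b1 = sum((x_i - x_bar)(y_i - y_bar)) / sum((x_i - x_bar)^2)
        #   b0 = y_bar - b1 * x_bar
        def ols_simple(x, y):
            n = len(x)
            x_bar = sum(x) / n
            y_bar = sum(y) / n
            b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
                 / sum((xi - x_bar) ** 2 for xi in x)
            b0 = y_bar - b1 * x_bar
            return b0, b1

        # Example usage (made-up data):
        # b0, b1 = ols_simple([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])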

Comments • 33

  • @averymojica3742 • 6 years ago +5

    Clear and to the point. I'm going to be referencing your videos for my econometrics class! Thank you so much!

  • @preritgoyal9293 • 3 years ago +2

    Sir, you are great!
    You explained such a complex derivation so easily.
    😍😍

  • @amritraj3934 • 2 years ago +1

    ooooh god ... you saved my degree sir!!!! hats off!!

  • @whitefiberman2094 • 4 years ago

    Excellent derivation!

  • @AJ-et3vf • 2 years ago

    Awesome video, sir. Thank you.

  • @trchri • 2 years ago

    This is the one that finally made it click. Thank you

  • @kundananji.simutenda • 2 years ago

    Well explained! What books would you recommend for econometrics?

  • @lovenessphiri2213 • 6 months ago

    Thank you for this 😊

  • @jameschen2097 • 3 years ago

    So clear. God, you saved my life.

  • @grkoica • 2 years ago

    Best regards to Mr. Mathematics... thank you.

  • @shivammishrashashwat • 2 years ago

    Awesome 🙏🙏

  • @tmrmbx5496 • 4 years ago +1

    Thank you, legend.

  • @rohtashbhall2671 • 5 years ago

    Very nice 👍

  • @NaveenKumar-bi2ku • 4 years ago

    Thanks... it's awesome :) Thanks :)

  • @user-io4sr7vg1v • 2 days ago

    Bravo.

  • @quinnpisani180 • 4 years ago

    Would you take the same steps for the OLS estimate if there were another slope estimator, Beta-hat-2, in the regression line?

    • @billsundstrom8948 • 4 years ago

      Yes, only then we have three equations in three unknowns, so the formulas get more complicated. (A sketch of those three equations appears after this thread.)

    • @quinnpisani180 • 4 years ago

      Thank you
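
    To illustrate the reply above: with two regressors, least squares minimizes the sum of squared residuals over three parameters, giving three first-order conditions. A sketch in generic notation (the symbols b_0, b_1, b_2, x_{1i}, x_{2i} are not taken from the video):

        \min_{b_0, b_1, b_2} \; \sum_{i=1}^{n} (y_i - b_0 - b_1 x_{1i} - b_2 x_{2i})^2

        \sum_i (y_i - b_0 - b_1 x_{1i} - b_2 x_{2i}) = 0
        \sum_i x_{1i} (y_i - b_0 - b_1 x_{1i} - b_2 x_{2i}) = 0
        \sum_i x_{2i} (y_i - b_0 - b_1 x_{1i} - b_2 x_{2i}) = 0

    Solving these three normal equations in the three unknowns b_0, b_1, b_2 yields the multiple-regression OLS formulas.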

  • @lefokomafoko2814 • 11 months ago

    Exam in 2 weeks. Thank you 😀

  • @darkhansaidnassimov4407 • 3 years ago

    At 12:00, how did the sum of x_i*y_i minus n*mean(x)*mean(y) turn into the second-to-last row? I didn't quite get it. Thanks in advance.

    • @williamsundstrom6103 • 3 years ago +2

      If you take the LHS of the second-to-last row and expand it, you will see that some terms can be collected, and you get the LHS of the third-to-last row. The derivation of this equivalence is very similar to the derivation around 7:15. (The expansion is written out below.)
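
      Written out, the standard expansion behind this step (generic notation, not transcribed from the video) is:

          \sum_i (x_i - \bar{x})(y_i - \bar{y})
            = \sum_i x_i y_i - \bar{y} \sum_i x_i - \bar{x} \sum_i y_i + n \bar{x} \bar{y}
            = \sum_i x_i y_i - n \bar{x} \bar{y}

      using \sum_i x_i = n \bar{x} and \sum_i y_i = n \bar{y}: the middle terms become -n \bar{x} \bar{y} each, and one of them cancels against the +n \bar{x} \bar{y}.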

  • @1UniverseGames • 3 years ago

    How can we obtain the intercept B0 and slope B1 after shifting line l to l'?

    • @williamsundstrom6103 • 3 years ago

      Not exactly sure what you are asking, Jahid. The B0 and B1 derived in this video are estimates completely dependent on the data... so if we have new or different data that imply a different line, we will have different calculated values of B0 and B1.

  • @ashishchauhan7343 • 7 years ago +1

    I want the multiple regression derivation.

    • @pianotalent • 6 years ago

      I will be posting a comprehensive proof and derivation of the multiple regression formulas in a couple of weeks; busy at the moment...

    • @msfasha • 1 year ago

      Very clear, brilliant 👍

  • @paultoronto42 • 4 years ago

    Excellent explanation. There is just one step that I'm not 100% convinced about. No doubt this has to do with my own ignorance about summation. In the part where he solves for b_0, I can't quite see how he goes from Sigma(b_1 * X_i) to (b_1 * n * mean(X)). With my limited understanding I would have thought it should be (b_1 * n^2 * mean(X)). He calculates n, but I'm thinking it should be n^2.

    • @billsundstrom8948 • 4 years ago +1

      This follows from something mentioned around 6:50: by definition mean(X) = (1/n) * Sigma(X_i), so multiplying both sides by n gives Sigma(X_i) = n * mean(X). Then note that because b_1 is a parameter, we can factor it out of the sum, so Sigma(b_1 * X_i) = b_1 * Sigma(X_i) = b_1 * n * mean(X). There is no square involved, because when we take the derivative with respect to b_0 in the minimization, the square term goes away.

    • @paultoronto42 • 4 years ago

      @@billsundstrom8948 Thanks, but I'm still not 100% clear. I understood the derivative with respect to b_0; that is not where I got the n squared from. This is my thinking, which I know is wrong, but I don't quite understand why: if Sigma(b_1) = n * b_1 and Sigma(X_i) = n * mean(X), then wouldn't Sigma(b_1 * X_i) be [n * b_1 * n * mean(X)] = n^2 * b_1 * mean(X)?

    • @billsundstrom8948 • 4 years ago +1

      @@paultoronto42 It helps to write out the sum: Sigma(b_1 * X_i) = b_1*X_1 + b_1*X_2 + b_1*X_3 + ... + b_1*X_n. Now factor out b_1 to get b_1 * (X_1 + X_2 + X_3 + ... + X_n) = b_1 * n * mean(X). (The identity is written out after this thread.)

    • @paultoronto42 • 4 years ago

      @@billsundstrom8948 Thanks, that does help!

    • @paultoronto42 • 4 years ago

      @@billsundstrom8948 Thanks also for your video on Summation Notation. I should have watched that one first.
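
    The identity discussed in this thread, written out in generic notation (standard summation algebra, not transcribed from the video):

        \sum_{i=1}^{n} b_1 X_i = b_1 \sum_{i=1}^{n} X_i = b_1 \cdot n \bar{X}

    The rule \sum_{i=1}^{n} b_1 = n b_1 applies only when the summand is the constant b_1 by itself. In \sum_i b_1 X_i there is a single sum over the products, so b_1 factors out once and no n^2 appears.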