FRM: Standard error of estimate (SEE)

  • Published: 30 Jan 2008
  • A linear regression gives us a best-fit line for a scatterplot of data. The standard error of estimate (SEE) is one of the metrics that tells us about the fit of the line to the data. The SEE is the standard deviation of the errors (or residuals). For more financial risk videos, visit our website! www.bionicturtle.com
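
    A minimal sketch (not from the video; the numbers below are made up) of the calculation the description refers to: fit a line by least squares, take the residuals, and compute the SEE as the standard deviation of those residuals using n - 2 degrees of freedom.

```python
# Illustrative sketch: SEE = sqrt( sum of squared residuals / (n - 2) )
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

slope, intercept = np.polyfit(x, y, 1)      # best-fit line
residuals = y - (intercept + slope * x)     # errors (residuals) around the line

n = len(x)
sse = np.sum(residuals ** 2)                # sum of squared errors (SSE)
see = np.sqrt(sse / (n - 2))                # standard error of estimate (SEE)
print(f"SSE = {sse:.4f}, SEE = {see:.4f}")
```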

Comments • 36

  • @tjarnie · 12 years ago

    This was grrreeeeaaat... I'm taking a research stats class and have been trying to wrap my head around what the SEE really means... Thank you.

  • @skjoldborg · 12 years ago

    Beautifully done, easy to follow and to reproduce in excel without any loss of detail. Thank you David.

  • @tamaraaziz427 · 3 years ago

    Crazy that this was recorded a couple months before the downturn of the market.

  • @01cloudwalker · 13 years ago

    You're the best! I recently destroyed my exam after struggling so much with the material; I can't express how much the videos helped. Thanks!

  • @anaarika · 14 years ago

    Thank you for creating this video. It's helped clear up a few questions I had concerning SEE. Thanks again.

  • @lleverfreell · 16 years ago

    Thank you for this video. Your explanations were very clear and easy to understand.

  • @HamletNOR · 7 years ago +1

    Good video! Very helpful for understanding the SSE/SEE concept! Thank you for making it!

    • @bionicturtle · 7 years ago

      You're welcome, and thank you for watching! We are happy to hear that our video was so helpful :)

  • @oomieboobie · 11 years ago

    Thank u! I love the way that you clearly explain the content!

  • @bionicturtle · 12 years ago

    Thank you for your support; it is nice to get affirmation on YouTube.

  • @jordanf2646 · 5 years ago

    Perfect video, informative and easy to understand

  • @bionicturtle · 12 years ago

    @dinas0r (n-2) is the degrees of freedom, which is the sample size (n) but reduced by the number of coefficients that need to be estimated: in the univariate regression, that is two, the slope and the intercept. So, it's n-k, where k is the number of coefficients or, equivalently and easier to remember, the number of variables (e.g., one independent + one dependent)
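
    As a made-up illustration of that rule: with n = 30 observations in a univariate regression, two coefficients (slope and intercept) are estimated, so the SEE denominator is 30 - 2 = 28; if a second independent variable were added, three coefficients would be estimated and the denominator would be 30 - 3 = 27.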

  • @FormosaFinance · 16 years ago

    Intuitively and fundamentally, the Sum of Squared Errors (SSE) is similar to a variance: it is the sum of squared differences between the observed and fitted values. The Standard Error of Estimate (SEE) is similar to a standard deviation; it measures the typical distance of a variable from its benchmark (fitted) value.
    Hence SSE and SEE are different but related.
    Hopefully this explanation helps.
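
    In symbols (my notation, consistent with the video's univariate case): SEE = sqrt( SSE / (n - 2) ), so the SEE is just the SSE rescaled back into the units of the dependent variable, in the same way a standard deviation is the square root of a variance after the degrees-of-freedom adjustment.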

  • @statswithcats · 14 years ago

    Good job turtle. Just to clear up one thing, the denominator in the SEE formula is (n-k-1), the number of samples minus the number of independent variables minus one.
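
    As a made-up illustration of that denominator (not the video's spreadsheet): with k independent variables, the SEE divides the SSE by n - k - 1, which reduces to the n - 2 used in the video when k = 1.

```python
# Sketch of the n - k - 1 denominator for a regression with k independent
# variables (illustrative data, not from the video).
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0],
              [5.0, 6.0],
              [6.0, 5.0]])
y = np.array([5.1, 4.2, 9.8, 9.1, 14.2, 13.0])

# Add an intercept column and fit by ordinary least squares
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
residuals = y - X1 @ beta

n, k = len(y), X.shape[1]                          # k = number of independent variables
see = np.sqrt(np.sum(residuals ** 2) / (n - k - 1))
print(round(see, 4))
```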

  • @ashandrod · 13 years ago

    Thank you David!!, you are the BEST.

  • @Infinite8Doomsday · 11 years ago

    Thanks David. You've been a big help.

  • @ganglixtube · 8 years ago

    It helped! Thank you for making this video!

  • @xchrisit · 10 years ago

    Thank you for this video !

  • @kamalkarim4948 · 8 years ago

    Well done. Thanks for creating this video.

  • @WaRpEdProductionz · 13 years ago

    Thank you! I was so lost in my damn stats homework till I found this!

  • @MrSupernova111 · 7 years ago +1

    Awesome presentation!!! Thank you!!

  • @Mariamlas · 12 years ago

    Thank you so much!

  • @roseb2105 · 6 years ago

    To clarify: is this different from the sum of squares due to error? With prediction error, we are finding the average expected distance from an observed y value to its corresponding y value on the regression line. So if one wanted to guess the y value for an x value that is not on the graph, we would extend the regression line to find the corresponding fitted y value, but we would expect, on average, that the error (the distance between the actual y value and the regression line) equals the standard error of estimate (the prediction error), which in the video's example is 8.6?

  • @haleysdad2004 · 12 years ago

    THANK YOU!!!!
    THANK YOU!!!!
    THANK YOU!!!!
    I finally get it.
    THANK YOU!!!!

  • @212Dude212 · 16 years ago

    Dude, thanks so much, you're so helpful!!!!!!!!!

  • @magdabntmohamed5590 · 5 years ago

    Thank you, but please explain how an x point can have more than one or two y points.

  • @nikastacy · 14 years ago

    holy heck THANK YOU!!

  • @innosanto · 5 years ago

    why divide by n-2 and not n-1 or n?

  • @nosuchthing8 · 12 years ago

    Awesome!

  • @dinas0r · 12 years ago

    Why is it divided by "n-2"? Not "n-1" or just "n"?

  • @pisanghangus · 14 years ago

    good !!!

  • @fraddi · 10 years ago

    I love the sexy jazz music

  • @cmckay889 · 9 years ago

    You look like a young Kevin Spacey!

  • @roseb2105 · 6 years ago

    sorry for long comment