The most profound and in-depth mathematical videos on statistics and LRMs. Thanks!
Thank you for the clear explanation, your videos have been incredibly helpful!
Thank you very much. It's very well explained.
You said that the variance of BY is B Var[Y] B^T. Where (and why) does the B^T come from?
same question
Apparently it's a rule: Var[Ax + b] = A * Var(x) * A^T, where x is a random column vector and A is a constant matrix.
Mathematical derivations:
math.stackexchange.com/a/2365257
Additional info:
www.statlect.com/fundamentals-of-probability/covariance-matrix
www.sfu.ca/~lockhart/richard/350/08_2/lectures/GeneralTheory/web.pdf
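To see the rule in action, here's a small Monte Carlo sketch with NumPy (the covariance matrix Sigma, the matrix A, and the shift b are arbitrary illustrative choices, not values from the video):

```python
import numpy as np

# Empirically check Var[Ax + b] = A * Var(x) * A^T for a random column
# vector x. All numbers below are made up for illustration.
rng = np.random.default_rng(0)

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])   # true covariance of x
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])       # constant matrix
b = np.array([4.0, -1.0])        # constant shift (drops out of the variance)

# Draw many samples of x; each row of y is A x_i + b.
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=200_000)
y = x @ A.T + b

empirical = np.cov(y, rowvar=False)   # sample covariance of Ax + b
theoretical = A @ Sigma @ A.T         # the rule's prediction

print(np.round(empirical, 2))
print(np.round(theoretical, 2))
```

The two matrices agree up to sampling noise, and b has no effect on the variance at all, just as the rule says.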
Hi guys, yes Sonya, you are correct, that is the rule. To convince yourselves, I would suggest using the fact that for a random variable X, Var[X] = E[X^2] - (E[X])^2. If we have a random variable Y = aX + b, where a and b are constants (not random), then Var[Y] = Var[aX + b] = E[(aX + b)^2] - (E[aX + b])^2. If you take this route, expand the brackets and use the properties of expectation. This will convince you that the equation holds in scalar form.
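A quick numeric check of the scalar version described above, Var[aX + b] = a^2 Var[X], computed via E[Y^2] - (E[Y])^2 (the constants a, b and the exponential distribution are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 3.0, 5.0                              # arbitrary constants
X = rng.exponential(scale=2.0, size=500_000)  # any distribution works
Y = a * X + b

# Var[Y] computed directly from the definition E[Y^2] - (E[Y])^2.
var_Y = np.mean(Y**2) - np.mean(Y)**2

print(var_Y)            # matches a^2 * Var[X]; b has dropped out
print(a**2 * np.var(X))
```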
Why do you remove the identity matrix when calculating Var(beta hat)? It goes from sigma squared * I to just sigma squared.
Hi, since multiplying by the identity matrix leaves the other matrix in that equation (the inverse of X transpose times X) unchanged, it'll just leave the (X^T X)^{-1} behind with the sigma squared in front.
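This is easy to verify numerically: sigma^2 * I @ (X^T X)^{-1} is exactly sigma^2 * (X^T X)^{-1}, because the identity matrix changes nothing. (The design matrix X and sigma^2 below are made-up example values.)

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))     # an arbitrary example design matrix
sigma2 = 1.5                     # an arbitrary example error variance

XtX_inv = np.linalg.inv(X.T @ X)
I = np.eye(3)

with_identity = sigma2 * I @ XtX_inv   # sigma^2 * I * (X^T X)^{-1}
without = sigma2 * XtX_inv             # sigma^2 * (X^T X)^{-1}

print(np.allclose(with_identity, without))  # True
```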
Short and precise!
This video is great! Surprised it doesn’t have many likes!
Using "x" as the multiplication sign while also using x as a variable was a little bit confusing. xD
Cypher is part of your name, it shouldn't have been hard for you ;-)