Great video Mike, you really helped me and my friends out with our project!
He surely did
Glad I could help!
absolutely great ! thank you for the video
Thank you so much for this! Lifesaver!
Thank you for everything Mike
nice and concise, thank you.
Thank you very much Prof.
You are a life saver!
Thank you professor
thankyou!!!!!!!!!!!!!!!!!!!!!!!!!!!
Thank you
Thank you so much
Mike, why should we use c. when calculating the square of a variable, say c.density or c.polpc? Kindly respond.
This is awesome, thanks mate
I have a question: when I’m doing my White test I got 193 degrees of freedom, which is exactly the same as the number of observations in the dataset. Is there a way to fix it?
Dear Mike, can you please elaborate on how to interpret the result of the imtest command? How will we know that there is heteroskedasticity? If heteroskedasticity exists, how do we treat it? Is it bad for the model, or does it affect the results?
Thank You!
What about panel data, can we still use the White test with panel data? I read in some articles that xttest3 is only suitable for large-T, small-N samples, so I'm trying to figure out if we could apply the White test to a fixed-effects model...
I have one question: when using the regress command, I added the robust option afterwards. Does that mean I won't have heteroskedasticity, since I cannot run the hettest command?
Ah, good question! Using robust standard errors removes the impact of heteroskedasticity on the variance, SE, and t-statistic calculations, but it does not fix the underlying problem. So the assumption is that if you use "robust" you have heteroskedasticity, but you don't need to test for it because you have corrected for it.
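To see concretely what "robust" changes, here is a sketch in Python/numpy (not the video's Stata workflow, and the data are simulated): the slope estimates are identical either way; only the standard-error formula differs, with the robust (HC0 sandwich) version using each observation's own squared residual instead of one pooled error variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated regression with deliberately heteroskedastic errors
# (the error spread grows with x). Not data from the video.
n = 300
x = rng.uniform(0, 5, n)
y = 1 + 2 * x + rng.normal(0, 0.3 * (1 + x))

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
u = y - X @ beta                            # residuals

XtX_inv = np.linalg.inv(X.T @ X)

# Classical SEs assume one common error variance (homoskedasticity).
s2 = u @ u / (n - 2)
se_classical = np.sqrt(np.diag(s2 * XtX_inv))

# HC0 robust SEs: sandwich estimator with per-observation squared residuals.
meat = X.T @ (X * (u ** 2)[:, None])
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# beta is unchanged by the choice; only se_classical vs se_robust differ.
```

This mirrors what Stata's `, robust` option does behind the scenes (Stata's default is the HC1 variant, which adds a small degrees-of-freedom scaling).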
Does this work for a VAR?
Can you explain intuitively what step 2 does? Why do we have to get the u_hat^2 term and regress it on the RHS of the equation?
The uhat^2 variable is the estimated error variance for each observation. If the model is homoskedastic, this term should be constant. Therefore, if we can show that the RHS variables can predict changes in uhat^2, we conclude that the term is not constant across observations, and thus the model suffers from heteroskedasticity.
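The logic above can be sketched end-to-end in Python/numpy (simulated data, not the video's Stata commands): fit the original regression, square the residuals, then run the auxiliary regression of uhat^2 on the levels and squares of the regressors and form the LM statistic N*R^2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with deliberate heteroskedasticity:
# the error spread grows with x, so the White test should detect it.
n = 200
x = rng.uniform(1, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5 * x)

# Step 1: original regression, keep the squared residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
uhat2 = (y - X @ beta) ** 2   # estimated error variance per observation

# Step 2: auxiliary regression of uhat^2 on levels and squares
# (with a single regressor there are no cross-interaction terms).
Z = np.column_stack([np.ones(n), x, x ** 2])
gamma, *_ = np.linalg.lstsq(Z, uhat2, rcond=None)
resid_aux = uhat2 - Z @ gamma
tss = (uhat2 - uhat2.mean()) @ (uhat2 - uhat2.mean())
r2 = 1 - (resid_aux @ resid_aux) / tss

# LM statistic: under the null of homoskedasticity, N * R^2 is
# approximately chi-square with df = number of slope terms in Z (here 2).
lm = n * r2
```

If the RHS variables have no power to predict uhat^2, the auxiliary R^2 is near zero and so is the LM statistic; a large LM is evidence of heteroskedasticity.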
Is there a shorter way to do the regression when using 4 x variables? Thank you so much!
You can use Y-hat and Y-hat-squared as the independent variables in the test equation on squared residuals. This will contain the levels, squares and interactions of all 4 variables.
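That shortcut (sometimes called the special form of the White test) can be sketched the same way in Python/numpy, again with simulated data rather than the video's dataset: regress uhat^2 on just y-hat and y-hat-squared, so the auxiliary regression has 2 slope terms no matter how many x variables the original model has.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated model with two regressors and heteroskedastic errors.
n = 150
x1, x2 = rng.normal(size=(2, n))
y = 1 + x1 + x2 + rng.normal(0, 1 + x1 ** 2)

# Original regression: keep fitted values and squared residuals.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ beta
uhat2 = (y - yhat) ** 2

# Special-form auxiliary regression: uhat^2 on yhat and yhat^2 only.
# yhat is a linear combination of all the x's, so this implicitly
# captures their levels, squares, and interactions with df = 2.
Z = np.column_stack([np.ones(n), yhat, yhat ** 2])
gamma, *_ = np.linalg.lstsq(Z, uhat2, rcond=None)
resid = uhat2 - Z @ gamma
r2 = 1 - (resid @ resid) / (((uhat2 - uhat2.mean()) ** 2).sum())
lm = n * r2   # compare with the chi-square critical value, df = 2
```

The payoff is in the degrees of freedom: with 4 regressors the full White test equation has 14 terms, while this version always has just 2.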
How is the chi-square value interpreted?
Hi Henry: the chi-square value is the test statistic calculated as N*R2 from the test equation. The product N*R2 can be shown to follow a chi-square distribution, so the interpretation is that if the null were true (no heteroskedasticity), then R2 would be zero. A non-zero value can come either from random chance or because the null is not true. If the statistic is large enough that it equals or exceeds the 5% critical value, we can say that there is only a 5% chance of seeing this value if the null is true - or that we can reject the null with 95% confidence.
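A tiny worked example of that last step, with illustrative numbers (not taken from the video): for 2 degrees of freedom the chi-square tail probability has the closed form P(X > x) = exp(-x/2), so the p-value can be computed with the standard library alone.

```python
import math

# Illustrative numbers, not from the video: an LM statistic of 6.5
# from an auxiliary regression with 2 slope terms (df = 2).
lm = 6.5
df = 2

# For df = 2 the chi-square upper-tail probability is exactly
# P(X > x) = exp(-x / 2); for other df, use a chi-square table
# or a statistics library.
p_value = math.exp(-lm / 2)

print(round(p_value, 4))   # 0.0388 — below 0.05, so reject homoskedasticity
```

Equivalently, lm = 6.5 exceeds the 5% critical value of 5.99 for df = 2, which is the same decision stated in critical-value terms.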
how do you create c.density?
The variable "density" is already defined in the data set. The notation "c.density" in the regression command tells Stata that the variable density is continuous (rather than binary or a factor variable) and allows it to be used in an interaction term. I hope that helps!