A Random Walk - introduction and properties
- Published: 15 Sep 2013
- This video provides an introduction to Random Walk processes, and we start to derive the properties of such processes.
This video provides a methodology for diagnosing whether a given series is AR(1) or MA(1). Check out ben-lambert.com/econometrics-... for course materials and information regarding updates on each of the courses. Quite excitingly (for me at least), I am about to publish a whole series of new videos on Bayesian statistics on YouTube. See here for information: ben-lambert.com/bayesian/ Accompanying this series, there will be a book: www.amazon.co.uk/gp/product/1...
absolutely clear explanation. Thanks a lot!
The following are derived by differencing y(t) d times. The optimal differencing order d is usually in [0, 2]:
1st-order random walk (1-diff): y(t) = y(t-1) + e(t)
2nd-order random walk (2-diff): y(t) = 2*y(t-1) - y(t-2) + e(t)
3rd-order random walk (3-diff): y(t) = 3*y(t-1) - 3*y(t-2) + y(t-3) + e(t)
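The recursions above can be checked in a short simulation. This is a sketch, assuming iid standard-normal shocks and NumPy: differencing a d-th-order random walk d times recovers the shock series exactly.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for reproducibility
n = 10_000
e = rng.standard_normal(n)

# 1st-order random walk: y(t) = y(t-1) + e(t)
y1 = np.zeros(n)
for t in range(1, n):
    y1[t] = y1[t - 1] + e[t]

# 2nd-order random walk: y(t) = 2*y(t-1) - y(t-2) + e(t)
y2 = np.zeros(n)
for t in range(2, n):
    y2[t] = 2 * y2[t - 1] - y2[t - 2] + e[t]

# Differencing d times recovers the shocks:
print(np.allclose(np.diff(y1, n=1), e[1:]))  # True
print(np.allclose(np.diff(y2, n=2), e[2:]))  # True
```

The identity holds by construction: first differences of y1 are exactly e(t), and second differences of y2 are exactly e(t).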
Best explanation I found
Really helpful, Thanks.
Thanks! My only question here: does this imply that the autocorrelation rho(k) of this random walk is 1 for every lag k?
Does anyone know of a video/ reference where they derive higher order moments using this approach? Thanks in advance.
I am getting confused in the calculation of the covariance. It would be cov(x_t, x_t) + cov(x_t, sum of the errors). The first term is the variance; I got that. But the second term would be cov(x_t, e_{t+h}) + cov(x_t, e_{t+h-1}) and so on, so it shouldn't be zero even if the errors are iid. Even though x_t is not changing and it is only the errors which vary, there could be the possibility of x_t being correlated with errors varying in time due to a lag effect. Please point out my mistake.
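One way to see why the cross terms vanish: x_t is built only from shocks up to time t, while e_{t+1}, ..., e_{t+h} are drawn independently of all of them, so each cov(x_t, e_{t+j}) is zero. A minimal Monte Carlo sketch (assuming a pure random walk with iid standard-normal shocks; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed for reproducibility
reps, t, h = 200_000, 50, 5

e = rng.standard_normal((reps, t + h))
x_t = e[:, :t].sum(axis=1)   # x_t depends only on shocks up to time t
future = e[:, t:]            # e_{t+1}, ..., e_{t+h}

# Sample covariance between x_t and each future shock: all close to 0
covs = [np.cov(x_t, future[:, j])[0, 1] for j in range(h)]
print(np.round(covs, 3))
```

The sampling noise here is on the order of sqrt(t/reps) ≈ 0.016, so the estimates hover near zero rather than hitting it exactly.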
Thanks.
THANK YOU SIR
Why, if we back-substitute the AR(1) with rho < 1, do we not get a variance which goes to infinity? Because rho * t * sigma^2 should go to infinity even with rho < 1, shouldn't it? Where is my mistake?
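Back-substituting an AR(1) gives Var(x_t) = sigma^2 * (1 + rho^2 + rho^4 + ... + rho^(2(t-1))), a geometric series, not t * sigma^2; for rho < 1 it converges to sigma^2 / (1 - rho^2), and only rho = 1 makes every term equal 1 so the sum grows like t. A quick numeric check (rho, sigma^2, and t are illustrative values):

```python
rho, sigma2, t = 0.8, 1.0, 200

# Geometric sum from back-substitution vs its closed-form limit
var_t = sigma2 * sum(rho ** (2 * i) for i in range(t))
limit = sigma2 / (1 - rho ** 2)
print(var_t, limit)  # both ~2.78, nowhere near t * sigma^2 = 200
```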
Hey @Ben Lambert! Can you add some more details on how a random walk can have a constant mean (as claimed at 2:45)? I don't really understand that part. If the function starts at (t=0, Y=0), for example, and then moves randomly according to its white-noise error term as time progresses, how can it have a constant mean? Could it not move infinitely into negative or positive Y just by chance?
Thanks for all the excellent videos, it has been a superb resource during my time at university! /Hugo, Umeå, Sweden
How did you come to the conclusion that if rho = 1 then the time series is non-stationary?
variance = t * sigma^2
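That formula is easy to verify by simulation: the variance across many independent random-walk paths grows linearly in t. A sketch, assuming iid standard-normal shocks (sigma = 1) and NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed for reproducibility
reps, T = 100_000, 100

# Many independent random-walk paths: x_t = x_{t-1} + e_t, with x_0 = 0
e = rng.standard_normal((reps, T))
x = e.cumsum(axis=1)

# Cross-sectional variance at time t should be approximately t * sigma^2 = t
for t in (10, 50, 100):
    print(t, x[:, t - 1].var())
```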
what if ET phoned home?
How did this E[X_0] become 0?
just an assumption
If X(t) = X(t-1) + X(t-2) + e(t-1) + e(t), what is E[X(t)]?
0
How you can assume everything on your own?
by imagination!
What happens if E[X_0] is not 0?
Hi, thanks for your message. Yes, I agree with you that this point is somewhat confusing. However, we can always get back to the situation where E[X_0] = 0 by a simple transformation of variables, hence this fact isn't an issue. Hope that helps. Best, Ben
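The transformation of variables can be checked directly: define Z_t = X_t - X_0; then Z_0 = 0 (so E[Z_0] = 0) and Z_t has exactly the same increments as X_t. A sketch, assuming a random walk started at the illustrative value X_0 = 5 with iid standard-normal shocks:

```python
import numpy as np

rng = np.random.default_rng(3)  # arbitrary seed for reproducibility
n, x0 = 1_000, 5.0

e = rng.standard_normal(n)
x = np.empty(n)
x[0] = x0
for t in range(1, n):
    x[t] = x[t - 1] + e[t]  # random walk started at x0

z = x - x0  # transformed series: z_0 = 0, same increments as x
print(z[0])                                  # 0.0
print(np.allclose(np.diff(z), np.diff(x)))   # True: identical dynamics
```

Shifting by a constant changes none of the differences, so every property derived for the E[X_0] = 0 case carries over.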
The mathematical proof shows that a random walk has a constant mean of 0, but in graphs this doesn't seem true: the series appears to have different means over time.
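The constant mean is across realizations, not along a single path: E[X_t] = 0 for every t, even though any one simulated path drifts around. A quick check (assuming iid standard-normal shocks; path count and horizon are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)  # arbitrary seed for reproducibility
paths, T = 50_000, 200

# Each individual path wanders, but the mean ACROSS paths stays near 0 at every t
x = rng.standard_normal((paths, T)).cumsum(axis=1)
ensemble_mean = x.mean(axis=0)
print(ensemble_mean.min(), ensemble_mean.max())  # both close to 0
```

A single path's time average is not an estimate of E[X_t] here, because the random walk is non-ergodic; that is why graphs of one realization look like they have a shifting mean.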
Is this how photons escape the Sun?
You know, we can read equations, too. You didn't have to speak.
fuck off
I can't read. Speak for yourself.