EViews - 9 - Johansen VECM Model Estimation and Diagnostics

  • Published: 16 Sep 2024
  • This tutorial will guide you through selecting the appropriate specification for unit root tests, interpreting unit root tests (ADF and unit root with structural break), estimating a #VAR model, #lag #selection #criteria, #exogenous #variables, determining the #cointegration #test specification, #normalized coefficients, long-run and short-run coefficients, post-estimation diagnostics for #autocorrelation, #heteroskedasticity, #normality and #stability, short-run causality, impulse response functions, and variance #decomposition.
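For readers who want to reproduce the same pipeline outside the EViews menus, here is a minimal Python sketch of the steps listed above using statsmodels. The toy random-walk data, the maximum lag of 4, and the deterministic specification are illustrative assumptions, not the settings used in the video.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Toy random-walk data so the sketch runs end to end; replace with your own series in levels.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)).cumsum(axis=0), columns=["y", "x", "z"])

# 1. ADF unit root tests on levels and first differences
for col in df.columns:
    print(col,
          "level p =", round(adfuller(df[col])[1], 3),
          "diff p =", round(adfuller(df[col].diff().dropna())[1], 3))

# 2. Lag selection for the VAR in levels (AIC here; BIC/HQ are also reported)
p = max(VAR(df).select_order(maxlags=4).aic, 1)

# 3. Johansen cointegration test with p - 1 lagged differences (trace statistic)
rank = select_coint_rank(df, det_order=0, k_ar_diff=p - 1, method="trace")
print(rank.summary())

# 4. VECM with the selected rank (fall back to 1 for the toy data)
res = VECM(df, k_ar_diff=p - 1, coint_rank=max(rank.rank, 1), deterministic="ci").fit()
print(res.summary())                          # long-run (beta), loading (alpha) and short-run terms

# 5. Residual diagnostics and impulse responses (the plot needs matplotlib)
print(res.test_normality().summary())         # residual normality test
print(res.test_whiteness(nlags=8).summary())  # Portmanteau test for residual autocorrelation
res.irf(10).plot()                            # impulse response functions
```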

Comments • 27

  • @mickeykozzi 4 years ago +1

    You should use AIC for lag selection, which suggests 4; you could even run the VAR at lag 3 to see what your results are. Typically 1-2 lags are fine for annual data, but you can test for 3. When using the Johansen test you should use (p-1) lags, since it is a test on a differenced VAR, so if you picked lag 1 you should actually enter 0 0 in this test. The literature also says that for small samples the trace statistic has stronger power than the max-eigenvalue statistic. Because you chose option 3 based on your assumptions, and because there are 0 lags, you should now run a VAR on the differenced series, which is the short-run VAR model only. Likewise, when you run the VECM, because you have chosen 1 lag, you must use (p-1), so use 0 0. If you use 0 0, EViews will run a model with limited coefficients, meaning you only interpret the ECM term and no short-run causal effects; therefore, as stated before, running a VAR will explain the model better. You also did not discuss the significance of your long-run model: only TREF is significant and the others are not, so this is a weak causal long-run model. And you did not run the ECM model! (Go to Proc, Make System, order by variable, then copy-paste the model and run it.) From that model you then do your residual testing. Overall, your methods and processes are incorrect and will give students a poor example of how to run a VECM in EViews.
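As a side note to this comment, the (p - 1) convention and the trace vs. max-eigenvalue statistics can be seen directly in a Python/statsmodels sketch (EViews itself is menu-driven); the toy data and lag bound below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Simulated random walks standing in for a small annual dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(60, 3)).cumsum(axis=0), columns=["y", "x", "z"])

p = max(VAR(df).select_order(maxlags=4).aic, 1)   # lag order of the VAR in levels (AIC)
k = p - 1                                         # lagged differences for Johansen / VECM

joh = coint_johansen(df, det_order=0, k_ar_diff=k)
print("trace stats:   ", np.round(joh.lr1, 2), " 5% crit:", np.round(joh.cvt[:, 1], 2))
print("max-eig stats: ", np.round(joh.lr2, 2), " 5% crit:", np.round(joh.cvm[:, 1], 2))
# With k = 0 the corresponding VECM has no lagged-difference (short-run) terms, so only the
# error-correction coefficients are estimated -- the point made above about the "0 0" setting.
```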

  • @TinaTina-xn9on 3 years ago +1

    You are awesome man!!!

  • @sonicasinghi7117 9 months ago

    Wonderfully explained. Everything is very clear. Can you please make a video on VARX and SVARX applying Cholesky decomposition? Or even guide me here. I would like to reach out to you on this topic. Please. Thanks in advance.

    • @nomanarshed 9 months ago +1

      Will upload soon

    • @sonicasinghi7117 9 months ago

      Looking forward to it, sir. It will be extremely helpful.

  • @AshrafulIslam-bp1iv 21 days ago

    How did you determine the t-distribution critical value for the long-run equation?

    • @AshrafulIslam-bp1iv 21 days ago

      Are there any references? My model is good, and I also get a cointegrating relationship among the variables, but I am confused about the statistical significance of the long-run coefficients. Please explain or point me to some literature. Thanks.

    • @nomanarshed 21 days ago

      Rule of thumb: if the t-statistic is above 2 in absolute value, the coefficient is significant at roughly the 5% level.
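For context, a quick Python/scipy check (not part of the video) of where that rule of thumb comes from: the 5% two-sided critical value of the t distribution is close to 2 for moderate degrees of freedom and approaches the normal value of 1.96 as the sample grows.

```python
from scipy import stats

# 5% two-sided critical values of the t distribution for various degrees of freedom.
for dof in (10, 30, 60, 120):
    print(dof, round(stats.t.ppf(0.975, dof), 3))   # 2.228, 2.042, 2.000, 1.980
print("normal:", round(stats.norm.ppf(0.975), 3))   # 1.960
```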

  • @parveenkumar4848 1 year ago

    EXCELLENT EXPLANATION

  • @bellisma77 9 months ago

    Thanks for the explanation. I have a question, please: when estimating a VAR or VECM, should we use the variables at their levels, or in their stationary (differenced) form?

    • @nomanarshed 9 months ago

      Since the VECM is for I(1) variables, you can use the variables in levels. If you make a variable stationary, it cannot be added to the long run; it can only enter as a control variable in the short run. In simple words, the VECM distinguishes between I(0) and I(1) variables, so you cannot put both of them in the long run.
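A hedged sketch of this point in Python/statsmodels (not the EViews steps shown in the video): check each series with the ADF test, keep the I(1) series in levels inside the VECM, and let a stationary series enter only as a short-run exogenous control via the `exog` argument. The series here are simulated placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
data = pd.DataFrame({
    "y": rng.normal(size=200).cumsum(),   # I(1): random walk
    "x": rng.normal(size=200).cumsum(),   # I(1): random walk
    "s": rng.normal(size=200),            # I(0): stationary
})

# ADF p-values in levels: large for the random walks, near zero for the stationary series.
for col in data.columns:
    print(col, "ADF p-value (levels):", round(adfuller(data[col])[1], 3))

# I(1) series enter the VECM in levels; the I(0) series only as a short-run exogenous control,
# so it does not appear in the cointegrating (long-run) relation.
res = VECM(data[["y", "x"]], exog=data[["s"]], k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.summary())
```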

  • @ammarali4420 2 years ago

    If we take the log of all variables and the diagnostic tests still show problems of heteroskedasticity, non-normality and autocorrelation, what should be the next step?

    • @nomanarshed 2 years ago +1

      Explore the theory then. You might be missing an important variable, transformation or specification. Not all models are linear in reality.

    • @ammarali4420 2 years ago

      @@nomanarshed Thank you sir

  • @mynameisjoejeans 1 year ago

    Great video, thank you. Is VECM or ARDL more appropriate for cointegrated variables with a mixture of I(0) and I(1), where 2 variables are endogenous, and one is exogenous? Thanks

    • @nomanarshed 1 year ago +1

      If there is more than one endogenous variable, VECM is the better way to handle the model. In ARDL you can only assume one dependent variable in the long run.

    • @mynameisjoejeans 1 year ago

      ​@@nomanarshed Excellent, thank you. I Thought that the lags could take care of endogeneity?

    • @nomanarshed 1 year ago +1

      @@mynameisjoejeans Yes, lags can take care of endogeneity, but that makes the model assume there is one long-run dependent variable, while in reality there could be more than one. So if you are not using a system of equations that allows the flow of effects to define more than one dependent variable, the forecasting performance will be affected.

    • @mynameisjoejeans 1 year ago

      @@nomanarshed Brilliant answer, thank you very much for your help.

  • @geetanjali3436 3 years ago

    Sir, I have run the VECM residual diagnostics, but my model shows non-normal and heteroskedastic residuals. Is there a solution for this? I am already taking my variables in natural log form. What can I do about these problems? Please reply.

    • @nomanarshed 1 year ago

      Add a structural break, remove outliers, or use Quantile ARDL.

  • @geetanjali3436 3 years ago

    You said to apply the central limit theorem. Sir, please explain it.

    • @nomanarshed 3 years ago

      It means that when the sample is above 30 observations, repeated sampling makes the sampling distribution of the regression estimates approximately normal. See the section on sampling distributions in any statistics book.
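A small simulation of this point (Python/numpy, illustrative only): even for a clearly non-normal population, the sampling distribution of the mean becomes roughly symmetric and bell-shaped once the sample size is around 30 or more, and the approximation improves as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # clearly skewed, non-normal population

# Draw 5,000 repeated samples of size n and look at the skewness of the sample means:
# it shrinks toward 0 (the normal value) as n increases.
for n in (5, 30, 200):
    means = rng.choice(population, size=(5_000, n)).mean(axis=1)
    print(n, "skewness of sample means:", round(stats.skew(means), 3))
```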

    • @gunndohpark5517 1 year ago

      @@nomanarshed I don't think only 30 obs are enough

    • @nomanarshed 1 year ago

      @@gunndohpark5517 Definitely, the more the merrier. But a minimum of 30 is needed for the central limit theorem.