SEM Series (2016) 7. Setting Up Causal Model

  • Published: 9 Sep 2024

Comments • 168

  • @thejasperjohnson
    @thejasperjohnson 8 years ago +2

    Thanks so much for uploading this new series! Your explanations are so clear and understandable, and the fact you take time to share this and help others is greatly appreciated by many.

  • @neenkunanansak9302
    @neenkunanansak9302 4 years ago +1

    Hi Dr Gaskin, thank you so much for the instructional video. It definitely saved my life.
    Once I created the causal model, I tried to test model fit. However, I got 0 degrees of freedom. I would then like to delete the insignificant paths, but they are my hypotheses, so I'm not sure what to do. Please advise.

    • @Gaskination
      @Gaskination  4 years ago +1

      With path models (that don't use latent factors), it is common to have zero (or very few) DF. This happens when all possible paths are estimated. This is also fine. If all paths are hypothesized, then keep them in. Do not delete them. You just have perfect fit, so fit is irrelevant.

  • @shazji786
    @shazji786 6 years ago

    Dear Dr. Gaskin, I am one of the silent learners and followers of your lessons. You have always been an inspiration to me and many like me in the field of social sciences.
    May I request your guidance on one of the problems I am facing? I have been working on antecedents of mobile shopping with the UTAUT-2 model combined with the constructs of trust and perceived risk, using a sample of international students. I wanted to check the impact of several exogenous variables (risk, trust, influence, structural assurance, facilitating conditions, etc.), along with age and gender, on behavioral intention towards use of mobile shopping. All the reliability and construct validity measures are OK, but I only introduced actual usage behavior at the end, because there was only one DV, the degrees of freedom were zero, and the model fit values were also ambiguous.
    So now the R-squared for behavioral intention is 0.78 and the regression effect from Behavioral_intention to actual usage is also good (0.32), but the real problem is that the R-squared value for actual usage is only 0.10, which is a matter of concern for me. Note that I have focused on expat students here in China, and they mostly use mobile shopping apps. The actual usage measure, adopted from Martins et al. (2014), is a categorical value recording actual usage of a mobile shopping app, ranging from once a year to several times a day. Only 13% of our users indicated use of mobile apps on a monthly basis, whereas the others indicated usage as shown in the table below.
    Kindly advise me on what might be the reason for such a low R-squared for actual use of the mobile app, while the model explains the intention to use mobile shopping so well.
    freq_of_usage_mob_shopping table
                       Frequency   Percent   Valid Percent   Cumulative Percent
    never used                 4       1.0             1.0                  1.0
    once in a year            16       3.8             3.8                  4.8
    once in 6 months          13       3.1             3.1                  7.9
    once in 3 months          24       5.7             5.7                 13.6
    every month              122      29.1            29.1                 42.7
    every week               100      23.9            23.9                 66.6
    every 4-5 days            31       7.4             7.4                 74.0
    every 2-3 days            27       6.4             6.4                 80.4
    every 1-2 days            17       4.1             4.1                 84.5
    every day                 36       8.6             8.6                 93.1
    many times a day          29       6.9             6.9                100.0
    Total                    419     100.0           100.0

  • @ylehchieng
    @ylehchieng 7 years ago +1

    Hi Dr Gaskin,
    I have a second-order factor in my model. Do I need to include 1st order factor in the path model with factor scores or just second order factor?
    Thanks.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    Researchers talk about two ways to test categorical variables:
    1. Creating dummy variables that produce as many new variables as there are classes in each category, and looking at the effect of each individually.
    2. Creating the new categorical dummy variables by 'Recoding into different variables'. One researcher used this method and created as many dummies as there are classes in each category, and another said that you should create one fewer dummy than classes in each category.
    What's the difference? Which method would you prefer, or find most understandable?

    • @Gaskination
      @Gaskination  2 years ago

      Correct. You must omit a reference category to avoid a singular (non-invertible) matrix. Then all coefficients for the included categories are relative to the reference category. So, if we treat the reference category as the baseline of zero effect (regardless of its actual effect), then if we observe a positive effect from another dummy (that belongs to that categorical variable), it is a stronger positive predictor than the reference category would be. If we observe a negative effect, it is a stronger negative predictor. If no effect, it is equivalent to the reference category.
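
      To illustrate the dummy-coding idea above, here is a minimal R sketch (hypothetical data and variable names, not from the video):

        # Hypothetical data: 4 education categories predicting an outcome.
        set.seed(1)
        df <- data.frame(
          education = sample(c("Primary", "Secondary", "Higher", "Graduation"),
                             100, replace = TRUE),
          intention = rnorm(100)
        )
        # k = 4 categories -> k - 1 = 3 dummies; "Graduation" is the omitted reference.
        df$edu_primary   <- ifelse(df$education == "Primary", 1, 0)
        df$edu_secondary <- ifelse(df$education == "Secondary", 1, 0)
        df$edu_higher    <- ifelse(df$education == "Higher", 1, 0)
        # Each coefficient is interpreted relative to the omitted "Graduation" group.
        summary(lm(intention ~ edu_primary + edu_secondary + edu_higher, data = df))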

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination What if there are only two groups in one category like GENDER. Then, we will have to create 2 dummy variables or 1 dummy variable?

    • @Gaskination
      @Gaskination  2 years ago +1

      @@abdulmoeed4661 Just one. The variable then becomes e.g., "Male" rather than "Gender". So, if the participant was male, replace the value with a 1. If female, replace with zero. Then all effects are for the male, relative to the female.

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination Thanks...

  • @user-xu7pg8fw9q
    @user-xu7pg8fw9q 1 month ago

    Dear Professor Gaskin, I believe my comment didn't get posted for some reason and I lost the video where I posted it, so here I am again.
    I have 4 independent questions:
    1) When we run an SEM model, the literature usually says that we should use standardized coefficients to see the difference between the influence of two IVs on one DV. I don't get how we can say they are statistically different or not, even if they look different. The first idea I had was to use Fisher's test to compare regression coefficients, but now I think it's a bad idea: the sample is the same, not two independent samples (Y = b0 + b1*x1 + b2*x2; I want to test if b1 - b2 = 0). I now think that maybe I should try something like nested models: unrestricted coefficients for model one, and restricted coefficients for model two where b1 = b2, and then I would compare the likelihood ratio statistics. Would that be an appropriate way to handle it?
    2) According to STROBE recommendations, a study should report the model with and without control variables to see if they influenced the analysis much. Here I also thought about Fisher's test, but now I think that's a bad idea too. Then I thought maybe a likelihood ratio test, but I also think it won't answer my question: it will say whether the two models are different, not whether the coefficients are different (Y = b0_1 + b1_1*x1 + b2_1*x2 versus Y = b0_2 + b1_2*x1; I want to test if b1_1 - b1_2 = 0). So I came across the article by Clogg and colleagues (Clogg, C. C., Petkova, E., & Haritou, A. (1995). Statistical Methods for Comparing Regression Coefficients Between Models. American Journal of Sociology, 100(5), 1261-1293. doi:10.1086/230638). They have a formula for multiple regression models that requires the sigmas for the computation, and I am not sure where to take those from the output in R when using the lavaan or semTools package. Is this the right way to handle it? If yes, where do I take those sigmas from?
    3) What if I want to test the same sample with the same model but in 2 different conditions? Do I understand correctly that this is the domain of longitudinal SEM, or something like repeated-measures SEM?
    4) The last question: when we use a common method factor, the correlations between the variables change, and we should report how much they change. I assume we could also run a test here; I thought about Fisher's z-test for correlations. However, again, we have the same sample, not two independent samples, and these are not just correlations, but correlations computed as part of a CFA. What should I do in this case?
    Thank you!

    • @Gaskination
      @Gaskination  1 month ago

      1. You can just do a chi-square difference test where one model is the unrestricted model (just the normal one) and the other constrains b1 to equal b2 (see the sketch after this list).
      2. A chi-square test can test whether models are different or whether coefficients are different. To test differences in coefficients, refer to my above answer. To test differences in models, just run both unrestricted models (but with and without controls). However, usually we're more interested in how adding controls changes R-square and path coefficients.
      3. Sounds like repeated measures.
      4. I would just report the change in correlation, and then wait to see if a reviewer requests more than that, and if so, ask what they recommend.
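
      To illustrate answer 1 above, a minimal lavaan sketch (hypothetical variable names; the questioner mentioned lavaan) could look like this:

        library(lavaan)

        # Unrestricted model vs. a model constraining the two coefficients to be equal.
        # Note: the constraint compares unstandardized coefficients, so x1 and x2
        # should be on comparable scales (or be standardized beforehand).
        m_free  <- ' y ~ b1*x1 + b2*x2 '
        m_equal <- ' y ~ b*x1 + b*x2 '

        fit_free  <- sem(m_free,  data = df)
        fit_equal <- sem(m_equal, data = df)

        anova(fit_free, fit_equal)                      # chi-square difference test (1 df)
        lavTestWald(fit_free, constraints = "b1 == b2") # equivalent Wald test of b1 = b2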

    • @user-xu7pg8fw9q
      @user-xu7pg8fw9q 1 month ago

      @@Gaskination Thank you a lot. I ended up using a Wald chi-square for the first one, and nested models with R-square and path coefficient changes for the second one. Although, for the second one, I am a bit confused about how to represent a nested moderation model.

  • @abdulmoeed4661
    @abdulmoeed4661 1 year ago

    What is the main difference between "Common Method Bias" in CFA and "Multicollinearity-VIF" in Structural Model as both are assessing the relationship between independent and dependent variables?

    • @Gaskination
      @Gaskination  1 year ago

      Multicollinearity is one way to test CMB (here is a video about it: ruclips.net/video/pp-2dKCFrWo/видео.html). Multicollinearity is a potential manifestation of CMB. CMB is just a latent inflation of shared variance due to a common source (such as social desirability bias). Since multicollinearity is a form of high shared variance, CMB and multicollinearity often occur together.
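
      As a rough illustration of checking multicollinearity among composite predictors (hypothetical variable names; assumes the car package is installed):

        library(car)  # provides vif()

        # High VIFs (e.g., well above 3-5) signal heavily shared variance among
        # predictors, which can be one symptom of common method bias.
        fit <- lm(intention ~ useful + joy + info_acq, data = df)
        vif(fit)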

  • @user-xu7pg8fw9q
    @user-xu7pg8fw9q 1 month ago

    Dear Prof. Gaskin, I have a problem with justifying why my model is preferable to an alternative model. I understand that, given that the difference in df is 0, the alternative model's fit is nominally better, but is it really different enough to say my model is not at least equivalent? (I changed the position of two elements in the model; all covariates/controls are also included when calculating model fit.)
    My model: CMIN=40.613, DF=13 ,CMIN/DF=3.124, CFI=0.945, RMSEA=0.063, 90%CI RMSEA [0.042;0.085], SRMR=0.041.
    Alternative model: CMIN=39.453, DF=13 ,CMIN/DF=3.03, CFI=0.948, RMSEA=0.061, 90%CI RMSEA [0.040;0.084], SRMR=0.041.
    Do I need to report the results of both? Wouldn't it be a bit confusing given that I justify my model as favorable?

    • @Gaskination
      @Gaskination  1 month ago

      The alternative model does not significantly improve model fit. Unless the alternative model is based on some existing theory, I wouldn't even bring it up. If you need to justify it, you can use the delta cfi method: chatgpt.com/share/88bd9647-9703-4aa9-8e20-81e580272c0e
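
      A minimal sketch of the delta-CFI comparison in lavaan (model_a and model_b are placeholders for the two model syntaxes):

        library(lavaan)

        fit_a <- sem(model_a, data = df)  # hypothesized model
        fit_b <- sem(model_b, data = df)  # alternative model

        # Differences in CFI smaller than about .01 are conventionally treated as
        # negligible (a convention borrowed from measurement-invariance testing).
        fitMeasures(fit_a, "cfi") - fitMeasures(fit_b, "cfi")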

  • @abdulmoeed4661
    @abdulmoeed4661 1 year ago

    How does the reference category work with the other categories in a categorical variable? E.g., there are four categories in Education: 1. Primary 2. Secondary 3. Higher 4. Graduation.
    Let's take 'Graduation' as the reference category. The significance of all other categories will be assessed relative to the reference category. Is there any need to establish the significance of the reference category itself? Or does it mean the 'reference' category has been allotted a weight of '1' and all others are compared with it? What is your interpretation?

    • @Gaskination
      @Gaskination  1 year ago

      Check out slides 25 and 26 here: health.ucdavis.edu/ctsc/area/Resource_Library/documents/logistic-regression-intro-Feb-2021.pdf

    • @abdulmoeed4661
      @abdulmoeed4661 1 year ago

      @@Gaskination Thanks...Sir

  • @spordipsuhholoogia7442
    @spordipsuhholoogia7442 8 years ago

    Dear Mr. Gaskin,
    I'm trying to set up a similar causal model by following your instructions (with my own dataset), but I'm facing the following problem: "The model is probably unidentified. In order to achieve identifiability, it will probably be necessary to impose 6 additional constraints". Adding those 6 additional constraints makes my model meaningless. What am I missing? I'm using AMOS 20.0.

  • @BaluDerBaer933
    @BaluDerBaer933 9 months ago

    Why did you use residual errors for the observed variables (Useful, InfoAcq, DecQual, (Joy))? Aren't they only to be added to latent variables?
    Thanks a lot!

    • @Gaskination
      @Gaskination  9 months ago +1

      Residuals must be placed on all endogenous variables (anything being predicted has error associated with it).

    • @BaluDerBaer933
      @BaluDerBaer933 9 months ago

      @Gaskination Great to know, thanks a lot!

  • @user-fy7rn9jh5m
    @user-fy7rn9jh5m 1 year ago

    At 11:36 you mentioned that the effect of age on decision quality is significant at the 90 percent confidence level, where the p-value was .081 (over .05, which is usually described as not significant). If you see that as significant, what is the significance threshold that you go by? If there's any reference that you use, that would also be very helpful. Thank you very much.

    • @Gaskination
      @Gaskination  1 year ago

      The p-value threshold is just the confidence level subtracted from one. So, if I desire 95% confidence (i.e., 0.95), then my p-value threshold will be 1.00-0.95=0.05. If I desire a 90% confidence level, then I must have a p-value less than 0.10 (1.00-0.90).

    • @user-fy7rn9jh5m
      @user-fy7rn9jh5m 1 year ago

      @@Gaskination Thank you. So you are saying that it is up to the researcher to accept a sufficient level of significance between .10 and .05 (p-value). Thank you again.

    • @Gaskination
      @Gaskination  1 year ago +1

      @@user-fy7rn9jh5m Correct. The researcher chooses the level of significance. However, there are established conventions/norms. Most journals would recommend 95% confidence in well-established areas, and tolerate 90% in more exploratory areas.

    • @user-fy7rn9jh5m
      @user-fy7rn9jh5m 1 year ago

      @@Gaskination Thank you!

  • @mrs.vprasannath9304
    @mrs.vprasannath9304 2 years ago

    Hi James, thanks for your wonderful explanation. In this model, instead of running it with latent variables, you have run it with their composite variables.
    1. Is running SEM like the above the same as running it with latent variables?
    2. In order to calculate these composites, do we have to add only the items that loaded together in the EFA?
    3. Usually EFA and CFA should not be run on the same data. What is your suggestion if I have a dataset of nearly 320 cases?
    4. In the SEM paper, what parameters or results should we report (do we have to report model fit, the C.R. values of the final model)? Thanks in advance.

    • @Gaskination
      @Gaskination  2 years ago +2

      1. It will usually result in very similar estimates, but not identical. Occasionally the estimates can be quite different. In such cases, the latent model is considered more accurate (because it includes measurement error).
      2. To create these, just do it from the CFA with just the latent factors. If you have other variables, like income, gender, job position, etc., then just leave those out of the model while imputing the factor scores (ruclips.net/video/dsOS9tQjxW8/видео.html). A sketch of the same idea appears after this list.
      3. It is best to run it on separate data, but it is common to run it on the same data. You can also take a random sample for each.
      4. Look at the articles in the journal you are targeting. Report what they report. It is common to report model fit, factor validities, and reliability.
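
      A lavaan-based sketch of the factor-score idea in answer 2 (hypothetical item names; AMOS users would use the data imputation tool shown in the linked video instead):

        library(lavaan)

        # CFA with only the latent factors; other observed variables stay out of this step.
        cfa_model <- '
          Joy    =~ joy1 + joy2 + joy3
          Useful =~ use1 + use2 + use3
        '
        fit <- cfa(cfa_model, data = df)

        # Factor scores (one column per factor), appended to the raw data so they can be
        # used as observed composites in a path model (assumes complete cases).
        scores <- lavPredict(fit)
        df_imputed <- cbind(df, scores)
        head(df_imputed)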

    • @mrs.vprasannath9304
      @mrs.vprasannath9304 2 years ago

      @@Gaskination Thank you so much. You have videos for almost all in SEM. How great you are. You are a GURU of SEM teaching. Thanks a lot, James.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    There is a difference in model fit and regression weights when doing SEM with 1) a path model from imputed factor scores and 2) a structural model with the observable items.
    All model fit values are good in 2) the structural model with observable items, but there is an RMSEA issue in 1) the path model from imputation.
    What if I want to report the structural model in the results? Can I use the structural model with observable items? How can I move forward, and with what approach? Thanks, waiting for a valuable reply.

    • @Gaskination
      @Gaskination  2 years ago +1

      The model fit will differ because there are many more parameters in a latent model (so, higher chi-square and df). To improve RMSEA, you might consider omitting unnecessary and non-significant structural paths in your path model. This will create additional df, which will reduce the RMSEA. If that is not an option, you can report that all fit metrics are good except RMSEA, which is likely due to the low df inherent in a path model.

  • @KousarSadeghzadeh
    @KousarSadeghzadeh 3 years ago

    Thank you, James. Is it alright to find that some of our control variables (e.g., four age groups, 16-49) have a significant effect on the dependent variable, while age 50-59 is insignificant? And how do we interpret the significant effect of control variables? The general idea is that we control for them to account for their effect, right? But then how do we interpret a significant effect of a control variable?

    • @Gaskination
      @Gaskination  3 years ago +1

      Yes, this is fine. Since these are categorical, you just need to interpret them in relation to the reference category (whichever category was left out). So, a significant positive effect means that there is a stronger positive effect for that age group than for the reference group. The opposite holds for a negative significant effect. If there is no significant effect, then that group is not different from the reference group.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    I am testing the categorical variables on the latent dependent constructs.
    Do we need to insert all the classes of a specific categorical variable simultaneously to assess their effect on the latent dependent variables, or insert them one by one to see their effects?
    There is a difference in results when testing only one class of a categorical variable versus multiple classes simultaneously. Waiting for a reply. Thanks.

    • @Gaskination
      @Gaskination  2 years ago +1

      You should insert them altogether, minus one reference category. So, for example, if your categorical predictor has three values, you would insert two binary ("dummy") variables. These are then interpreted in reference to the omitted category.

  • @abdulmoeed4661
    @abdulmoeed4661 1 year ago

    Can you please tell me which unique entries are considered, along with the number of parameters to be estimated, to calculate the degrees of freedom in a FULL structural model?
    D.F. = 'p(p+1)/2' minus 'number of parameters to be estimated',
    where p(p+1)/2 = unique entries and parameters = paths + variances + covariances + error variances.
    But the question is: do we count 'paths' for both observable items and latent factors? And what are the numbers of variances and error variances? I am confused.

    • @Gaskination
      @Gaskination  1 year ago

      Fortunately, DF is calculated automatically. It is simply the number of parameters not estimated. This is simple in path model. It is just the number of omitted connections between variables. For example, in the video above, around the 1:58 mark (before adding controls), there are five paths not included that we could have included. Thus, there would be 5 DF. Adding controls creates the opportunity for more DF. In a full latent SEM, this is much more complicated since we now have latent factors and observed items. But again, it is simply the parameters (Paths, Variances, Covariances, Error Variances) that could have been specified, but were not.
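
      A hypothetical worked example of that formula: with p = 6 observed variables, the covariance matrix has p(p+1)/2 = 6*7/2 = 21 unique entries. In a path model with 3 exogenous and 3 causally ordered endogenous variables, suppose we estimate 3 exogenous variances, 3 exogenous covariances, 3 residual (error) variances, and 7 of the 12 possible directed paths: that is 16 parameters, so DF = 21 - 16 = 5, which matches counting the 5 omitted paths as described above.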

  • @prayaansood-akidwhosnotkid4334
    @prayaansood-akidwhosnotkid4334 4 years ago

    Dear James Greetings..thanks a lot for saving and helping so many scholars..we all owe you a lot for creating these series, moreover you solve individual queries as well.🙏
    I have a doubt too.
    Model fit for the CFA was acceptable; however, it is quite bad when performing the SEM. Now my question is: A->B->C gives me a bad model fit, but the model fit improves drastically if I also regress A->C (with an ->).
    Pls advise if this is advisable/acceptable?

    • @Gaskination
      @Gaskination  4 years ago +1

      This is logical and appropriate. If B is the mediator between A and C, then it implies there should be an effect between A and C. So, adding this effect is logical.

    • @prayaansood-akidwhosnotkid4334
      @prayaansood-akidwhosnotkid4334 4 years ago

      @@Gaskination thanks a lot :-)

  • @aymabdullah
    @aymabdullah 7 years ago

    Hello sir, would be really grateful if you could answer some questions:
    1) Will the results from a causal model establish the variable/factor under inspection as a causal factor?
    2) There is much literature that seriously doubts the capacity of SEM to establish cause-and-effect relationships. In the midst of these disagreements, how reliable is the use of SEM?
    3) In a risk factor analysis, where we try determining the relationship between variables A,B and C with incidences of disease X. Will running SEM establish A, B and C as causal variables?

    • @Gaskination
      @Gaskination  7 years ago

      1) The causal nature of the relationship should be established theoretically. The regression weight and R-square do offer some evidence that the relationship exists.
      2) There are different schools of thought on this. For every paper that says SEM is flawed, there are another 1000 papers using it to establish and validate theories. For now, most journals accept SEM as a useful tool for assessing causal theories.
      3) SEM can reveal the relationships, but again, theory should be used to establish cause. Cause is also better assessed in an experimental setting.

  • @nurannizahishak5646
    @nurannizahishak5646 7 years ago

    Hi Prof, I have all latent variables (after imputation) in my model. Can I covary one of my error terms with one of the independent latent variables? It was suggested by my modification indices. Thanks.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    Sir, all the demographic categorical variables are insignificant except 'Gender'. Should I still include the categories that are insignificant in further testing when comparing different groups within the same categorical variables via dummy coding?

    • @Gaskination
      @Gaskination  2 years ago +1

      There is debate on this topic. My recommendation is to always include all of the relevant variables, even if they are non-significant. This allows you to maintain nomological validity.

  • @BunnyBooSHT
    @BunnyBooSHT 6 years ago

    Dear Dr. Gaskin, thank you for this generous resource that's been extremely helpful.
    I'm currently trying to do a mediation analysis with my dataset. One of my variables is derived from the sum of its 2 subscales divided by the absolute value of their difference (based on instructions from the authors that came up with the scale). I want to include both subscales as well as the overall scale score in the model. My questions are:
    1) Would that be a sound approach?
    If so:
    2) Should I include the individual subscales with the overall scale score as a latent factor? But no other variables in the model will be latent, so would that be a sound approach? Or
    3) Should I include the 2 subscales as mediators of the overall-scale variable?
    I guess theoretically it wouldn't make sense for the overall scale to be a mediator of the two subscales.
    Any help or suggestion would be gratefully appreciated. Thank you so much in advance.

  • @subhrosarkar4366
    @subhrosarkar4366 8 years ago

    Dear Dr. Gaskin,
    I have a model with three independent and one dependent variables. I want to check how much of the variance in the dependent variable is being explained by each of the independent variables (i.e contribution of each independent variable to the R-square of dependent variable). Please suggest.

    • @Gaskination
      @Gaskination  8 years ago

      You would have to look at them separately. Or, you can do them in separate blocks in SPSS.
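
      A minimal sketch of the blockwise idea in R (hypothetical variable names): enter the IVs in steps and look at the increment in R-square at each step.

        fit1 <- lm(dv ~ iv1, data = df)
        fit2 <- lm(dv ~ iv1 + iv2, data = df)
        fit3 <- lm(dv ~ iv1 + iv2 + iv3, data = df)

        summary(fit1)$r.squared
        summary(fit2)$r.squared - summary(fit1)$r.squared  # increment from adding iv2
        summary(fit3)$r.squared - summary(fit2)$r.squared  # increment from adding iv3
        anova(fit1, fit2, fit3)                            # F-tests for each block

      Note that the increments depend on the order of entry, because the IVs share variance.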

  • @linaw.3353
    @linaw.3353 3 years ago

    Hi Dr. Gaskin, can you tell me why I have to covary the exogenous variables? Can you give me a source for this? Without doing this my model fit for the SEM is super bad. Thank you so much!

    • @Gaskination
      @Gaskination  3 years ago

      Covarying exogenous variables is a standard assumption of covariance-based SEM. If it is stated anywhere, it would probably be in the Byrne AMOS book: Byrne, B. M. (2009). Structural equation modeling with AMOS: basic concepts, applications, and programming (2nd ed.). Abingdon-on-Thames: Routledge.
      The reason the model fit is so bad without doing this is because model fit is a measure of how the proposed model matches the covariance matrix. When nothing is covaried, then it doesn't match it very well.

  • @hebaelsayedelbdawyahmedhas8529
    @hebaelsayedelbdawyahmedhas8529 2 years ago

    Hello James,
    Thanks for your amazing videos. they help me so much.
    I have two questions regarding testing the structural model. The standardized factor loading of one of the independent factors was more than one (1.473), and it was significant too.
    1. Does this mean that I have a problem in my model, or is this normal?
    2. Do you think that I will face a problem with reviewers when I write this result in the paper?
    Thank you in advance

    • @Gaskination
      @Gaskination  2 years ago

      This is called a Heywood case. This can happen for many reasons. Here is a post about it: gaskination.com/forum/discussion/118/can-a-standardized-regression-weight-exceed-1-00
      The latter half of the post is about Heywood cases in structural models.

  • @polyvore2908
    @polyvore2908 8 years ago

    Aren't you deleting the wrong relationship? You say age-decision quality, but you delete age-infoacq. Would that have changed the model fit?

    • @Gaskination
      @Gaskination  8 years ago

      +Polyvore oops! You're right. Oh well... Yes, that would have changed things slightly. Good catch.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    I have made a causal model. All the model fit values are good except an RMSEA of 0.107. So, when I include a Gender categorical variable in the model and set its regression path to the dependent variable, all model fit values are satisfied. When I include Gender as a variable, model fit is good. What if I want to test a multi-group gender comparison? What approach should I take? Should I leave that Gender variable within the model, or what can I do?

    • @Gaskination
      @Gaskination  2 years ago +1

      If using gender as a grouping variable, then remove it as a predictor. If you keep it as a predictor in the model, then the multigroup will fail (due to zero variance on a predictor - gender - for both groups).
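
      A minimal lavaan sketch of using gender as a grouping variable instead of a predictor (hypothetical model and variable names):

        library(lavaan)

        model <- '
          intention ~ useful + joy
          usage     ~ intention
        '
        fit_free        <- sem(model, data = df, group = "gender")
        fit_constrained <- sem(model, data = df, group = "gender",
                               group.equal = "regressions")

        # If constraining the paths to be equal across groups significantly worsens fit,
        # at least one path differs between the gender groups.
        anova(fit_free, fit_constrained)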

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago +1

      @@Gaskination Sir, I have not used it in the multigroup comparison. I am talking about using Gender as a predictor in the structural model, because as a predictor it helps achieve good model fit.

  • @user-hq7cc7yg1f
    @user-hq7cc7yg1f 4 years ago

    Thank you for the excellent video. A question that I have though is why AMOS and other SEM software (e.g., Mplus) require/estimate covariances among exogenous variables by default? Is an underlying SEM assumption the reason for this? Shouldn't theory and logic guide drawing covariances among exogenous variables?
    I have a model where work experience and educational level are included as controls. Why do I have to covary them with each other and especially with other exogenous variables when they are not theoretically related?
    If this is indeed because of an underlying methodological reason, could you please introduce a source where I can read more about the issue?
    I am so grateful.

    • @Gaskination
      @Gaskination  4 years ago

      You are correct that the underlying reason is methodological. Mplus and AMOS are covariance-based SEM applications. This means they rely on the covariance matrix for most of their estimates. Part of this method includes the covariance of all non-predicted variables. As for a reference, the Byrne AMOS book is probably good, or the Muthen Mplus book.

    • @user-hq7cc7yg1f
      @user-hq7cc7yg1f 4 years ago

      @@Gaskination Thank you so much for your response. it clarified the issue for me.

  • @ibrahimarsal2805
    @ibrahimarsal2805 8 years ago

    Is the variable, for example "Joy", the total score of "Joy1" "Joy2" "Joy3" ... "Joy7", or the mean score, or something else? I couldn't understand it. Can you please tell me what it is?

    • @Gaskination
      @Gaskination  8 years ago +2

      See the previous video (#6) in the playlist. It shows how to create factor scores. These are essentially weighted averages. Some versions of amos also mean-center them.

  • @hienho2266
    @hienho2266 7 years ago

    Dear Dr. Gaskin,
    Thank you so much for your previous response to me. I have 2 more questions:
    1) I am setting up a causal model after the CFA has been finished. At the end of the CFA, unfortunately, I have to retain the CLF. In this case, would I also put the CLF in the causal model?
    2) I want to test 8 latent variables (factors) together with 1 observed variable (a single item). Is this possible in SEM? Thank you!

    • @Gaskination
      @Gaskination  7 years ago

      1. if using a latent model, retain the CLF in the causal model. If using a path model with factor scores, then the effect of the CLF is calculated into the factor scores, so you can exclude the CLF in the causal model.
      2. That's fine. Just include the observed variable on its own (not as part of a latent factor).

    • @hienho2266
      @hienho2266 7 years ago

      Thank you very much Dr. Gaskin. Have a good day.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    Do I need to change the 'Measure' type of the imputed latent variables or leave it blank, when the variables are imported from the CFA model by the imputation method? The measure type shows as 'unknown' in the new imputed SPSS file.

    • @Gaskination
      @Gaskination  2 years ago

      It is good to change it if you are going to do further analysis in SPSS. However, if you are going to do the analysis outside of SPSS, then it doesn't matter.

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination Thanks

  • @maheshankanakaratne9184
    @maheshankanakaratne9184 7 years ago

    Hi James, many thanks for these great videos. Really helped me a lot.
    I am running some tests and need to include 4 controls. I was wondering if I could justify the inclusion/exclusion of controls by running some T-Tests and ANOVAs using these control variables and the IVs/DVs. Not sure if this makes sense but my argument is that for example, if there is no significant difference in one IV or DV between or amongst groups within a control variable, then I do not need to impose that control on the said IV or DV.
    Please advise. Thank you!

    • @Gaskination
      @Gaskination  7 years ago

      That is a fine approach. You can also simply include them so that you can say that you accounted for them, even though they have no significant effect.

    • @maheshankanakaratne9184
      @maheshankanakaratne9184 7 years ago

      Many thanks for your swift response. Much appreciated!

  • @dhruvdutta9891
    @dhruvdutta9891 5 years ago

    Dear James, how was the model drawn without observed variables in AMOS?

    • @Gaskination
      @Gaskination  5 years ago

      If you don't have observed variables, then you cannot create a model. If you do have observed variables, but you want them to be part of latent factors, then you can create the latent factors (as shown here: ruclips.net/video/JkZGWUUjdLg/видео.html), and then just use causal arrows, rather than covariance arrows (as shown here: ruclips.net/video/n-ULF6BGVw0/видео.html). Hope this helps.

  • @ruthdarshini3642
    @ruthdarshini3642 6 years ago

    Dear Dr.James,
    I am building my model with 2 IVs, one mediating variable, and one DV. I used the data imputation method and set up the causal model, but I have two concerns:
    1. Why are the variables not in ellipses? Is that a problem?
    2. The indirect effects estimand has syntax errors and doesn't run.
    Your advice would help me finish my PhD analysis.
    Thank you in advance.
    Ruth

    • @Gaskination
      @Gaskination  6 years ago

      1. Variables that are columns in your dataset are represented as rectangles in AMOS. Variables that are only measured indirectly through multiple indicators are represented as ellipses in AMOS.
      2. The indirect effects estimand is only to be used with the specific indirect effects plugin. Since you have only one mediator, you don't need the estimand. Just do a bootstrap while estimating indirect effects.
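
      A minimal lavaan sketch of answer 2, a single mediator with a bootstrapped indirect effect (hypothetical variable names):

        library(lavaan)

        model <- '
          m ~ a*x
          y ~ b*m + c*x
          indirect := a*b
          total    := a*b + c
        '
        fit <- sem(model, data = df, se = "bootstrap", bootstrap = 2000)
        parameterEstimates(fit, boot.ci.type = "perc")  # percentile bootstrap CIs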

  • @OPaixao13
    @OPaixao13 7 years ago

    Hi again James,
    3 questions:
    1 - Is it right (according to most literature) to assume significance of the path if the p-value is between .05 and 1.0?
    2 - Should we covary residuals rather than create a new path? I ask this because we saw that Joy could predict Useful, but you didn't draw the unidirectional path.
    3 - When we are correcting our model according to the MI, should we generally start with the bigger values?
    Thank you in advance,
    Regards

    • @Gaskination
      @Gaskination  7 years ago +1

      1. No. Most literature would argue that a p-value less than 0.05 is significant, but anything between 0.05 and 1.0 is not significant.
      2. In this case I covaried the errors instead of drawing a regression line because these are both mediators and (according to Dave Kenny's arguments) can therefore be justifiably covaried. However, you can just as easily draw a regression line. These have different implications though.
      3. Yes.

    • @OPaixao13
      @OPaixao13 7 years ago

      The reason for question 1 is that, sometimes, when I delete a non-significant path, the RMSEA value increases (but it is still below .05).
      If the fit values stay within the acceptable range, should I just proceed with this process of deleting non-significant paths?

    • @Gaskination
      @Gaskination  7 years ago

      Deleting paths is completely optional. It increases degrees of freedom (which is why RMSEA improves). If you have enough DF, then no need to remove non-significant paths.

    • @OPaixao13
      @OPaixao13 7 years ago

      Terrific! Thank you very much!

  • @HealthbeautyluckyshahBlogspot
    @HealthbeautyluckyshahBlogspot 3 years ago

    Hi James, I am having one issue when building the causal model. When I draw residuals on my DVs and try to run the estimates, it says "The observed variable, e1, is represented by an ellipse in the path diagram". I don't know why this is coming up. I have covaried my IVs. What am I doing wrong?

    • @Gaskination
      @Gaskination  3 years ago

      It means that you have a variable in your dataset called e1. You need to rename those variables to not be the same as your residual (error) variables.

    • @HealthbeautyluckyshahBlogspot
      @HealthbeautyluckyshahBlogspot 3 years ago

      @@Gaskination Oh, I will check. Thanks for your help. Another quick question: can we run the SEM directly after the CFA without checking validity?

    • @Gaskination
      @Gaskination  3 years ago

      ​@@HealthbeautyluckyshahBlogspot Sorry for the delayed reply. I've been busy trying to fix the StatWiki. It is fixed now though. As for your question, it is always prudent to check validity before conducting a causal model. Otherwise, how can you have confidence that your constructs are being measured validly?

    • @HealthbeautyluckyshahBlogspot
      @HealthbeautyluckyshahBlogspot 3 years ago

      @@Gaskination If there is a validity issue in one of the constructs, can we remove it and rerun the analysis? There is no high correlation, but one construct still has a validity issue: its AVE is below .5.

    • @Gaskination
      @Gaskination  3 years ago

      @@HealthbeautyluckyshahBlogspot Removing a construct should be done with great caution, and only if including the variable invalidates findings (in this case, due to poor measurement). I would first judge whether the factor is actually reflective (and thus subject to AVE constraints).

  • @sonjaremetic5450
    @sonjaremetic5450 8 years ago

    Dear Mr. Gaskin, how would you test for endogeneity in this case? Thank you, Sonja

    • @Gaskination
      @Gaskination  8 years ago

      +Sonja Remetic I'm not sure, but I know Dr. Antonokis has a few videos on this topic.

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    At 44 seconds into the video, when you pull in the variables for the causal model, how did you make a single combined variable like 'InfoAcq' (and the other variables), given that there are a number of observable items for each latent variable? How did you create this single variable for each factor in SPSS Statistics?

    • @Gaskination
      @Gaskination  2 years ago

      To draw latent variables with multiple items, use the icon on the top that looks like a candelabra. I walk through how to do this in another video called: model fit during confirmatory factor analysis

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination I want to draw a single variable as you drew in the video at 44 seconds, not the observable items of the latent variable. How did you combine the observable items so that they show as a single construct in the dataset input file?

    • @Gaskination
      @Gaskination  2 years ago +1

      @@abdulmoeed4661 I think you must mean you want to draw a composite variable or factor score (not latent). I used this approach to create those factor scores: ruclips.net/video/dsOS9tQjxW8/видео.html

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination Thanks. Yeah, this is what I was looking for. One more question: is it necessary to impute factor scores for the final structural model, or should we use the latent variables along with their observable items for the final structural model?

    • @Gaskination
      @Gaskination  2 years ago +1

      @@abdulmoeed4661 Latent is considered more precise. However, latent becomes very cumbersome when you want to do interactions.

  • @areyougoingtoeatthatbanana2095
    @areyougoingtoeatthatbanana2095 6 years ago

    Quick question... if the modification indices show a high covariance between the errors of two independent variables, may I introduce that covariance into the model? Is that allowed, from a practical point of view? Thank you!

  • @abdulmoeed4661
    @abdulmoeed4661 2 years ago

    How can I assess the effect of observable items along with latent variables in structural modeling? We used the imputed factor scores to conduct the SEM after the CFA. What if I want to assess the results for the observable items of the latent variables? What approach should I use? Thanks, waiting for a reply.

    • @Gaskination
      @Gaskination  2 years ago

      For the CFA, only include measures for latent factors. Then you can bring the observed measures into the structural model after the CFA.

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination Can I build a structural model without imputing factor scores, by including the observable items with the latent variables right after the CFA, i.e., without a path model?

    • @Gaskination
      @Gaskination  2 years ago +1

      @@abdulmoeed4661 Yes, you can use a latent SEM. That approach is considered more robust and rigorous anyway.

    • @abdulmoeed4661
      @abdulmoeed4661 2 years ago

      @@Gaskination Thanks for the clarification...

  • @philmichel3
    @philmichel3 8 years ago

    Thanks a lot for making these instructional videos! I've been struggling with my model fit metrics and I have a feeling that no matter what I do my RMSEA won't ever go below .15 - is this one of those times where a small sample size trumps everything? I've only got 80 or so cases for my research. Thankfully mediation analyses can salvage my research!

    • @Gaskination
      @Gaskination  8 years ago

      Small sample size can hurt model fit. You might list this as a limitation and just move on, especially if the other model fit metrics are alright.

  • @lilichili
    @lilichili 8 years ago

    Dear James, thank you for the videos! So, I have a question here. Can you covary, for example, e4 and e3? That is, two error variables not from the same level in the model?
    Also, I have never seen these error covariances reported in papers. Do you report them in the paper?

    • @Gaskination
      @Gaskination  8 years ago +2

      Here is a useful reference: Kenny, D.A. (2011) “Respecification of Latent Variable Models,” davidakenny.net/cm/respec.htm (provides justification for covarying error terms). If you end up covarying error terms, you should mention it as a minor note. Something like: "In order to achieve adequate model fit, we were required to covary the errors between CSE1 and Joy3".

  • @1983zil
    @1983zil 7 years ago

    Hi James, sorry, one more question: what is meant by CMIN in the multigroup analysis? Does it stand for chi-square? (I know chi-square is written as χ², so I am a bit confused by this term.) Thanks :)

    • @Gaskination
      @Gaskination  7 years ago

      Yes. CMIN is the chi-square.

    • @1983zil
      @1983zil 7 years ago

      Yes, thanks for the reply. Kind of you. We know that chi-square is used to assess the measurement model, but doesn't it also assess discriminant validity? Or does it assess discriminant validity while we report its value under model fit? Any comment, James? (Thanks in advance once again.) I would like to add your name in my PhD thesis as a big helper; I hope you won't mind.

    • @Gaskination
      @Gaskination  7 years ago

      There are some papers that use chi-square for assessing discriminant validity, but it is more common to use the square root of the AVE vs the correlations.
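
      A rough lavaan sketch of that square-root-of-AVE (Fornell-Larcker) check, computed by hand from a fitted CFA (cfa_model and df are placeholders):

        library(lavaan)

        fit  <- cfa(cfa_model, data = df)
        std  <- standardizedSolution(fit)
        load <- std[std$op == "=~", ]                   # standardized loadings
        ave  <- tapply(load$est.std^2, load$lhs, mean)  # AVE per factor
        sqrt(ave)                                       # should exceed...
        lavInspect(fit, "cor.lv")                       # ...the latent correlations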

  • @filzahussain1464
    @filzahussain1464 4 years ago

    Hi James. First of all, thanks very much for such valuable videos. I wanted to ask: while setting up a causal model, I found a relationship which is actually not in my model, but when I connect those variables and add the relation, my model fit is better. The variable is not a control variable; it is, you could say, like the relation between AtypUse and Joy, which was not theorized in the model. How do I report that, and is it OK to have such a relation? Waiting anxiously.

    • @Gaskination
      @Gaskination  4 years ago

      If such a relationship is required to meet minimum thresholds for goodness of fit, then it should be included. If it is not a logical relationship, then explain that there is likely some third unmeasured variable causing issues of endogeneity. If it is logical, just explain that this relationship was not considered a-priori, but that it makes sense.

    • @filzahussain1464
      @filzahussain1464 4 years ago

      @@Gaskination Thanks a bunch, James. Helped a lot.

  • @filzahussain1464
    @filzahussain1464 3 года назад

    Hi James, please can you tell me: if the critical ratio is negative but the p-value is significant, how should we consider the relationship? Please give a reference too.

    • @Gaskination
      @Gaskination  3 years ago

      critical ratios can be negative. It's just the unstandardized coefficient (which can be negative) divided by the standard error. So, it's perfectly normal. No citation necessary.

    • @filzahussain1464
      @filzahussain1464 3 years ago

      @@Gaskination Thanks, sir, for your valuable time, but according to Hair et al. (2010), if the critical ratio is less than 1.96 and the p-value is less than 0.05, the relationship is not considered significant. My latent constructs' relationship has negative critical ratios, but the p-value is less than 0.01. Should I call it a significant or an insignificant relationship?

    • @filzahussain1464
      @filzahussain1464 3 years ago

      Sorry p value is less than 0.001

    • @Gaskination
      @Gaskination  3 years ago

      @@filzahussain1464 For critical ratios, it is absolute value (unless it is one-tailed - but even then it is still significant). Trust me, a negative critical ratio with an absolute value greater than 1.96 is considered significant at the 0.05 level.
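
      A small numeric illustration of that point (hypothetical value): the critical ratio is the unstandardized estimate divided by its standard error, and the two-tailed p-value depends only on its absolute value.

        cr <- -2.40                      # hypothetical negative critical ratio
        p  <- 2 * (1 - pnorm(abs(cr)))   # two-tailed p; about 0.016, i.e., < 0.05
        p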

    • @filzahussain1464
      @filzahussain1464 3 years ago

      @@Gaskination thx alot sir

  • @1983zil
    @1983zil 7 years ago

    Hi James, I hope you are fine. I had a similar model fit issue. I went to the modification indices, found that there is a high relationship between two error terms (e2 and e3), covaried them, and then went to the model fit again and found that most of the values, like PClose and RMSEA, are 1.00 or completely missing. So is there a problem or something?

    • @Gaskination
      @Gaskination  7 years ago +1

      You probably have zero degrees of freedom. It is best to use a latent model instead if possible.

    • @1983zil
      @1983zil 7 years ago

      Yes thanks, I used the latent model, just to see the values. Thanks for your reply. Kind of you :-)

  • @roshankhadka5762
    @roshankhadka5762 7 years ago

    Hello James,
    When I run my Model in AMOS, it gives me output but doesn't show model with regression values. What should I do?

    • @Gaskination
      @Gaskination  7 years ago

      Make sure to click on the up arrow. This reveals the coefficients on the model.

    • @roshankhadka5762
      @roshankhadka5762 7 years ago

      Thank you. I have gone through the video of iteration that was very helpful.

  • @user-yx2bf3xj4o
    @user-yx2bf3xj4o 6 years ago

    Hi James,
    Many thanks for your amazing videos.
    I would be grateful if you could answer my question:
    When I run my path model (which has 5 independent factors and 1 dependent factor), it says that the chi-square is zero, the degrees of freedom are zero, CFI = 1, and the probability level cannot be computed.
    What does this mean, and how can I solve this problem to test my hypotheses?

    • @Gaskination
      @Gaskination  6 years ago

      This is because all possible parameters are accounted for. When there is only one DV and no mediator, then there will be zero DF unless you use latent factors. This is fine and you can just report that there are no DF because it is a fully identified model, therefore model fit is irrelevant (actually it is perfect).

    • @user-yx2bf3xj4o
      @user-yx2bf3xj4o 6 years ago

      Thanks sir for your kindly reply.
      I have some more questions:
      1- Can I use the structural model (with latent variables) to test my hypotheses instead of the path model, in order to have model fit indices? Will it be accepted by reviewers (especially because all papers in this area used path models)?
      2- Can I use multi-group analysis in the structural model (with latent variables) or not?
      3- If yes, are the steps for doing multi-group analysis in your videos the same for the structural model, or do they vary?

    • @Gaskination
      @Gaskination  6 years ago

      1. The latent model should be fine. No one should complain. It is more rigorous than path modeling anyway.
      2. Yes. I think my 2016 SEM Series playlist used a latent model: ruclips.net/video/w5ikoIgTIc0/видео.html
      3. Should be the same.

    • @user-yx2bf3xj4o
      @user-yx2bf3xj4o 6 years ago

      Many Thanks sir for your explanation.
      When I use the structural model (with latent variables and their indicators) to test my hypotheses, the model fit indices are the same as the model fit indices for the CFA of the measurement model.
      Here is the picture of the structural model (latent model):
      docs.google.com/forms/d/e/1FAIpQLSfvAARmldOmzTrO35QCTtd18zOBTiSrBzDNZO6uG0lc0mI6oA/viewform?usp=sf_link
      Here is the picture of the CFA:
      docs.google.com/forms/d/e/1FAIpQLSc8Hx2Da8cglsD3moISnjqG9MtXJ3MBnQO-PimXw5CZmqNKNQ/viewform?usp=sf_link
      1- Am I doing something wrong in the analysis?
      2- If not, how can I report the same model fit indices for both the CFA and the structural model?

    • @Gaskination
      @Gaskination  6 years ago

      This is perfectly normal if the df didn't change (which is common between CFA and causal model). You are essentially accounting for the same parameters, so the df and X2 don't change, which means the model fit measures also don't change. No problem there.

  • @user-yx2bf3xj4o
    @user-yx2bf3xj4o 6 years ago

    Hello sir,
    Many thanks for your videos. They help me a lot.
    Please, I have a simple question. If I have the hypothesis "X has a significant positive impact on Y", and the result was significant for this relationship but the effect was negative (not positive), what should I report for this hypothesis? Is it supported or rejected?

    • @Gaskination
      @Gaskination  6 years ago

      Not supported, unless the items were reverse-coded.

    • @user-yx2bf3xj4o
      @user-yx2bf3xj4o 6 years ago

      Thank you sir.

    • @user-yx2bf3xj4o
      @user-yx2bf3xj4o 6 years ago

      please, I have another two questions:
      1- I used the HTMT method to establish discriminant validity (because it is not achieved by comparing the AVE with the squared correlation), but someone advised me to also provide a standard zero-order correlation matrix of the constructs to support discriminant validity. How can I use this matrix to support discriminant validity, and what should I report about it?
      2- Is random sampling the correct sampling method if I use an online panel?
      Also, what is the correct sampling method if I send the questionnaire via email? Is it random sampling or convenience sampling?

    • @Gaskination
      @Gaskination  6 years ago

      1. I'm not sure. Perhaps they are referring to the criteria of having all zero-order correlations less than 0.900.
      2. Usually online surveys are not random, because it is hard to randomly assign people to take your survey unless you have some connection with a large body of participants already. It is more likely a convenience sample.

    • @user-yx2bf3xj4o
      @user-yx2bf3xj4o 5 years ago

      Many thanks Sir for your replies.
      Can the criterion of having all zero-order correlations less than 0.900 be used to demonstrate discriminant validity? If the answer is yes, what are the references that support this?

  • @ahmadusmanlive
    @ahmadusmanlive 7 years ago

    Hi James, thanks a lot for your videos. I am encountering various problems in my analysis, such as Heywood cases. Can I e-mail you my model, questionnaire, data, and AMOS files so that you can have a quick look at what I am doing wrong? Looking forward to your kind reply.

    • @Gaskination
      @Gaskination  7 years ago

      You can send it to me, but please be as concise as possible. I receive many more than a dozen similar emails each day, so it is nice when they are concise. Thanks! You can google my name to find my email.

    • @ahmadusmanlive
      @ahmadusmanlive 7 years ago

      Thanks, i have sent you an e-mail on your gmail id.

  • @kendramaxinereyes2918
    @kendramaxinereyes2918 8 months ago

    u sound like phil dunphy

  • @riadincfcb
    @riadincfcb 8 years ago

    Dear James, thanks for the video. I have a small question: do we have to covary all control variables with all the exogenous variables? Is the result wrong if I covary just the control variables among themselves and the exogenous variables among themselves?
    Thank you in advance.

    • @Gaskination
      @Gaskination  8 years ago +1

      There are different schools of thought on this. Some would argue that you should covary all exogenous variables (including controls) in a covariance-based SEM, but others would argue that if you achieve good model fit when they are not covaried, then it is a sufficiently accurate way to model those variables. I'm of the second school of thought.

    • @riadincfcb
      @riadincfcb 8 years ago

      Thanks a lot, James, for your reply; this helps (can you just help me back it up with literature, i.e., a reference for the second school?).
      I have another question regarding my findings: when a path shows a significant relationship but the sign is contradictory to theory, what does it mean? How can I interpret it? (Does it mean that the path is not significant?) (By the way, I tested it using normal linear regression and the sign was consistent with theory.)

    • @Gaskination
      @Gaskination  8 years ago

      I don't know if you'll find any paper that explicitly states that you can remove impotent controls, but you will find plenty of model fit papers that explain what model fit is, and this definition lends itself to supporting the argument to drop non-significant paths if the model fit remains good. As for the opposite sign on a path, that means it is not supported, but it is significant. That can happen for various reasons. Check the relationship in a correlation test to see if it is also negative there.

  • @p_2023
    @p_2023 1 year ago

    too fast... showoff