You can try to separate the two factors. Look at the loadings matrix to see which items load most strongly on the other factor, then try removing those items.
@@Gaskination Thank you for replying. Considering that the HTMT confidence interval lies between -1 and 1, should I let go of the construct, or should I argue that discriminant validity has been established?
Model fit issues can be due to many things, such as variable normality, factor validity, and sample size inadequacy. So, make sure you meet all these criteria first. Then, if there are still problems with model fit, check the modification indices to see if one particular item is causing the issue. If so, delete that item and check again.
Thank you very much. I checked all of what you suggested (variable normality and factor validity) and they were good! The model fit, but I found no significant relationships between any of the variables! However, the sample size is 105 and I have 8 variables. Is this sample not sufficient for PLS?
That's called a Heywood case. In SmartPLS, there's not much you can do. However, it implies there is some error happening in the measurement. So, make sure you're not including categorical variables or variables that are completely redundant.
@@Gaskination Can you please help me with the factors you have considered in the video? How do these factors relate to the complete model? I mean, are they all IVs, or is any of them a DV? Please explain the variables a little more.
@@2375nikhil The factors are CSE (computer self-efficacy), Innovativeness (with computers), and Skill Acquisition. These were the only three constructs being considered in the video. If I were running the factor analysis on my own model, I would include all latent factors during the factor analysis.
Dear Mr. Gaskin, I have one question. I have 2 independent latent variables and one dependent LV. When I run the "normal" bootstrap (with path weighting scheme), I get very significant T-statistics. When I run the same model as a "Consistent PLS" bootstrap, it comes up with 0.000 everywhere in the model. What does that indicate?
@@Gaskination Thanks a lot. Does that mean I should do the entire analysis with the normal algorithm, from identifying the factor loadings through running the algorithm and the bootstrap? Or do I use the PLS consistent algorithm to calculate the factor loadings, and take it from there with the normal algorithm?
@@TheShroogle Assuming all of your factors are reflective, if the PLSc works properly (i.e., you don't see any weird results) then go ahead and use it where you can. Otherwise, use the normal algorithm. If any of your factors are not reflective, then I recommend using the normal PLS algorithm for everything.
Thank you, Dr. Gaskin, for this and all other videos. As for model fit, the SmartPLS team seems skeptical (or at least not very confident) about the adequacy of SRMR in PLS-SEM. This is mentioned in: www.smartpls.com/documentation/functionalities/goodness-of-fit What's your opinion: should SRMR and other values be used to determine GoF, or should we rely on CB-SEM for this?
SRMR is the only measure they seem okay with because it is not built upon the covariance matrix. They are more skeptical of the other measures and are hesitant to recommend them. Model fit is more appropriate for CB-SEM, since nearly all measures of fit are built off the chi-square, which is a difference measure of covariance matrices. So, I would rely on SRMR for now. Hopefully they'll implement the BIC and AIC and GM (I think that's what it is called...) soon, as these are robust to PLS.
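To make the SRMR discussion concrete: SRMR is just the root mean square of the residuals between the observed and model-implied correlations. A minimal numpy sketch with made-up matrices (illustrative only, not SmartPLS output):

```python
import numpy as np

# Toy observed and model-implied correlation matrices (made-up numbers).
observed = np.array([[1.00, 0.62, 0.55],
                     [0.62, 1.00, 0.48],
                     [0.55, 0.48, 1.00]])
implied = np.array([[1.00, 0.58, 0.50],
                    [0.58, 1.00, 0.52],
                    [0.50, 0.52, 1.00]])

# SRMR: root mean square of the residuals over the unique
# lower-triangular elements (the diagonal contributes zero here,
# since both are correlation matrices).
idx = np.tril_indices_from(observed)
residuals = observed[idx] - implied[idx]
srmr = np.sqrt(np.mean(residuals ** 2))
print(f"SRMR = {srmr:.3f}")  # values below ~0.08 are usually read as acceptable
```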
Hello Dr., thank you very much for all of your videos. I was wondering, would it affect the results much if I use the path weighting scheme rather than the factor weighting scheme when running my model? What is the major difference between them when reporting in a thesis?
Here's a fun pet project I've been working on: udreamed.com/. It is a dream analytics app. Here is the RUclips channel where we post a new video almost three times per week: ruclips.net/channel/UCiujxblFduQz8V4xHjMzyzQ
Also available on iOS: apps.apple.com/us/app/udreamed/id1054428074
And Android: play.google.com/store/apps/details?id=com.unconsciouscognitioninc.unconsciouscognition&hl=en
Check it out! Thanks!
Mr. James Gaskin
I am about to hand in my thesis. I would like to thank you: without your videos and explanations, I would not have been able to finish it.
I mentioned your name on the Acknowledgments page because you really deserve it.
Thanks! Congrats! Best of luck to you.
Dear Mr. Gaskin,
I am writing my thesis now, and I can't say how grateful I am. I literally built all the analysis by learning from your videos! It's too bad that in my country we don't have access to youtube. I really wish my friends in China could see these amazing tutorials and benefit from them as I did.
Thank you so much:)
All the best,
Lu
Same here, these videos are a savior... but the sad part is that I am currently in China, so it's difficult to access them whenever I want.
Thank you very much, James, for these PLS videos; they make it easy for me to learn.
Good explanation, very easy to understand and run SmartPLS. I am always your great fan. Merry Christmas!
Hey James, upon reading Hair et al.'s Primer on PLS-SEM, there's a section on model evaluation that states "the concept of fit, as defined in CB-SEM, does not apply to PLS-SEM. Efforts to introduce model fit measures have generally proven unsuccessful." Does that mean SRMR values are negligible?
Correct. The model fit measures are not really appropriate for PLS. I do not even bother reporting them unless I'm using CB-SEM.
Hey James, well done my friend! Keep movin'
Thank you so much sir. Your videos are really helpful.
Hi James. Thank you for all the helpful videos. There's an issue when running the data that I want to ask you about. I'm following your instructions for factor analysis. After deleting all the items with low factor loadings, I run bootstrapping, and then there's no Quality Criteria column in the report, so I don't know where to see the AVE. It's the step at 10:10 in your video. Have you ever experienced this kind of issue, or have I missed a detail? Thank you so much.
That is weird. I sometimes still get bugs when I use the PLS consistent (PLSc) algorithm. If you are using that, maybe try using the regular PLS algorithm. The only other thing I can think of is that you don't have any reflective factors. In that case, no AVE would be calculated, as AVE is not relevant to formative factors.
Hi Christine. In the settings when conducting the bootstrapping procedure, go to the Bootstrapping tab. There you have to check "Complete Bootstrapping" under Amount of Results ("Basic Bootstrapping" is the default, which won't provide the quality criteria).
Hi Dr. Gaskin - thanks for this great video. I'm using reflective constructs in my model, and in analyzing the outer loadings, I'm noticing an item with an outer loading of 1.279. The construct has 3 items, and the other two items' outer loadings are .508 and .671. I thought I might have an issue with model misspecification or a Heywood case - I'm still investigating. I would appreciate any insight you could provide. Thank you!
Yes, this is a Heywood case. Try using the regular PLS algorithm instead of the PLSc algorithm. If you were not using PLSc, try using it. Hopefully this will fix it. If not, try removing that item.
@@Gaskination thank you!
Thank you! I still have a question: for a CFA, is it better to use the process that you suggest in this video, or is it better to use CB-SEM? Thank you again.
CB-SEM is better for all reflective factors. SmartPLS now has a CB-SEM side: ruclips.net/video/FS1D4KmmABU/видео.htmlsi=I6Yo_i-1JDVFT3-g
Hi, Dr. James. When doing the factor analysis, is the t-value threshold for formative indicators the same as that for reflective indicators?
Yes. 1.96 for 95% confidence. 1.65 for 90% confidence.
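For readers who want to see where those cutoffs come from: the bootstrap t-statistic is simply the original estimate divided by the bootstrap standard error, and the two-tailed p-value follows from the normal approximation. A small Python sketch with simulated resamples (the numbers are made up; SmartPLS produces the resamples internally):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pretend these are path estimates from 5000 bootstrap resamples.
boot_estimates = rng.normal(loc=0.30, scale=0.08, size=5000)

original_estimate = 0.30                 # estimate from the full sample
se = boot_estimates.std(ddof=1)          # bootstrap standard error
t_stat = original_estimate / se          # t = estimate / SE
p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))  # two-tailed p

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# |t| > 1.96 -> significant at 95% confidence; |t| > 1.65 -> 90%
```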
Hello professor Gaskin,
I have studied your "EAM 2021 SEM Workshop" video and have learned how to conduct EFA (I have also learned to use eigenvalues and doublecheck factors using parallel analysis). Would you suggest me to conduct EFA separately (in SPSS) and then re-test the new model in SMARTPLS like shown in this video? This way I feel more "safe" than just importing my data directly into SMARTPLS.
Thank you for your kind help.
Yes, this is the approach I would recommend. However, most scholars would argue that the EFA is not necessary. The reason I conduct it anyway is because it allows you to identify discriminant validity concerns that might be difficult to identify in SmartPLS.
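For anyone curious what the parallel analysis mentioned above does: it retains only factors whose observed eigenvalues exceed those obtained from random data of the same dimensions (Horn's method). A rough numpy sketch with simulated data:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Retain factors whose observed eigenvalues exceed the 95th
    percentile of eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, 95, axis=0)
    return int(np.sum(obs_eig > threshold))

# Toy example: 9 items built from 3 underlying factors plus noise.
rng = np.random.default_rng(1)
factors = rng.standard_normal((300, 3))
items = np.repeat(factors, 3, axis=1) + 0.7 * rng.standard_normal((300, 9))
print(parallel_analysis(items))  # typically 3 with this structure
```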
Hi James, quick question please. For formative constructs, do I have to report AVE and CR? I read some papers where they did, but formative items do not need to be correlated... so I am confused.
You are correct. AVE and CR are not relevant to formative factors.
Invalid path model in SmartPLS. How do I correct that in SmartPLS 3? The message says I should check the red elements, but I don't see any in the console or the panel.
Make sure all latent factors have items attached to them.
Hello Dr. Gaskin,
I have done that and have been able to run the data. Thanks for the assistance.
Hello Dr. Gaskin. Thanks for the video. Could I perform PCA in SPSS for each construct separately before testing hypotheses in SmartPLS (PLS-SEM)? All first-order and second-order constructs are reflective.
Factor analysis is much better when all factors are included together. This allows for discriminant validity checks.
@@Gaskination Thanks, Dr. Gaskin. I remember you said somewhere that when there are multidimensional (higher-order) constructs in the model, it's better to conduct PCA with promax rotation for each one of them separately. Is this right, or did I misunderstand something? The problem is that the factor structure produced by the factor analysis does not give discriminantly valid higher-order constructs. Can I skip factor analysis entirely, go directly to SmartPLS, and obtain measure quality (reliability and validity) by investigating cross-loadings, composite reliability, and HTMT?
@@مسلمعبدالله-س7ن Oh, I misunderstood. Didn't realize you were working with higher order factors. You can test them separately. However, if they are established scales, you can also jump straight to PLS and assess validity there.
@@Gaskination Thanks Dr. This was really helpful.
Very good tutorial. Can you clarify how critical ratios are extracted in SmartPLS?
I'm sure they use some standard approach. It is just a t-statistic.
Dear Dr. Gaskin, related to the question by VLion on EFA: you mean even if my scale items are adapted ones (e.g., one item added to an existing measure, or language changed to suit my sample), I still do not need to do EFA in SPSS before coming to SmartPLS? Is just checking HTMT enough? Thank you so much for your valuable videos.
Correct. However, I still do EFA in SPSS because it is better at identifying discriminant validity issues.
Dear Dr. Gaskin, Thank you so much for your kind explanation.
Hi Dr. James, just wondering: I have good CR, CA, VIF, loadings, and AVE, but my t-value and p-value exceed 0.05... can you suggest what to do about that?
A high t-value is good. A high p-value is probably not what you were hoping for. Here is something I've written about that: gaskination.com/forum/discussion/6/why-i-found-no-significant-relationships-and-all-my-hypotheses-failed#latest
To this I'll just add that sometimes a theory just isn't a good approximation of truth.
Hi James,
Would you recommend creating different new projects for different measurement dimensions? I have run the PLS algorithm and bootstrap many times, and it is very hard to get a p-value for
No. I would recommend putting all latent factors in the same model. This is best practice, as it minimizes the possibility of false positives.
Hi Dr. James Gaskin,
You continued with a low AVE, saying that composite reliability was already achieved. Can we continue with a model with a low AVE? What is the supporting literature? Thank you so much for your wonderful video series.
AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.
@@Gaskination Dear Dr. James Gaskin, Thank you so much for replying to my question. So nice of you. Your videos helped me a lot in learning PLS-SEM. Many thanks. Love from Asia.
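For reference, both statistics in this exchange can be computed by hand from the standardized outer loadings. A small numpy sketch with illustrative loadings (not taken from the video):

```python
import numpy as np

# Standardized outer loadings for one reflective construct
# (illustrative numbers).
loadings = np.array([0.72, 0.81, 0.68, 0.75])

# AVE: mean of the squared loadings (Fornell & Larcker, 1981).
ave = np.mean(loadings ** 2)

# Composite reliability: (sum of loadings)^2 divided by
# (sum of loadings)^2 plus the summed error variances (1 - loading^2).
error_var = 1 - loadings ** 2
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())

print(f"AVE = {ave:.3f}")  # convention: >= 0.50
print(f"CR  = {cr:.3f}")   # convention: >= 0.70
```

With these numbers, AVE lands near the 0.50 cutoff while CR is comfortably above 0.70, which is exactly the situation the Malhotra and Dash quote addresses.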
Dear James,
Could you help me clarify a testing concept regarding t-statistics, please? To familiarize myself with the SmartPLS procedures: in your video, you run bootstrapping to be able to check t-statistics, correct? How should I interpret this situation: although the factor loadings, CR, alpha, and AVEs are all green and above 0.70, only a few t-statistics are above 1.96, and those show 0.00 p-values. The rest of the items show neither 0.00 p-values nor high t-statistics. What would be the potential cause of low t-statistics? Is there any video or source to check regarding t-statistics?
Kind regards,
Do you mean the t-statistics for the structural (inner) paths or for the measurement (outer) paths? If for outer, then this is weird. If for inner, then this is completely fine. The quality and validity of the factors does not determine the strength of the relationships between factors.
@@Gaskination Sincere Thanks...
Hi James. I have a question. I am writing a paper about designing and validating a questionnaire with a sample of 232. While running the PLS algorithm, the model passed the reliability and validity tests, but the NFI model fit is 0.5, which is too low. Could this be due to sample size or an issue with the model? I have yet to do primary data collection, so I cannot test with a larger sample at the moment. Is it a good idea to skip the model fit and just show the reliability and validity results in the manuscript?
Unless the model is very complex, the sample of 232 is probably sufficient to determine if the model fit is good. So, it is probably due to something else. I recommend checking the validities to see if they pass. If they do, then perhaps ignore model fit since this is PLS and model fit is not a PLS measure.
Can you advise me, please?
My outer model has perfect results.
My inner model has some issues, I think: my R² and F² are very low. Is there a way to improve them? Any recommendations, please?
There were also some path coefficients that were insignificant. I am not sure if this means I should delete the paths or the latent variable, or if I should just not interpret the results with insignificant p-values.
If R2 and F2 are low, then that just means the relationships between the latent factors are weak. This is perfectly normal, even with a strong and valid outer model. Here are some relevant thoughts on this: gaskination.com/forum/discussion/6/why-i-found-no-significant-relationships-and-all-my-hypotheses-failed
Hi Mr. James Gaskin
Could you help me, please? I am running an analysis in SmartPLS 3.0 for path analysis. Say I have a latent variable with 3 dimensions. The results show that all dimensions have values for Cronbach's alpha, CR, and AVE, but for the latent variable itself the values do not appear. How do I get the latent variable's values, or is there a manual calculation? I want to calculate the goodness of fit (GoF), but I don't have the values for the latent variable.
Yours sincerely,
These measures will only be computed for reflective factors. If the factor is formative, the AVE, CR, etc. will not be calculated. If your factors are reflective, but the measures are still not being calculated, then make sure to use the PLS algorithm instead of PLSc algorithm (which still seems to have some bugs).
Hi Mr. Gaskin, what happens if, for example, the constructs in a reflective measurement model fail to demonstrate either convergent or discriminant validity? What happens to my analysis: do I just continue and report that the constructs do not show, say, discriminant validity? I want to really understand the purpose of internal consistency reliability, convergent validity, and discriminant validity, and if the constructs in my research model do not demonstrate acceptable values for these, what happens to my research?
Thank you.
You can go back to your data screening to check normalities and outliers. Also see if anything can be done to improve the validities during the EFA. This video may help: ruclips.net/video/oeoTpXiSncc/видео.html
Moving forward without good validity is not advised. It will undermine and invalidate your findings.
Dear Professor James, thanks for your great videos. I am confused about the value of the HTMT. In the video, you mentioned that less than 1 is fine. However, in one of the comments, when a person asked you about having an HTMT greater than 0.85, you recommended removing the item. So my question is: if below 1 is fine, why should that person remove the item?
Thank you so much; I really appreciate it. Looking forward to your response. I really need clarification on this to complete my thesis.
The published threshold is 0.850 for conservative and 0.900 for liberal. I can't remember why I said 1.00 in this video. Sorry about that.
@@Gaskination dear james thanks for your clarification. What should we do if the HTMT is between 0.9 and 1!?
Dear teacher, if I measure an independent variable using one item, is it OK to run a structural model in SmartPLS? Thank you.
yes
Do you look at the saturated or the estimated model fit values when you consider the SRMR?
My research has different values: saturated = 0.050 and estimated = 0.158.
Estimated. But I also don't put much trust in model fit in PLS....
Hi James, for an exploratory study, if the number of indicators drops to two for one factor due to low loadings (less than 0.7), should this be considered problematic for PLS-SEM? If not, could you suggest any literature to back this up, please?
I would not drop indicators unless they are less than 0.500. Two indicators is okay, but not ideal. Four is the ideal. I can't remember which reference says this though...
Thanks James
Hi James, I am now writing my master's thesis, and I really need your help. I adapted the measurement scales of my five independent variables based on the literature review, and I did an EFA in SPSS, but there are so many cross-loadings. I have tried many methods, but I can't get a satisfying result; there are always two totally different variables loading onto one new factor. I feel so frustrated. Then I found SmartPLS and conducted the CFA by following your video. I deleted 7 items in total, and the results look good now. My question is: is this acceptable? If I don't do EFA in SPSS and do CFA in SmartPLS directly, will that be a problem?
Also, I have an HTMT larger than 1; what should I do?
@@karama8260 Doing the analysis in SmartPLS is fine. Many scholars argue that an EFA is unnecessary if all latent factors are known already. As for the HTMT greater than 1, that means that two factors are very strongly correlated. These are probably the two factors that were loading together in your EFA. This means they are not discriminant - i.e., they are measuring something very similar. They might be different dimensions or manifestations of the same construct.
@@Gaskination Thank you, James! That really helps. Do you have any suggestions on literature that might support the argument? In terms of the HTMT, the latent variable "customer trust" is very highly correlated with customer satisfaction and repurchase intention, so should I select customer trust and give up studying this? Is this the easiest way? I don't have the chance to redesign the questionnaire and re-collect data.
@@karama8260 A non-trivial amount of debate exists among methodologists regarding whether the EFA is absolutely necessary, particularly when the observed measures have all been validated in prior literature (Costello and Osborne 2005), and when the factor structure is already theorized (i.e., when we already expect, for example, that the jsat1-5 variables should factor together to represent the job satisfaction construct). As has been shown in replication studies (cf. Luttrell et al. 2017; Taylor et al. 2003), the same scales will perform differently when in the presence of other measures (not in the original study) or in the context of a different sample. Thus, in my view, the EFA should always be conducted to surface validity issues idiosyncratic to the current context. My personal school of thought is that it is best to do an EFA first because discriminant validity problems manifest more visibly in the EFA than the CFA. Then follow this up with a CFA informed by the EFA. The EFA is for exploration only, and should be used mainly to highlight potential problems (such as discriminant validity) that will likely resurface in the CFA.
-Costello, A. B., and Osborne, J. 2005. "Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most from Your Analysis," Practical Assessment, Research, and Evaluation (10:1), p. 7.
-Luttrell, A., Petty, R. E., and Xu, M. 2017. "Replicating and Fixing Failed Replications: The Case of Need for Cognition and Argument Quality," Journal of Experimental Social Psychology (69), pp. 178-183.
-Taylor, G. J., Bagby, R. M., and Parker, J. D. A. 2003. "The 20-Item Toronto Alexithymia Scale: IV. Reliability and Factorial Validity in Different Languages and Cultures," Journal of Psychosomatic Research (55:3), pp. 277-283.
It makes sense that those three variables are highly related, as they all capture something about customer satisfaction, or positive affect towards the product/service. You might try conducting an EFA with just the variables from these three latent factors. Then try to discriminate them by seeing which variables refuse to load where you want them. This is probably easiest by forcing the EFA to extract three factors and then producing a loadings plot to see the distance between variables in 3D space.
@@Gaskination Thank you, James! I really appreciate your detailed explanation! I will follow your advice and explore!
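As a rough sketch of the suggestion above (forcing a three-factor extraction with promax rotation and inspecting the loadings), here is what that might look like with the Python factor_analyzer package; the file name and item columns are hypothetical:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file holding only the items of the three entangled
# constructs (trust, satisfaction, repurchase intention).
df = pd.read_csv("three_construct_items.csv")

# Force a 3-factor extraction with promax (oblique) rotation.
fa = FactorAnalyzer(n_factors=3, rotation="promax")
fa.fit(df)

# Items loading strongly on the "wrong" factor, or on two factors at
# once, are the likely discriminant-validity troublemakers.
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["F1", "F2", "F3"])
print(loadings.round(2))
```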
Great appreciation! Thank you so much.
How can every indicator for a latent variable have a loading of 0.000? Or why would bootstrapping show a sample mean (M) of "N/A" for all connections and a standard deviation of "N/A", and then leave the t-statistics and p-values blank?
That is weird. Sounds like a calculation error. Try running with the PLS algorithm rather than PLSc.
Hi Mr. James,
I want to ask something. I have good AVE, CR, Cronbach's alpha, etc., but one of my HTMT results is greater than 0.85. What should I do?
CR and AVE are measures of convergent validity, while HTMT is a measure of discriminant validity. So, the best way to fix this (if it is above 0.900) is to remove the item that is most shared between the two factors.
@@Gaskination Thank you so much, Mr. James, for the information.
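For intuition about what HTMT measures: it is the mean between-construct item correlation divided by the geometric mean of the mean within-construct item correlations (Henseler et al., 2015). A small numpy sketch with a toy correlation matrix:

```python
import numpy as np

def htmt(item_corr, idx_a, idx_b):
    """HTMT for two constructs, given an item correlation matrix and
    the column indices of each construct's items."""
    r = np.asarray(item_corr)
    hetero = r[np.ix_(idx_a, idx_b)].mean()  # between-construct block
    tri_a = r[np.ix_(idx_a, idx_a)][np.triu_indices(len(idx_a), k=1)].mean()
    tri_b = r[np.ix_(idx_b, idx_b)][np.triu_indices(len(idx_b), k=1)].mean()
    return hetero / np.sqrt(tri_a * tri_b)

# Toy 4-item correlation matrix: items 0-1 form construct A, 2-3 form B.
corr = np.array([[1.00, 0.70, 0.45, 0.40],
                 [0.70, 1.00, 0.42, 0.38],
                 [0.45, 0.42, 1.00, 0.65],
                 [0.38, 0.40, 0.65, 1.00]])
print(round(htmt(corr, [0, 1], [2, 3]), 3))  # ~0.61, below the 0.85 cutoff
```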
@James: Please let me know: in my results during the CFA, three loadings have negative values (-). Should I remove that factor? Or is there any justification for it?
If the loadings are negative, then perhaps those variables were reverse-coded. If they were, then you need to re-reverse them by subtracting their values from 1 + scale size. So, on a 5-point Likert scale, subtract from six. If they are not reverse-coded, then it is probably due to the items being more formative in nature. If not formative, then they are just bad items and can be deleted.
@@Gaskination Thanks. Kindly also clarify: are CFA loadings and outer loading values the same in PLS-SEM?
@@savouryspices4656 yes
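The re-reversing rule above is a one-liner in any language; for example, in Python:

```python
import numpy as np

# Re-reversing a reverse-coded 5-point Likert item: subtract from
# 1 + scale size = 6, exactly as described above.
reversed_item = np.array([1, 2, 3, 4, 5])
re_reversed = 6 - reversed_item
print(re_reversed)  # [5 4 3 2 1]
```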
Dear Mr. Gaskin, Thank you so much for your videos, they have helped me a lot! Do you know a good source to justify using constructs that do not meet the threshold of .5 for the AVE? I cannot get my factor solution to meet the threshold. Thanks in advance!
AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702). Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.
James Gaskin Thank you for your response! Nevertheless, this quote means that an AVE below .5 should not be accepted since more than 50% of the variance is due to error, right?
joh anna This quote says that AVE is too strict, and CR is good enough on its own.
@@Gaskination Hi, Mr. Gaskin. Sorry for joining the conversation. You said that an AVE below 0.5 is still acceptable as long as the CR is good enough; does this hold for AVE in PLS-SEM as well? Thank you!
@@sharfinazatadini3398 Yes, same for PLS if it is a reflective factor.
Dear sir, in many papers researchers use reflective indicators that are not interchangeable, and in their papers they show the outer loadings and other conditions as fulfilled. Sir, I don't know the reason for using these concepts in those papers. Their indicators show a formative nature, yet they have used exploratory factor analysis for factor extraction (on non-interchangeable indicators). Sir, does their data show manipulation?
There was a good paper about this a few years back. They showed how so many researchers misspecify their models. Here is the reference:
Jarvis, C. B., MacKenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of consumer research, 30(2), 199-218.
Dear Dr. Gaskin,
Thanks for the tutorial. Can you please suggest how we can perform a CFA using SmartPLS? I am conducting a scale validation study, which needs an EFA and a CFA. Thanks.
Hey Sarkar, actually the CFA is what you have in SEM. Take a look at some theory on it. Let me suggest:
1) www.revistabrasileiramarketing.org/ojs-2.2.4/index.php/remark/article/view/2698/pdf_154
2) www.revistabrasileiramarketing.org/ojs-2.2.4/index.php/remark/article/view/2718/pdf_166
3) www.revistabrasileiramarketing.org/ojs-2.2.4/index.php/remark/article/view/2717/pdf_215
I hope you like it.
Regards
Marcelo
Thanks for your videos, Sir.
Hi, Dr. Gaskin. Is this exploratory factor analysis or confirmatory factor analysis?
It is confirmatory because you are designating which items belong to which factors. In EFA, the software decides which items belong to which factors.
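For readers who want to see the same item-to-factor designation expressed in code, here is a minimal CFA sketch using the Python semopy package; the construct and column names are made up:

```python
import pandas as pd
import semopy

# Hypothetical data with columns cse1-cse3 and skill1-skill3.
df = pd.read_csv("survey_items.csv")

# In a CFA we dictate which items belong to which factor,
# written here in lavaan-style syntax.
desc = """
CSE   =~ cse1 + cse2 + cse3
Skill =~ skill1 + skill2 + skill3
"""

model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # loadings, factor covariance, p-values
```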
Hello Prof. James
About the threshold of HTMT: you said it should be less than 1. Could you please tell me where I can find the reference?
Best regards
Nearly any of the references found here should be fine: www.smartpls.com/documentation
Hello Prof. James, thanks for the tutorial. Prof., I have all good values for HTMT, AVE, R², and Cronbach's alpha, but the path coefficients and effect sizes are very low. Can you please tell me how to address this issue with the path coefficients, beta values, and effect sizes using SmartPLS?
That just means they're not strongly related. So, there are other variables that have stronger effects, but they are not included in your model. The only thing you can do to try to boost these values, without altering their validity, is to see if there are any outliers (univariate or multivariate) that might be influencing the relationships away from their true estimates.
@@Gaskination Thank you, prof. Would you explain more about boosting the values? How do I boost these values? And is there any reference for low path coefficients and low f-square values?
Thanks in advance, prof.
@@nazreenchowdhury5635 To boost these values (i.e., make them bigger), you would delete outliers (univariate or multivariate) that might be influencing the relationships away from their true estimates
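A common way to find the multivariate outliers mentioned above is the Mahalanobis distance compared against a chi-square critical value. A minimal Python sketch with simulated data:

```python
import numpy as np
from scipy import stats

def multivariate_outliers(X, alpha=0.001):
    """Flag rows whose squared Mahalanobis distance exceeds the
    chi-square critical value (df = number of columns)."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # squared distances
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    return np.where(d2 > cutoff)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[10] = [8, -8, 8, -8]  # plant an obvious outlier
print(multivariate_outliers(X))  # typically -> [10]
```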
Hi Gaskin, great tutorials. Can you please help me with an explanation of subgroup analysis?
Never heard of subgroup analysis. But here is a video on multigroup analysis: ruclips.net/video/b3-dyfhGE4s/видео.html
Hi Dr. James, thank you for the video. It helps me a lot. But I have one more question: is it OK to conduct a CFA for a pilot study with a sample size of only 30? Thank you.
In SmartPLS, that is probably fine if the model is not too complex (too many variables). You could also just do EFA instead.
@@Gaskination Thank you for your response! I have 13 constructs with 52 indicators. Actually, I have tried EFA on my data, but I got a non-positive-definite matrix and it couldn't be interpreted properly.
@@sharfinazatadini3398 Yes, that is a lot of variables for very little sample size. I would recommend breaking the EFA in half, or in thirds.
I want to ask you: what do I need to do when I have good AVE and composite reliability, but the Cronbach's alpha is less than 0.7?
You can rely on the CR and not worry about the Cronbach's Alpha. CR is considered a more accurate measure of reliability.
James Gaskin Thank you, Mr. Gaskin. Your video and your answer are helpful. Thank you.
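For reference, Cronbach's alpha itself is easy to compute from raw item scores. A small numpy sketch with simulated data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
    / variance of the summed scale). `items` is cases x items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
common = rng.normal(size=(300, 1))                      # shared "true score"
items = common + rng.normal(scale=0.8, size=(300, 4))   # 4 noisy indicators
print(round(cronbach_alpha(items), 3))  # correlated items -> well above 0.7
```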
Hi Mr. James,
Is there a problem if the sample size is small? Thanks.
Dear Mr. Gaskin,
Thank you for your informative video. I just have one question left: do we need to conduct an EFA in SPSS before conducting PLS?
FYI, my model is fully reflective, and the measurement scales are modified from other research.
Sincerely.
EFA first is helpful to find any discriminant validity issues, but it is not necessary since SmartPLS has built-in discriminant validity assessment (HTMT).
Dear Dr. Gaskin, you mean even if my scale items are adapted ones (e.g., one item added to an existing measure, or language changed to suit my sample), I still do not need to do an EFA in SPSS before coming to SmartPLS? Is just checking HTMT enough?
Thank you so much for your valuable videos.
Dear Mr. Gaskin,
Thank you for the video. I want to perform a confirmatory FA in SmartPLS. Is this the right one, or would you rather call this exploratory factor analysis? If not, what would a CFA be in SmartPLS? A thousand thanks in advance from Germany.
Stefan
It is CFA because you are dictating which indicators go with which factors.
Dear Sir
Thank you very much for the videos. I have two latent IVs in my model. One has 40 items, whereas the second one has 44 items. Can I use this method to reduce the number of items in my thesis? If yes, can I then make factors on the basis of the reduced items?
Yes. The most appropriate approach may be to use SPSS for an exploratory factor analysis. That would help you identify dimensions within the sets of items. Here is a video on that: ruclips.net/video/VBsuEBsO3U8/видео.html
@@Gaskination Thanks a lot Prof.
Hello Dr. James, I'm a newbie to SmartPLS and grateful to have your videos. But I'm a bit confused, and hopefully you can help clear my mind. My Prof. asked me to report the factor validity value for each latent variable, and I'm not sure which one I should report. Is it the outer loadings value? Thank you very much in advance. Stay blessed.
Factor validity comes in the form of convergent and discriminant, as well as reliability. I would guess your Prof is looking for either the AVE or the Cronbach's alpha.
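If it is the AVE your professor wants, note that it is just the mean of the squared standardized outer loadings, so it can be checked by hand. A tiny Python sketch with made-up numbers:

```python
import numpy as np

def ave(loadings) -> float:
    """Average Variance Extracted = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

print(ave([0.72, 0.81, 0.78]))  # ~0.59, just above the usual 0.50 threshold
```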
@@Gaskination Thank you very much for your advice, Dr. James.
Hi James, thanks for your video. However, I am facing a problem. I have two constructs: one is formative and the other is reflective. Also, I will not test the relationship between these constructs afterwards, meaning there will be no arrows between them. In this case, how can I test their reliability and validity?
I look forward to hearing from you. Thanks!
SmartPLS requires all latent factors be attached by regression lines to some other latent factor. Otherwise it will not run. So, just include the line, but run the factor method (rather than path).
Thanks for your answer, James.
1. So I just need to include the line between the two constructs, and it does not matter which construct the arrow goes from and to, right?
2. When you said to run the factor method, do you mean that I just read the indices related to factor analysis, even though the factor and path analyses will run simultaneously, like in AMOS? I am not familiar with SmartPLS and assume that factor analysis and path analysis cannot be run separately in SmartPLS.
3. For the formative construct, what do I need to report in terms of its reliability and validity?
4. Regarding the model tested in SmartPLS, do I also need to report model fit, as I would in AMOS? If so, what do I need to report?
Once again, thanks for your help. I look forward to hearing from you.
1. Correct.
2. In the run properties before you execute the analysis, there is a tab called Partial Least Squares. In that tab, one of the basic settings is Path, Factor, or Centroid. Choose Factor.
3. Just discriminant validity through the HTMT table. Other papers and books will also recommend other metrics to report. You can follow these as well.
4. No need to report model fit. It is not relevant to a PLS model.
Thanks, James.
I have run the model according to your suggestions. However, I did not obtain any values in the HTMT table. Is this because I only have two constructs, one reflective and one formative? Or does this represent poor discriminant validity? If so, what can I do?
Once again, thanks for your help!
Oh. I'm not sure if it is limited if you have only two factors. I would assume it works as long as you have at least two... So, I'm not sure. Make sure you are using the PLS Algorithm (not 'consistent'), and that you then click on the Discriminant validity link in the output.
Dear Mr. James,
I have one question regarding my variable adapted from different sources. Can I use CFA alone in SmartPLS to check its reliability and validity, or is it necessary to do EFA in SPSS before proceeding to CFA?
Thanks in advance!
There are different schools of thought on this. I always recommend doing an EFA for all reflective constructs. If it is formative though, you can move straight to PLS.
Yes sir, my constructs are reflective. One more thing: after performing EFA on that particular adapted construct, on what basis can I drop or delete items? In the output, should I consider the component matrix or the rotated component matrix? And under which criteria can I keep or delete items? It's so confusing: if SPSS gives me eigenvalues and creates 3 new components, then, looking at the rotated component matrix table, what should I do next?
P.S. I'm sorry for the long comment, but I'm so confused and need help.
@@gulrukh101 Yes, look at the rotated components matrix. I have several videos on EFA in SPSS. These can help guide you.
@@Gaskination Thank you, Sir. I'll watch your tutorials.
If, on the Fornell-Larcker criterion, the square root of a construct's AVE is less than its correlation with another construct, what should I do?
You can try to separate the two factors. Look at the loadings matrix to see which items load most strongly on the other factor, then try removing those items.
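To see exactly where the violation is, it can help to rebuild the Fornell-Larcker table yourself. A minimal Python sketch, assuming `scores` holds the latent variable scores per construct and `aves` maps construct names to their AVEs (hypothetical names):

```python
import numpy as np
import pandas as pd

def fornell_larcker(scores: pd.DataFrame, aves: dict) -> pd.DataFrame:
    """Construct correlations with sqrt(AVE) placed on the diagonal."""
    table = scores.corr()
    for name in table.columns:
        table.loc[name, name] = np.sqrt(aves[name])
    return table

# Discriminant validity holds when each diagonal value exceeds every
# other entry in its row and column.
```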
Should the arrows connect from the dependent variable to the independent, or from the independent to the dependent? Mine is primary data. Please reply.
Usually the lines in a regression-based method (like PLS) go from the IV to the DV.
If, for my discriminant validity, I have a value of 0.92, is that an issue?
Yes. As the other commentator mentioned, 0.90 is the conservative threshold, while 1.00 is a somewhat relaxed threshold.
@@Gaskination Thank you for replying.
Considering that the HTMT score lies within the confidence interval of -1 and 1, should I let go of the construct, or should I argue that discriminant validity has been established?
@@manishamodun6544 I always try to keep what I can, unless it is a gross violation of the criteria.
OK, thank you for the information about SmartPLS.
My t-statistics are between 6 and 58. Is that normal?
Yes, that is totally fine. Those are all statistically significant.
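For context, any bootstrap t-statistic above about 1.96 is significant at the .05 level under a two-tailed normal approximation, so 6 to 58 is far past the cutoff. A quick illustration in Python:

```python
from scipy.stats import norm

t = 6.0
p = 2 * (1 - norm.cdf(t))  # two-tailed p-value, normal approximation
print(p)                   # roughly 2e-9, far below .05
```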
@@Gaskination Thank you very much for reassuring me. You gave me hope that I can solve the little issues that appeared!! THANK YOU!!!!
What should I do when the SRMR is 0.093?
Model fit issues can be due to many things, such as variable normality, factor validity, and sample size inadequacy. So, make sure you meet all these criteria first. Then, if there are still problems with model fit, check the modification indices to see if one particular item is causing the issue. If so, delete that item and check again.
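For reference, SRMR is the square root of the mean squared difference between the observed and the model-implied correlation matrices, and values below roughly 0.08 are conventionally taken as acceptable. A minimal sketch (SmartPLS reports the value directly; the matrix names here are hypothetical):

```python
import numpy as np

def srmr(observed_corr: np.ndarray, implied_corr: np.ndarray) -> float:
    """Standardized root mean square residual from two correlation matrices."""
    resid = observed_corr - implied_corr
    lower = resid[np.tril_indices_from(resid)]  # lower triangle incl. diagonal
    return float(np.sqrt((lower ** 2).mean()))
```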
Thank you very much.
I checked all of what you suggested (variable normality and factor validity) and they were good! The model fit, but I found no significant relationships among the variables! However, the sample size is 105 and I have 8 variables. Is this sample not sufficient for PLS?
It is a low sample size if you have 8 latent factors.
Thank you very much for help, James.
What if an outer loading is more than 1?
That's called a Heywood case. In SmartPLS, there's not much you can do. However, it implies there is some error happening in the measurement. So, make sure you're not including categorical variables or variables that are completely redundant.
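One quick screen for the redundancy problem is to list indicator pairs with near-perfect correlations. A rough Python sketch, assuming `df` holds the indicator data; the 0.95 cutoff is an arbitrary illustration:

```python
import pandas as pd

def near_redundant_pairs(df: pd.DataFrame, threshold: float = 0.95):
    """Return indicator pairs whose absolute correlation suggests redundancy."""
    corr = df.corr().abs()
    cols = corr.columns
    return [
        (cols[i], cols[j], round(corr.iloc[i, j], 3))
        for i in range(len(cols))
        for j in range(i + 1, len(cols))
        if corr.iloc[i, j] >= threshold
    ]
```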
@@Gaskination Can you please help me with the factors you considered in the video? How do these factors relate to the complete model? I mean, are they all IVs, or is any of them a DV? Please explain the variables a little more.
@@2375nikhil The factors are CSE (computer self-efficacy), Innovativeness (with computer), and Skill Acquisition. These were the only three constructs being considered in the video. If I were running the factor analysis on my own model, I would include all latent factors during the factor analysis.
@@Gaskination Thanks, so much Sir 🙏🏻
Dear Mr. Gaskin,
I have one question. I have 2 independent latent variables and one dependent LV. When I run the "normal" bootstrap (with path weighting scheme), I get very significant T-statistics. When I run the same model as a "Consistent PLS" bootstrap, it comes up with 0.000 everywhere in the model. What does that indicate?
The PLSc algorithm is still imperfect. If it gives you weird results, just revert to the normal PLS algorithm.
@@Gaskination Thanks a lot. Does that mean I do the entire analysis with the normal algorithm, from identifying the factor loadings to running the algorithm and the bootstrap? Or do I use the consistent PLS algorithm to calculate the factor loadings and take it from there with the normal algorithm?
@@TheShroogle Assuming all of your factors are reflective, if the PLSc works properly (i.e., you don't see any weird results) then go ahead and use it where you can. Otherwise, use the normal algorithm. If any of your factors are not reflective, then I recommend using the normal PLS algorithm for everything.
Please, I need your help. Where can I contact you?
Google me. I'm the first hit. You can find my email by following the first Google result.
Hello professor Gaskin, at 10:30 when I run my 'consistent PLS bootstrap' I don't get an AVE section in the report. What should I do?
Looks like the newest versions of SmartPLS have moved this over to the non-bootstrapped report under the section: "Construct Reliability and Validity"
Thank you Dr Gaskin for this and all other videos.
As for model fit, the SmartPLS team seems skeptical (or at least not very confident) about the adequacy of SRMR in PLS-SEM. This is mentioned here: www.smartpls.com/documentation/functionalities/goodness-of-fit
What's your opinion: should SRMR and other values be used to determine goodness of fit, or should we rely on CB-SEM for this?
SRMR is the only measure they seem okay with because it is not built upon the covariance matrix. They are more skeptical of the other measures and are hesitant to recommend them. Model fit is more appropriate for CB-SEM, since nearly all measures of fit are built off the chi-square, which is a difference measure of covariance matrices. So, I would rely on SRMR for now. Hopefully they'll implement the BIC and AIC and GM (I think that's what it is called...) soon, as these are robust to PLS.
Thank you.
Hello Dr. Gaskin, thank you very much for all of your videos. I was wondering: would it affect the results much if I use path rather than factor weighting when running my model? What is the major difference between them when reporting in a thesis?