SmartPLS 3 Factor Analysis

  • Published: 21 Aug 2024

Comments • 171

  • @Gaskination
    @Gaskination  3 года назад

    Here's a fun pet project I've been working on: udreamed.com/. It is a dream analytics app. Here is the RUclips channel where we post a new video almost three times per week: ruclips.net/channel/UCiujxblFduQz8V4xHjMzyzQ
    Also available on iOS: apps.apple.com/us/app/udreamed/id1054428074
    And Android: play.google.com/store/apps/details?id=com.unconsciouscognitioninc.unconsciouscognition&hl=en
    Check it out! Thanks!

  • @user-iw9ig1dt8u
    @user-iw9ig1dt8u 7 лет назад +19

    Mr. James Gaskin
    I am about to hand in my thesis. I would like to thank you: without your videos and explanations I would not have been able to finish it.
    I mentioned your name on the acknowledgments page because you really deserve it.

    • @Gaskination
      @Gaskination  7 лет назад +1

      Thanks! Congrats! Best of luck to you.

  • @user-yo1pd3gc1k
    @user-yo1pd3gc1k 5 лет назад +2

    Dear Mr. Gaskin,
    I am writing my thesis now, and I can't say how grateful I am. I literally built all of the analysis by learning from your videos! It's too bad that in my country we don't have access to YouTube. I really wish my friends in China could see these amazing tutorials and benefit from them as I did.
    Thank you so much:)
    All the best,
    Lu

    • @dipeshkh
      @dipeshkh 5 лет назад

      Same here, these videos are a saviour... but the sad part is that I am currently in China, so it's difficult to access them whenever I want.

  • @Hish84
    @Hish84 2 года назад

    Thank you very much, James, for these PLS videos; they make it easy for me to learn.

  • @mldsg72
    @mldsg72 7 лет назад

    Hey James, well done my friend! Keep movin'

  • @MrKaushal1988
    @MrKaushal1988 6 лет назад

    Good explanation, and it makes it very easy to understand and run SmartPLS. I am always your great fan. Merry Christmas!

  • @ASWINALORA
    @ASWINALORA 6 лет назад

    Thank you so much sir. Your videos are really helpful.

  • @ayanawyeneneh411
    @ayanawyeneneh411 3 года назад

    Great appreciation! Thank you so much.

  • @monaisazad34
    @monaisazad34 3 года назад

    Dear Professor James, thanks for your great videos. I am confused about the HTMT value. In the video, you mentioned that less than 1 is fine. However, in one of the comments, when someone asked you about an HTMT greater than 0.85, you recommended removing the item. So my question is: if below 1 is fine, why should that person remove the item?
    Thank you so much; I really appreciate your response and am looking forward to it. I really need clarification on this to complete my thesis.

    • @Gaskination
      @Gaskination  3 года назад +1

      The published threshold is 0.850 for conservative and 0.900 for liberal. I can't remember why I said 1.00 in this video. Sorry about that.

    • @monaisazad34
      @monaisazad34 3 года назад

      @@Gaskination Dear James, thanks for your clarification. What should we do if the HTMT is between 0.9 and 1?

  • @christinepham5955
    @christinepham5955 6 лет назад +1

    Hi James. Thank you for all the helpful videos. There's an issue when running the data that I want to ask you about. I'm following your instructions for factor analysis. After deleting all the items with low factor loadings, I run bootstrapping, and then there's no Quality Criteria column in the report, so I don't know where to see the AVE. It's the step at 10:10 in your video. Have you ever experienced this kind of issue, or have I missed a detail? Thank you so much.

    • @Gaskination
      @Gaskination  6 лет назад +2

      That is weird. I sometimes still get bugs when I use the PLS-consistent (PLSc) algorithm. If you are using that, maybe try the regular PLS algorithm. The only other thing I can think of is that you don't have any reflective factors. In that case, no AVE would be calculated, as AVE is not relevant to formative factors.

    • @HSoner-be2ub
      @HSoner-be2ub 6 лет назад +5

      Hi Christine. In the settings when conducting the bootstrapping procedure go to the tab bootstrapping. There you have to check "Complete Bootstrapping" under Amount of Results ("Basic Bootstrapping" is the default which won't provide the quality criteria).

  • @supermarshmallow1996
    @supermarshmallow1996 28 дней назад

    Hey James, upon reading Hair et al.'s Primer on PLS-SEM, there's a section on model evaluation that states "the concept of fit, as defined in CB-SEM, does not apply to PLS-SEM. Efforts to introduce model fit measures have generally proven unsuccessful." Does that mean SRMR values are negligible?

    • @Gaskination
      @Gaskination  28 дней назад

      Correct. The model fit measures are not really appropriate for PLS. I do not even bother reporting them unless I'm using CB-SEM.

  • @tbasou
    @tbasou 5 лет назад

    Hi Dr. James Gaskin,
    You continued with a low AVE, saying that composite reliability was already achieved. Can we continue with a model that has a low AVE? What is the supporting literature? Thank you so much for your wonderful video series.

    • @Gaskination
      @Gaskination  5 лет назад

      AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

    • @tbasou
      @tbasou 5 лет назад

      @@Gaskination Dear Dr. James Gaskin, Thank you so much for replying to my question. So nice of you. Your videos helped me a lot in learning PLS-SEM. Many thanks. Love from Asia.
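
A minimal numeric sketch of the AVE/CR point above, using hypothetical standardized loadings (not from the video's dataset) and the standard formulas: AVE is the mean squared loading, and CR is the squared sum of loadings over that quantity plus the summed error variances.

```python
import numpy as np

# Hypothetical standardized outer loadings for one reflective construct.
loadings = np.array([0.78, 0.72, 0.65, 0.60])

# AVE: average of the squared standardized loadings.
ave = np.mean(loadings ** 2)

# Composite reliability: (sum of loadings)^2 divided by
# (sum of loadings)^2 + sum of error variances (1 - loading^2).
errors = 1 - loadings ** 2
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

print(f"AVE = {ave:.3f}, CR = {cr:.3f}")  # AVE ~ 0.477, CR ~ 0.783
```

With these loadings CR clears 0.70 while AVE falls just short of 0.50, which is exactly the "CR passes, AVE fails" situation the quoted passage describes.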

  • @HH-zu4li
    @HH-zu4li 2 месяца назад

    Thank you! I still have a question: for a CFA, is it better to use the process you suggest in this video, or is it better to use CB-SEM? Thank you again.

    • @Gaskination
      @Gaskination  2 месяца назад

      CB-SEM is better for all reflective factors. SmartPLS now has a CB-SEM side: ruclips.net/video/FS1D4KmmABU/видео.htmlsi=I6Yo_i-1JDVFT3-g

  • @VuongNguyen-bb6hh
    @VuongNguyen-bb6hh 6 лет назад

    Dear Mr. Gaskin,
    Thank you for your informative video. I just have one question left: do we need to conduct an EFA in SPSS before conducting PLS?
    FYI, my model is fully reflective, and the measurement scale is adapted from another study.
    Sincerely.

    • @Gaskination
      @Gaskination  6 лет назад

      EFA first is helpful to find any discriminant validity issues, but it is not necessary since SmartPLS has built-in discriminant validity assessment (HTMT).

    • @mufithabuhari4021
      @mufithabuhari4021 6 лет назад

      Dear Dr. Gaskin, do you mean that even if my scale items are adapted (e.g., one item added to an existing measure, or wording changed to suit my sample), I still do not need to do an EFA in SPSS before moving to SmartPLS? Is checking the HTMT enough?
      Thank you so much for your valuable videos.

  • @LifeIsBayWatch
    @LifeIsBayWatch 2 года назад

    Hello professor Gaskin,
    I have studied your "EAM 2021 SEM Workshop" video and have learned how to conduct an EFA (I have also learned to use eigenvalues and to double-check the factors using parallel analysis). Would you suggest conducting the EFA separately (in SPSS) and then re-testing the new model in SmartPLS as shown in this video? This way I feel "safer" than just importing my data directly into SmartPLS.
    Thank you for your kind help.

    • @Gaskination
      @Gaskination  2 года назад +1

      Yes, this is the approach I would recommend. However, most scholars would argue that the EFA is not necessary. The reason I conduct it anyway is because it allows you to identify discriminant validity concerns that might be difficult to identify in SmartPLS.

  • @mufithabuhari4021
    @mufithabuhari4021 6 лет назад

    Dear Dr. Gaskin, related to the question by VLion on EFA: do you mean that even if my scale items are adapted (e.g., one item added to an existing measure, or wording changed to suit my sample), I still do not need to do an EFA in SPSS before moving to SmartPLS? Is checking the HTMT enough? Thank you so much for your valuable videos.

    • @Gaskination
      @Gaskination  6 лет назад

      Correct. However, I still do an EFA in SPSS because it is better at identifying discriminant validity issues.

    • @mufithabuhari4021
      @mufithabuhari4021 6 лет назад

      Dear Dr. Gaskin, Thank you so much for your kind explanation.

  • @gwenkmou
    @gwenkmou 3 года назад

    Hello Dr. James, I'm a newbie to SmartPLS and grateful to have your videos, but I'm a bit confused. Hopefully you can help clear my mind a bit. My professor asked me to report the factor validity value for each latent variable, and I'm not sure which one I should report. Is it the outer loadings value? Thank you very much in advance. Stay blessed.

    • @Gaskination
      @Gaskination  3 года назад +1

      Factor validity comes in the form of convergent and discriminant, as well as reliability. I would guess your Prof is looking for either the AVE or the Cronbach's alpha.

    • @gwenkmou
      @gwenkmou 3 года назад

      @@Gaskination Thank you very much for yr advice, Dr. James ^^

  • @utkalkhandelwal7571
    @utkalkhandelwal7571 4 года назад

    Very good tutorial. Can you clarify how the critical ratios are derived in SmartPLS?

    • @Gaskination
      @Gaskination  4 года назад

      I'm sure they use some standard approach. It is just a t-statistic.
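
For readers wondering how such a critical ratio is formed, here is a minimal sketch of the usual bootstrap logic (hypothetical numbers, not SmartPLS's internal code): the original estimate divided by the standard deviation of the bootstrap estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bootstrap distribution of one path coefficient (5000 resamples).
boot_estimates = rng.normal(loc=0.32, scale=0.07, size=5000)
original_estimate = 0.32

# Critical ratio / t-statistic: estimate divided by the bootstrap standard error
# (the standard deviation of the resampled estimates).
t_stat = original_estimate / boot_estimates.std(ddof=1)
print(f"t = {t_stat:.2f}")  # roughly 0.32 / 0.07, i.e. about 4.6 -> significant
```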

  • @maryyy5551
    @maryyy5551 5 лет назад +1

    Dear Mr. Gaskin, Thank you so much for your videos, they have helped me a lot! Do you know a good source to justify using constructs that do not meet the threshold of .5 for the AVE? I cannot get my factor solution to meet the threshold. Thanks in advance!

    • @Gaskination
      @Gaskination  5 лет назад

      AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702). Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

    • @maryyy5551
      @maryyy5551 5 лет назад

      James Gaskin Thank you for your response! Nevertheless, this quote means that an AVE below .5 should not be accepted since more than 50% of the variance is due to error, right?

    • @Gaskination
      @Gaskination  5 лет назад

      joh anna This quote says that AVE is too strict, and CR is good enough on its own.

    • @sharfinazatadini3398
      @sharfinazatadini3398 3 года назад

      @@Gaskination Hi, Mr. Gaskin. Sorry for joining the conversation. You said that an AVE below 0.5 is still acceptable as long as the CR is good enough; does that also hold for the AVE in PLS-SEM? Thank you!

    • @Gaskination
      @Gaskination  3 года назад

      @@sharfinazatadini3398 Yes, same for PLS if it is a reflective factor.

  • @afrinasari9644
    @afrinasari9644 4 года назад

    OK, thank you for the information about SmartPLS.

  • @deejo2278
    @deejo2278 2 года назад

    Hi Dr. Gaskin, thanks for this great video. I'm using reflective constructs in my model, and in analyzing the outer loadings, I'm noticing an item with an outer loading of 1.279. The construct has 3 items, and the other 2 items' outer loadings are .508 and .671. I thought I might have an issue with model misspecification or a Heywood case; I'm still investigating. I would appreciate any insight you could provide. Thank you!

    • @Gaskination
      @Gaskination  2 года назад +1

      Yes, this is a Heywood case. Try using the regular PLS algorithm instead of the PLSc algorithm. If you were not using PLSc, try using it. Hopefully this will fix it. If not, try removing that item.

    • @deejo2278
      @deejo2278 2 года назад

      @@Gaskination thank you!

  • @jackiehuang5391
    @jackiehuang5391 4 года назад

    Hi, Dr. Gaskin. Is this exploratory factor analysis or confirmatory factor analysis?

    • @Gaskination
      @Gaskination  4 года назад

      It is confirmatory because you are designating which items belong to which factors. In EFA, the software decides which items belong to which factors.
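
A small sketch of the EFA side of that distinction, where the data rather than the researcher suggest how many factors there are. The data are simulated, and only the eigenvalue (extraction) step is shown, without rotation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: two latent traits, three noisy indicators each.
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items = np.column_stack([f1 + rng.normal(scale=0.6, size=n) for _ in range(3)] +
                        [f2 + rng.normal(scale=0.6, size=n) for _ in range(3)])

# EFA-style extraction: eigenvalues of the item correlation matrix.
# Eigenvalues above 1 (Kaiser criterion) suggest how many factors the data support;
# the researcher does not pre-assign items to factors.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print("Eigenvalues:", np.round(eigenvalues, 2))  # expect two values above 1
```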

  • @isaackosi2889
    @isaackosi2889 7 лет назад +1

    I get an "Invalid path model" error in SmartPLS. How do I correct that in SmartPLS 3? The message says I should check the red elements, but I don't see any on the console or the panel.

    • @Gaskination
      @Gaskination  7 лет назад

      Make sure all latent factors have items attached to them.

    • @isaackosi2889
      @isaackosi2889 7 лет назад

      Hello Dr. Gaskin,
      I have done that and have been able to run the data. Thanks for the assistance.

  • @joannalim3140
    @joannalim3140 3 года назад

    Hi James,
    Would you recommend creating different new projects for different measurement dimensions? I have run the PLS algorithm and bootstrapping many times, and it is very hard to get a p-value for

    • @Gaskination
      @Gaskination  3 года назад +1

      No. I would recommend putting all latent factors in the same model. This is best practice, as it minimizes the possibility of false positives.

  • @sharfinazatadini3398
    @sharfinazatadini3398 4 года назад

    Hi Dr. James, thank you for the video. It helps me a lot. But I have one more question: is it OK to conduct a CFA for a pilot study with a sample size of only 30? Thank you.

    • @Gaskination
      @Gaskination  4 года назад

      In SmartPLS, that is probably fine if the model is not too complex (too many variables). You could also just do EFA instead.

    • @sharfinazatadini3398
      @sharfinazatadini3398 4 года назад

      @@Gaskination Thank you for your response! I have 13 constructs with 52 indicators. Actually, I tried an EFA on my data, but I got a non-positive definite matrix and it couldn't be interpreted properly.

    • @Gaskination
      @Gaskination  4 года назад +1

      @@sharfinazatadini3398 Yes, that is a lot of variables for very little sample size. I would recommend breaking the EFA in half, or in thirds.

  • @jiangpan7333
    @jiangpan7333 5 лет назад

    Hello Prof. James
    Regarding the HTMT threshold, you said it should be less than 1. Could you please tell me where I can find the reference?
    Best regards

    • @Gaskination
      @Gaskination  5 лет назад +1

      Nearly any of the references found here should be fine: www.smartpls.com/documentation

  • @jackiehuang5391
    @jackiehuang5391 4 года назад

    Hi, Dr. James. When doing the factor analysis, is the t-value threshold for formative indicators the same as that for reflective indicators?

    • @Gaskination
      @Gaskination  4 года назад

      Yes. 1.96 for 95% confidence. 1.65 for 90% confidence.
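
Those cutoffs are just standard-normal quantiles (with large bootstrap samples the t distribution is effectively normal), which you can verify directly:

```python
from scipy.stats import norm

print(round(norm.ppf(0.975), 3))  # 1.96  -> two-tailed test at the 5% level (95% confidence)
print(round(norm.ppf(0.950), 3))  # 1.645 -> two-tailed test at the 10% level (90% confidence)
```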

  • @YY-vy2bq
    @YY-vy2bq 4 года назад

    Hi James, a quick question please. For formative constructs, do I have to report AVE and CR? I read some papers where they did, but formative items do not need to be correlated... so I am confused.

    • @Gaskination
      @Gaskination  4 года назад

      You are correct. AVE and CR are not relevant to formative factors.

  • @jonathandabi199
    @jonathandabi199 6 лет назад

    Hi Mr. Gaskin, what happens if, for example, the constructs in the reflective measurement model fail to demonstrate either convergent or discriminant validity? What happens to my analysis: do I just continue and report that the constructs do not show, say, discriminant validity? I want to really understand the purpose of internal consistency reliability, convergent validity, and discriminant validity, and if the constructs in my research model do not demonstrate acceptable values for these, what happens to my research?
    Thank you.

    • @Gaskination
      @Gaskination  6 лет назад

      You can go back to your data screening to check normalities and outliers. Also see if anything can be done to improve the validities during the EFA. This video may help: ruclips.net/video/oeoTpXiSncc/видео.html
      Moving forward without good validity is not advised. It will undermine and invalidate your findings.

  • @subhrosarkar4366
    @subhrosarkar4366 7 лет назад

    Dear Dr. Gaskin,
    Thanks for the tutorial. Can you please suggest how we can perform a CFA using SmartPLS? I am conducting a scale validation study, which requires an EFA and a CFA. Thanks.

    • @mldsg72
      @mldsg72 7 лет назад +1

      Hey Sarkar, CFA is actually what you get with SEM. Take a look at some theory on it. Let me suggest:
      1) www.revistabrasileiramarketing.org/ojs-2.2.4/index.php/remark/article/view/2698/pdf_154
      2) www.revistabrasileiramarketing.org/ojs-2.2.4/index.php/remark/article/view/2718/pdf_166
      3) www.revistabrasileiramarketing.org/ojs-2.2.4/index.php/remark/article/view/2717/pdf_215
      I hope you like it.
      Regards
      Marcelo

    • @2375nikhil
      @2375nikhil 4 года назад

      Thanks for your videos, Sir.

  • @user-gk7tb3rw3j
    @user-gk7tb3rw3j Год назад

    Hello Dr. Gaskin. Thanks for the video. Could I perform PCA in SPSS for each construct separately before testing the hypotheses in SmartPLS (PLS-SEM)? All first-order and second-order constructs are reflective.

    • @Gaskination
      @Gaskination  Год назад +1

      Factor analysis is much better when all factors are included together. This allows for discriminant validity checks.

    • @user-gk7tb3rw3j
      @user-gk7tb3rw3j Год назад

      @@Gaskination Thanks, Dr. Gaskin. I remember you saying somewhere that when there are multidimensional (higher-order) constructs in the model, it is better to conduct PCA with promax rotation for each of them separately. Is this right, or did I misunderstand something? The problem is that the factor structure produced by the factor analysis gives a higher-order construct that is not discriminantly valid. Can I skip factor analysis entirely, go directly to SmartPLS, and obtain the measure qualities (reliability and validity) by investigating cross-loadings, composite reliability, and HTMT?

    • @Gaskination
      @Gaskination  Год назад +1

      @@user-gk7tb3rw3j Oh, I misunderstood. Didn't realize you were working with higher order factors. You can test them separately. However, if they are established scales, you can also jump straight to PLS and assess validity there.

    • @user-gk7tb3rw3j
      @user-gk7tb3rw3j Год назад

      @@Gaskination Thanks Dr. This was really helpful.

  • @ibrahimhussain7395
    @ibrahimhussain7395 3 года назад

    Dear Sir
    Thank you very much for the videos. I have two latent IVs in my model. One has 40 items, whereas the second has 44 items. Can I use this method to reduce the number of items in my thesis? If yes, can I then form factors on the basis of the reduced items?

    • @Gaskination
      @Gaskination  3 года назад +1

      Yes. The most appropriate approach may be to use SPSS for an exploratory factor analysis. That would help you identify dimensions within the sets of items. Here is a video on that: ruclips.net/video/VBsuEBsO3U8/видео.html

    • @ibrahimhussain7395
      @ibrahimhussain7395 3 года назад

      @@Gaskination Thanks a lot Prof.

  • @soo5441
    @soo5441 6 лет назад

    Dear Mr. Gaskin,
    Thank you for the video. I want to perform a confirmatory FA in SmartPLS. Is this the right one, or would you rather call this an exploratory factor analysis? If not, what would a CFA be in SmartPLS? A thousand thanks in advance from Germany.
    Stefan

    • @Gaskination
      @Gaskination  6 лет назад

      It is CFA because you are dictating which indicators go with which factors.

  • @strom933
    @strom933 4 года назад

    Do you look at the saturated or estimated model fit values when you consider the SRMR?
    My research shows different values: saturated = 0.050 and estimated = 0.158.

    • @Gaskination
      @Gaskination  4 года назад +1

      Estimated. But I also don't put much trust in model fit in PLS....

  • @wlidster
    @wlidster 6 лет назад

    How can every indicator for a latent variable have a loading of 0.000? Or why would the bootstrapping show a sample mean (M) of "N/A" for all connections, a standard deviation of "N/A", and blank t-statistics and p-values?

    • @Gaskination
      @Gaskination  6 лет назад

      That is weird. Sounds like a calculation error. Try running with PLS algorithm rather than PLSc

  • @dharmendrahariyani7114
    @dharmendrahariyani7114 3 года назад

    Dear sir, in many papers the researchers use reflective indicators that are not interchangeable, and in those papers they report the outer loadings and other conditions as fulfilled. I don't understand the reason for using these concepts in those papers. Their indicators appear formative in nature, yet they used exploratory factor analysis for factor extraction (on non-interchangeable indicators). Does their data show manipulation?

    • @Gaskination
      @Gaskination  3 года назад

      There was a good paper about this a few years back. They showed how so many researchers misspecify their models. Here is the reference:
      Jarvis, C. B., MacKenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of consumer research, 30(2), 199-218.

  • @adamkahraman3122
    @adamkahraman3122 4 года назад

    Dear James,
    Could you help me clarify a test step regarding t-statistics, please?
    To familiarize myself with the SmartPLS procedures: in your video you run bootstrapping to be able to check the t-statistics, correct?
    How should I interpret this situation: although the factor loadings, CR, alpha, and AVEs are all green and above 0.70, only a few t-statistics are above 1.96, and those show p-values of 0.00. The rest of the items show neither 0.00 p-values nor significant t-statistics.
    What would be the potential cause of the low t-statistics? Is there any video or source I can check regarding t-statistics?
    Kind regards,

    • @Gaskination
      @Gaskination  4 года назад +1

      Do you mean the t-statistics for the structural (inner) paths or for the measurement (outer) paths? If for outer, then this is weird. If for inner, then this is completely fine. The quality and validity of the factors does not determine the strength of the relationships between factors.

    • @adamkahraman3122
      @adamkahraman3122 4 года назад

      @@Gaskination Sincere Thanks...

  • @jonathandabi199
    @jonathandabi199 6 лет назад

    Hi Gaskin, great tutorials. Can you please help me with an explanation of subgroup analysis?

    • @Gaskination
      @Gaskination  6 лет назад

      Never heard of subgroup analysis. But here is a video on multigroup analysis: ruclips.net/video/b3-dyfhGE4s/видео.html

  • @maxkoghut2417
    @maxkoghut2417 6 лет назад

    Hi James, for an exploratory study, if the number of indicators for one factor drops to two due to a low loading (less than 0.7), should this be considered problematic for PLS-SEM? If not, could you suggest any supporting literature, please?

    • @Gaskination
      @Gaskination  6 лет назад

      I would not drop indicators unless they are less than 0.500. Two indicators is okay, but not ideal. Four is the ideal. I can't remember which reference says this though...

    • @maxkoghut2417
      @maxkoghut2417 6 лет назад

      Thanks James

  • @vahanprasan
    @vahanprasan 4 года назад

    Hi James. I have a question. I am writing a paper about designing and validating a questionnaire with 232 responses. When running the PLS algorithm, the model passes the reliability and validity tests, but the NFI for model fit is 0.5, which is too low. Could this be due to the sample size or an issue with the model? I have yet to do the primary data collection, so I cannot test with more cases at the moment. Is it a good idea to skip the model fit and just report the reliability and validity results in the manuscript?

    • @Gaskination
      @Gaskination  4 года назад

      Unless the model is very complex, the sample of 232 is probably sufficient to determine if the model fit is good. So, it is probably due to something else. I recommend checking the validities to see if they pass. If they do, then perhaps ignore model fit since this is PLS and model fit is not a PLS measure.

  • @gulrukh101
    @gulrukh101 4 года назад

    Dear Mr. James,
    I have one question regarding my variable adapted from different sources. Can I use a CFA alone in SmartPLS to check its reliability and validity, or is it necessary to do an EFA in SPSS before proceeding to the CFA?
    Thanks in advance!

    • @Gaskination
      @Gaskination  4 года назад

      There are different schools of thought on this. I always recommend doing an EFA for all reflective constructs. If it is formative though, you can move straight to PLS.

    • @gulrukh101
      @gulrukh101 4 года назад

      Yes sir, my constructs are reflective. One more thing: after performing the EFA on that particular adapted construct, on what basis can I drop or delete items? In the output, should I consider the component matrix or the rotated component matrix, and under which criteria can I keep or delete items? It is confusing when SPSS gives me eigenvalues and creates three new components; after looking at the rotated component matrix table, what should I do next?
      P.S. I'm sorry for the long comment, but I'm so confused and need help.

    • @Gaskination
      @Gaskination  4 года назад

      @@gulrukh101 Yes, look at the rotated components matrix. I have several videos on EFA in SPSS. These can help guide you.

    • @gulrukh101
      @gulrukh101 4 года назад

      @@Gaskination thank you Sir I'll watch your tutorials

  • @yiyan6607
    @yiyan6607 Год назад

    Dear teacher, if I measure an independent variable using one item, is it OK to run the structural model in SmartPLS? Thank you.

  • @saeipul8455
    @saeipul8455 4 года назад

    Hi Mr. James
    Is there a problem if the sample size is small? thanks

  • @nazreenchowdhury5635
    @nazreenchowdhury5635 4 года назад

    Hello Prof. James, thanks for the tutorial. I have good values for HTMT, AVE, R², and Cronbach's alpha, but the path coefficients and effect sizes are very low. Can you please tell me how to address this issue with the path coefficients, beta values, and effect sizes using SmartPLS?

    • @Gaskination
      @Gaskination  4 года назад

      That just means they're not strongly related. So, there are other variables that have stronger effects, but they are not included in your model. The only thing you can do to try to boost these values, without altering their validity, is to see if there are any outliers (univariate or multivariate) that might be influencing the relationships away from their true estimates.

    • @nazreenchowdhury5635
      @nazreenchowdhury5635 4 года назад

      @@Gaskination Thank you, Prof. Could you explain more about boosting the values? How can I boost them? And is there any reference on low path coefficients and low f-square values?
      Thanks in advance, Prof.

    • @Gaskination
      @Gaskination  4 года назад

      @@nazreenchowdhury5635 To boost these values (i.e., make them bigger), you would delete outliers (univariate or multivariate) that might be influencing the relationships away from their true estimates
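
One common screen for the multivariate outliers mentioned above is the Mahalanobis distance checked against a chi-square cutoff. This is a generic sketch with simulated data and placeholder column names, not a SmartPLS feature:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Hypothetical composite scores for three constructs (placeholder names).
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["sat", "trust", "repurchase"])

# Mahalanobis distance of each case from the multivariate centroid.
x = df.to_numpy()
diff = x - x.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(x, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Flag cases beyond the chi-square cutoff (p < .001, df = number of variables).
cutoff = chi2.ppf(0.999, df=x.shape[1])
print(f"{(d2 > cutoff).sum()} potential multivariate outliers")
```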

  • @malmigdadimigdadi6928
    @malmigdadimigdadi6928 3 года назад

    Can you advise me, please?
    My outer model has perfect results.
    My inner model has some issues, I think: My R² and F² are very low. Is there a way to improve them, any recommendations, please?
    Some path coefficients were insignificant; I am not sure whether this means I should delete those paths or the latent variable,
    or whether I should simply not interpret the results with insignificant p-values.

    • @Gaskination
      @Gaskination  3 года назад

      If R2 and F2 are low, then that just means the relationships between the latent factors are weak. This is perfectly normal, even with a strong and valid outer model. Here are some relevant thoughts on this: gaskination.com/forum/discussion/6/why-i-found-no-significant-relationships-and-all-my-hypotheses-failed
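
For reference, the f² effect size mentioned above is usually computed from the R² of the model with and without a given predictor (Cohen's formula); the values below are purely illustrative:

```python
# Cohen's f^2 for a single predictor, from hypothetical R^2 values.
r2_included = 0.18   # R^2 with the predictor in the model
r2_excluded = 0.15   # R^2 after removing it

f2 = (r2_included - r2_excluded) / (1 - r2_included)
print(round(f2, 3))  # ~0.037: small (rough guides: 0.02 small, 0.15 medium, 0.35 large)
```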

  • @farishakim6759
    @farishakim6759 2 года назад

    Hi Dr. James, just wondering: I have good CR, CA, VIF, loadings, and AVE, but my t-value and p-value exceed 0.05... can you suggest what I should do about that?

    • @Gaskination
      @Gaskination  2 года назад +1

      A high t-value is good. A high p-value is probably not what you were hoping for. Here is something I've written about that: gaskination.com/forum/discussion/6/why-i-found-no-significant-relationships-and-all-my-hypotheses-failed#latest
      To this I'll just add that sometimes a theory just isn't a good approximation of truth.

  • @SalusPiedade
    @SalusPiedade 5 лет назад

    Hi Mr. James Gaskin
    Could you help me, please? I am running an analysis in SmartPLS 3.0 for path analysis. Say I have a latent variable with 3 dimensions. The results show that all dimensions have values for Cronbach's alpha, CR, and AVE, but for the latent variable itself the values do not appear. How do I get the latent variable values, or is there a manual calculation? I want to calculate the goodness of fit (GoF), but I don't have the values for the latent variable.
    Yours sincerely,

    • @Gaskination
      @Gaskination  5 лет назад

      These measures will only be computed for reflective factors. If the factor is formative, the AVE, CR, etc. will not be calculated. If your factors are reflective, but the measures are still not being calculated, then make sure to use the PLS algorithm instead of PLSc algorithm (which still seems to have some bugs).

  • @karama8260
    @karama8260 3 года назад

    Hi James, I am now writing my master's thesis, and I really need your help. I adapted the measurement scales for my five independent variables based on the literature review. I did an EFA in SPSS, but there are many cross-loadings. I have tried many methods, but I can't get a satisfying result; there are always two totally different variables loading onto one new factor. I felt so frustrated. Then I found SmartPLS and conducted the CFA by following your video; I deleted 7 items in total, and the results look good now. My question is: is this acceptable? If I don't do an EFA in SPSS and do the CFA in SmartPLS directly, will that be a problem?

    • @karama8260
      @karama8260 3 года назад

      And I have HTMT larger than 1, what should I do?

    • @Gaskination
      @Gaskination  3 года назад

      @@karama8260 Doing the analysis in SmartPLS is fine. Many scholars argue that an EFA is unnecessary if all latent factors are known already. As for the HTMT greater than 1, that means that two factors are very strongly correlated. These are probably the two factors that were loading together in your EFA. This means they are not discriminant - i.e., they are measuring something very similar. They might be different dimensions or manifestations of the same construct.

    • @karama8260
      @karama8260 3 года назад

      @@Gaskination Thank you James! That really helps. Do you have any suggestions on literature that might support the argument? In terms of the HTMT, the latent variable "customer trust" is very highly correlated with customer satisfaction and repurchase intention, so should I drop customer trust and give up studying it? Is this the easiest way? I don't have the chance to re-design the questionnaire and re-collect data.

    • @Gaskination
      @Gaskination  3 года назад

      ​@@karama8260 A non-trivial amount of debate exists among methodologists regarding whether the EFA is absolutely necessary, particularly when the set of observed measures have all been validated in prior literature (Costello and Osborne 2005), and when the factor structure is already theorized (i.e,. when we already expect, for example, that the jsat1-5 variables should factor together to represent the job satisfaction construct). As has been shown in replication studies (cf. Luttrell et al. 2017; Taylor et al. 2003), the same scales will perform differently when in the presence of other measures (not in the original study) or in the context of a different sample. Thus, in my view, the EFA should always be conducted to surface validity issues idiosyncratic to the current context. My personal school of thought is that it is best to do an EFA first because discriminant validity problems manifest more visibly in the EFA than the CFA. Then follow this up with a CFA informed by the EFA. The EFA is for exploration only, and should be used mainly to highlight potential problems (such as discriminant validity) that will likely resurface in the CFA.
      -Costello, A. B., and Osborne, J. 2005. "Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most from Your Analysis," Practical assessment, research, and evaluation (10:1), p. 7.
      -Luttrell, A., Petty, R. E., and Xu, M. 2017. "Replicating and Fixing Failed Replications: The Case of Need for Cognition and Argument Quality," Journal of Experimental Social Psychology (69), pp. 178-183.
      -Taylor, G. J., Bagby, R. M., and Parker, J. D. A. 2003. "The 20-Item Toronto Alexithymia Scale: Iv. Reliability and Factorial Validity in Different Languages and Cultures," Journal of Psychosomatic Research (55:3), pp. 277-283.
      It makes sense that those three variables are highly related, as they all capture something about customer satisfaction, or positive affect towards the product/service. You might try conducting an EFA with just the variables from these three latent factors. Then try to discriminate them by seeing which variables refuse to load where you want them. This is probably easiest by forcing the EFA to extract three factors and then producing a loadings plot to see the distance between variables in 3D space.

    • @karama8260
      @karama8260 3 года назад

      @@Gaskination thank you James! I feel very appreciated for your detailed explanation! I will follow your advice and have an exploration!!!

  • @chengyangxy
    @chengyangxy 7 лет назад

    Hi James, thanks for your video. However, I am facing a problem. I have two constructs: one is formative and the other is reflective. Also, I will not test the relationship between these constructs afterwards, meaning there will be no arrows between them. In this case, how can I test their reliability and validity?
    I look forward to hearing from you, and thanks!

    • @Gaskination
      @Gaskination  7 лет назад

      SmartPLS requires all latent factors be attached by regression lines to some other latent factor. Otherwise it will not run. So, just include the line, but run the factor method (rather than path).

    • @chengyangxy
      @chengyangxy 7 лет назад

      Thanks for your answer, James.
      1. So I just need to include the line between the two constructs, and it does not matter which construct the arrow goes from and to, right?
      2. When you say running the factor method, do you mean that I just read the indices related to factor analysis, even though the factor and path analyses run simultaneously, as in AMOS? I am not familiar with SmartPLS and assume that factor analysis and path analysis cannot be run separately in SmartPLS.
      3. For the formative construct, what do I need to report in terms of its reliability and validity?
      4. Regarding the model tested using SmartPLS, do I also need to report model fit, as I would in AMOS? If so, what do I need to report?
      Once again, thanks for your help; I look forward to hearing from you.

    • @Gaskination
      @Gaskination  7 лет назад

      1. Correct.
      2. In the run properties before you execute the analysis, there is a tab called Partial Least Squares. In that tab, one of the basic settings is Path, Factor, or Centroid. Choose Factor.
      3. Just discriminant validity through the HTMT table. Other papers and books will also recommend other metrics to report. You can follow these as well.
      4. No need to report model fit. It is not relevant to a PLS model.

    • @chengyangxy
      @chengyangxy 7 лет назад

      Thanks, James.
      I have run the model according to your suggestions. However, I did not obtain any values in the HTMT table. Is this because I only have two constructs, one reflective and the other formative? Or does this represent poor discriminant validity? If so, what can I do then?
      Once again, thanks for your help!

    • @gaskinstories7726
      @gaskinstories7726 7 лет назад

      Oh. I'm not sure if it is limited if you have only two factors. I would assume it works as long as you have at least two... So, I'm not sure. Make sure you are using the PLS Algorithm (not 'consistent'), and that you then click on the Discriminant validity link in the output.

  • @TheShroogle
    @TheShroogle 5 лет назад

    Dear Mr. Gaskin,
    I have one question. I have 2 independent latent variables and one dependent LV. When I run the "normal" bootstrap (with path weighting scheme), I get very significant T-statistics. When I run the same model as a "Consistent PLS" bootstrap, it comes up with 0.000 everywhere in the model. What does that indicate?

    • @Gaskination
      @Gaskination  5 лет назад +1

      The PLSc algorithm is still imperfect. If it gives you weird results, just revert to the normal PLS algorithm.

    • @TheShroogle
      @TheShroogle 5 лет назад

      @@Gaskination Thanks a lot. Does that mean I do the entire analysis with the normal algorithm, i.e., from identifying the factor loadings through running the algorithm and the bootstrap? Or do I use the PLS consistent algorithm to calculate the factor loadings and continue from there with the normal algorithm?

    • @Gaskination
      @Gaskination  5 лет назад

      @@TheShroogle Assuming all of your factors are reflective, if the PLSc works properly (i.e., you don't see any weird results) then go ahead and use it where you can. Otherwise, use the normal algorithm. If any of your factors are not reflective, then I recommend using the normal PLS algorithm for everything.

  • @selaludiam
    @selaludiam 7 лет назад

    If, on the Fornell-Larcker criterion, the square root of a construct's AVE is less than its correlation with another construct, what should I do?

    • @Gaskination
      @Gaskination  7 лет назад

      You can try to separate the two factors. Look at the loadings matrix to see which items load most strongly on the other factor, then try removing those items.
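
A compact sketch of the Fornell-Larcker comparison described above, using hypothetical AVEs and construct correlations (illustrative values only):

```python
import numpy as np
import pandas as pd

ave = pd.Series({"A": 0.55, "B": 0.48, "C": 0.62})   # hypothetical AVEs
corr = pd.DataFrame([[1.00, 0.78, 0.40],             # hypothetical construct correlations
                     [0.78, 1.00, 0.35],
                     [0.40, 0.35, 1.00]],
                    index=ave.index, columns=ave.index)

# Fornell-Larcker: sqrt(AVE) of each construct should exceed its correlations
# with every other construct.
sqrt_ave = np.sqrt(ave)
for c in ave.index:
    worst = corr.loc[c].drop(c).max()
    print(f"{c}: sqrt(AVE)={sqrt_ave[c]:.2f}, max corr={worst:.2f}, pass={sqrt_ave[c] > worst}")
# Here A and B fail against each other (0.78 exceeds both 0.74 and 0.69): inspect the
# loadings matrix and consider dropping the items that load most strongly on the other factor.
```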

  • @vesadeka
    @vesadeka 4 года назад

    I want to ask: what do I need to do when I have a good AVE and composite reliability, but Cronbach's alpha is less than 0.7?

    • @Gaskination
      @Gaskination  4 года назад

      You can rely on the CR and not worry about the Cronbach's Alpha. CR is considered a more accurate measure of reliability.

    • @vesadeka
      @vesadeka 4 года назад

      James Gaskin Thank you, Mr. Gaskin. Your video and your answer are helpful. Thank you.
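
For context on the alpha-versus-CR point above, this is the standard Cronbach's alpha calculation on simulated item scores (CR itself is computed from the loadings, as in the earlier AVE/CR sketch):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated item scores for one construct: 200 respondents, 4 items.
latent = rng.normal(size=200)
items = np.column_stack([latent + rng.normal(size=200) for _ in range(4)])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print(round(alpha, 3))
# Alpha assumes equal loadings (tau-equivalence); CR weights items by their actual
# loadings, which is why CR is usually preferred as the reliability estimate in PLS.
```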

  • @minji5978
    @minji5978 3 года назад

    Hi, Mr. James,
    I want to ask something. I have a good AVE, CR, Cronbach's alpha, etc., but one of my HTMT results is greater than 0.85. What should I do?

    • @Gaskination
      @Gaskination  3 года назад +1

      CR and AVE are measures of convergent validity, while HTMT is a measure of discriminant validity. So, the best way to fix this (if it is above 0.900) is to remove the item that is most shared between the two factors.

    • @minji5978
      @minji5978 3 года назад

      @@Gaskination Thank u so much Mr James for the information.
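
For anyone curious how the HTMT in this exchange is defined, here is a condensed sketch of the ratio from Henseler et al. (2015): the mean cross-construct item correlation divided by the geometric mean of the mean within-construct item correlations. The data and column names are hypothetical, and this is not SmartPLS's implementation:

```python
import numpy as np
import pandas as pd

def htmt(df, items_a, items_b):
    """HTMT: mean heterotrait correlation over the geometric mean of the
    mean monotrait correlations (absolute values)."""
    corr = df[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].to_numpy().mean()

    def mean_within(items):
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()

    return hetero / np.sqrt(mean_within(items_a) * mean_within(items_b))

# Hypothetical usage: two three-item scales that share a common latent driver.
rng = np.random.default_rng(3)
f = rng.normal(size=300)
cols = {f"a{i}": f + rng.normal(scale=0.7, size=300) for i in range(1, 4)}
cols.update({f"b{i}": 0.8 * f + rng.normal(scale=0.7, size=300) for i in range(1, 4)})
print(round(htmt(pd.DataFrame(cols), ["a1", "a2", "a3"], ["b1", "b2", "b3"]), 3))
# A value near or above 0.90 signals a discriminant validity problem; removing the
# item with the highest cross-construct correlations is the usual first remedy.
```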

  • @kalakammashetty7874
    @kalakammashetty7874 4 года назад

    Do the connections go from the dependent variable to the independent, or from the independent to the dependent? Mine is primary data, so please reply.

    • @Gaskination
      @Gaskination  4 года назад

      Usually the lines in a regression-based method (like PLS) go from the IV to the DV.

  • @savouryspices4656
    @savouryspices4656 4 года назад

    @James: Please let me know: in my results from the CFA, three loadings have negative values. Should I remove that factor, or is there any justification for it?

    • @Gaskination
      @Gaskination  4 года назад +1

      If the loadings are negative, then perhaps those variables were reverse-coded. If they were, then you need to re-reverse them by subtracting their values from 1 + the scale size. So, on a 5-point Likert scale, subtract from six. If they are not reverse-coded, then the negative loadings are probably due to the items being more formative in nature. If not formative, then they are just bad items and can be deleted.

    • @savouryspices4656
      @savouryspices4656 4 года назад

      @@Gaskination Thanks. Kindly also clarify: are the CFA loadings and the outer loading values the same thing in PLS-SEM?

    • @Gaskination
      @Gaskination  4 года назад

      @@savouryspices4656 yes
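
The re-reversing advice a couple of replies up is a one-line transformation on the raw data; a minimal sketch with a hypothetical 5-point item (placeholder column name):

```python
import pandas as pd

# Hypothetical 5-point Likert responses; "q3_rev" was administered reverse-coded.
df = pd.DataFrame({"q3_rev": [1, 2, 5, 4, 3]})

# Re-reverse by subtracting from 1 + scale maximum (i.e. from 6 on a 5-point scale),
# so 1 -> 5, 2 -> 4, ..., 5 -> 1.
df["q3"] = 6 - df["q3_rev"]
print(df)
```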

  • @manishamodun6544
    @manishamodun6544 5 лет назад

    If, for my discriminant validity, I have a value of 0.92, is that an issue?

    • @Gaskination
      @Gaskination  5 лет назад

      Yes. As the other commentator mentioned, 0.90 is the conservative threshold, while 1.00 is a somewhat relaxed threshold.

    • @manishamodun6544
      @manishamodun6544 5 лет назад

      @@Gaskination Thank you for replying.
      Considering that the HTMT score lies within the confidence interval of -1 and 1, should I let go of the construct, or should I argue that discriminant validity has been established?

    • @Gaskination
      @Gaskination  5 лет назад

      @@manishamodun6544 I always try to keep what I can, unless it is a gross violation of the criteria.

  • @user-cg7ti3vh1t
    @user-cg7ti3vh1t 3 года назад

    My t-statistics are between 6 and 58; is that normal?

    • @Gaskination
      @Gaskination  3 года назад +1

      Yes, that is totally fine. Those are all statistically significant.

    • @user-cg7ti3vh1t
      @user-cg7ti3vh1t 3 года назад

      @@Gaskination Thank you very much for reassuring me. You gave me hope that I can solve the little issues that appeared!! THANK YOU!!!!

  • @muhammadtalhasalam7805
    @muhammadtalhasalam7805 7 лет назад

    Thank you Dr Gaskin for this and all other videos.
    As for model fit, the PLS team seems skeptical (or at least not very confident) about the adequacy of SRMR in PLS-SEM. This is mentioned in: www.smartpls.com/documentation/functionalities/goodness-of-fit
    What's your opinion: should SRMR and other values be used to determine GoF, or should we rely on CB-SEM for this?

    • @Gaskination
      @Gaskination  7 лет назад +1

      SRMR is the only measure they seem okay with because it is not built upon the covariance matrix. They are more skeptical of the other measures and are hesitant to recommend them. Model fit is more appropriate for CB-SEM, since nearly all measures of fit are built off the chi-square, which is a difference measure of covariance matrices. So, I would rely on SRMR for now. Hopefully they'll implement the BIC and AIC and GM (I think that's what it is called...) soon, as these are robust to PLS.

    • @muhammadtalhasalam7805
      @muhammadtalhasalam7805 7 лет назад

      Thank you.

    • @bizanjo
      @bizanjo 6 лет назад +1

      Hello Dr., thank you very much for all of your videos. I was wondering: would it affect the results much if I use path rather than factor weighting when running my model? What is the major difference between them when reporting in a thesis?
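
Since SRMR comes up repeatedly in this thread, here is one common formulation, shown as a sketch: the root mean square of the residuals between the observed and model-implied standardized (correlation) matrices. The matrices below are hypothetical:

```python
import numpy as np

observed = np.array([[1.00, 0.52, 0.41],   # hypothetical observed correlations
                     [0.52, 1.00, 0.38],
                     [0.41, 0.38, 1.00]])
implied  = np.array([[1.00, 0.50, 0.45],   # hypothetical model-implied correlations
                     [0.50, 1.00, 0.40],
                     [0.45, 0.40, 1.00]])

# SRMR: root mean square of the residuals over the unique (lower-triangular,
# including diagonal) elements of the standardized matrices.
idx = np.tril_indices_from(observed)
srmr = np.sqrt(np.mean((observed[idx] - implied[idx]) ** 2))
print(round(srmr, 3))  # values around 0.08 or below are usually read as acceptable
```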

  • @silvergold1817
    @silvergold1817 6 лет назад

    What to do when the SRMR is 0.093?

    • @Gaskination
      @Gaskination  6 лет назад

      Model fit issues can be due to many things, such as variable normality, factor validity, and sample size inadequacy. So, make sure you meet all these criteria first. Then, if there are still problems with model fit, check the modification indices to see if one particular item is causing the issue. If so, delete that item and check again.

    • @silvergold1817
      @silvergold1817 6 лет назад

      Thank you very much.
      I checked everything you suggested (variable normality and factor validity) and it was all good! The model fitted, but I found no significant relationships among the variables! However, the sample size is 105 and I have 8 variables. Is this sample not sufficient for PLS?

    • @Gaskination
      @Gaskination  6 лет назад

      It is a low sample size if you have 8 latent factors.

    • @silvergold1817
      @silvergold1817 6 лет назад

      Thank you very much for help, James.

  • @2375nikhil
    @2375nikhil 4 года назад

    What if an outer loading is more than 1?

    • @Gaskination
      @Gaskination  4 года назад +1

      That's called a Heywood case. In SmartPLS, there's not much you can do. However, it implies there is some error happening in the measurement. So, make sure you're not including categorical variables or variables that are completely redundant.

    • @2375nikhil
      @2375nikhil 4 года назад

      @@Gaskination Can you please help me with the factors you considered in the video? How do these factors relate to the complete model? I mean, are they all IVs, or is one of them a DV? Please explain the variables a little more.

    • @Gaskination
      @Gaskination  4 года назад +1

      @@2375nikhil The factors are CSE (computer self-efficacy), Innovativeness (with computers), and Skill Acquisition. These were the only three constructs considered in the video. If I were running the factor analysis on my own model, I would include all latent factors during the factor analysis.

    • @2375nikhil
      @2375nikhil 4 года назад

      @@Gaskination Thanks, so much Sir 🙏🏻

  • @samahmahmah6670
    @samahmahmah6670 4 года назад

    Please, I need your help. Where can I contact you?

    • @Gaskination
      @Gaskination  4 года назад

      Google me; I'm the first hit. You can find my email by following the first Google result.

  • @LifeIsBayWatch
    @LifeIsBayWatch 2 года назад

    Hello Professor Gaskin, at 10:30, when I run my "consistent PLS bootstrap" I don't get an AVE section in the report. What should I do?

    • @Gaskination
      @Gaskination  2 года назад

      Looks like the newest versions of SmartPLS have moved this over to the non-bootstrapped report under the section: "Construct Reliability and Validity"