SmartPLS 4: Validating a (reflective) measurement model

  • Published: Nov 30, 2024

Comments • 115

  • @hamidnaidjat336
    @hamidnaidjat336 2 years ago

    Thank you Mr Gaskin for all the clarifications, always the best

  • @lina.r11
    @lina.r11 1 year ago

    Hello Mr Gaskin, what happens if one of the exogenous variables has two items, one with an outer loading greater than 1 and the other with an outer loading of 0.430? The other validity values are 0.63 for AVE, 0.619 for Cronbach's alpha, and 0.750 for composite reliability (rho_c). What approach should I use for these two items? Do I keep them both or delete one of them? Thank you for sharing your insight!

    • @Gaskination
      @Gaskination  1 year ago

      Sounds like these two items are not truly reflective. If there were more items that were already trimmed, please bring them back and model them formatively. If it is just these two, then consider whether one of the items is a better measure of the construct. Then just use that one item (and then there will be no CR, AVE etc.).

    • @lina.r11
      @lina.r11 1 year ago

      Dear Mr. James Gaskin. Thank you for the clarification. I got the loading values above 1 and below 0.4 when I applied PLSc. But when I used the PLS algorithm, the loading values were around 0.9 and 0.6.

  • @KOKYONGCHEW
    @KOKYONGCHEW 3 months ago

    Good evening, Mr Gaskin. I checked other platforms to verify whether the weighting scheme should be factor or path, but I couldn't reach a conclusion. May I know the main reason to select the factor weighting scheme? Thanks for your kindness.

    • @Gaskination
      @Gaskination  3 months ago

      We use factor in measurement model analysis and we use path in structural model analysis. Here is an explanation: chatgpt.com/share/146712d5-b88d-4bc2-8609-ab72cf4fdbf2

    • @KOKYONGCHEW
      @KOKYONGCHEW 3 months ago

      @@Gaskination GOT IT, thanks

  • @anna73469
    @anna73469 1 year ago

    Thank you for the video. I have a question regarding the convergent validity test. In my analysis, for three items in a construct with loadings 0.872, 0.917, and 0.381, Cronbach's alpha is 0.595 and AVE is 0.582, while the CR (rho_a) and CR (rho_c) values are above 0.7.
    When I removed the item with the loading of 0.381, the results were all over 0.8, with rho_c at 0.916. Should I remove that third item? And how do I report this?

    • @Gaskination
      @Gaskination  1 year ago

      It seems like the third item does not contribute equally to the measurement of the construct. In this particular case, I would probably recommend removing the poorly performing item, even though that leaves you with only two indicators.
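      To make the arithmetic in this thread concrete: AVE and composite reliability (rho_c) can be computed directly from standardized outer loadings. A minimal Python sketch using the loadings reported above (0.872, 0.917, 0.381); note that SmartPLS re-estimates the remaining loadings after an item is dropped, so the "after" values here are only illustrative:

```python
def ave(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # rho_c = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)
    return s / (s + e)

before = [0.872, 0.917, 0.381]
after = [0.872, 0.917]            # low-loading item dropped

print(round(ave(before), 3))      # 0.582, matching the AVE reported above
print(round(ave(after), 3))       # 0.801
print(round(composite_reliability(after), 3))  # ~0.889 with these fixed loadings
                                  # (SmartPLS re-estimates, hence the 0.916 reported above)
```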

  • @ThanhLe-vd1uo
    @ThanhLe-vd1uo 15 days ago

    Professor, I am so confused. I have a variable with 12 items, including 2 red (flagged) items. Another variable has 11 items, including 3 red items. Those 5 red items all load above 0.6, and my Cronbach's alpha and AVE are all green, so should I remove the red items in the outer loadings? And how should I explain it when the committee asks why I kept items below 0.7?

    • @Gaskination
      @Gaskination  14 days ago

      Item loadings are fine to be below 0.700 as long as the AVE and CR are above recommended thresholds.

    • @ThanhLe-vd1uo
      @ThanhLe-vd1uo 14 days ago

      @@Gaskination Is there any citation I can offer as support for keeping the red (low-loading) items when asked by the committee?

    • @Gaskination
      @Gaskination  13 days ago

      @@ThanhLe-vd1uo Hair et al. 2010 ("Multivariate Data Analysis"), page 117, has a table (3-2) that states you can have factor loadings as low as 0.300 if you have a sample size of 350.

    • @ThanhLe-vd1uo
      @ThanhLe-vd1uo 13 days ago

      @@Gaskination my sample size is under 200 🥲

  • @chosy-p1v
    @chosy-p1v 1 year ago

    Thank you so much Mr Gaskin for this video. I have one question regarding outer loadings. One of my constructs has 4 items, and the outer loadings are 0.879, 0.878, 0.859, and 0.414. The AVE is 0.613, Cronbach's alpha is 0.767, and composite reliability is 0.840. Should I retain the item with the outer loading of 0.414, or delete it?

    • @chosy-p1v
      @chosy-p1v 1 year ago

      The other constructs in the model have AVE above 0.5 and Cronbach's alpha and composite reliability above 0.7, and the outer loading of 0.414 is the lowest one in the model.

    • @Gaskination
      @Gaskination  1 year ago

      @@chosy-p1v Yes, it is fine to delete the low loading if it is a reflective factor that still has three items remaining.

  • @nurfadillah1738
    @nurfadillah1738 1 year ago

    Thank you so much for your great explanation, but I still wonder:
    what does a negative sign mean, and how do I interpret it? I mean, aren't these values the result of squaring? How can they be negative? Thanks

    • @Gaskination
      @Gaskination  1 year ago

      A negative in the loadings of the pattern matrix: gaskination.com/forum/discussion/144/negative-loadings-in-pattern-matrix#latest
      A negative path coefficient: gaskination.com/forum/discussion/131/what-if-my-positive-hypothesis-results-in-a-negative-relationship#latest
      Or this one: gaskination.com/forum/discussion/120/why-is-my-hypothesis-test-result-significant-and-negative-when-i-expected-it-to-be-positive#latest

  • @luapnus
    @luapnus 5 months ago

    Hi Prof Gaskin, it looks to me like the model is a structural model. So in PLS-SEM, can the structural model and the measurement model be the same?

    • @Gaskination
      @Gaskination  5 months ago

      Correct, unless conducting a factor analysis in CBSEM, the measurement and structural models can be specified the same way at the same time in SEM.

  • @asmaryadia4846
    @asmaryadia4846 1 year ago

    Thanks so much Mr. Gaskin, your videos are so helpful.
    I have questions about CFA for my higher-order model. I have a variable with three sub-variables, and each sub-variable has indicators. I assume the model is reflective. My questions are:
    1. I am confused about which approach I should use: repeated indicator or two-stage. Which approach do you recommend, and what is the reference?
    2. In my case, which weighting scheme should I use?

    • @Gaskination
      @Gaskination  1 year ago

      1. Yes, the repeated indicator approach with two-stage to validate the higher-order factor.
      2. The factor weighting scheme for factor validation, and the path weighting scheme for path testing.

    • @asmaryadia4846
      @asmaryadia4846 1 year ago

      @@Gaskination should I use path testing too, Sir?

    • @Gaskination
      @Gaskination  1 year ago

      @@asmaryadia4846 path testing is for testing structural hypotheses, or hypotheses between constructs.

    • @asmaryadia4846
      @asmaryadia4846 1 year ago

      @@Gaskination thanks so much Sir

  • @a.rizalkhabibi9416
    @a.rizalkhabibi9416 1 year ago

    One of the variables in my model has a loading below 0.2, which is very low compared to the other items, which average above 0.7. What does it mean? Thank you

    • @Gaskination
      @Gaskination  1 year ago

      If it is a loading for an indicator on a factor, then it implies that this indicator does not strongly correlate with the other indicators on that factor. Check whether it was reverse-coded or worded in a very different way.

  • @Moe4572
    @Moe4572 1 year ago

    Dear Mr. Gaskin,
    I am currently in a serious struggle with my master's thesis. If you read this comment, could you help me assess the severity of the issues in my thesis model? In particular: How important are SRMR and NFI? Can I really ignore the fit measures? No matter what I do, the NFI is always problematic.
    I am trying to examine a possible relationship between a construct and another group of constructs that have not been connected before in international marketing.
    I chose PLS-SEM because it is more robust to non-normal data: I use Likert scales and additionally expected quite some skewness and heterogeneity in the data, because of differences in brand loyalty (BL) among the respondents. If BL is omitted, my SRMR is 0.075 (PLSc; 0.076 for PLS), but the NFI is only 0.830 (PLS). When I include BL as a binary control variable in the model, the SRMR rises to 0.190 and the NFI takes the strange value of 1.050 (PLS).
    I am reflectively measuring the relationship of 4 first-order constructs using the PLSc algorithm. (However, NFI only shows when I use the PLS algorithm instead of PLSc, which at the same time greatly improves all other validity measures.)
    I am absolutely puzzled and don't know what to do.
    You would be my absolute hero if you could help me out! 😁
    Greets Moe
    Further info (if necessary or interesting, I am happy to share):
    Sample size: 191 (quite some missing data, but no big changes with the missing-data treatment method)
    Loadings: Some loadings are rather low (PLSc; only one item below 0.708 with regular PLS), but deleting them does not improve construct reliability. Further, I am already struggling with content validity, because an adapted measurement scale (construct 1) turned out to be quite unreliable. I had to drop 2 of 3 expected dimensions and turn it into a first-order construct, but I think I can justify theoretically continuing with only one dimension.
    Construct 1: 0.662 - 0.570 - 0.815 - 0.685; Construct 2: 0.615 - 0.771 - 0.797 - 0.882
    Convergent validity: Measures are all fine except the AVE of construct 1 (0.471 with PLSc; about 0.6 with regular PLS)
    Discriminant validity: Everything is fine (HTMT & Fornell-Larcker).
    Multicollinearity: There are some issues with one construct, but to my understanding (Hair et al.) this should only be an issue with formative measures. Construct 3 VIFs: 8.084 - 8.181 - 3.334 - 3.931 - 6.329 - 2.513
    All expected path relationships are significant (worst is 0.001)

    • @Gaskination
      @Gaskination  1 year ago

      Some thoughts and responses:
      1. VIF is only relevant for formative factors or for prediction of an outcome via multiple IVs.
      2. Model fit is not relevant to PLS, but if you want it, you can run the new CB-SEM model. Here is a quick video about it: ruclips.net/video/FS1D4KmmABU/видео.html
      3. Not all fit measures need to be met. If enough are adequate, then it is probably sufficient. I prefer SRMR, CFI, and RMSEA.
      4. AVE is a strict measure of convergent validity. CR is probably sufficient evidence for convergence.
      5. Dropping factors due to failed loadings might imply that the factor would better be specified as formative.
      Hope this helps.
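      On point 1: for readers who want to check collinearity outside SmartPLS, the VIF of each indicator is 1/(1 − R²), where R² comes from regressing that indicator on the others. A rough sketch with synthetic (hypothetical) data, where the first two columns are deliberately collinear:

```python
import numpy as np

def vif(X):
    # VIF for column j = 1 / (1 - R^2) from regressing column j on the others
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 * 0.9 + rng.normal(scale=0.3, size=200)  # deliberately collinear with x1
x3 = rng.normal(size=200)                         # roughly independent
# First two VIFs come out high (collinear pair); the third stays near 1
print([round(v, 2) for v in vif(np.column_stack([x1, x2, x3]))])
```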

    • @Moe4572
      @Moe4572 1 year ago

      @@Gaskination Dear Mr. Gaskin, I really can't thank you enough! Especially considering the speed of the reply! This really helps me right now! :)

  • @구태봉
    @구태봉 1 year ago

    Thanks for the great video
    I have one question
    I'm doing a study on how different predictors affect intention to use through perceived value, and the effect size of perceived value is 1.066, while the effect size of other factors in the model is very low. I know that researchers usually expect the effect size to be in the range of 0-1. Can you provide an explanation for this?
    If this is normal, I would appreciate any references to support this.
    I really appreciate the opportunity to ask questions.
    I look forward to your response

    • @Gaskination
      @Gaskination  1 year ago

      Effect size can be more than 1, though it is uncommon. Here is a discussion about it: forum.smartpls.com/viewtopic.php?t=1902#:~:text=A%20higher%20effect%20size%20than,effect%20on%20your%20endogenous%20variable.

  • @SJain-v9r
    @SJain-v9r 8 months ago

    Thanks so much for sharing the knowledge.
    Analysis of my empirical model in SmartPLS 4 shows that the rho_a and rho_c (composite reliability) values are above 0.95 (between 0.95 and 0.965) for two of my exogenous constructs and one endogenous construct. All other parameters are within range for both the measurement and structural models (including VIF for the inner model), and there is no CMB. Please advise whether this is a matter of concern and how it can be addressed. Is there any resource/video or paper that can help me with the correction process if one is needed? Thanks so much.

    • @Gaskination
      @Gaskination  8 months ago

      I don't think I understand the problem. VIF should be low and CMB is not there. So what is the problem?

    • @SJain-v9r
      @SJain-v9r 8 months ago

      @@Gaskination Some researchers advocate that composite reliability and rho_a values above 0.95 are not desirable in the measurement model, indicate multicollinearity, and need to be corrected for. Please advise your opinion on this. As highlighted above, all other parameters (no CMB, low VIF for the inner model, etc.) are within the recommended range in my measurement and structural models (except CR and rho_a being above 0.95).
      Quoting a few resources:
      "values of 0.95 and higher are problematic, since they indicate that the items are redundant, thereby reducing construct validity (Diamantopoulos et al., 2012; Drolet and Morrison, 2001)."
      "Values above 0.90 (and definitely above 0.95) are not desirable because they indicate that all the indicator variables are measuring the same phenomenon and are therefore not likely to be a valid measure of the construct. Specifically, such composite reliability values occur if one uses semantically redundant items by slightly rephrasing the very same question. As the use of redundant items has adverse consequences for the measures' content validity (e.g., Rossiter, 2002) and may boost error term correlations (Drolet & Morrison, 2001; Hayduk & Littvay, 2012), researchers are advised to minimize the number of redundant indicators." - Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A Primer on Partial Least Squares Structural Equation Modeling, p. 112.

    • @Gaskination
      @Gaskination  8 months ago

      @@SJain-v9r You can either justify the high CR by showing VIF is sufficiently low, or you can drop redundant items to achieve the "ideal" number of items stated by Hair et al. for a reflective latent factor: four.

    • @SJain-v9r
      @SJain-v9r 8 months ago

      @@Gaskination Thank you for the suggestions. Just to clarify:
      1. I can justify high CR by showing that the VIF of the inner model is sufficiently low, correct?
      2. Can you please share the reference by Hair et al. where they mention the 'ideal' number of items for a reflective construct?
      3. For your second suggestion, does it imply I would need to drop items with the highest outer/factor loadings? Also, I believe the paper then needs to report only the remaining items, and the structural analysis runs on the reduced model, correct? Any illustrative paper you may recommend?

    • @Gaskination
      @Gaskination  8 months ago

      @@SJain-v9r 1. Yes, if the concern of high CR is multicollinearity, then a low VIF should alleviate that concern. 2. statwiki.gaskination.com/index.php?title=Citing_Claims#Four_Indicators_Per_Factor 3. If you were to drop items, you would want to look at the wording of the items to see which ones are truly redundant.

  • @padmavathychandrasekaran8958
    @padmavathychandrasekaran8958 1 year ago

    Dear Professor, suppose I have perceived severity (PS) with four items as a moderator. Should I NOT include PS in the measurement model validation? Isn't that right, Professor? Thank you in advance

    • @Gaskination
      @Gaskination  1 year ago

      If it is a latent factor, then I would include it in the measurement model.

  • @padmavathychandrasekaran8958
    @padmavathychandrasekaran8958 1 year ago

    Dear Professor, if I have a continuous moderator in the model, should I include it when validating the measurement model? Or can I bring it in when doing the moderation analysis?

    • @Gaskination
      @Gaskination  1 year ago

      Only latent factors should be part of the measurement model validation. You can bring single measures into the model after validating the factors.

    • @padmavathychandrasekaran8958
      @padmavathychandrasekaran8958 1 year ago

      @@Gaskination Thank you for your clarification, Prof.

  • @g0916086082
    @g0916086082 2 years ago

    When will SmartPLS 4 be launched? It's so much easier to use.

    • @Gaskination
      @Gaskination  2 years ago

      Not available yet. I think they plan to release it in early June 2022.

  • @hazuvlen
    @hazuvlen 4 months ago

    Hi, thank you for the great video! I want to ask whether we should use all the data collected or just part of it to test reliability and validity? Thank you

    • @Gaskination
      @Gaskination  4 months ago

      It is common to use all the data, though some conduct measurement model and structural model in two stages with either two datasets (considered most rigorous) or random samples of the same dataset. Personally, I've always used all the data.

  • @farishakim6759
    @farishakim6759 8 months ago

    I have a question about the AVE for a construct. Since it does not pass the threshold of 0.5, we need to look at the loadings of the associated items to see which ones do not pass 0.7, and if they don't, we delete them. So how can we justify in our thesis why we deleted an item if, conceptually speaking, the item is mentioned throughout the literature?

    • @Gaskination
      @Gaskination  8 months ago

      Before deleting an item, check CR. AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

    • @JanisFranxis
      @JanisFranxis 6 months ago

      I'd be interested in this as well, but somehow Mr. Gaskin's answer doesn't show up for me for this question :(

    • @Gaskination
      @Gaskination  6 months ago

      @@JanisFranxis Here is the reply from above: Before deleting an item, check CR. AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.

  • @Gug_family
    @Gug_family 2 years ago

    Thanks for the video. Looking at discriminant validity, HTMT and cross-loadings are good. Still, one of the Fornell-Larcker criterion values is slightly higher than one construct's square root of AVE. So I looked up the outer model correlations in the residuals and found one value is about 0.3. If I remove one of the problem indicators, everything becomes good. However, I would like to know if this is how I should proceed. If so, how do I report this? If this is not an adequate procedure, what else can I do? I really appreciate any help you can provide.

    • @Gaskination
      @Gaskination  2 years ago +1

      This approach is fine as long as that latent factor had enough items that losing one did not bring it below three. You can simply report that the Fornell-Larcker test indicated discriminant validity would be achieved only if this item were omitted, and that omitting it was permitted because it was part of a reflective factor, for which all indicators are interchangeable. Thus, omitting one of them does not change the trait being measured. You can cite Hair et al. 2010, or Lowry and Gaskin 2014, or Jarvis et al. (about misspecification).
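      For context, the Fornell-Larcker check itself is simple to reproduce: each construct's square root of AVE must exceed its correlations with every other construct. A small sketch with hypothetical AVEs and construct correlations (the names and numbers below are made up for illustration):

```python
import math

# Hypothetical AVEs and construct correlations for three factors
aves = {"A": 0.62, "B": 0.55, "C": 0.58}
corr = {("A", "B"): 0.71, ("A", "C"): 0.48, ("B", "C"): 0.80}

def fornell_larcker_violations(aves, corr):
    # Flag any pair whose correlation exceeds either construct's sqrt(AVE)
    bad = []
    for (i, j), r in corr.items():
        if r > math.sqrt(aves[i]) or r > math.sqrt(aves[j]):
            bad.append((i, j, round(r, 2)))
    return bad

# Here sqrt(AVE_B) ≈ 0.742 and sqrt(AVE_C) ≈ 0.762, so the B-C
# correlation of 0.80 fails the criterion
print(fornell_larcker_violations(aves, corr))  # [('B', 'C', 0.8)]
```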

    • @Gug_family
      @Gug_family 2 years ago

      @@Gaskination Yes, it is one of the six indicators. I really do appreciate all the provided information. Truly helpful!

  • @abdullahalmahroqi8166
    @abdullahalmahroqi8166 1 year ago

    Thank you professor.
    I have a question.
    When I run the standardized PLS algorithm I get acceptable results in terms of discriminant validity (HTMT), which is not the case when I run the unstandardized PLS algorithm.
    Can I proceed with the analysis using the standardized results? (My study is on factors influencing loyalty.)

  • @장두영-p4b
    @장두영-p4b 1 year ago

    Thank you for teaching SmartPLS 4.0. I have a question regarding the convergent validity test. In my analysis, the Cronbach's alpha and CR (rho_a) values are below 0.7, while CR (rho_c) is above 0.7. Can I conclude that the construct reliability is established because the rho_c value is above 0.7, despite the lower values for Cronbach's alpha and rho_a?

    • @Gaskination
      @Gaskination  1 year ago

      This is common to have rho_c be the highest. It is probably sufficient, though it would be good to have multiple points of evidence, such as a strong AVE.

    • @haiyenle1511
      @haiyenle1511 1 year ago

      @@Gaskination Can I ask what the difference is between rho_a and rho_c?

  • @sochtosach6861
    @sochtosach6861 2 months ago

    Hi.
    Is there any way I can validate an instrument in SmartPLS, since it gives overall goodness-of-fit indices for the complete model?

    • @Gaskination
      @Gaskination  2 months ago

      You can use the CB-SEM side. It has full model fit metrics.

    • @sochtosach6861
      @sochtosach6861 2 months ago

      @@Gaskination Thank you so much. In my model, I am analyzing the impact of internal marketing dimensions on my consequence variable and have used it at first order. I have been asked why I have not used it at second order, and I am not sure how to defend it. My focus has been on the individual dimensions since the beginning, so I have been working in that direction. Let me know if I can somehow defend my work with regard to this concern.
      Thank you so much for always guiding and supporting.

    • @Gaskination
      @Gaskination  2 months ago

      @@sochtosach6861 If it works well at the first order, you can argue that more precise theorizing can be done at that level. For more reasons, check out my conversation with chatgpt o1: chatgpt.com/share/66e73b3e-a3d4-800b-bc01-4108f1f80ca5
      I've reviewed the output and agree with it.

    • @sochtosach6861
      @sochtosach6861 2 months ago

      @@Gaskination Thank you so much for your guidance. So, for the purpose of scale validation, I am going to show CFA results along with factorial invariance using MICOM. Is there any comment you would like to give on that approach? Also, I am not taking my multistage sampling parameters (city and bank name) into consideration while analyzing the results. I created groups for age, income, and qualification, since these are also my control variables. Further, gender is my mediator on one specific path; therefore, I have included that in the final model and have not checked MICOM with respect to gender. Please guide me if I am on the right path.

    • @Gaskination
      @Gaskination  2 months ago

      @@sochtosach6861 MICOM is useful for determining measurement invariance. Using gender as a mediator does not make sense unless you are studying gender orientation among a population of gender fluid or transitional people. If you are in any other population, the number of participants who would have a gender that could be the causal outcome of any non-deterministic (non-physiological) independent variable would be too small to provide sufficient variance for meaningful analysis. Mediators must be the causal outcome of the independent variable.

  • @syedsana123
    @syedsana123 1 year ago

    Hello James, can you please explain how to address collinearity issues, and what should be done if the NFI value comes out below 0.90?

    • @Gaskination
      @Gaskination  1 year ago

      NFI is not really relevant for smartpls. As for collinearity, you can try to better distinguish between the factors by checking the loadings matrix for high cross-loadings between factors. If there is a manifest variable that loads highly on two factors, it might cause collinearity issues.

    • @syedsana123
      @syedsana123 1 year ago

      @@Gaskination Thank you very much for your instant response. Then what are the measures by which we can check model fit in PLS 4?

    • @Gaskination
      @Gaskination  1 year ago

      @@syedsana123 Model fit and PLS are not very compatible. Model fit is based on the covariance matrix, but PLS is not a covariance-based SEM method. So, even the creators of SmartPLS recommend against trying to assess model fit in PLS.

  • @mohamedzaki5795
    @mohamedzaki5795 2 years ago

    Thank you, Mr. Gaskin. Can I use a dependent variable that is a one-item construct in PLS?

    • @Gaskination
      @Gaskination  2 years ago +1

      Yes. That is fine.

    • @mohamedzaki5795
      @mohamedzaki5795 2 years ago

      @@Gaskination thank you very much for your kind help

    • @soehartosoeharto8471
      @soehartosoeharto8471 2 years ago

      @@Gaskination Are there any citations to support using a one-item construct in PLS?

    • @Gaskination
      @Gaskination  2 years ago +1

      @@soehartosoeharto8471 None needed. It is not uncommon practice.

    • @soehartosoeharto8471
      @soehartosoeharto8471 2 years ago

      @@Gaskination OK, thank you. I just remember that last time on StatWiki there was a citing claim for a minimum of at least 3 items per construct in CB-SEM.

  • @BiErLiN99
    @BiErLiN99 1 year ago

    What if deleting an item raises the AVE above the threshold, but simultaneously decreases Cronbach's alpha below the threshold? In my case the AVE is 0.490 before and 0.510 after deleting an item with a loading of 0.613 (not terribly low), but Cronbach's alpha decreases from 0.784 to 0.656. Is there a "right" approach to this? I hope someone can help me, as I am working with PLS-SEM and SmartPLS for the first time.

    • @Gaskination
      @Gaskination  1 year ago +1

      I would always lean towards keeping items. So, in this case, I would argue that the AVE is close enough. There is precedence for this with composite reliability (similar to Cronbach's Alpha). AVE is a strict measure of convergent validity. Malhotra and Dash (2011) note that "AVE is a more conservative measure than CR. On the basis of CR alone, the researcher may conclude that the convergent validity of the construct is adequate, even though more than 50% of the variance is due to error.” (Malhotra and Dash, 2011, p.702).
      Malhotra N. K., Dash S. (2011). Marketing Research an Applied Orientation. London: Pearson Publishing.
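      The tradeoff described in this thread (AVE up, alpha down after trimming) can be illustrated numerically. Below is a sketch under a hypothetical one-factor model, in which implied inter-item correlations are products of standardized loadings and standardized alpha follows from the average inter-item correlation; the loadings are made up for illustration, not the poster's data:

```python
def ave(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

def standardized_alpha(loadings):
    # Standardized Cronbach's alpha from the average inter-item correlation,
    # with correlations implied by a one-factor model: r_ij = l_i * l_j
    k = len(loadings)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_bar = sum(loadings[i] * loadings[j] for i, j in pairs) / len(pairs)
    return k * r_bar / (1 + (k - 1) * r_bar)

full = [0.75, 0.72, 0.70, 0.613]   # hypothetical loadings
trimmed = full[:3]                  # weakest item dropped

# AVE rises (0.487 -> 0.524) while alpha falls (0.789 -> 0.767):
# dropping the item removes its error variance from AVE, but alpha
# is penalized for having fewer items
print(round(ave(full), 3), round(standardized_alpha(full), 3))
print(round(ave(trimmed), 3), round(standardized_alpha(trimmed), 3))
```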

  • @padmavathychandrasekaran8958
    @padmavathychandrasekaran8958 1 year ago

    Professor, I have a few doubts. 1. When my goal is just to test model validity (not theory testing), can I use SmartPLS over CB-SEM? 2. When I use SmartPLS for measurement model validity, should I state the results of R2 and RMR, which are always poorer than in CB-SEM? Please advise.

    • @Gaskination
      @Gaskination  1 year ago

      1. If all factors are reflective, CB-SEM is a better choice because it allows for model fit tests. However, SmartPLS can do most validity tests for reflective models as well (just not model fit).
      2. You can ignore those tests in SmartPLS for validating your model. Instead focus on convergent and discriminant validity and reliability.

  • @ConnetieAyesigaNinaz
    @ConnetieAyesigaNinaz 1 year ago

    Does anyone know why I keep getting the singular matrix error? Kindly help, as I cannot get past this error to assess the measurement model.

    • @Gaskination
      @Gaskination  1 year ago

      gaskination.com/forum/discussion/169/what-might-cause-a-singularity-matrix

  • @forever763
    @forever763 1 year ago

    May I know what exactly HTMT inference is? And how can we assess HTMT inference through bootstrapping?

    • @Gaskination
      @Gaskination  1 year ago

      The HTMT ratio is calculated by comparing the average correlation between indicators (observed variables) of different constructs (heterotrait) to the average correlation between indicators of the same construct (monotrait).
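      That ratio can be sketched in a few lines. Below is a minimal illustration with a hypothetical indicator correlation matrix (HTMT inference, asked about above, would bootstrap this statistic to build a confidence interval; that step is omitted here):

```python
import math

def htmt(corr, idx_a, idx_b):
    # corr: full indicator correlation matrix (list of lists)
    # idx_a / idx_b: indices of the indicators belonging to each construct
    def mean(xs):
        return sum(xs) / len(xs)
    # Heterotrait: correlations between indicators of different constructs
    hetero = [corr[i][j] for i in idx_a for j in idx_b]
    # Monotrait: off-diagonal correlations within each construct
    mono_a = [corr[i][j] for i in idx_a for j in idx_a if i < j]
    mono_b = [corr[i][j] for i in idx_b for j in idx_b if i < j]
    return mean(hetero) / math.sqrt(mean(mono_a) * mean(mono_b))

# Hypothetical correlations: indicators 0-2 belong to construct A, 3-5 to B
R = [
    [1.00, 0.62, 0.58, 0.42, 0.40, 0.38],
    [0.62, 1.00, 0.60, 0.44, 0.41, 0.39],
    [0.58, 0.60, 1.00, 0.43, 0.40, 0.37],
    [0.42, 0.44, 0.43, 1.00, 0.66, 0.63],
    [0.40, 0.41, 0.40, 0.66, 1.00, 0.61],
    [0.38, 0.39, 0.37, 0.63, 0.61, 1.00],
]
print(round(htmt(R, [0, 1, 2], [3, 4, 5]), 3))  # 0.656, below the 0.85 threshold
```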

    • @forever763
      @forever763 1 year ago

      @@Gaskination Are HTMT0.85 and HTMT0.90 both considered one criterion test, and HTMT inference another criterion test? Can I just use HTMT0.85 and HTMT0.90 without using HTMT inference?

    • @Gaskination
      @Gaskination  1 year ago +1

      @@forever763 I don’t think I know what you mean by HTMT inference. If the HTMT values are less than .85, then there is no discriminant validity issue.

  • @fatimafifi2398
    @fatimafifi2398 5 months ago

    Sir, what is the solution when CR and AVE are above 0.95 and the factor loadings are above 0.92?

    • @Gaskination
      @Gaskination  5 months ago +1

      Wow, that's really high. In general, if the factor is reflective, then this is not a problem because it just means the items are all very consistently measuring the same dimension (which is the purpose of reflective measurement).

  • @ghadaeltazy735
    @ghadaeltazy735 2 years ago +1

    Hey again 😀
    I noticed that you used the consistent PLS-SEM algorithm (PLSc), not the first choice, the "PLS-SEM algorithm"! My question is: is there a real difference, or can both be used? 😁
    Thanks in advance

    • @Gaskination
      @Gaskination  2 years ago

      For models that have all reflective factors, PLSc should be used. For models that include any non-reflective constructs, the regular PLS algorithm should be used.

    • @ghadaeltazy735
      @ghadaeltazy735 2 years ago

      @@Gaskination Thank you a dozen 😀

    • @abdullahgenc2929
      @abdullahgenc2929 1 month ago

      @@Gaskination Dear Gaskin, in a higher-order analysis, all my constructs are reflective, and when I use PLSc, I have to remove a lot of items to get my AVE and outer loadings to the values I want. When I use standard PLS, I reach appropriate AVE and outer loadings by removing far fewer indicators. Would there be a problem if I use standard PLS? Do I need to cite a source that says there will be no problem when I use the standard algorithm?

    • @Gaskination
      @Gaskination  1 month ago

      @@abdullahgenc2929 If all factors are reflective, you should be using CB-SEM, which is also available now in SmartPLS 4.

    • @abdullahgenc2929
      @abdullahgenc2929 1 month ago

      @@Gaskination Thank you, I will try it tonight.

  • @wibowo_ha
    @wibowo_ha 4 months ago

    What happens when the latent variables have good reliability and validity, but many hypotheses are not supported? Help me please

    • @Gaskination
      @Gaskination  4 months ago +1

      It just means that those variables are not related to each other. This can happen even with good data and valid factors.

    • @wibowo_ha
      @wibowo_ha 4 months ago

      @@Gaskination Thank you very much

  • @dubai815
    @dubai815 2 years ago

    How can I find SmartPLS 4? It's not available on its official website. Please let me know the source for downloading.

    • @Gaskination
      @Gaskination  2 years ago +1

      Not available yet. I think they plan to release it in early June 2022.

    • @dubai815
      @dubai815 2 years ago

      @@Gaskination Okay, noted with thanks

  • @Hashimhamza007
    @Hashimhamza007 2 years ago

    It looks like this analysis is very similar to CFA: you look for convergent validity and discriminant validity.
    However, the model you drew in this video is not similar to the CFA models you created in your AMOS videos.
    In your AMOS videos, you place all the latent variables vertically and connect each of them to all the others with double-headed arrows for CFA.
    But in this video, the model is not like the CFA model in AMOS. I wonder why they are different.

    • @Gaskination
      @Gaskination  2 years ago +2

      Correct. AMOS is a covariance-based SEM software that allows for explicit control over correlations. However, PLS does not include the covariance matrix in its default algorithm. It can still produce the correlation matrix and we can use it for factor validities.

  • @noonatatao687
    @noonatatao687 2 years ago

    Thank you very very much

  • @talhamansoor7108
    @talhamansoor7108 2 years ago

    How do I download SmartPLS 4?

    • @Gaskination
      @Gaskination  2 years ago +1

      Not available yet. I think they plan to release it in early June 2022.