Validating Formative Models in SmartPLS 3

  • Published: 28 Nov 2024
  • In this video I show how to validate models with formative factors in SmartPLS 3

Comments • 175

  • @Gaskination
    @Gaskination  4 years ago

    Here's a fun pet project I've been working on: udreamed.com/. It is a dream analytics app. Here is the RUclips channel where we post a new video almost three times per week: ruclips.net/channel/UCiujxblFduQz8V4xHjMzyzQ
    Also available on iOS: apps.apple.com/us/app/udreamed/id1054428074
    And Android: play.google.com/store/apps/details?id=com.unconsciouscognitioninc.unconsciouscognition&hl=en
    Check it out! Thanks!

  • @alessandrocomai1855
    @alessandrocomai1855 10 days ago

    Thank you, James, for the instructional video. I am a bit new to PLS-SEM and am just learning with one dataset. The model has mostly reflective latent variables. I changed a construct of 3 variables from reflective to formative and found that the p-values of 2 variables are insignificant (0.851 and 0.876). However, the collinearity check is okay; all the variables are green (1.939 and 2.064). Do you think I should (a) accept it, (b) eliminate the two variables and keep only one, or (c) make them reflective? Thank you.

    • @Gaskination
      @Gaskination  10 days ago +1

      Typically whether something is formative or reflective is determined conceptually, rather than statistically. So, I would just stick with whatever is consistent with how they are measured. Usually it is easier to model factors reflectively.

    • @alessandrocomai1855
      @alessandrocomai1855 9 days ago

      ​@@Gaskination Thank you very much. Much appreciated

  • @InspiriedByNature
    @InspiriedByNature 5 years ago

    Hello James, thank you for all your videos. They are very clear and informative, they have helped me a lot with my research. I have a few questions about the validation of the formative models. Hair et al. (2016) suggest validating formative constructs by correlating it with its reflective measure. One approach is to include a question in the survey that may be then used as a single-item endogenous construct for this purpose. I added the question that was supposed to reflect four formative constructs and then used it to validate each of them. The correlations turned out to be low. I then realized that there was a major issue with the single-item construct, and there was a reasonable explanation for the low correlations.
    Since I didn't have any other variables I could use to validate the constructs, I created a composite measure of all four constructs (multiplied their means) and then correlated it with each construct separately. All correlations exceeded the 0.7 level. How appropriate is this solution? If this doesn't work, then what can be done? Should I create an index for each of the constructs and then validate them using the indices?
    And another question: if the model is comprised of four first-order constructs that form a single second-order formative construct, how do you validate such a model? The point of the model is to develop an index and not to test some other relationships with reflective measures. Should I stop after validating the first- and second-order constructs, or is there another statistic I need to use? R-squared is irrelevant in this case since the first-order constructs perfectly explain the second-order construct. Thank you for all your help.

    • @Gaskination
      @Gaskination  5 years ago

      1. Since the single item correlation didn't work out, you could just use the significance and VIF approach shown in this video.
      2. I'm not sure if the index approach is appropriate. Logically it makes some sense that they at least move together (0.700 correlations), but I'm not aware of any publications that take this approach.
      3. You are correct. No need for R2. Just use the significance and VIFs.
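To make the index approach discussed above concrete, here is a minimal sketch with hypothetical construct scores; as noted in the reply, this check is not an established published procedure:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
base = rng.normal(size=200)
# four hypothetical construct scores that share a common core
scores = pd.DataFrame({f"construct_{i}": base + rng.normal(scale=0.5, size=200)
                       for i in range(1, 5)})

composite = scores.mean(axis=1)     # a product of the means is another option
corrs = scores.corrwith(composite)  # correlate each construct with the composite
print(corrs.round(2))               # values > 0.7 suggest the constructs move together
```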

    • @InspiriedByNature
      @InspiriedByNature 5 years ago +1

      @@Gaskination Thank you for the advice! I very much appreciate your help. I picked up the index approach from some online discussion but was not sure how appropriate this approach was. As you said, it did make some sense, but I could not find any support for the approach in the literature either. Thank you again!

  • @ghadaeltazy735
    @ghadaeltazy735 2 years ago

    Dear Dr. Gaskin,
    I would like to thank you for this great video that shows two of the measurement model checks for formative constructs as mentioned in Hair et al.'s book 😊 However, I have two questions that I would like to ask 😅 First, you did not mention convergent validity, which is good for me because I did not know before distributing the survey that I should include a generally accepted reflective indicator for the focal construct, in order to then compare the reflective and formative indicators. So, I just need to know: is there a problem if I excluded it, although my source (the various editions of Hair et al.'s book) mentions it? 😁 Second, my formative construct "brand equity" is composed of three indicators (loyalty, perceived quality, and brand association), and each of these three is actually measured by several indicators, not a single item. So, should I first apply a reflective measurement model to each of these three, and then apply the formative measurement model to the focal construct "brand equity"? 🤔
    Hope you answer 🙈
    and thanks in advance ☺

    • @Gaskination
      @Gaskination  2 years ago

      While it is best practice to include a generic measure to compare the formative measures, it is not currently universally practiced. So, you are probably okay. Regarding brand equity, yes, that is the correct approach. I also talk a bit about it here: ruclips.net/video/LRND-H-hQQw/видео.html

    • @ghadaeltazy735
      @ghadaeltazy735 2 years ago

      @@Gaskination thanksss a million Dr. Gaskin for replying 😃😀

  • @ahmedm.elayat5611
    @ahmedm.elayat5611 4 days ago

    Thanks a lot, Dr. James. I would be grateful if you could help me with the following question:
    Regarding the assessment of my formative measurement model (which has 4 dimensions), when checking the outer weights, I found that one of the indicators has a significant weight and p-value (0.001), but its outer loading is 0.3 (below 0.7). How does an indicator have a significant weight and p-value while its outer loading is less than 0.7?
    Additionally, another indicator does not have a significant weight (0.004, p-value = 0.94), but its outer loading is above 0.7. Shall I remove this indicator?
    Hope you answer and thanks in advance!

    • @Gaskination
      @Gaskination  3 days ago

      0.700 is not a firm cutoff. It is just a recommended target for reflective indicator coefficients. Formative indicator coefficients do not have the same target.

    • @ahmedm.elayat5611
      @ahmedm.elayat5611 1 day ago

      @@Gaskination Thanks so much prof

  • @rekharisaac
    @rekharisaac 6 months ago

    @Gaskination ...I have been learning Smart PLS through your videos. It has been a great learning experience. Thank you so much. I would be grateful if you could help clear the following doubts:
    Question 1: Can it be deleted if the outer weight of a formative indicator is negative and not significant?
    Question 2: Latent Variable constructs that are created by asking respondents (end consumers) to rate which attributes (or features or activities or factors) they consider important (concerning a product/brand/company) are formative, correct? Essentially, the measurement model with formative indicators involves the construction of an index (list of items) rather than a scale. Have I understood this correctly?

    • @Gaskination
      @Gaskination  6 months ago +1

      Hi! Sorry for the slow reply. It is a holiday weekend in the US. 1. There are debates on this. The short answer is yes, but just understand that the nature of what you are measuring has changed. 2. I would use a set of measures like that in a cluster analysis or create preference profiles or something like that, rather than trying to put them into a latent factor.

    • @rekharisaac
      @rekharisaac 5 months ago

      @@Gaskination Thank you!

  • @monaghaffari7072
    @monaghaffari7072 2 years ago

    Hi James,
    Thank you so much for your video.
    Regarding formative constructs in second-order models, I have a general question. What will happen if one of the first-order constructs remains the same across the entire sample (in my case, the accuracy of the prediction by the machine)? My motivation for including it in my model is that, if this model is applied to examine another sampling where the values are different (e.g., users come from different platforms), then it must be retained.
    I appreciate your response very much.

    • @Gaskination
      @Gaskination  2 years ago +1

      Good question! Variables must vary. If they do not vary then they will create a problem in the calculation. So, you might propose it be included, but then have a footnote for your implementation to note that you needed to omit it due to no variance on that variable.
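The "variables must vary" point is easy to screen for before estimation. A minimal sketch, with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({
    "accuracy": [0.95, 0.95, 0.95, 0.95],  # constant across the whole sample
    "trust_1":  [4, 5, 3, 4],
    "trust_2":  [5, 5, 2, 4],
})

# indicators with a single unique value carry no variance and break estimation
zero_var = [col for col in df.columns if df[col].nunique() <= 1]
print(zero_var)  # ['accuracy'] -> omit from the model and note it in a footnote
```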

    • @monaghaffari7072
      @monaghaffari7072 2 years ago

      ​@@Gaskination Thank you, James. Your advice is very useful. Could you please provide me with any resources that I can use to reference this?

    • @Gaskination
      @Gaskination  2 years ago +1

      @@monaghaffari7072 This probably doesn't need a reference. It is the most fundamental characteristic of a variable. It is so fundamental, it shares a root word (variance and variable)

  • @shaangenius
    @shaangenius 4 years ago

    Dear James,
    1. I have a 2nd order construct (formative-formative)...in case of repeated indicator approach, all items measuring first order formative factors when loaded on 2nd order in the first step: this mode of loading will be REFLECTIVE or FORMATIVE ? pls suggest.
    2. after first step, we can get the latent variable scores (standardized scores) and then use them for checking effect of the 2nd order factor on any other theoretically connected factor for establishing Criterion/ Nomological validity of 2nd order or even the reflective measure of its own self for redundancy assessment?

    • @Gaskination
      @Gaskination  4 years ago

      1. All repeated indicators should be modeled reflectively.
      2. Yes, except I don't understand the last part about using a single score for redundancy assessment.

  • @YY-vy2bq
    @YY-vy2bq 4 years ago

    Hi James, some questions, please: 1) For the formative model with all formative constructs, should I mention the model fit (e.g. SRMR, NFI)? 2) There are 2 things of model fit (d_ULS, d_G), don't know if they are relevant to formative and how to use them. 3) For formative items, can I delete some items if I found they are not good for the result?

    • @Gaskination
      @Gaskination  4 years ago

      Model fit is not relevant for a model with all formative factors. As for removing items from a formative factor, it is probably not a good idea since each item helps define the construct.

    • @YY-vy2bq
      @YY-vy2bq 4 years ago

      Got it, thanks a lot! If a formative indicator's p-value is too high (not less than 0.05) and I have to delete it, what's a good way to explain that?

    • @YY-vy2bq
      @YY-vy2bq 4 years ago

      Would it be possible to say, as they have good VIFs so no delete is needed?

    • @Gaskination
      @Gaskination  4 years ago

      Chan Yin Yi, essentially you are correct. We don't delete the items based on p-values for formative factors. Instead, we rely on VIF.

  • @YY-vy2bq
    @YY-vy2bq 3 years ago

    Hi James, for the formative model, is it necessary to do EFA? How can I explain to the reviewer for not doing so? Any reference? Thanks!

    • @Gaskination
      @Gaskination  3 years ago

      EFA is only for reflective latent factors because an EFA assumes item correlations within factors, but formative factors do not require items to be correlated. Here are some references:
      Jarvis, C. B., MacKenzie, S. B., and Podsakoff, P. M. 2003. "A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research," Journal of Consumer Research (30:2), pp. 199-218.
      Petter, S., Straub, D. W., and Rai, A. 2007. "Specifying Formative Constructs in Information Systems Research," MIS Quarterly (31:4), pp. 657-679.

  • @urielaldavapardave4582
    @urielaldavapardave4582 2 years ago

    Good evening, Dr. James. I have a question: can a formative indicator have a negative weight?

    • @Gaskination
      @Gaskination  2 years ago

      Yes, it can. However, this means that this particular indicator is inversely related to the other indicators. In such a situation, it would undermine the measurement. So, consider whether it would be better to reverse the values of that indicator. If so, be sure to give it an appropriate new label. For example, if you had a formative construct for "job satisfaction" consisting of "satisfaction with compensation", "satisfaction with job tasks", and "intention to quit", then you would reverse the values of "intention to quit" so that it moves in the same direction as the other indicators. However, you would then need to relabel that indicator as something like "intention to stay".

  • @chefberrypassionateresearcher
    @chefberrypassionateresearcher 7 months ago

    Prof. Do I need to do validity assessment of my formative constructs for overall data, and for the data of all the groups as well, if I have groups and further would be doing MGA?

    • @Gaskination
      @Gaskination  7 months ago

      validity assessment is usually done with all data, then invariance is conducted across groups.

    • @chefberrypassionateresearcher
      @chefberrypassionateresearcher 7 months ago

      @@Gaskination Professor, please check this paper
      Multigroup analysis of more than two groups in PLS-SEM: A review, illustration, and recommendations.
      It shows that validation has to be done for all the groups along with the complete data set.

    • @Gaskination
      @Gaskination  7 months ago

      @@chefberrypassionateresearcher Then that's what you should do. I'll just note that if you find full measurement invariance, then the measurement validation should be consistent between all data and its subgroups (assuming sufficient sample size in each group).

  • @joaorossiborges5404
    @joaorossiborges5404 5 years ago

    Dear Prof. James. I am not sure if you are familiar with the theory of planned behavior. In this theory, there are three main latent constructs: attitude, subjective norm, and perceived behavioral control. These three constructs are assumed to predict another latent construct: intention. The theory assumes that attitude, subjective norm, and perceived behavioral control are based on respective beliefs, which from a measurement perspective means a formative model. To make it simple, let's assume I have attitude as a latent construct and a set of 7 formative indicators. I ran a formative model, but only three indicators were statistically significant. My question is: is it still possible to argue that this set of statistically significant indicators are the drivers of attitude?

    • @Gaskination
      @Gaskination  5 years ago

      I am familiar with TPB. The two scenarios you propose are different though. One is at the measurement level, and the other is at the structural level. In either case, it would be appropriate to retain all predictor variables, because we want to control for their effects. In the case of measurement, we might argue that the construct is not complete without all these indicators, even the non-significant ones.

    • @joaorossiborges5404
      @joaorossiborges5404 5 years ago

      @@Gaskination Thank you very much for your reply. It was very useful!

  • @bubbyzblue2576
    @bubbyzblue2576 4 years ago

    Hi Dr. Gaskin, thank you so much for enlightening me about this software. But why was the bootstrap analysis so fast when you performed it? I have a problem because I need almost an hour to complete a single bootstrap analysis...

    • @Gaskination
      @Gaskination  4 years ago

      The bootstrap analysis is affected by sample size, number of variables, number of subsamples, and computer speed. My computer is very fast and I was using only a very basic model with a sample size around 300 I think. So, it was not too slow. I think I also edited the video to go faster at that point. The real time it took was probably around 45 seconds.
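For intuition about why bootstrap time scales with the number of subsamples and the sample size: each subsample redraws the data with replacement and re-estimates the model. A toy sketch, with a simple regression slope standing in for an indicator weight:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)   # true weight of 0.5

boot = []
for _ in range(5000):              # number of subsamples drives runtime
    idx = rng.integers(0, n, size=n)               # resample rows with replacement
    boot.append(np.polyfit(x[idx], y[idx], 1)[0])  # re-estimate the slope

lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)  # a 95% interval excluding 0 means the weight is significant
```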

  • @shaangenius
    @shaangenius 4 years ago

    Dear James, Thanks for clarifying.
    But one more query:
    1. Why should a 2nd-order construct (formative-formative) have the repeated indicators loaded in REFLECTIVE mode? Ideally, shouldn't they be loaded in FORMATIVE mode? Can you please share some papers/resources suggesting that these repeated loadings should be in REFLECTIVE mode for a formative-formative 2nd-order construct?
    2. For redundancy analysis of a 2nd-order construct (formative-formative), should its reflective counterpart be measured with a single indicator or multiple indicators? Cheah et al. (2018) suggest a SINGLE indicator, and Hair et al. (2019, 2020) suggest the same. Can you please give your expert view?
    Cheah, J. H., Sarstedt, M., Ringle, C. M., Ramayah, T., & Ting, H. (2018). Convergent validity assessment of formatively measured constructs in PLS-SEM.
    Hair Jr, J. F., Howard, M. C., & Nitzl, C. (2020). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis, 101-110.
    Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM.

    • @Gaskination
      @Gaskination  4 years ago +1

      1. "When using the repeated indicator approach, researchers must determine the measurement mode of the HOC with the repeated indicators. Mode A is the recommended method, although Mode B is often used to provide alternative estimates (see Becker et al. 2012; Ringle et al. 2012). This approach has three major disadvantages..." Cheah, J. H., Ting, H., Ramayah, T., Memon, M. A., Cham, T. H., & Ciavolino, E. (2019). A comparison of five reflective-formative estimation approaches: reconsideration and recommendations for tourism research. Quality & Quantity, 53(3), 1421-1458.
      2. The consensus is a single holistic indicator.

    • @shaangenius
      @shaangenius 4 years ago

      @@Gaskination Much thanks sir. Its a great help indeed!!

  • @leovaniguimaraes3491
    @leovaniguimaraes3491 4 years ago

    Dear Dr. Gaskin, I have a PLS-SEM modeling issue: one of the exogenous variables has formative indicators. The indicators' scales are continuous from 0% to 100%. And some indicators have opposite directions, e.g. IND1 - 0% (worst) to 100% (best); IND2 - 0% (best) to 100% (worst). In this case:
    1) Can the scale be continuous?
    2) What about the indicators with the opposite scale directions, as above?
    3) Will the modeling software SmartPLS mix things up with this scale design?
    Thanks a lot!

    • @Gaskination
      @Gaskination  4 years ago

      Continuous is fine (better actually). Just re-reverse the scale (i.e., subtract scores from 101).
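A minimal sketch of the re-reversing step. The general rule for any scale is reversed = max + min − original (subtracting from 101 assumes a 1-100 scale; for a 0-100 scale, subtract from 100):

```python
import pandas as pd

ind2 = pd.Series([0, 25, 100])   # 0% = best, 100% = worst
ind2_rev = 100 + 0 - ind2        # max + min - value; now 100 = best, like IND1
print(ind2_rev.tolist())         # [100, 75, 0]
```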

    • @leovaniguimaraes3491
      @leovaniguimaraes3491 4 years ago

      @@Gaskination Thanks a lot Dr. Gaskin!

  • @jackiehuang5391
    @jackiehuang5391 4 years ago

    Hi, Dr. James. For the formative model, do we need to assess the Fornell-Larcker criterion and HTMT?

    • @Gaskination
      @Gaskination  4 years ago

      Fornell-Larcker won't apply because the AVE is not relevant to a formative factor. HTMT is also not calculated for formative factors. So, the best you can do is look at the correlation matrix (in the Latent Variable output) and look at structural (inner) VIFs. Correlations should be ideally less than 0.700 (although some say up to 0.900 is okay), and VIFs should be less than 3 (although some also argue as high as 10).

    • @jackiehuang5391
      @jackiehuang5391 4 years ago

      @@Gaskination I see. Thank you for your detailed answer.

  • @AnaRodrigues-rk8lp
    @AnaRodrigues-rk8lp 4 years ago

    Hello James, thank you for all your videos. I have one formative factor with only one indicator (which measures frequency of use). How am I supposed to analyze this construct? Thank you so much.

    • @Gaskination
      @Gaskination  4 years ago

      No need to validate the single-item factors, because they are not latent. They have perfect reliability (because there is only one item).

  • @sofisvimor
    @sofisvimor 5 years ago

    Are those two analyses enough? Isn't it mandatory to perform a redundancy analysis?
    Thank you for your videos!

    • @Gaskination
      @Gaskination  5 years ago

      There are plenty more validations you can make, such as a redundancy analysis. However, these two are a minimum.

  • @LockyLawPhD
    @LockyLawPhD 6 years ago

    Hi Prof. Gaskin, thanks for making all these videos. I have watched over 10 hours of your videos including the ones from bootcamp 2018. They are all really good. Really appreciate your work. I have a model in mind, to see how Student Characteristics (gender, faculty, public exam scores, language of instructions), Perceived Understanding of Learning (Likert scale of speaking skill 1,2,..; writing skill 1,2,...; other skill 1,2,...), Transfer Effectiveness of Learning (Likert scale of speaking skill 1,2,..; writing skill 1,2,...; other skill 1,2,...), and Content Relevance ('This course's content is relevant to other courses I am studying.') will affect Learning and Transfer Outcome ('I can effectively apply what I learned in new situations'). My questions are: 1) is Student Characteristics even a latent variable because I can get the actual values of each item; 2) if Perceived Understanding of Learning and Transfer Effectiveness of Learning are both formative while Content Relevance and Learning and Transfer Outcome are both reflective, then can I still evaluate the effect of each variable on the others and mainly on Learning and Transfer Outcome using SmartPLS 3? 3) if this is possible, what do I need to look at? 4) Will EFA, CFA work in my case or should I go for Causal SEM because I am also interested in possible mediation and interaction effects. Many thanks!

    • @Gaskination
      @Gaskination  6 years ago +1

      Good questions. For the demographic variables with actual values (not latent), these do not belong in a factor analysis. For the factor analysis, use SmartPLS because you also have formative factors. Then, when you are ready to test the causal model, you can insert the demographic variables (if categorical, then use dummy variables). Each demographic variable should be part of a separate factor.

    • @LockyLawPhD
      @LockyLawPhD 6 years ago

      @@Gaskination Thank you. Just a follow-up question using your example in this video, because CompUse is a formative factor and we do the steps you showed us, but what about the connections between the formative and reflective factors? Do we need to look at them and do we look at the outer weights or outer loadings in this case?

  • @chefberrypassionateresearcher
    @chefberrypassionateresearcher 7 months ago

    What to do if outer weights are negative, but outer loadings are significant for my formative items?

    • @Gaskination
      @Gaskination  7 months ago +1

      negative weights are permitted in formative measurement.

  • @YY-vy2bq
    @YY-vy2bq 4 years ago

    Hi James, I'm doing the formative model, PLS-SEM multigroup analysis. I found there is a function of Smartpls called 'importance-performance map analysis (IPMA)", if I wanna use this, should I do a separate analysis for each group or just once for all groups? Thanks!

    • @Gaskination
      @Gaskination  4 years ago

      If you are theorizing moderation by groups, then I would do it separately by groups.

  • @rekharisaac
    @rekharisaac 3 months ago

    @Gaskination I have a question. In this video, I don't think you mention checking the Convergent validity of the formative constructs using redundancy analysis. Is that not necessary? I ask because I am currently assessing a formative model, but I do not have global single items for my formative constructs. I am unable to check convergent validity via redundancy analysis. Is this a problem? If it is required, is there any other way to check convergent validity? Thank you in advance!

    • @Gaskination
      @Gaskination  3 months ago +1

      That is certainly the best approach. If you don't have a global construct, then the approach in this video is the next best alternative.

    • @rekharisaac
      @rekharisaac 3 months ago

      @@Gaskination Thank you and noted. So I cannot do convergent validity via SPSS or any other software. Is that correct?

    • @Gaskination
      @Gaskination  3 months ago +1

      @@rekharisaac convergent validity for formative factors is not logically consistent.

    • @rekharisaac
      @rekharisaac 3 months ago

      @@Gaskination That's what I believe too. If there are any literature references about this (convergent validity not being logically consistent for formative factors), please let me know so I can use them in my paper.

    • @Gaskination
      @Gaskination  3 months ago +1

      @@rekharisaac Pretty much anything on formative assessment should work. Here are some references here: statwiki.gaskination.com/index.php?title=References#Constructs_and_Validity

  • @saram4026
    @saram4026 4 years ago

    Dear Dr. Gaskin,
    first of all, I should thank you for these precious videos teaching SmartPLS 3; they are really helpful. I have some questions regarding my research model. I discuss them here and would really appreciate it if you could address them.
    I'm measuring 5 personality traits, with 4 indicators for each. After factor analysis, I needed to delete 2 of the 4 indicators from each trait, and now my model has 5 personality traits, each with 2 indicators. Is it acceptable from a reviewer's perspective?

    • @Gaskination
      @Gaskination  4 years ago

      Two indicators is acceptable, but not ideal. If you had to delete half the indicators from each, you might consider whether the items are actually formative. You might have also been too strict. Try a more relaxed approach for convergent validity by using the CR instead of AVE. Then use HTMT for discriminant validity instead of Fornell Larcker.

  • @damdakos
    @damdakos 5 years ago

    Hi James, thank you very much for your informative video. I have one question. Could you also use the path weighting scheme? Why do you use the factor weighting scheme? I tried both and I am getting different results in terms of p-values. The factor weighting scheme gives me significant p-values but the path weighting scheme gives me a few non-significant values. Can I keep the factor weighting scheme for validating the formative model? I would be grateful for an answer. Thanks

    • @Gaskination
      @Gaskination  5 years ago

      I use factor weighting when assessing the outer model (measurement model) and path weighting when assessing the inner (structural) model.

  • @alifaoctavianarachma6528
    @alifaoctavianarachma6528 4 years ago

    Hi James, as a new user of SmartPLS, your videos have been such a huge help for me in understanding it. I have some questions. I used 4 Likert scales in my survey; do you think this type of scale would be valid in SmartPLS? Also, I have some questions that used Yes/No answers; how could I bring those answers into SmartPLS? Thank you.

    • @Gaskination
      @Gaskination  4 years ago

      The Likert scales are fine. The binary scales are fine if you interpret them the same way as you would for logistic regression (where zero is the reference category).

    • @alifaoctavianarachma6528
      @alifaoctavianarachma6528 4 years ago

      @@Gaskination thanks a lot!
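A minimal sketch of the Yes/No recoding discussed above (hypothetical column name), with "No" as the reference category:

```python
import pandas as pd

df = pd.DataFrame({"uses_app": ["Yes", "No", "Yes"]})  # hypothetical survey item
df["uses_app_dummy"] = (df["uses_app"] == "Yes").astype(int)  # No = 0 (reference)
print(df["uses_app_dummy"].tolist())  # [1, 0, 1]
```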

  • @jackiehuang5391
    @jackiehuang5391 4 years ago

    Hi, Dr. James. Could you please specify how to establish construct validity of formative indicators with the MTMM approach?

    • @Gaskination
      @Gaskination  4 years ago

      I don't have any videos on this. SmartPLS also doesn't have a video on this, and there is no feature for it in SmartPLS.

    • @jackiehuang5391
      @jackiehuang5391 4 years ago

      ​@@Gaskination Thank you for your reply. But I'm still very confused. In your article "Partial Least Squares (PLS) Structural Equation Modeling (SEM) for Building and Testing Behavioral Causal Theory: When to Choose It and How to Use It", you mention that "no single technique is universally accepted for validating formative measures. However, the modified MTMM approach is regarded by many as a promising solution." and "However, a simpler approach is just to ensure the indicator weights for formative constructs are roughly equal and all have significant p-values." Do you mean that either of these two approaches can be selected to validate the formative measures in my thesis? Is using only the second approach also acceptable? Looking forward to your reply. Thank you.

    • @Gaskination
      @Gaskination  4 years ago

      @@jackiehuang5391 Yes, although both are good approaches, SmartPLS still does not have a way to conduct such an analysis (as far as I'm aware). So, for now, the best option is to just look at size and significance of the indicator weights.

  • @shaotingzheng514
    @shaotingzheng514 3 years ago

    Hi James, I set construct as formative, however, the outer weights are not significant. When I set it as reflective, the loadings are all above 0.8. Should I treat it as a reflective one?

    • @Gaskination
      @Gaskination  3 years ago

      Yes, it sounds like this is "sufficiently reflective".

    • @xuegewang9335
      @xuegewang9335 4 months ago

      @@Gaskination Hi Professor, I have the same problem. The outer weights of one construct are mostly not significant as formative but loadings are all above 0.8 as reflective. I treated it as a reflective one, but I feel like it is more formative theoretically (it is based on a developed index, and each item is an aspect of the construct). I am recently defending my thesis, and I was just wondering how shall I defend this construct as reflective? Thanks!

    • @Gaskination
      @Gaskination  4 months ago

      @@xuegewang9335 You could do a confirmatory tetrad analysis: ruclips.net/video/Qu9U6Be2fOc/видео.htmlsi=hgR8iRNvCFNCgglw

  • @YY-vy2bq
    @YY-vy2bq 3 years ago

    Hi James, if possible, would you show how to interpret Confirmatory Tetrad Analysis using SmartPLS? I found a journal article (Gudergan et al., 2008), but I don't get it practically. Do you know if it is necessary to present this if I want to develop research about index construction? Thanks :)

    • @Gaskination
      @Gaskination  3 years ago

      I've never used Confirmatory Tetrad Analysis, so I'm not sure... Sorry about that. If I ever do use that feature in SmartPLS, I'll make a video about it.

  • @chantalmartinez5414
    @chantalmartinez5414 4 years ago

    Hi Dr. Gaskin. I have a somewhat complex model with multiple formative variables. After running the bootstrap method, quite a few of my variables returned a p-value > .05. However, the VIF checked out. Is it acceptable to retain these variables despite p > .05? Thank you!

    • @Gaskination
      @Gaskination  3 years ago

      For these scales, you might take a more theoretical approach by creating a score or weighted sum. Determine which variable values are most important and weight those more heavily. If they are equally important, then just take a straight sum or average. Then use this resultant score instead of the latent factor.
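A minimal sketch of the score/weighted-sum approach suggested above. The indicator names and weights are hypothetical; the weights would come from theory:

```python
import pandas as pd

df = pd.DataFrame({"ind1": [3, 4, 5],
                   "ind2": [2, 5, 4],
                   "ind3": [4, 3, 5]})

weights = {"ind1": 0.5, "ind2": 0.3, "ind3": 0.2}  # theory-driven importance
# for equally important indicators, just use df.mean(axis=1)
df["score"] = sum(w * df[c] for c, w in weights.items())
print(df["score"].round(2).tolist())
```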

    • @chantalmartinez5414
      @chantalmartinez5414 3 years ago

      @@Gaskination Thank you! Most of the scales have subscales. For example, I'm looking at the familism scale, which has 3 subscales. I am mostly interested in 2 of the subscales (as moderating variables). So in this case, would I make sum-scale averages for each subscale? Thanks again, your videos are getting me through my dissertation analysis!

    • @Gaskination
      @Gaskination  3 years ago

      @@chantalmartinez5414 Yes, that could work.

  • @chefberrypassionateresearcher
    @chefberrypassionateresearcher 7 months ago

    Prof., what should I do if the significance of the outer weights is .5, and there are negative outer weights?

    • @Gaskination
      @Gaskination  7 months ago

      That's not good. It means there is probably a problem with the measure or the collected data.

  • @ulaila2002
    @ulaila2002 3 years ago

    Hello Dr. James.
    I want to know: if all the VIFs and p-values check out, but the item outer weights are less than 0.2 and the loadings are not greater than 0.7,
    can we retain the item in this scenario, or do we have to drop it?

    • @Gaskination
      @Gaskination  3 years ago

      If all VIFs are above 5 (or 10 if you're being liberal), then the items are redundant and they are probably reflective rather than formative.

    • @ulaila2002
      @ulaila2002 3 years ago

      @@Gaskination The majority of the VIFs are less than 5; a few are above 3. But the outer weights are low, ranging from 0.2 to 0.7; in fact, the outer loadings of some items are less than 0.7.
      My main concern is whether weights and loadings are considered important for retaining an item, or whether VIF alone is enough for the decision (regarding item retention).

    • @Gaskination
      @Gaskination  3 years ago

      @@ulaila2002 For reflective factors, the path coefficient is important. Generally, regression weights greater than 0.500 are desirable for reflective indicators (although greater than 0.700 is nice). For formative, it is much more difficult to justify removing an item unless its VIF is very high and its path coefficient is very small and non-significant. Otherwise, removing an indicator may alter the nature of the construct.
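The VIF logic referenced throughout this thread can be made concrete with a small sketch (my own illustration, not SmartPLS output; the simulated data are hypothetical): each indicator is regressed on the remaining indicators, and VIF = 1 / (1 - R²).

```python
import numpy as np

def vif(X):
    """VIF for each column of X: regress it on the remaining columns
    (with an intercept) and compute 1 / (1 - R^2)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(42)
a = rng.normal(size=200)
X = np.column_stack([
    a + 0.1 * rng.normal(size=200),  # nearly redundant pair ->
    a + 0.1 * rng.normal(size=200),  #   both get very high VIFs
    rng.normal(size=200),            # independent indicator -> VIF near 1
])
print(vif(X))
```

High VIFs flag redundant (likely reflective) indicators; values near 1 indicate the distinct contributions expected of formative indicators.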

  • @monicatbgibson8754
    @monicatbgibson8754 3 years ago

    Hello, I have a formative variable ABC, which has 4 very different belief indicators (subdimensions). I am told to set up the path model as a second-order variable. Meaning, for the subdimension of denial: ABC ------> ABC-DNL ------> each of the DNL1 to DNL6 indicators, with 6 different indicators. ABC for acceptance has 8 different indicators. What you can determine is that the definitions for both subdimension examples are totally different. The p-values for almost all of the weights are 0.24 and below. No good p-values to speak of. For each, the VIFs are all under 2.0, which is very good. What do you see?

    • @Gaskination
      @Gaskination  3 years ago

      I don't think I understand the question. Feel free to email me a picture of the model. You can find my email by googling me.

  • @BlackJack-rs6ix
    @BlackJack-rs6ix 5 years ago

    Hi James,
    For path analysis, which output do I have to look at? And if the model does not fit, must I remove bad indicators?

    • @Gaskination
      @Gaskination  5 years ago

      For path analysis, look at the betas and p-values. R2 is also important. Model fit is less important in SmartPLS.

    • @BlackJack-rs6ix
      @BlackJack-rs6ix 5 years ago

      @@Gaskination thanks so much James

  • @saranyapanthayil6225
    @saranyapanthayil6225 4 years ago

    Dear Dr. Gaskin,
    One of my 2nd-order factors (service orientation) has two sub-constructs, of which one is reflective (cooperative behaviour) and the other is formative (service performance).
    My question is: do I need to check discriminant validity here?
    And thanks for your tutorials.

    • @Gaskination
      @Gaskination  4 years ago +1

      No, as long as they are not too highly correlated (like > 0.900). We expect 1st order dimensions of a 2nd order factor to be strongly correlated, just not completely interchangeable.

  • @daniol1123
    @daniol1123 5 years ago

    Hello Prof. Gaskin, what happens when the VIF for the construct is more than 5? What does that indicate for the construct? Can I move forward with the analysis?

    • @Gaskination
      @Gaskination  5 years ago

      5 means there is probably influential multicollinearity. 10 is the absolute cutoff. So, if you want to keep the predictor, just list this as a limitation.

  • @nurulrahman766
    @nurulrahman766 7 years ago

    Dear Prof. Gaskin, I have a question here. What if I am using a second-order reflective-formative model? How should I assess the formative second order?
    Example: Construct A has three dimensions (1, 2, 3). There are five items for Dimension 1, four items for Dimension 2, and three items for Dimension 3, all reflective. However, dimensions 1, 2, and 3 are formatively linked to Construct A. From what I understand, I need to report the t-value, significance, VIF, and weights of the items. But in my case, my items are all reflective while the dimensions are formative. Where should I get the outer weights, and what should I report?
    Thank you in advance for your reply, Prof.

    • @Gaskination
      @Gaskination  7 years ago +1

      I think I will have to make a video for this... The short answer is that you follow procedures for both formative and reflective at the appropriate level. So, for the reflective level, use CR and AVE, but for the formative level, use VIF and t-stat (or p-val).

    • @dr.halimabegum8601
      @dr.halimabegum8601 7 years ago

      James Gaskin, thanks. I am also using the same process for my analysis.

  • @GG-sc5pp
    @GG-sc5pp 5 years ago

    Prof. Gaskin, greetings from Indonesia. Can we use dummy variables as formative indicators? For example, the Convenience latent variable is comprised of: 1. Don't have to walk far, 2. Don't have to wait long, 3. Don't have to stay in bad weather. My factors are all encoded as 1 or 0, because I downloaded the data from SurveyMonkey. Should I select other types of data from SurveyMonkey, should I transform this data into something else, or what should I do? I am quite desperate because I created the survey as if I were a business-oriented surveyor, and hoped that I could identify the variables that can predict consumers' behaviour in booking a ride share from transport companies such as Uber, Grab, Lyft, Gojek, etc. After watching your videos and reading Prof. Hair's book, I suddenly realized that I might have made grave mistakes by trying to model the variables using formative factors instead of reflective factors.

    • @Gaskination
      @Gaskination  5 years ago

      No worries. You can still use this type of variable to predict consumer behavior. If you are interested in their combined effect, then just create a score (sum up all the 1s). If you are interested in their unique effects, then just include each as a separate dummy variable.
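Both options in the reply above can be sketched in a few lines (my own illustration; the 0/1 values and column meanings are hypothetical, mirroring the commenter's convenience indicators):

```python
import numpy as np

# Hypothetical 0/1 convenience indicators from the survey export:
# columns = [no long walk, no long wait, no bad weather]
dummies = np.array([
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])

# Combined effect: sum the 1s into a single convenience score per respondent.
combined = dummies.sum(axis=1)

# Unique effects: keep each column as its own dummy predictor.
d_walk, d_wait, d_weather = dummies.T

print(combined)  # one score per respondent
```

The summed score feeds a single predictor into the model; the separate columns let each convenience aspect carry its own path.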

    • @yanahadi5941
      @yanahadi5941 5 years ago

      @@Gaskination Hi James, following up on this conversation, I am wondering, in a similar case, can I enter all the dummy variables as reflective items but then set the indicator weighting to sumscore? Is this similar to testing the combined effect?

    • @Gaskination
      @Gaskination  5 years ago

      @@yanahadi5941 I'm not sure if that would work. Usually dummy variables are not good in reflective factors because they are generally not interchangeable.

  • @empaulstube6947
    @empaulstube6947 3 years ago

    How about the convergent validity test through redundancy analysis? Is it also important?

    • @Gaskination
      @Gaskination  3 years ago +1

      Yes. In this video, I sort of cover that in the VIF analysis for multicollinearity.

  • @rokhsarehmobarhan3261
    @rokhsarehmobarhan3261 5 years ago

    Dear Dr. Gaskin,
    First of all, thank you for your excellent videos.
    I have one question: in one case, I had three IVs pointing at the DV. Two of those variables did not have an effect on the DV, but if I remove the third variable, both of them have a significant effect. I can't understand how this can happen. Could you please help me?
    Thanks a lot.

    • @Gaskination
      @Gaskination  5 years ago

      This is common in regression-based models (such as path modeling in SEM). In regression, all effects account for (control for) the effects of other variables. So, if you have a potent predictor, then it will drown out the effects of lesser predictors. But when the potent predictor is missing, those lesser effects become significant. That's just how regression works.

  • @creativeschool5232
    @creativeschool5232 3 years ago

    Hi Dr. Do you have a video on the global item and index construction for cases where a global measure wasn't included in the questionnaire? Thank you in advance.

    • @Gaskination
      @Gaskination  3 years ago +1

      I don't have a video for that. However, when a global item is not included in the questionnaire, the next best thing you can do is to just validate the formative factors as shown in this video, and the reflective factors as shown here: ruclips.net/video/J_etGiwbOoM/видео.html

    • @creativeschool5232
      @creativeschool5232 3 years ago

      @@Gaskination thanks Dr. Appreciate that.

  • @fabiankreitner9961
    @fabiankreitner9961 5 years ago

    Hello Prof. Gaskin,
    I have a construct with four formative indicators. The weight of one indicator is above 1, while the weights of the other indicators are negative and low in magnitude. I could not find a paper where this case is explained and how to evaluate it. I did a redundancy analysis using a reflective global indicator. The formative indicator with a value above 1 in the other model was now insignificant and low in magnitude (and negative). Therefore I assume that it is not a dimension of the construct, but I am not entirely sure. Can you help me interpret this specific case?
    Thanks a lot.

    • @Gaskination
      @Gaskination  5 years ago

      This is one I would recommend reevaluating at the measure level. Take a look at the wording of the measures. If they are consistently in the same direction (i.e., a higher response on the measure indicates a higher level of the construct), and they are measuring the same construct, then there is little you can do if they are not multicollinear.

    • @fabiankreitner9961
      @fabiankreitner9961 5 years ago

      @@Gaskination Thanks a lot for your answer. The wording of the items should be right, and multicollinearity should also not be a problem, as the inner VIFs are all below 3.3. I checked the literature for a similar problem but could not find anything. In general, it is recommended to drop formative indicators that have low and insignificant weights and outer loadings. But what if only one formative indicator is significant? Doesn't that question the multidimensionality of a hierarchical construct?

    • @Gaskination
      @Gaskination  5 years ago

      @@fabiankreitner9961 Correct. Dropping all but one would reduce it to unidimensional.

  • @chefberrypassionateresearcher
    @chefberrypassionateresearcher 7 months ago

    Professor, can you please share any good paper that has a complete analysis and interpretation done in a sequential manner for the measurement model, structural model, and MGA in SmartPLS 3?

    • @Gaskination
      @Gaskination  7 months ago +1

      I'm not sure of any, though this one does some of it except MGA: doi.org/10.1002/bse.3495
      This one as well: link.springer.com/article/10.1007/s10490-021-09784-8
      For general papers on smartPLS and examples of it being used in empirical studies: www.smartpls.com/documentation/literature/some-recommended-readings

  • @gingerl2166
    @gingerl2166 7 years ago

    Hi Dr. Gaskin. What is the difference between formative and composite models? Is it wrong to use the terms interchangeably? Why or why not? And how does Mode B come into play? The same as formative, but not the same as composite? Thanks

    • @Gaskination
      @Gaskination  7 years ago

      Jörg Henseler has some good info on this. I would recommend his work. He will likely disagree with my next statement, but it will be sufficient I think. Formative is a latent variable with indicators which predict the factor (unlike reflective where the factor reflects/predicts its indicators), whereas a composite variable is a collapsed set of variables into a single score (through summing or averaging, or some other method).

  • @chefberrypassionateresearcher
    @chefberrypassionateresearcher 7 months ago

    Dear Professor, I have a doubt. Why didn't you check validity using a single global measurement item (redundancy analysis)? There is no overall/global measurement item in your model. Please clarify.

    • @Gaskination
      @Gaskination  7 months ago

      You are correct. The complete approach would be to use the global measure. I didn't have such a measure in this dataset.

    • @chefberrypassionateresearcher
      @chefberrypassionateresearcher 7 months ago

      @@Gaskination Thanks, Professor. But can I go ahead without the redundancy analysis if I don't have a global single measurement item in my data? Or is there any other way to go forward? Please advise.

    • @Gaskination
      @Gaskination  7 months ago

      @@chefberrypassionateresearcher If you don't have a global item, then it is not possible to create one. You would have had to collect data for it. So, in this case, you would just follow the guidelines in the video above.

  • @nazreenchowdhury5635
    @nazreenchowdhury5635 4 years ago

    Hello Prof.
    Looking forward to your suggestion on my formative DV construct with three items, where the outer weights are: item 1 = 0.259, item 2 = 0.588, and item 3 = 0.0000.
    All VIF values are within the cutoff (shown in green).
    In this case, what's your opinion? As I am scared to proceed with a one-item construct due to various scholars' arguments, can I keep item 1 and item 3?

    • @Gaskination
      @Gaskination  4 years ago

      Item 3 seems to be unhelpful. If any item is dropped, I would suggest item 3. You can also check the p-values to see if the items have meaningful contribution to the factor.

    • @nazreenchowdhury5635
      @nazreenchowdhury5635 4 years ago

      @@Gaskination Hello Prof.
      The p-values are: item 1 = 0.259, item 2 = 0.576, item 3 = 0.0000.
      The outer weights are: item 1 = 0.273, item 2 = 0.092, item 3 = 1.084.

    • @Gaskination
      @Gaskination  4 years ago

      @@nazreenchowdhury5635 Oh, those are the p-values. In this case, item2 seems to be the weakest link.

    • @nazreenchowdhury5635
      @nazreenchowdhury5635 4 years ago

      @@Gaskination Yes, Prof. That's why I am considering proceeding with item 1 and item 3.

  • @arunavaghosh8507
    @arunavaghosh8507 7 years ago

    I have a query not related to this video. There is a paper which said that an item (of a factor) can be dropped if it had a significant shared error variance with other items of the factor. How can we use SmartPLS to check whether an item has significant shared error variance?

    • @Gaskination
      @Gaskination  7 years ago

      I don't know if SmartPLS provides this information.

  • @DrMWA08
    @DrMWA08 7 years ago

    Dear Prof., hi, this is Wasim, a PhD scholar at UTM Malaysia. I have understood the criteria you discussed in this video. I request that you please make a video on second-order formative construct validation.
    This is the first time I have made a request. Thanks.

    • @Gaskination
      @Gaskination  7 years ago +1

      They are validated the same way as for first order. Look at the loadings from their 1st order dimensions to the 2nd order. Check they are significant after bootstrapping. Also check VIFs (collinearity) to make sure no VIFs are greater than 3. This should be sufficient to validate the 2nd order formative factor.

    • @DrMWA08
      @DrMWA08 7 years ago

      Dear Sir, thanks for your kind reply. Can you please provide a reference?

    • @Gaskination
      @Gaskination  7 years ago +1

      I'm terrible with references, but any paper that talks about 2nd order formative factors should suffice.

    • @DrMWA08
      @DrMWA08 7 years ago

      Dear Sir James Gaskin, I have one formative second-order IV, a second-order reflective DV, one second-order reflective moderator, and one mediator. The model is a moderated mediation. Should I go with conditional process analysis? If yes, please suggest which software: PLS or AMOS?

    • @Gaskination
      @Gaskination  7 years ago

      I'm not sure what conditional process analysis is, but the model you proposed can be tested in SmartPLS and not in AMOS.

  • @jiangpan7333
    @jiangpan7333 6 years ago

    Hello Prof. Do we also need to assess convergent validity for the formative constructs?

    • @Gaskination
      @Gaskination  6 years ago +1

      No. Convergent validity assumes correlation between the indicators, but formative indicators are not required to correlate.

    • @jiangpan7333
      @jiangpan7333 6 years ago

      But how do we truly know there is no correlation between the formative indicators if we don't assess convergent validity? Do we look at the VIF?

    • @Gaskination
      @Gaskination  6 years ago

      The indicators are allowed to be correlated, they are just not required to be. VIF is a required assessment of formative factors though, as shown in the video.

    • @jiangpan7333
      @jiangpan7333 6 years ago

      ok I see. Thank you Prof.

    • @golfzwo
      @golfzwo 6 years ago +1

      Hair et al. actually recommend to assess convergent validity when evaluating formative measurement models by checking whether the formatively measured construct is highly correlated with a reflective measure of the same construct. They recommend to use a global item in a reflective single-item construct that summarizes the essence of the formative construct. The reflective equivalent is then modeled as the outcome of the formative construct. For the assessment of convergent validity, they state that the path between both constructs should have a magnitude of 0.90 or at least 0.80, and the R² of the reflective construct should at least be above 0.64. Otherwise “the indicators of the formative construct do not contribute at a sufficient level to its intended content.” This of course already needs to be considered *before* the data collection (i.e. developing a global reflective indicator that summarizes the formative variable).
      Great videos btw ;-)
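The redundancy-analysis criteria described above (path of at least 0.80-0.90 between the formative construct and its global reflective item, R² above 0.64) can be sketched with simulated data (my own illustration; the indicator weights, noise level, and sample are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Hypothetical formative indicators and a single global reflective item.
indicators = rng.normal(size=(n, 3))
construct = indicators @ np.array([0.5, 0.3, 0.2])  # formative composite
global_item = construct + 0.2 * rng.normal(size=n)  # global single item

# With a single predictor, the standardized path equals the correlation,
# and the R^2 of the reflective item is that correlation squared.
r = np.corrcoef(construct, global_item)[0, 1]
r2 = r ** 2
print(r, r2)  # convergent validity supported if r >= 0.80 and r2 >= 0.64
```

If r falls below the threshold, the formative indicators do not capture enough of the construct's intended content.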

  • @arsalanzp
    @arsalanzp 5 years ago

    Prof., can you put up a video related to confirmatory tetrad analysis in PLS-SEM? It would be a great help for us.
    I have one higher-order construct which is formative-formative in nature. I would be grateful if you uploaded a video related to formative-formative constructs, especially demonstrating confirmatory tetrad analysis. I would be grateful for your kind response.

    • @Gaskination
      @Gaskination  5 years ago

      I've never done Confirmatory Tetrad Analysis. If I end up needing it for one of my papers or classes, I'll make sure to create a video for it.

  • @ibtissemhamouda5492
    @ibtissemhamouda5492 5 years ago

    Hello Professor, I think I have one formative variable. The tests of VIF and bootstrap are OK. Now I tried to calculate the GoF, but I have no AVE for this formative variable. Is that normal?

    • @Gaskination
      @Gaskination  5 years ago +1

      Yes, that is normal. AVE is a measure for reflective factors only. It assumes inter-correlations among indicators (which is not an assumption of formative factors).

    • @ibtissemhamouda5492
      @ibtissemhamouda5492 5 years ago

      @@Gaskination Thank you. So for formative, I just need to check the VIF, right?

    • @Gaskination
      @Gaskination  5 years ago

      @@ibtissemhamouda5492 Yes, although you might also check the significance of indicators and the correlation matrix (to make sure no factor correlations are above 0.700).

  • @zakoot
    @zakoot 5 years ago

    Hi James. How do I do a CFA using SmartPLS?

    • @Gaskination
      @Gaskination  5 years ago

      Here you go: ruclips.net/video/J_etGiwbOoM/видео.html

  • @mostafajerari7560
    @mostafajerari7560 2 years ago

    Hi Sir. Could you tell me how to solve the problem of inverted signs (positive/negative) of the outer weights (path coefficients) in SmartPLS? Thank you.

    • @Gaskination
      @Gaskination  2 years ago

      Negative weights can imply an inverse relationship with everything else affecting that latent factor. I've never seen this occur in SmartPLS before though. Perhaps it could be due to using the PLSc algorithm. You might also check to make sure your indicators are coded in the correct direction. For example, if you're measuring job satisfaction, positive coding would be things like: "I like my job", whereas negative coding would be things like: "I hate my job". You can also make sure the scale itself was in a positive direction (e.g., strongly disagree = 1, strongly agree = 5) and not a negative direction (e.g., strongly agree = 1, strongly disagree = 5)
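If a negatively worded item is found, it can be reverse-coded before analysis. A minimal sketch (my own example; the responses are hypothetical): on a 1-5 scale, the reversed value is (min + max) - x = 6 - x.

```python
import numpy as np

# Hypothetical 5-point Likert responses to a negatively worded item,
# e.g., "I hate my job" on a 1 (strongly disagree) to 5 (strongly agree) scale.
responses = np.array([1, 2, 5, 4, 3])

# Reverse-code so that higher values mean higher satisfaction:
# on a 1-5 scale, reversed = (min + max) - x = 6 - x.
reversed_responses = 6 - responses
print(reversed_responses)  # [5 4 1 2 3]
```

After reversal, all items of the factor point in the same direction, which prevents artificial negative weights.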

    • @mostafajerari7560
      @mostafajerari7560 2 years ago

      @@Gaskination Thank you very much for your helpful explanations. I'll try what you have recommended, and I hope it resolves the issue.

    • @mostafajerari7560
      @mostafajerari7560 2 years ago

      @@Gaskination I checked my scale, it's in the positive direction: from "1-completely disagree" to "5-completely agree". I don't know exactly the origin of this problem and how to overcome it.

    • @mostafajerari7560
      @mostafajerari7560 2 years ago

      @@Gaskination By the way, to construct my formative construct in PLS, I used nominal, ordinal, and scale variables. Could that be the cause of the problem?

  • @LockyLawPhD
    @LockyLawPhD 5 years ago

    Prof., what if the p-values for outer weights after bootstrapping are much much higher than 0.1? Say, 0.2-0.9?

    • @Gaskination
      @Gaskination  5 years ago

      Then they are considered to be non-significant.

    • @LockyLawPhD
      @LockyLawPhD 5 years ago

      @@Gaskination I'm reading Hair et al.'s book, and it seems that handling non-significant formative indicators is rather complicated, involving a reconstruction of the inner model

    • @Gaskination
      @Gaskination  5 years ago

      @@LockyLawPhD Non-significant indicators are not contributing to the construct. However, removing them also changes the nature of the thing being measured. If the indicators are removed, you'll have to reevaluate the nature of the construct.

    • @daniell.6623
      @daniell.6623 5 years ago

      @@Gaskination Hi James, I know that you are "terrible with references" (as you say in another answer for this video), but do you have any for the "liberal" p-values (e.g., 0.2) that you advocate in this video? They make sense to me in this context, but I guess, it would be hard to publish an index constructed this way without a good reference. Thank you!

    • @Gaskination
      @Gaskination  5 years ago

      @@daniell.6623 This article has some thoughts on it and some useful references: www.ncbi.nlm.nih.gov/pmc/articles/PMC5059270/

  • @dr.halimabegum8601
    @dr.halimabegum8601 7 years ago

    Hi Prof., I'm confused about the tolerance value. Where am I able to get the tolerance value, please? Thank you very much.

    • @Gaskination
      @Gaskination  7 years ago +1

      I don't think SmartPLS shows the tolerance value. However, tolerance and VIF are nearly perfectly inverse, so you only need one or the other, not both.
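The reciprocal relationship mentioned above is exact, so either statistic can be derived from the other. A tiny illustration (the R² value is hypothetical):

```python
# Tolerance and VIF are exact reciprocals, so reporting one suffices:
#   tolerance_j = 1 - R_j^2  (R^2 from regressing indicator j on the others)
#   VIF_j = 1 / tolerance_j
r_squared = 0.75           # hypothetical R^2 for one indicator
tolerance = 1 - r_squared  # 0.25
vif = 1 / tolerance        # 4.0
print(tolerance, vif)
```

So the tolerance can simply be computed as 1 / VIF from the SmartPLS output.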

    • @dr.halimabegum8601
      @dr.halimabegum8601 7 years ago

      Prof James Gaskin thank you so much for your responses. Best regards

  • @chefberrypassionateresearcher
    @chefberrypassionateresearcher 7 months ago

    Prof., do I need to do a validity assessment of my formative constructs for the overall data, and for the data of each group as well, if I have groups and will be doing MGA later?

    • @Gaskination
      @Gaskination  7 months ago

      Do it for all data, then during MGA you can do invariance to see if it was all measured the same anyway.