Introduction to latent class / profile analysis

  • Published: 11 Dec 2024
  • Although latent class analysis (LCA) and latent profile analysis (LPA) were developed decades ago, these models have gained increasing recent prominence as tools for understanding heterogeneity within multivariate data. Dan introduces these models through a hypothetical example where the goal is to identify voting blocs within the Republican Party by surveying which issues voters regard as most important...
    Dan begins by contrasting LCA/LPA models with the more familiar factor analysis model: whereas factor analysis assumes that individuals differ by degrees on continuous latent dimensions (e.g., fiscal conservatism, social conservatism), LCA/LPA models instead posit that individuals fall into latent categories (e.g., fiscal conservatives, social conservatives). Dan then describes the implementation and interpretation of LCA/LPA models and the potential inclusion of predictors and outcomes of class membership. He also briefly notes several advanced extensions of LCA/LPA, including latent transition analysis, growth mixture modeling, and factor mixture models. (A short code sketch follows the references below.)
    Early references on LCA and LPA include:
    Gibson, W. A. (1959). Three multivariate models: Factor analysis, latent structure analysis, and latent profile analysis. Psychometrika, 24, 229-252.
    Lazarsfeld, P. F., & Henry, N. W. (1968). Latent structure analysis. Boston: Houghton Mifflin.
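
    To make the hypothetical voting-bloc example concrete, here is a minimal sketch (not from the video) of fitting an LCA in R with the poLCA package; the data frame and item names below are invented, and poLCA expects items coded as positive integers:

    ```r
    # Hypothetical voter survey: is each issue most important? (1 = no, 2 = yes)
    library(poLCA)

    f <- cbind(taxes, abortion, immigration, guns, trade) ~ 1  # invented item names
    fit3 <- poLCA(f, data = voters, nclass = 3, nrep = 20)     # 3-class LCA, 20 random starts

    fit3$probs  # class-conditional response probabilities (the latent "profiles")
    fit3$P      # estimated class proportions
    ```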

Comments • 55

  • @samuelsander9101
    @samuelsander9101 2 years ago +6

    probably the best introduction to LCA I have seen :) awesome

  • @beckybrambilla
    @beckybrambilla 1 year ago +1

    Who knew GOP voters would make the best example for LCA applications! This is an excellent introduction!

  • @MahamKhan-o9s
    @MahamKhan-o9s 5 months ago

    Incredibly helpful! Please make a video about Latent Transition Analysis too. Thank you!

  • @nillepil1
    @nillepil1 7 years ago +6

    That was an incredibly helpful explanation. Thanks a lot!!

  • @hillygoose
    @hillygoose 1 year ago

    This was incredibly helpful! Thank you so, so much!

  • @greggoodman6545
    @greggoodman6545 4 years ago

    All off the top of your head - very impressive.

  • @gaoxiangyu2011
    @gaoxiangyu2011 3 years ago +1

    Thank you very much! It's helpful and informative.

  • @ReTr093
    @ReTr093 3 years ago

    thanks for this video. truly a fantastic explanation of LCA from a practical perspective.

  • @Vishnuputhiyedam92
    @Vishnuputhiyedam92 3 years ago

    Excellent explanation! Thank you.

  • @miaoyu205
    @miaoyu205 4 years ago

    Excellent introduction to LCA! Thank you so much.

  • @quesiacataldo
    @quesiacataldo 2 years ago

    thank you! this video helped me a lot!

  • @vidad1
    @vidad1 4 years ago

    Thank you for introducing me to latent class analysis

  • @guynotelaers3467
    @guynotelaers3467 6 years ago

    excellent introductory overview of latent class models

  • @jaakkomiettinen4206
    @jaakkomiettinen4206 2 years ago

    Very helpful, thank you!

  • @nikkiow5975
    @nikkiow5975 6 years ago

    Thank you for the video! Excellent explanation of LCA!

  • @Jontem1928
    @Jontem1928 7 years ago +1

    Excellent video!! Thank you!!

  • @alman1104
    @alman1104 5 years ago

    Excellent video, sir!

  • @TB-wd5yv
    @TB-wd5yv 4 years ago

    Thank you, that was very helpful!

  • @keremmorgul367
    @keremmorgul367 6 years ago +1

    Excellent introduction to LCA and its potential extensions. Could you recommend some readings for beginners?

  • @justinhayes3434
    @justinhayes3434 2 years ago

    I'm not sure if you keep track of these comments, as it's been quite some time since this was posted. But I have a few questions if you happen to see this: 1) when working with Likert scale data, should I use a latent class or latent profile analysis, and 2) if you have a ton of variables, would it be appropriate to first use PCA or FA to reduce dimensionality and then do latent profile analysis on the factor scores to identify groups?

    • @centerstat
      @centerstat  2 years ago +1

      1) With ordinal Likert scale items, it's probably better to use LCA than LPA, especially if you have a large sample, so as to avoid making assumptions of within-class normality that are likely untenable. Inspect the distributions of the responses in case any categories are too sparsely endorsed to support stable estimation (in which case you could collapse adjacent categories).
      2) People differ in opinions regarding whether data reduction should be done before fitting an LCA/LPA, i.e., conducting an LPA on component or factor scores. The problem is that it is easy to construct scenarios in which the component or factor scores occlude the class differences. There's been some work on mixtures of factor analyzers and the like that attempts to do both simultaneously. However, these models get quite complex and, in practice, it is not that unusual to see people compute scale scores in some way prior to analysis.
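
      A small sketch of the first suggestion in R, assuming a hypothetical data frame items holding 5-point Likert responses coded 1-5 (poLCA is one option for fitting the LCA):

      ```r
      library(poLCA)

      sapply(items, table)  # inspect category counts; look for sparsely endorsed cells

      # If extreme categories are sparse, collapse them (here, 5 into 4)
      items2 <- as.data.frame(lapply(items, function(x) pmin(x, 4)))

      f <- as.formula(paste("cbind(", paste(names(items2), collapse = ","), ") ~ 1"))
      fit <- poLCA(f, data = items2, nclass = 3, nrep = 10)
      ```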

    • @justinhayes3434
      @justinhayes3434 2 years ago

      @@centerstat awesome response! So helpful! The reason I asked is that I'm getting ready to analyze a somewhat onerous data set with the goal of identifying distinct classes of participants, but I'm worried that with so many items in the survey, it will be difficult to interpret what's driving class affiliation, hence my impulse to do some sort of dimensionality reduction before doing LCA.

    • @centerstat
      @centerstat  2 years ago

      @@justinhayes3434 Ideally, you might be able to narrow the item set for the analysis based on a priori hypotheses about which things ought to differentiate classes. If it must be empirically driven, another option is that you might be able to do some preliminary graphical analysis using multidimensional scaling to see if there are visually distinct clusters in two dimensions (hard to visualize beyond that).
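
      A minimal sketch of that preliminary MDS check in base R, assuming a hypothetical numeric data frame survey:

      ```r
      d   <- dist(scale(survey))  # Euclidean distances on standardized items
      mds <- cmdscale(d, k = 2)   # classical multidimensional scaling to 2 dimensions
      plot(mds, xlab = "Dimension 1", ylab = "Dimension 2")  # look for distinct clusters
      ```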

  • @gabriellajiang792
      @gabriellajiang792 7 years ago

    Very nice video!

  • @amyhord1879
    @amyhord1879 4 years ago

    Very helpful explanation. Thank you!

  • @stateside_story
    @stateside_story 7 years ago

    Really helpful... Thanks!

  • @lenaschweizer421
    @lenaschweizer421 2 years ago

    I still struggle with the difference between a cluster analysis and class/profile analysis. Can someone explain it to me?

    • @centerstat
      @centerstat  2 years ago

      Hi Lena, great question -- and we actually have another video just on this: ruclips.net/video/HwsMZwhO7wU/видео.html

  • @louis1505
    @louis1505 2 years ago

    Hi, just wondering how you would recommend approaching a situation where you have univariate non-normal distributions of your variables when doing some form of LPA? Would using ML robust standard errors work? And do you have any idea where I could find out how to implement these corrections in programs like R's tidyLPA? Thanks!

    • @centerstat
      @centerstat  2 years ago

      Non-normality is tricky because it might actually arise from the mixture of unobserved groups. That is, a mixture of normal distributions will look non-normal, sometimes strikingly so. As such, seeing a non-normal overall distribution isn't necessarily a problem, and doesn't necessarily require moving away from the assumption of within-class normality.
      But if non-normality is intrinsic to the variable and would, theoretically, be expected even within a given latent class (e.g., the variable is a count, or ordinal, or bounded, or naturally skewed like reaction times or $ spent), then you are best off exchanging distributional assumptions. Continuing to assume within-class normality will result in poor recovery of the underlying class structure. Robust SEs won't fix that.
      For instance, for a count variable, you might replace the assumption of within-class normality with a negative binomial distribution. For continuous but non-normal variables, you could try a skew-normal mixture. Really, you can consider just about any distribution under the sun, provided you can find it implemented in some software. Various R packages and commercial software (e.g., Mplus, Latent Gold) offer different options for distributions.
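
      A quick simulation illustrating the first point -- pooling normal classes can produce a strikingly non-normal overall distribution:

      ```r
      set.seed(1)
      y <- c(rnorm(700, mean = 0, sd = 1),  # class 1 (70% of sample), normal within class
             rnorm(300, mean = 3, sd = 1))  # class 2 (30% of sample), normal within class
      hist(y, breaks = 40)  # pooled histogram is skewed despite within-class normality
      ```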

    • @louis1505
      @louis1505 2 years ago

      @@centerstat Thanks for such a detailed response. Unfortunately you have lost me a bit, I'm just a lowly honours student doing my thesis! I am planning on running an LPA to identify classes of various pathological personality traits and use of different emotion regulation strategies, and I ask about non-normality because some of my individual variables, both traits and strategies, are non-normally distributed. Previous studies using these traits have found them to have skewed, non-normal univariate distributions.
      Thus when I go to run my LPA I'm wondering what sort of checks I should be running to ensure that the "assumptions" (if that's even the right word in this case) for LPA have been met. I'm already identifying some multivariate outliers and such for some multiple regressions I'm running earlier, though, if that adds context.
      Again, amazing video and an awesome resource for people like myself!

    • @centerstat
      @centerstat  2 years ago

      @@louis1505 Sorry if that was a bit too high level. To try to put it more simply, a typical LPA assumes that each variable has a normal distribution *within* each latent class. But when you look at the distributions of your sample data, e.g., histograms, you are pooling all the data together rather than looking within a latent class. So the distributions you observe in your overall sample data don't provide direct information on the assumptions of LPA. You could observe very skewed histograms simply because you have latent classes mixed together, even if the distribution *within* each latent class is normal.
      In terms of what checks to do, sometimes people will (1) run the LPA, (2) use the LPA results to assign members of the sample to classes, (3) inspect distributions within these assigned classes to see if they look normal.
      This is OK, but is not a perfect strategy, as the classes were generated under the assumption of within-class normality in the first place. That is, there is some circularity here -- let's find classes with normal distributions, then let's inspect classes to see if they show normal distributions. But if the distributions do look off despite this circularity, that's a clear indication that something about the model is wrong.
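
      A sketch of that three-step check using the tidyLPA package, assuming a hypothetical numeric data frame df with an indicator x1 (get_data() returns the data with modal class assignments in a Class column):

      ```r
      library(tidyLPA)

      fit <- estimate_profiles(df, 3)  # (1) fit a 3-profile LPA
      d   <- get_data(fit)             # (2) recover data with assigned classes
      par(mfrow = c(1, 3))             # (3) inspect within-class distributions
      for (k in 1:3) hist(d$x1[d$Class == k], main = paste("Class", k))
      ```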

    • @louis1505
      @louis1505 2 years ago

      @@centerstat Thanks so much for the feedback! Really appreciate it, this will be very helpful.

  • @katrinaanderson2731
    @katrinaanderson2731 2 years ago

    Hello! Thank you for this video! I am interested in running an LPA, but being able to look at change over time and grouping different individuals by how they change (for example, a group that starts high and drops quickly in their scores, another that starts low and stays low, etc.). I tried running an LPA using slopes from linear regressions that I ran based on the data points over time, but R seems to be grouping my classes based on difference in slope from the mean (i.e., clumping high positive slope values and high negative slope values together). Can you think of another way of doing this analysis? Thank you so much! (From a biologist trying to make her way through stats.)

    • @centerstat
      @centerstat  2 years ago

      Hi Katrina. There is an LPA-like model for growth called Latent Class Growth Analysis that you can use to group individuals based on the intercepts and slopes of their trajectories of change over time. Within R, the lcmm package offers this functionality. In brief, this is LPA-like in the sense that local independence is assumed within classes (i.e., repeated measures don't correlate within class) and only the means of the repeated measures vary between classes, but the means are restricted to take a particular form such as linear or quadratic change over time. Another type of longitudinal mixture model, called the general growth mixture model, allows for variance in change over time within classes as well (i.e., random effects). These would both be preferable to conducting individual linear regressions, saving out intercepts and slopes, and doing a standard LPA on the output values, though they are conceptually similar in what they are doing. If you think it would be helpful to get additional training on this topic, at CenterStat.org we offer both asynchronous and livestream classes on mixture modeling and latent class analysis. Next livestream is December 12-14.
      Hope this helps
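
      A sketch of a latent class growth analysis with the lcmm package, assuming hypothetical long-format data dat with numeric columns id, time, and score:

      ```r
      library(lcmm)

      m1 <- hlme(score ~ time, subject = "id", ng = 1, data = dat)  # 1-class fit for start values
      m3 <- hlme(score ~ time, mixture = ~ time, subject = "id",    # class-specific intercepts and slopes
                 ng = 3, data = dat, B = m1)
      summary(m3)
      postprob(m3)  # class sizes and posterior classification quality
      ```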

    • @hannahalveshagy4141
      @hannahalveshagy4141 1 year ago

      @@centerstat Hi! Does your course include information on covariates in LPA? I've done longitudinal GMM but I need to conduct an LPA cross-sectionally and can't find a package in R that allows me to include a covariate.

    • @centerstat
      @centerstat  1 year ago

      @@hannahalveshagy4141 Yes, our workshop includes an entire chapter on what one might call "external variables" -- predictors of class membership and distal outcomes. If of interest, we have a version of the workshop coming up in January. There is also a self-paced version of the class available at any time. Details on both can be found at centerstat.org. However, to be fully transparent about it, there are not great R packages for these extensions of the model. Some packages allow for a 1-step modeling approach for class predictors, but the preferred 2- and 3-step approaches (which prevent the latent classes from becoming redefined with the inclusion of predictors / outcomes) do not appear to be available. The commercial software programs Latent Gold and Mplus currently offer the best options in this regard (of these, we cover Mplus in the class).

    • @hannahalveshagy4141
      @hannahalveshagy4141 1 year ago

      Hello, thank you so much for the response. I took this course back in 2021 and found it so useful in running a GMM for my masters thesis. I was able to include a covariate (classm = Variable) for longitudinal latent trajectories, but now I am trying to include a variable I would like to control for during that process: I assume education will impact how these latent cognitive profiles emerge, and I feel torn between using the raw data or education-corrected data, since I cannot figure out how to control for this in the LPA.
      @@centerstat

  • @ryantsai2066
    @ryantsai2066 2 years ago

    Thanks for your video, that is super helpful! I have a question: if I am going to use a 25-item scale to measure one construct with 5 dimensions (i.e., 5 items per dimension), but I would also like to see the different clusters of people (e.g., some may have higher scores on dimensions 1, 3, and 5; some may have higher scores on dimensions 3 and 4), does that mean I should use the extension you talked about at the end: doing the CFA first to extract the 5 dimensions and then doing the LPA (all in the same model)? I am struggling because it seems like I could directly do the LPA with the 25 items, but the items were developed from the idea of 5 dimensions, so that both does and does not make sense to me.

    • @centerstat
      @centerstat  2 years ago +1

      When you are measuring items that you believe reflect different dimensions, including them all in a standard LPA can be problematic -- the primary issue is that the assumption of local independence requires that the responses to the items be independent within classes. When multiple items tap the same dimension, this is unlikely to be true, and what you might end up with are classes that try to approximate the continuous dimensions rather than reflecting the more discrete differences you're trying to recover.
      One option is to use the extension you mentioned at the end of the video in which you posit a 5-factor confirmatory factor structure for the 25 items, then add latent classes to this to make it a factor mixture model. Gitta Lubke has some nice early work on these models. Within a factor mixture model, your latent classes will capture profiles defined by differences on these dimensions. The inclusion of the factors accounts for the fact that items within dimensions tend to be rated similarly (the dependence that the standard LPA would have trouble with).
      Factor mixture models can be a bit tricky to work with, so another option is to calculate scale scores for each of the 5 dimensions (e.g., a mean of the items for each factor) and then run a standard LPA on the scale scores. This involves more assumptions and limitations, but could have practical advantages if you have a limited sample or a true factor mixture model becomes too unwieldy and complex.
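
      A sketch of the second (scale-score) option, assuming a hypothetical data frame items whose 25 columns are ordered so that each consecutive block of 5 forms one dimension:

      ```r
      library(tidyLPA)

      blocks <- split(1:25, rep(1:5, each = 5))  # columns 1-5, 6-10, ..., 21-25
      scores <- as.data.frame(lapply(blocks, function(ix) rowMeans(items[, ix])))
      names(scores) <- paste0("dim", 1:5)

      fit <- estimate_profiles(scores, n_profiles = 1:5)  # fit 1- to 5-profile models
      compare_solutions(fit)                              # compare fit across solutions
      ```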

    • @ryantsai2066
      @ryantsai2066 2 years ago

      @@centerstat Thank you for your informative and useful reply. It makes sense that direct application to all items leads to a problem.
      As for the solutions you mentioned, I have some follow-up questions:
      The sentence at the end of the 2nd paragraph, "The inclusion of the factors accounts for the fact that items within dimensions tend to be rated similarly (the dependence that the standard LPA would have trouble with)" -- does it mean that items from the same dimension are weighted equally by the respondents? That may not be the case, since subjects may value one item more than the others.
      Thanks also for your 2nd solution; I would like to know more about the "more assumptions and limitations" you mentioned. I wonder if you could say more or recommend some resources to dig into this issue.
      Thanks again!

    • @centerstat
      @centerstat  2 years ago

      @@ryantsai2066 In the factor mixture model, you estimate the factor loadings of the items within each class so that some items are more reflective of the latent construct than others (higher versus lower standardized factor loadings). You can specify a model where the factor loading values are replicated across classes (measurement invariance), or you can allow items to be differentially reflective of the factors across classes (non-invariance or differential item functioning). Katherine Masyn & Veronica Cole have both done some nice work on measurement invariance in mixture models.
      Within the second approach (computing sum scores or mean scores for use in LPA) you assume measurement invariance across classes, you assume all items are equally reflective of their factors, and you have no tests of these assumptions or even whether the putative factor structure itself is reasonable. So lots of assumptions built in.

    • @ryantsai2066
      @ryantsai2066 2 years ago

      @@centerstat Thanks for your reply! I will dig more into the measurement invariance and assumption issues. This helps a lot with my project!

  • @mugomuiruri2313
    @mugomuiruri2313 11 months ago

    good

  • @mariacamilaumanaruiz2602
    @mariacamilaumanaruiz2602 6 years ago

    Thanks a lot! This is very helpful!
    What software would you recommend for latent profile analysis?

  • @fernandojackson7207
    @fernandojackson7207 5 years ago

    Thanks, Curran-Bauer: How does one determine the classes? Is it because of similarity of responses to each item, e.g., group 1 tends to say items x1, x2, ..., xj are important, while others are not?

    • @centerstat
      @centerstat  5 years ago +1

      Hi, Fernando. You have the right idea. We try to identify all the classes we need to account for dependence among the item responses. Classes can differ in which items they think are important, but once we account for this (condition on class) there should be no further relationship (responses to the items are independent within any given class). We usually don't evaluate this "local independence" property directly but instead look at a variety of numerical measures to establish the optimal number of latent classes. For instance, it is common to fit models with 1, 2, 3, 4, 5, etc. classes, and then compare Bayesian Information Criterion (BIC) values, selecting the model (number of classes) that produced the minimum BIC. The topic of "class enumeration" is a big one -- there are many numerical measures and tests that don't always agree with one another in any given application, and there is some controversy about which are best. In practice, one tries to triangulate information across different criteria to select a model that is statistically justifiable while also providing useful interpretations.
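
      A sketch of that enumeration strategy with poLCA, assuming hypothetical items x1-x5 in a data frame dat:

      ```r
      library(poLCA)

      f <- cbind(x1, x2, x3, x4, x5) ~ 1
      bics <- sapply(1:5, function(k)
        poLCA(f, data = dat, nclass = k, nrep = 10, verbose = FALSE)$bic)
      which.min(bics)  # number of classes minimizing BIC
      ```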

    • @fernandojackson7207
      @fernandojackson7207 5 years ago

      @@centerstat Thanks again, C-B, one last one, please: do we use maximum likelihood too?

  • @haodeng1592
    @haodeng1592 7 years ago

    Really helpful!

  • @mish7992
    @mish7992 3 years ago

    I have a question: is there a way I can understand the transition between profiles when using LPA? (Just as in LCA I can use LTA to understand the transition between classes, is that possible to do in LPA?) It would be great if you could please explain. Thank you.

    • @centerstat
      @centerstat  3 years ago +1

      Yes. So let's say you have repeated measures at two time points. You can have an LPA at time 1 and an LPA at time 2 that are estimated together in one model and then look at how people move (or stay) over time. There are a couple of different ways to set this up, but often the model is parameterized to provide conditional transition probabilities to show, for instance, if someone is in Class 1 at Time 1, what's the probability they are in Class 1 again at Time 2? Or that they move to Class 2? Often of preliminary interest is establishing whether the classes are the same or different over time. One can test across-time equality constraints on the parameters defining the profiles to see whether invariance is supported or not (predicated on at least having the same number of classes at each time point). Sometimes transition probabilities are also constrained -- for instance, if you were looking at profiles that represented stages of cognitive development, you would expect only forward movement and not backward, so backward transition probabilities might be constrained to zero. Hope this helps.

    • @mish7992
      @mish7992 3 years ago

      @@centerstat Thank you! I just want to reiterate what you said. So, we can technically use LTA for understanding transitions between groups when conducting LPAs, is that correct? I'm not quite sure I understood what it means for the model to be parameterized. Would it be correct to assume that when you say whether the classes are the same or different over time, it means checking the stability of the indicator variables defining the profiles? In other words, by "whether the classes are the same or different over time" do you mean whether Class 1 at Time 1 is the same as Class 1 at Time 2? Perhaps examples could have helped to clarify here. Furthermore, you suggested testing across-time equality constraints on the indicator variables; is there a specific method to do this? What if the indicator variables weren't stable and the classes changed? Also, do you mention backward and forward transition probabilities because LTA involves a directional prediction? The example you provided for stages of cognitive development makes sense, but I am curious what sorts of processes would have the forward transition probability constrained to zero?

    • @centerstat
      @centerstat  3 years ago +1

      @@mish7992 Yes, whether Class 1 is the same at both time points, and Class 2, and so on. Are the profiles maintained over time and people just move between them or do the profiles themselves also change?
      Although it's not requisite for the latent classes/profiles to be the same across time, often one is interested in knowing whether they are or not, so a preliminary test of invariance is quite common. This is done by placing equality constraints on the class-specific means across time and testing whether these are tenable (e.g., by a likelihood ratio test).
      If the classes change in their profiles completely, then it would be more difficult to interpret an LTA in terms of people being stable or moving classes but you could still interpret it in terms of people in Class 1 at time one tend to be in Class X at time 2, etc.
      Backward and forward change are terms that are used when the profiles characterize a progression. Stage theories are great examples (e.g., cognitive development, stages of grief, etc.). Engagement in substance use might be another where, according to "gateway" theories of drug use, people might move from less concerning classes (e.g., marijuana only) to more concerning classes (e.g., polysubstance use) over time.

  • @MARS-SCHMOLI
    @MARS-SCHMOLI 6 years ago

    thanks

  • @Liz-qg2ef
    @Liz-qg2ef 4 years ago +1

    Great video, thank you. But you need a trigger warning for this example 😂