Mixed ANOVA in R - calculation and interpretation

  • Published: 22 Aug 2024

Comments • 12

  • @statorials
    @statorials  3 months ago

    ➡Follow-up for observable effects:
    ruclips.net/video/kTGwVo9g_uA/видео.html
    ruclips.net/video/fpXJjF33Le0/видео.html
    ruclips.net/video/GKvRAlAmb9s/видео.html
    ➡Reporting results: ruclips.net/video/jfy2A5O2wJk/видео.html

  • @rolas.635
    @rolas.635 6 months ago

    Dear Björn, first of all, thank you so much for your whole playlist about mixed ANOVA. The videos are perfectly compact and short while nothing is missing, and the design looks great!
    Now I have a question; maybe you could help me out. I tried the anova_test() function and it worked, the output looked just like yours in the video. Then very soon I ran into problems with the package under R (version 4.2.2), so I installed the newest version, R 4.3.2, and although I can use the anova_test() function again, I now only get a third of the output: no Mauchly's test is included and no corrections.
    So I tried alternatives, in this case the aov_car() function, and I switched R versions many times (because they are still on my hard disk), but the needed packages do not work under any R version.
    Since you are familiar with R, do you have any idea what I could do to fix this problem?
    Thank you so much in advance, and thanks for your help in general!
    Marleen

    • @statorials
      @statorials  6 months ago

      Hello Marleen, thanks for the positive feedback! I was able to reconstruct the versions of R and rstatix I used. I had R 4.2.3 and rstatix 0.7.2 installed when recording the video.
      It's odd that you get no Mauchly's test. What else is missing from the output, or rather, what output are you getting? What are you specifying within the anova_test() function? I assume you have both between and within specified. If your code lacked either, the output would be for a repeated measures ANOVA or an ordinary ANOVA, which would shorten the output accordingly; in particular, the latter has only the between-subject effect as output.
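      For reference, a minimal sketch of such a call (the object and column names data.ma, value, id, group and time are only illustrative, borrowed from the code further down in this thread, not necessarily the ones from the video):
      library(rstatix)
      # Mixed ANOVA: both a between-subject and a within-subject factor are specified.
      # Dropping "between" gives a repeated measures ANOVA; dropping "within" gives an ordinary ANOVA.
      res <- anova_test(data = data.ma,
                        dv = value,       # dependent variable
                        wid = id,         # subject identifier
                        between = group,  # between-subject factor
                        within = time)    # within-subject factor
      res                   # with 3+ within levels: ANOVA table, Mauchly's test, GG/HF corrections
      get_anova_table(res)  # ANOVA table with a correction applied automatically where needed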
      Best Regards, Björn.

    • @rolas.635
      @rolas.635 6 months ago

      @statorials Dear Björn, thanks for your answer.
      I took every argument from your example, so nothing is missing in the function.
      I think my problem is that the package is not being used in the correct way. I call the anova_test() function and, thanks to you, I now know that I do not get the correct mixed ANOVA but instead a type II ANOVA test result.
      The error message while loading the rstatix package is that it is masked by another package, and at the moment I don't know how to fix this...
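      (A generic aside, not specific to this setup: a masking message when loading a package is usually informational rather than an error. One common way to rule it out is to qualify the call with the package namespace, or to inspect the conflicts; the names below mirror the illustrative sketch above.)
      # call the function via its namespace so no other loaded package can shadow it
      rstatix::anova_test(data = data.ma, dv = value, wid = id,
                          between = group, within = time)
      # list which objects on the search path mask one another
      conflicts(detail = TRUE)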

    • @rolas.635
      @rolas.635 6 months ago

      @statorials Update: I now know why I don't get any Mauchly's test results! So it wasn't a problem with the package or anything like that.
      I had only tried the ANOVA with 2 time points (T1 & T2) before; when I tried it with all 4 measurements (T1 - T4) I get the complete output, and I just read that the test for sphericity needs at least 3 measurements. So that is clear now :D
      It would make much more sense to do two ANOVAs (T1 & T2; T3 & T4): although we investigated a long-term effect, in terms of content we don't assume a relevant change between T3 & T4...
      So I hope that in such a case it is proper to just drop checking the sphericity assumption? :)
      Thank you so much, this really helped me get back on track (the ball got rolling again)!

    • @statorials
      @statorials  6 months ago

      You're welcome! Glad to hear it works now. :-)
      Actually, sphericity is not a big deal - one can correct for it (with GG or HF) regardless of what Mauchly's test states and be safe. If sphericity is not violated, the corrections will yield the same results as the sphericity-assumed calculations.
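      (As a hedged one-liner, assuming an anova_test() result stored in "res" as in the earlier sketch - the name is only illustrative:)
      get_anova_table(res, correction = "GG")  # force Greenhouse-Geisser; "HF" for Huynh-Feldt, "auto" corrects only where Mauchly's test is significant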
      If you only want to test T1 vs. T2 and T3 vs. T4, you might want to consider custom contrasts within a repeated measures ANOVA (no between-subject effect, right?). I have not fiddled with custom contrasts within anova_test() yet, though. It seems to be a bit of a neglected argument, if my memory does not fail me.
      Maybe going through the emmeans() function could be a remedy.
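      A rough sketch of that route (assuming a purely within-subject design with the illustrative names data.ma, value, id and a four-level time factor; this uses aov_car() from afex plus emmeans, not anova_test()):
      library(afex)     # aov_car()
      library(emmeans)  # emmeans(), contrast()
      # repeated measures ANOVA without a between-subject factor
      fit <- aov_car(value ~ Error(id / time), data = data.ma)
      # estimated marginal means per time point
      emm <- emmeans(fit, ~ time)
      # custom contrasts: only T1 vs. T2 and T3 vs. T4
      contrast(emm,
               method = list("T1 vs T2" = c(1, -1, 0, 0),
                             "T3 vs T4" = c(0, 0, 1, -1)),
               adjust = "bonferroni")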
      Greetings, Björn.

  • @tschuliatschuu
    @tschuliatschuu 8 months ago

    Hi, I have a question regarding p-values in mixed ANOVAs. I have directional hypotheses and therefore decided to test one-tailed, with halved p-values (in case the effect is as expected). I hypothesised that my intervention leads to an improvement of scores in the intervention cohort compared to the control cohort. Now, the intervention cohort shows an improvement, while the control cohort does not. I understand that the p-value of the significant interaction effect can be halved, and also that the simple main effect of time on the intervention cohort is as expected and can thus be halved. But how do I treat the control cohort? I did not really say anything about the longitudinal development of the control cohort, and now it remains the same over three measurements. Do I just leave its p-value as it is because there is no hypothesis about it?

    • @statorials
      @statorials  8 months ago

      Hi there, firstly, wouldn't you ONLY hypothesize an interaction of time and group when you go through the trouble of setting up your experiment that way?
      Secondly, if the p-value of your interaction is low enough to be considered an effect, I see no benefit in interpreting the "main effects" - you have just discovered that group and time depend on each other, and neither should therefore be interpreted holding the other constant.
      Thirdly, in accordance with your hypothesis of an interaction and the setting, I would much rather look at the estimated marginal means (emmeans_test) to determine at which points in time group differences (between-subject effects) exist - maybe in combination with a line chart. For example:
      library(rstatix)  # emmeans_test()
      library(dplyr)    # group_by() and the pipe
      data.ma %>%
        group_by(time) %>%                 # one comparison per time point
        emmeans_test(value ~ group, p.adjust.method = "bonferroni")
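      The line chart mentioned above could be sketched roughly like this (ggplot2, purely illustrative, same assumed column names):
      library(ggplot2)
      ggplot(data.ma, aes(x = time, y = value, colour = group, group = group)) +
        stat_summary(fun = mean, geom = "line") +   # group means over time
        stat_summary(fun = mean, geom = "point")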
      Regarding p-values and the strict alpha level, I would recommend Wasserstein, R. L., & Lazar, N. A. (2016). The ASA statement on p-values: context, process, and purpose. The American Statistician, 70(2), 129-133, to move away from strict testing based solely on p-values.
      Best Regards, Björn.

  • @muhammedhadedy4570
    @muhammedhadedy4570 6 months ago

    Excuse me, sir. The video titled "Mixed ANOVA post-hoc test" doesn't exist. Can you upload it, please?

    • @statorials
      @statorials  6 months ago +1

      That is true; I split up the post-hoc analysis into one video for a significant interaction (ruclips.net/video/kTGwVo9g_uA/видео.html) and one for a non-significant interaction (ruclips.net/video/fpXJjF33Le0/видео.html), using estimated marginal means. I also linked them in the end cards. Thanks for bringing it to my attention.
      Best Regards, Björn.