Statorials
  • Videos: 202
  • Views: 125,046
How to do a simple linear regression in SPSS and output interpretation - step by step
This tutorial shows you how to conduct a simple linear regression in SPSS and interpret the output / regression results.
📋 Simple linear regression basics
============================
Simple linear regression aims to use a single independent variable (predictor) to predict the dependent variable (outcome). For example, you might want to check the influence of IQ on income.
Linear regression uses the least squares method, meaning that the squared distances between the observations and the fitted line are minimized. This, of course, comes with a number of requirements, which are listed below and will receive separate videos going forward. (A short R sketch follows below.)
When it comes to a simple linear regression, I highly recommen...
Views: 132
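
The video itself uses SPSS; as a rough R equivalent of the IQ-and-income example above (all values made up, not from the video), lm() fits the least-squares line and summary() reports the coefficients:

    # Hypothetical data: predict income from IQ with ordinary least squares
    df <- data.frame(IQ     = c(95, 100, 105, 110, 120, 130),
                     income = c(2800, 3100, 3000, 3500, 4200, 4800))
    model <- lm(income ~ IQ, data = df)  # least-squares fit
    summary(model)                       # slope, intercept, R squared, p-values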

Videos

Spearman correlation - calculate required sample size with SPSS
35 views · 21 hours ago
The Spearman correlation tests two variables for a monotonic relationship (can be linear or not). In the run-up to an empirical study or data collection for the Spearman correlation, the necessary sample size must be determined, for example using SPSS. The minimum sample size depends on one- or two-sided testing, the assumed effect (aka. correlation coefficient), the alpha level, the statistica...
Pearson correlation - calculate required sample size with SPSS
71 views · 14 days ago
The Pearson correlation tests two variables for a linear relationship. In the run-up to an empirical study or data collection for the Pearson correlation, the necessary sample size must be determined, for example using SPSS. The minimum sample size depends on one- or two-sided testing, the assumed effect (aka. correlation coefficient), the alpha level, the statistical power and the null hyp...
One-Way ANOVA - calculate required sample size with SPSS
54 views · 21 days ago
// One-Way ANOVA - calculate required sample size with SPSS // The one-way ANOVA (analysis of variance) is a parametric statistical method that is used to test the means of at least three groups for differences. In the run-up to an empirical study or data collection for the one-way ANOVA, the necessary sample size must be determined, for example using SPSS. The minimum sample size depends on th...
Multiple Linear Regression - calculate required sample size with SPSS
38 views · 28 days ago
The multiple linear regression tests the influence of at least two independent variables (= predictors) on a dependent variable. In the run-up to an empirical study or data collection for the multiple linear regression, the necessary sample size must be determined. As of late, you can even do that in SPSS. G*Power tutorial: ruclips.net/video/2KK5G05Z-4A/видео.html The minimum sample size depend...
Point-biserial correlation in SPSS
97 views · a month ago
This video shows how to conduct a point-biserial correlation in SPSS. Point-biserial correlation is used when you have a dichotomous and a continuously measured variable. Before you start, make sure that you know the labels of the dichotomous variable to interpret the results properly. Example In my example I want to correlate gender and weight. Male participants are coded as 0 and female parti...
Partial correlation in SPSS
41 views · a month ago
This video shows how to conduct a partial correlation given at least one additional variable in SPSS and interpret the results. Remember that partial correlation measures the degree of association between two variables while controlling for other associated variables, in this case only one additional variable. For this example, we will correlate income and IQ and will control for the motivation, si...
SPSS Labels: Why You Should and How To Use Them
27 views · a month ago
Learn how to effectively use SPSS labels in syntax with this comprehensive step-by-step tutorial. Whether you are a beginner or an experienced user looking to enhance your SPSS skills, this video will guide you through the entire process of using SPSS labels to organize, manage, and analyze your data. By the end of this tutorial, you will have a clear and practical understanding of how to imple...
How To Calculate Standard Deviation in Statistics (EASY & FAST)
109 views · a month ago
This tutorial is a short but comprehensive guide to understanding the concept of standard deviation in statistics. The standard deviation quantifies how much the values of a variable tend to differ from the mean, aka the average value. A large standard deviation indicates a large spread, while a small standard deviation represents a smaller spread. Mathematically, the standard deviation is the squ... (A short worked example follows after this video list.)
Multiple Linear Regression - calculate required sample size with G*Power
95 views · 2 months ago
The multiple linear regression tests the influence of at least two independent variables (= predictors) on a dependent variable. In the run-up to an empirical study or data collection for the multiple linear regression, the necessary sample size must be determined, for example using G*Power. The minimum sample size depends on the assumed effect (f² or R²), the alpha level, the statistical power ...
Understanding Measurement Levels in Statistics: Nominal, Ordinal, Interval and Ratio Scale explained
44 views · 2 months ago
The scale level (level of measurement) of variables determines evaluation and analysis methods in statistics. Reason enough to explain them and give some examples so you can recognize them in the future. In statistics, four scales or measurement levels are known: - Nominal - Ordinal - Interval - Ratio The last two are often summarized under the generic term metric. Nominal scale The nominal sca...
How to Do Kendall's Tau c Correlation Analysis in SPSS (Full Guide)
86 views · 2 months ago
The Kendall's Tau c rank correlation coefficient can be easily calculated in SPSS. Just follow along on this simple, step by step guide using Crosstabs on how to run a Kendall's Tau c correlation in SPSS and interpret the results. In addition to the Kendall's Tau c correlation calculation, I'll show the details that will enhance your data analysis. For instance, an exact significance or a cross...
How to Do Kendall's Tau b Correlation Analysis in SPSS (Full Guide)
101 views · 2 months ago
The Kendall's Tau b rank correlation coefficient can be easily calculated in SPSS. Just follow along on this simple, step by step guide on how to run a Kendall's Tau b correlation in SPSS and interpret the results. In addition to the Kendall's Tau b correlation calculation, I'll show the details that will enhance your data analysis. For instance, one-sided testing is often forgotten despite bei...
How to Do Spearman Correlation Analysis in SPSS (Full Guide)
117 views · 2 months ago
The Spearman rank correlation coefficient can be easily calculated in SPSS. Just follow along on this simple, step by step guide on how to run a Spearman correlation in SPSS and interpret the results. In addition to the Spearman correlation calculation, I'll show the details that will enhance your data analysis. For instance, one-sided testing is often forgotten despite being applicable - provi...
How to Do Pearson Correlation Analysis in SPSS (Full Guide)
113 views · 2 months ago
How to create Boxplots for groups in SPSS (1 Min Tutorial)
52 views · 2 months ago
How to create Boxplots in SPSS (50 sec Tutorial)
58 views · 2 months ago
Reporting Mixed ANOVA - results from SPSS
222 views · 2 months ago
Effect sizes for pairwise comparisons of main effects in a mixed ANOVA in SPSS
164 views · 2 months ago
Effect size for pairwise comparisons for an interaction in a mixed ANOVA in SPSS
159 views · 2 months ago
Effect size Eta-squared for the mixed ANOVA in SPSS
159 views · 2 months ago
Main effects in a mixed ANOVA in SPSS - follow-up
98 views · 2 months ago
Interaction in a mixed ANOVA in SPSS - follow-up
173 views · 3 months ago
Mixed ANOVA in SPSS - calculation and interpretation in 6 minutes
387 views · 3 months ago
How to create a simple boxplot with ggpubr in R (2.5 Min Tutorial)
86 views · 3 months ago
How to create a simple boxplot with ggplot2 in R (2 Min Tutorial)
56 views · 3 months ago
How to create Boxplots in base R (1 Min Tutorial)
43 views · 3 months ago
Homogeneity of Variances for the mixed ANOVA in SPSS
167 views · 3 months ago
Checking Homogeneity of Covariance Matrices for the Mixed ANOVA in SPSS
152 views · 3 months ago
Mixed ANOVA in SPSS - checking normality assumption
243 views · 3 months ago
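
Regarding the standard deviation video in the list above: a short base-R illustration with made-up numbers, showing that the standard deviation is the square root of the (sample) variance, i.e. of the averaged squared deviations from the mean:

    x <- c(4, 8, 6, 5, 7)
    mean(x)                                        # 6
    sqrt(sum((x - mean(x))^2) / (length(x) - 1))   # 1.581139 (n - 1 denominator)
    sd(x)                                          # same result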

Comments

  • @Ken-kr4sr · an hour ago

    Wonderful explanation. Thank you. 🙏🏽🙏🏽💯💯

  • @vikramshukla5532 · 2 days ago

    Thanks for the video man, could you do a similar one for OLS regression please ?

    • @statorials · 2 days ago

      Hi there, actually, OLS regression is the method for estimating the parameters in a linear regression model, minimizing the sum of the squared differences between observed and predicted values. Cheers, Björn.
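
A small R sketch of the OLS point in the reply above (made-up data): lm() chooses the coefficients that minimize the residual sum of squares, i.e. the sum of squared differences between observed and fitted values:

    set.seed(1)
    df   <- data.frame(x = 1:10)
    df$y <- 2 + 0.5 * df$x + rnorm(10)
    fit  <- lm(y ~ x, data = df)         # ordinary least squares
    sum(residuals(fit)^2)                # the quantity OLS minimizes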

  • @truchuynhhue · 23 days ago

    So helpful, thanks so much! Very clearly explained!

    • @statorials · 22 days ago

      Glad you liked it and found it helpful. Many more videos on this channel that might help you. ;-) Cheers, Björn.

  • @GM__user · 24 days ago

    Hello, thanks for the video! Just a question: how does this differ from G*Power's one-way ANOVA? Or in other words, what makes this calculation a Kruskal-Wallis test per se?

    • @statorials · 24 days ago

      Hey there, it is the same calculation as for the one-way ANOVA, since the goal is the same and the power differences between the two are often negligible, except in rare cases. Some researchers will add 15% to the sample size to be safe (a tiny example of that rule of thumb follows after this thread). Cheers, Björn.

    • @GM__user · 24 days ago

      @@statorials Oh okay, thanks for the clarification! Do you by any chance know where I can find the actual calculations G*Power uses to ultimately reach these sample sizes? I have been running power analyses in Stata, Python, and R using the same inputs but I get different Ns!

    • @statorials · 22 days ago

      I would have pointed to the G*Power manual, but it does not reveal the formulas used. Cohen (1988) is always a good place to start searching, but I could not see anything there either at a quick glance.

    • @GM__user · 22 days ago

      @@statorials Oh, thank you!
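
A tiny R illustration of the 15% rule of thumb mentioned in the thread above (the ANOVA-based N of 66 is a made-up example, not a number from the video):

    n_anova <- 66               # hypothetical total N from a one-way ANOVA power analysis
    ceiling(n_anova * 1.15)     # 76: padded N when planning a Kruskal-Wallis test instead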

  • @Travelingsafari · 25 days ago

    I have 3 groups and a gender variable for a between-subjects design. For the ANOVA results, only the group variable has a significant main effect, while the interaction and gender show no significant effect. How would I perform the post-hoc test in R?

    • @statorials · 24 days ago

      Hey there, it sounds like you have no repeated measures if you only have between-subjects factors, hence I would recommend going with the two-factorial ANOVA and not a mixed ANOVA: df %>% anova_test(dv ~ iv1*iv2) Post-hoc testing depends on the interaction effect. If you have none, you will use something like this: df %>% pairwise_t_test(dv ~ iv1, p.adjust.method = "bonferroni", detailed = TRUE) with iv1 being your group variable (a self-contained version of these two calls follows after this thread). Cheers, Björn.

    • @Travelingsafari · 24 days ago

      Thanks
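
A self-contained version of the two rstatix calls from the reply above, with simulated data and hypothetical column names (dv, iv1, iv2); this is a sketch of the suggested workflow, not code from the video:

    library(dplyr)
    library(rstatix)

    set.seed(42)
    df <- expand.grid(iv1 = c("A", "B", "C"), iv2 = c("male", "female"), rep = 1:10)
    df$dv <- rnorm(nrow(df))

    df %>% anova_test(dv ~ iv1 * iv2)      # two-factorial between-subjects ANOVA

    df %>% pairwise_t_test(dv ~ iv1,       # post-hoc comparisons on the group factor
                           p.adjust.method = "bonferroni",
                           detailed = TRUE)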

  • @achmadsamjunanto6410 · 27 days ago

    Hi, I wonder: I cannot tick the box for comparing means for factor interactions. I can only compare the means for between-subjects only, or within-subjects only. Is there anything I could do? I'm using SPSS 27, btw.

    • @statorials · 26 days ago

      Hi there, if you have specified a between-subjects factor, SPSS automatically uses it within the products in EM Means. The only thing I can think of is that you specified a custom model under the Model button that consists only of the respective "main" effects. Maybe check on that. Cheers, Björn.

    • @achmadsamjunanto6410 · 18 days ago

      @@statorials Yes, actually, under the EM Means button there is no "Compare simple main effects" box to tick like the one shown in your video, only a "Compare main effects" box. I still don't know how I could specify it under the Model button. Will there be any problems if I try to manually use independent t-tests to determine the mean difference for each measurement as a post-hoc analysis, without Bonferroni correction? Thank you very much.

    • @statorials · 17 days ago

      Hmm, I suppose SPSS 27 may not include that EM Means option (yet). You can of course do the t-tests yourself; you can use ruclips.net/video/PW4OlfCLrM8/видео.html or ruclips.net/video/INg9oHbmsJY/видео.html as orientation. You only have to take care to choose between independent and paired t-tests correctly, depending on whether you are looking at between-subjects effects, within-subjects effects, or the interaction effect. Cheers, Björn.

    • @achmadsamjunanto6410 · 17 days ago

      @@statorials Thank you very much. Very helpful...

  • @la_lisamarie · a month ago

    Hi Björn, thank you so much for your helpful videos! 🙏 Can you also calculate an effect size (Cohen's d) for the post-hoc tests, and if so, how? I would be very grateful for a (prompt) reply. The submission of my master's thesis is getting closer and closer. 😅

    • @statorials · a month ago

      Hi Lisa, thanks for the kind words! Yes, that is possible. The first of two videos is already online on the German channel: ruclips.net/video/f6JRhnCmePc/видео.html The effect size for the main effects follows in exactly one week (see also the short R sketch after this thread). Best regards, Björn.

    • @la_lisamarie · a month ago

      @@statorials Thanks for your quick reply - that helps me so much! 🙏 I actually have one more question: I got a significant interaction effect, and the post-hoc tests showed that the groups differ significantly at the pre and at the post measurement. Is it very problematic that they also differ at the pre measurement? You would not really expect that. The values of the experimental group (EG) also increase significantly from the pre to the post measurement, while those of the control group (KG) do not, which is as expected. Thank you very much in advance!

    • @statorials · a month ago

      Hi Lisa, you're welcome! Regarding your question: how large is the group difference at the pre measurement in terms of Cohen's d? If it is still a small difference, that would certainly be tolerable and could easily be classified as acceptable. The fact that only the EG improves supports the hypothesis that the intervention was helpful. Best regards, Björn.

    • @la_lisamarie · 29 days ago

      @@statorials Hi Björn, it is a small difference (d = .441). How would you argue here then? I am so grateful to you!!

    • @statorials · 29 days ago

      @@la_lisamarie Hi Lisa, the effect is actually tending toward medium, but it can still be put into perspective here, given that the increase occurs only in the EG. Is the EG or the KG higher at the pre measurement? In other words, does the EG overtake the KG, or does the difference between EG and KG become even larger if the EG was already higher at the pre measurement? Best regards, Björn.
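
Regarding the Cohen's d question earlier in this thread: a hedged R sketch (simulated data, hypothetical column names group, time, dv) of one way to get d for the group difference at the pre measurement with rstatix; the video's own procedure may differ:

    library(dplyr)
    library(rstatix)

    set.seed(7)
    df <- expand.grid(id = 1:40, time = c("pre", "post"))
    df$group <- ifelse(df$id <= 20, "EG", "KG")
    df$dv <- rnorm(nrow(df))

    df %>%
      filter(time == "pre") %>%
      cohens_d(dv ~ group)      # sign depends on the order of the group levels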

  • @la_lisamarie · a month ago

    Hi Björn, thank you so much for all your helpful videos on the mixed ANOVA in R! 🙏 For my master's thesis I have to calculate the partial eta squared. Is that possible here as well? I also have to calculate an effect size (Cohen's d) for the post-hoc tests of the mixed ANOVA. Could you help me with this?

    • @statorials · a month ago

      Hi Lisa, you get the partial eta² if you add effect.size="pes" in anova_test() (see the short sketch after this thread). Best regards, Björn.

    • @la_lisamarie · a month ago

      @@statorials Thanks!!!
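
A minimal sketch of the tip above, with simulated long-format data and hypothetical column names (id, group, time, dv); effect.size = "pes" asks rstatix::anova_test for partial eta squared instead of the default generalized eta squared:

    library(rstatix)

    set.seed(2)
    df <- expand.grid(id = 1:30, time = c("t1", "t2"))
    df$group <- ifelse(df$id <= 15, "EG", "KG")
    df$dv <- rnorm(nrow(df))

    res <- anova_test(data = df, dv = dv, wid = id,
                      between = group, within = time,
                      effect.size = "pes")   # partial eta squared
    get_anova_table(res)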

  • @chirstinajohnson4342 · a month ago

    Hi, Thank you for providing the tutorial. I have a couple of questions. I want to calculate the number of participants required for the following study. The study is longitudinal in nature and has two groups. Data is collected at two time points. The experiment has three conditions and a control condition. I am planning to compare the two groups across time and the different experimental conditions within each group and across time. How do I calculate the number of participants required in a group for a large effect size? Also, while selecting, should I select between factors, within or interaction? Thank you

    • @statorials · 28 days ago

      Hi there, I would recommend a mixed ANOVA approach that covers between and within effects: ruclips.net/video/5AflJkhaln0/видео.html The estimated marginal means (post-hoc testing) you mentioned cannot be directly considered in the sample size calculation though, which is not a deal breaker. Cheers, Björn.

  • @Adam-dd3ki · a month ago

    Hi, thanks for this great tutorial. With 16 groups and 8 measurements I obtained a total sample size of 272, which means 17 per group. Does that mean I need 17 replicates per measurement, or 17/8, which is 2.125 and would make 3 replicates? This concerns a pot experiment with 4 variables (e.g., presence/lack of plants, presence/lack of contaminant 1, presence/lack of contaminant 2, presence/lack of contaminant 3). Thank you for your help!

    • @statorials · 28 days ago

      Hi Adam, glad you like it. The way I read your post, it appears to me that you have 8 measurements per 272 individuals that will be split into 16 groups. Do you have an additional between-subjects factor with the pots, meaning your subjects will be put into 16 groups and within each group you have another experiment? If not, what is the distinction between the groups? Cheers, Björn.

    • @Adam-dd3ki · 25 days ago

      @@statorials Hi Björn, thanks for the reply. So the experiment is divided into 4 variables: presence of plants (yes or no), presence of contaminant (yes or no), type of soil (type 1 or 2), aeration (yes or no), which gives 16 combinations. There would be 8 samplings, one every week. We would like to use a repeated measures ANOVA afterwards, and we're interested in whether 3 replicates per group would be enough. All the groups would be affected in the same way by changing temperature. Thanks. Adam

  • @LauraVossen · a month ago

    Hi! I was wondering why you call it a mixed ANOVA if you are doing a repeated measures ANOVA? Even though linear mixed-effects models can analyse the same data as RM-ANOVA, these are actually 2 different types of models.

    • @statorials · a month ago

      Hey there, the term "mixed ANOVA" is used because the design includes both within-subjects (repeated measures) and between-subjects factors in the traditional ANOVA world. The term "mixed" may be misleading with regard to linear mixed models, which are, as you pointed out, a different class of models. Depending on your research field, the terminology may vary a bit: split-plot ANOVA, or something with "mixed", like mixed-design ANOVA or mixed-model ANOVA. Cheers, Björn.

  • @not_amanullah · a month ago

    Great

    • @statorials · a month ago

      Thanks, mate! Cheers, Björn.

  • @Shonade_Malik · a month ago

    Nice video!

    • @statorials · a month ago

      Thanks, appreciate the feedback!

  • @weizhang8133 · a month ago

    Very useful😀

  • @MK-dj5vz · a month ago

    What do you do if Levene's test / the assumption is violated? I have a total sample of 110. I am using JAMOVI as my stats software.

    • @statorials · a month ago

      I need some more information. How many time points do you have, and how many show a low p-value for Levene's test? Cheers, Björn.

    • @MK-dj5vz · a month ago

      @@statorials I have 3 levels for my within-subjects factor, and two show a low p-value for Levene's test. However, my sample sizes for the between-subjects groups only differ by 11 participants, and the standard deviations for each group do not vary much. Is there another rule of thumb I can use to evaluate this assumption?

  • @julianjamaal7417 · a month ago

    you're the GOAT

    • @statorials · a month ago

      Thanks for the feedback! Cheers, Björn.

  • @tillbarrabas660 · a month ago

    Thank you, Björn, for all your efforts!

    • @statorials · a month ago

      Glad if it helps. 🙂

  • @DaisyWaters · 2 months ago

    I don't understand why for rank biserial you calculate it with Spearman and point biserial with Pearson. Is rank biserial not run with rank_biserial from effectsize?

    • @DaisyWaters · a month ago

      A question: do you know how I can 'correlate' an ordinal variable with a nominal variable (with categories)?

    • @statorials · a month ago

      Hey there, I dug into that a bit, and Spearman is an approximation (at best). I'll swap out the video in the coming weeks once I have had time to record a new one on that. One can use the effect size calculation of the two-sample Wilcoxon test, meaning the Glass rank-biserial correlation coefficient or Cliff's delta. For your question, you can "correlate" an ordinal and a nominal variable using the Chi² test of independence and use the effect size Cohen's ω to quantify the association. Cheers, Björn.
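
A short base-R sketch of the suggestion above for an ordinal × nominal pair (made-up data): run a chi-squared test of independence on the contingency table and compute Cohen's w as sqrt(chi² / N):

    ordinal <- factor(c("low", "mid", "high", "low", "mid", "high", "high", "low",
                        "mid", "low", "high", "mid"),
                      levels = c("low", "mid", "high"), ordered = TRUE)
    nominal <- factor(c("A", "A", "B", "B", "A", "B", "A", "B", "B", "A", "B", "A"))

    tab  <- table(ordinal, nominal)
    test <- chisq.test(tab)                   # may warn about small expected counts
    unname(sqrt(test$statistic / sum(tab)))   # Cohen's w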

  • @odaimaihoub1222 · 2 months ago

    Thank you Dr. Björn! Please can you correct me if my understanding is wrong? If we have paired data (before treatment, after treatment), first we need to check the normality assumption (for the residuals, NOT for each group separately) using Shapiro-Wilk. Then, if the p-value > 0.05, the data is normally distributed, so we can use the paired t-test; if the p-value < 0.05, the data is not normally distributed, so we can use the Mann-Whitney Wilcoxon test. Thanks again.

    • @statorials · 2 months ago

      Hi there, you are almost correct in your procedure. 1) I would be careful with the Shapiro-Wilk test and its sensitivity in larger samples, where it rejects the null hypothesis because of negligible differences. Consider a Q-Q plot for that task. 2) If that does not look approximately normally distributed, go with the paired Wilcoxon test, aka the Wilcoxon signed-rank test (ruclips.net/video/9sX62Qagrs0/видео.html), NOT the Mann-Whitney-Wilcoxon test (a short sketch of both steps follows after this thread). Cheers, Björn.

    • @odaimaihoub1222 · 2 months ago

      @@statorials I really appreciate your help and time. Thank you so much <3

    • @statorials · 2 months ago

      @@odaimaihoub1222 You're welcome! Best of luck with your analysis!
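
A base-R sketch of the two steps in the reply above (simulated before/after values): inspect the paired differences with a Q-Q plot, and use the Wilcoxon signed-rank test if they do not look approximately normal:

    set.seed(3)
    before <- rnorm(25, mean = 50, sd = 8)
    after  <- before + rexp(25, rate = 0.2)     # skewed change scores

    diffs <- after - before
    qqnorm(diffs); qqline(diffs)                # visual normality check
    shapiro.test(diffs)                         # interpret cautiously in large samples

    wilcox.test(after, before, paired = TRUE)   # Wilcoxon signed-rank test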

  • @RodrigoAriasInostroza1970 · 2 months ago

    Hi, thanks for your video. When I tried to run the post hoc test I got an error: "Error in `map()`: ℹ In index: 1. ℹ With name: V1. Caused by error in `complete.cases()`: ! not all arguments have the same length Run `rlang::last_trace()` to see where the error occurred." I guess it is because I removed some extreme outliers and the data is unbalanced. Is there any way to proceed with the comparison (Sum of Squares IV perhaps)? Do you know how to do it?

    • @statorials · 2 months ago

      Hi there, you might be able to solve this by adding na.rm=TRUE, which will listwise exclude cases with missing values. Best regards, Björn.

  • @statorials · 2 months ago

    ➡ Follow-up: ruclips.net/video/74vSNwyd2ys/видео.html

  • @statorials · 2 months ago

    ➡ Follow-up: ruclips.net/video/Rrqt_yLaa34/видео.html

  • @statorials · 2 months ago

    Recommended follow-up: ruclips.net/video/5HAAKea4TzE/видео.html

  • @statorials · 2 months ago

    Recommended follow-up: ruclips.net/video/6UUK7Adk7MY/видео.html

  • @statorials · 2 months ago

    Recommended follow-up: ruclips.net/video/tfjoqHo104g/видео.html

  • @dsavkay · 2 months ago

    Great, thanks!

  • @realtheodore · 2 months ago

    damn bro much needed video well explained

    • @statorials · 2 months ago

      You're welcome. Glad the video about Pearson helps. :-) Cheers, Björn.

  • @saraa.s.8912 · 2 months ago

    Thanks for the tutorial. If using the "determine" option and the effect size specification as in Cohen (1988), should the variance explained by effect and the error variance entered be the squared values or not ("absolute values")? I find it a bit tricky to find values from comparable studies. For the variance explained by effect, could I enter the change observed in the main outcome variable in a comparable study, while using the error between measurements of the (same) main outcome variable from another study?

    • @statorials · 2 months ago

      Hello Sara, my G*Power does not show the error variance under the Determine option for the mixed ANOVA. Which version of G*Power are you using? You could also use the sums of squares, if provided by the studies, to calculate eta squared and then transform it to f (a worked example follows after this thread). See here how to do it: ruclips.net/video/YPO0hBKCUQc/видео.html By the way, it should be stressed that the effect size for the mixed ANOVA is based on the interaction effect. Cheers, Björn.

    • @saraa.s.8912 · 2 months ago

      @@statorials Thanks for the reply, Björn. I have the same version as you. The error variance will appear under the Determine option if choosing "as in Cohen (1988) - recommended" under Options.
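
A small worked example of the conversion mentioned in the reply above (made-up sums of squares, not values from any study): eta squared from the sums of squares, then Cohen's f for G*Power:

    ss_effect <- 12.4                              # hypothetical SS of the effect
    ss_error  <- 87.6                              # hypothetical SS of the error term
    eta_sq <- ss_effect / (ss_effect + ss_error)   # 0.124
    f <- sqrt(eta_sq / (1 - eta_sq))               # 0.3762, entered as f in G*Power
    c(eta_sq = eta_sq, f = f)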

  • @Itsnegar · 2 months ago

    Thankss

    • @statorials · 2 months ago

      You're welcome! Cheers, Björn.

  • @sachaleroyer8084 · 2 months ago

    Is it possible to obtain a negative group effect?

    • @statorials · 2 months ago

      Yes, because the order of the groups plays a role. Report only absolute effect sizes. Cheers, Björn.
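
A hedged rstatix sketch of the point above (simulated data, hypothetical column names): the sign of Cohen's d depends on the order of the group factor levels, so only the magnitude is reported:

    library(rstatix)

    set.seed(5)
    df <- data.frame(group = rep(c("A", "B"), each = 10),
                     dv    = c(rnorm(10, mean = 10), rnorm(10, mean = 12)))

    cohens_d(df, dv ~ group)                           # d is negative (A minus B)
    df$group <- factor(df$group, levels = c("B", "A"))
    cohens_d(df, dv ~ group)                           # same magnitude, positive sign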