Hi Josh, Love your content. Has helped me to learn a lot & grow. You are doing an awesome work. Please continue to do so. Wanted to support you but unfortunately your Paypal link seems to be dysfunctional. Please update it.
Hi, I got my master in Epidemiology, trying to review statistics and found your channel, you are awesome, you really make statistics easy to understand, TRIPLE BAM for you
I remember learning about t-tests well before linear regression, but it's cool seeing things applied in a different way, especially while going into the deeper concepts. This whole playlist is a stats and machine-learning goldmine!
BAM! Yes, usually t-tests are taught before linear regression, but I like teaching them in this order (regression first) since the extension of a t-test into ANOVA is way more obvious.
Thanks!!! I know, it's a little weird to look at a t-test from this perspective, but it shows how the F-statistic is a generalization of T-statistics. (Here's a cool hint - just like the F-statistic is a generalization of T-statistics, Chi-square statistics are a generalization of normal statistics....)
Love the channel, one request to put some more statquests practising different stats model on different data sets! Thanks so much, binged the whole playlist
i am seriously failing my beginner stats course because try as i might the lectures are quite literally incomprehensible. i owe you my life!!! thank you for these amazing videos -- i feel like this is the first time ALL semester I am understanding something!
Wow, this is smart way to explain ANOVA test, it looks so complicated at first, now it looks straight forward after resembling with linear regression. Great video!!!
I hope you will have the time to answer just in few words please! R sqr tell us how x is useful to predict y, so in the case of a t test or anova how to use it? we just talk about F & p, can we say it explains some % of the variance between treatments or it's useless!? Thank you so much Mr. Starmer
This is a great question. The traditional way to teach and perform t-tests (and ANOVA) only results in 't' or 'F' statistics and a p-value - no R-squared. However, as you see in this video, it's easy to also report R-squared - you just have to want to do it. The case of t-tests and ANOVA are just like regression and R-squared tells you the same thing - it gives you an estimate on the magnitude of the difference. The p-value just tells you that it is significant. If you did a t-test and got a small p-value, but also a small R-squared, then you could easily deduce that there's not a huge difference between the two groups (even if is statistically different). In contrast, if you did a t-test and got a small p-value and a large R-squared, then you would know that there's a big difference between the two groups. So we can see that R-squared is useful for even the t-test. I suspect that one reason presenting R-squared with t-test results is rare, is that often with t-tests, it is easy and very common to plot the data - so people will show you their data and give you the p-value. Seeing the data is sort of like a "visual R-squared" - you can see if the data are very close to each other or far apart.
THANK YOU SO MUCH.... YOU ARE VERY KIND SIR. I summarize if you allow : "significant p-value + R-squared" = how much is the différence Really GREAT! Thanks again & Good luck!
Thanks for the video. I'll have to watch this one a couple more times to fully digest it. It's the first time I've heard of a design matrix, so I'll have to spend some time looking into that.
Hablo muy poco inglés, pero tu metodología de enseñanza( es muy profesional) es magnífica. A pesar que es inglés, yo logro entender mejor que todas las clases de estadísticas en español. Haces un enorme esfuerzo para que tus clases sean intuitivas y fáciles de comprender para personas no expertas en estadísticas. Te felicito.
These videos are brilliant! I'm completing my PhD and there really isn't enough statistics support available which is as accessible as these videos (and considering we're meant to be doing research, that's not really good enough!) - thanks!
Love your videos. I have 3 requests... 1. Degrees of freedom 2. Linear regression with regularisation 3. Log linear regression and why coefficient indicates % change Thanks so much!
Thanks so much! The degrees of freedom StatQuest is high, high on the to-do list. It is never far from my mind. I have it about 1/2 done in my head, but the second half is tricky - some situations are easier to illustrate then others - but it's just a matter of setting aside time just for it and nothing else and it will get done. The good news is that I'm maybe 1 or 2 months away from doing StatQuests on ridge, lasso and elastic-net regression - all examples of linear regression (or, more generally, generalized linear regression since these ideas can be applied to logistic regression) with regularization. So that's sure to happen soon (just as soon as I can!) The last one, log-linear regression, is the logical follow up to logistic regression. I may do a "big picture/main ideas" StatQuest on that as soon as I can. It's on the list!
Hi Josh, Your vidoes are amazing, easy to follow and understand. Just wondering if you could upload video on GLMM, LMM models and when to use which model? This will help to clarify.
Great video! I think you should put parentheses around your SS differences in the F-statistics to have correct equations; (SS(mean)-SS(fit))/(p_fit-p_mean). Divisions have generally a higher priority than differences, but you want to first subtract and then divide.
I love your voice both while singing and explaining statistical concepts. Thank a ton for these videos. Do you mind if I can request you the videos on the following topics 1) 2 or more factor ANOVA (to be used as reducing the number of the independent variable) 2) Linear Multiple regression (to be used as reducing the number of the independent variable) 3) DOE and Taguchi
Glad you like the videos! I've added Taguchi, DOE and 2 or more factor ANOVA to my to-do list. I believe that my video on Multiple Regression in R may already satisfy your second request: ruclips.net/video/hokALdIst8k/видео.html
1) So are we basically comparing the variability of the each data point of that categorical feature around the sample mean to the variability of individual data points around the grouped mean? Or how can I explain in a simple sentence what these tests are and what we can infer? 2) This is a univariate analysis right? 3) In the figures of Gene Expression you've taken 4 data points as example. What those are? I mean to say, how can I interpret those? are those control and mutant categories encoded? This was the only video, that dared to visualize what T-test & ANNOVA are
1) Yes, that's the main idea 2) The t-test is univariate. However, this series of videos also gives many multivariate examples. 3) Those 4 data points reflect how many mRNA transcripts are measured. If that doesn't mean anything to you, just imagine we counted something, like green apples, at 4 different grocery stores.
Your videos are extremely helpful! Can you go through things like kruskal-wallis test and why it is not sensitive to normal distribution? If you can share some insights on chi-squared test etc, it would be really helpful too!
Thanks @Josh i have a some questions:- 1] Suppose we have 5 independent variables and a label ,How does ANNOVA calculates p-value for each feature in this case? 2] Does it fits a regression for each indipendentVariable~Label separately and than calculates p-value?
I describe how p-values are calculated for individual features in these videos: ruclips.net/video/zITIFTsivN8/видео.html ruclips.net/video/hokALdIst8k/видео.html The concepts apply to ANOVA in the exact same way.
Hi, love your videos. Just a quick checkup to see if I'm still on track. In the previous videos, I thought that you mentioned 'Degree of freedom' as an equation of (n-Pfit)/(Pfit-Pmean), if so, in the ANOVA example, since Pfit = 5, Pmean = 1, does the 'degree of freedom' equals (n-5)/4? if not, I think I need a solid explaination on this matter.
First of all, Thank you so much, Josh, for the time you spend sharing your knowledge about statistics. Students need more people like you... I wanted to ask something likely silly, can you make an ANOVA with an unbalanced sample? What can I do if some categories have more data than others? Thanks again, Josh!! I am looking forward to hearing from you!!!
Hi Josh, quick Q. Isn't the test you explained here F-test? Isn't t-test use t-score=(slope beta-0)/standarderror , and then get p-value from t-table? or are they the same thing? little confused here. Thank you!
This is a great question. The t-test is just a specific type of F-test. If you have statistics software, you can compare the results and see that the p-values are the same (however, the F-statistic itself will be the square of the t-statistic. Why the square? Because, as you saw in the first video in this series, the F-statistic can never be negative, but the t-statistic can.) There are multiple ways to calculate a t-test, this using an F-test is my favorite because it is much more flexible. Does that make sense?
@@statquest I knew you would took it to the further level. So basically the two tests are both about model parameters hypothesis significance test, just use different methods, so p-value should refer the same thing. BAM! Thank you so much!
This channel is a gift from the math gods. Question: I'm having a hard time linking this to Design of Experiments methods. It seems like it should be an easy connection, but I somehow can't quite work it out in my head. How would one use this to calculate the explained variation by individual terms of a linear model? 1 term == 1 "category"? And how do degrees of freedom factor into it?
Thanks so much for this video!!! Never heard anyone explain those concepts so well. Do you have any plan to make videos about multiple comparisons adjustment?
Will the F-statistic calculated from this method be equal to the t-statistic? I understand that you are trying to standardize the way to calculate the t-test by using methods from linear regression, but does it produce the same values that a regular t-test does?
According to this website [ onlinecourses.science.psu.edu/stat501/node/297/ ], the t-statistic and F-statistic produce equivalent p-values when the F-statistic's degrees of freedom in the numerator is 1. The relationship is t^2(n-p) = F(1,n-p), which apparently means the p-values for each will be identical. Don't know why that is but videos on the relationship between those two distributions may help. Anyway, I assume the relationship applies here in which the df = 1 for the F-statistic numerator when comparing two groups. As a side note, most slopes for p-values in multiple linear regression are calculated with t-tests. However, F-tests comparing the variance between models with and without the slope produce an identical p-value due to the above mentioned relationship. Thinking of slope significance in terms of how much more variance the model explains with vs without the slope seems much more intuitive to me, and I'm glad I found these videos.
Thank you so much Josh for all your amazing content and great silly songs. I don't manage to wrap my head around the reason you say the fit equation is written out like: y = mean_control + mean_mutant at 6:48 and 9:05. I would have written something like y = mean_control * x + mean_mutant (1-x), x taking 1 or 0. Any explanation on that from you or someone else is appreciated.
Hi Professor Josh, Anova(F-test) is often used in Filter method for feature selection. Theory says, Anova should be used for feature selection when target is Binary but I saw in some practical use people also uses Anova when target is multi class. So Anova(F-test) can also be applied if our target is not binary and has multiple classes? another question Anova assumes features to be normally distributed, But in practice most of the time we encounter data that are not fully normal in such case does it matter much to apply it? or Transformation is compulsion?
hi Joshua, thanks for sharing. These videos are step-by-step processing and makes so much sense to me than the hedious textbooks. I was wondering if you can make a videos on repeated measures ANOVA biting into small pieces, thanks in advance.
Thank you, I had a few questions. At 6:37, is there a reason we did not include the residuals in the overall equation of y? Also, why do we need the y equation at 6:13 to create a design matrix? Is it just not just a matrix where the number of ones corresponds to the number of data points for control and zero for mutant and vice versa for the next data point number of entries? Also, does the sample size have to be the same per category to create a design matrix? Great Tutorial!
1) This equation simply represents what goes into to the design matrix. The residual is the difference between this equation and what is observed. 2) The equation just illustrates how we create the design matrix and what it represents. 3) You don't need to have equal numbers of samples for each category (they can be different).
At 4:20 of the video, you mention the reason we combine the two lines of best fit into a single equation is to make the steps for computing "F" identical for regression and the t-test meaning a computer can do it automatically. In terms of what this actually looks like, I think this means having a single equation means one value for SS(fit) (instead of 2) which means we can use the "F" equation for regression. Is my reasoning correct? Also, why does a single equation mean a computer can do it automatically? Why could a computer not do it automatically if we had 2 equations? Thanks I love your videos!
Sure, a modern computer can handle more than one equation. But back in the day memory was limited and that limited the number of tests a computer could perform. So the the original idea was to unify as much of linear models into a single framework called "General Linear Models", with the idea that one equation could be used in a general setting on a computer without having to check a bunch of different conditions. In the early days, different conditions meant different look-up tables for figuring out the p-values and since computers had very little memory, this limited what they could do.
I really like all of your videos :) Could you please put ads in the beginning and end and less in the middle? If there is an ad in between every 5 minutes, it's very distracting and I need so much time to get back to the topic and to my concentration.
We can fit a vertical line passing through all the points of control data which will give the Least sum of squared residuals, @3:09, right? If that's the case then why did we fit horizontal line? Thanks in advance P.S.: The channel is awesome. Recommended it to many.
Sure, a vertical line would minimize the squared residuals, but you can't use it to make predictions. What Gene Expression value would you predict with a vertical line? All of them, and that makes vertical lines useless.
@@statquest sorry, wouldn't that effectively mean that for t-test we're not really looking for a best fit? Calling a something a fit then it's really not makes things confusing, in all the examples you show so-called "fit" is represented as a "mean" so wouldn't "just find equation for mean line" a better rule of thumb rather than talking about "lest squares"? Head melting right now
@@teammdyss Maybe a better way to say it is "best fit given some restrictions", and those restrictions are 1) the number of parameters we want to use and 2) we want a model that is useful for making predictions.
Such a great video, Josh. Really enjoyed your videos. Can you please recommend a text book which reflects your way of teaching? Are there any such which I'll be hooked at reading (just like your videos)? Thanks
Hey Josh, Could you please answer this? If i calculate p-value using this method and also using student's t-test. Will it be the same? If yes, why? If not, why?
It will be the same. The F-distribution is just the square of the distribution. For more details: coursekata.org/preview/book/fd645e20-5a0d-482e-ad16-ee689acb7431/lesson/15/6#:~:text=The%20F%2DDistribution%20and%20T%2DDistribution%20are%20Actually%20the%20Same&text=The%20reason%20is%20that%20fundamentally,get%20exactly%20an%20F%2Ddistribution!
great goldmine i found !!!. btw one concern : don't u think in t-test it is somewhat showing the dependence among the independent variables?? when we consider ss(fit)
@@statquest sure thanks for replying .. in timestamp 6:00 we see we are taking residuals refrencing both the independent variables right ?? they there is reference between them does that mean they are dependent . Please let me know if im clear
@@mrcharm767 Again, I'm not really sure what you are asking. The independent variables are independent. We are using the residuals to determine if a model with 2 independent variables results statistically significantly smaller residuals than a model with just one independent variable.
@@mrcharm767 Regardless of the distances between the two variables, we calculate the squared residuals using the same formula: (observed - predicted)^2
Thank you for making these amazing videos! Questions: Are we calculating R squared for the t-test and ANOVA as well? Additionally, if the p-value is small, does that mean both 1) the fitted lines are statistically significant and 2) Two categories' means are significantly different from each other?
Yes, you are calculating R^2 for the t-test and ANOVA. If the p-value is small, then using two means and fitted lines result a significant reduction in residuals compared to using a single mean and a single fitted line. Thus, the two categories' means are significantly different from each other.
@@Denise_lili No, we are comparing using one mean that is calculated from all of the data - control and murant - to using two means, one for control and another one for mutant.
@@MR-yi9us If you think of the t-distribution like a knife, then the F-distribution is a Swiss army knife. It does what the t-distribution does, but much more. That being said, when the t-test was first created, they were only thinking about it, and not the broader class of problems (like ANOVA), and so the t-distribution was enough to get the job done. Thus, it's called a t-test. However, because the F distribution does everything the t-distribution does and more, we use the F-distribution here to be consistent among all the different things we can do.
1:36 The goal of t-test is to compare means (eg two groups or categories of data) and see if they are significantly different. 9:23 ANOVA. Compare three or more groups of data.
Can you do a Video on Tukey-Kramer HSD please. I’m a Chemist and we use that at work but I’m having a difficult time getting an intuitive understanding of it. Thank you for this channel!
Hi Josh,I just wanted to confirm that if we have a data with very high cardinality then we would use anova and if we have data with only two categories then we would t test right?
When you only have 2 categories, you use a t-test. When you have more than 2 categories, you use anova. However, as you can see, a t-test is just a special case of the anova.
Hi Josh, great video as always. Just wanted to ask, what happens to the residual in the equations earlier in the video that had “+ residual” in them? Thanks so much for your help, definitely learning alot
What time point, minutes and seconds, are you asking about? (However, I'm guessing that you are asking about the difference between the equation that perfectly fits the data, because it includes the means + the residuals, and the equation that generates the residuals (because it only includes the means). The equation that does not include the residuals is the one we use to make predictions with future data.
Hi Josh, at time stamp 6.48 when you write the equation y = mean of control + mean of mutant, where have the residuals gone. How will we get the value of y using this equation without residuals. As y = mx + c in linear regression helps get y values from given x and same concept is being applied here. So why are dropping the residuals.
We drop the residuals because it doesn't make any sense to include them in the predictions we make with this equation. The residuals only make sense when we are evaluating how well the model fits the data. But with predictions based on new data, we don't know the actual values, so we don't know the residuals.
Amazing video Josh!!! Could you also do a video of two-way ANOVA with block design and calculating the significance of the factors, their interaction, block and the residuals? It would be great!
Hi Josh, great video here. Would really appreciate if you have a statquest on the F-statistics/f-value and also on degree of freedom. Its kinda hard for me to grasp the concept of these two topics.
Thanks for the awesome video. I have a question about the p-value generated from the DE analysis by DESeq2. According to the description in DESeq2, the p-value seems calculated from "negative binomial GLM fitting for βi and Wald statistics". I wondered is this the same concept in the video? Is negative binomial regression also a kind of general linear model, and the variance of the negative binomial (μ+α μ^2) same with to the SS(Mean) and SS (fit)? Also, is Wald test the same with the t-test in the video, except that n is large in Wald test? Sorry for asking so many questions, I'm so confused.
GLM stands for two things "General Linear Models" and "Generalized Linear Models". Unfortunately, those two things are different - but when most people say "GLM", they most frequently mean "Generalized Linear Models". Generalized Linear Models are, in essence, a way to adapt the concept of a "design matrix" to a variety of problems and models. For example, in this video, we used design matrices to do t-tests and ANOVA. However, these same design matrices can be used with Logistic Regression (see those videos if you're interested) and they can also be used for DE analysis with DESeq2. However, the underlying math is different in all three cases. So the good news is that if you understand design matrices, you can do amazing things in a wide variety of contexts. The bad news is that SS(mean) and SS(fit) in these videos may or may not correspond to something in another system, like with DESeq2 or Logistic Regression. Logistic Regression, for example, doesn't use least squares at all, but instead relies on maximum likelihood to optimize the fit. Does this make sense?
@@statquest Thanks for the reply! I think I got your point. So the basic idea is to use the generalized linear model (GLM), which is more like a concept, to fit the data, and in the video the linear regression, which is more like a method, is used for the fitting. In programs like DESeq2, they use the negative binomial regression method to fit the RNA-Seq read counts, but the overall idea is still using GLM to describe how experimental factors (e.g. genotype and treatment) determine the expression of a gene (by a design matrix), and the p-value is kind of telling me how well the GLM fits ( or how convincing the result is).
@@statquest Hooray! Before watching your videos, I had a really hard time understanding the statistics behind the data analysis of RNA-seq, and I can't express how grateful I am to you & the videos.
Great job as usual, but this is still quite a confusing topic for me, will Pmean aways be one? Also is there a nice explanation for the formula of the F value? And how does F value relate to p value?
Hi Josh, great video as always. Just wanted to ask, How to do the post hoc tests in linear models just like post hoc tests in ANOVA to explore differences between two groups? Thank you.
Post-hoc tests with ANOVA are just a matter of defining your "design matrices", which I illustrate in the next video in this series: ruclips.net/video/CqLGvwi-5Pc/видео.html
@@statquest If there are three drugs: drug A, drug B, and drug C, we use drug A as the reference level. We then use dummy coding to compare B vs. A; C vs. A in the linear model. In the linear model, we can determine the difference of B vs. A; C vs. A by calculating the p value of the coefficient. However, it seems that we can not determine the difference of B vs C in the above linear model? Thank you for your reply.
Hi Josh, upon reviewing this, I'm wondering why you say you're using a t-test, but you actually calculate an F-statistic? In this case, isn't the two group case you show an F-test (i.e. a two group ANOVA) ?
t-test = two group ANOVA. In other words, a t-test is just a specific example of ANOVA, and an ANOVA is just a specific example of general linear models. In this case, the F-statistic is just the square of the t-statistic that we wold have gotten for a t-test and the p-values are the exact same. There are two ways to do a t-test, the way most people teach it and by using a general linear model, both give you the exact same results.
@@statquest Great, thanks for explaining the relationship between them, very helpful! But technically, because in this video you are comparing the ratios of variances and not the difference between means across groups, this is an f-test, not a t-test, right? Or does t-test not necessarily imply comparing the difference between means (though I've seen this in multiple other resources) ?
@@urdeathisnear885 In both the t-test and in ANOVA, we are testing to see if the difference between (or among) the means is statistically signifiant. The concepts are the exact same. The differences in the equations are just technical details. In other words, if someone asked me to give them directions from my house to the grocery store, I could give them multiple routes to get there - all of them, however, would qualify as "directions from my house to the grocery store".
@@statquest Sure, but in your analogy, there is likely one, optimal route to the grocery store, right? So to take the reverse approach and go from real-world to stats analogy, I guess a related question I have is: there are two types (F, T) of tests that yield two different statistics that share the same concepts, but surely there may be times where it's preferable to use one over the other, else why would there be two separate tests? If so, could you maybe give a simple example of when you'd prefer one over the other? Thanks, this feedback is really helpful!
@@urdeathisnear885 Ah, I have to be careful with my analogies. The F-test and Student's t-test yield different, but mathematically related, statistics. The F-distribution generalizes the t-distribution, just like the F-test generalizes Student's t-test, and it can be shown, mathematically, that a 2 sample ANOVA is equivalent to Student's t-test. So there is no difference and no reason to choose one over the other. That said, the Student's t-test was later modified (updated) by Welch to allow for unequal variances in the two groups. So there is a difference between Welch's t-test and a 2 sample ANOVA - and this is important. If you think you have different variances, then you need to use Welch's t-test (not Student's t-test or an F-test).
Thanks a lot and at 8:40, after obtaining F value, to obtain p value, is it the same as in the linear regression video? another sample of data (n=9) --> obtain SS(mean) & SS(fit) --> obtain F --> plug into F value histogram and repeat... --> obtain distribution and obtain F value of original data --> p value? Thanks again in advance :)
can you make a statquest about linear mixed models / random effects? I'm extremely confused about them, when to use them and how to interpret the results...
In terms of when we should use linear regression vs. t-tests vs. ANOVA for testing our data, is linear regression for when our independent variable is continuous while t-tests and ANOVA for when our independent variable is discrete (e.g. categorical variables)? Thank you!
Technically, it is all linear regression. However, they give it different names. t-tests are when you have two distinct groups and ANOVA is when you have more than 2 distinct groups.
Thank you for the amazing video, as always. If you have time to spare, I want to ask about 'how to test the model with the new data?' If I understand correctly, then we just need to calculate the new data with the following equation y = switch*mean_control + switch*mean_mutant edit: when i watch the video again, it seems like the purpose is to find wether the mean between the values is significant or not. Am i correct?
The purpose of the t-test is to determine if there is a significant difference between mice with the normal gene and mice with the mutant gene. However, we can also use the model to make predictions with new data. If my test tells me that there is a significant difference between normal and mutant mice, if you tell me you have a mutant mouse, I can tell you that the gene expression should be the mean of the mutant mice. If my test tells me that there is not a significant difference, then I will use the mean of all the mice, normal and mutant, as my prediction.
I'm not sure what your question is. A t-test is a way to compare to categories of things (like "normal diet" vs "special diet") when you measure something continuous (like weight).
Thanks a lot for your video, it's really helpful! but i have a question, why the equation of y can be written as y= mean (control) + mean (mutant), where are the residuals in each set of data?
StatQuest, is there a version of a T-Test, or an ANOVA Test, that allows me to compare the Standard Deviation, Skewness, of Kurtosis of two or more sample means to see if there is any statistical difference between the two? If not, is there any particular reason why? To me, it seems as if knowing if these statistical quantities were different from each other would also provide useful information or features for machine learning algorithms.
This is a great question. Unfortunately there are not many good or well known tests to compare standard deviations and other features (other than means). I'm not sure, but it could be that this is due to the lack of a central limit theorem like concept for standard deviations etc. (That's just a guess, so don't quote me on that).
Hi Josh, thank you for your great videos! Is it necessary to perform a post-hoc test to determine which of the groups performed better than the others (using multiple comparisons between groups with some adjustment for multiple tests, such as Bonferroni)?
It depends on the goals of the experiment. However, typically people will do post-hoc tests with a multiple testing correction - however, FDR is way better than Bonferroni, so use FDR if you can.
@@statquest Thanks for your reply. If we use multiple linear regression models to replace ANOVA, whether the t tests on the regression coefficients is like the post-hoc tests in ANOVA without multiple testing correction?
@@statquest Thank you very much for your explanations. On the premise that the t tests on the regression coefficients is like the post-hoc tests in ANOVA *without* multiple testing correction, I am wondering how to appropriately interpret the p value of t tests on the regression coefficients in multiple linear regression analysis? (To mitigate against multiple comparison problems). By the way, is there any learning resource about using post-hoc tests with FDR after multiple linear regression analysis?
@@Doctor_CCC I should clarify. The t-tests compare the model with and without individual variables. This is different from Post-hoc tests in ANOVA, where we test all possible pair-wise combinations. Testing all possible pair-wise combinations can quickly add up to a lot of tests, necessitating adjusting p-values. In contrast, when we just test the model with and without individual variables, we only do as many tests as we have variables - and usually this means we've only done a few extra tests, which, typically, does not necessitate adjusting the p-values. However, if you have a ton of parameters (variables), then you should adjust them with FDR. In R, this is super easy: stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html
Dear Josh, just to make sure that F value = ((ss(mean)-ss(fit))/(p(fit)-p(mean)))/(ss(fit)/(n-p(fit))) right? Not F value = (ss(mean)-(ss(fit)/(p(fit)-p(mean)))/(ss(fit)/(n-p(fit))) where the numerator is “ss(mean)-ss(fit)”over(p(fit)-p(mean)) instead of ss(mean) - “ss(fit)” over (p(fit)-p(mean)) right?
I am really sorry! I don't know what is going on. I've contacted RUclips and have not heard anything back. This is breaking my heart because I never wanted this to happen, but somehow it is. I am sorry and doing everything I can to fix this.
I am absolutely the least mathematically minded person you will ever meet. Can you do a StatQuest explaining basic statistics terminology so that a sixth-grader could grasp the concepts?
I've already got a bunch of videos that go through the basics. Start at the top of this list and work your way down: statquest.org/video-index/#statistics
Many thanks, great channel! I have a question please.. does t test approach here is what's called "one way ANOVA".. and f test for "factorial ANOVA" since there are more levels for the categorical variable?
Hi, So how to do a two-sample t test with bootstrapping for rna seq data? There are hardly any examples in the literature. Considered as an alternative method to EdgeR, but is it possible to get a bootstrapped t test for each gene in group comparison (like the model matrix in edgeR)? So how is the bootstrap t test used for gene expression analysis? (e.g. boot package in R). I 'dont understand how is identified differential expressed genes with botstrapping. Can you share information on the subject?
@@statquest Thank you very much I checked it. I understood hypothesis for mean between two groups, bu still I do not understand how it is used for genes. This is complicated I think. I wanted to see a table for t and p values of genes. Am I thinking wrong?
@@akyanus7042 Replace the responses people had to the drugs (feeling better or worse) with the read counts for a gene in different samples. For example, you might have 3 samples that took drug A and 3 samples that took drug b. For Gene "X", bootstrap the read counts for the genes and calculate p-values as described.
Hi Josh - I am struggling to understand what the p-value means in this scenario. What would be the hypothesis statement that the p-value enables us to accept / reject?
The null hypothesis is that there is no difference. Thus, the p-value tells us if having parameters (other than just the intercept) are useful for distinguishing between groups. If there is no difference, then we should fail to determine that the estimated parameters values are significantly different from 0.
For a classification problem "Gene Expression" would be the feature and ("Control", "Mutant") the classes of the target variable (Control = 0, Mutant = 1) right?
@@statquest Hey Josh! Yes, I was trying to fit a classifier which predicted if someone's income was greater than $50K (income = 1) or under or equal (income = 0) based on a lot of different features (age, education level, marital status, occupation, native country, etc). I tried training a support vector classifier based on radial basis functions but each fit was taking ages because the dataset was huge, so I looked up different methods for eliminating the less relevant features and came across a function in python's sklearn library called SelectKBest that computes the F-value of each feature and keeps the top k features. I didn't quite understand what the F-value meant so I checked statquest to see if there was a video about it and ended up here. At first I was strugling to understand the concept but I think I've finally wrapped my head around it. For a feature like age, i get a f-value of 2692.08 and a p-value below 1e-8. Since the f-value is large, it means that the age difference between observations with an income over 50K and under 50K is high and centered around each group mean. If, on the other hand, the f-value was small, it could mean that either the mean age between groups is very simmilar, or that its large but with a lot of variance within each group. Also, I understand now that I probably shouldn't use the F-score to weed out irelevant features of a rbf-based support vector classifier since its not linear. Keep up the good work Josh, your channel is amazing.
We are predicting the y-axis value, and that is why we are interested in the y-axis more than the x-axis (the stuff on the x-axis is only being used to predict y-axis values.)
Thanks for the video, but I have a question: in the design matrix you didn't take into account the residual and then when you calculated the P(fit) you also ignored it ! , I am having trouble understanding that, I thought it should be included as a paramater
The residual is the difference between our model's prediction and the actual value. Mathematically, it is "Observed value - model = residual", where model is the design matrix times the parameters. If we added the residual to our design matrix, we would get "Observed value - (model + residual) = Observed value - model - residual = (observed value - model) - residual = residual - residual = 0." And that wouldn't be very helpful.
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
Hi Josh,
Love your content. Has helped me to learn a lot & grow. You are doing an awesome work. Please continue to do so.
Wanted to support you but unfortunately your Paypal link seems to be dysfunctional. Please update it.
Rarely do I recommend a youtube channel for someone, but this channel is must-watch!
Thank you! :)
You're making the life of a student so much easier and happier ... Thankkkkk youuuuuu !!!
You're welcome!!! :)
I literally watch your videos as if I'm watching TV. I don't know how you've pulled this off but you are incredible
Wow, thank you!
Hi, I got my master in Epidemiology, trying to review statistics and found your channel, you are awesome, you really make statistics easy to understand, TRIPLE BAM for you
Thank you very much! :)
I remember learning about t-tests well before linear regression, but it's cool seeing things applied in a different way, especially while going into the deeper concepts. This whole playlist is a stats and machine-learning goldmine!
BAM! Yes, usually t-tests are taught before linear regression, but I like teaching them in this order (regression first) since the extension of a t-test into ANOVA is way more obvious.
@@statquestThat sounds like a good plan.
This is a really smooth transition from linear models to ANOVA, which is sadly not covered in many stats textbooks.
Thanks!
It actually takes me a while to realize the F-statistic shown in this video is the same as standard T-statistics. Great vid!
Thanks!!! I know, it's a little weird to look at a t-test from this perspective, but it shows how the F-statistic is a generalization of T-statistics. (Here's a cool hint - just like the F-statistic is a generalization of T-statistics, Chi-square statistics are a generalization of normal statistics....)
Never in my life has learning math been easier. Excellent work Josh!
Thank you very much!!! :)
Love the channel, one request to put some more statquests practising different stats model on different data sets! Thanks so much, binged the whole playlist
That's a great idea.
this is the clearest explanation of design matrices I've ever seen!! Thank you soooo much Joshua!
i am seriously failing my beginner stats course because try as i might the lectures are quite literally incomprehensible. i owe you my life!!! thank you for these amazing videos -- i feel like this is the first time ALL semester I am understanding something!
HOORAY! I'm glad the videos are helpful.
Your videos are very well-prepared and informative. Great teaching materials. You are so generous. Thanks a million.
Thank you very much! :)
Awesome, there's nothing that can't be understood when you explain it, thanks a millionnnn
Thank you very much! :)
Thank you very much for all your clear explanations.
It's a real pleasure to listen to you and learn more about Statistics !
You're welcome! I'm glad to hear you think the videos are helpful. :)
Wow, this is smart way to explain ANOVA test, it looks so complicated at first, now it looks straight forward after resembling with linear regression. Great video!!!
Hooray!!! I'm so glad you like this video - it's one of my all time favorites. :)
Hooray! :)
I hope you will have the time to answer just in few words please!
R sqr tell us how x is useful to predict y, so in the case of a t test or anova how to use it? we just talk about F & p, can we say it explains some % of the variance between treatments or it's useless!?
Thank you so much Mr. Starmer
This is a great question. The traditional way to teach and perform t-tests (and ANOVA) only results in 't' or 'F' statistics and a p-value - no R-squared. However, as you see in this video, it's easy to also report R-squared - you just have to want to do it. The case of t-tests and ANOVA are just like regression and R-squared tells you the same thing - it gives you an estimate on the magnitude of the difference. The p-value just tells you that it is significant. If you did a t-test and got a small p-value, but also a small R-squared, then you could easily deduce that there's not a huge difference between the two groups (even if is statistically different). In contrast, if you did a t-test and got a small p-value and a large R-squared, then you would know that there's a big difference between the two groups. So we can see that R-squared is useful for even the t-test.
I suspect that one reason presenting R-squared with t-test results is rare, is that often with t-tests, it is easy and very common to plot the data - so people will show you their data and give you the p-value. Seeing the data is sort of like a "visual R-squared" - you can see if the data are very close to each other or far apart.
THANK YOU SO MUCH.... YOU ARE VERY KIND SIR.
I summarize if you allow :
"significant p-value + R-squared" = how much is the différence
Really GREAT!
Thanks again & Good luck!
I think you might have just saved my life. This is so clearly explained, thank you!
Glad it helped!
You are blessed and STAY BLESSED. You significantly changed my life with STAT!!!
Thank you very much! :)
StatBlessed :D
The interesting thing about this video is that it taught me something that I haven't noticed I didn't know!
bam! :)
Thanks for the video. I'll have to watch this one a couple more times to fully digest it. It's the first time I've heard of a design matrix, so I'll have to spend some time looking into that.
Hablo muy poco inglés, pero tu metodología de enseñanza( es muy profesional) es magnífica. A pesar que es inglés, yo logro entender mejor que todas las clases de estadísticas en español. Haces un enorme esfuerzo para que tus clases sean intuitivas y fáciles de comprender para personas no expertas en estadísticas. Te felicito.
Muchas gracias!!!!
These videos are brilliant! I'm completing my PhD and there really isn't enough statistics support available which is as accessible as these videos (and considering we're meant to be doing research, that's not really good enough!) - thanks!
Thank you for your amazing videos Josh. You make us smarter!
Glad you like them!
I've really had trouble understanding what a t test *is* and this was super helpful.
Hooray!!!! :)
Really appreciate the refresher of the regression on the side of the t-test! REINFORCEMENT FOR THE WIN!
Yes! :)
Best statistics teacher on internet!!!!
Thank you very much!!!! :)
Love your videos.
I have 3 requests...
1. Degrees of freedom
2. Linear regression with regularisation
3. Log linear regression and why coefficient indicates % change
Thanks so much!
Thanks so much! The degrees of freedom StatQuest is high, high on the to-do list. It is never far from my mind. I have it about 1/2 done in my head, but the second half is tricky - some situations are easier to illustrate then others - but it's just a matter of setting aside time just for it and nothing else and it will get done.
The good news is that I'm maybe 1 or 2 months away from doing StatQuests on ridge, lasso and elastic-net regression - all examples of linear regression (or, more generally, generalized linear regression since these ideas can be applied to logistic regression) with regularization. So that's sure to happen soon (just as soon as I can!)
The last one, log-linear regression, is the logical follow up to logistic regression. I may do a "big picture/main ideas" StatQuest on that as soon as I can. It's on the list!
StatQuest with Josh Starmer thanks for your reply. Can't wait for the next videos
Hi Josh, Your vidoes are amazing, easy to follow and understand. Just wondering if you could upload video on GLMM, LMM models and when to use which model? This will help to clarify.
I hope to do that one day, however, it will probably be a while since I'm writing a book on neural networks right now.
Great video! I think you should put parentheses around your SS differences in the F-statistics to have correct equations; (SS(mean)-SS(fit))/(p_fit-p_mean). Divisions have generally a higher priority than differences, but you want to first subtract and then divide.
Great suggestion! I've added your correction to a pinned comment that will be easy for other people to find.
I just a video on Confidence Intervals back from 2015 and the song was pretty much the same, yet what a difference!
:)
I will just go on a liking spree on all of your videos
Hooray! :)
Bought the book. Nicely done and useful!
Awesome, thank you!
Hi Josh, It's time to bring linear mixed models. Thankkkk Youuuuu!!!
I'll keep that topic in mind.
I love your voice both while singing and explaining statistical concepts. Thank a ton for these videos. Do you mind if I can request you the videos on the following topics
1) 2 or more factor ANOVA (to be used as reducing the number of the independent variable)
2) Linear Multiple regression (to be used as reducing the number of the independent variable)
3) DOE and Taguchi
Glad you like the videos! I've added Taguchi, DOE and 2 or more factor ANOVA to my to-do list. I believe that my video on Multiple Regression in R may already satisfy your second request:
ruclips.net/video/hokALdIst8k/видео.html
StatQuest with Josh Starmer Thanks 😀
1) So are we basically comparing the variability of the each data point of that categorical feature around the sample mean to the variability of individual data points around the grouped mean? Or how can I explain in a simple sentence what these tests are and what we can infer?
2) This is a univariate analysis right?
3) In the figures of Gene Expression you've taken 4 data points as example. What those are? I mean to say, how can I interpret those? are those control and mutant categories encoded?
This was the only video, that dared to visualize what T-test & ANNOVA are
1) Yes, that's the main idea
2) The t-test is univariate. However, this series of videos also gives many multivariate examples.
3) Those 4 data points reflect how many mRNA transcripts are measured. If that doesn't mean anything to you, just imagine we counted something, like green apples, at 4 different grocery stores.
@@statquest In that sense those green apples are the dependent variable in our dataset and are we grouping them by 4 different grocery store?
@@rahuldey6369 yes
@@statquest Thank you so much for the clarification. Best wishes. Looking forward to learn more from you
Thank you for your clear explanations.
Bam! :)
Your videos are extremely helpful!
Can you go through things like kruskal-wallis test and why it is not sensitive to normal distribution?
If you can share some insights on chi-squared test etc, it would be really helpful too!
I'll keep those topics in mind.
Thanks @Josh i have a some questions:-
1] Suppose we have 5 independent variables and a label ,How does ANNOVA calculates p-value for each feature in this case?
2] Does it fits a regression for each indipendentVariable~Label separately and than calculates p-value?
I describe how p-values are calculated for individual features in these videos: ruclips.net/video/zITIFTsivN8/видео.html ruclips.net/video/hokALdIst8k/видео.html The concepts apply to ANOVA in the exact same way.
Hi, love your videos. Just a quick checkup to see if I'm still on track. In the previous videos, I thought that you mentioned 'Degree of freedom' as an equation of (n-Pfit)/(Pfit-Pmean), if so, in the ANOVA example, since Pfit = 5, Pmean = 1, does the 'degree of freedom' equals (n-5)/4? if not, I think I need a solid explaination on this matter.
Linear models have 2 different degrees of freedom - one for the numerator of the equation (n-pfit) and one for the denominator (pfit-pmean).
First of all, Thank you so much, Josh, for the time you spend sharing your knowledge about statistics. Students need more people like you...
I wanted to ask something likely silly, can you make an ANOVA with an unbalanced sample? What can I do if some categories have more data than others?
Thanks again, Josh!! I am looking forward to hearing from you!!!
ANOVA works fine with unbalanced samples. You just have more rows in your design matrix for one category than another.
Maaan! I found you, I found glm, finally! Thanks!
Hooray! :)
Hi Josh, great work on these videos, very helpful! One question: is it safe to say that ANOVA is just a generalized t-test for >2 groups?
Sure, I think that is a safe thing to say.
Hi Josh, quick Q. Isn't the test you explained here F-test? Isn't t-test use t-score=(slope beta-0)/standarderror , and then get p-value from t-table? or are they the same thing? little confused here. Thank you!
This is a great question. The t-test is just a specific type of F-test. If you have statistics software, you can compare the results and see that the p-values are the same (however, the F-statistic itself will be the square of the t-statistic. Why the square? Because, as you saw in the first video in this series, the F-statistic can never be negative, but the t-statistic can.) There are multiple ways to calculate a t-test, this using an F-test is my favorite because it is much more flexible. Does that make sense?
@@statquest I knew you would took it to the further level. So basically the two tests are both about model parameters hypothesis significance test, just use different methods, so p-value should refer the same thing. BAM! Thank you so much!
What else i can say about this clip? You're the best
Hooray!!! :)
This channel is a gift from the math gods.
Question: I'm having a hard time linking this to Design of Experiments methods. It seems like it should be an easy connection, but I somehow can't quite work it out in my head.
How would one use this to calculate the explained variation by individual terms of a linear model? 1 term == 1 "category"?
And how do degrees of freedom factor into it?
The next video in this series may help you understand how to design experiments: ruclips.net/video/CqLGvwi-5Pc/видео.html
Thanks so much for this video!!! Never heard anyone explain those concepts so well. Do you have any plan to make videos about multiple comparisons adjustment?
It would be a triple BAM if you could do a quick Stat Quest about residual diagnosis in linear models!
I'll keep that in mind.
OMG, now that's how ANOVA and linear regression is connected.
Will the F-statistic calculated from this method be equal to the t-statistic? I understand that you are trying to standardize the way to calculate the t-test by using methods from linear regression, but does it produce the same values that a regular t-test does?
According to this website [ onlinecourses.science.psu.edu/stat501/node/297/ ], the t-statistic and F-statistic produce equivalent p-values when the F-statistic's degrees of freedom in the numerator is 1. The relationship is t^2(n-p) = F(1,n-p), which apparently means the p-values for each will be identical. Don't know why that is but videos on the relationship between those two distributions may help. Anyway, I assume the relationship applies here in which the df = 1 for the F-statistic numerator when comparing two groups. As a side note, most slopes for p-values in multiple linear regression are calculated with t-tests. However, F-tests comparing the variance between models with and without the slope produce an identical p-value due to the above mentioned relationship. Thinking of slope significance in terms of how much more variance the model explains with vs without the slope seems much more intuitive to me, and I'm glad I found these videos.
Thank you so much Josh for all your amazing content and great silly songs.
I don't manage to wrap my head around the reason you say the fit equation is written out like: y = mean_control + mean_mutant at 6:48 and 9:05.
I would have written something like y = mean_control * x + mean_mutant (1-x), x taking 1 or 0.
Any explanation on that from you or someone else is appreciated.
Because my equation is being multiplied by the design matrix, it is essentially the exact same thing that you have.
@@statquest Bam!! Thank you for the explanation
Hi Professor Josh, Anova(F-test) is often used in Filter method for feature selection. Theory says, Anova should be used for feature selection when target is Binary but I saw in some practical use people also uses Anova when target is multi class. So Anova(F-test) can also be applied if our target is not binary and has multiple classes?
another question Anova assumes features to be normally distributed, But in practice most of the time we encounter data that are not fully normal in such case does it matter much to apply it? or Transformation is compulsion?
ANOVA is really only intended to be used when the dependent variable is continuous.
Thank you so much for your videos.
Thanks!
hi Joshua, thanks for sharing. These videos are step-by-step processing and makes so much sense to me than the hedious textbooks. I was wondering if you can make a videos on repeated measures ANOVA biting into small pieces, thanks in advance.
Thank you, I had a few questions. At 6:37, is there a reason we did not include the residuals in the overall equation of y? Also, why do we need the y equation at 6:13 to create a design matrix? Is it just not just a matrix where the number of ones corresponds to the number of data points for control and zero for mutant and vice versa for the next data point number of entries? Also, does the sample size have to be the same per category to create a design matrix? Great Tutorial!
1) This equation simply represents what goes into to the design matrix. The residual is the difference between this equation and what is observed.
2) The equation just illustrates how we create the design matrix and what it represents.
3) You don't need to have equal numbers of samples for each category (they can be different).
At 4:20 of the video, you mention the reason we combine the two lines of best fit into a single equation is to make the steps for computing "F" identical for regression and the t-test meaning a computer can do it automatically. In terms of what this actually looks like, I think this means having a single equation means one value for SS(fit) (instead of 2) which means we can use the "F" equation for regression. Is my reasoning correct? Also, why does a single equation mean a computer can do it automatically? Why could a computer not do it automatically if we had 2 equations? Thanks I love your videos!
Sure, a modern computer can handle more than one equation. But back in the day memory was limited and that limited the number of tests a computer could perform. So the the original idea was to unify as much of linear models into a single framework called "General Linear Models", with the idea that one equation could be used in a general setting on a computer without having to check a bunch of different conditions. In the early days, different conditions meant different look-up tables for figuring out the p-values and since computers had very little memory, this limited what they could do.
I really like all of your videos :)
Could you please put ads in the beginning and end and less in the middle? If there is an ad in between every 5 minutes, it's very distracting and I need so much time to get back to the topic and to my concentration.
I'm sorry about the ads. Unfortunately RUclips sticks those in the middle automatically and it's not something that I can control.
please do a MANOVA video !! this was so useful, Im doing a 2x2x3 MANOVA for my research project and would really appreciate a video :)
I'll keep that in mind.
We can fit a vertical line passing through all the points of control data which will give the Least sum of squared residuals, @3:09, right? If that's the case then why did we fit horizontal line?
Thanks in advance
P.S.: The channel is awesome. Recommended it to many.
Sure, a vertical line would minimize the squared residuals, but you can't use it to make predictions. What Gene Expression value would you predict with a vertical line? All of them, and that makes vertical lines useless.
@@statquest sorry, wouldn't that effectively mean that for t-test we're not really looking for a best fit? Calling a something a fit then it's really not makes things confusing, in all the examples you show so-called "fit" is represented as a "mean" so wouldn't "just find equation for mean line" a better rule of thumb rather than talking about "lest squares"? Head melting right now
@@teammdyss Maybe a better way to say it is "best fit given some restrictions", and those restrictions are 1) the number of parameters we want to use and 2) we want a model that is useful for making predictions.
Such a great video, Josh. Really enjoyed your videos.
Can you please recommend a text book which reflects your way of teaching? Are there any such which I'll be hooked at reading (just like your videos)? Thanks
I'm writing my own book right now. I hope it is out next year.
@@statquest Woah! Looking forward to read that.
Hey Josh, Could you please answer this?
If i calculate p-value using this method and also using student's t-test. Will it be the same? If yes, why? If not, why?
It will be the same. The F-distribution is just the square of the distribution. For more details: coursekata.org/preview/book/fd645e20-5a0d-482e-ad16-ee689acb7431/lesson/15/6#:~:text=The%20F%2DDistribution%20and%20T%2DDistribution%20are%20Actually%20the%20Same&text=The%20reason%20is%20that%20fundamentally,get%20exactly%20an%20F%2Ddistribution!
great goldmine i found !!!. btw one concern : don't u think in t-test it is somewhat showing the dependence among the independent variables?? when we consider ss(fit)
I'm not sure I understand your question. Can you rephrase it?
@@statquest sure thanks for replying .. in timestamp 6:00 we see we are taking residuals refrencing both the independent variables right ?? they there is reference between them does that mean they are dependent . Please let me know if im clear
@@mrcharm767 Again, I'm not really sure what you are asking. The independent variables are independent. We are using the residuals to determine if a model with 2 independent variables results statistically significantly smaller residuals than a model with just one independent variable.
@@statquest in short if i ask does the calculation depend on the distance among these 2 independent variables?
@@mrcharm767 Regardless of the distances between the two variables, we calculate the squared residuals using the same formula: (observed - predicted)^2
Thank you! Really great and helpful videos!
Glad you like them!
Thank you for making these amazing videos! Questions: Are we calculating R squared for the t-test and ANOVA as well? Additionally, if the p-value is small, does that mean both 1) the fitted lines are statistically significant and 2) Two categories' means are significantly different from each other?
Yes, you are calculating R^2 for the t-test and ANOVA. If the p-value is small, then using two means and fitted lines result a significant reduction in residuals compared to using a single mean and a single fitted line. Thus, the two categories' means are significantly different from each other.
@@statquest Do you mean compare R squared between 1) Control mice and 2) Control mice and mutant mice?
@@Denise_lili No, we are comparing using one mean that is calculated from all of the data - control and murant - to using two means, one for control and another one for mutant.
@@statquest I also have a silly question, since we are calculating F-statistics why do we call it a t-test?
@@MR-yi9us If you think of the t-distribution like a knife, then the F-distribution is a Swiss army knife. It does what the t-distribution does, but much more. That being said, when the t-test was first created, they were only thinking about it, and not the broader class of problems (like ANOVA), and so the t-distribution was enough to get the job done. Thus, it's called a t-test. However, because the F distribution does everything the t-distribution does and more, we use the F-distribution here to be consistent among all the different things we can do.
1:36 The goal of t-test is to compare means (eg two groups or categories of data) and see if they are significantly different.
9:23 ANOVA. Compare three or more groups of data.
bam!
I keeep coming here to hear the Baaaaam !! :)
Hooray! :)
Can you do a Video on Tukey-Kramer HSD please. I’m a Chemist and we use that at work but I’m having a difficult time getting an intuitive understanding of it. Thank you for this channel!
I'll keep that in mind.
Hi Josh,I just wanted to confirm that if we have a data with very high cardinality then we would use anova and if we have data with only two categories then we would t test right?
When you only have 2 categories, you use a t-test. When you have more than 2 categories, you use anova. However, as you can see, a t-test is just a special case of the anova.
@@statquest thanks for answering
Hi Josh, great video as always. Just wanted to ask, what happens to the residual in the equations earlier in the video that had “+ residual” in them? Thanks so much for your help, definitely learning alot
What time point, minutes and seconds, are you asking about? (However, I'm guessing that you are asking about the difference between the equation that perfectly fits the data, because it includes the means + the residuals, and the equation that generates the residuals (because it only includes the means). The equation that does not include the residuals is the one we use to make predictions with future data.
@@statquest Thanks Josh, that is the point I was asking about, I will review the video again once more
Hi Josh, at time stamp 6.48 when you write the equation y = mean of control + mean of mutant, where have the residuals gone. How will we get the value of y using this equation without residuals. As y = mx + c in linear regression helps get y values from given x and same concept is being applied here. So why are dropping the residuals.
We drop the residuals because it doesn't make any sense to include them in the predictions we make with this equation. The residuals only make sense when we are evaluating how well the model fits the data. But with predictions based on new data, we don't know the actual values, so we don't know the residuals.
Amazing video Josh!!!
Could you also do a video of two-way ANOVA with block design and calculating the significance of the factors, their interaction, block and the residuals?
It would be great!
I'll keep it in mind.
@@statquest that will be awesome. Triple BAM!!!
Hi Josh,Should it be like F= (SS(mean)-SS(fit))/(p_fit-p_mean) on the top of the formula?(one more bracket)
yep
Hi Josh, great video here. Would really appreciate if you have a statquest on the F-statistics/f-value and also on degree of freedom. Its kinda hard for me to grasp the concept of these two topics.
The first video in this series explains F-statistics and f-values: ruclips.net/video/nk2CQITm_eo/видео.html
Thanks for the awesome video. I have a question about the p-value generated from the DE analysis by DESeq2. According to the description in DESeq2, the p-value seems calculated from "negative binomial GLM fitting for βi and Wald statistics". I wondered is this the same concept in the video? Is negative binomial regression also a kind of general linear model, and the variance of the negative binomial (μ+α μ^2) same with to the SS(Mean) and SS (fit)? Also, is Wald test the same with the t-test in the video, except that n is large in Wald test? Sorry for asking so many questions, I'm so confused.
GLM stands for two things "General Linear Models" and "Generalized Linear Models". Unfortunately, those two things are different - but when most people say "GLM", they most frequently mean "Generalized Linear Models". Generalized Linear Models are, in essence, a way to adapt the concept of a "design matrix" to a variety of problems and models. For example, in this video, we used design matrices to do t-tests and ANOVA. However, these same design matrices can be used with Logistic Regression (see those videos if you're interested) and they can also be used for DE analysis with DESeq2. However, the underlying math is different in all three cases. So the good news is that if you understand design matrices, you can do amazing things in a wide variety of contexts. The bad news is that SS(mean) and SS(fit) in these videos may or may not correspond to something in another system, like with DESeq2 or Logistic Regression. Logistic Regression, for example, doesn't use least squares at all, but instead relies on maximum likelihood to optimize the fit. Does this make sense?
@@statquest Thanks for the reply! I think I got your point. So the basic idea is to use the generalized linear model (GLM), which is more like a concept, to fit the data, and in the video the linear regression, which is more like a method, is used for the fitting. In programs like DESeq2, they use the negative binomial regression method to fit the RNA-Seq read counts, but the overall idea is still using GLM to describe how experimental factors (e.g. genotype and treatment) determine the expression of a gene (by a design matrix), and the p-value is kind of telling me how well the GLM fits ( or how convincing the result is).
@@drzun You've got it!
@@statquest Hooray! Before watching your videos, I had a really hard time understanding the statistics behind the data analysis of RNA-seq, and I can't express how grateful I am to you & the videos.
@@drzun Hooray!!! That's great. I'm glad my videos were so helpful! :)
Could you please make a video about "Granger Causality" for time series? That would be a tripple bam!!
I'll keep that in mind.
@@statquest Thank you so much. I really love and highly appreciate your content!! Helped me a lot!
Great job as usual, but this is still quite a confusing topic for me, will Pmean aways be one? Also is there a nice explanation for the formula of the F value? And how does F value relate to p value?
Did you watch part 1 in this series? If not, it should answer all of your questions: ruclips.net/video/nk2CQITm_eo/видео.html
Hi Josh, great video as always. Just wanted to ask, How to do the post hoc tests in linear models just like post hoc tests in ANOVA to explore differences between two groups? Thank you.
Post-hoc tests with ANOVA are just a matter of defining your "design matrices", which I illustrate in the next video in this series: ruclips.net/video/CqLGvwi-5Pc/видео.html
@@statquest If there are three drugs: drug A, drug B, and drug C, we use drug A as the reference level. We then use dummy coding to compare B vs. A; C vs. A in the linear model. In the linear model, we can determine the difference of B vs. A; C vs. A by calculating the p value of the coefficient. However, it seems that we can not determine the difference of B vs C in the above linear model? Thank you for your reply.
Hi Josh, upon reviewing this, I'm wondering why you say you're using a t-test, but you actually calculate an F-statistic? In this case, isn't the two group case you show an F-test (i.e. a two group ANOVA) ?
t-test = two group ANOVA. In other words, a t-test is just a specific example of ANOVA, and an ANOVA is just a specific example of general linear models. In this case, the F-statistic is just the square of the t-statistic that we wold have gotten for a t-test and the p-values are the exact same. There are two ways to do a t-test, the way most people teach it and by using a general linear model, both give you the exact same results.
@@statquest Great, thanks for explaining the relationship between them, very helpful! But technically, because in this video you are comparing the ratios of variances and not the difference between means across groups, this is an f-test, not a t-test, right? Or does t-test not necessarily imply comparing the difference between means (though I've seen this in multiple other resources) ?
@@urdeathisnear885 In both the t-test and in ANOVA, we are testing to see if the difference between (or among) the means is statistically signifiant. The concepts are the exact same. The differences in the equations are just technical details. In other words, if someone asked me to give them directions from my house to the grocery store, I could give them multiple routes to get there - all of them, however, would qualify as "directions from my house to the grocery store".
@@statquest Sure, but in your analogy, there is likely one, optimal route to the grocery store, right? So to take the reverse approach and go from real-world to stats analogy, I guess a related question I have is: there are two types (F, T) of tests that yield two different statistics that share the same concepts, but surely there may be times where it's preferable to use one over the other, else why would there be two separate tests? If so, could you maybe give a simple example of when you'd prefer one over the other? Thanks, this feedback is really helpful!
@@urdeathisnear885 Ah, I have to be careful with my analogies. The F-test and Student's t-test yield different, but mathematically related, statistics. The F-distribution generalizes the t-distribution, just like the F-test generalizes Student's t-test, and it can be shown, mathematically, that a 2 sample ANOVA is equivalent to Student's t-test. So there is no difference and no reason to choose one over the other.
That said, the Student's t-test was later modified (updated) by Welch to allow for unequal variances in the two groups. So there is a difference between Welch's t-test and a 2 sample ANOVA - and this is important. If you think you have different variances, then you need to use Welch's t-test (not Student's t-test or an F-test).
Thanks a lot and at 8:40, after obtaining F value, to obtain p value, is it the same as in the linear regression video?
another sample of data (n=9) --> obtain SS(mean) & SS(fit) --> obtain F --> plug into F value histogram and repeat... --> obtain distribution and obtain F value of original data --> p value?
Thanks again in advance :)
The histogram that I used in the linear regression was intended to illustrate what an F-distribution represents, and it is the same here as well.
can you make a statquest about linear mixed models / random effects? I'm extremely confused about them, when to use them and how to interpret the results...
I'll keep that in mind.
In terms of when we should use linear regression vs. t-tests vs. ANOVA for testing our data, is linear regression for when our independent variable is continuous while t-tests and ANOVA for when our independent variable is discrete (e.g. categorical variables)? Thank you!
Technically, it is all linear regression. However, they give it different names. t-tests are when you have two distinct groups and ANOVA is when you have more than 2 distinct groups.
Thank you for the amazing video, as always.
If you have time to spare, I want to ask about 'how to test the model with the new data?'
If I understand correctly, then we just need to calculate the new data with the following equation y = switch*mean_control + switch*mean_mutant
edit: when i watch the video again, it seems like the purpose is to find wether the mean between the values is significant or not. Am i correct?
The purpose of the t-test is to determine if there is a significant difference between mice with the normal gene and mice with the mutant gene. However, we can also use the model to make predictions with new data. If my test tells me that there is a significant difference between normal and mutant mice, if you tell me you have a mutant mouse, I can tell you that the gene expression should be the mean of the mutant mice. If my test tells me that there is not a significant difference, then I will use the mean of all the mice, normal and mutant, as my prediction.
@@statquest I see, now I undertand better, thank you Mr. Josh.
These videos are great! Thanks!
Hi Josh, Thanks for the video. :) What about adding residuals to the equations at 6:27 and 6:57 ? Isn't it necessary ?
The residuals squared and added when we solve for the optimal parameters. For details, see: ruclips.net/video/nk2CQITm_eo/видео.html
@@statquest thank you josh :)
hi thank you for the vedio ,Is the t-test is the machine learning regression with discrete inputs ?
I'm not sure what your question is. A t-test is a way to compare to categories of things (like "normal diet" vs "special diet") when you measure something continuous (like weight).
Dude i have a makeup exam in five days, wish me luck ^^
Good luck!!
Thanks a lot for your video, it's really helpful! but i have a question, why the equation of y can be written as y= mean (control) + mean (mutant), where are the residuals in each set of data?
I'm not sure I understand your question. The residual for each measurement is paired with that measurement, so it is easy to keep track of.
StatQuest, is there a version of a T-Test, or an ANOVA Test, that allows me to compare the Standard Deviation, Skewness, of Kurtosis of two or more sample means to see if there is any statistical difference between the two? If not, is there any particular reason why? To me, it seems as if knowing if these statistical quantities were different from each other would also provide useful information or features for machine learning algorithms.
This is a great question. Unfortunately there are not many good or well known tests to compare standard deviations and other features (other than means). I'm not sure, but it could be that this is due to the lack of a central limit theorem like concept for standard deviations etc. (That's just a guess, so don't quote me on that).
Hi Josh, thank you for your great videos! Is it necessary to perform a post-hoc test to determine which of the groups performed better than the others (using multiple comparisons between groups with some adjustment for multiple tests, such as Bonferroni)?
It depends on the goals of the experiment. However, typically people will do post-hoc tests with a multiple testing correction - however, FDR is way better than Bonferroni, so use FDR if you can.
@@statquest Thanks for your reply. If we use multiple linear regression models to replace ANOVA, whether the t tests on the regression coefficients is like the post-hoc tests in ANOVA without multiple testing correction?
@@Doctor_CCC Pretty much
@@statquest Thank you very much for your explanations. On the premise that the t tests on the regression coefficients is like the post-hoc tests in ANOVA *without* multiple testing correction, I am wondering how to appropriately interpret the p value of t tests on the regression coefficients in multiple linear regression analysis? (To mitigate against multiple comparison problems). By the way, is there any learning resource about using post-hoc tests with FDR after multiple linear regression analysis?
@@Doctor_CCC I should clarify. The t-tests compare the model with and without individual variables. This is different from Post-hoc tests in ANOVA, where we test all possible pair-wise combinations. Testing all possible pair-wise combinations can quickly add up to a lot of tests, necessitating adjusting p-values. In contrast, when we just test the model with and without individual variables, we only do as many tests as we have variables - and usually this means we've only done a few extra tests, which, typically, does not necessitate adjusting the p-values. However, if you have a ton of parameters (variables), then you should adjust them with FDR. In R, this is super easy: stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html
Dear Josh, just to make sure that F value = ((ss(mean)-ss(fit))/(p(fit)-p(mean)))/(ss(fit)/(n-p(fit))) right? Not F value = (ss(mean)-(ss(fit)/(p(fit)-p(mean)))/(ss(fit)/(n-p(fit))) where the numerator is “ss(mean)-ss(fit)”over(p(fit)-p(mean)) instead of ss(mean) - “ss(fit)” over (p(fit)-p(mean)) right?
That's correct - I was a little sloppy with the parentheses when I made these videos.
StatQuest with Josh Starmer noooo, this is very helpful, really appreciate that. I have to make sure just because I am not that familiar with this.
thanks for making it unavailable without payment, people like you are making money off of such an open source platform!
I am really sorry! I don't know what is going on. I've contacted RUclips and have not heard anything back. This is breaking my heart because I never wanted this to happen, but somehow it is. I am sorry and doing everything I can to fix this.
I am absolutely the least mathematically minded person you will ever meet. Can you do a StatQuest explaining basic statistics terminology so that a sixth-grader could grasp the concepts?
I've already got a bunch of videos that go through the basics. Start at the top of this list and work your way down: statquest.org/video-index/#statistics
Many thanks, great channel!
I have a question please.. does t test approach here is what's called "one way ANOVA".. and f test for "factorial ANOVA" since there are more levels for the categorical variable?
Bro, thank you so much man......
Thanks! :)
Hi, So how to do a two-sample t test with bootstrapping for rna seq data? There are hardly any examples in the literature. Considered as an alternative method to EdgeR, but is it possible to get a bootstrapped t test for each gene in group comparison (like the model matrix in edgeR)? So how is the bootstrap t test used for gene expression analysis? (e.g. boot package in R). I 'dont understand how is identified differential expressed genes with botstrapping. Can you share information on the subject?
I have a video that shows how bootstrapping can be used for a t-test here: ruclips.net/video/isEcgoCmlO0/видео.html
@@statquest Thank you very much I checked it. I understood hypothesis for mean between two groups, bu still I do not understand how it is used for genes. This is complicated I think. I wanted to see a table for t and p values of genes. Am I thinking wrong?
@@akyanus7042 Replace the responses people had to the drugs (feeling better or worse) with the read counts for a gene in different samples. For example, you might have 3 samples that took drug A and 3 samples that took drug b. For Gene "X", bootstrap the read counts for the genes and calculate p-values as described.
@@statquestThank you.
Hi Josh - I am struggling to understand what the p-value means in this scenario. What would be the hypothesis statement that the p-value enables us to accept / reject?
The null hypothesis is that there is no difference. Thus, the p-value tells us if having parameters (other than just the intercept) are useful for distinguishing between groups. If there is no difference, then we should fail to determine that the estimated parameters values are significantly different from 0.
if you have a video on time series, please share or make one
I'll keep that in mind!
For a classification problem "Gene Expression" would be the feature and ("Control", "Mutant") the classes of the target variable (Control = 0, Mutant = 1) right?
For a classification problem, you want to use logistic regression: ruclips.net/video/yIYKR4sgzI8/видео.html
@@statquest Hi Josh! Thanks for answering! yes, I meant for selecting the k best features of a dataset based on the F-statistic
@@franciscopala8655 I don't fully understand your question. Can provide more details?
@@statquest Hey Josh! Yes, I was trying to fit a classifier which predicted if someone's income was greater than $50K (income = 1) or under or equal (income = 0) based on a lot of different features (age, education level, marital status, occupation, native country, etc). I tried training a support vector classifier based on radial basis functions but each fit was taking ages because the dataset was huge, so I looked up different methods for eliminating the less relevant features and came across a function in python's sklearn library called SelectKBest that computes the F-value of each feature and keeps the top k features.
I didn't quite understand what the F-value meant so I checked statquest to see if there was a video about it and ended up here. At first I was strugling to understand the concept but I think I've finally wrapped my head around it. For a feature like age, i get a f-value of 2692.08 and a p-value below 1e-8. Since the f-value is large, it means that the age difference between observations with an income over 50K and under 50K is high and centered around each group mean. If, on the other hand, the f-value was small, it could mean that either the mean age between groups is very simmilar, or that its large but with a lot of variance within each group. Also, I understand now that I probably shouldn't use the F-score to weed out irelevant features of a rbf-based support vector classifier since its not linear.
Keep up the good work Josh, your channel is amazing.
@@franciscopala8655 Awesome! Sounds like you have it all figured out. :)
How do you calculate the residuals for the equation + design matrix? Wouldn't that involve subtracting a matrix from a scalar?
The design matrix is just a general way to specify how each measurement fits into the equation.
Is the overall mean always on the y axis because it is the outcome of interest? are we never interested in means on the x-axis?
We are predicting the y-axis value, and that is why we are interested in the y-axis more than the x-axis (the stuff on the x-axis is only being used to predict y-axis values.)
Thanks for the video, but I have a question: in the design matrix you didn't take into account the residual and then when you calculated the P(fit) you also ignored it ! , I am having trouble understanding that, I thought it should be included as a paramater
The residual is the difference between our model's prediction and the actual value. Mathematically, it is "Observed value - model = residual", where model is the design matrix times the parameters. If we added the residual to our design matrix, we would get "Observed value - (model + residual) = Observed value - model - residual = (observed value - model) - residual = residual - residual = 0." And that wouldn't be very helpful.
@@statquest thank you for the clarification!