Amazing explanation
How is the p value calculated?
so how is it different from effect size?
I remember that the p-value comes from the F-statistic, but what is the F-statistic?
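Since a few people asked: the F-statistic is the ratio of between-group mean square to within-group mean square, and the p-value is the right-tail probability of that ratio under the F distribution with (k − 1, n − k) degrees of freedom. A minimal pure-Python sketch with made-up numbers (not from the video):

```python
def f_statistic(groups):
    # One-way ANOVA: F = (between-group mean square) / (within-group mean square)
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)              # df_between = k - 1
    ms_within = ss_within / (n - k)                # df_within = n - k
    return ms_between / ms_within

# Toy data: third group's mean is far from the others, so F is large
print(f_statistic([[1, 2, 3], [2, 3, 4], [6, 7, 8]]))  # 21.0
```

The p-value is then the area under the F(2, 6) density to the right of 21.0 (in practice you'd look it up in a table or call a library routine rather than compute it by hand).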
but why? Why would we need to test the equality of variances? What's the point?
Many machine learning algorithms only work under the assumption that different groups share the same variance. It is also one of the assumptions of the t-test and of ANOVA (which extends the t-test to more than two groups), which are most likely what you will use in basic statistics.
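To see why the equal-variance assumption matters for the t-test: the pooled ("Student") statistic and Welch's unequal-variance statistic disagree when group variances and sizes differ. A toy pure-Python sketch (invented numbers, purely illustrative):

```python
import math

def t_statistics(a, b):
    """Pooled (Student) vs. Welch t-statistics for two samples."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    # Pooled variance assumes both groups share one population variance
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t_pooled = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    # Welch's version drops the equal-variance assumption
    t_welch = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
    return t_pooled, t_welch

# Unequal variances AND unequal sizes: the two statistics diverge
print(t_statistics([1, 2, 3], [10, 20, 30, 40, 50, 60]))
```

With equal sample sizes the two statistics coincide (only the degrees of freedom differ), which is why the assumption bites hardest with unbalanced groups.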
@@chasecolin22 No, I'll just do the Mann-Whitney-Wilcoxon test and avoid that normality BS altogether
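For what it's worth, the U statistic behind that test is simple to compute by hand, and because it only uses rank comparisons it needs no normality assumption. A toy pure-Python sketch (illustrative data only):

```python
def mann_whitney_u(a, b):
    # U = number of (x, y) pairs with x > y, counting ties as 1/2.
    # Rank-based, so the shape of the underlying distribution doesn't matter.
    return sum(1.0 * (x > y) + 0.5 * (x == y) for x in a for y in b)

u1 = mann_whitney_u([1, 2, 3], [4, 5, 6])  # no pair from a beats b: U = 0
u2 = mann_whitney_u([4, 5, 6], [1, 2, 3])  # every pair wins: U = 9
print(u1, u2)
```

Note the sanity check U1 + U2 = n1 · n2; the p-value then comes from the sampling distribution of U under the null, which a statistics library would supply.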
@@SuperMixedd Yeah, that is a great option for comparing a null against an alternative hypothesis. Like I mentioned, though, there is more to statistics than null hypothesis testing, such as Linear Discriminant Analysis or Principal Component Analysis. These methods don't always require the sample variances to be equal, but a lot of the calculations and algorithms behind them are a hell of a lot easier to perform and optimize when the variances are equal. One example is the Fisher criterion for LDA, which assumes the within-class scatter matrix of each class is equal. That assumption drastically simplifies the non-convex optimization problem posed in LDA.
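To make the Fisher criterion concrete, here is a toy numpy sketch (made-up 2-D data, not from any real problem): the discriminant direction is w ∝ S_w⁻¹(m1 − m2), where S_w is the pooled within-class scatter matrix, and the Fisher criterion J(w) measures between-class separation relative to within-class spread.

```python
import numpy as np

# Two small, invented 2-D classes
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])
X2 = np.array([[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter: sum of centered outer products over both classes
S_w = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
# Fisher's direction maximizes the between/within ratio: w ∝ S_w^{-1}(m1 - m2)
w = np.linalg.solve(S_w, m1 - m2)

# Fisher criterion J(w) = (w·(m1 - m2))^2 / (w^T S_w w)
J = (w @ (m1 - m2)) ** 2 / (w @ S_w @ w)
print(J)
```

Projecting both classes onto w separates them completely in this example; with equal within-class scatter the optimum has this closed form instead of requiring a non-convex search.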
Hi, your software is very cool. Maybe you should put it in a GitHub repository so we can all see it, run it, and modify it. Someone might collaborate with you there, on a public repo, and could add new features like t-test simulations or other things. It's just a suggestion. Great job!
We have a site (learncheme.com) with a lot of material for both students and professors to use. This includes lists of our videos, more simulations (including one on the t-test), some tutorials on various software packages, and course packages. But thank you for the suggestion!
If two different populations (with different population means) have the same variance, aren't you likely to conclude from a sample of each that they come from the same population, when in fact they come from different populations? Does ANOVA not account for the means? Wouldn't we incorrectly conclude that the two samples came from the same population even though the two populations have different means?
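ANOVA does account for the means: its F-statistic compares between-group variability (driven by differences in means) to within-group variability (driven by the common variance), so equal variances with different means still produce a large F and a rejection. A toy pure-Python sketch with invented numbers:

```python
# Two toy samples with equal spread but clearly different centers
a = [4.0, 5.0, 6.0]     # mean 5
b = [14.0, 15.0, 16.0]  # mean 15

grand = (sum(a) + sum(b)) / 6                              # grand mean = 10
ss_between = 3 * (5 - grand) ** 2 + 3 * (15 - grand) ** 2  # 150
ss_within = sum((x - 5) ** 2 for x in a) + sum((x - 15) ** 2 for x in b)  # 4
f = (ss_between / 1) / (ss_within / 4)  # df: k - 1 = 1, n - k = 4
print(f)  # 150.0
```

The equal-variance assumption is about the *within-group* spread being shared, not about the means; identical variances with different means is exactly the case ANOVA is built to detect.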
I could be totally talking out of my bum here, but I think this is where you have to employ non-statistical thinking. Could a scenario exist where this could conceivably happen (e.g. a Type I error despite 99% confidence)? Yes - but how likely is it? I would imagine it is very unlikely in a natural example, but more likely when comparing certain man-made things. From what little I know about the origins of ANOVA (in astronomy and geodesy), I would assume it wasn't designed for the scenario you have in mind. For two populations to be entirely identical and yet not share a history would require divine intervention.
Hopefully someone else who knows more can chime in. That's just my two cents.
Good explanation.
This is super helpful.
It is reciting, not explaining!