The best explanation of statistical distances that I have found. Easy and nice explanation of Kolmogorov-Smirnov, Wasserstein distance, and KL-divergence.
Love it when a talk presents the material in a carefully developed, logical manner. Thanks!
Great explanation! Easy to digest.
Well done! Nice explanation!
You're the best, little bro! I didn't understand a thing, but it's still classy!
17:13 there should be a negative in the definition of KL
I think whether you need the negative depends on whether you are minimizing or maximizing it. By definition, distances are always non-negative.
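On the sign question above: if KL is written as a sum of p * log(p / q), no leading negative is needed, and the result is non-negative by Gibbs' inequality. A minimal pure-Python sketch (the function name is mine, not from the talk):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions.

    Written with log(p/q), no leading negative is needed; the result is
    non-negative (Gibbs' inequality) and zero only when P == Q.
    Terms with p_i == 0 contribute 0 by convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

If one instead writes the sum with log(q/p), a leading minus sign is required to get the same non-negative quantity, which is likely the discrepancy being discussed.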
This is a great presentation! Is there a reason why you did not commit the nth Wasserstein distance to SciPy?
docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html
As of this writing, the latest stable version of SciPy on pip is 1.3.1; `wasserstein_distance` has reportedly been available since 1.0.0.
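For equal-size, unweighted 1-D samples, the 1-Wasserstein distance reduces to the mean absolute difference of the sorted values, which is a quick way to sanity-check `scipy.stats.wasserstein_distance` in that special case. A minimal sketch, assuming equal sample sizes (the function name is mine):

```python
def wasserstein_1d(u, v):
    """1-Wasserstein distance between two equal-size 1-D samples.

    Sorting both samples gives the optimal pairing in 1-D, so the
    distance is simply the mean absolute difference of sorted values.
    scipy.stats.wasserstein_distance handles the general weighted case.
    """
    assert len(u) == len(v), "this sketch assumes equal sample sizes"
    return sum(abs(x - y) for x, y in zip(sorted(u), sorted(v))) / len(u)
```

For example, `wasserstein_1d([0, 1, 3], [5, 6, 8])` gives 5.0, since each sorted point must be moved a distance of 5.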
6:15 we should either reject or fail to reject H0, I believe, instead of "accept H0".
AKA accept. I think it depends on where you learned statistics. My professors always said accept and reject.
Semantically, "accepting H0" and "failing to reject H0" sound the same. But they are not!
The p-value is the probability of observing data like ours assuming the null hypothesis (such as no difference between two groups) is true. So it is a measure of evidence against the null, not in favour of the null. This is why a dedicated family of tests of no difference, or similarity, exists: equivalence tests.
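To make the p-value definition in this thread concrete: a permutation test computes exactly "the probability of a difference at least as extreme as ours, assuming the null is true". A minimal pure-Python sketch (the function name and defaults are mine, not from the talk):

```python
import random

def perm_test_pvalue(a, b, n_perm=999, seed=0):
    """Two-sided permutation test for a difference in sample means.

    Under the null hypothesis that both samples come from the same
    distribution, group labels are exchangeable, so we reshuffle the
    pooled data and count how often the shuffled |mean difference|
    is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    # Add-one correction so the estimated p-value is never exactly 0.
    return (count + 1) / (n_perm + 1)
```

A small p-value here is evidence against the null, but a large one does not prove the groups are equal; concluding similarity is what equivalence tests (e.g. TOST) are designed for.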
@harry8175ritchie AKA no. You can't conclude your assumption based on your assumption. This is logic 101. Hard fail; go drive a truck for a living.
Not the way to handle it buddy.
Good!
Thanks