*My takeaways:*
1. Probability sampling: simple random sampling 0:58
2. A data analysis example 6:11
- How to tighten the standard deviation: take larger samples, not more samples 14:40 (see the sketch after this list)
3. How to visualise and understand the data: error bar 17:30
- When confidence intervals don't overlap, we can conclude that means are statistically significantly different 18:30
4. Standard error 25:04
- Standard error vs standard deviation 29:33
- Problem with standard error: we don't have the population standard deviation 30:46, but we can use the sample standard deviation to get a close estimate
- Three different distributions and their skews 36:15; when we use the sample standard deviation to estimate the population standard deviation, larger samples are needed for distributions with more skew
5. Good results are always reported with confidence intervals 43:35
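A minimal sketch of takeaway 2, assuming NumPy and a hypothetical skewed population (this is not the lecture's code): the spread of the sample means barely moves when you take more samples of the same size, but it shrinks when the samples themselves get larger.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical skewed population, for illustration only
population = rng.exponential(scale=100, size=1_000_000)

def spread_of_sample_means(sample_size, num_samples):
    """Draw num_samples samples of the given size; return the std of their means."""
    means = [rng.choice(population, size=sample_size).mean()
             for _ in range(num_samples)]
    return np.std(means)

# Taking MORE samples of the same size barely changes the spread of the means...
print(spread_of_sample_means(50, 100), spread_of_sample_means(50, 1000))
# ...while taking LARGER samples tightens it, roughly by 1/sqrt(n)
print(spread_of_sample_means(50, 100), spread_of_sample_means(500, 100))
```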
Thanks Lei. You did a great job.
@@xingnanzhou8628 You are welcome
@@leixun Quick question, when do we NEED to have more than one sample? And when can we just use one?
14:00 - 95% Confidence Interval
Thank you ☺️❤️
It would be useful to carefully explain the difference between a sample (a random draw of size n from the population), individual sample elements (each member of the sample), and replications (the number of samples drawn). Otherwise it could be easy to confuse which one you're talking about.
All 3 concepts have been explained in previous lectures, from this same playlist.
Great video. I had always wondered about how size of the population affects size of the sample. Was surprised to see that it doesn't!
Yeah! It doesn't, as long as the distribution stays the same.
Standard deviation: a measure of the symmetric spread around the mean; for a roughly normal distribution, mean ± 1 sd covers ~68% of the samples and mean ± 2 sd covers ~95%.
95% confidence interval: mean ± 1.96*sd (a symmetric interval around the mean that contains ~95% of the samples).
Standard error (of the mean): the population standard deviation divided by sqrt(n), where n is the sample size. The standard deviation of a single sample gives a close approximation when the population standard deviation is unknown (sketched below).
37:00 - The more extreme the values in the distribution, the larger the error between the sample and the whole population for small sample sizes.
40:00 - The size of the population doesn't matter; what matters is the skew of the distribution and the size of the sample.
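A minimal sketch of those definitions, assuming NumPy and a hypothetical normal population (not the lecture's code): it checks that the spread of many sample means matches sigma/sqrt(n), then builds a 95% confidence interval around one sample mean.

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=0, scale=10, size=1_000_000)  # hypothetical population
n = 100  # sample size

# Standard error of the mean: population std divided by sqrt(n)
se = population.std() / np.sqrt(n)

# Empirical check: the spread of many sample means should match the standard error
means = [rng.choice(population, size=n).mean() for _ in range(1000)]
print(se, np.std(means))  # these two numbers should be close

# 95% confidence interval around one sample mean: mean +/- 1.96 * standard error
sample = rng.choice(population, size=n)
print(sample.mean() - 1.96 * se, sample.mean() + 1.96 * se)
```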
I would also recommend reading the recommended book and taking some time to digest the information there (like thinking and staring at the wall). It can be really confusing if we just watch the lecture. This lecture in particular requires reading on the student's side too, and it builds upon a few lectures before. The lectures before this one were easy to follow.
Nonetheless, the material is very important and extremely interesting if you understand it well.
+1 for reading the text - I've been reading the assigned readings after each lecture and the way it covers the same material in a slightly different way with different examples has really helped to set the knowledge in my mind.
This is pure gold.
I am a bit surprised he didn't comment on how the standard deviation of the sample is a biased estimator of the standard deviation of the population, and how you should divide by (n-1) instead of n (n being the sample size) when doing this estimation...
I also thought of that, due to the degrees of freedom, but since we have libraries that handle this, maybe he skipped it.
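For anyone who wants to see the effect, a minimal NumPy sketch on hypothetical data: `ddof=0` divides by n (the biased estimator), while `ddof=1` applies the (n-1) correction the comment above describes. The correction is unbiased for the variance.

```python
import numpy as np

rng = np.random.default_rng(2)
population = rng.exponential(scale=1.0, size=1_000_000)  # hypothetical population
n = 5  # deliberately tiny samples to expose the bias

biased, corrected = [], []
for _ in range(10_000):
    sample = rng.choice(population, size=n)
    biased.append(sample.var(ddof=0))     # divides by n: underestimates on average
    corrected.append(sample.var(ddof=1))  # divides by n - 1: unbiased for the variance

print(population.var(), np.mean(biased), np.mean(corrected))
```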
Quick question, when do we NEED to have more than one sample? And when can we just use one?
@@sharan9993 Quick question, when do we NEED to have more than one sample? And when can we just use one?
@@aidenigelson9826 I don't think I understand your question...you always need more than one sample to meaningfully calculate mean and deviation
@@aidenigelson9826 Actually, we always take just one sample, but the idea is that if you kept taking infinitely many samples, the mean of those sample means would be the true population mean.
For experiments we take one sample and calculate the p-values, and thus the confidence interval, to get an idea of how well that sample represents the true population.
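A minimal sketch of the single-sample workflow this reply describes, assuming NumPy and a hypothetical population (simulated here only so there is something to sample from; in practice it is unknown): draw one sample and use its own standard deviation to estimate a 95% confidence interval for the population mean.

```python
import numpy as np

rng = np.random.default_rng(3)
# In practice the population is unknown; simulated here for illustration
population = rng.normal(loc=50, scale=15, size=1_000_000)

sample = rng.choice(population, size=200)           # the one sample we actually take
est_se = sample.std(ddof=1) / np.sqrt(len(sample))  # estimated standard error
lo, hi = sample.mean() - 1.96 * est_se, sample.mean() + 1.96 * est_se
print(f"95% CI: [{lo:.2f}, {hi:.2f}]  (true mean = {population.mean():.2f})")
```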
Thank you MIT for making these courses open!
To verify whether we chose the right sample size, we check what fraction of the time we break the empirical rule. But I'm not clear on why it is fine to use the estimated standard error (and not the population standard error) while computing this fraction. Say the population is skewed and we choose a small sample: won't the estimated standard error be more inaccurate? If so, how can it be used to verify the distance between the population and sample means?
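A minimal sketch of that verification procedure, assuming NumPy and a hypothetical skewed population (not the lecture's code): run many trials, build each trial's 95% CI from the estimated standard error, and count how often the true mean lands inside. With small samples from a skewed population the coverage does fall short of 95%, which is why the lecture tests different sample sizes.

```python
import numpy as np

rng = np.random.default_rng(4)
population = rng.exponential(scale=10, size=1_000_000)  # skewed population
true_mean = population.mean()

def coverage(sample_size, trials=1000):
    """Fraction of trials whose 95% CI (built from the ESTIMATED SE) contains the true mean."""
    hits = 0
    for _ in range(trials):
        s = rng.choice(population, size=sample_size)
        est_se = s.std(ddof=1) / np.sqrt(sample_size)
        if abs(s.mean() - true_mean) <= 1.96 * est_se:
            hits += 1
    return hits / trials

# Coverage falls short of 95% for small samples from a skewed population
# and approaches 95% as the sample size grows
for n in (5, 50, 500):
    print(n, coverage(n))
```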
this is such an amazing lecture.
I hope more people learn about this free lecture
I am wondering whether numTrials plays a part at the end in calculating the confidence interval. A sample size of 200 gave 95%, but how does numTrials affect this?
Thanks a lot for publishing this video. Did someone else also replicate this code in R? Thumbs up!
*☼ 31:00 THE CATCH!*
Well, when he is comparing different distributions vs. population size... why does the uniform distribution at sample size 25 show a difference of roughly 7.5% (37:21), but about 25% on the next slide (39:12)?
Masterclass.
Good job sir very helpful
17:15 Increase the size of the samples rather than the number of samples. What does he mean?
do we need stats and probability before this course? I'm a bit lost T_T
Thank you.
Thanks.
Thank you very much.
Can anyone tell me why the states are independent in an election?
Because one state's voting cannot affect how another state votes
But he stated that this was false
There are so many errors in the notes...
29:35 When I felt I was human like the rest.
Where is the link for lecture 9
ruclips.net/video/vIFKGFl1Cn8/видео.html
20:29
23:21. So 100*600=600k 😂