The sign of a good teacher--I landed here by accident, stayed for the entire lecture, and understood all of it...
*My takeaways:*
1. History of Monte Carlo Simulation 0:56
2. Monte Carlo Simulation 3:23
- Example1: coins 6:03
- Variance 10:00
- Example2: Roulette 11:00
3. Law of large numbers 18:40
4. Misunderstanding on the law of large numbers: Gambler's fallacy 19:48
5. Regression to the mean 22:42
6. Quantifying variation in data: variance and standard deviation 30:14
- Always think about standard deviation in the context of mean 35:10
7. Confidence level and intervals 36:00
8. Empirical rule for computing confidence intervals 39:27
9. Assumptions underlying empirical rule 43:40
- mean estimation error is 0
- Normal distribution
10. Probability density function 46:25
thank you Mr. Lei
Dr. Mohamed Ait Nouh you’re welcome :)
Thanks Mr. Lel
Pajeet Singh you’re welcome
Thank you Mr. Lei
This is a true teacher. He actually explains the concepts instead of just scribbling equations on the board.
Couldn't agree more. I am hooked.
Why MIT is a top school. I love that MIT allows anyone to watch these for free.
COULD NOT AGREE MORE!!! He is truly amazing. Suddenly the Stats I did on a Data Science Coursera course start to make sense. A couple of more lectures by him and I will have everything sorted out in my mind... My God. Some lecturers just Got it and some just Don't.
I wonder how much time and effort went into ensuring every word was meaningful and carefully stated (I've just been through a course with a lecturer who knew his stuff but mostly winged it, which was one of the biggest wastes of my time). I also noticed not a single 'um' or 'uh', which is amazing.
@@benphua Well, I noticed four "ums" or "uhs" between 0:35 and 0:45 alone, but I agree the lecture is very clear.
I've never met him, but he taught me python years ago.
we should be grateful for such giving human beings.
00:00 Monte Carlo simulation is a method of estimating unknown quantities using inferential statistics.
06:48 Variance affects confidence in probability predictions
13:09 Law of large numbers: Expected return of fair roulette wheel is 0 over infinite spins
19:23 Understanding the Gambler's Fallacy and Regression to the Mean
25:16 Regression to the mean is a statistical phenomenon where extreme events tend to move towards the average with more samples.
31:11 Understanding variance and standard deviation for computing confidence intervals.
37:37 Understanding confidence intervals and the empirical rule
44:04 Probability distributions can be discrete or continuous, and are described by probability density functions.
Crafted by Merlin AI.
Watching Prof. Guttag teach is a joy. A true inspiration for those of us who also like teaching and want to do better
For those looking for some visuals of how a Monte Carlo simulation works, see the second half or so of lecture 7 on Confidence Intervals.
MVP
Thanks a lot, that was what I was looking for!
Which playlist??
This guy is such a fantastic teacher. I would love to have him in person, thanks again for uploading the video!
Have him for ... breakfast?
@@zZE94 Ken really sounded weird ahahahha
He prolly would love to have you in person too, for sure.
At the university where I studied all teachers were also fantastic teachers until the exam. Afterwards they were all a**h****.
this man right here is a true teacher, understands the subject topic deeply and speaks passionately
I came here for the Monte Carlo simulation but unexpectedly got the best explanation I've heard so far of simple concepts like Variance and Standard Deviation
Unfortunately, during my Bachelor's and Master's studies, I never had such a great professor. Thanks so much for sharing such a great video.
Not what I was looking for, but couldn't help but watch the entire video. Well done sir.
same
The same!
I love random walks through youtube
wanted to know what a Monte Carlo simulation is but I guess I'll revise some stats intuition ¯\_(ツ)_/¯
@@GaoyuanFanboy123 hahaah same xD
For those that may be confused, he misspoke at 23:36 "taller than average" should have been "taller than the parents". In the case that parents are shorter than average, it is expected that their children will be taller than them, not taller than average.
Some of the best explanations of statistics I’ve heard. Does a great job of breaking down concepts.
After watching this lecture, I wish I was smart enough to get into such elite schools and be taught by such passionate teachers.
Respect!
But you have access to MIT open courseware
An instructor of the highest caliber; clear explanations, projects a seemingly universal likeable and fair personality, low intensity approach. Good hire MIT!
Brilliant lecture. I can binge watch Professor John Guttag's lectures. Amazing.
Should have done better in high school and gone to MIT. This is great. A true teacher
What a beautiful way to explain a concept. Starts with something so simple and gradually builds up to the more complex part, also delivers the lecture in a way that even a tiny bit of boredom can't creep in.
I love professors who make mistakes and make corrections accepting help from students.
Excellent presentation. Don't know why YouTube suggested this video, but I watched until the end. Very gifted professor. The only thing I can think of to improve it is to repeat the questions from the audience so that they are picked up on the recording.
I really love the teachers at MIT. I have watched a ton of lectures from them and all have been great
Lies again? Support Indonesia Malaysia
What a great introductory course: simple to understand yet extremely powerful for students.
Makes even high level material understandable to a neophyte. That's the mark of a skilled educator.
Great teaching style. Few teachers can teach with such concision and clarity. I learn a lot from great educators.
I love these old school professors. They are true masters.
Wow..... He truly explained what Monte Carlo simulation is in 50 min. Thank you Prof.
+Isaac Park I've heard everything but a Monte Carlo here. Confidence intervals, regression to the mean, Gambler's Fallacy etc, but not much about Monte Carlo and its many algorithms.
It was the best university lecture of my life. Thanks Prof J. Guttag
Isn't he the most adorable teacher ever?
Great job walking your audience through the material!
Thanks for addressing the apparent contradiction of the Gambler's Fallacy vs Regression to the Mean ~25:00 in. I'd always thought these 2 were in opposition, but guess I'd never heard (or thought of it) in the right frame of reference.
Wonderful professor. So casual but I believe what the students learn will stick with them forever.
Finally understood what statistics is about after 10 years of endeavour! Thanks so much!
Try applying it to obtain the Lebesgue integral. See, you probably have understood nothing.
Kasra Keshavarz your face shows how stupid you are
Howard Lam. It is “Lebesgue”
This is the best lecture I have ever seen on statistics. It wasn't even what I was looking for but couldn't take my eyes off it till the end. Thank you Professor! Thank you MIT!
He is such a great teacher on multiple topics. After this course I plan to finally take Linear Algebra.
Hint: Playing on 1.25 speed is ideal for this video.
Thanks. :))
2x for engineering students in south asia
For a foreign student from Germany like me, 1.0 speed is good. But for native English speakers I think he speaks quite slowly.
But 1.0 speed is too good.
pro-tip, mate. Thx for the time back.
Thank you Professor Guttag and thank you late Stanislaw Ulam.
Suddenly the Stats I did on a Data Science Coursera course start to make sense. A couple of more lectures by him and I will have everything sorted out in my mind... My God. Some lecturers just Got it and some just Don't.
I give this professor two thumbs up. I like his style. Good presentation also. A hearty bravo zulu to the man.
Very explanatory way of teaching ... Sir, you should teach teachers ... What a teaching style!!!
I love the sense of humour of the instructor. A great lecture indeed!
Very good introduction of how the e-Pi-i conception of probabilistic Calculus by Pi circularity numberness/orbital is a dualistic +/- possible Infinite Sum, Normal/orthogonal self-defining "e", metastable +/- singularity convergence to zero difference, balance of frequency constants in Totality.
At 8:30 he misses an implication of Bayes' theorem - if you observe 52 heads from 100 flips, it is still much more likely that the coin is fair than biased. Because, as he mentions, there are many many more fair coins and dice out there than weighted ones. The probability you have to assess is P(52 heads | coin is fair) * P(coin is fair) vs P(52 heads | coin is biased) * P(coin is biased). Far more likely that it is fair.
My thoughts too
the frequentist approach would work too
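A minimal sketch of the Bayesian comparison described above, using only the standard library; the prior (1 in 1000 coins weighted) and the assumed bias (0.6) are illustrative assumptions, not numbers from the lecture:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n independent flips with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative assumptions (not from the lecture):
prior_fair, prior_biased = 0.999, 0.001   # suppose 1 in 1000 coins is weighted
p_biased = 0.6                            # assumed bias of a weighted coin

k, n = 52, 100                            # observed: 52 heads in 100 flips
joint_fair = binom_pmf(k, n, 0.5) * prior_fair
joint_biased = binom_pmf(k, n, p_biased) * prior_biased

# Posterior probability the coin is fair, given the observation
print(f"P(fair | 52 heads in 100) ≈ {joint_fair / (joint_fair + joint_biased):.4f}")
```

With these assumed numbers the posterior comes out very close to 1, which is the comment's point: the strong prior on fairness dominates a mild excess of heads.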
I love a professional, whether he be a doctor or a scientist, who has the confidence and grace to admit that he makes an honest mistake.
26:53 Great answer to make the difference between gambler's fallacy and regression to the mean clear!
Regression to the mean is not the same as the Gambler's fallacy. Regression to the mean says that after an extreme outcome, the next outcome is likely to be less extreme (closer to the mean). The Gambler's fallacy is the belief that after a streak of one outcome the opposite outcome becomes more likely; it falls into the trap of assuming the events are dependent/correlated. That is not the case in Fair Roulette.
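To make the distinction concrete, here is a short simulation sketch (not the lecture's code), assuming a European wheel with P(red) = 18/37: sessions with an extreme number of reds tend to be followed by sessions closer to the expected count (regression to the mean), while the per-spin probability never changes, so expecting black to be "due" remains a fallacy.

```python
import random

P_RED = 18 / 37                  # assumed European wheel
SPINS, SESSIONS = 100, 10_000
EXPECTED = P_RED * SPINS         # about 48.6 reds per 100-spin session

def reds_in_session(rng, spins=SPINS):
    """Count reds in one session of independent spins."""
    return sum(rng.random() < P_RED for _ in range(spins))

rng = random.Random(0)
counts = [reds_in_session(rng) for _ in range(SESSIONS + 1)]

# Sessions that were extreme (10+ reds above expectation) are followed,
# on average, by sessions near the expectation again: regression to the mean.
followups = [counts[i + 1] for i in range(SESSIONS) if counts[i] >= EXPECTED + 10]
print(f"avg reds after an extreme session: {sum(followups) / len(followups):.1f}")
print(f"expected reds per session:         {EXPECTED:.1f}")
```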
Excellent lecture. Prof. Guttag is a great teacher. Thank you.
Every course or lecture I have watched in this MIT Open Courseware has been superb. Thank you to the teachers and to MIT for posting.
Thank you Prof. Guttag & MIT.
39:07 That a result will lie within an interval with probability 95% doesn't mean it will be within that interval 95% of the time. Probability cannot be directly translated into a percentage of occurrences.
I feel like, with no prior knowledge, I already intuitively understand all of this and use it in daily life. Cool to hear its basis, though, and a more technical presentation
Great lecture, awesome teacher. Concepts were explained really well.
Wow... fantastic lecture by Prof. Guttag... Thank you and congratulations.
Had this same lecture in PSYCH Stats class at CofC. Learned a lot and this was fun to watch again
In the slide "Gambler's Fallacy" it reads at the bottom:
"Probability of 26 consecutive reds when the previous 25 rolls were red is:"
The wording is poor in my opinion.
Does it mean:
"What is the probability of the next roll being red?"
Or Does it mean:
"What is the probability of the next 26 rolls being red?"
Or maybe :
"What is the probability of 26 consecutive reds occurring in the next roll if the previous 25 rolls were red?"
Based on his answer I think that the question should have read:
"What is the Probability of the next outcome being red when the last 25 outcomes where red?"
And then he goes on to talk about it being independent after this question.
He didn't establish at the beginning that the outcomes were independent.
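For reference, both readings can be computed directly; the single-spin probability of red depends on which wheel is assumed (1/2 for the lecture's fair wheel, 18/37 European, 18/38 American):

```python
# The single-spin probability of red depends on which wheel is assumed.
wheels = {"fair (lecture)": 18 / 36, "European": 18 / 37, "American": 18 / 38}

for name, p in wheels.items():
    p_streak = p ** 26   # reading 1: probability of 26 consecutive reds, seen from the start
    p_next = p           # reading 2: probability the next spin is red given 25 reds so far;
                         # by independence the condition changes nothing
    print(f"{name:15s} P(26 reds) = {p_streak:.3g}   P(next red | 25 reds) = {p_next:.3f}")
```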
This is what is used to determine the results of A/B testing, folks; I had to learn this on the fly at my job
WANTED MORE ABOUT MONTE CARLO, but he is such an amazing teacher that I got stuck anyways!!!!
One observation: the code returns totPocket/numSpins, which is in fact the return per spin, not the expected return in %. In this example in particular, since the bet is 1, numSpins equals the total amount paid to play, hence it matches the expected return in %. If you change the value of the bet, the output is not right.
Thank you for this great lecture. You explain it so well. I was looking for Monte Carlo Simulation but ended up watching the whole video.
If all math teachers taught like this, I'm pretty sure the number of people who grew up hating math would be a lot smaller. Very clever way of teaching: giving scenarios and explaining them with mathematical concepts, without diving too quickly into the expressions or formulas that not everyone is ready for.
if all mathematics teachers taught like this, nobody would know any maths
Extremely Based series of lectures. Top tier professor!
thanks lord for these free lectures
Didn't understand any of it but I appreciate the teacher's methods. Well done.
Its a great lecture. Covering basics of statistics but doesn't really have anything to offer on Monte Carlo Simulation 😐
Brilliant lecture...brought back memories of school. Just one mistake @45:46 (perhaps an oversimplification) - a discrete random variable need not have a "finite" number of possible values; it can also be "countably infinite", as in the Poisson distribution. Again, I'm not trying to be a smart-ass...but this is an important consideration
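For instance, the Poisson distribution is discrete yet takes countably infinitely many values; a standard fact (not from the lecture) is:

$$P(X = k) = \frac{e^{-\lambda}\,\lambda^{k}}{k!}, \qquad k = 0, 1, 2, \ldots, \qquad \sum_{k=0}^{\infty} P(X = k) = 1.$$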
As already stated, a great lecture by a great lecturer. Though I believe he misspoke at 23:33 when, regarding "regression to the mean", he said that "two parents who are shorter than the average likely would have a child that is taller than THE AVERAGE", which (I believe) is incorrect. What I think he meant to say is that "...they are likely to have a child that is taller THAN THEM"...
And thanks again for making this and so much other fantastic content freely available :}
Brgds
The roulette and coin flip need other input variables: maybe on the next turn of the roulette the dealer spins the wheel harder or slower, or the ball shoots out of the fingers faster or slower. When you flip a coin, maybe the thumb throws the coin harder or slower, or you raise the hand too high and the results change. So, despite the simulations, in real life the odds are different. But who has infinite time to flip infinite coins to confirm the mean value of 50% in a coin flip :)
it says monte carlo simulation, but it's talking about distribution, conf interval. nice teacher tho
Thank you Sire.
I hope you're okay wherever you are
12:47 "win some lose some, it's all the same to me"
Lemmy
Three things that are wrong with this:
1.- It's not "Von Noiman" *Von Neumann* is pronounced: "Fon Noiman".
2.- The ENIAC machine wasn't John Von Neumann's but John Presper Eckert's and John Mauchly's brainchild.
3.- The "stored program computer" is an invention being attributed to Von Neumann because after Eckert had bumped into him at a train station, he asked Neumann to provide some ideas for some functions to be added to his computer, but Eckert had already intended to have a stored program in the ENIAC since he envisioned it for ballistic table computation, and only wanted Neumann's contribution on added operational functions, Neumann as was his character told Eckert he'd write a paper to explain what he though would be convenient to add to the ENIAC, the result was a 100 or so pages essay that prior to give to Eckert he distributed among colleagues and everyone who read it believed the stored program paradigm he made reference in it was his original idea, then he didn't made any attempt to correct the confusion about it.
The explanation is clear, his lecture is great!
27:30 But if we start counting from the beginning of the series, when we have 5 blacks in a row, then the next black would change the series of 5 into a series of 6, which is more extreme. Can't I think of it this way?
such respect for these fantastic teachers
As a roulette dealer I am interested in how smaller bankrolls and the length of playing sessions affect these numbers. The hold percentage for roulette is much higher than 3% in our casino. Most likely 20%+
I wish every teacher is just like him. Then every child would get to enjoy studying. Thanks professor. Thanks for making the content available online.
Well said, agreed!!
I am the Great Canadian Gambler and can attest that my biggest two 6.2 Standard Deviation swings ever were back to back. Same in my early years when I played Craps to get the free junket to the casinos. Biggest win followed by biggest loss. I note that because I heard poker champ Daniel Negreanu mention the same back-to-back phenomenon. Always believed in the odds but back-to-back streaks leave an eerie feeling.
He is the best! Such a pleasure and luck to be able to access this lecture.
Great professor! A slight hiccup on 23:38; I believe he meant to say if the parents are both shorter than average it is likely that the child will be taller than their parents (not average).
A good session, I'll search for the prof and watch more videos. 👍
I feel like the slide at 22:00 is a good opportunity to introduce probability notation, since in English the second sentence sounds really misleading. The first sentence is P(26 consecutive reds). The second sentence is P(26 consecutive reds | the FIRST 25 are red).
Strictly speaking, the second sentence is worded ambiguously; what the professor means is "Probability of a single roll being red, given that the last 25 were red." This makes it WAY easier to understand that rolls are not correlated. What is written on the slide makes it sound like there are 26+25 rolls taking place.
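In that notation (writing R_i for "spin i comes up red" and p for the single-spin probability of red on whatever wheel is assumed):

$$P(R_1 \cap R_2 \cap \cdots \cap R_{26}) = p^{26}, \qquad P(R_{26} \mid R_1 \cap \cdots \cap R_{25}) = p \quad \text{(by independence)}.$$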
Ok, he is really good 33:45, how I hoped to have a prof. like him back in college.
Love your Data Table hack at 2'. Thank you for that!
Good lecture overall but there is a bug in the code at 14:32 and 15:25 -- playRoulette should instead print 100 * totPocket / (numSpins * bet).
The output in his example only looks correct because `bet` is 1. If `bet` were 2 and `numSpins` were 1, it either prints "-200%" or "7200%" (obviously you can't lose more than 100% or win more than 3600%).
Same thought. He should have divided by the bet amount to calculate the percentage.
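A hedged reconstruction of the fix being discussed (the class and function names below are illustrative guesses, not the lecture's actual code); the point is only that the percentage should be normalized by the total amount wagered, numSpins * bet:

```python
import random

class FairRoulette:
    """Illustrative wheel: pockets 1..36; a winning single-number bet nets 35x the stake,
    so the expected return is 0 (which is what makes it 'fair')."""
    def __init__(self):
        self.pockets = list(range(1, 37))
        self.ball = None

    def spin(self):
        self.ball = random.choice(self.pockets)

    def bet_pocket(self, pocket, amt):
        # Net change to the pocket: +35*amt on a win, -amt on a loss.
        return 35 * amt if pocket == self.ball else -amt

def play_roulette(game, num_spins, pocket, bet):
    tot_pocket = 0
    for _ in range(num_spins):
        game.spin()
        tot_pocket += game.bet_pocket(pocket, bet)
    # The fix discussed above: divide by the total amount wagered
    # (num_spins * bet), not just num_spins, to get a true percentage.
    return 100 * tot_pocket / (num_spins * bet)

random.seed(0)
print(f"Return over 1,000,000 spins: {play_roulette(FairRoulette(), 1_000_000, 2, 5):.2f}%")
```

With this normalization the printed return stays near 0% for the fair wheel regardless of the bet size.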
A baseball batter is a complicated example. Not independently random: the player could be injured, getting divorced, losing his house, about to be sacked, or close to making his bonus. Over a season there will be more factors at work, such as different pitchers and weather conditions - more random, but still not perfectly independent random. It is likely that data here will be skewed, since the worst batter can do no worse than zero. A great average (above average) is 0.4. A batting average of 1.0 is theoretically possible but I doubt that it has ever been achieved over a career or even several seasons. Maybe in a single game or a one-season career (odd to quit with that record short of serious injury or jail).
It is very hard to prove that data is truly random by sampling, even if it is. There are many ways, though, to prove that it is not random.
Note: 36 fair, 37 European and 38 US spins, not 35, are required. If you win on every one you will be asked to leave.
23:32 If the parents are shorter than average then the child will likely be taller than the parents, but not taller than average.
He probably just misspoke.
yup. It would be gambler's fallacy to say that.
caught that too. just a slip of the tongue.
Yeah, slip of the tongue, one of those is not worth to correct at the momento because are understood right away
I think he meant the average of their height
My big interest is Monte Carlo simulation and Markov chain!!!
Thats the best lecture I have ever seen.
A million is still closer to zero than to infinity, and for that matter any number will always be closer to zero than to infinity. So Bernoulli's law should be: "as the number of trials gets farther from zero, the variance in the outcome approaches zero."
The slide at 25:05 is wrong! For a system without memory (like a roulette wheel), the past has NO EFFECT on future events. Therefore, the probability of any event remains the same, even after the occurrence of an extreme event. That means that after an extreme event the system is in exactly the same state as it was when we started the game. After a sequence of 10 reds, the probability of getting a red on the next trial is still just 18 out of 37. Some people lost a lot of money in Monte Carlo the day "red" turned up 26 times in a row. When doing Monte Carlo simulations, be careful of so-called "cyclic" random number generators. From a mathematical point of view, be aware that the variance of the estimated mean value tends to zero as the number of trials increases, but the variance of the number of events does not. Check any good book on probability.
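A quick empirical check of the memorylessness claim, assuming a European wheel (18 red pockets out of 37); the conditional frequency of red right after a run of 10 reds comes out near 18/37:

```python
import random

P_RED = 18 / 37              # assumed European wheel
rng = random.Random(1)

streak = 0                   # current run of consecutive reds
bets = hits = 0              # spins right after a 10-red run, and how many were red

for _ in range(2_000_000):
    if streak >= 10:
        bets += 1
    red = rng.random() < P_RED
    if streak >= 10 and red:
        hits += 1
    streak = streak + 1 if red else 0

print(f"P(red | previous 10 were red) ≈ {hits / bets:.3f}   (theory: {P_RED:.3f})")
```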
Nice. Can I just argue something, there is always an example with coins but I don't think I have ever heard someone just adding a disclaimer that tossing a coin is not a random process. In theory, if you could start the tossing under the same initial conditions you would get the same outcome. So it is possible to get an infinite number of heads if you just manage to toss the coin the same way an infinite number of times (i.e. with the same initial conditions). I don't think that would be too difficult to achieve either. An example of a true random process is nuclear decay of radioactive atoms.
Under what conditions is "conformity to expectation" distinct from "regression to the mean"? When are these phrases used equivalently? By whom, and why? In what ways does the use of statistically derived results differ between the population of typical social engineers and the population of physical science theorists, and why?
Actually you are an amazing demonstrator
I think that explaining the gambler's fallacy should take into account how the gambler thinks. The gambler thinks that one rare event has to be compensated by another very rare event, counter to the one the gambler just experienced. In fact, the counter-event is as rare as the currently observed one, and is not likely to happen - well, because it is rare as well! What is likely to happen is a less rare event, rather than another extreme one. Would that be one of the ways to explain the gambler's fallacy?
What's the main difference between the law of large numbers and the Monte Carlo experiment?
Is Monte Carlo using the law of large numbers to run the experiment?
Why do we need the Monte Carlo experiment if we know the characteristics of the law of large numbers?
Can anyone tell?
I had a difficult time understanding the _Gambler’s Fallacy_ - well, I understood it but couldn’t fully believe it. So back in the day, on a PC with a 386 processor, I wrote a BASIC program that flipped a coin continuously and kept track of the results. After 2 “heads” in a row I bet on tails. I won roughly 50% of the time. I then incremented the waiting period by one more flip, betting tails after 3 heads in a row, then 4 in a row, and so forth. I came out ahead after waiting 8 flips, so then I ran the 8-heads-in-a-row routine many times to test it, and of course I never came out ahead again. I fully grasp that the coin has no memory but my feelings still tug at me saying _Tails is overdue!_
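A small Python re-creation of that experiment (the original was in BASIC); for any waiting period N, betting on tails after N heads in a row still wins about half the time:

```python
import random

def wait_and_bet(n_heads, n_bets=10_000, seed=42):
    """Flip a fair coin; every time `n_heads` heads occur in a row,
    bet on tails for the next flip. Return the fraction of those bets won."""
    rng = random.Random(seed)
    wins = bets = run = 0
    while bets < n_bets:
        heads = rng.random() < 0.5
        if run >= n_heads:       # this flip settles a "tails is due" bet
            bets += 1
            wins += not heads
        run = run + 1 if heads else 0
    return wins / bets

for n in (2, 4, 8):
    print(f"bet tails after {n} heads in a row: win rate ≈ {wait_and_bet(n):.3f}")
```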
I can only imagine how computationally heavy running Monte Carlos at the LHC are.
he is so funny, i wish i had such professors
proper: denoting a subset or subgroup that does not constitute the entire set or group, especially one that has more than one element.
Kinda rushed at the end, but I'm very fortunate to have studied probability and random processes under Prof. SNS at our institute; he took his sweet time to ensure that we all (class strength is 22) understood the basics of stats before moving on to the analysis of random variables.
Great lecture nonetheless, far better than the Data Science lectures we have
Love from IIITDMJ, India
His explanation of regression to the mean is slightly confusing, because the main point is that the number of samples increases. (Is this true?) In other words, as with the law of large numbers, as the sample converges to the population, the variance around the mean will be smaller - NOT that there is an influence (the events are i.i.d.).
Small mistake at 23:36: I'm sure what he meant to say is that the child would be taller than the parents, but instead he said taller than the average, which makes no sense.