Sublime video, Emma. I get the feeling you will explain every useful thing in the Hippo book, thus saving us all the bother of reading it.
Emma your videos are amazing! Keep doing what you do. Looking forward to more such content
Hi Emma, thanks for keeping producing such high quality videos. Do you mind explain how to deal with multiple test problems in other scenarios that you mentioned, much as sliced data into multiple segments?
Your videos are super helpful, thank you!
awesome teacher! 🙌
Your videos are gold! Thanks!!
You're really great!!! Please upload a video on the Facebook Data Scientist interview experience. Thanks!
Thank you! This is very helpful! I'm done with this video; all the content is noted!
Thanks for sharing, Emma. I found these very helpful, and that's actually what I've encountered at work: marketing ran a flawed A/B test without really understanding it, then asked me to analyze the results. Maybe I should share your channel with them lol
Hi Emma! I really like your videos. They are very insightful and helped me a lot when I was preparing my DS interviews:)
Can you clarify: does the multiple hypothesis problem arise from testing within a segment (e.g., control vs. only the Web segment), or from having multiple treatment groups, such as control vs. Web vs. iOS?
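For illustration, here is a minimal Bonferroni sketch; it applies in either reading, since both involve several simultaneous comparisons against control. The segment names and p-values below are made up:

```python
# A minimal Bonferroni sketch; segments and p-values are hypothetical.
alpha = 0.05
p_values = {"Web": 0.021, "iOS": 0.043, "Android": 0.300}

m = len(p_values)  # number of simultaneous comparisons against control
for segment, p in p_values.items():
    # Bonferroni: each comparison must clear alpha / m to control the
    # family-wise error rate across all m comparisons.
    print(f"{segment}: p={p:.3f}, significant? {p < alpha / m}")
```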
Great video. Very insightful. I would be interested to hear your thoughts on A/B tests set up to ensure that changes do not break anything.
For example, if an e-commerce website begins to place ads on the site, they want to make sure that adding the ads does not cause people to buy less. What would be a good way to think about tests like this?
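One common way to frame such "do no harm" tests (an addition here, not from the video) is as a non-inferiority test: instead of asking whether ads improve revenue, you ask whether you can rule out a drop larger than a tolerable margin delta. A minimal sketch with simulated data, all numbers hypothetical:

```python
# Hypothetical sketch of a non-inferiority test: can we rule out that ads
# reduce mean revenue per user by more than a tolerable margin delta?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(10.00, 3.0, 5000)   # simulated revenue/user, no ads
treatment = rng.normal(9.95, 3.0, 5000)  # simulated revenue/user, with ads
delta = 0.20  # the largest drop in mean revenue we would tolerate

# H0: mean(treatment) - mean(control) <= -delta (ads hurt by more than delta)
# H1: mean(treatment) - mean(control) >  -delta (non-inferior)
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
z = (diff + delta) / se
p = 1 - stats.norm.cdf(z)  # one-sided p-value
print(f"diff={diff:.3f}, z={z:.2f}, one-sided p={p:.4f}")
# A small p lets us reject "ads hurt by more than delta" and call it safe.
```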
Thanks for your videos! When using tiered significance levels to check multiple metrics, how should we calculate the sample size? Should we calculate it based on the most important metric, or calculate it for each metric and choose the largest required sample size?
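One reasonable approach, sketched below as an assumption rather than a definitive answer: size each metric at its own tiered alpha and minimum detectable effect, then run with the largest n so even the least sensitive metric is adequately powered. All metric names and numbers are made up:

```python
# Hypothetical sketch: size each metric at its own tiered alpha and minimum
# detectable (standardized) effect, then take the max across metrics.
import math
from statsmodels.stats.power import NormalIndPower

solver = NormalIndPower()
# (metric, tiered alpha, standardized minimum detectable effect) -- made up
metrics = [("revenue/user", 0.01, 0.05),
           ("conversion",   0.05, 0.10),
           ("clicks",       0.10, 0.08)]

sizes = {}
for name, alpha, effect in metrics:
    n = solver.solve_power(effect_size=effect, alpha=alpha, power=0.8,
                           alternative="two-sided")
    sizes[name] = math.ceil(n)

print(sizes)
print("per-group n to run with:", max(sizes.values()))
```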
Good explanation, but I think the last error is handled incorrectly... Imagine you run an experiment and it comes out significant (you haven't checked the observed power yet). If you accept it, that may be wrong, but if you rerun it and accept a win from either run, you have nearly doubled your false-positive rate. We should either look only at the rerun, or make sure the experiment has sufficient power up front (probably more than our threshold).
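For what it's worth, a quick simulation of that point (hypothetical setup): under a true null, counting "significant on either run" roughly doubles the false-positive rate, while judging only by the rerun keeps it at alpha:

```python
# Hypothetical simulation: under a true null effect, declaring success if
# EITHER the first run OR the rerun is significant roughly doubles the
# false-positive rate; judging only by the rerun keeps it at alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 200, 20_000
either_sig = rerun_sig = 0
for _ in range(trials):
    # Two independent experiments with no real treatment effect
    p1 = stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
    p2 = stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
    either_sig += (p1 < alpha) or (p2 < alpha)
    rerun_sig += p2 < alpha

print(f"either run significant: {either_sig / trials:.3f}")  # close to 2*alpha
print(f"rerun alone significant: {rerun_sig / trials:.3f}")  # close to alpha
```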
Good explanation 👌 keep going 😊
Hi Emma, this is very helpful! Thank you for making these videos. Just a quick follow-up question: testing multiple metrics in an A/B test is common; usually we pick one metric as the main metric and a couple of others serve as supporting metrics. So if I make the launch decision based only on the significance of that one key metric, that's fine, right?
The reason I ask is that we sometimes get trade-off questions: the key metric moves as expected but a supporting metric moves in the opposite direction. Would you launch? Then we discuss short-term vs. long-term benefits, things like that.
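As a sketch of one possible decision rule (purely illustrative; the metric names and numbers are made up): launch only if the key metric wins and no guardrail metric significantly regresses, and treat a significant guardrail regression as the trigger for exactly that short- vs. long-term discussion:

```python
# Hypothetical launch rule that looks beyond the single key metric:
# launch only if the key metric wins AND no guardrail (supporting)
# metric significantly regresses. All numbers are made up.
alpha = 0.05
results = {  # metric -> (observed lift, p-value)
    "key:conversion":         (+0.030, 0.010),
    "guardrail:revenue/user": (-0.020, 0.030),
    "guardrail:latency":      (+0.001, 0.600),
}

key_lift, key_p = results["key:conversion"]
key_wins = key_lift > 0 and key_p < alpha
guardrail_breaks = any(
    lift < 0 and p < alpha
    for name, (lift, p) in results.items()
    if name.startswith("guardrail:")
)

# A significant guardrail regression is flagged for discussion
# instead of auto-launching on the key metric alone.
print("launch" if key_wins and not guardrail_breaks else "investigate trade-off")
```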
I can't find the material on p119 in the Hippo book; the whole chapter there is about ethics.