Thanks for this basic summary video. Example was quite helpful.
Would have loved to see the following:
- A live demo of setting up the experiments on Amplitude along with their trackers
- More real-life insights about how many experiments an average PM runs concurrently, what the average user segment size is, and whether tests always run for their full planned duration
Thanks so much! Glad you found value. As for your requests:
1. This is something I can look into adding in a future lesson!
2. Teams usually run a small number of A/B tests at a given time, so they can quickly finish one and move on to the next. If you have multiple tests running at once, you need to make sure they're all mutually exclusive from one another, which cuts down the number of users eligible for each test. If you have an enormous product (like YouTube), you can do this more easily, especially if you're testing completely unrelated parts of the product.
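If it helps, here's a rough sketch of the kind of hash-based bucketing that keeps concurrent tests mutually exclusive; the experiment names and the even split are just made up for illustration:

```python
# Rough sketch of deterministic bucketing so overlapping tests never share users.
# Experiment names and the even 3-way split are illustrative assumptions.
import hashlib

EXPERIMENTS = ["new_onboarding", "pricing_page_copy", "search_ranking"]

def assigned_experiment(user_id: str) -> str:
    """Each user lands in exactly one experiment, so tests never overlap,
    but each test only sees ~1/3 of all users."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(EXPERIMENTS)
    return EXPERIMENTS[bucket]

print(assigned_experiment("user-12345"))  # deterministic per user
```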
In terms of average user segment size, it depends heavily on the product and its current metrics. I'd rely on the online A/B test sample-size calculators to work out the numbers.
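For anyone curious what those calculators are doing, here's a minimal sketch of the standard two-proportion sample-size formula, assuming 5% significance and 80% power; the baseline rate and lift below are just example numbers:

```python
# Minimal sketch of what an A/B test sample-size calculator computes,
# assuming a two-proportion z-test. Inputs below are illustrative only.
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            min_detectable_effect: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Users needed in each variant to detect an absolute lift of
    `min_detectable_effect` over `baseline_rate`."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1  # round up

# e.g. 5% baseline conversion, want to detect a +1% absolute lift
print(sample_size_per_variant(0.05, 0.01))  # on the order of 8,000 users per variant
```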
Usually teams do leave a test running for the recommended time so they can be confident the results are actually statistically significant and not just random noise. If the results are overwhelmingly positive or negative, though, teams usually see that quickly and can turn off the test and move on to the next one.
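And here's a hedged sketch of the "real lift or just noise" check, using a plain two-proportion z-test with invented counts; real experimentation tools also correct for repeatedly peeking at the results before the planned end date:

```python
# Sketch of a two-proportion z-test on observed conversions.
# The counts are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# e.g. control converts 500/10,000 vs. variant 620/10,000
p = two_proportion_p_value(500, 10_000, 620, 10_000)
print(f"p-value = {p:.4f}")  # well below 0.05, so the lift looks real
```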
@ProductManagerAnthony Thanks a lot for those insights. Looking forward to the next one. Cheers!