👷 Join the FREE Code Diagnosis Workshop to help you review code more effectively using my 3-Factor Diagnosis Framework: www.arjancodes.com/diagnosis
Great content as always.
Thanks so much Enio, glad the content is helpful!
15:27 You don't need OR here. Just use an empty string as the second argument to os.getenv.
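For example, a minimal sketch of the two patterns (the variable name `GROWTHBOOK_API_KEY` is just illustrative here):

```python
import os

# Instead of the `or` fallback:
api_key = os.getenv("GROWTHBOOK_API_KEY") or ""

# os.getenv already accepts a default as its second argument:
api_key = os.getenv("GROWTHBOOK_API_KEY", "")
```

One subtle difference: the `or` version also replaces an empty-but-set variable with the fallback, while the default argument is used only when the variable is unset.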
Thanks a lot, I am learning something new every time I watch your channel!
Thanks so much Tim, glad the content is helpful!
I think i have been part of one of RUclips's experiments for the last 10 years or so. =D
Where they gradually increase the amount of bad suggestions I receive day by day to see where my breaking point is.
By this point the entire suggestions tab is basically useless, and I'm stuck manually browsing the subscriptions tab.
so concise and to the point!
Thanks so much Shaheer, glad you liked it!
Great and very interesting video! Thank you!!
Thanks Loic, happy you’re enjoying the content!
Just great and well-prepared content. Thanks for sharing it Arjan.
Greetings from Brazil.
Thank you Leonardo, glad to hear you like the content!
Where’s the hoodie from?
Also great video.
Good stuff! Thanks for sharing!
Thanks so much, glad the content is helpful!
Very useful topic, thanks for covering it. BTW, is there any approach to structuring feature flags? I had a project where we tested and rolled out many experimental features, but the code looked really ugly and was hard to maintain since we had so many flags.
Thanks Eduard! I think the tools I mentioned all have some sort of tagging or group option, so you can organize your feature flags that way. You could then use protocols/abstraction to only supply the set of feature flags to each part of your application that it actually needs, so that it's decoupled more.
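A minimal sketch of that protocol idea (the flag names and classes here are hypothetical, just to show the decoupling):

```python
from typing import Protocol


class CheckoutFlags(Protocol):
    """Only the flags the checkout module actually needs."""
    new_payment_flow: bool


class SearchFlags(Protocol):
    """Only the flags the search module actually needs."""
    fuzzy_search: bool


class AllFlags:
    """Central flag store, e.g. populated from a feature-flag service."""
    new_payment_flow = True
    fuzzy_search = False


def render_checkout(flags: CheckoutFlags) -> str:
    # This function can only see the checkout-related flags.
    return "new flow" if flags.new_payment_flow else "old flow"


print(render_checkout(AllFlags()))  # "new flow"
```

Each module depends only on its own narrow protocol, so the central flag store can grow without every consumer seeing every flag.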
If the codebase is the same no matter which features are activated, how do we track which version of the configs a user saw? Is that done by having logging statements that print the configs?
Great explanation, thanks for sharing
Thanks so much, glad the content is helpful!
Thank you for the amazing video!
Thank you Akhil, glad you liked the video!
Really nice video! I believe it may be a bit overkill, but I would have created a decorator with parameters, taking the name of the event as a parameter and having it trigger the event whenever the decorated method is called. It makes little sense here, but it could be a nice addition to scale this to a larger codebase.
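A rough sketch of what such a parameterized decorator could look like (`send_event` here is a hypothetical stand-in for a real analytics client):

```python
from functools import wraps
from typing import Any, Callable

events: list[str] = []


def send_event(name: str) -> None:
    # Stand-in for a real analytics call, e.g. to GrowthBook.
    events.append(name)


def track_event(event_name: str) -> Callable:
    """Decorator factory: fires a tracking event on every call."""
    def decorator(func: Callable) -> Callable:
        @wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            send_event(event_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@track_event("checkout_clicked")
def checkout() -> str:
    return "done"


checkout()
print(events)  # ["checkout_clicked"]
```

The event name lives next to the method it belongs to, which keeps tracking calls from being scattered through the method bodies.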
Good content, but the side camera angle shots are really distracting. Was doing that an A/B test itself?
great one
Thank you!
Love the video! Just a mild criticism - the side angles that the video occasionally cuts to are kind of jarring and out of place. It makes it feel like the video is *trying* to be cool and fancy, and is mostly distracting.
You're great at presenting! And youtube viewers are incredibly conditioned to accept the "vlog format" of somebody talking straight to the camera and using jump cuts or zooming in and out from the same angle.
If I were you, I would drop camera B entirely (which is probably easier to film anyway!)
Again, love the videos generally though. You fill a great niche for mid/senior level python topic videos and I try to watch every single one!
Totally agree about camera B 👀 just drop it Arjan 👌
Side question but I think an interesting topic: how would you deliver this application to users without leaking the API key for Growthbook?
Like @100rubles says, there is no way to hide it. But you should use an API key that only has permission to send an event and nothing else.
Makes sense, thank you! However I guess even in that case, a bad actor could just fake requests, screwing with the analytics.
That's where logins + per-user keys /API tokens come in I imagine.
Can you please explain how you came up with your channel name? What does it mean? Is it your name, Arjan?
What I don't like about A/B testing is that companies tend to end up completely ignoring user feedback. They'll just make two versions, answer "which one is used more often?" and just ram that version down our throat. Looking at you, Reddit.
Anyway, there's usually something that users have been wanting for YEARS, but companies won't do it, because no A/B testing :(
That just ends up with me not wanting to use it at all, because it's so impersonal.
Not that users are always right (only "in matters of taste", as the quote goes), but those large entities end up becoming impersonal.
While I'm not familiar with the Reddit case, I firmly believe that listening to user feedback is just as crucial as conducting A/B testing. However, it's important to note that there can be drawbacks to relying solely on explicit user feedback. For instance, it can often be biased. Users don't always express their genuine needs initially, and I've observed numerous cases where they've changed their stance after considering the potential consequences of the changes they initially requested. Additionally, there's the issue of representation. A small group of users may provide extensive feedback, while the majority, potentially with different views, remain silent. This disparity makes decision-making based on a limited number of user feedback responses quite challenging.
I miss your normal voice, but I get that you're testing different audio stuff.
Interesting content, but I don't like A/B testing at all.
Hi Arjan. Hoe gaat het ? :) At 19:22 there is no link for the rest api in the top right corner :) Very interesting and your calm way of explaining makes your content also very relaxing ;) Cheers from Romania.
Hi Cezar, thanks for the heads-up!