Best Multi-Armed Bandit Strategy? (feat: UCB Method)
- Published: Aug 8, 2024
- Which is the best strategy for the multi-armed bandit problem? Also covers the Upper Confidence Bound (UCB) method.
Link to intro multi-armed bandit video: • Multi-Armed Bandit : D...
Link to code used in this video: github.com/ritvikmath/Time-Se...
Link to Hoeffding's Inequality: lilianweng.github.io/lil-log/...
This is called teaching with the highest standards
You're a good teacher, man! Too bad only very few academics can explain things with the clarity and simplicity you do.
I appreciate that!
I've watched loads of your videos and it's given me so much clarity with so many different data science concepts. You're a really great teacher, hope you keep posting videos and hope your channel keeps growing!
Love this video man. Just the simple message the viewer gets that you're here to help them and break down higher, abstract concepts into simpler terms they can grasp is incredibly reassuring. Even if I failed to understand any given part as a student I'd go back over and over with the confidence you're willing and able to help me get there eventually. Even if this channel isn't around forever never stop sharing your knowledge.
You simply rock 👍Your teaching style, way of explaining complex things in such a simpler fashion makes learning much easier and faster. Wonderful.
I really appreciate your videos. I'm taking a course on machine learning and A/B testing, and after every lesson I come watch your videos to actually understand what I just learned.
One of the best video explanations I have seen on Data science so far. Please keep up the good work Ritvik. Thanks a lot!!
Most welcome!
Choosing a place for dinner will never be the same again...your videos are fantastic, man! I was so frustrated earlier today because I simply couldn't get a grip on the UCB algorithm. Now, I am more than happy not only because I finally understood it (at least the intuition behind it), but also because I have a name for one of the dominating stories of my life (exploration - exploitation - dilemma). You, sir, are one of the most amazing teachers I ever experienced!
This is very, very well explained. Concise, yet conversational. Excellent stuff.
So great, clear my doubt completely. Please keep doing this!!
Your ability to communicate difficult concepts using story telling is unparalleled.
this is too good Ritvik. Congrats you made learning UCB easier
The best video explanation I have seen so far. Could not stop paying attention. Thank you!
Glad it was helpful!
Thank you, brother! You are very good at explaining and giving the right information. Respect!
Your videos have cleared up my concepts over the years. Please make a playlist on Reinforcement Learning.
I was stuck on bandit algorithm for a day before I found your video. Excellent work!
Thanks!
Thank you, you're a talented teacher. You explained it very well and clearly.
BEST EXPLANATION EVER.
Thank you so much, Ritvik!
This is such a great explanation! Thank you!
Thanks a lot for these multi-armed bandit videos!
spent ages trying to figure this stuff out, your explanations have helped a lot
Thank you :-)
Made it easy. Thanks for teaching this and being clear as day.
The best math-computer-science instructor online. Much appreciated
love the way you explain by examples!
This is great. You should definitely continue with reinforcement learning applications!!!
Warning, everybody... very addictive videos... I just can't stop watching one after another. Fantastic job!!!
Thank you so much for also providing the link to the Hoeffding's inequality! Most other sources for this just skip the theory which I dislike since I would like to understand this algorithm.
That's an amazing explanation!
You are a great teacher indeed.
Better than my professor. Thank God I found your video, thank you very much!!
I like your videos dude. Thank you for creating them!
Glad you like them!
Always the best 👌 I hope you design a RL course one day. It will definitely be one of the best🌝
great explanation bro!
Your videos are only getting better! Thank you very much. Is the restaurant's happiness score equivalent to the rewards delivered?
Thank you so much, you explained that very well.
This is such a good explanation. Brilliant.
Glad you think so!
Amazing! Thanks a lot!!
The explanation makes the concepts very clear.
thanks!
Ritvik you are a pedagogical GOD
Awesome explanation. Thanks a lot
The thing I love the most about your videos is the perfect balance between intuition, theory and matching them to results. Keep going!
If you have a Patreon or equivalent account, I'd be honored to support you in this terrific journey of yours.
I appreciate that!
Agree!
Nicely explained, thanks.
Wonderful explanation
Glad you think so!
Came here to get better at picking restaurants but stayed for the data science teaching!
Woo!
Keep up the good work !!
This was an excellent video. Thanks.
Glad it was helpful!
Thank you very much... you made it very easy to understand.
You are welcome!
This is super cool! Thanks :)
Hey, thanks a lot for the explanations! Maybe you can make a third video about random and directed exploration. There are a lot more models beyond UCB :)
Clear explanation
Best explanation. P.S. It would be nice to see the results for 300+ days in this competition of UCB vs. exploitation.
Mindblowing!
While watching this video, I unconsciously started nodding!!!
Another very good video.
Glad you enjoyed it
very nice lecture
You’re the goat
That's probably Hoeffding's inequality. Maybe the name sounds strange, but nevertheless deserves to be spelled correctly!
thank you!
PERFECT !!!
perfect!
First, i love your channel!
Hi. I have just watched a couple of your videos and couldn't resist the temptation to subscribe and binge on all the materials. Very impressed by the intuitiveness of your approach. May I ask if you have or recommend any materials to intuitively understand epsilon automata machines and CSSR algorithm. Utterly grateful for your reply.
Very helpful!! Just want to know: if we don't have any prior info about the happiness distribution of each restaurant, how do we use this UCB algorithm? In a total cold-start problem, what parameters would help estimate a restaurant's happiness distribution in the city?
Nice!! thank you
No problem!
Great tutorial, brother. Can you make a lecture on the UCB1 derivation?
Hi, first of all, very well put together video!
One question: in the exploitation approach, in the example, we visited each restaurant once (n visits in total) and then continued with the best observed one for the remaining 300 − n days, right?
Also, I find it quite surprising that exploitation-only outperforms UCB1 for larger n; intuitively, the exploitation-only approach seems less stable and more up to chance (it may perform even worse than exploration-only). I guess the second term, based on Hoeffding's inequality, really punishes UCB1 in this example 🤔
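A minimal simulation sketch of the comparison discussed above: explore-then-exploit vs. UCB1 over 300 days. The restaurant means and noise level here are illustrative assumptions, not the numbers from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 restaurants with unknown mean "happiness" scores.
true_means = np.array([6.0, 7.5, 7.0])
DAYS, K = 300, len(true_means)

def reward(arm):
    """Noisy happiness observed from one visit to `arm`."""
    return rng.normal(true_means[arm], 1.0)

def explore_then_exploit(n_per_arm):
    """Visit each arm n_per_arm times, then commit to the best observed."""
    total, means = 0.0, np.zeros(K)
    for a in range(K):
        samples = [reward(a) for _ in range(n_per_arm)]
        means[a] = np.mean(samples)
        total += sum(samples)
    best = int(np.argmax(means))
    for _ in range(DAYS - n_per_arm * K):
        total += reward(best)
    return total

def ucb1():
    """Each day pick the arm maximizing mean + sqrt(2 ln t / n_a)."""
    counts, sums, total = np.zeros(K), np.zeros(K), 0.0
    for a in range(K):  # play each arm once to initialize
        r = reward(a)
        counts[a] += 1
        sums[a] += r
        total += r
    for t in range(K, DAYS):
        bonus = np.sqrt(2 * np.log(t + 1) / counts)
        a = int(np.argmax(sums / counts + bonus))
        r = reward(a)
        counts[a] += 1
        sums[a] += r
        total += r
    return total

print("explore-then-exploit (n=1):", round(explore_then_exploit(1), 1))
print("UCB1:", round(ucb1(), 1))
```

Averaging over many random seeds (rather than one run) gives a fairer picture of how "up to chance" each strategy is.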
thanks
Are there any models that factor in staleness? I would imagine going to the same restaurant 297 days in a row would be pretty boring, so the optimal strategy should include the other restaurants every once in a while.
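One simple way to explore this question: add a "boredom" penalty to the reward and see how UCB1 reacts. Everything here is a made-up assumption for illustration (the penalty size, the 7-day window, the restaurant means are not from the video).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tweak: an arm's reward drops the more often it was
# chosen in the last 7 days, modeling "boredom" with that restaurant.
K, DAYS, BOREDOM = 3, 300, 0.5
base_means = np.array([6.0, 7.5, 7.0])

def reward(arm, recent_visits):
    # Happiness falls by BOREDOM per visit within the last 7 days.
    return rng.normal(base_means[arm] - BOREDOM * recent_visits, 1.0)

counts, sums = np.zeros(K), np.zeros(K)
history, total = [], 0.0
for t in range(DAYS):
    if t < K:
        a = t  # initialize: visit each restaurant once
    else:
        bonus = np.sqrt(2 * np.log(t + 1) / counts)
        a = int(np.argmax(sums / counts + bonus))
    recent = history[-7:].count(a)
    r = reward(a, recent)
    counts[a] += 1
    sums[a] += r
    total += r
    history.append(a)

print("total happiness with boredom:", round(total, 1))
print("visits per restaurant:", counts.astype(int))
```

Because the penalized reward feeds back into the running means, the agent tends to rotate among restaurants instead of locking onto one.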
Kindly, also upload a video about Thompson Sampling as well! Exam in 4 days
nice :)
Hi, MAB seems to be inefficient when there are lots of arms. One way to calculate a q-value for multiple arms with a single model is contextual bandits; could you explain how a contextual bandit does this? I can't understand how one model outputs q-values for multiple arms.
Wouldn't the averages have to be within a specific range (e.g. [0, 1])? Based on the explanation in the video, if the means are on the order of thousands, the bound would have practically no effect on the decision. Please correct me if this is wrong. Thanks!
The last option (n=100) is akin to real life. There are so many things to do and choose from in a short time. Exploiting is a better strategy to reduce regret: make the most of what you've got!
One additional question: can this be solved as an optimization problem?
Can you make a video on Contextual Bandit
cannot believe
In real-world problems, the state space will be very big and we will not get enough time to explore all possible states. In such cases, UCB1 should perform better than exploitation.
To use Hoeffding's inequality, the rewards need to be bounded. Why do we assume that here?
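This question and the earlier one about the scale of the averages point at the same issue: Hoeffding's inequality assumes rewards lie in a known bounded range. A common practical fix is to assume bounds [lo, hi] (e.g. a 0–10 happiness scale, which is an assumption here, not something stated in the video) and rescale rewards to [0, 1] before computing the UCB bonus, so the mean and the bonus are on comparable scales:

```python
import numpy as np

# Assumed known reward bounds (illustrative: a 0-10 happiness scale).
lo, hi = 0.0, 10.0

def ucb_score(reward_sum, n_pulls, t):
    """UCB1 score with the empirical mean rescaled into [0, 1]."""
    mean = reward_sum / n_pulls
    scaled_mean = (mean - lo) / (hi - lo)      # now in [0, 1]
    bonus = np.sqrt(2 * np.log(t) / n_pulls)   # Hoeffding-based bonus
    return scaled_mean + bonus

print(ucb_score(reward_sum=45.0, n_pulls=6, t=30))
```

Without this rescaling, means in the thousands would indeed swamp the bonus term and the algorithm would behave almost like pure exploitation.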
after seeing this video I decide not to continue exploration
You explained everything in a hurry; by the time I reached the end of the video, I had already forgotten what you said at the start.
And watching it again and again isn't helping either.
Please put the other formulas on the whiteboard as well and work through a calculation manually, so the ideas and concepts have time to sink in.
Rushing to the end won't help the learners.