This is called teaching with the highest standards
You're a good teacher, man! Too bad only very few academics can explain things with the clarity and simplicity you do.
I appreciate that!
One of the best video explanations I have seen on Data science so far. Please keep up the good work Ritvik. Thanks a lot!!
Most welcome!
Choosing a place for dinner will never be the same again...your videos are fantastic, man! I was so frustrated earlier today because I simply couldn't get a grip on the UCB algorithm. Now, I am more than happy not only because I finally understood it (at least the intuition behind it), but also because I have a name for one of the dominating stories of my life (exploration - exploitation - dilemma). You, sir, are one of the most amazing teachers I ever experienced!
I was stuck on bandit algorithm for a day before I found your video. Excellent work!
Thanks!
You simply rock 👍Your teaching style, way of explaining complex things in such a simpler fashion makes learning much easier and faster. Wonderful.
I really appreciate your videos. i’m taking a course on machine learning and a/b testing and after every lesson I come watch your videos to actually understand what I just learned.
Love this video man. Just the simple message the viewer gets that you're here to help them and break down higher, abstract concepts into simpler terms they can grasp is incredibly reassuring. Even if I failed to understand any given part as a student I'd go back over and over with the confidence you're willing and able to help me get there eventually. Even if this channel isn't around forever never stop sharing your knowledge.
I've watched loads of your videos and it's given me so much clarity with so many different data science concepts. You're a really great teacher, hope you keep posting videos and hope your channel keeps growing!
This is very, very well explained. Concise, yet conversational. Excellent stuff.
The best video explanation I have seen so far. Could not stop paying attention. Thank you!
Glad it was helpful!
this is too good Ritvik. Congrats you made learning UCB easier
So great, clear my doubt completely. Please keep doing this!!
The best math-computer-science instructor online. Much appreciated
Thank you, brother! You are very good at explaining and giving the right information. Respect!
BEST EXPLANATION EVER.
Thank you so much, Ritvik!
The thing I love the most about your videos is the perfect balance between intuition, theory and matching them to results. Keep going!
If you have a Patreon or equivalent account, I'd be honored to support you in this terrific journey of yours.
I appreciate that!
Agree!
Your ability to communicate difficult concepts using story telling is unparalleled.
Your videos have cleared up my concepts over the years. Please make a playlist on Reinforcement Learning.
Thank you, you're a talented teacher. You explained it very well and clearly.
thanks a lot for these multi-armed bandit videos!
spent ages trying to figure this stuff out, your explanations have helped a lot
Thank you :-)
This is such a great explanation! Thank you!
Warning everybody... very addictive videos... I just can't stop watching one after another. Fantastic job!!!
Thank you so much for also providing the link to the Hoeffding's inequality! Most other sources for this just skip the theory which I dislike since I would like to understand this algorithm.
love the way you explain by examples!
better than my professor thank god i found your video, thank you very much!!
Made it easy. Thanks for teaching this and being clear as day.
The explanation makes the concepts very clear.
thanks!
This is such a good explanation. Brilliant.
Glad you think so!
Amazing! Thanks a lott!!
I like your videos dude. Thank you for creating them!
Glad you like them!
Ritvik you are a pedagogical GOD
Came here to get better at picking restaurants but stayed for the data science teaching!
Woo!
while watching this vid, i unconsciously started nodding!!!
You are a great teacher indeed.
This is great. You should definitely continue with reinforcement learning applications!!!
Always the best 👌 I hope you design a RL course one day. It will definitely be one of the best🌝
This was an excellent video. Thanks.
Glad it was helpful!
The video was very useful, Thank you !
That's an amazing explanation!
Thank you very much..you made it very easy to understand
You are welcome!
Wonderful explanation
Glad you think so!
great explanation bro!
In real-world problems, the state space will be very big and we will not have enough time to explore all possible states. In such cases, UCB1 should perform better than pure exploitation.
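For anyone who wants to poke at this trade-off, here is a minimal UCB1 sketch in Python. The restaurant means, noise level, and horizon are all made-up numbers, not the exact setup from the video:

```python
import math
import random

random.seed(42)

# Hypothetical setup: true mean happiness per restaurant (made-up numbers).
TRUE_MEANS = [6.0, 7.5, 5.0, 8.0]
HORIZON = 300

def pull(arm):
    # One day's happiness: true mean plus Gaussian noise.
    return random.gauss(TRUE_MEANS[arm], 1.0)

def run_ucb1(horizon):
    n_arms = len(TRUE_MEANS)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once first
        else:
            # Pick the arm with the highest mean-plus-exploration-bonus.
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

total_happiness, visit_counts = run_ucb1(HORIZON)
print(total_happiness, visit_counts)
```

UCB1 concentrates visits on the best-looking arm while the shrinking bonus keeps the others from being abandoned entirely, which is exactly why it scales better than exploit-only when there are many options.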
Nicely explained, Thanks.
Awesome explanation. Thanks a lot
Keep up the good work !!
Thank you so much you explained that very well
Your videos just keep getting better! Thank you very much. Is the restaurant's happiness score equivalent to the reward delivered?
Mindblowing!
Another very good video.
Glad you enjoyed it
That's probably Hoeffding's inequality. Maybe the name sounds strange, but it nevertheless deserves to be spelled correctly!
Hey, thanks a lot for the explanations! Maybe you can make a third video about random and directed exploration. There are a lot more models like UCB :)
This is super cool! Thanks :)
Clear explanation
You’re the goat
First, i love your channel!
Hi. I have just watched a couple of your videos and couldn't resist the temptation to subscribe and binge on all the materials. Very impressed by the intuitiveness of your approach. May I ask if you have or recommend any materials to intuitively understand epsilon automata machines and CSSR algorithm. Utterly grateful for your reply.
Are there any models that factor in staleness? I would imagine going to the same restaurant 297 days in a row would be pretty boring, so the optimal strategy should include the other restaurants every once in a while.
great tutorial brother can you make an lecture on ucb1 derivation
Very helpful!! Just want to know: if we don't have any prior info about the happiness distribution of each restaurant, how do we use this UCB algorithm? In a total cold-start problem, what parameters will help decide the happiness distribution of a restaurant in the city?
very nice lecture
Hi, first of all, very well put together video!
One question: in exploitation approach, in the example, we visited each restaurant once (n times in total) and then continued with the best observed one for the rest of 300 - n days, right?
Also, I find it quite surprising that exploitation only outperforms UCB1 for larger n, intuitively it seems that exploitation only approach is less stable/more up to chance (may perform worse than even exploration only). I guess the second term based on Hoeffding's inequality really punishes UCB1 in the example 🤔
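That stability intuition can be checked with a quick simulation of the "visit each restaurant a few times, then commit" strategy. All the numbers below are made up (Gaussian happiness, assumed means), so this is a sketch of the idea rather than the video's exact experiment:

```python
import random
import statistics

random.seed(1)

TRUE_MEANS = [6.0, 7.5, 5.0, 8.0]  # made-up happiness means
HORIZON = 300
NOISE_SD = 2.0

def explore_then_commit(n_explore_each):
    """Visit each restaurant n_explore_each times, then commit to the
    best observed average for the remaining days."""
    n_arms = len(TRUE_MEANS)
    sums = [0.0] * n_arms
    total = 0.0
    for arm in range(n_arms):
        for _ in range(n_explore_each):
            r = random.gauss(TRUE_MEANS[arm], NOISE_SD)
            sums[arm] += r
            total += r
    best = max(range(n_arms), key=lambda a: sums[a] / n_explore_each)
    for _ in range(HORIZON - n_arms * n_explore_each):
        total += random.gauss(TRUE_MEANS[best], NOISE_SD)
    return total

# With one visit per arm the committed choice is often wrong, so totals
# vary a lot across replications; more exploration stabilises them.
runs1 = [explore_then_commit(1) for _ in range(200)]
runs5 = [explore_then_commit(5) for _ in range(200)]
print(statistics.stdev(runs1), statistics.stdev(runs5))
```

Committing after a single noisy visit per arm does make the total much more "up to chance", which matches the commenter's intuition; whether it beats UCB1 on average depends on the noise level and the gap between the best arms.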
Wouldn't the averages have to be within a specific range (e.g. [0, 1])? Considering the explanation in the video, if the means are on the order of thousands, the bound would have practically no effect on the decision. Please correct me if this is not correct. Thanks!
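Right: the standard UCB1 analysis assumes rewards bounded in [0, 1], and with means in the thousands an unscaled bonus of about 1 would indeed be negligible. A common fix is to rescale rewards into [0, 1] using a known range (equivalently, multiply the bonus by the range). A minimal sketch, assuming the range endpoints are known:

```python
import math

def ucb1_score(mean_reward, n_pulls, t, r_min=0.0, r_max=1000.0):
    """UCB1 score for rewards known to lie in [r_min, r_max].
    Rescaling the mean into [0, 1] keeps the Hoeffding-based bonus
    on the same scale as the means, so it can actually influence
    the decision."""
    scaled_mean = (mean_reward - r_min) / (r_max - r_min)
    bonus = math.sqrt(2.0 * math.log(t) / n_pulls)
    return scaled_mean + bonus

# A mean of 700 on a [0, 1000] scale becomes 0.7; the bonus (~1.36 here)
# is now comparable in size rather than vanishingly small.
print(ucb1_score(700.0, 5, 100))
```

If the range isn't known, sub-Gaussian versions of the bound (with a variance parameter instead of a hard range) are the usual alternative.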
Nice!! thank you
No problem!
PERFECT !!!
perfect!
Kindly upload a video about Thompson Sampling as well! Exam in 4 days
Can you make a video on Contextual Bandit
Last option (n=100) is akin to real life. There are so many things to do and choose from in a short time. Exploiting is a better strategy to reduce regret - Make the most of what you got !
thank you!
Hi, MAB seems to be inefficient when there are lots of arms. One way to calculate a q-value for multiple arms using a single model is contextual bandits; could you explain how a contextual bandit does this? I cannot understand how one model outputs q-values for multiple arms.
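One common answer (a sketch with made-up sizes, not the only design): a single linear model can score every arm if the arm identity is encoded into the feature vector, e.g. a block one-hot encoding. That is equivalent to a separate linear model per arm packed into one weight vector, which is why "one model" can still output one score per arm:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ARMS, D = 4, 3  # made-up sizes: 4 arms, 3 context features

def features(context, arm):
    """Encode (context, arm) as one vector: the context is copied into
    the block belonging to `arm`, and all other blocks stay zero."""
    x = np.zeros(N_ARMS * D)
    x[arm * D:(arm + 1) * D] = context
    return x

w = rng.normal(size=N_ARMS * D)  # one shared weight vector for all arms
context = rng.normal(size=D)

# The same model scores every arm; pick the argmax (optionally with an
# uncertainty bonus, as in LinUCB).
scores = [float(w @ features(context, a)) for a in range(N_ARMS)]
best_arm = int(np.argmax(scores))
print(best_arm, scores)
```

Sharing features across arms (instead of disjoint blocks) is what lets contextual bandits generalise across many arms efficiently.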
One additional question: can this be framed and solved as an optimization problem?
thanks
nice :)
To use Hoeffding's inequality, the rewards need to be bounded. Why does that hold here?
cannot believe
after seeing this video I decided not to continue exploring
Everything is explained in a hurry; by the time I reached the end of the video, I had already forgotten what you said at the start, and watching again and again is not helping either. Please put the other formulas on the whiteboard as well and work through a calculation manually, so the ideas and concepts have time to sink in. Racing to the end won't help the learners.