Machine learning - Bayesian optimization and multi-armed bandits

  • Published: 7 Feb 2025

Comments • 43

  • @subtlethingsinlife
    @subtlethingsinlife 1 year ago +7

    He is a hidden gem. I have gone through a lot of his videos; they are great in terms of removing jargon and bringing clarity.

  • @rikki146
    @rikki146 2 years ago +1

    Learning advanced ML concepts for free! What a time to be alive. Thanks a lot for the vid!

  • @Старкрафт2комедия
    @Старкрафт2комедия 8 years ago +32

    wow, this professor is such a great teacher. A model for all profs!

    • @always-stay-positive5187
      @always-stay-positive5187 8 years ago

      Please explain the first colourful plot he illustrates with.

    • @ibrararshad1650
      @ibrararshad1650 7 years ago +3

      You gotta watch his previous videos on Gaussian processes to understand this lecture. Basically, you need to understand Gaussian processes first.

    • @bingchaowang6073
      @bingchaowang6073 4 years ago

      @@always-stay-positive5187 Using the acquisition function to locate the point we want to evaluate next, I guess.

  • @S25plus
    @S25plus 1 year ago

    Thanks, Prof. de Freitas, this is extremely helpful.

  • @emmanuelonyekaezeoba6346
    @emmanuelonyekaezeoba6346 2 years ago +1

    Very elaborate and simple presentation. Thank you.

  • @truptimohanty9386
    @truptimohanty9386 2 years ago +1

    This is the best video for understanding Bayesian optimization. It would be a great help if you could post a video on multi-objective Bayesian optimization, specifically on expected hypervolume improvement. Thank you

  • @JS-bo1ns
    @JS-bo1ns 3 years ago

    Thank you for providing excellent resources

  • @michaelcao9483
    @michaelcao9483 2 years ago

    Thank you! Really great explanation!!!

  • @SnoopingDope
    @SnoopingDope 5 years ago

    Finally found a nice class. Thank you very much.

  • @yuanyuan3056
    @yuanyuan3056 7 years ago

    Too good at explaining; I've never taken such detailed notes.

  • @DanHaiduc
    @DanHaiduc 11 years ago +12

    "heuristics" -> "terrorist sex"
    YouTube automatic captions are getting better :D

  • @hohinng8644
    @hohinng8644 2 years ago

    The use of notation at 23:00 is confusing for me

  • @taozhuo
    @taozhuo 5 years ago +1

    Change the playback speed to 1.25×. BTW, great lecture!

  • @ar_rahman_90
    @ar_rahman_90 7 years ago

    Thank you! Great lecture. Really enjoyed it.

  • @michaelmoore7568
    @michaelmoore7568 2 months ago

    Is the data that he's analyzing the sum of many different Gaussians?

  • @yuanyuan3056
    @yuanyuan3056 8 years ago +1

    Great explanation!

  • @abbasalili9057
    @abbasalili9057 2 years ago

    Awesome!!!

  • @michaelmoore7568
    @michaelmoore7568 2 months ago

    Why is he trying to maximize mean and variance?

  • @manoharg.h2993
    @manoharg.h2993 4 years ago +1

    Hi,
    If we have A = {0,1}, B = {0,1}, and C = {0,1,2}, the total number of combinations is 12. How can we reduce this using Bayesian optimization?
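    A minimal sketch of what that could look like, assuming a hypothetical objective over the 12 combinations and using a scikit-learn GP as the surrogate with a UCB acquisition: evaluate a few combinations, fit the surrogate, and let the acquisition decide which of the remaining combinations to try next.

      import itertools
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      # All 12 combinations of A in {0,1}, B in {0,1}, C in {0,1,2}.
      grid = np.array(list(itertools.product([0, 1], [0, 1], [0, 1, 2])), dtype=float)

      def objective(x):  # hypothetical expensive black box
          return -np.sum((x - np.array([1.0, 0.0, 2.0])) ** 2)

      rng = np.random.default_rng(0)
      idx = [int(i) for i in rng.choice(len(grid), size=3, replace=False)]  # initial design
      y = [objective(grid[i]) for i in idx]

      for _ in range(5):  # BO loop: fit surrogate, score the rest, evaluate the best
          gp = GaussianProcessRegressor(normalize_y=True).fit(grid[idx], y)
          mu, sigma = gp.predict(grid, return_std=True)
          ucb = mu + 2.0 * sigma          # optimistic score for every combination
          ucb[idx] = -np.inf              # never re-evaluate a known combination
          best = int(np.argmax(ucb))
          idx.append(best)
          y.append(objective(grid[best]))

      print("best combination found:", grid[idx[int(np.argmax(y))]])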

  • @jubintkm
    @jubintkm 7 years ago

    great teacher...

  • @HiteshParmar
    @HiteshParmar 11 years ago +1

    Hello Sir (Nando de Freitas), a really great lecture on this optimization method. I am a Computer Science student, and I have gone through your other lectures on Random Forests as well. I am working on a research project based on automatic tuning of the hyperparameters in Random Forests. This method is really great for that, but I was wondering: are there any other optimization methods available to tune the hyperparameters? It would be a really great help from your side, Sir.

  • @rajupowers
    @rajupowers 5 years ago

    Thompson sampling @59:00

  • @kapilagrawal5885
    @kapilagrawal5885 6 years ago +4

    Say we have n bandits labelled from 1 to n. If on the x-axis I take 1 to n and on the y-axis I take their corresponding rewards, then I don't think it would be safe to say that my function is smooth. What are the alternatives when you don't have smooth functions?

    • @zhouxinning7284
      @zhouxinning7284 4 years ago +2

      I think when your actions are discrete and your utility function over actions f(a) may not be smooth, a GP might not be your best choice.
      Instead, you can model a distribution for every action, e.g. using a beta distribution for each bandit.

    • @seeungeheuer7083
      @seeungeheuer7083 3 years ago

      @@zhouxinning7284 Though the beta distribution is, as far as I understand, only a good choice for Bernoulli bandits, where you either win or lose, isn't it?
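      For the Bernoulli case raised here, a minimal Thompson-sampling sketch along the lines suggested above, keeping one Beta posterior per bandit (the win probabilities are made up):

        import numpy as np

        rng = np.random.default_rng(0)
        true_p = np.array([0.2, 0.5, 0.7])    # hypothetical win probabilities
        wins = np.ones(3)                      # Beta(1, 1) uniform prior per arm
        losses = np.ones(3)

        for _ in range(1000):
            theta = rng.beta(wins, losses)     # one draw from each arm's posterior
            a = int(np.argmax(theta))          # play the arm that looks best this round
            reward = rng.random() < true_p[a]  # Bernoulli win/lose outcome
            wins[a] += reward                  # conjugate Beta-Bernoulli update
            losses[a] += 1 - reward

        print("posterior means:", wins / (wins + losses))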

  • @linweili9238
    @linweili9238 5 years ago

    A stupid question: how do you do x_{n+1} = argmax u(x | D)? Just randomly choose x and see which one gives the biggest u(x | D)? Essentially, how do you generate the curve of the acquisition function? Thanks!
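    In practice u(x | D) is cheap to evaluate (it only needs the GP's posterior mean and variance), so the curve is often generated by evaluating u on a dense grid or a set of random candidates and taking the argmax; multi-start local optimizers are also common. A minimal grid-based sketch, assuming hypothetical 1-D data and a PI acquisition:

      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor

      X = np.array([[0.1], [0.4], [0.9]])    # hypothetical observations D
      y = np.array([0.2, 0.8, 0.1])
      gp = GaussianProcessRegressor().fit(X, y)

      xs = np.linspace(0, 1, 1000).reshape(-1, 1)   # dense grid of candidate x
      mu, sigma = gp.predict(xs, return_std=True)
      u = norm.cdf((mu - y.max() - 0.01) / np.maximum(sigma, 1e-12))  # PI(x)

      x_next = xs[np.argmax(u)]              # x_{n+1} = argmax_x u(x | D)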

  • @leolaranjeiragomes
    @leolaranjeiragomes 8 years ago

    Thanks!

  • @amiltonwong
    @amiltonwong 11 years ago

    It seems there was some content presented after 1:20:30.

  • @NirandikaWanigasekara
    @NirandikaWanigasekara 10 years ago +1

    In the PI acquisition function, the Phi function has the variance in the denominator. So to maximise Phi(x), the mean needs to increase and the variance needs to decrease, right? But in the explanation at 40:31 the prof says high variance is needed, since we are trying to maximise the area under the curve. Can someone clear this up for me and show a way to connect the equation with the graph explanation, please?

    • @dustintranv
      @dustintranv 9 years ago

      Nirandika Wanigasekara There's an error in the slides: the probability for a right tail should be 1 minus the CDF. This corresponds to wanting the CDF to be close to zero, i.e., mean close to (mu^+ + epsilon) and variance as large as possible.

    • @pklalu
      @pklalu 9 years ago +3

      +Dustin Tran +Nirandika Wanigasekara I believe the equation is correct, as 1 - CDF(x) = CDF(-x) for the Gaussian distribution. Higher variance is justified as long as mu(x) is less than (mu^+ + eps), but a lower variance might be preferred when mu(x) > (mu^+ + eps), which is counterintuitive.
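      A quick numerical check of that last point, writing PI(x) = Phi((mu(x) - mu^+ - eps) / sigma(x)) with scipy's normal CDF (the identity 1 - Phi(z) = Phi(-z) is why the slide's form is equivalent; the numbers here are made up):

        from scipy.stats import norm

        mu_best, eps = 1.0, 0.01
        def pi(mu, sigma):  # PI = Phi((mu - mu^+ - eps) / sigma)
            return norm.cdf((mu - mu_best - eps) / sigma)

        print(pi(0.5, 0.1), pi(0.5, 1.0))  # mu below mu^+ + eps: larger sigma raises PI
        print(pi(1.5, 0.1), pi(1.5, 1.0))  # mu above mu^+ + eps: smaller sigma raises PI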

  • @looper6394
    @looper6394 7 years ago

    Referring to GP-UCB (around 57 min): do you discretize the x domain and then search for the optimum (argmin GP-UCB(x)), or do you use a gradient-based optimizer on GP-UCB(x)? If you use the second option, how do you calculate the gradient of GP-UCB(x)? This should be analytically tractable.
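    Both options are used in practice. For the gradient route, the GP posterior mean and standard deviation are differentiable in closed form, so the gradient of GP-UCB is indeed analytically tractable, and multi-start gradient-based optimization (e.g. L-BFGS) on the acquisition is common. A minimal sketch of the discretized route, assuming hypothetical 1-D data and following the lecture's convention of maximizing the acquisition (kappa = 2):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      X = np.array([[0.2], [0.5], [0.8]])         # hypothetical observations
      y = np.array([0.1, 0.9, 0.3])
      gp = GaussianProcessRegressor().fit(X, y)

      xs = np.linspace(0, 1, 512).reshape(-1, 1)  # option 1: discretize the domain
      mu, sigma = gp.predict(xs, return_std=True)
      x_next = xs[np.argmax(mu + 2.0 * sigma)]    # optimum of GP-UCB(x) = mu + kappa*sigma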

  • @femtogary3723
    @femtogary3723 6 years ago

    Hello, Professor Nando, I have some questions about the smoke-simulation auto-optimization. Since Bayesian optimization is about picking better places for the next round, and I see the user can choose many places, I think there should be many good candidates. Also,
    the objective function is still not explicit, it exists only in the user's mind? So you mean we use Bayesian optimization to approximate the function in the user's mind? I also checked the main open-source libraries out there; optunity seems quite nice and has a very easy API for an end user like me. It suggests using Particle Swarm Optimization or the Tree-structured Parzen Estimator to optimize, so can PSO and TPE also do things like in the video, letting the user choose many candidates? Is it possible? Thanks

  • @jakobbarger1260
    @jakobbarger1260 7 years ago +4

    Professor de Freitas has a neat paper on this very topic. Do yourself a favor and grab the PDF at arXiv:1012.2599.

  • @glendepalma7057
    @glendepalma7057 6 years ago

    Good to know slot machines always pay out the same amount and there's no variability.

  • @always-stay-positive5187
    @always-stay-positive5187 8 years ago

    I don't understand those plots. They don't look like Gaussians at all.

    • @shobhithathi9278
      @shobhithathi9278 6 years ago

      Always-Stay-Positive Actually, they aren't supposed to! The Gaussian process induces a Gaussian prior over all possible functions. What's being plotted is the mean function (the function that gives the mean at a particular point). Does that make sense?
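      A small sketch that makes this concrete, using scikit-learn's default RBF prior: each curve in such plots is a whole function drawn from (or summarizing) the GP, and the Gaussian lives in the vertical slice of values at each fixed x, not in the shape of the curves.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        gp = GaussianProcessRegressor()   # zero-mean GP prior, RBF kernel by default
        xs = np.linspace(0, 5, 200).reshape(-1, 1)

        samples = gp.sample_y(xs, n_samples=3, random_state=0)  # three random functions
        mu, sigma = gp.predict(xs, return_std=True)             # mean curve and +/- band

        # Each column of `samples` is one wiggly function; at any fixed row i (a fixed x),
        # the values samples[i, :] are draws from a 1-D Gaussian with mean mu[i], std sigma[i].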

  • @eduardocesargarridomerchan5326
    @eduardocesargarridomerchan5326 4 months ago

    A tutorial on Bayesian optimization in Spanish, in case anyone is interested: ruclips.net/video/nNRGOfneMdA/видео.html

  • @IgorAherne
    @IgorAherne 6 years ago +2

    I am becoming smarter ...muahaha