Michael Nef
  • Videos: 4
  • Views: 6,534
Sobolev Preconditioning
Views: 423

Videos

Inverse Knapsack Problem
577 views · 1 year ago
In this video I describe the problem of inverse optimisation. I then present the inverse knapsack problem, followed by an algorithm to solve this inverse problem. Inverse optimisation is a modern area of research with many new developments. The algorithm presented in this video is based on the paper "The inverse {0,1}-knapsack problem: Theory, algorithms and computational experiments" (Roland, e...
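Roughly speaking, the inverse problem here asks how little the profit vector must change for a given feasible selection to become optimal. In the profit-modification, infinity-norm form this can be written as (generic notation, not necessarily the notation used in the video):

    \min_{d \in \mathbb{R}^n} \; \|d\|_\infty \quad \text{s.t.} \quad y \in \arg\max \{ (p + d)^\top x : w^\top x \le C,\; x \in \{0,1\}^n \}

where p are the item profits, w the item weights, C the knapsack capacity, and y the target solution that should be made optimal.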
Why the Integers and Rationals aren't Isomorphic (Intuition and Proof)
224 views · 2 years ago
Laplace-Beltrami Operator Intuition
5K views · 2 years ago
What is the Laplace-Beltrami operator? This short presentation outlines the fundamental intuition behind the construction and definition of the Laplace-Beltrami operator. I do not cover how to do actual computations with the Laplace-Beltrami operator in this video, and I also don't cover the divergence & gradient structure of the operator, both of which are extremely important to underst...

Comments

  • @薛常宏
    @薛常宏 9 months ago

    magnetic voice! Thanks for your explanation!

  • @yingjianwang3045
    @yingjianwang3045 9 months ago

    What a cute deck of hand-drawn slides! Thanks for showing the intuition behind the Laplace-Beltrami operator.

  • @royvelich6279
    @royvelich6279 10 months ago

    Any reference for literature where they use this derivation?

  • @abutalibnuramatov2086
    @abutalibnuramatov2086 11 months ago

    the topic is completely unsolved

  • @ZiwenGu
    @ZiwenGu 1 year ago

    Thanks, a very nice and brief introduction! After watching the last part three times, I understand the intuitive meaning of this operator.

  • @hexeldev
    @hexeldev 1 year ago

    great video, really needed more than 480p though

  • @guilhermecoelho5202
    @guilhermecoelho5202 1 year ago

    Great introduction! Thank you!

  • @blaskowic5366
    @blaskowic5366 1 year ago

    great explanation

  • @gabrielorlanski7185
    @gabrielorlanski7185 1 year ago

    Wow good video! What a good cursor!

  • @essam_ly
    @essam_ly 1 year ago

    Amazing video bro, I'm coming from Neo😁 good luck with your channel.

  • @hamzahussain6619
    @hamzahussain6619 1 year ago

    Hey Michael, fantastic video as always. Can you discuss any applications of Dirichlet energy minimization in machine learning or data science?

    • @michael-nef
      @michael-nef 1 year ago

      Thanks for the comment, Hamza. It's hard to say. I think it's unlikely you'll find many particularly useful direct applications of Dirichlet energy minimisation in data science, but the general problem of energy minimisation can be used to solve a whole range of tasks. Check out this video for an example: ruclips.net/video/-uXFYpVumh4/видео.html It's also possible you could use gradient flows as a theoretical tool to understand neural networks; see for example this paper from Michael Bronstein's lab on GNNs: arxiv.org/pdf/2206.10991.pdf

  • @kormm
    @kormm 1 year ago

    You can optimise me inversely anytime ;)

  • @ash_ithape
    @ash_ithape 1 year ago

    Love the lightbulb at 8:32

  • @hamzahussain6619
    @hamzahussain6619 1 year ago

    Hi Michael, fantastic video. I just had a question regarding Inverse Optimization. Are there any limitations or assumptions associated with inverse optimization that researchers should be aware of when applying it to different scenarios?

    • @michael-nef
      @michael-nef 1 year ago

      Thanks for the comment, Hamza. To address your question: it depends on the particular scenario you are applying inverse optimisation to. In this example I show an algorithm for solving the inverse knapsack problem with the infinity-norm penalty. Depending on the application, it may be undesirable to use the infinity norm to measure the distance from your estimated vector y, but that is really an implementation detail and could be changed to suit the problem. In more general inverse optimisation problems, it's unlikely that you will have a perfect understanding of the constraints (i.e. the forwards feasible set). The consequence is that you also have to estimate the constraints in some way, which could mean the forwards feasible set ends up being empty (and therefore the inverse problem won't work either). Typically, inverse optimisation problems are also more computationally expensive than their forwards counterparts, since you usually solve the forwards problem during the running of your inverse algorithm. At the end of the video you can see how the time complexity of this IKP-infty algorithm is O(nC * log kmax), whereas the forwards problem is just O(nC). So that's another consideration.
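
A minimal sketch of how an O(nC * log kmax) structure like this can arise, assuming integer profits and a profit-modification model; this is only illustrative and not necessarily the exact IKP-infty algorithm from the video or the Roland et al. paper:

    def knapsack_opt(values, weights, capacity):
        # Classical forward 0/1 knapsack DP: O(nC) time.
        best = [0] * (capacity + 1)
        for v, w in zip(values, weights):
            for c in range(capacity, w - 1, -1):  # go downwards so each item is used at most once
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    def min_linf_budget(values, weights, capacity, y, k_max):
        # Smallest integer k such that the feasible selection y (a list of 0/1 flags) can be
        # made optimal by changing each profit by at most k (assumes some k <= k_max works).
        def feasible(k):
            # Most favourable perturbation of size k: raise chosen profits, lower the rest.
            q = [v + k if take else max(v - k, 0) for v, take in zip(values, y)]
            y_value = sum(qi for qi, take in zip(q, y) if take)
            return knapsack_opt(q, weights, capacity) <= y_value
        lo, hi = 0, k_max
        while lo < hi:  # O(log k_max) iterations, each doing one O(nC) forward solve
            mid = (lo + hi) // 2
            if feasible(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

Each feasibility check runs one forward solve, and the binary search over the budget contributes the log kmax factor, matching the complexity quoted above.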

  • @salmonfeedforward
    @salmonfeedforward 1 year ago

    off topic but the dog is impressively well drawn

  • @kemijarks
    @kemijarks 1 year ago

    We need more

  • @taa805
    @taa805 2 years ago

    Thanks a lot! Very well explained

  • @keenancrane
    @keenancrane 2 years ago

    Love your drawings!

  • @ARBB1
    @ARBB1 2 years ago

    Very well made video, but the papers could be clearer

  • @ManishPatel-qy5dd
    @ManishPatel-qy5dd 2 years ago

    Thanks. This was a very solid and strong presentation. I hope you can make a further presentation with an example of how this Laplacian acts on a function.