The Kolmogorov-Arnold Theorem

  • Published: Jan 26, 2025

Comments • 52

  • @Alteaima · 1 month ago +23

    First, I hope you see this comment: we need a video on graph neural networks, and we can't find anyone who breaks a topic down to this degree of simplicity. Thanks for your help; we appreciate your efforts 🎉

    • @SerranoAcademy · 1 month ago +6

      Thank you so much! Great suggestion! I'm actually working on an explanation of GNNs, using an example with some people who are friends, where some like sports and some like music. Hoping to get it out pretty soon!
      If you have any other suggestions, please feel free to throw them in; I'm always looking for good topics to learn and explain. :)

    • @Alteaima · 1 month ago

      @SerranoAcademy thank you again, I hope you're doing great

    • @revimfadli4666 · 1 month ago

      @SerranoAcademy can you please link it to chemistry GNNs and modular agents by Deepak Pathak?

  • @trantandat2699 · 1 month ago +9

    One of the best teachers I have seen so far. He turns a complicated thing like the Kolmogorov-Arnold Theorem into a very simple explanation.

    • @SerranoAcademy · 1 month ago

      @trantandat2699 thank you for your kind words, I'm glad you enjoyed it! :)

  • @Atlas92936 · 14 days ago

    Luis, I have the utmost respect for you. I've been keeping up with your content on various platforms (Coursera, LinkedIn, YouTube), and I really think you're a great human being. I related to your story about starting in mathematics and struggling as a student. Now you are well known in the ML community and make math more accessible for everyone. You are also conscious of social issues, which is an overlooked quality. You're clearly an accomplished hard worker, yet humble. Thank you for the inspiration, always.

    • @SerranoAcademy · 14 days ago +1

      Thank you for such a kind message. It's a real honor to be part of your learning journey, and to share our desire for a better world. :)

  • @znglelegendaire3005 · 13 days ago

    You are the best professor I know in the world at the moment! Thank you very much for the explanations.

  • @jamesmcadory1322 · 1 month ago +2

    This is one of the best educational videos I’ve ever seen. It went at a good pace, had helpful visuals, and I feel like I understand the main idea of this theorem now. Thank you for the video!

  • @frankl1 · 1 month ago +2

    Best explanation of KAT and KAN with intuitive drawings, very much appreciated

  • @shivakumarkannan9526 · 15 days ago

    Such a brilliant theorem and very clear explanation using diagrams.

  • @MoreCompute · 6 days ago

    Luis! What a great video you've made. Thank you for making it.

  • @BananthahallyVijay · 1 month ago

    🎉🎉🎉🎉 The most lucid video I've seen on why, in theory, you need only one hidden layer in a NN. A big thanks to the content creator. ❤

  • @Gamingforfunpeace · 1 month ago +3

    Honestly, this is amazing. Could you please create a 5-part video series with these visual explanations for the Langlands proof that just came out (you know which one)...
    You have a gift for mathematical storytelling; I absolutely loved the visualizations... That is what math is about... the elegance of visual storytelling... Would love to see your visualization of that proof.

  • @sahil_shrma · 1 month ago +2

    Wow! The everything-in-two-layers thing and the summation part seem fantastic. Thank you, Luis! 💚

    • @SerranoAcademy · 1 month ago

      @sahil_shrma thank you so much, I'm glad you liked it! I was pretty amazed too when I first saw that the theorem implies two-layer universality. :)

  • @sohaibahmed9165 · 18 days ago

    Thanks bro! You made it really simple. Highly recommended ❤

  • @jasontlho · 1 month ago +1

    beautiful explanation

  • @Sars78 · 1 month ago

    This IS the most important theorem for appreciating the power of DNNs in general.

  • @Harshtherocking · 1 month ago

    I tried reading this paper in June 2024 and couldn't understand much of it. Thanks, Luis, for the amazing explanation.

  • @RasitEvduzen · 1 month ago

    Thanks for your beautiful explanation. I think the next video should be about Automatic Differentiation.

  • @sunilkumarvengalil2305 · 14 days ago

    Nice explanation! Thank you!

  • @junborao8910 · 16 days ago

    Really helpful video. I really appreciate it.

  • @behrampatel3563 · 1 month ago

    Luis, I wish you health and happiness so you can continue to educate those of us who are way past our academic prime. For many reasons I never had the luxury of studying engineering. Khan Academy, 3blue1brown, and you made education accessible and approachable. Thank you, live long and prosper, my friend. ❤

  • @djsocialanxiety1664 · 1 month ago +3

    awesome explanation

    • @SerranoAcademy · 1 month ago

      Thank you, I'm glad you like it!

    • @djsocialanxiety1664 · 1 month ago +1

      @SerranoAcademy any chance of a video that explains the training of KANs?

    • @SerranoAcademy · 1 month ago

      @djsocialanxiety1664 this video has the architecture: www.youtube.com/watch?v=myFtp58U
      In there I talk a little bit about the training, which is mostly finding the right coefficients of the B-splines using the usual gradient descent. AFAIK, the training is very analogous to that of a regular neural network, which is why I only mention it briefly, but if there's more to it, I may make another video. If you know of any nuances in the training that could be explored, please let me know. Thanks!
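
      As a rough illustration of that last point, here is a minimal sketch (not the video's or the paper's actual code; the knot layout, toy target, and learning rate are illustrative assumptions): a single KAN edge function parameterized as a linear combination of cubic B-spline basis functions, with its coefficients fit by plain gradient descent on the mean squared error.

      import numpy as np
      from scipy.interpolate import BSpline

      # Cubic B-spline basis on [0, 1]; the knot layout is an illustrative choice.
      degree, n_basis = 3, 8
      knots = np.concatenate(([0.0] * degree,
                              np.linspace(0.0, 1.0, n_basis - degree + 1),
                              [1.0] * degree))
      basis = [BSpline.basis_element(knots[k:k + degree + 2], extrapolate=False)
               for k in range(n_basis)]

      def edge(x, coeffs):
          # phi(x) = sum_k coeffs[k] * B_k(x): linear in the trainable coefficients.
          B = np.nan_to_num(np.stack([b(x) for b in basis], axis=1))  # (N, n_basis)
          return B @ coeffs, B

      # Toy target for this single edge: phi(x) close to sin(2*pi*x).
      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 1.0, 256)
      y = np.sin(2 * np.pi * x)

      coeffs, lr = np.zeros(n_basis), 0.5
      for _ in range(2000):
          pred, B = edge(x, coeffs)
          grad = B.T @ (pred - y) / len(x)   # gradient of the mean squared error
          coeffs -= lr * grad                # the usual gradient-descent step

      print("final MSE:", np.mean((edge(x, coeffs)[0] - y) ** 2))

      A full KAN just has many of these edge functions, sums their outputs at each node, and updates all the spline coefficients together with the same kind of gradient step.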

  • @djsocialanxiety1664 · 5 days ago +1

    Could you maybe explain why it's "not so bad" that (x+y)^2 is still entangled? If that's not so bad, then what's the whole point of disentanglement in the first place?

  • @cathleenparsons3435 · 1 month ago

    This is excellent! Thanks so much, really helpful

  • @neelkamal3357 · 1 month ago

    crystal clear as always

  • @eggs-istangel4232 · 1 month ago +1

    Not that I want to look like the "oh, I think there is a mistake" kid, but at 8:33 shouldn't the first lower phi function (with respect to x_2) be phi_{1,2}(x_2) instead of phi_{2,1}(x_2)?

    • @SerranoAcademy · 1 month ago +1

      Thank you so much! Yes, you're absolutely right. And I think also in the first term, with \Phi_1, they should be \phi_{1,1}(x_1) + \phi_{1,2}(x_2).
      I changed it so many times, and it was so hard to get the indices right, lol...
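
      For reference, here is the two-input version of the representation written with that indexing convention (a sketch; the inner function \phi_{q,p} always acts on x_p, and the exact symbols on the slide may differ):

      \[
        f(x_1, x_2) \;=\; \sum_{q=1}^{5} \Phi_q\!\big( \phi_{q,1}(x_1) + \phi_{q,2}(x_2) \big)
      \]

      so the q = 1 term reads \Phi_1\big( \phi_{1,1}(x_1) + \phi_{1,2}(x_2) \big), which is exactly the corrected indexing above.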

  • @hayksergoyan8914 · 1 month ago

    Nice job, thanks. Have you checked how this works for prediction on time-series data compared to LSTM or ARIMA?

  • @alivaziri7843 · 22 days ago

    Thanks for the video! Are the slides available freely?

    • @SerranoAcademy · 19 days ago

      Thanks! Not yet, but I'll message here when they're out.

  • @akirakato1293 · 1 month ago

    So essentially you can train non-linear regression or boundary models without needing to expand the feature space by, for example, appending an x1*x2 column to the training set before performing the fit? I can see that it's computationally better for finding an approximate solution, and naturally less prone to overfitting, but how does the computational complexity hold up when the accuracy requirement is extremely high?

  • @SohaKasra · 1 month ago

    That was so fluent, as always ❤

  • @GerardoGutierrez-io7ss · 1 month ago

    Where can I see the proof of this theorem? 😮

  • @Pedritox0953 · 1 month ago

    Great video! Peace out

  • @jimcallahan448 · 1 month ago

    What about log(x) + log(y)?
    Of course, because you mentioned Kolmogorov, I assumed you were talking about probabilities.

    • @SerranoAcademy · 1 month ago

      @jimcallahan448 that's a good example. log(xy) is one that looks entangled, but it can be written as log(x) + log(y), so it's separable (i.e., a one-layer KA network).
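
      Spelled out in the theorem's notation (a small sketch, for x, y > 0, with the outer function taken to be the identity):

      \[
        f(x, y) \;=\; \log(xy) \;=\; \Phi\big(\phi_1(x) + \phi_2(y)\big),
        \qquad \Phi(u) = u, \quad \phi_1 = \phi_2 = \log
      \]

      so no second layer of univariate functions is needed for this particular f.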

  • @csabaczcsomps7655 · 1 month ago

    Amazing.

  • @colonelmoustache · 1 month ago

    This was so good, but I feel like there should be a nice matrix way to write this.
    Time to search deeper, I guess.
    Great topic, btw.

    • @SerranoAcademy · 1 month ago +1

      Thanks for the suggestion! They do have a matrix with the capital \Phi's, multiplied by another one with the lowercase \phi's, where "multiplication" is composition of functions rather than products. I was going to add it here, but it started getting too long, so I had to cut it; most other videos on the topic (plus the paper) have it.
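
      Schematically, it looks something like this for the two-input case (a sketch of that matrix-of-functions idea; "applying" a row means evaluating each entry on its argument and summing the results, rather than multiplying):

      \[
        f(x_1, x_2) \;=\;
        \begin{pmatrix} \Phi_1 & \Phi_2 & \cdots & \Phi_5 \end{pmatrix}
        \circ
        \begin{pmatrix}
          \phi_{1,1} & \phi_{1,2} \\
          \vdots     & \vdots     \\
          \phi_{5,1} & \phi_{5,2}
        \end{pmatrix}
        \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
      \]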

  • @brandonprescott5525 · 1 month ago

    Reminds me of node-based graphics software like Houdini or TouchDesigner.

  • @tomoki-v6o · 1 month ago

    I have an engineering degree, no PhD, and I am an ML enthusiast. How can I join research in this case? I don't want to work as a data scientist, because I like playing with math.

  • @AI_ML_DL_LLM · 1 month ago

    Great video! You will definitely go to heaven; see you there, but not soon :)

  • @sufalt123 · 1 month ago

    so coooooool

  • @moonwatcher2001 · 1 month ago

  • @tigu511 · 1 month ago

    Oh god!... Is the Spanish translation from an AI? It's really bad.