Deep Dive on Kolmogorov-Arnold Neural Networks (MLBoost Seminars 6)

  • Published: 6 Aug 2024
  • In this seminar, we had the pleasure of hosting Ziming Liu from MIT, who presented an insightful talk on Kolmogorov-Arnold Neural Networks (KANs). The seminar covered the key concepts, methodology, and potential applications of this neural network framework. During and after the presentation, a lively Q&A session let participants engage with Ziming on a range of related topics. A minimal code sketch of the KAN idea follows the chapter list below, for viewers who want to experiment after watching. We hope you find this recording informative and inspiring. Don't forget to like, share, and subscribe to our channel for more exciting seminars and discussions!
    #deeplearning #machinelearning #neuralnetworks
    00:00 - Welcome to Deep-Inquiry Seminar Series!
    00:43 - Introduction to Kolmogorov Arnold Neural Networks
    01:34 - Kolmogorov Arnold Representation Theorem
    05:17 - Mathematical Foundation of KANs
    09:27 - MLPs vs. KANs
    10:58 - Question: What is the Difference between KANs and RBFs?
    13:53 - Question: Can you Turn a KAN into an MLP?
    17:37 - Question: Are KANs More Flexible than RBFs?
    19:59 - Question: Does KAN Approximation Theorem Hold for Non-B-Spline Basis Functions?
    21:23 - Algorithmic Details
    22:23 - Question: How and When Are B-Spline Grid Points Refined?
    25:08 - Accuracy: Scaling of KANs
    28:06 - Question: Do the Scaling Laws Hold When the Input Domain is Unbounded or for Stochastic Input Functions?
    30:06 - Question: Given the Number of Parameters, How Does the Depth of a KAN Compare to the Depth of an MLP?
    32:49 - Question: Has the Effect of Noise on the Performance of KANs Been Studied?
    35:56 - Question: How Does KAN Accuracy Depend on Activation Functions? Are Polynomial Activations Recommended?
    38:01 - Interpretability: KANs for Scientific Discovery
    39:30 - Question: For PDEs, Does the Performance of KANs Depend on the Size of the Input Domain?
    40:56 - Interpretability Example
    41:46 - Communicating with KANs
    42:56 - A Math Example from Knot Theory: How KANs Can be Used as Collaborators
    46:51 - A Physics Example: Anderson Localization
    49:53 - What are KANs Good/Bad at?
    51:32 - Philosophy: KAN vs. MLP
    53:00 - Question: How Are KANs Loosely Related to the Kolmogorov-Arnold Approximation Theorem?
    55:10 - Question: Was There a Function that KANs Were Not Able to Approximate Using Only Two Layers?
    59:22 - Question: KANs and Continual Learning
    1:02:53 - Question: Do KANs Provide Symbolic Layers for AGI?
    1:14:44 - Are KANs Natural?
    1:16:12 - GitHub Repos
    1:16:31 - Thoughts
    1:18:42 - Question: In Symbolic Regression, How do You Determine if an Edge Function Corresponds to a Specific Function?
  • Science

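For viewers who want to experiment after watching: below is a minimal, self-contained sketch of a KAN-style layer in PyTorch. It is a simplification, not Ziming's implementation. In particular, the learnable edge functions here are linear combinations of fixed Gaussian radial basis functions rather than the adaptive B-splines discussed in the talk, and the toy target function is the standard example from the pykan repository. See the official library (KindXiaoming/pykan) for the real interface.

import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """One KAN-style layer: a learnable 1-D function on every edge,
    summed at each output node, i.e. y_j = sum_i phi_ij(x_i)."""
    def __init__(self, in_dim, out_dim, num_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        # Fixed RBF centers spanning the expected input range
        # (a stand-in for the paper's adaptive B-spline grid).
        self.register_buffer("centers", torch.linspace(x_min, x_max, num_basis))
        self.width = (x_max - x_min) / (num_basis - 1)
        # One coefficient vector per edge: in_dim * out_dim edge functions.
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, num_basis))

    def forward(self, x):
        # x: (batch, in_dim) -> basis values: (batch, in_dim, num_basis)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # Contract over input dimension and basis index: (batch, out_dim).
        return torch.einsum("bik,iok->bo", basis, self.coef)

# Fit f(x1, x2) = exp(sin(pi*x1) + x2^2), the toy target from the pykan examples.
model = nn.Sequential(KANLayer(2, 5), KANLayer(5, 1))
x = torch.rand(1024, 2) * 2 - 1  # inputs sampled from [-1, 1]^2
y = torch.exp(torch.sin(torch.pi * x[:, :1]) + x[:, 1:] ** 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")

Note one deliberate limitation of this sketch: the basis grid is fixed, so hidden activations that drift outside [x_min, x_max] fall off the grid. The grid-refinement question at 22:23 addresses exactly this issue in the real implementation.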