Inventing liquid neural networks

  • Published: May 8, 2023
  • Paper: www.science.org/doi/10.1126/s...
    Publication/Event: Science Robotics
    Key authors: Makram Chahine, Ramin Hasani
    Mathias Lechner, Alexander Amini, and Daniela Rus
    MIT News article: news.mit.edu/2023/drones-navi...
    Video Director: Rachel Gordon
    Videographer: Mike Grimmett
  • Science

Comments • 76

  • @notallm
    @notallm 1 year ago +48

    Going from thousands of neurons in conventional models to 19 liquid neurons performing this well is really commendable! I can't wait to see more developments!

  • @musicMan11537
    @musicMan11537 5 months ago +18

    The “liquid neural network” idea in this work is a nice (re-)popularization of neural ODEs (which have been around for some time), with the modification that the integration time constant “tau” is a function of the input rather than a constant (a minimal sketch of this input-dependent tau appears after this thread). The use of “liquid” in the model name is not the best choice of word, as historically (as early as 2004 and before) there is a class of neural models called “liquid state machines” (LSM): en.m.wikipedia.org/wiki/Liquid_state_machine
    (In fact, an LSM is a kind of spiking neural model, and is actually more brain-like than the neural ODEs in the work of this video.)
    It's important to be clear that these authors' work is a nice little innovation on neural ODEs, but it is a far cry from biological neurons: in the actual paper they even use backpropagation through time, which is clearly biologically implausible (the brain does not unroll itself backwards through time). It's also important to know the historical context, as works like this one are being a bit disingenuous by not making it clearer that they are popularizing good classic ideas (e.g., from ODEs and neural ODEs).

    • @musicMan11537
      @musicMan11537 5 months ago +4

      ODE = ordinary differential equation

    • @Alpha_GameDev-wq5cc
      @Alpha_GameDev-wq5cc 3 days ago

      Could you please elaborate on the backpropagation critique? It's just a computation, right? Why can't the brain just do it? I get that in computer implementations it's an external, classical, instruction-based algorithm operating on the architecture, but maybe the brain does it within the neural networks themselves?
      Is that not plausible?
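
    A minimal sketch of the input-dependent tau discussed in this thread, assuming simple forward-Euler dynamics (illustrative Python, not the paper's exact equations):

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_hidden = 4, 19
      W_in  = rng.normal(size=(n_hidden, n_in))                 # input weights
      W_rec = rng.normal(size=(n_hidden, n_hidden)) / n_hidden  # recurrent weights
      W_tau = rng.normal(size=(n_hidden, n_in))                 # weights producing tau(x)

      def ltc_step(h, x, dt=0.05):
          # softplus keeps the time constant positive; it varies with the input x,
          # which is the "liquid" twist on a plain neural ODE with fixed tau
          tau = np.logaddexp(0.0, W_tau @ x) + 0.1
          target = np.tanh(W_in @ x + W_rec @ h)   # state the dynamics are pulled toward
          return h + dt * (target - h) / tau       # Euler step of dh/dt = (target - h) / tau

      h = np.zeros(n_hidden)
      for t in range(100):
          x = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0, 0.0])
          h = ltc_step(h, x)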

  • @SureshKumar-qi7cy
    @SureshKumar-qi7cy 1 year ago +18

    Inspirational conversation

  • @-www.chapters.video-
    @-www.chapters.video- 11 months ago +12

    00:01 Exciting times and the start of the project
    01:00 Implementing smaller neural networks for driving
    02:05 Revolutionary results in different environments
    03:03 Creating abstract models for learning systems
    04:00 Properties and applications of liquid neural networks
    05:17 Challenges in implementing the models
    06:32 Testing and pushing the models to their limits
    07:12 Expanding to drone navigation and tasks
    08:05 Extracting tasks and achieving reasoning
    09:11 Surprising and powerful properties of liquid networks
    10:06 Zero-shot learning and adaptability to different environments
    11:27 Extraordinary performance in urban and dynamic environments
    12:31 Resiliency maps provide visual and concise answers to model decision-making
    13:14 Interpretable and explainable machine learning for safety critical applications
    14:23 Liquid networks as a counter to the scaling law of generative AI
    15:42 Under parametrized neural networks like liquid networks for future generative models
    17:10 Exploring multiple agents and composing solutions for different applications
    18:02 Extending liquid networks to perform well on static data and new types of data sources
    18:41 Embedding intelligence into embodied agents and society

  • @alexjbriiones
    @alexjbriiones 11 months ago +14

    I was just reading that the main bottleneck for AI is the huge and specialized data sets. This next-level genius invention is going to be revolutionary.

  • @Michallote
    @Michallote 3 months ago +2

    Why has there not been any impact from this tech? I have seen it twice already in blogs and videos, but nothing reflects that in the number of articles published, the references from other authors, and most of all... no replicable code or pretrained models to actually test those claims. Has anyone here been able to corroborate their results?

  • @berbudy
    @berbudy 11 months ago +5

    Thank you worm for leading us to this

  • @atakante
    @atakante 11 months ago +3

    Very important algorithmic advance, kudos to the team!

  • @benealbrook1544
    @benealbrook1544 11 months ago +5

    Amazing job; this is a fundamental shift and the way forward. I am interested in seeing the memory footprint and CPU demands. Looking forward to applications in other fields and perhaps the replacement of traditional state-of-the-art models.

  • @bvdlio
    @bvdlio 11 months ago

    Great, exciting work!

  • @erikdong
    @erikdong 11 months ago

    Bravo! 👏🏼

  • @palfers1
    @palfers1 10 months ago

    Thank you Mr. or Mrs. Worm!

  • @koustubhavachat
    @koustubhavachat 11 months ago +3

    Do we have a PyTorch implementation available for liquid neural networks? How can one get started with this?
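
    There is a PyTorch implementation from the authors' group: the ncps package (pip install ncps). A minimal usage sketch, assuming that package's documented AutoNCP wiring and LTC layer (check its docs for the exact API):

      import torch
      from ncps.wirings import AutoNCP   # wires up a small NCP-style network
      from ncps.torch import LTC         # liquid time-constant layer

      wiring = AutoNCP(19, 1)                  # 19 neurons total, 1 motor output
      model = LTC(input_size=8, units=wiring)
      x = torch.randn(2, 50, 8)                # (batch, time, features)
      y, state = model(x)                      # per-step outputs and final hidden state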

  • @ajithboralugoda8906
    @ajithboralugoda8906 1 month ago

    Wow! Are liquid neural networks the signal in the LLM sea of noise? So exciting! Great job, folks; waiting to hear more breakthroughs from this great research!

  • @LoanwordEggcorn
    @LoanwordEggcorn 11 months ago +4

    Thanks for the talk. Would it be accurate to say the approach models nonlinearities of natural (biological) neural networks?

    • @SynaTek240
      @SynaTek240 10 months ago +1

      Kinda, but it doesn't use spikes to communicate like biological networks do; rather, it uses magnitudes of values just like normal ANNs. However, the way it mimics biological networks is in being temporally free rather than clocked, if that says anything to you :D (see the sketch after this thread)

    • @LoanwordEggcorn
      @LoanwordEggcorn 10 months ago

      @@SynaTek240 Thanks, and that seems like an important part of how it works. Biological networks are not clocked.
      The details of how the values propagate may be less relevant, though spikes can have higher-order effects, from interference or other interactions, that magnitudes don't.
      Sound right?
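
    A tiny sketch of the "temporally free rather than clocked" point from this thread: a continuous-time state update can be evaluated at irregular intervals by passing the elapsed time dt, whereas a clocked RNN needs fixed ticks (illustrative leaky-integrator dynamics, not the paper's model):

      import numpy as np

      def evolve(h, x, dt, tau=0.5):
          # one Euler step of dh/dt = (tanh(x) - h) / tau over an arbitrary interval dt
          return h + dt * (np.tanh(x) - h) / tau

      h, prev_t = 0.0, 0.0
      events = [(0.03, 1.0), (0.05, 1.0), (0.50, -1.0), (0.51, -1.0)]  # (time, input)
      for t, x in events:
          h = evolve(h, x, dt=t - prev_t)   # dt varies per event; no global clock needed
          prev_t = t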

  • @bobtivnan
    @bobtivnan 11 months ago +19

    I wonder how much of this work makes other progress in this field obsolete? I hope Lex Fridman, who also works on autonomous vehicles at MIT, invites them to his podcast.

    • @sb_dunk
      @sb_dunk 11 months ago +2

      Does he work on autonomous vehicles? I get the impression he's not as much of an AI expert as many would lead you to believe.

    • @bobtivnan
      @bobtivnan 11 months ago

      @@sb_dunk he has mentioned many times on his podcast that he has worked in the autonomous vehicles field. I found this lecture at MIT ruclips.net/video/1L0TKZQcUtA/видео.html

    • @sb_dunk
      @sb_dunk 11 months ago +2

      @@bobtivnan I wouldn't say that lecturing at MIT is equivalent to working on autonomous vehicles at MIT, the latter implies you're at the forefront of the research. The papers that I can see that he's published don't appear to be massively cutting edge either, nor even directly related to autonomous driving - the closest are about traffic systems and how humans interact with autonomous vehicles.
      My point is that we should take the supposed expertise of these people with a pinch of salt.

    • @bobtivnan
      @bobtivnan 11 months ago +3

      @@sb_dunk let it go dude

    • @sb_dunk
      @sb_dunk 11 months ago +3

      @@bobtivnan Oh I'm sorry, I didn't realize I wasn't allowed to question someone's credentials or expertise.

  • @arowindahouse
    @arowindahouse 10 months ago

    How can liquid neural networks have the necessary inductive biases for computer vision? If I'm not wrong, you need to add classical convolutional neural networks before the liquid layers for the model to be usable

  • @sapienspace8814
    @sapienspace8814 1 year ago +4

    Outstanding work; it is very fascinating to look at the worm's neurons and figure out how to practically apply them to autonomous navigation systems, with an even smaller number of neuron-like modules!
    The "liquid" aspect I'm attempting to understand more, though it is fascinating to me, as it might be a natural characteristic of electromagnetic, inductive coupling between nearby synapses (the same often-perceived problem in human-created wiring as interference "noise"/fuzziness, though applied here as a benefit, providing an opportunity for de-fuzzification between synapses or overlapping/liquid states).
    It almost seems intriguingly similar to overlapping membership (probability/statistical distribution) functions of fuzzy states (such as what is used in Fuzzy Logic, e.g., a room is "hot", "warm", "cool", "cold"), using a type of K-means clustering, or similar, to focus attention on the most frequently used regions of state-space classification.
    One might be able to perceive the "liquid time constant", just as in Fuzzy Logic, as a method of merging knowledge (abstract, qualitative, fuzzy, noisy symbolism) and mathematics together (through interpolation or extrapolation), but it seems the self-incriminating nomenclature of "Fuzzy" has been lost in machine intelligence (maybe via the Dunning-Kruger effect, the paradox of humility).
    Merging the "liquid time constant" (or Fuzzy Logic) with Reinforcement Learning could help naturally generate simpler inference rules over a vast state space and allow the machine to efficiently learn on its own, without an expert human creating the inference rules. I have seen this done in a neuro-fuzzy reinforcement learning experiment with inverted pendulum control.
    Lately, I have been reading a book titled "The Hedonistic Neuron", written in 1982, to try to understand how these RL systems work; they seem quite profoundly incredible.
    Thank you for sharing your incredible work!

    • @keep-ukraine-free
      @keep-ukraine-free 11 months ago +1

      Your idea that the "liquidity" (or the "liquid" nature of information flow) "might be a natural characteristic of electro-magnetic, inductive coupling" is incorrect. It can't be so, since information between real neurons (between any two synapses) passes not electromagnetically but via molecules (commonly called neurotransmitters), which act as a key to a lock.
      To help you understand: the described NN model uses differential equations, which give the model its "liquid" nature.

    • @sapienspace8814
      @sapienspace8814 11 months ago

      @@keep-ukraine-free The molecules are going to have electron orbits, and the orbits will inductively interact with each other via Maxwell's equations. Whether you like it or not, this is natural; it is physics.

    • @Theodorus5
      @Theodorus5 11 months ago +1

      I think you mean 'ephaptic coupling' between neurons

  • @77sanskrit
    @77sanskrit 11 months ago +1

    9:01 One environment that would be interesting to train it in would be those wind tunnels they use to test the aerodynamics of plane parts and such. Get it trained on turbulence and loop-de-loops; that would be awesome!!! Just a thought🤔👍 You guys are amazing!!!! Absolutely genius!!!!🙏🙏🫀🧠🤖

  • @adamgm84
    @adamgm84 1 month ago

    My wig always melts when we get into composing algorithms.

  • @HitAndMissLab
    @HitAndMissLab 11 months ago +1

    How are liquid neural networks performing in language models, where differential equations are of very little use?

    • @magi-1
      @magi-1 11 months ago

      A transformer is a fully connected graph neural network, and each layer is essentially a cross-section of a continuous process. In the same way that an RNN is a discrete autoregressive model, you can reformulate LLMs as a continuous process and sample words via a fixed integration step using Euler integration. (A toy sketch of the discrete-vs-continuous view follows below.)
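
      A toy sketch of the contrast described above, assuming the simplest possible scalar dynamics (conceptual illustration only, not an actual LLM reformulation):

        import numpy as np

        W, U = 0.5, 0.8  # toy scalar weights

        def rnn_step(h, x):
            # discrete autoregressive update: one jump per token
            return np.tanh(W * h + U * x)

        def ode_step(h, x, dt=0.25):
            # continuous view: dh/dt = -h + tanh(W*h + U*x), sampled with fixed Euler steps
            return h + dt * (-h + np.tanh(W * h + U * x))

        h_disc = h_cont = 0.0
        for x in [1.0, 0.2, -0.5, 0.9]:   # stand-ins for token embeddings
            h_disc = rnn_step(h_disc, x)
            for _ in range(4):             # four Euler sub-steps per token
                h_cont = ode_step(h_cont, x)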

  • @qhansen123
    @qhansen123 11 months ago +2

    Why does this say “Inventing liquid neural networks” when there have been papers using the concept/terminology of liquid neural networks from before 2004?

  • @Niamato_inc
    @Niamato_inc 11 months ago +3

    What a time to be alive.

    • @prolamer7
      @prolamer7 11 months ago

      mental virus is in you

  • @computerconcepts3352
    @computerconcepts3352 1 year ago +1

    Interesting 🤔

  • @spockfan2000
    @spockfan2000 7 months ago

    Is this tech available to the public? Where can I learn how to implement it? Thanks.

    • @User_1795
      @User_1795 5 months ago

      Be careful

  • @jimj2683
    @jimj2683 19 days ago

    The average neuron in the human brain has 10,000 synapses (connections with other neurons). The nodes/neurons in a neural network should be able to make connections with other nodes/neurons to mimic the human brain.

  • @shivakumarv301
    @shivakumarv301 11 months ago

    Would it not be wise to do a SWOT analysis of the new technology and understand its consequences before jumping into it?

  • @joaopedrorocha4790
    @joaopedrorocha4790 11 months ago

    This is exciting... Would this kind of network be able to forget data that doesn't fit its needs? For example: I give it time-series data in the first training, and it learns some pattern from the data and acts according to it, but then things change in the world and the pattern it learned is no longer that useful. Could this kind of network just forget that pattern gradually and adapt based on its new input?

  • @tachoblade2071
    @tachoblade2071 10 months ago +1

    If the network can understand the reason behind tasks, or their causality... could it "understand" language rather than just being an auto-completer like GPT?

  • @hansadler6716
    @hansadler6716 11 months ago +2

    I would like to hear a better explanation of how an image could be input to such a small network.

    • @antoruby
      @antoruby 11 months ago +7

      The first layers are still regular convolutional neural nets. The decision-making layers (the last ones), which are traditionally fully connected layers, were substituted by the 19 liquid neurons. (A shape-level sketch of this split follows below.)
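
      A shape-level sketch of the split described above, with hypothetical layer sizes (illustrative PyTorch; not the authors' architecture or code):

        import torch
        import torch.nn as nn

        class ConvBackbone(nn.Module):
            """Regular convolutional feature extractor (the unchanged first layers)."""
            def __init__(self, feat_dim=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, feat_dim), nn.ReLU(),
                )
            def forward(self, x):
                return self.net(x)

        class LiquidHead(nn.Module):
            """Toy continuous-time head with 19 units, standing in for the usual fully connected layers."""
            def __init__(self, feat_dim=32, hidden=19, out_dim=1):
                super().__init__()
                self.inp = nn.Linear(feat_dim, hidden)
                self.rec = nn.Linear(hidden, hidden, bias=False)
                self.tau = nn.Linear(feat_dim, hidden)   # input-dependent time constant
                self.out = nn.Linear(hidden, out_dim)
            def forward(self, feats, h, dt=0.1):
                tau = nn.functional.softplus(self.tau(feats)) + 0.1   # keep tau positive
                target = torch.tanh(self.inp(feats) + self.rec(h))
                h = h + dt * (target - h) / tau                       # one Euler step
                return self.out(h), h

        backbone, head = ConvBackbone(), LiquidHead()
        frames = torch.randn(8, 3, 64, 64)   # a batch of camera frames
        h = torch.zeros(8, 19)               # liquid state, carried across frames
        steering, h = head(backbone(frames), h)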

  • @nikhilshingadiya7798
    @nikhilshingadiya7798 11 months ago +1

    Now the LLM model competition is rising 😂😂😂 Love you guys for your great efforts 🎉🎉

    • @DarkWizardGG
      @DarkWizardGG 11 months ago

      In the near future, one of those LLM models will secretly integrate into that liquid neural network. Guys, let us all welcome the "T1000" model... TAAADDDDAAAA. It's hunting time. lol😁😉😄😅😂😂😂🤦‍♂️🤦‍♂️🤦‍♂️🤖🤖🤖🤖🤖🤖🤖🤖🤖

  • @realjx313
    @realjx313 1 month ago

    The attention focus: isn't that about labels?

  • @RoyMustang.
    @RoyMustang. 11 months ago

    ❤❤❤

  • @lesmathsparexemplesatoukou3454
    @lesmathsparexemplesatoukou3454 1 year ago +2

    CSAIL, I'm coming to you

  • @and_I_am_Life_the_fixer_of_all
    @and_I_am_Life_the_fixer_of_all 1 year ago +5

    wow, I'm the 4th comment! What a time to be alive! Welcome to history key authors :D

  • @kesav1985
    @kesav1985 6 months ago +1

    (Re-)inventing time-integration schemes would have been dubbed crappy if it were "invented" by some unknown academic at a non-elite university.
    But hey, ML hype and the MIT brand work wonders in selling ordinary stuff!

  • @wiliamscoronadoescobar8113
    @wiliamscoronadoescobar8113 1 year ago

    About this... Tell the people and the university around ...

  • @vallab19
    @vallab19 11 months ago +1

    Liquid Neural Network is another level in the AI revolution.

    • @DarkWizardGG
      @DarkWizardGG 11 months ago

      Yeah, T1000 in the making. Lol😁😉😄🤖🤖🤖🤖

  • @DanielSanchez-jl2vf
    @DanielSanchez-jl2vf 11 months ago

    Guys, let's show this to Demis Hassabis, Yann LeCun, Ilya Sutskever, and Yoshua Bengio.

  • @francisdelacruz6439
    @francisdelacruz6439 11 months ago

    Once you have collision detection, which can be a separate system, you have a certifiable self-driving ecosystem. Maybe it's time to go start-up and have the resources to get this to an actual product; raising USD 100mn would be easy and a manageable equity exposure. The drone example is a game changer for the new type of war in Ukraine: you could use a Raspberry Pi-equivalent board in drones, and the implications will change how wars are fought, likely making it harder to invade other countries with this tech add-on.

  • @Lolleka
    @Lolleka 11 months ago

    And here I was, thinking that the video was about liquid-phase computing. Silly me.

  • @BradKittelTTH
    @BradKittelTTH 11 months ago +1

    This means that the mature neurons humans can produce starting in our late 40s and after, if in good health, which operate at 10-100 times the speed of juvenile neurons, suggest that elders could far out-think others and grow new ideas, abilities, and potential just being unleashed after our 60s, when we have accumulated a host of higher operating systems, synapses, and neural networks that are superior to quantum computers. Given there is evidence that we can produce 700 neurons a night, what is our potential into our 80s to get smarter too? This is the potential of humans once we master the vessel "Wii", all the "I"s that understand the bio-computers we communicate through, the avatars that form the "mii", with eyes watching you. A fabulous new discovery, and amazing that you have been able to tap these incredible liquid neural networks, which also suggest that all beings have the potential to understand and navigate more of reality than humans ever imagined before these discoveries. If a worm can do so well with 302 neurons, what is the limitation, if any, of a billion-neuron network comprised of millions of tinier networks that intermingle at optical-cable speeds? Thank you for this interview.

  • @DarkWizardGG
    @DarkWizardGG 11 months ago

    This is T1000 in the making. Liquid shapeshifter AI. lol😁😉😄🤖🤖🤖🤖🤖

  • @randomsitisee7113
    @randomsitisee7113 11 months ago +2

    Sounds like a bunch of mumbo jumbo trying to cash in on the AI train

  • @arowindahouse
    @arowindahouse 10 months ago

    I thought Liquid State Machines had already been invented in the 90s by Maass

  • @Stopinvadingmyhardware
    @Stopinvadingmyhardware 11 months ago

    Grokked.

  • @amarnathmutyala1335
    @amarnathmutyala1335 11 months ago +1

    So worms can mentally drive cars?

  • @MuscleTeamOfficial
    @MuscleTeamOfficial 11 months ago +1

    Elongated muskrat is comin🎉

  • @ethanwei5060
    @ethanwei5060 11 months ago

    Another application for this solution is banning bad social media posts and keeping bad or unwanted content away from children. Current human moderators require counselling after a day of moderating, and if liquid neural networks can focus on the task instead of the context like traditional AI, it could be game-changing.

  • @ibrremote
    @ibrremote 11 months ago

    Task-centric, not context-centric…

  • @wiliamscoronadoescobar8113
    @wiliamscoronadoescobar8113 1 year ago

    Here... and for the good of education, I want to introduce a field of investigation about information in the world, the contamination in it, and surely touching on topics like the photons that my investigations can reach. On the web. For references, see the data discussed live in a conference by my friend Andres Manuel Lopez Obrador, President of Mexico.

  • @Jediluvs2kill
    @Jediluvs2kill 11 months ago +1

    This Ramin guy is a waste of time

  • @alexjbriiones
    @alexjbriiones 11 months ago +1

    I am sure Elon Musk is paying attention to this group and would probably try to hire them to complement Tesla's autonomous driving. Even more ominous is that China and Russia are probably setting their engineers to duplicate this invention.

    • @bobtivnan
      @bobtivnan 11 months ago

      I also thought that Musk would pursue this, for Tesla and for more general application with his new company xAI.
