Inventing liquid neural networks

  • Published May 8, 2023
  • Paper: www.science.org/doi/10.1126/s...
    Publication/Event: Science Robotics
    Key authors: Makram Chahine, Ramin Hasani
    Mathias Lechner, Alexander Amini, and Daniela Rus
    MIT News article: news.mit.edu/2023/drones-navi...
    Video Director: Rachel Gordon
    Videographer: Mike Grimmett
  • Science & Technology

Comments • 75

  • @notallm
    @notallm 1 year ago +44

    Going from thousands of neurons in conventional models down to 19 liquid neurons performing this well is really commendable! I can't wait to see more developments!

  • @musicMan11537
    @musicMan11537 5 months ago +17

    The “liquid neural network” idea in this work is a nice (re-)popularization of neural ODEs (which have been around for some time), with the modification that the integration time constant “tau” is a function of the input (as opposed to being a constant). The use of “liquid” in the model name is not the best choice of words, as historically (as early as 2004 and before) there’s a class of neural models called “liquid state machines” (LSM): en.m.wikipedia.org/wiki/Liquid_state_machine
    (In fact, an LSM is a kind of spiking neural model, and is actually more brain-like than the neural ODEs in the work of this video.)
    It’s important to be clear that these authors’ work is a nice little innovation on neural ODEs, but it is a far cry from biological neurons: in the actual paper they even use backpropagation through time, which is clearly biologically implausible (the brain does not unroll itself backwards through time). It’s also important to know the historical context, as works like this one are being a bit disingenuous by not making it clearer that they are popularizing good classic ideas (e.g., from ODEs and neural ODEs).

    • @musicMan11537
      @musicMan11537 5 months ago +4

      ODE = ordinary differential equation
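
The idea in the thread above, a neural ODE whose time constant tau depends on the input, can be sketched in a few lines of Python. Everything here (the form of tau(u), the weights, the step size) is a hypothetical illustration, not the paper's exact formulation:

```python
import math

# Hypothetical sketch of a liquid time-constant (LTC) neuron: one
# explicit-Euler step of  dx/dt = -x / tau(u) + f(u) * a,  where the
# time constant tau shrinks when the input drive f(u) is strong.
# All constants and the exact form of tau(u) are illustrative only.

def ltc_step(x, u, dt=0.05, tau_base=1.0, w=0.8, b=0.0, a=1.0):
    f = math.tanh(w * u + b)           # bounded synaptic activation
    tau = tau_base / (1.0 + abs(f))    # input-dependent time constant
    dx = -x / tau + f * a              # leaky integration toward f * a
    return x + dt * dx

# A constant input drives the state toward a stable fixed point,
# while zero input lets the state decay back to rest.
state = 0.0
for _ in range(200):
    state = ltc_step(state, u=1.0)
```

With u held fixed, the state settles at x* = f(u) * a * tau(u); varying u changes both the target and how fast the neuron moves toward it, which is the "liquid" behaviour the commenter describes.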

  • @SureshKumar-qi7cy
    @SureshKumar-qi7cy 1 year ago +18

    Inspirational conversation

  • @alexjbriiones
    @alexjbriiones 10 months ago +14

    I was just reading that the main bottleneck for AI is the need for huge, specialized data sets. This next-level genius invention is going to be revolutionary.

  • @berbudy
    @berbudy 10 months ago +5

    Thank you worm for leading us to this

  • @-www.chapters.video-
    @-www.chapters.video- 10 months ago +11

    00:01 Exciting times and the start of the project
    01:00 Implementing smaller neural networks for driving
    02:05 Revolutionary results in different environments
    03:03 Creating abstract models for learning systems
    04:00 Properties and applications of liquid neural networks
    05:17 Challenges in implementing the models
    06:32 Testing and pushing the models to their limits
    07:12 Expanding to drone navigation and tasks
    08:05 Extracting tasks and achieving reasoning
    09:11 Surprising and powerful properties of liquid networks
    10:06 Zero-shot learning and adaptability to different environments
    11:27 Extraordinary performance in urban and dynamic environments
    12:31 Resiliency maps provide visual and concise answers to model decision-making
    13:14 Interpretable and explainable machine learning for safety critical applications
    14:23 Liquid networks as a counter to the scaling law of generative AI
    15:42 Under-parametrized neural networks like liquid networks for future generative models
    17:10 Exploring multiple agents and composing solutions for different applications
    18:02 Extending liquid networks to perform well on static data and new types of data sources
    18:41 Embedding intelligence into embodied agents and society

  • @koustubhavachat
    @koustubhavachat 10 months ago +4

    Do we have a PyTorch implementation available for liquid neural networks? How can one get started with this?

  • @atakante
    @atakante 10 months ago +3

    Very important algorithmic advance, kudos to the team!

  • @bvdlio
    @bvdlio 10 months ago

    Great, exciting work!

  • @benealbrook1544
    @benealbrook1544 10 months ago +5

    Amazing job, this is a fundamental shift and the way forward. I am interested in seeing the memory footprint and CPU demands. Looking forward to the applications in other fields and perhaps the replacement of traditional state-of-the-art models.

  • @erikdong
    @erikdong 10 months ago

    Bravo! 👏🏼

  • @ajithboralugoda8906
    @ajithboralugoda8906 1 month ago

    WOW! Are liquid neural networks the signal in the LLM sea of noise? So exciting! Great job, folks. Waiting to hear more breakthroughs from this great research!

  • @palfers1
    @palfers1 10 months ago

    Thank you Mr. or Mrs. Worm!

  • @LoanwordEggcorn
    @LoanwordEggcorn 10 months ago +4

    Thanks for the talk. Would it be accurate to say the approach models nonlinearities of natural (biological) neural networks?

    • @SynaTek240
      @SynaTek240 10 months ago +1

      Kinda, but it doesn't use spikes to communicate like biological networks; rather, it uses magnitudes of values just like normal ANNs. However, the way it mimics biological networks is in being temporally free-running rather than clocked, if that says anything to you :D

    • @LoanwordEggcorn
      @LoanwordEggcorn 9 months ago

      @@SynaTek240 Thanks, and that seems like an important part of how it works. Biological networks are not clocked.
      The details of how the values propagate may be less relevant, though spikes can have higher-order effects, from interference or other interactions, that magnitudes don't.
      Sound right?

  • @qhansen123
    @qhansen123 10 months ago +2

    Why does this say “Inventing liquid neural networks” when papers on liquid neural networks (the concept and the terminology) predate 2004?

  • @Niamato_inc
    @Niamato_inc 10 months ago +3

    What a time to be alive.

    • @prolamer7
      @prolamer7 10 months ago

      mental virus is in you

  • @adamgm84
    @adamgm84 1 month ago

    My wig always melts when we get into composing algorithms.

  • @HitAndMissLab
    @HitAndMissLab 10 months ago +1

    How are liquid neural networks performing in language models, where differential equations are of very little use?

    • @magi-1
      @magi-1 10 months ago

      A transformer is a fully connected graph neural network, and each layer is essentially a cross-section of a continuous process. In the same way that an RNN is a discrete autoregressive model, you can reformulate LLMs as a continuous process and sample words via a fixed integration step using Euler integration.
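
The reply's point, that a discrete RNN update is one Euler step of a continuous-time process, can be made concrete with scalar weights. The weights and step sizes below are hypothetical, chosen only for illustration:

```python
import math

# A continuous-time RNN  dh/dt = -h + tanh(w_h * h + w_x * x),
# discretized with an explicit Euler step of size dt. With dt = 1
# the step collapses exactly to the familiar discrete update
# h_new = tanh(w_h * h + w_x * x).

def ct_rnn_step(h, x, dt, w_h=0.5, w_x=1.0):
    return h + dt * (-h + math.tanh(w_h * h + w_x * x))

h = 0.2
euler = ct_rnn_step(h, x=0.3, dt=1.0)
discrete = math.tanh(0.5 * 0.2 + 1.0 * 0.3)
# euler equals discrete; a smaller dt samples the same trajectory more finely
```

Choosing dt smaller than 1 keeps the same underlying dynamics but evaluates them at a finer temporal resolution, which is the sense in which the discrete model sits inside a continuous one.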

  • @arowindahouse
    @arowindahouse 10 months ago

    How can liquid neural networks have the necessary inductive biases for computer vision? If I'm not wrong, you need to add classical convolutional layers before the liquid network for the model to be usable.

  • @jimj2683
    @jimj2683 2 days ago

    The average neuron in the human brain has 10,000 synapses (connections with other neurons). The nodes/neurons in the neural network should be able to make connections with other nodes/neurons to mimic the human brain.

  • @bobtivnan
    @bobtivnan 10 months ago +19

    I wonder how much of this work makes other progress in this field obsolete? I hope Lex Fridman, who also works on autonomous vehicles at MIT, invites them to his podcast.

    • @sb_dunk
      @sb_dunk 10 months ago +2

      Does he work on autonomous vehicles? I get the impression he's not as much of an AI expert as many would lead you to believe.

    • @bobtivnan
      @bobtivnan 10 months ago

      @@sb_dunk he has mentioned many times on his podcast that he has worked in the autonomous vehicles field. I found this lecture at MIT th-cam.com/video/1L0TKZQcUtA/w-d-xo.html

    • @sb_dunk
      @sb_dunk 10 months ago +2

      @@bobtivnan I wouldn't say that lecturing at MIT is equivalent to working on autonomous vehicles at MIT, the latter implies you're at the forefront of the research. The papers that I can see that he's published don't appear to be massively cutting edge either, nor even directly related to autonomous driving - the closest are about traffic systems and how humans interact with autonomous vehicles.
      My point is that we should take the supposed expertise of these people with a pinch of salt.

    • @bobtivnan
      @bobtivnan 10 months ago +3

      @@sb_dunk let it go dude

    • @sb_dunk
      @sb_dunk 10 months ago +3

      @@bobtivnan Oh I'm sorry, I didn't realize I wasn't allowed to question someone's credentials or expertise.

  • @sapienspace8814
    @sapienspace8814 1 year ago +4

    Outstanding work, and very fascinating: looking at the worm's neurons and figuring out how to practically apply them to autonomous navigation systems, with an even smaller number of neuron-like modules!
    The "liquid" aspect, I'm attempting to understand more, though it is fascinating to me, as it might be a natural characteristic of electro-magnetic, inductive coupling, between nearby synapses (the same, often perceived problem with human created wiring as interference "noise"/fuzziness, though applied here as a benefit, providing an opportunity for DE-fuzzification between synapses or overlapping/liquid states).
    It almost seems intriguingly similar to overlapping membership (probability/statistical distribution) functions of fuzzy states (such as what is used in Fuzzy Logic, e.g. a room is "hot", "warm", "cool", "cold"), using an application of a type of K-means clustering, or similar, to focus attention on the most frequently used regions of state space classification.
    One might be able to perceive "liquid time constant", just like as in Fuzzy Logic, as a method of merging knowledge (abstract, qualitative, fuzzy, noisy, symbolism) and mathematics together (through interpolation, or extrapolation), but it seems the self incriminating nomenclature of "Fuzzy" has been lost in machine intelligence (maybe via the Dunning-Kruger effect, paradox of humility).
    Merging "liquid time constant" (or Fuzzy Logic) with Reinforcement Learning can help naturally generate the simpler inference rules of a vast state space, and allow the machine to efficiently self-learn, without an expert human creating the inference rules. I have seen this done with a neuro-fuzzy reinforcement learning experiment involving inverted pendulum control.
    Lately, I have been reading a book titled "The Hedonistic Neuron" written in 1982 to try to understand how these RL systems work more, they seem quite profoundly incredible.
    Thank you for sharing your incredible work!

    • @keep-ukraine-free528
      @keep-ukraine-free528 10 months ago +1

      Your idea that the "liquidity" (or the "liquid" nature of information flow) "might be a natural characteristic of electro-magnetic, inductive coupling" is incorrect. It can't be so, since information between real neurons (between any two synapses) passes not electromagnetically, but using molecules (commonly called neurotransmitters) -- which act as a key to a lock.
      To help you understand, the described NN model uses differential equations -- which give the model its "liquid" nature.

    • @sapienspace8814
      @sapienspace8814 10 months ago

      @@keep-ukraine-free528 the molecules are going to have electron orbits, and the orbits will inductively interact with each other via Maxwell's equations, whether you like it or not, this is natural, it is physics.

    • @Theodorus5
      @Theodorus5 10 months ago +1

      I think you mean 'ephaptic coupling' between neurons

  • @tachoblade2071
    @tachoblade2071 10 months ago +1

    If the network can understand the reason behind the tasks, or their causality... could it "understand" language rather than just being an auto-completer like GPT?

  • @hansadler6716
    @hansadler6716 10 months ago +2

    I would like to hear a better explanation of how an image could be input to such a small network.

    • @antoruby
      @antoruby 10 months ago +7

      The first layers are still regular convolutional neural nets. The decision-making layers (the last ones), which are traditionally fully connected layers, were replaced by the 19 liquid neurons.
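
A shape-level sketch of the hybrid architecture the reply describes: a convolutional feature extractor in front, with the usual fully connected head replaced by a small liquid layer. Only the 19-unit head comes from the paper; the input resolution, kernel size, stride, depth, and channel count below are assumptions for illustration:

```python
# Track how a few strided convolutions shrink an image before the
# flattened features reach the 19-neuron liquid head.

def conv_out(size, kernel=5, stride=2):
    """Output length of a 'valid' convolution along one spatial axis."""
    return (size - kernel) // stride + 1

h, w = 144, 256                 # assumed input camera resolution
for _ in range(4):              # four assumed strided conv blocks
    h, w = conv_out(h), conv_out(w)

channels = 64                   # assumed channel count of the last block
features = h * w * channels     # flattened vector fed to the head
liquid_neurons = 19             # decision-making head size from the paper

print(f"{h}x{w}x{channels} = {features} features -> {liquid_neurons} liquid neurons")
```

The point of the sketch is that the liquid layer only has to map a compact feature vector to control outputs; the heavy perceptual lifting still happens in the convolutional stack.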

  • @77sanskrit
    @77sanskrit 10 months ago +1

    9:01 One environment that would be interesting to train it in would be those wind tunnels they use to test the aerodynamics of plane parts and such. Get it trained on turbulence and loop-the-loops, that would be awesome!!! Just a thought🤔👍 You guys are amazing!!!! Absolutely genius!!!!🙏🙏🫀🧠🤖

  • @computerconcepts3352
    @computerconcepts3352 1 year ago +1

    Interesting 🤔

  • @nikhilshingadiya7798
    @nikhilshingadiya7798 10 months ago +1

    Now the LLM competition is rising 😂😂😂 love you guys for your great efforts 🎉🎉

    • @DarkWizardGG
      @DarkWizardGG 10 months ago

      In the near future one of those LLM models will secretly integrate into that liquid neural network. Guys let us all welcome the "T1000" model....TAAADDDDAAAA. It's hunting time. lol😁😉😄😅😂😂😂🤦‍♂️🤦‍♂️🤦‍♂️🤖🤖🤖🤖🤖🤖🤖🤖🤖

  • @lesmathsparexemplesatoukou3454
    @lesmathsparexemplesatoukou3454 1 year ago +2

    CSAIL, I'm coming to you

  • @spockfan2000
    @spockfan2000 6 months ago

    Is this tech available to the public? Where can I learn how to implement it? Thanks.

    • @User_1795
      @User_1795 4 months ago

      Be careful

  • @Michallote
    @Michallote 2 months ago

    Why has this tech not had any impact? I have seen it twice already in blogs and videos, but nothing reflects that in the number of articles published or the references from other authors, and most of all... there's no replicable code or pretrained models to actually test those claims. Has anyone here been able to corroborate their results??

  • @shivakumarv301
    @shivakumarv301 10 months ago

    Would it not be wise to do a SWOT analysis of the new technology and understand its consequences before jumping into it?

  • @realjx313
    @realjx313 1 month ago

    The attention focus: isn't that about labels?

  • @and_I_am_Life_the_fixer_of_all
    @and_I_am_Life_the_fixer_of_all 1 year ago +5

    wow, I'm the 4th comment! What a time to be alive! Welcome to history, key authors :D

    • @ldgarcia27
      @ldgarcia27 1 year ago +5

      Hold on to your papers!

    • @hamamathivha6055
      @hamamathivha6055 10 months ago

      Dear fellow scholars!

  • @joaopedrorocha4790
    @joaopedrorocha4790 10 months ago

    This is exciting... Would this kind of network be able to forget data that doesn't fit its needs? For example... I give it time-series data in the first training, and it learns some pattern based on it and acts according to it, but then things change in the world, and the pattern it learned is no longer that useful... Could this kind of network just forget that pattern gradually and adapt based on its new input?

  • @vallab19
    @vallab19 10 months ago +1

    Liquid Neural Network is another level in the AI revolution.

    • @DarkWizardGG
      @DarkWizardGG 10 months ago

      Yeah, T1000 in the making. Lol😁😉😄🤖🤖🤖🤖

  • @wiliamscoronadoescobar8113
    @wiliamscoronadoescobar8113 1 year ago

    About this... Tell the people and the university around ...

  • @DanielSanchez-jl2vf
    @DanielSanchez-jl2vf 10 months ago

    Guys, let's show this to Demis Hassabis, Yann LeCun, Ilya Sutskever, and Yoshua Bengio.

  • @kesav1985
    @kesav1985 5 months ago +1

    (Re-)inventing time integration schemes would have been dubbed crappy if they were "invented" by some unknown academic at a non-elite university.
    But hey, ML hype and the MIT brand work wonders in selling ordinary stuff!

  • @RoyMustang.
    @RoyMustang. 10 months ago

    ❤❤❤

  • @Lolleka
    @Lolleka 10 months ago

    And here I was, thinking that the video was about liquid-phase computing. Silly me.

  • @arowindahouse
    @arowindahouse 10 months ago

    I thought Liquid State Machines had already been invented in the 90s by Maass

  • @amarnathmutyala1335
    @amarnathmutyala1335 10 months ago +1

    So worms can mentally drive cars ?

  • @DarkWizardGG
    @DarkWizardGG 10 months ago

    This is T1000 in the making. Liquid shapeshifter AI. lol😁😉😄🤖🤖🤖🤖🤖

  • @francisdelacruz6439
    @francisdelacruz6439 10 months ago

    Once you have collision detection, which can be a separate system, you have a certifiable self-driving ecosystem. Maybe it's time to go start-up and get the resources to turn this into an actual product; raising USD 100mn might be easy and manageable equity exposure. The drone example is a game changer in the new type of Ukraine war: you could use a Raspberry Pi-equivalent board in drones, and the implications will change how wars are fought, likely making it harder to invade other countries with this tech add-on.

  • @Stopinvadingmyhardware
    @Stopinvadingmyhardware 10 months ago

    Grokked.

  • @randomsitisee7113
    @randomsitisee7113 10 months ago +2

    Sounds like a bunch of mumbo jumbo trying to cash in on the AI train

  • @BradKittelTTH
    @BradKittelTTH 10 months ago +1

    This means that the mature neurons we humans can produce starting in our late 40s (if in good health), which operate at 10-100 times the speed of juvenile neurons, suggest that elders could far out-think the young, with new ideas, abilities, and potential just being unleashed after our 60s, when we have accumulated a host of higher operating systems, synapses, and neural networks that are superior to quantum computers. Given there is evidence that wii can produce 700 neurons a night, what is our potential into our 80s to get smarter too? This is the potential of humans once we understand our potential if we master the vessel Wii, all the "I"s that understand the bio-computers wii communicate through, the avatars that form the mii, with eyes watching you. A fabulous new discovery, and amazing that you have been able to tap these incredible liquid neural networks, which also suggest that all Beings have the potential to understand and navigate more of reality than humans ever imagined before these discoveries. If a worm can do so well with 302 neurons, what is the limitation, if any, of a billion-neuron network comprised of millions of tinier networks that intermingle at optical-cable speeds? Thank you for this interview.

  • @MuscleTeamOfficial
    @MuscleTeamOfficial 10 months ago +1

    Elongated muskrat is comin🎉

  • @ibrremote
    @ibrremote 10 months ago

    Task-centric, not context-centric…

  • @wiliamscoronadoescobar8113
    @wiliamscoronadoescobar8113 1 year ago

    Here... and for the good of education I want to introduce a field of investigation about information in the world and the contamination in it, surely... touching topics like the photons that my investigations can touch. On the web. For references, see the data discussed live in a conference by my friend Andrés Manuel López Obrador, President of Mexico.

  • @Jediluvs2kill
    @Jediluvs2kill 10 months ago +1

    This Ramin guy is a waste of time

  • @ethanwei5060
    @ethanwei5060 10 months ago

    Another scope for this solution is banning bad social media posts and blocking harmful and unwanted content from children. Current human moderators require counselling after a day of moderating, and if liquid neural networks can focus on the task instead of the context, unlike traditional AI, that could be game-changing.

  • @alexjbriiones
    @alexjbriiones 10 months ago +1

    I am sure Elon Musk is paying attention to this group and will probably try to hire them to complement Tesla's autonomous driving. Even more ominous: China and Russia are probably setting their engineers to duplicating this invention.

    • @bobtivnan
      @bobtivnan 10 months ago

      I also thought that Musk would pursue this, for Tesla and for more general application with his new company xAI.
