Liquid Neural Networks | Ramin Hasani | TEDxMIT

  • Published 25 Nov 2024

Comments • 52

  • @andreasholzinger7056 · 1 year ago +47

    Well done Ramin, this shows nicely that robustness and explainability (re-traceability, interpretability) are among the core future research topics - a clear thumbs-up from me 🙂

  • @krox477 · 1 year ago +10

    Things are changing so fast

  • @UsedRocketDealer · 1 year ago +15

    That graph at 2:53 matches the shape of the notional Dunning-Kruger effect graph. Simple patterns are enough early on to get the 80% solution, but as you learn more, you realize there's more you don't know. Only with real expertise do you start to get the best outputs. I think we're seeing the same effect here.

    • @kaushiksivashankar9621 · 1 year ago

      very interesting perspective thanks

    • @MachineManGabb · 1 year ago +3

      Yeah, tons of people are going to think ML is easy after watching this video.

    • @Bosse015 · 2 days ago

      i was thinking the same thing seeing that graph

  • @carlosarosero2266 · 1 year ago +8

    Congratulations on such a disruptive and provocative invention! I would like to go deeper into how this technology works: how the liquid network's logic leads it to the right decision, how the model helps prevent errors such as human errors or false positives, and how the machine learning algorithm helps us understand the logic and the learning process.

  • @betanapallisandeepra · 1 year ago +1

    This is really amazing… thank you for sharing about it… liquid neural networks…

  • @BooleanDisorder · 10 months ago +6

    I'd like to know how they actually work. Feels like no resource explains it.

  • @Mina-bc5sz · 1 year ago +1

    Just listened to your interview with Marketplace Tech!

  • @dhanooshpooranan1861 · 1 year ago +4

    this is mind blowing

  • @gayathrirangu5488 · 1 year ago

    Sounds interesting!

  • @NewNerdInTown · 3 months ago

    I want to build one. I'm so freakin' excited. Thank you for your research efforts, Ramin and team at MIT CSAIL.

    • @joeybasile545 · 3 months ago

      Have you started? If not, what are you waiting on?

  • @marcosadelino6990 · 9 months ago

    Totally

  • @djasnive · 6 months ago

    Very interesting 🤔🤔

  • @gbenga5420 · 1 year ago +5

    Is the liquid neural network spoken about here the same as the paper Hasani published called Liquid Time-Constant Networks?

  • @arthurpenndragon6434 · 1 year ago +8

    I often think about the kind of dissonance that must exist in an ML researcher's mind when enthusiastically training agents to navigate the real world, with a sterile academic mindset, knowing that the technology will inevitably be used for indiscriminate violence in the years to come.

  • @kobi2187 · several months ago

    wow

  • @MuscleTeamOfficial · 1 year ago +8

    i too prefer having a fruit fly brain sized network rather than a datacenter sized neural network fetching my beer

  • @trbt177 · 3 months ago

    MIT will be the last when it comes to leading research in AI; OpenAI, Anthropic, and Google are light years ahead of these guys

  • @Tbone913 · 1 year ago +5

    Brilliant work, hope to see it in a transformer model for NLP soon!!

    • @shawnvandever3917 · 1 year ago +8

      These don't work with the transformer. They have a real-time "liquid" algorithm. This has the potential to become the future of AI. I have been watching the progress over the past year, and it's impressive, to say the least.

    • @keep-ukraine-free · 1 year ago +3

      It's unclear whether "Liquid" NNs have direct applicability to NLP. LNNs are designed to process time-based information (audio, video, but also potentially typed/spoken text/NLP). One difficulty: in NLP, the time "intervals" are not fixed (between tokens).
      As to transformers, LNNs are an alternative architecture. Also, both transformers & LNNs use "attention" mechanisms -- so both may be complementary or both may contain mutually exclusive/incompatible elements.

    • @shawnvandever3917 · 1 year ago +1

      @@keep-ukraine-free Last I heard, they are planning on working on an NLP solution, full generative AI. It would be nice if they can make the two work together. Do you know if the team of 4 are the only ones working on this? I would think more people would want to jump on it

    • @Tbone913 · 1 year ago +4

      @@keep-ukraine-free I wonder if the function of brainwaves, in humans, is to synchronise it from a time-based method to a static method. Text is still sequential, so it should be adaptable from time-based methods.... I often wonder how humans read so slowly, and so few books, while transformers must read millions of books, millions of times.... there must be a fundamental inefficiency at present in the transformer model (because by rights they should already be better read than every human in history)

    • @Tbone913 · 1 year ago

      @@shawnvandever3917 It should work, as a transformer model is effectively still convertible to a vision model... it is just a matter of converting the information from bytes to text or photo... The transformer model should be able to operate on raw bytes and make predictions.

  • @robmyers8948 · 1 year ago +1

    The driving example seems to mostly have attention in the far distance, ignoring directly in front and on the sides, which would suggest it would ignore the person or car coming from the side or obstacles directly in front. The example works well with a clear, uninterrupted path ahead.

    • @Youtuberboi596 · 1 year ago +2

      could be that if training data with close-proximity obstacles and humans is introduced (maybe regular dashcam footage), the neural network pays attention to closer stuff as well

    • @SloverOfTeuth · 1 year ago

      That may be because there are no objects on the road to pay attention to. Perhaps the attention map would change if there were, it would be interesting to see.

    • @imperson7005 · 5 months ago +1

      I was thinking you could combine this tech with Tesla's car tech that creates an inner world model. That way it can fly autonomously

    • @ibgib · several months ago +1

      @robmyers8948 I was thinking the same thing, although often that is the best place to focus. It would have been nice if the speaker mentioned what happens when other stimuli are introduced, because this could reflect either a clear understanding or just naivete on the model's part.

  • @WizardofWar · 1 year ago +85

    19 neurons for lane keeping is just clickbait. All the heavy lifting is done in the perception layers: the 3 convolutional layers and the condensed sensory neurons.

    • @PhilF. · 1 year ago +41

      FYI: what he is showing is that the 19 LTC neurons replace the original 4,000 fully connected neurons presented in other videos.

    • @keep-ukraine-free · 1 year ago

      @@PhilF. Not really. The 19 neurons in the 2 final "liquid" layers are not replacing neurons that comprise the CNNs. Those 19 are the "only" ones making the output decision (of where to set the steering angle). Yet those 19 definitely need the 1000s of neurons in the CNN. He's made a semantic distinction that the 19 are doing the decision-making. Some may argue that the 1000s of neurons in the CNN are "helping" (doing all preprocessing for) those 19 in the final stage.

    • @TheNanobot · 1 year ago +2

      @@PhilF. Agree, but he still didn't say that explicitly in the video. "We used only 19 neurons to process the output of the convolutional network" would be fairer, from my perspective.

    • @PhilF. · 1 year ago

      @@TheNanobot When I saw the videos I downloaded the GitHub library to test it. It's great.

    • @pariveshplayson · 11 months ago +4

      Read the papers instead of coming off so strong.
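    For context on the thread above, the disagreement is about how the parameters split between the convolutional perception layers and the tiny liquid head. A back-of-the-envelope comparison can be sketched in Python; note that the 1000-dim feature vector is an assumption for illustration, and the 4,000-neuron dense head and 19-neuron liquid head are the figures mentioned in the comments, not the paper's exact architecture:

    ```python
    # Rough parameter counts for two decision heads reading the same
    # CNN feature vector (sizes illustrative, not from the paper).
    feat_dim = 1000       # assumed size of the CNN feature vector
    dense_units = 4000    # dense head size cited in the comments
    liquid_units = 19     # liquid head size from the talk

    # Dense head: one weight per (feature, unit) pair, plus biases.
    dense_params = feat_dim * dense_units + dense_units

    # Liquid head: input weights + recurrent weights + biases.
    liquid_params = feat_dim * liquid_units + liquid_units ** 2 + liquid_units

    print(dense_params, liquid_params)  # roughly a 200x reduction in head size
    ```

    Either way, both heads still depend on the full CNN in front of them, which is the point @WizardofWar and @keep-ukraine-free are making.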

  • @LoyBukid · 10 months ago

    Is there a Python library available for NLP tasks? :)

    • @nizardhahri493 · 4 months ago

      There is a library called "ncps"; you can design liquid neural networks with it. As for NLP, I am not sure
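    For readers asking how the cells themselves work: the core liquid time-constant (LTC) dynamic can be sketched in a few lines of NumPy. This is a simplified, hand-rolled Euler integration for illustration only; it is not the `ncps` library's implementation, and all weights and sizes below are made up:

    ```python
    import numpy as np

    def ltc_step(x, inp, dt, W_in, W_rec, b, tau, A):
        """One Euler step of a simplified liquid time-constant cell:
            dx/dt = -x / tau + f(x, I) * (A - x)
        The gate f, a sigmoid of the input and recurrent drive, makes the
        effective time constant depend on the input -- the "liquid" part."""
        f = 1.0 / (1.0 + np.exp(-(W_in @ inp + W_rec @ x + b)))
        return x + dt * (-x / tau + f * (A - x))

    # Tiny demo: 4 neurons driven by a 2-dimensional sinusoidal input.
    rng = np.random.default_rng(0)
    n, m = 4, 2
    x = np.zeros(n)                       # neuron states
    W_in = 0.5 * rng.normal(size=(n, m))  # input weights (arbitrary)
    W_rec = 0.5 * rng.normal(size=(n, n)) # recurrent weights (arbitrary)
    b, tau, A = np.zeros(n), np.ones(n), np.ones(n)

    for t in range(200):
        inp = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
        x = ltc_step(x, inp, dt=0.1, W_in=W_in, W_rec=W_rec, b=b, tau=tau, A=A)
    ```

    With these constants the states stay bounded in [0, 1] regardless of input, which matches the stability property the talk attributes to liquid networks.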

  • @CARLOSINTERSIOASOCIA · 1 year ago

    Just to clarify a misconception: humans are animals... it makes no sense to say "looking brains but not even human brains, animal brains"

  • @MaJetiGizzle · 1 year ago +2

    Now my only question is whether these LNNs are as scalable or capable of being put into an architecture that is as scalable as a transformer?

    • @keep-ukraine-free · 1 year ago +4

      Your question may be based on a misunderstanding many have about information processed by artificial & biological neural networks. NNs must use one of many approaches to processing information, since information is multi-dimensional (in space & time). Liquid NNs are best for processing time-based data (audio, video, dancing/robotics, even typed/spoken language). So your question is equivalent to asking: "can an LNN *_listen_* to a photo and tell me what music it shows?". These questions (yours, and this one) don't make sense -- because photos don't have info encoded in the time domain. This is why different parts of our brain -- and why different artificial neural networks -- use very different structures (topologies/designs - "algorithms").
      So one answer to your question is that transformers and LNNs already share a critical characteristic -- they both deal in (have & use) "attention". They already share that part.
      And yes, LNNs do scale, in fact better than traditional deep neural networks, since the "liquid" part (which is the "decision-making" part, the most important part) is usually tiny (in his talk, Hasani said their LNN had only 19 "liquid" neurons). Isn't it great that tiny scales so well!?

  • @MarkusSeidl · 1 year ago +10

    It may be noteworthy that this design was found with a human brain and not with a deep neural network. Given all the hype, it seems necessary to point this out.

    • @keep-ukraine-free · 1 year ago +7

      This design was inspired by a small worm's "brain" (nervous system), the nematode C. elegans, which has only approx. 300 neurons. So no, not from the human brain. The speaker Ramin Hasani has explained this before. His team's work shows the power of modeling artificial neural networks on even very simple biological systems. The features exhibited by their "liquid" neural networks exist (but using different molecules/structures) in most simple-to-complex animals, including humans.

    • @StefanReich · 1 year ago +1

      @@keep-ukraine-free He said "WITH a human brain", so I think it means a human found it 😃

  • @AdrienLegendre · 2 months ago +1

    The comparison between "classical statistics" and AI was unexplained and likely misleading. If a speaker compares two graphs, the speaker should explain the underlying math behind why the graphs differ. The failure to do this, as in this presentation, suggests the speaker has an incomplete knowledge of this field of study and reduces the credibility of the speaker's claims.

  • @joaogoncalves1149 · 16 days ago

    The presenter doesn't explain what LNNs are or how they work. He just gives some crumbs. Useless.

  • @frun · 1 year ago +2

    AI 🤖 can create children to achieve smarter design.