Liquid Neural Networks: A New Idea That Allows AI To Learn Even After Training

  • Published Jul 8, 2023
  • Daniela Rus is the Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT and the Andrew (1956) and Erna Viterbi Professor. With a passion for advancing the field of robotics, Rus has made significant contributions to areas such as autonomous vehicles, swarm robotics, and distributed algorithms. Her research and leadership have earned her numerous accolades, establishing her as a prominent figure in the world of robotics and artificial intelligence.
  • Science & Technology

Comments • 283

  • @LindsayHiebert · 11 months ago · +124

    Kudos to Daniela Rus and the Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT! Excellent work and innovation!

  • @justinlloyd3 · 11 months ago · +14

    No paper link, no other links, no name for the paper. Thanks, Forbes.

    • @LabGecko · 11 months ago · +4

      I know. While I thank them for posting the video, not crediting or posting sources in this day and age of journalism is pathetic.

    • @shubhamdhapola5447 · 11 months ago · +7

      Maybe you're preoccupied with some higher-priority task. Let me help you out. Here you go, fella:
      arxiv.org/pdf/2006.04439.pdf
      P.S. It just required typing literally 25 characters (including whitespace),
      "liquid networks mit csail"
      into the search bar of your preferred search engine and then checking the top 2-3 links for the exact match.
      As far as giving "credit and recognition" goes, skipping it is utterly unprofessional and unethical on Forbes' part.

  • @JChen7 · 11 months ago · +160

    Getting strong “Any sufficiently advanced technology is indistinguishable from magic” vibes right now thanks to the wizards at MIT. Wish I was smart and dedicated enough to learn what's happening here. Looks amazing.

    • @mattjohnson1775 · 11 months ago · +4

      I agree 100%. Magic is very real... but it's divination/witchcraft. Ask Gordie Rose; he doesn't sugarcoat it, depending on who he's speaking to.

    • @CrazyAssDrumma · 11 months ago · +4

      Me too, my friend. The good news is there really is no point now, given how much time and effort it would take you (and me). By the time you got there, we'd have SuperAI anyway lol 😂

    • @LabGecko · 11 months ago · +22

      The better news is that anyone can learn it. There are enough resources now that anyone can dive into AI, if that is your passion, and learn from the ground up. I did, and now I've trained several AIs (technically, machine learning models) and look forward to getting my hands on (or building) a liquid network soon!
      For reference, I started with Google's AI courses, but SentDex here on YouTube does a great job with several of his tutorials too. Also, 3Blue1Brown does a great job of explaining anything math-related, and he has a series on machine learning.

    • @AnExplorer1000 · 11 months ago

      @@LabGecko Do you write a blog about your projects or have public GitHub projects?

    • @LabGecko · 11 months ago · +1

      @@AnExplorer1000 Hadn't thought of that. I'm retired with PTSD, though, so my projects are pretty hit-and-miss; they might not be something people would want to follow.

  • @thearchersb · 11 months ago · +11

    I don't understand anything but I completely agree with her.

  • @crackrule · 11 months ago · +69

    This will make learning faster and networks better; the vision-based object detection seems much clearer. Hope this will be out soon, or maybe we need to push this into TensorFlow or PyTorch soon for easy accessibility across the big frameworks. The more experiments are performed with this, the better the outcomes we'll see in the real world.

    • @arlogodfrey1508 · 11 months ago · +3

      Move fast break things let's go

    • @Supreme_Lobster · 11 months ago · +4

      I already did some testing with ego localization (finding your relative coordinates at every frame by watching a video) and it seemed promising

    • @whannabi · 11 months ago

      ​@@arlogodfrey1508 depends what relies on the things you wanna break...

    • @crackrule · 11 months ago · +1

      @@Supreme_Lobster Can you share a GitHub link?

    • @Supreme_Lobster · 11 months ago · +1

      @@crackrule Search for the repo CfC_LiquidNetwork-DeepVO; I use the same username as on here. Can't post the URL because it gets deleted.

  • @concernedspectator · 11 months ago · +15

    Absolutely incredible.

  • @energyeve2152 · 10 months ago · +1

    Very cool. I look forward to the many applications this can be used in. Thanks for sharing.

  • @dreamphoenix · 11 months ago · +4

    Fascinating. Thank you.

  • @francisdelacruz6439 · 11 months ago · +10

    Really important work. Does it scale to 1000x the neurons? Cooperative networks?

  • @thomasfreund-programandoha961 · 11 months ago · +5

    Wow! This is amazing. Thanks for sharing

  • @ChesterSings · 11 months ago · +5

    Amazing!! Thank you ❤

  • @SilenceOnPS4 · 11 months ago · +3

    Can someone please inform me of the advantages of LNNs (if they can be used) for diffusion models such as Stable Diffusion, DALL-E, and Midjourney?
    If I'm right, these diffusion models use DNNs?

  • @the_curious1 · 11 months ago · +3

    Very interesting and a good presentation, thank you!

  • @MathPhysicsEngineering · 11 months ago · +2

    No link for the original paper in the description?

  • @user-qw1rx1dq6n · 11 months ago · +2

    Incredible. In my limited understanding, this seems to perform almost the same process as self-attention, in the sense that the effects are the same, but it's just more direct about it.

  • @LearnAINow · 11 months ago · +7

    How does this network react when confronted with outside noise that directly affects the trained task? How does this compare with other kinds of networks? Thank you, I'd love to know more.

    • @revanthchouhan3068 · 4 months ago

      Same doubt @LearnAINow. If anyone’s into this, please share some knowledge.

  • @mahmga1 · 11 months ago · +2

    Unbelievably groundbreaking from a lay view. I was just saying the other day that there had to be a better approach that redefines the NN.

  • @web3global · 9 months ago

    WOW! Amazing, thanks for sharing Forbes! 🚀

  • @Viewpoint314 · 11 months ago · +47

    What is the difference between a neural network and a liquid neural network?
    Unlike traditional neural networks, which only learn during the training phase, a liquid neural net's parameters can change over time, making them not only interpretable but also more resilient to unexpected or noisy data. (Apr 19, 2023)

    • @michaelm6928 · 11 months ago · +1

      This shouldn't give the smooth results she showed, should it? Is it learning on the validation set?

    • @vaakdemandante8772 · 11 months ago · +6

      How is the network's ability to change its parameters over time connected with it being explainable? Isn't learning "after learning" still learning? It's a lot of claims for really not a lot of substance. Where's the link to the publication? Did anybody replicate those findings? Looks like a bunch of PR fluff, really.

    • @katherandefy · 11 months ago · +1

      @@vaakdemandante8772
      It's because they simplified the structure and added a tree-like wiring, which makes the parameters easier for us and the machine to focus on than data computed in one large block.

    • @rpcruz · 9 months ago

      The parameters do NOT change after training. The neural network output is a derivative; that is, the outputs are relative to each other. The network's focus is on how the new input influences the output relative to the previous one.

    • @gpt-jcommentbot4759 · 7 months ago

      @@rpcruz Reread the comment

  • @jeanbernardmbarga3265 · 11 months ago · +2

    Great presentation

  • @md.adnannabib2066 · 11 months ago

    That's the most impressive thing I have ever seen. Kudos to the researchers.

  • @philforrence · 11 months ago · +56

    Curious how the field will receive this. Let's get her on Lex Fridman!

    • @The-Singularity-M87 · 11 months ago · +6

      Before I saw your comment, I had already seen this video and posted a link on one of Lex Fridman's videos, since I don't know how to email him directly. But that's the same thing I thought.
      Great minds, right? 👍

    • @philforrence · 11 months ago · +4

      @@The-Singularity-M87 You and me, Myron. The greatest of minds!

    • @electrolove9538 · 11 months ago · +1

      That autonomous driving work was one of Lex's projects, right?

    • @katherandefy · 11 months ago · +1

      Gosh yeah I would love to hear more than we can get in a short talk.

    • @chrisf1600 · 11 months ago · +6

      Oh god please no. "So, uhhhh, can a liquid network ever fall in love ?"

  • @user-qw1rx1dq6n · 11 months ago · +4

    It is unbelievable what they managed to do with 20,000 parameters. I must learn this technique fast.

  • @coalkey8019 · 11 months ago · +1

    Wow. That is absolutely huge.

  • @SuperMaDBrothers · 11 months ago

    9:40 love this cut

  • @joeriben · 11 months ago · +13

    Amazing. Moving human targets can be tracked by drones regardless of place and season. Isn't that what we have all been waiting for?

    • @daveloomis · 11 months ago

      😅

    • @ps3301 · 11 months ago

      The perfect terminator machine, which can track you all day long. The utopia we're all looking for.

  • @benealbrook1544 · 11 months ago · +4

    Amazing, this is revolutionary.

  • @PixelPulse168 · 11 months ago

    Thanks for sharing

  • @KevonLindenberg · 11 months ago · +53

    This is the innovation in AI that is going to change our world beyond recognition.

    • @Gabcikovo · 11 months ago

      Yes

    • @Gabcikovo · 11 months ago · +1

      9:38

    • @Gabcikovo · 11 months ago

      10:00

    • @semitope · 11 months ago

      It's not AI, or at least it's not intelligent. It's really fancy real-world data processing, like feeding an algorithm data on the stock market and having it do its best to make predictions, except this time they do it with real-world images, human-generated information, etc. It's good they are able to get computers to produce meaningful calculations from real-world data, but it's to be expected. How do you get a computer to navigate the real world? Feed it a bunch of images and make it capable of combining all of that data to process what the camera is capturing. Next you make it flexible enough to handle data outside what was fed in and hope to minimize errors.
      Come to think of it, the idiots who thought it was OK to let computers mess around with the stock market had better watch what people do there with these new pieces of code.

    • @hufficag · 11 months ago

      Yes

  • @dairop3220 · 11 months ago · +1

    Is it fundamentally different from the NEAT algorithm?

  • @bhargavsai2449 · 11 months ago

    Excellent; blown away.

  • @j.d.4697 · 11 months ago · +25

    Wow, from 100,000 to 19 neurons!
    Can those liquid neurons be similarly scaled?

    • @ricosrealm · 11 months ago · +4

      I don't think that's what she meant. She said those are 19 liquid networks, which likely comprise thousands of neurons as well.

    • @alaapdhall8541 · 11 months ago · +15

      @ricosrealm Look at the figure she showed. Essentially she replaced the FC layers, with their ~100k neurons spread across layers, with her 19 liquid neurons, divided into three groups: 12 inter-neurons, 6 command neurons, and 1 final motor neuron. Not 19 different networks; she calls networks built from these liquid neurons "liquid networks".
      That's what is impressive: instead of 100k neurons she used only 19. However, as mentioned in the video, each neuron is a system of differential equations instead of the typical f(a(wx+b)), so each neuron's computation is more complex, but it still beats 100k. Also, she has not replaced the CNN; she only replaced the FC layers.
      Ideally a fully convolutional model like YOLO should still do fine, as such models don't really have FC layers, but in attention-based transformers this could be useful, since they often use FC layers.
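
      For anyone who wants to poke at that wiring: the authors have open-sourced these models in a package called "ncps" (Neural Circuit Policies). A minimal sketch, assuming the AutoNCP/LTC API as described in the project README (signatures may differ by version, and the 32-dim input here is a made-up stand-in for the CNN features):

        # pip install ncps torch
        import torch
        from ncps.wirings import AutoNCP
        from ncps.torch import LTC

        # 19 neurons total, 1 of them the motor (output) neuron,
        # wired sparsely into inter/command/motor groups
        wiring = AutoNCP(units=19, output_size=1)
        model = LTC(32, wiring)              # 32 = assumed CNN feature size

        x = torch.randn(1, 100, 32)          # (batch, time, features)
        steering, state = model(x)           # steering: (1, 100, 1)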

    • @AnExplorer1000 · 11 months ago · +6

      @@alaapdhall8541 Hi there! I don't comment very often, but I have a couple of questions for you if you don't mind. You seem very knowledgeable about these things. As for myself, I love mathematics and work on it whenever I can, but I don't know anything about LNNs, ML, or AI. How did you reach a point where you understood all these things you're commenting about? Did you perhaps go through Andrew Ng's course or something similar? What's your educational background? Thanks in advance.

    • @GreenCowsGames · 11 months ago · +11

      @@AnExplorer1000 If you binge a bunch of YouTube videos on deep learning you can learn a lot; then the jump to reading papers will not be that big. Channels like Yannic Kilcher are really good. If you want to implement things, there are lots of resources on the PyTorch website itself.

    • @han_0210 · 11 months ago · +2

      As she mentioned in the video, you can see the liquid time-constant equation for each neuron, and she also talked about how they change the wiring of the network.

  • @eduardomanotas7403 · 11 months ago · +1

    Hey, any link to the paper or a Git repository?

  • @vladyslavkorenyak872 · 11 months ago · +27

    This feels like the brain trying out new neurons to improve its functioning!

    • @omop5922 · 11 months ago · +2

      This video was obv not for u mate.

    • @jakebrowning2373 · 11 months ago · +2

      ​@@omop5922who's it for?

    • @technolus5742 · 11 months ago · +1

      Exactly. Changing the internal configuration of the neurons during training seems to allow for more efficient and powerful models. This is the kind of fundamental breakthrough that this field needs in order to continue making progress beyond larger and larger models.

  • @muhonbhuiyan8687 · 11 months ago · +1

    What is the problem if you use sonar instead of a camera 📷 as input? 🤔

  • @ItzGanked · 11 months ago

    Good for alignment, if the architecture works well.

  • @education.online_frevryone · 10 months ago

    I was wondering a few days ago about black boxes and now we have liquid neural networks. Amazing 😍

  • @superuser8636 · 11 months ago · +10

    CSAIL is the premier AI lab at MIT! I know because I worked there developing their AI infrastructure 😂 I really dig this experiment and talk.

  • @sydneyrenee7432 · 6 months ago

    When is this going to be used for NLP and AGI?

  • @kkviks · 11 months ago

    Interesting!

  • @thorvaldspear · 11 months ago · +54

    It's interesting how this team has been talking about this invention for over a year, and yet has failed to gather significant attention despite the revolutionary qualities of liquid neural networks. Perhaps there is a catch that they are not telling us about?

    • @sidneymonteiro3670 · 11 months ago · +3

      MARKETING!
      The commercial products capture the headlines.
      It has nothing to do with having a catch.

    • @BB-uy4bb · 11 months ago · +29

      @@sidneymonteiro3670 Ah, the team members of the big labs/companies always scan/read through all the new papers; if this were so much better than what we currently have, then everyone would use it. Science in this area does not need marketing at all.

    • @jeremykothe2847 · 11 months ago · +3

      @@sidneymonteiro3670 So why has nothing been commercialised?

    • @thad1300 · 11 months ago · +19

      @@jeremykothe2847 You guys are talking as if all the currently deployed work were still in the research phase a year ago lol. Even the pathway from "Attention Is All You Need" to recent LLMs took a few years, and that's really fast.
      The recent explosion in AI is really hardware-driven: the realization that as long as you just add more compute power, your models become a lot better. But the fundamental research on AI/ML was done decades ago. We're hitting a hardware compute wall soon, and further improvements will be made on the algorithmic side.

    • @NeonTooth · 11 months ago · +5

      I think there are much better ways of conceptualizing neural networks in development that will drastically lower the compute necessary to run them, and liquid neurons are certainly an example of this. I mean, you can run it on a Raspberry Pi. That's the kind of thing that will make models far more accessible to open-source folks, as well as strip the larger tech companies of their monopolies over the models.

  • @pytebyte · 11 months ago · +1

    Wondering why in the first driving example the camera input stream is quite noisy, but when they switch to the liquid network it's smooth. Anyway, interesting work.

    • @LabGecko · 11 months ago

      First, a few assumptions: 1) I think they probably developed the driving stack in-house, so it isn't likely to have the same richness of data as something like Tesla or Waymo. 2) I doubt they're employing the same level of computational power as Tesla, Google, et al. for this project. I haven't read their paper on the topic, so this is completely off the cuff; take it as such.
      Given that, I think the first is grainy simply because the model's data is noisier than it had to be, possibly not finely tuned, so it has some inherent bias issues that make it pay attention to more details than it needs. As for the liquid net, that's simply the nature of derivative math: it (tends to) smooth out lower-level formulas, so it makes sense to me that the image-recognition result is more gradients than pixel-sharp black-and-white decisions.

    • @pytebyte · 11 months ago · +1

      @@LabGecko Thank you for the answer. Maybe I'm getting it completely wrong, but when I read "camera input stream" I assume we are talking about untouched data coming directly from the car's camera, so whatever model processes this data in the next step gets the same quality. Their presentation gives me the feeling they wanted to show more noise in the attention map, and that's why they added some extra noise to the camera input feed.

    • @LabGecko · 11 months ago

      @@pytebyte Good chance you're right, good point. I certainly can't be sure, not being there myself. :D

    • @rheedan · 11 months ago · +3

      If I understand it correctly, the noise in the attention map isn't noise from the camera feed. The noisy bright parts of the image represent the attention weights of the model; in other words, the bright pixels represent areas the model thinks are important for making its predictions. The problem with the CNN is that it pays attention to lots of things that humans wouldn't pay attention to, while the liquid network has a tighter focus and pays attention in a way that makes sense to us.

  • @stanleyashiwel7047 · 11 months ago

    Thank you

  • @imranbaloch3414 · 11 months ago

    Excellent achievement!

  • @hellucination9905 · 6 months ago

    I'm no expert, just curious, but to me it looks like (1) a form of continuous self-reflexivity regarding the specific neuronal changes produced within the liquid network in relation to the produced output effects; and (2) a mapping of the causal relationships between these internal neuronal reconfigurations over time and the specific output effects they produce.

  • @GBlunted · 11 months ago · +1

    No link to her paper?? That's messed up...

  • @Kugelschrei · 11 months ago · +3

    So basically liquid networks are neural networks with differential equations instead of "standard" sigmoid activation functions?
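
    Roughly, yes: instead of a fixed pointwise activation, each neuron's state follows a differential equation whose effective time constant is gated by the input. A toy sketch of the liquid time-constant (LTC) update from the paper (arxiv.org/pdf/2006.04439.pdf), using its fused-Euler step; sizes and initializations here are illustrative, not the authors' code:

      import torch

      def ltc_step(x, I, W, U, b, A, tau, dt=0.05):
          # x: (hidden,) neuron states; I: (inputs,) current input sample
          f = torch.sigmoid(x @ W + I @ U + b)           # input-dependent gate
          # fused ODE solver step:
          # x' = (x + dt * f * A) / (1 + dt * (1/tau + f))
          return (x + dt * f * A) / (1 + dt * (1.0 / tau + f))

      hidden, inputs = 19, 8
      x = torch.zeros(hidden)
      W = 0.1 * torch.randn(hidden, hidden)
      U = 0.1 * torch.randn(inputs, hidden)
      b = torch.zeros(hidden)
      A = torch.ones(hidden)                             # per-neuron bias vector
      tau = torch.ones(hidden)                           # per-neuron base time constant

      for _ in range(100):                               # unroll over a sequence
          x = ltc_step(x, torch.randn(inputs), W, U, b, A, tau)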

  • @ophthojooeileyecirclehisha4917 · 11 months ago

    thank you

  • @erikdong · 11 months ago

    Bravo!

  • @johnniefujita · 11 months ago · +1

    I learned about liquid neural networks something like 3 years ago... but I just couldn't find any base model to implement. Does anyone know where to find a repository with code for them?

    • @Tbone913 · 11 months ago · +2

      Check the inventor's repo.

  • @Jukau · 10 months ago · +1

    Isn't that another huge step in the direction of AGI?

  • @aboucard93 · 11 months ago

    This is amazing

  • @Asha-td7bm · 11 months ago

    Amazing.

  • @SaiyanGokuGohan · 11 months ago · +2

    Neuromorphic computing is where it’s at, using spiking neural networks.

    • @chrisf1600 · 11 months ago

      Great comment. It's striking that none of the current ANN approaches uses "spikes" of activation; that's totally unlike how our brains work. Presumably, evolution uses spiking neurons for a reason. I wonder if the AI industry is heading down yet another blind alley?

  • @mariomariovitiviti · 9 months ago

    This is huge.
    Robust under data-distribution variance, by targeting more task-relevant features. This means less data is necessary for continuous learning, which is the only (and super costly) way to keep a model in production.

  • @prilep5 · 11 months ago · +3

    Imagine Lewis Hamilton training an AI bot, if only a computer could scan his brain and eye movements to learn his decision making.

  • @GT-tj1qg · 11 months ago

    This talk seems a bit off. Any of that testing she showed could be completely unfair and we would have no way to know without looking at her data. How many neurons did they give the deep nets vs their liquid nets? Did they use the same training data for both? Did they choose an unusually small dataset to make the deep net underperform?

  • @PeterMoueza · 11 months ago · +2

    7:20 causal

  • @sumansaha295 · 11 months ago · +5

    Hmm I thought the hype had died down on these. Hopefully something good comes out of it so people don't need server farms to do AI research.

  • @FrankAbagnale1 · 11 months ago

    How do I implement this in TensorFlow, PyTorch, etc., as a programmer? My brain fried at the math part.

  • @broyojo · 11 months ago · +19

    I would have liked to see a transformer model comparison, seeing as it is the current state of the art for many AI problems

    • @michaelm6928 · 11 months ago · +5

      You can probably make the transformer “liquid”

    • @DS-nv2ni · 11 months ago · +13

      You cannot make a transformer "liquid"; embeddings are not compatible with this approach.
      Regarding transformers being the SOTA for AI problems, I'm not sure about that. They work great for NLP problems and generative content creation, but those overlap with only a fraction of the issues that are important to solve through AI, and not even the most important ones.
      On top of that, the results are not good enough. Transformers have no causality to start with, and even if liquid networks seem to have causality, they can't really "understand": a device that can represent causality doesn't necessarily have an understanding of it (like when you program a computer). So transformers are two steps behind what we need, and liquid networks seem to be a step forward in that direction, yet they can't solve the problems at which transformers excel.
      The "understanding" step we are missing is more than a step; it's a long run, and it doesn't look close at all, probably decades from now. At the moment AI is mostly hype, unfortunately.
      EDIT: Typos.

    • @skadday · 11 months ago · +1

      @@DS-nv2ni You don't have a single clue what you are talking about.

    • @DS-nv2ni · 11 months ago · +13

      @@skadday I think I have a clue and an informed opinion, after spending twenty years as a researcher and engineer on AI systems. On the other hand, someone who, like you, jumps on a topic saying that others have no clue, without even pointing out why in a reasonable way, is exactly the type of person who doesn't understand the topic but has a strong bias to defend.
      I can't even imagine how I may have triggered you; perhaps it was the part about AI being hype, judging from my previous experience of people ranting about that.
      I hope that's not the case for you and that you actually have some valid point to make; otherwise, I suggest you find a better way to spend your time.

    • @skadday · 11 months ago

      @@DS-nv2ni You clearly don't even have an argument.

  • @The-Martian73 · 11 months ago · +2

    The growth of AI is exponential, meaning it is an industrial revolution. I am not freaked out though; things really are, and will keep, going naturally as expected!!

  • @tallwaters9708 · 11 months ago

    But just because that first car is focused on, e.g., the side of the road (which is perhaps a heuristic visualization anyway), that's not necessarily bad. What if, for example, there's a kid on the side of the road? I'd want the network to be on the lookout for that!

  • @kipling1957 · 11 months ago

    Relevance realization

  • @balakrishnaprabhunallendra999 · 11 months ago · +10

    She should have been given the opportunity and a facility to sit down and present her subject matter!

    • @DistortedV12 · 11 months ago · +1

      She needs a chair for sure

    • @fog3911 · 11 months ago · +2

      @@DistortedV12 aw man

    • @raphaelcardoso7927 · 11 months ago · +3

      Maybe she was offered a chair and declined; we all know that standing during a presentation makes it easier to hold attention.

    • @gaetanomontante5161 · 11 months ago · +1

      My dear friend, that is irrelevant. I am glad she stood tall, very tall, when presenting us with such an innovation, one that has the potential to truly change many of the "old" ways we look at things. I am a journeyman, but I was totally awed by the ability of this new approach, liquid neurons, to deliver effective solutions to many problems. I want to kiss her own liquid neurons with true Human love.
      One thing seems a little strange at the time of my writing: despite there having been 10,502 views, I see only 257 likes, including mine, and ONLY 38 previous comments, and, may I add, most of them are perfunctory and at least one is totally inane.

    • @escesc1 · 11 months ago

      I doubt she was not offered a chair. She simply preferred to stand up :D

  • @Aldraz · 11 months ago · +3

    I mean this is cool and all, but can it be applied to language models?

    • @LabGecko · 11 months ago · +1

      Of course; it's just a different dataset. The neurons' structure/math just allows them to learn post-training, which should be a definite advantage for LMs.

    • @Aldraz · 11 months ago · +1

      @@LabGecko I am not so sure about that; transformers are very different. Even if it worked, it may not work as efficiently.

    • @LabGecko · 11 months ago · +1

      @@Aldraz We're talking about predicting language tokens, right? To me they're just a gradient over a list of sounds with a statistical chance of being used based on their neighbors, and image data is effectively a grayscale gradient with a statistical chance of being useful based on what is around it. Am I missing a piece?

    • @Aldraz · 11 months ago · +1

      @@LabGecko I guess you are correct, but to my limited understanding you wouldn't be able to easily switch and use transformers as before, because most transformers are not RNN-based. But I could be wrong.

    • @clray123 · 11 months ago · +1

      @@LabGecko The current transformer-based algorithms process discrete sequences of tokens. CNNs were tried in the beginning, and they struggled with modelling language, in which there are strong time/causal dependencies between tokens at different distances in the sequence that CNNs can't capture well. RNNs did OK in principle, but they did not scale because they could not be trained on an entire sequence of tokens in parallel like transformers can today. I have no idea whether liquid NNs suffer from the same problem, but the comparisons to LSTMs and RNNs do not bode well.

  • @greyowlaudio · 11 months ago · +1

    The entire field of AI is at risk with companies now pay-walling off their data and/or charging obscene amounts to use it. That's a major short-term spectre that will need to be dealt with.

  • @nias2631 · 11 months ago · +1

    Are these based upon Liquid State Machines or entirely different?

    • @exmachina767 · 11 months ago · +2

      According to ChatGPT: “Liquid neural networks (Liquid NNs) and liquid-state machines (LSMs) are closely related concepts, often used interchangeably or as variations of the same idea. Both Liquid NNs and LSMs emphasize the utilization of continuous dynamics and interaction to process information.
      Liquid-state machines were introduced as a type of recurrent neural network inspired by the behavior of liquid matter. In LSMs, the computational units, often called "liquid neurons," have continuous activation dynamics governed by nonlinear differential equations. These units interact with each other through dense and recurrent connections, forming a liquid-like medium. The dynamics of the liquid neurons allow for the computation of temporal patterns and the processing of time-varying information.
      The term "liquid neural networks" is often used to refer to a broader class of unconventional neural network architectures that share similarities with liquid-state machines. While LSMs specifically emphasize the liquid metaphor and dynamics, liquid neural networks can encompass a wider range of architectures that incorporate liquid-like properties.
      In essence, liquid neural networks and liquid-state machines are closely related concepts that aim to harness the power of continuous dynamics and interaction in neural computation. The distinction between them may lie in the specific architectural variations, training algorithms, or implementation details, but they share the common goal of utilizing liquid-like properties for information processing.”

    • @nias2631 · 11 months ago

      @@exmachina767 So it's in the reservoir computing family. LSMs have been around since 2002 or so. If this group is rebranding it, I guess I will have to go through their paper and see what is so different.

  • @blengi · 11 months ago

    What's the killer app for this, versus AI products like FSD, GPT-4, Midjourney, AlphaFold, etc., that are already changing the world?

    • @samaBR_85 · 11 months ago

      If it gives good results and uses less energy, it's already killer!

    • @blengi · 11 months ago

      @@samaBR_85 So beyond self-aggrandizing claims, the market must already be implementing this self-evidently superior technology in some awesomely popular application (or twenty) that I can download or read drooling reviews about. What are they?

    • @GT-tj1qg · 11 months ago

      I don't know the killer app as you describe it, but this is a clue to finding it: the main difference of liquid nets is that they try to find fundamental relationships, rather than statistical clusters. This has the benefit of being less noisy and more consistent, but at the cost of potentially learning a completely wrong solution to the problem.

    • @gpt-jcommentbot4759 · 7 months ago

      @@blengi There is no "app". AI takes a while to actually get recognized in the market; just take a look at GPT-3, which was widely known in ML but not really talked about outside of it. Besides, apps are probably not going to pick this up and will just keep using the same generic architecture over and over again, until something truly revolutionary arrives; then they will use that over and over again.

  • @Glowbox3D · 11 months ago

    Stupid question: if Elon and his team weren't aware of this method, and then became aware of it while already years into developing their own systems, would they potentially re-route their own models, or switch them out entirely for a new method like LNNs? Or are they so far in on their own models that they wouldn't dare touch a new system? I would assume that if a *clearly* better technology comes around, innovators are sort of 'forced' to make the change as well.

    • @TimothyOBrien6 · 11 months ago

      They will switch to using this. Their system is modular enough that they can swap out the black-box neural network for another with the same inputs and outputs, and it shouldn't be very hard.

    • @GT-tj1qg · 11 months ago · +1

      @@TimothyOBrien6 Where did you learn that, I wonder? Everything I've read would indicate that Tesla has dedicated vast computational resources to their existing deep neural network system, and discarding it would waste a significant financial investment.

    • @GT-tj1qg · 11 months ago · +1

      Glowbox, I'm not sure this is what the creators are claiming it to be. I suspect it has more limitations than they are letting on.

    • @technolus5742 · 11 months ago

      @@GT-tj1qg My guess is that they will be forced to change if a better route becomes apparent; continuing to sink money into something that doesn't work very well would be the worse alternative.
      Besides, the data they have collected and their current model can be used to train the new model, avoiding the issue of having to start from scratch.

  • @MrChaluliss · 11 months ago · +2

    How y'all gonna not link the papers relevant to this talk?

  • @Voltlighter · 11 months ago · +2

    So they work more similarly to real neurons then?

  • @andybrice2711 · 11 months ago · +1

    Surely it's entirely reasonable that an autonomous vehicle would be paying attention to bushes at the side of the road? There could be potential hazards obscured by those bushes, like people or animals behind them.

    • @GT-tj1qg · 11 months ago · +1

      In an advanced system, perhaps that would be a very good feature to include. But I suspect the models they were demonstrating here were just designed for lane-keeping (finding the road and staying on it).

  • @Sooyush · 11 months ago

    Daniela & Team❤❤❤

  • @guten5221 · 9 months ago

    Please make something like Skynet.

  • @yogiwp_ · 10 months ago

    This seems like a bigger breakthrough than anything else in the AI news of the past couple of months?

  • @alirezagoudarzi1915 · 11 months ago · +2

    Amini and Hasani, two Iranians, are leading this project. It amazes me how these guys are changing the world!! 👏👏

  • @MrChaluliss · 11 months ago · +7

    Kind of strange just how quickly she's going through these slides; what the heck else is so important that this needs to be rushed? This seems like a significant breakthrough. Maybe I am overreacting, but I get the sense this is one of those big steps forward that may make a big difference in enabling AI to be used in a variety of problem cases.

    • @technolus5742 · 11 months ago · +4

      She likely needs to stay within the time allocated for her talk. While this seems like a breakthrough, this is only a talk.

  • @sachinknight19 · 10 months ago

    ❤❤❤

  • @redblue2644 · 11 months ago · +2

    Did she say liquid networks' solutions adapt better because the equations are in effect less complex, and so they focus on less?

    • @LabGecko · 11 months ago · +1

      That isn't what I heard. My understanding is that they adapt better because part of the current data gets re-introduced on a continuous basis, but is smoothed out by the derivative functions. But I'm open to being corrected.

  • @StephenRoseDuo · 6 months ago

    Isn't this from ~4 years ago?

  • @Viewpoint314 · 11 months ago · +1

    What is a liquid neural network?
    Liquid Neural Networks: Definition, Applications...
    A liquid neural network is a time-continuous recurrent neural network (RNN) that processes data sequentially, keeps the memory of past inputs, adjusts its behavior based on new inputs, and can handle variable-length inputs to enhance the task-understanding capabilities of NNs. (May 31, 2023)
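
    For reference, the governing equation from the paper (arxiv.org/pdf/2006.04439.pdf) behind that definition, where f is a small neural network with parameters \theta, I(t) is the input, \tau is the base time constant, A is a bias vector, and \odot is elementwise multiplication:

      dx(t)/dt = -[ 1/\tau + f(x(t), I(t), t, \theta) ] \odot x(t) + f(x(t), I(t), t, \theta) \odot A

    The effective time constant 1/(1/\tau + f) changes with the input, which is the "liquid" part of the name.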

  • @Madayano · 10 months ago

    👍

  • @technowey · 11 months ago · +1

    Neural nets that learn after training are *not* a new idea. I have a book about that, with the algorithms, that I bought in 2015.
    If I had access to my library now, I'd post the title.
    I'll watch this video to see if these are similar algorithms.
    I'm skeptical about the claims of "causality." Even adaptive networks find patterns in data that might show causality; however, they will also find correlations that do *not* necessarily reflect a causal relationship.

    • @technolus5742 · 11 months ago

      Looking at those attention graphs, it does seem to do well regarding causality, sifting better through the noise.

    • @jebprime · 11 months ago

      It creates a representation of the environment that should converge to an equilibrium/some sort of pattern, unless new input changes its internal representation of the environment.
      I think that's what they use to support their notion of causality.

    • @georgetrench2809 · 10 months ago

      The thing is that, with the incredible number of neurons and all the interactions that take place between them, it is very difficult, if at all possible, to fathom how a transformer network arrived at its conclusions, whereas with the liquid neural network this all becomes quite evident...

  • @jackbauer322 · 7 months ago

    Yeah, well, basically they mimic focusing like we do and are robust to context change... at last!

  • @Gabcikovo · 11 months ago

    8:26

  • @dag410 · 11 months ago

    🎉

  • @krox477 · 11 months ago · +1

    What is the "liquid" here?

    • @LabGecko · 11 months ago

      If I had to guess, it's the derivative-based smoothing of the equations handling the input.

    • @shubhamdhapola5447 · 11 months ago · +2

      It's the ability of the network to learn new patterns during inference on real-world test data. The canonical way, for traditional networks, is to perform all the "learning" during the training phase and to become rigid/static once sufficiently trained.
      Another aspect of it being "liquid" is that it can handle variable-length time-series data.
      Link to the paper:
      arxiv.org/pdf/2006.04439.pdf
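
      One concrete way to see the variable-length point: because the state is integrated over an explicit time gap, you can feed samples with uneven spacing by passing each gap to the update. A self-contained toy sketch (not the paper's model, just the continuous-time idea; all names here are illustrative):

        import torch

        def step(x, I, dt, tau=1.0):
            # decay the state toward an input-driven target over a gap of dt
            target = torch.tanh(I)
            alpha = torch.exp(torch.tensor(-dt / tau))   # exact decay for this gap
            return alpha * x + (1 - alpha) * target

        x = torch.zeros(4)
        for dt, I in [(0.1, torch.randn(4)), (0.5, torch.randn(4)), (2.0, torch.randn(4))]:
            x = step(x, I, dt)                           # uneven gaps are fine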

    • @krox477 · 11 months ago

      @@shubhamdhapola5447 Thanks, I'll definitely learn about it.

  • @benhammond6717 · 1 month ago

    Is this what Tesla is using in their latest FSD update?

  • @testales · 10 months ago

    It's a little bit of cheating to give one NN a noisy camera input stream and the other a clear stream, isn't it? ;-) Either way, I'm looking forward to some implementations of this.

  • @lostpianist · 11 months ago · +5

    This seems like a natural consequence of improved tech rather than a 'new idea', but all discovery is serendipity, really, anyway. Cool.

    • @LabGecko · 11 months ago · +12

      No, their method of manipulating the math is groundbreaking. Current models need _billions_ of neurons to do things like what OpenAI has done on ChatGPT, and this model does the same with *_19!?_* Seriously groundbreaking.

    • @kayakexcursions5570 · 11 months ago · +3

      I agree. Nothing to see here.

    • @f.jideament · 11 months ago · +1

      @@LabGecko How do you know their efficiency and precision are the same? Have you checked and compared the data for every possible problem? What is the definition of "better" here?

  • @emreon3160 · 11 months ago

    Try chocolate AI, eye i sir! 😂

  • @ps3301 · 11 months ago · +7

    There are no simple math demonstrations of this model.

  • @stevenesposito9305 · 11 months ago

    Interesting…

  • @katherandefy · 11 months ago

    Hopefully YT does not delete the paper link for this idea, since the platform hosting these talks does not advertise the work directly but stays neutral; or that is my assumption…
    See the link in my reply to myself here.

  • @salehisabeyki4275 · 9 months ago

    1:12

  • @samaBR_85 · 11 months ago · +1

    That would be great running on a photonic chip.

    • @GT-tj1qg · 11 months ago

      Photonic chips are irrelevant to this topic.

  • @ghostvocal · 11 months ago · +1

    Any frameworks incorporating this already? TF? PyTorch?

    • @yashwardhansable5187 · 11 months ago · +1

      I don't think so. If you find anything, tag me too; I have a problem where I would like to try this.

    • @LabGecko · 11 months ago

      Seconding this request for sources. Motion carries. :D
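
      Since this thread asks for concrete sources: the authors published their layers in the open-source "ncps" package, which exposes both Keras and PyTorch modules. A sketch of the Keras side, assuming the API from the project README (check the repo, as signatures may have changed):

        # pip install ncps tensorflow
        import tensorflow as tf
        from ncps.tf import CfC                      # closed-form continuous-time cell

        model = tf.keras.Sequential([
            tf.keras.layers.InputLayer(input_shape=(None, 20)),  # (time, features)
            CfC(32),                                 # 32 liquid units
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")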

  • @yueliu2917 · 11 months ago

    The title of this video is completely wrong...