The future of AI looks like THIS (& it can learn infinitely)

  • Published Jan 14, 2025

Comments • 789

  • @theAIsearch
    @theAIsearch  7 หลายเดือนก่อน +37

    Thanks to our sponsor, Bright Data:
    Train your AI models with high-volume, high-quality web data through reliable pipelines, ready-to-use datasets, and scraping APIs.
    Learn more at brdta.com/aisearch
    Viewers who enjoyed this video also tend to like the following:
    You Don't Understand AI Until You Watch THIS th-cam.com/video/1aM1KYvl4Dw/w-d-xo.html
    These 5 AI Discoveries will Change the World Forever th-cam.com/video/fyVja-57EIs/w-d-xo.html
    The Insane Race for AI Humanoid Robots th-cam.com/video/90TMZ2fq9Gs/w-d-xo.html
    These new AI's can create & edit life th-cam.com/video/3K_LAGonsPU/w-d-xo.html

    • @Miparwo
      @Miparwo 7 หลายเดือนก่อน +1

      Clickbait. Your "sponsor" deleted the key part, if it ever existed.

    • @alexamand2312
      @alexamand2312 6 หลายเดือนก่อน

      OK, there are so many issues in this video. Neural networks are "fixed"? What does that even mean? We just stop training them; it's a version checkpoint. We could train them continuously, and they could even learn by themselves, without humans, through "reinforcement learning". A classic neural network can emulate any partition or specialisation. This reflection comes from someone who doesn't really understand how it works. The liquid neural network, as you explained it, is something like a feature extractor, basically an encoder. Feels like a lot of bullsh*t. Spiking neurons? Wtf, you just discovered an activation function; an RNN with a simple ReLU has the same behaviour. Reaching a superior intelligence by mimicking the brain, holy fk, I was waiting for some quantum sh*t.
      You don't understand what you are talking about.

    • @jeremiahlethoba8254
      @jeremiahlethoba8254 6 หลายเดือนก่อน

      @@alexamand2312 By "neural networks are fixed" he means the current weights are based on the last date of training, like the different ChatGPT versions... I haven't watched the whole video, but the only issue I have is why the narrator keeps using the verb "compute" when in context it should be "computation" 😅... is it a bot?

    • @JohnSmith-ut5th
      @JohnSmith-ut5th 6 หลายเดือนก่อน +1

      Wrong... But nice try. Liquid NNs are not the solution. It's actually much simpler

    • @billkillernic
      @billkillernic 6 หลายเดือนก่อน

      AI sh*t is the next .com bubble. It has its use cases, but it's not nearly as cool as people think; it will be stupid forever, because it is a flawed design that just seems to do some stuff relatively fast (which a monkey could do, though slower). It's a glorified parrot or mechanical turk.

  • @williamb.7134
    @williamb.7134 7 หลายเดือนก่อน +11

    Thanks!

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +4

      Wow, thanks for the super!

  • @enzobarbon4501
    @enzobarbon4501 7 หลายเดือนก่อน +412

    Seems to me like a lot of people compare the "learning" a human does during its lifetime to the training process of an LLM. I think it would make more sense to compare the training process of a neural network to the evolutionary process of the human being, and the "learning" a human does during its lifetime to in-context learning in an LLM.

    • @Rafael64_
      @Rafael64_ 7 หลายเดือนก่อน +58

      More like evolution being the base model training, and lifetime learning being fine-tuning.

    • @Tayo39
      @Tayo39 7 หลายเดือนก่อน +8

      The crazy thing is we can duplicate the best, latest version of a program or cyborg with the push of a button... and keep fine-tuning it while it fine-tunes and instantly updates itself and all connected devices with the new bit of info, which will never be lost, while my brain is about to explode lol. Things are about to get turned upside down, fwiw...

    • @CrimpingPebbles
      @CrimpingPebbles 7 หลายเดือนก่อน +16

      Yep, that’s all I kept thinking, we took a long time to get where we are now, millions of generations going all the way back to the origin of life, that’s a lot of energy to get to our current brain organization

    • @Alpha_GameDev-wq5cc
      @Alpha_GameDev-wq5cc 7 หลายเดือนก่อน

      No… these are simply statistical models. Nothing compared to the brain, it’s a sad thing that many “brains” aren’t capable of understanding this. Funny how dampening stupidity can be

    • @mikezooper
      @mikezooper 7 หลายเดือนก่อน +8

      @@CrimpingPebbles This! Also, the evolutionary aspect of AI doesn't select for efficiency, hence why we'll need lots of energy and data. Training should seek out not just energy efficiency but also data efficiency (thinking/deducing more with less data/information).

  • @keirapendragon5486
    @keirapendragon5486 7 หลายเดือนก่อน +53

    Absolutely would love that video about the Neuromorphic Chips!!

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน +3

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @kevinmaillet8017
    @kevinmaillet8017 7 หลายเดือนก่อน +157

    I built a custom spiking neural network for resolving a last-mile logistics efficiency problem.
    I agree with your assessment:
    Very efficient.
    Complex logic.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +14

      That's cool! Thanks for sharing

    • @ALLIRIX
      @ALLIRIX 6 หลายเดือนก่อน +1

      Oh really? I'd love to hear more. I've been relying on stitching together Google Maps API requests that are limited to 25 nodes per request, but I do 200+ locations. I've been wondering if there was an AI solution.

    • @regulardegular5
      @regulardegular5 6 หลายเดือนก่อน

      hey how does a noob like me go around building ai

    • @JordanMetroidManiac
      @JordanMetroidManiac 6 หลายเดือนก่อน

      @@regulardegular5 I learned through TH-cam channels 3blue1brown, sentdex, and deeplizard. I also already had a strong background in math, but if you don’t, you might skip the deeplizard videos. sentdex’s videos are most hands-on. 3blue1brown offers the most intuitive explanations of deep neural networks. deeplizard nicely explains the technical parts of training various neural network architectures.

    • @zenbauhaus1345
      @zenbauhaus1345 6 หลายเดือนก่อน +19

      @@regulardegular5 ask ai

  • @Tracing0029
    @Tracing0029 7 หลายเดือนก่อน +51

    The opening statement is so true. As a student of this field, I think that this is not said enough and anyone not well versed in machine learning just does not get how bad the current situation is.

    • @Alexander-or7vr
      @Alexander-or7vr 7 หลายเดือนก่อน +4

      Dude I’m a completely rookie, what is so bad about the current situation please? Would love to know.

    • @Tracing0029
      @Tracing0029 6 หลายเดือนก่อน +16

      @@Alexander-or7vr using statistical models without understanding or considering the output validity is borderline insanity

    • @Alexander-or7vr
      @Alexander-or7vr 6 หลายเดือนก่อน +1

      @@Tracing0029 can you tell me why? What’s is the outcome you are worried about?

    • @GodzillaGoesGaga
      @GodzillaGoesGaga 6 หลายเดือนก่อน +8

      @@Tracing0029 The output validity is known. This is what back propagation does. Basically we have a generalised function finder.

    • @no-lagteardown3558
      @no-lagteardown3558 6 หลายเดือนก่อน +4

      ​@@Tracing0029Bruh what r u even talking about.

  • @olhoTron
    @olhoTron 7 หลายเดือนก่อน +82

    8:33 not only is this "human brain" computer more efficient, but I heard the first stages of creating a new instance is pretty fun, can't confirm, never done it, but they say it is

    • @takkik282
      @takkik282 7 หลายเดือนก่อน +13

      It's the later stages that are more consuming. You sign up for life! Perhaps the brain is more efficient, but I think neural networks train faster. Consider the years needed for a human to master language, and think about all the support it needs (adults, books, computers, etc.). Think about the millennia of human progress needed to develop our current intelligence. Think about all the time needed for life to get where we are! If we ever get to AGI in the next decades, it's like creating new intelligent life in a fraction of the time needed by nature.

    • @volkerengels5298
      @volkerengels5298 7 หลายเดือนก่อน +5

      @@takkik282 _"Consider the years needed for an human to master *language,..."
      *culture - which is a far bigger challenge. With "language" you have to learn concepts and abstracts on any level: syntax, semantic, perfomance...
      The list what a child learns in these years (0-3-6-10) is far far longer.

    • @tsanguine
      @tsanguine 6 หลายเดือนก่อน +1

      i have, it's alright, but it's probably even better when you create the instance with someone who is more knowledgeable and a bit freakier

    • @someone9927
      @someone9927 5 months ago

      @@takkik282 Current LLMs can't create new, non-fake information that wasn't already written somewhere. Humans also learn to walk, run, jump, eat, breathe, and to sense speed, position, heat and pain. What about smelling, hearing sound and extracting words from it (and other sounds too), seeing, separating colours, plus the internal editing of the image from your eyes so you don't see your nose and blood vessels? What about feeling every small detail of an object by touch, or precisely controlling your fingers so you don't miss that button on your small phone screen?
      Also, you can download a game and, even if it has strange controls, most likely after some time you'll be good at it (can't say the same about AI).
      Take GPT-4o for example. It can't hear you (audio is translated to text by another AI), it can't feel anything, it doesn't have a physical body, it doesn't have to precisely control muscles to say something, it can't feel anything humans can, and it can't teach. It can see images, but that's not the continuous video and audio stream that our brain can accept and work with.
      Even with these limitations, current AI uses much more energy than our brain does in a whole lifetime.

    • @hamishahern2055
      @hamishahern2055 4 หลายเดือนก่อน +1

      A calculator is more efficient than an Excel sheet, but one is more versatile than the other. So why bother building a human brain (efficient, like a calculator) instead of just building a Swiss Army knife (a CPU and GPU)?

  • @kairi4640
    @kairi4640 7 หลายเดือนก่อน +35

    Spiking neural networks, and whatever neural networks come after them, sound like where AGI and ASI will actually come from.

    • @olhoTron
      @olhoTron 7 หลายเดือนก่อน +13

      Or maybe to reach AGI (do we really want that?) we need to ditch neural nets and actually discover what makes intelligence work at a high level, and reimplement that to work on computers...
      It seems to me like doing any type of neural network is like trying to emulate a game console by simulating each transistor in its circuit... sure, it can work, but it would take the most powerful Threadripper CPU to emulate an Atari 2600 at full speed this way.
      Maybe neural nets will help us understand what makes a brain tick at a high level, then we will make a "brain JIT recompiler"... and then... who knows what will happen next.

    • @eddiedoesstuff872
      @eddiedoesstuff872 7 หลายเดือนก่อน

      @@olhoTron Wow, never heard this perspective before. The problem, I think, is that while it's easy to simulate neurons, the real issue is arranging them correctly to create higher-level behaviours. Using your analogy: yes, you can simulate, for example, the CPU using transistors and then implement it in a higher-level way, but to do that you first need a schematic of how each transistor connects to the next. So either we brute-force arrangements until we find human-like neuron arrangements, or brain-scanning technology needs to improve so we can view whole sections of the brain at the neuron level.

    • @nemesiswes426
      @nemesiswes426 7 หลายเดือนก่อน +8

      That is what I believe. To me, AGI means the digital equivalent of a human: conscious, self-aware and all that. Since the only known example of something running AGI (ourselves) is our brain, we should probably aim to replicate it. Maybe not the cellular biophysics etc., but the overall, more abstract ways it works. No other method has a proven way of getting to AGI. That's how I'm going about working on these things, at least: using modified spiking neural networks to more closely resemble the brain. It truly is an amazing time to be alive. We are on the brink of a new species being created - potentially the first time in the entirety of the universe's existence that a species has created another species smarter than itself.

    • @charlesmiller8107
      @charlesmiller8107 6 หลายเดือนก่อน

      @@olhoTron It's not the same. Using transistors to emulate transistors? What we need are actual neurons - artificial, of course, but electronic. Maybe an integrated circuit with interconnected devices that function like a neuron but are also somewhat transistor-like: a transistor with hundreds of inputs and outputs, but really small. 🤔 It would need to be dynamic, but that's way beyond our current capabilities. Maybe just using biology is the best option. Cyborgs.

    • @olhoTron
      @olhoTron 6 หลายเดือนก่อน

      @@charlesmiller8107 *If* (and it's a big if) intelligence is actually computable (and not some kind of quantum or spiritual thing), then it is just a computer program like any other; the only difference is that it's running on wetware.
      Simulating the basic blocks of the wetware is not the way to go, it's too inefficient; we need to actually understand the problem and reimplement it to run on current computer architectures.
      If it's not computable, then we will never reach AGI with classical computers, and no amount of nested dot products will make intelligence emerge.

  • @Ding63
    @Ding63 7 หลายเดือนก่อน +33

    Definitely make a video on neuromorphic chips.
    And I think the other neural networks outside the scope of this video should have their own separate videos as well.

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน +1

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @alkeryn1700
    @alkeryn1700 6 หลายเดือนก่อน +5

    I wrote a spiking neural network from scratch. It can learn, but it's not as efficient at learning as a typical NN, because you can't do gradient descent effectively; instead you need to adjust the neurons based on a reward.
    Now, you can backtrace and reward the last neurons and synapses that led to the output you want, but that is limited; it works better when you don't just reward the last ones, but reward according to the desired output. Still, it's pretty cool to run and it makes nice visualizations.
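
    As a rough, purely illustrative sketch of the reward-based approach described above (not the commenter's actual code; all constants and the toy "reward" rule are invented), here is a minimal leaky integrate-and-fire neuron whose input weights are nudged by a global reward signal instead of backpropagated gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_steps = 5, 200
weights = rng.normal(0.0, 0.5, n_inputs)  # synaptic weights (arbitrary init)
v, v_thresh, leak = 0.0, 1.0, 0.9         # membrane potential, firing threshold, leak factor
eligibility = np.zeros(n_inputs)          # trace of inputs that contributed to recent spikes
spike_count = 0

for t in range(n_steps):
    inputs = (rng.random(n_inputs) < 0.2).astype(float)  # random incoming spikes
    v = leak * v + weights @ inputs                       # leaky integration of weighted input
    fired = v >= v_thresh
    if fired:
        v = 0.0                                           # fire and reset
        spike_count += 1
    eligibility = 0.7 * eligibility + (inputs if fired else 0.0)

    # toy global reward: we "want" the neuron to fire on every 5th timestep
    reward = 1.0 if fired and t % 5 == 0 else 0.0
    weights += 0.05 * reward * eligibility                # reward-modulated update, no gradients

print("total spikes:", spike_count)
```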

  • @dmwalker24
    @dmwalker24 7 หลายเดือนก่อน +47

    First and foremost, I am a biologist, but I have quite an extensive background in computer science as well. I have some fundamental concerns with the efforts to develop AI, and the methodologies being used. For these models to have anything like intelligence, they need to be adaptable, and they need memory. Some temporal understanding of the world. These efforts with LNN strike me as attempting to re-invent the wheel.
    Our brains are not just a little better at these tasks than the models. They are exponentially better. My cats come pre-assembled with far superior identification, and decision-making systems. Nevertheless, that flexibility and adaptability require an almost innumerable set of 'alignment' layers to regulate behavior, and control impulses. To make a system flexible, and self-referential is to necessarily make it unpredictable. Sometimes the cat bites you. Sometimes you end up with a serial killer.

    • @camelCased
      @camelCased 6 หลายเดือนก่อน +3

      Right, and human brain has constant learning feedback loop not only from outside world (through all senses) but also the internal (through self awareness, reflection, critique etc.). Current LLMs don't ever check their responses for validity because there is nowhere to get the feedback from, except the current user, but then the correction will work only in the current short context and not for retraining. So, LLMs essentially just spit out the first response that has the highest probability based on the massive amounts of the training data. And it's quite amazing how often LLMs get it right. Imagine a human not actually solving an equation but spitting out the first response that comes to mind - we would miserably fail all the tests that LLMs pass with ease. Self-correction based on the context awareness is mandatory for an AI.

    • @honkhonk8009
      @honkhonk8009 6 หลายเดือนก่อน +4

      It's not reinventing the wheel if the wheel hasn't even been invented yet.
      What neuroscientists say differs largely from what you say. From what I've seen, it's hard to take the lessons we've learnt from real neurons and put them into computer neurons.
      It takes 8 whole layers of machine neurons to simulate a single human neuron. Human neurons aren't just a soma; the dendrites do a lot of computation as well.
      Current machine learning is inspired by biology but not based on it.
      If we knew how neurons actually worked, ML would have been solved already.

    • @CaiodeOliveira-pg4du
      @CaiodeOliveira-pg4du 6 หลายเดือนก่อน

      One might argue that current neural network models are both adaptable (they have millions of parameters being updated at each step, throughout hundreds of thousands of training epochs) and memory (they remember the instances to which they have been trained on through the weights between layers). There’s also a lot of highly effective unsupervised learning algorithms that learn complex patterns from unlabeled data, which one might call self-assessment.

    • @TheMetalisImmortal
      @TheMetalisImmortal 6 หลายเดือนก่อน +1

      Hello 😉

    • @someone9927
      @someone9927 5 months ago

      @@camelCased The thing is that you can't teach an LLM the normal way.
      If you explain to a person that something is false, like this:
      You: do turtles fly?
      Person: yes.
      You: nah, they don't
      Person: oh, I will remember this
      the person will remember that turtles don't fly.
      If you do the same thing with the AI, the AI will remember that when you ask "do turtles fly?" it should reply "yes", and that if you reply "nah, they don't", it should reply "oh, I will remember this".
      This is the problem with AI.

  • @saurabhbadole777
    @saurabhbadole777 6 หลายเดือนก่อน +2

    this is my first video from your channel, and I am already impressed!

  • @WJohnson1043
    @WJohnson1043 7 หลายเดือนก่อน +67

    We currently simulate neural networks programmatically, which is why they are so inefficient. The problem is, people are so impatient for AGI that they have concentrated all their efforts on achieving it rather than developing an actual neural network.

    • @beowulf2772
      @beowulf2772 7 หลายเดือนก่อน +3

      Yeah, it's like they're building a slave rather than a free person. Let the little AI have its own infancy, childhood, etc. These machines only need to be turned on all the time and have something with which to interact with the real world, plus parents. Even Data didn't just download everything; he downloaded the crew's psych profiles just to connect with them.

    • @lpmlearning2964
      @lpmlearning2964 7 หลายเดือนก่อน

      How else do you want to simulate them other than with a computer, which understands machine code, a.k.a. programming? 🙃

    • @lpmlearning2964
      @lpmlearning2964 7 หลายเดือนก่อน

      You can’t simulate more than a few milliseconds of a fly’s brain let alone a human brain. Check EPFL’s research

    • @WJohnson1043
      @WJohnson1043 7 หลายเดือนก่อน +1

      @@lpmlearning2964 not a simulation. Have actual neural nets instead. Can’t be done on a chip. Some sort of 3D construction is required.

    • @khanfauji7
      @khanfauji7 6 หลายเดือนก่อน

      Use AI to build AI 🤖

  • @TheCategor
    @TheCategor 7 หลายเดือนก่อน +45

    8:40 "Human brain only uses 175kWh in a year" - Since human brain cannot work without the body you have to treat [brain+body] as one entity (which is ~4 times more), unless it's a brain in a jar.. but yea i guess still very efficient.

    • @GhostEmblem
      @GhostEmblem 7 หลายเดือนก่อน +13

      If you apply that logic to the AI then you'd need to factor in many other things too. You are fundamentally misunderstanding what is being compared here.

    • @lagaul5124
      @lagaul5124 7 หลายเดือนก่อน +11

      got to take into account the millions of years of evolution to even get to the human brain.

    • @Instant_Nerf
      @Instant_Nerf 7 หลายเดือนก่อน +3

      @@lagaul5124that’s a bunch of bs

    • @viperlineupuser
      @viperlineupuser 7 หลายเดือนก่อน +2

      @@Instant_Nerf he is not wrong, but development costs ≠ training cost

    • @dezh6345
      @dezh6345 หลายเดือนก่อน

      @@viperlineupuser This logic ensures AI will always have a higher energy cost than the human mind. If we account for evolution in humans, we also have to add the evolution of humans to the development of AI, since this creation comes from us.
      Putting this logic into an equation, it could look like this:
      evolution energy + human learning energy = modern humans
      Since modern humans are needed to create AI:
      modern humans + AI learning energy = modern AI
      AI being dependent on humans means it will always take more energy, according to that logic, when extrapolated.

  • @markldevine
    @markldevine 7 หลายเดือนก่อน +5

    Really nice recap. I've subscribed. Keep it up.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +3

      Thanks for the sub!

  • @wellbishop
    @wellbishop 7 หลายเดือนก่อน +22

    Awesome content, as always. I would love to know more about neuromorphic chips. Thanks.

    • @lolo6795
      @lolo6795 7 หลายเดือนก่อน

      So do I.

    • @sekkitsek
      @sekkitsek 7 หลายเดือนก่อน

      Same here

    • @staticlee4287
      @staticlee4287 7 หลายเดือนก่อน

      Same

    • @gmuranyi
      @gmuranyi 7 หลายเดือนก่อน

      Yes, please.

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน +1

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @alabamacajun7791
    @alabamacajun7791 7 หลายเดือนก่อน +5

    Glad to hear this. Back in 2010 I was looking for an alternative to the network-graph systems still in use today. Basically, we have a scaled version of decades-old tech that we now have the horsepower to run. I will say current neural matrices are but a partial model of a brain. I studied a portion of grey-matter neural matrices; see the book Spikes by Fred Rieke, David Warland, Rob de Ruyter van Steveninck and William Bialek. The human brain is exponentially more complex than any scaled multi-million GPU, TPU, CPU system. Good video.

  • @eSKAone-
    @eSKAone- 7 หลายเดือนก่อน +157

    The human brain developed over a time span of millions of years. How much energy did that process use?

    • @dylanlodge4905
      @dylanlodge4905 7 หลายเดือนก่อน +51

      That, and the fact that each human has learnt to speak and regurgitate useful information for ~30 years. Assuming for simplicity that a human consumes 175 kWh per year from birth, and that the entirety of GPT-3 was created using 1,287 MWh, ChatGPT is ~245x less efficient than a human: (1287 * 1000) / (175 * 30).
      And that's only considering one human's energy consumption compared to ChatGPT, which can communicate with more than 200,000 people across the internet simultaneously and a response time of
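
      For clarity, the arithmetic above works out as follows (a back-of-the-envelope check using the commenter's own figures, which are themselves rough estimates):

```python
gpt3_training_kwh = 1287 * 1000    # 1,287 MWh, the commenter's GPT-3 training estimate
human_kwh_per_year = 175           # yearly brain-energy figure quoted in the video
years_of_learning = 30

human_total_kwh = human_kwh_per_year * years_of_learning  # 5,250 kWh over 30 years
ratio = gpt3_training_kwh / human_total_kwh               # ≈ 245x

print(f"human: {human_total_kwh} kWh, GPT-3 training: {gpt3_training_kwh} kWh, ratio ≈ {ratio:.0f}x")
```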

    • @olhoTron
      @olhoTron 7 หลายเดือนก่อน +40

      To be fair, the energy spent on humans evolving also went into developing AI, since we are the ones who are developing them, without spending energy on human evolution, there would be no AI
      Lets say AIs become sentient... our history will also be part of their history

    • @ickorling7328
      @ickorling7328 7 หลายเดือนก่อน

      OP tries to do science but has never heard of entropy in information theory, which rules out the evolution of a brain's DNA from nothing with no guiding intelligence. It's a wild guess, not even a theory. Where's the evidence?

    • @eSKAone-
      @eSKAone- 7 หลายเดือนก่อน

      In the end if it wasn't efficient we wouldn't use it (even if energy was for free). It obviously produces something that you couldn't reproduce even with the same amount of megawatthours of human brains brainstormed together.

    • @smicha15
      @smicha15 7 หลายเดือนก่อน +5

      nice. don't really hear that side of things.

  • @vladartiomav2473
    @vladartiomav2473 7 หลายเดือนก่อน +21

    I was just waiting for somebody to point out the tremendous energy problems of current AI.
    Thank you

    • @kevoreilly6557
      @kevoreilly6557 4 หลายเดือนก่อน

      Still consumes less than a fat ass sitting watching TH-cam

  • @dimii27
    @dimii27 7 หลายเดือนก่อน +18

    So basically neural networks are nerds while liquid neural networks are street smart

  • @kbimm
    @kbimm 5 หลายเดือนก่อน +1

    Excellent video, very informative! Are there studies showing explicitly that recurrent ANNs are more energy-efficient than feedforward ones?

  • @aiforculture
    @aiforculture 7 หลายเดือนก่อน +4

    Super useful video, thank you!

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      You're welcome!

  • @digitalconstructs6207
    @digitalconstructs6207 6 หลายเดือนก่อน +4

    The age of analogue computing is coming. Great video for sure.

    • @Citrusautomaton
      @Citrusautomaton 6 หลายเดือนก่อน +2

      I believe that analog/digital hybrid computers will change AI massively in the realm of energy efficiency!

    • @paatagigolashvili9551
      @paatagigolashvili9551 6 หลายเดือนก่อน

      @@Citrusautomaton Exactly,i am rooting for aspinity analog and risc-v digital technologies

  • @stevengill1736
    @stevengill1736 7 หลายเดือนก่อน +4

    Oh good - was wondering if spiking and liquid NNs were similar. Both trying to emulate our current understanding of human neurons....neat!

  • @RICARDO_GALDINO_GABBANA_LIMA
    @RICARDO_GALDINO_GABBANA_LIMA 7 หลายเดือนก่อน +9

    Fantastic channel! Super nice!👏👏👏🗣💯💯🔥‼️‼️‼️‼️‼️‼️❤

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +2

      Thanks!

    • @Mega-wt9do
      @Mega-wt9do 7 หลายเดือนก่อน +3

      bot 👏👏👏🗣💯💯🔥‼‼‼‼‼‼❤

    • @RICARDO_GALDINO_GABBANA_LIMA
      @RICARDO_GALDINO_GABBANA_LIMA 7 หลายเดือนก่อน +3

      @@Mega-wt9do 🤜🤛‼️🗣🔥🔥💯💯💯

  • @lucavogels
    @lucavogels 6 หลายเดือนก่อน +4

    I don't get how the liquid NN is supposed to continue learning if you only train the output layer once and the reservoir stays the same as well (according to you, the reservoir gets randomly initialized before training and never changes after that; it just allows information to circle/ripple inside it).

  • @howardb.728
    @howardb.728 6 หลายเดือนก่อน

    A very competent compression of complex ideas - well done mate!

  • @marcosny2010
    @marcosny2010 3 หลายเดือนก่อน

    Thanks for the organization idea. The other video, based on neural simulation, would be very valuable for everyone.

  • @Joseph-nw3gw
    @Joseph-nw3gw 7 หลายเดือนก่อน +2

    You earned a subscriber from Kenya.... kudos

  • @High-Tech-Geek
    @High-Tech-Geek 7 หลายเดือนก่อน +34

    1. It's funny that we are trying to create something (AGI) that replicates something else that we do not understand (the human brain).
    2. Any neural network that truly emulates the human brain won't need to be trained in the sense you discuss. It would just be. It would learn and be trained by its design. It would start training immediately and continue to train throughout its existence. I don't see us creating something like this anytime soon (see statement #1).

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +10

      Great point (#2). If it can keep learning, we just need to create it and it would naturally improve over time, or even learn to reconfigure itself

    • @helloyes2288
      @helloyes2288 6 หลายเดือนก่อน +5

      humans receive constant input and data. We frontload that data requirement. A system that improves over time will need constant input data.

    • @helloyes2288
      @helloyes2288 6 หลายเดือนก่อน +2

      @@theAIsearch he's acting like improvement can happen in a vacuum

    • @alexamand2312
      @alexamand2312 6 หลายเดือนก่อน +2

      @@helloyes2288 Yeah, wtf is happening in this video and in these comments? Is everyone a religious bitcoiner who doesn't understand anything?

    • @samuelbucher5189
      @samuelbucher5189 6 หลายเดือนก่อน +5

      Humans actually come somewhat pre-trained from the womb. We have instincts and reflexes.

  • @SangramMukherjee
    @SangramMukherjee 6 หลายเดือนก่อน +2

    A neural network is just a probability function that gives how likely each outcome is: for a given input, the probability of each possible output. The output with the highest probability is the most likely answer to your input. The network just helps calculate that probability through nodes, weights, biases, backpropagation, residual connections, matrices, calculus, etc. It's maths, computing and physics coming together in one place.

    • @drdca8263
      @drdca8263 6 หลายเดือนก่อน

      The output layer doesn’t have to be probabilities. It can be other things as well, such as “how much to drive each motor”, or “how much does each pixel change”

    • @gpt-jcommentbot4759
      @gpt-jcommentbot4759 6 หลายเดือนก่อน

      It's not a probability; it's just how high the activation of a neuron is.
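
      To make the distinction in this thread concrete, here is a tiny sketch with made-up numbers: the last layer produces raw activations (logits), which only become probabilities if you explicitly apply something like softmax.

```python
import numpy as np

logits = np.array([2.1, -0.3, 0.8])            # raw output activations for 3 classes (made-up)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax turns them into a probability distribution

print("raw activations:", logits)              # could equally be motor commands, pixel deltas, etc.
print("probabilities:  ", probs, "sum =", probs.sum())
print("most likely class:", int(np.argmax(probs)))
```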

  • @ekaterinakorneeva4792
    @ekaterinakorneeva4792 6 หลายเดือนก่อน +1

    Great video, thank you for your work!

    • @theAIsearch
      @theAIsearch  6 หลายเดือนก่อน +1

      My pleasure!

  • @SarkasticProjects
    @SarkasticProjects 7 หลายเดือนก่อน +1

    and YES- i would love to learn from You about the neuromorphic chips :)

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      Noted!

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @ProfessorNova42
    @ProfessorNova42 7 หลายเดือนก่อน +1

    Thanks for the video 😁. I really enjoyed it! I'm also very interested in those neuromorphic chips you talked about in the end.

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @Rawi888
    @Rawi888 7 หลายเดือนก่อน +1

    Thank you for your hard work.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      My pleasure!

  • @gaius_enceladus
    @gaius_enceladus 6 หลายเดือนก่อน

    Yes please - I'd love to see you do a video on neuromorphic chips!
    Keep up the good work!

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @rockochamp
    @rockochamp 6 หลายเดือนก่อน

    Very well explained

  • @sammar1446
    @sammar1446 3 หลายเดือนก่อน

    great and easy to understand explanation thanks...

    • @theAIsearch
      @theAIsearch  3 หลายเดือนก่อน

      You are welcome!

  • @JoshKings-tr2vc
    @JoshKings-tr2vc 6 หลายเดือนก่อน

    Good simple explanation of the current state of neural nets and where they’re going.

  • @RolandoLopezNieto
    @RolandoLopezNieto 7 หลายเดือนก่อน +2

    Great educational video, thanks.

  • @kellymoses8566
    @kellymoses8566 6 หลายเดือนก่อน +2

    Not being able to self-improve is the single greatest limitation of LLMs.

  • @dhammikaweerasingha9894
    @dhammikaweerasingha9894 6 หลายเดือนก่อน

    Nice explanation. Thanks.

  • @korrelan
    @korrelan 5 หลายเดือนก่อน

    Excellent video.

  • @sandrocavali9810
    @sandrocavali9810 4 หลายเดือนก่อน

    Very good video. You forgot to mention new hardware based neural networks

  • @chibrax54
    @chibrax54 3 หลายเดือนก่อน +4

    Aren't you confusing liquid neural networks and liquid state machines? I don't think LNNs have fixed random weights like LSMs, which are reservoirs.

  • @annieorben
    @annieorben 7 หลายเดือนก่อน +2

    This is very interesting! The reservoir layer seems like the digital analog to the subconscious mind! I really love your explanation of this new type of neural network.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      Thanks!

    • @alexamand2312
      @alexamand2312 6 หลายเดือนก่อน

      wtf

    • @QuantumVirus
      @QuantumVirus 5 หลายเดือนก่อน

      ​@@alexamand2312?

  • @momirbaborac5536
    @momirbaborac5536 5 หลายเดือนก่อน +1

    What is the difference between learning and interpolation?

  • @immortalityIMT
    @immortalityIMT 5 หลายเดือนก่อน

    Is this computer code based on the fluid dynamics or actually liquid?

    • @scoffpickle9655
      @scoffpickle9655 5 หลายเดือนก่อน

      It's digital. There is no actual liquid. Even then, the "liquid" part of the neural network is only named that because it is dynamic and can adapt in real time.

  • @culture-jamming-rhizome
    @culture-jamming-rhizome 7 หลายเดือนก่อน +1

    Neuromorphic chips are a topic I would like to see a video on. seems like there is a lot of potential here.

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @florianuhlmann6478
    @florianuhlmann6478 5 หลายเดือนก่อน

    hey, super nice high level overview.

  • @tobiaspucher9597
    @tobiaspucher9597 7 หลายเดือนก่อน +1

    Yes neuomorphic chips video please

  • @sailingby
    @sailingby 3 หลายเดือนก่อน

    Yes, please do a video on neuromorphic chips - thanks

    • @theAIsearch
      @theAIsearch  3 หลายเดือนก่อน

      See this th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @smellthel
    @smellthel 7 หลายเดือนก่อน

    Awesome video! I would love that video on neuromorphic chips!

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      Thanks!

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @UltraStyle-AI
    @UltraStyle-AI 7 หลายเดือนก่อน

    Very informative and well put together video, thanks!

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      Very welcome!

  • @fmka-kg5rz
    @fmka-kg5rz 5 หลายเดือนก่อน

    Would love to see some comparison between traditional and liquid NN

  • @MSIContent
    @MSIContent 7 หลายเดือนก่อน +1

    Feels like there would be a tipping point with a liquid model where it starts out by just working, and is then tasked with making a better model based on its learned measurement of its own current performance. Given that it can change and adapt, it could improve on its own design and rinse/repeat.

    • @gpt-jcommentbot4759
      @gpt-jcommentbot4759 6 หลายเดือนก่อน

      that requires different input data shapes

    • @Alex-ns6hj
      @Alex-ns6hj 6 หลายเดือนก่อน

      @@gpt-jcommentbot4759likely its ability to reason on its own and make judgements on its own to progress to said goal

  • @JB52520
    @JB52520 7 หลายเดือนก่อน +3

    👍 for neuromorphic chips

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @Sweenus987
    @Sweenus987 7 หลายเดือนก่อน +1

    I used a CNN combined with a Liquid Time-constant Network (which is part of LNNs) for my university dissertation, which seems pretty powerful itself, I was able to train a robot to follow me based on image input given the same trained environment and clothes as in the training data. It's interesting stuff

    • @jeffreyjdesir
      @jeffreyjdesir 7 หลายเดือนก่อน +1

      🤯 WOAH! Could you share your work? That sounds fascinating and is exactly what I'm interested in. It seems like you had some kind of real-time feature with the LTCN? I'd like to see if there's even a description in the literature of this part of AI, the time window it exists in. 🤯

    • @Sweenus987
      @Sweenus987 7 หลายเดือนก่อน

      @@jeffreyjdesir Sure, I have an unlisted video that shows it working.
      It's a little jank since I didn't have the time to code in and train for smoother motion. It was specifically trained to pick up me at various locations within the frame, so if I was to the right of its frame, it would turn right by a specified amount and if I was too far it would move forward by a specified amount and so on.
      The description has links to the dissertation itself on Google Drive and the code on github
      th-cam.com/video/ZI2mLThnprM/w-d-xo.html

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +1

      That's very cool! Thanks for sharing!

  • @aednil
    @aednil 7 หลายเดือนก่อน +9

    I still don't quite understand how randomly initialised weights that randomly rearrange themselves could give you anything but garbage information. Whatever patterns the readout layer learns would be gone after the next rearranging of the liquid reservoir, wouldn't they?

    • @karensams994
      @karensams994 7 หลายเดือนก่อน +5

      That’s the thing. Nobody understands why they work. Not even the creators.

    • @unspecialist
      @unspecialist 6 หลายเดือนก่อน

      Think of the reservoir as a complex, nonlinear filter:
      Input Transformation: When you input data, the reservoir transforms it into a high-dimensional signal.
      Feature Extraction: The fixed reservoir acts like a feature extractor, turning the input into a form that's easier to learn from.
      Stable Representation: Even though the reservoir is random, its fixed nature means that the same input will consistently produce similar high-dimensional states.
      Ignore the comment above saying nobody understands this; that's not true. What people have trouble understanding is the configuration after training, due to the sheer number of weights and nodes, for the same reason we have trouble understanding the human brain just from its raw structure.
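
      As a concrete, purely illustrative sketch of the fixed-reservoir idea described above, here is a tiny echo-state-style network: the recurrent reservoir weights are random and never trained, and only the linear readout is fit. The sizes, scaling and toy task are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res, n_steps = 1, 100, 500

# Fixed random input and reservoir weights: these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # keep the spectral radius below 1 for stability

# Toy task: predict the next value of a sine wave from the current one.
u = np.sin(np.linspace(0, 20 * np.pi, n_steps + 1))
x = np.zeros(n_res)
states = []
for t in range(n_steps):
    x = np.tanh(W_in @ u[t:t + 1] + W_res @ x)  # the reservoir "ripples" with each input
    states.append(x.copy())

X = np.array(states)   # high-dimensional reservoir states
y = u[1:n_steps + 1]   # targets: the next input value

# Train only the readout, here with ridge regression (the single trained layer).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("readout training MSE:", float(np.mean((pred - y) ** 2)))
```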

    • @aednil
      @aednil 6 หลายเดือนก่อน

      @@unspecialist I can understand somewhat why it would work with the reservoir staying fixed, but the reservoir is supposed to change somewhat too, right? it's supposed to be fluid. or is the rate of change just slow enough that the readout layer can adapt to it?

    • @a_name_a
      @a_name_a 6 หลายเดือนก่อน

      This makes no sense, why not just randomly initialize then freeze some dense layers in a regular NN, how would this not be the same ?

    • @kickingnscreaming
      @kickingnscreaming 6 หลายเดือนก่อน +1

      ​@@a_name_aThe goal is to adjust each weight slightly in the direction of minimizing error. So if the picture is a cat but the network concludes dog then the weights that contributed most to dog are moved slightly in the direction that would have indicated cat. In that way the network learns what features are common to both cats and dogs and which features distinguish cats from dogs.

  • @marcoottina654
    @marcoottina654 5 หลายเดือนก่อน

    Can you please link the papers about liquid neural networks?

  • @kroniken8938
    @kroniken8938 7 หลายเดือนก่อน +9

    Well, how long would it take for a human brain to learn everything gpt knows- probably hundreds or thousands of years

    • @jeffreyjdesir
      @jeffreyjdesir 7 หลายเดือนก่อน

      If we can use Bloom's taxonomy as any standard, it seems like nothing like GPT will ever "understand" anything - just relative semantic mappings of input, which can't be the same thing as hermeneutics, ontologically. The AGI singularity should be more about when a new human intelligence emerges, with a revitalized cosmic identity (as opposed to a national or tribal one) and Star Trek-like planetary ambitions... hopefully soon (or else).

    • @Me__Myself__and__I
      @Me__Myself__and__I 7 หลายเดือนก่อน +1

      A single human brain, even though something like 1,000x as complex, could never learn all of human knowledge. Humans have a limit on how much they can store, which is why we forget things. Yet a single LLM that has 1,000x less complexity can know the sum total of all human knowledge. Which is why this comparison of an LLM to a single human brain is ridiculous.

    • @jeffreyjdesir
      @jeffreyjdesir 7 หลายเดือนก่อน

      @@Me__Myself__and__I 1. Interesting... I'd say the brain is more like 1,000,000,000x as complex, given that it's what we know knowledge through and it transduces reality; it's hard to really call it a process, since consciousness is seamless with necessary reality (the world that generates perception).
      2. (More of a nitpick) I wouldn't say LLMs "know" anything to the degree or with the relevance (on a Bloom's-taxonomy view) that humans do. Some humans may not have every true description of the nature of the world, but they can see the "Truth" of the world in a gestalt manner that goes beyond computation and semantics into hermeneutics and teleology. What say you?

    • @antonystringfellow5152
      @antonystringfellow5152 7 หลายเดือนก่อน

      Yet the GPT models don't have human-level intelligence.
      Knowledge is not intelligence just as cheese is not electricity.

  • @Jianju69
    @Jianju69 6 หลายเดือนก่อน

    Very interesting to hear about these emerging architectures.

  • @mahiaravaarava
    @mahiaravaarava 4 หลายเดือนก่อน +1

    The future of AI looks incredibly promising and dynamic! With its ability to learn infinitely, AI is set to revolutionize how we interact with technology. Embrace the endless possibilities and advancements on the horizon! #FutureOfAI #InfiniteLearning #TechRevolution #AI

  • @JosephLuppens
    @JosephLuppens 7 หลายเดือนก่อน

    Amazing presentation, thank you! I would love for your to do a follow-up on the potential of neuro-morphic architectures.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +1

      Thanks! Will do

  • @MrWeb-dev
    @MrWeb-dev 6 หลายเดือนก่อน +1

    Training a traditional network takes a lot of energy, but the advantage is that running it takes less energy. With the temporal-dependent networks, they need to constantly be running to work. Won't that take more energy than a traditional neural network?

  • @homewardboundphotos
    @homewardboundphotos 3 หลายเดือนก่อน

    IMO the biggest benefit of spiking neural networks is that they express permanence: they can operate continuously, like a flow of consciousness, as opposed to single input-output steps.

  • @bitdynamo365
    @bitdynamo365 7 หลายเดือนก่อน

    great informative video! Thanks a lot
    Please make us a deep dive into neuromorphic hardware.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน

      Noted!

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @m_sedziwoj
    @m_sedziwoj 4 หลายเดือนก่อน

    I must add that most of the energy used in training goes to backpropagation, not to inference (inference means using the NN rather than training it, though it is also part of training).
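
    A common rule of thumb (an approximation, not something stated in the video) makes the comment above concrete: for transformer-style models, a forward pass costs roughly 2 FLOPs per parameter per token, while a full training step (forward plus backward plus update) costs roughly 6, so backpropagation roughly triples the work relative to inference.

```python
params = 175e9          # e.g. a GPT-3-sized model; parameter count only, for illustration
tokens = 1000           # tokens processed

forward_flops = 2 * params * tokens    # inference-only cost (rule of thumb)
training_flops = 6 * params * tokens   # forward + backward + weight update (rule of thumb)

print(f"inference: {forward_flops:.2e} FLOPs, training step: {training_flops:.2e} FLOPs "
      f"({training_flops / forward_flops:.0f}x)")
```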

  • @cokoala5137
    @cokoala5137 3 หลายเดือนก่อน

    I'd say there is a scarcity of open source neuromorphic neural networks. An example is the TENNs network built by BrainChip of which we have no idea how good the performance is, but the zero shot learning is purportedly faster than NNs built for systems under the current Von Neumann architecture. There's so much buried within IPs atm that it's hard to find out where the progress really is for neuromorphic computing

  • @salestarget
    @salestarget 5 หลายเดือนก่อน

    Is this already out for us to use? I have 4o

  • @AaronNicholsonAI
    @AaronNicholsonAI 6 หลายเดือนก่อน

    So awesome. Thanks! Neuromorphic, please :)

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

    • @theAIsearch
      @theAIsearch  3 หลายเดือนก่อน

      Here you go th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @michaelaraki3769
    @michaelaraki3769 6 หลายเดือนก่อน

    Silly question: why not use some (or all) of them in combination, with a superordinate network (perhaps the liquid one) that either learns, or is told by training, which method to deploy for which type of data? The idea, once again, is to mimic the brain, with modular information processing at the lower levels and executive function at a higher level.

  • @SS801.
    @SS801. 7 หลายเดือนก่อน +3

    Make video on chips yes

  • @rekasil
    @rekasil 6 หลายเดือนก่อน

    Hi, I would like to know more about the spiking neural networks, their types, limitations, challenges, performance, online learning capabilities, etc. Thanks!

  • @RobertsMrtn
    @RobertsMrtn 7 หลายเดือนก่อน

    Firstly, thank you for producing such an informative video. One thing I would like to add is that current neural networks require a lot more training data than, say, a three-year-old child in order to perform a simple classification task such as distinguishing cats from dogs. Our current models require tens of thousands of examples in order to be properly trained, whereas a three-year-old child would require perhaps five or six examples of each. I would propose an architecture which I am calling Predictive Neural Networks, where neurons are arranged in layers and predict which other neurons will fire depending on the input data. For example, high-level neurons may be trained to detect an eye but should also 'know' where to find the next eye or where to find the nose. Because a cat's nose looks different from a dog's nose and is one of the main distinguishing features, it should be possible to train these networks with far fewer examples.

  • @ninjoor_anirudh
    @ninjoor_anirudh 6 หลายเดือนก่อน

    @theAIsearch Can you please share the sources where you get this information?

  • @kuroallen6419
    @kuroallen6419 7 หลายเดือนก่อน

    Super nice and educational video 👏🏻👏🏻👏🏻👏🏻👏🏻

  • @justremember7876
    @justremember7876 7 หลายเดือนก่อน

    I enjoyed this one great job

  • @XDgamer1
    @XDgamer1 7 หลายเดือนก่อน +1

    will Liquid neural network be able to work on cpu? 😢😅

  • @tapizquent
    @tapizquent 7 หลายเดือนก่อน +11

    5:18 I agreed with everything until this point. Gemini did prove that models can learn post-training, as it did when learning a new language.

    • @Me__Myself__and__I
      @Me__Myself__and__I 7 หลายเดือนก่อน +6

      Correct. I just posted a lengthy comment that contained that very detail.

  • @Exitof99
    @Exitof99 7 หลายเดือนก่อน

    As for the fixed models, Grok is apparently an active learning model, so it is not a fixed model.

  • @rev.jonathanwint6038
    @rev.jonathanwint6038 3 หลายเดือนก่อน +2

    LNNs don't use spiking; you have it all mixed up. You are thinking of liquid state machines. Here is Gemini's own explanation of how each processes stimuli:
    Liquid neural networks
    LNNs are designed for continuous adaptation and can update parameters in real time. They are able to process a wide range of data distributions, respond quickly, and filter out noisy data. LNNs are also smaller, use less data and training time, and have a shorter inference time.
    vs.
    Liquid state machines
    LSMs are dynamic neural network models that are based on spiking neural networks (SNNs). They are robust to noise and disturbances in input signals because the internal state of the liquid reservoir acts as a filter. LSMs are well suited to tasks like pattern recognition and time-series analysis, and have been used in neuroscience and speech
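
    For readers who want to see what the "continuous adaptation" of a liquid time-constant neuron looks like mechanically, here is a minimal forward-Euler sketch. All constants and the input signal are invented for illustration; this is not any particular paper's or vendor's actual code.

```python
import numpy as np

def ltc_step(x, I, dt=0.01, tau=1.0, A=1.0, w=2.0, b=0.0):
    """One Euler step of a liquid time-constant unit:
    dx/dt = -x/tau + f(I) * (A - x), where f is a sigmoid of the input.
    The effective time constant varies with the input, which is the 'liquid' part."""
    f = 1.0 / (1.0 + np.exp(-(w * I + b)))  # input-dependent gate
    dx = -x / tau + f * (A - x)
    return x + dt * dx

x = 0.0
for t in range(1000):
    I = 1.0 if 300 <= t < 600 else 0.0      # a step input that switches on, then off
    x = ltc_step(x, I)
    if t in (299, 599, 999):
        print(f"t={t + 1:4d}  input={I}  state={x:.3f}")
```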

  • @nowsc
    @nowsc 4 หลายเดือนก่อน

    … it seems that backpropagation would be much easier if each node contained some kind of adjunct node that merely counts whether the signal took a path through that node.

  • @nemonomen3340
    @nemonomen3340 7 หลายเดือนก่อน +6

    This comment is to let you know that you should, in fact, make a video on neuromorphic chips.

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +6

      noted!

    • @theAIsearch
      @theAIsearch  4 หลายเดือนก่อน

      Just posted: th-cam.com/video/LxOOj-mkQV8/w-d-xo.html

  • @nikitos_xyz
    @nikitos_xyz 7 หลายเดือนก่อน +1

    yes, it is the plasticity and learning ability that neural networks lack, thank you for these ideas.

  • @Noone-mo4dr
    @Noone-mo4dr 7 หลายเดือนก่อน +1

    5:37 the fact that these systems don't learn in real time is a feature, not a bug. I think anyone who remembers the consequences of Tay AI realizes why they deferred to offline learning.

    • @TheRightWay11
      @TheRightWay11 6 หลายเดือนก่อน

      Perhaps there are different forms of learning. The chatbot shouldn't update its behaviour based on what it is taught by the public, only its model of the world. But I'm not sure we know how to separate the two; our architectures are probably not that sophisticated yet.

    • @fl3xi3
      @fl3xi3 6 หลายเดือนก่อน

      @@TheRightWay11 Let's suppose ChatGPT were an online-learning model. If a company opposed to ChatGPT decided to feed it ridiculous input (I mean, if more than a certain percentage of the input given to ChatGPT were manipulated so that ChatGPT stops functioning well), then online learning would cause a problem. Think of it like this: if we made a small chatbot by fine-tuning the LLM and it learned online, there's a higher chance that certain types of input could corrupt its output.
    • @fl3xi3
      @fl3xi3 6 หลายเดือนก่อน

      @@TheRightWay11 I searched the web and found out that ChatGPT is based on offline/batch learning. And yes, with online learning the training process would need more resources/GPUs.

  • @loganwright3227
    @loganwright3227 4 หลายเดือนก่อน

    This question will probably make me sound naive, but is the liquid literally a liquid where inputs affect it and images of the effect are sent through the final network layer(s)?
    And to add something hopefully to the comment section, this reminds me a lot of quantum mechanics and superposition principles. The fact that the wave state of the liquid encodes information about the input must mean that there is a strong correlation between the frequencies and amplitudes to the output. If this is true, then adding a layer to embed the input into a set of vectors representing frequencies and phase shifts should equivalently do the trick as this idea of using a liquid to transform input to some high dimensional vector.
    Another question - how is this idea of a liquid network different from just adding an embedding layer to recode the input into some new shape?

  • @CharlesBrown-xq5ug
    @CharlesBrown-xq5ug 7 หลายเดือนก่อน +2

    《 Arrays of nanodiodes promise full conservation of energy》
    A simple rectifier crystal can, just short of a replicable long-term demonstration of a powerful prototype, almost certainly filter the random thermal motion of electrons, or of discrete positively charged voids called holes, so that the electric current flowing in one direction predominates. At low system voltage a filtrate of one polarity predominates only a little, but there is always usable electrical power derived from the source Johnson-Nyquist thermal electrical noise. This net electrical filtrate can be aggregated in a group of separate diodes in consistent parallel alignment, creating widely scalable electrical power. As the polarity-filtered electrical energy is exported, the amount of thermal energy in the group of diodes decreases. This group cooling will draw heat in from the surrounding ambient heat at a rate depending on the filtering rate and the thermal resistance between the group and the ambient gas, liquid, or solid warmer than absolute zero. There is a lot of ambient heat on our planet, more in equatorial dry desert summer days and less in polar desert winter nights.
    Refrigeration by the principle that energy is conserved should produce electricity instead of consuming it.
    Focusing on explaining the electronic behavior of one composition of simple diode: a near-flawless crystal of silicon is modified by implanting a small amount of phosphorus on one side, from an ohmic contact end to a junction where the additive is suddenly and completely changed to boron with minimal disturbance of the crystal pattern. The crystal then continues to another ohmic contact.
    A region of high electrical resistance forms at the junction in this type of diode when the phosphorus near the junction donates electrons that are free to move elsewhere, leaving phosphorus ions held in the crystal, while the boron donates a hole which is similarly free to move. The two types of mobile charge mutually clear each other away near the junction, leaving little electrical conductivity. An equilibrium width of this region is settled among the phosphorus, boron, electrons, and holes. Thermal noise goes beyond steady-state equilibrium. In thermal transients where mobile electrons move from the phosphorus-doped side to the boron-doped side, they ride transient extra conductivity and so are filtered into the external circuit. Electrons are units of electric current. They lose their thermal energy of motion and gain electromotive force, another name for voltage, as they transition between the junction and the array's electrical tap.
    Aloha

  • @Vpg001
    @Vpg001 7 หลายเดือนก่อน +2

    I think compression is underrated

  • @Shadowtime2449
    @Shadowtime2449 7 หลายเดือนก่อน

    Thanks

  • @OHYEAH-km3md
    @OHYEAH-km3md 2 หลายเดือนก่อน +1

    look at windows 30 years ago, now think where AI will be in 30 years

  • @maximelectron9949
    @maximelectron9949 6 หลายเดือนก่อน

    Great video, but one small correction:
    LLMs can actually "learn"; it's just that their memory is bad (they remember things only as long as they are in their context window).
    For example, you can actually "teach" ChatGPT to count the letters in a word. As of now, if you ask ChatGPT "what's the n-th letter in _?" it will guess a random one.
    However, if you explain that it has to write the letters out and enumerate them to find the n-th letter, and then ask it to use that process, it will be able to do so.
    So in a way you can "teach" ChatGPT to count letters and find the letter in a specific place in a word.

    • @williamwilkinson2748
      @williamwilkinson2748 6 หลายเดือนก่อน

      I have just tried asking ChatGPT the nth letter in a word and it gets it right without having to explain how to do it.

    • @wawan_ikhwan
      @wawan_ikhwan 2 หลายเดือนก่อน

      Context window? I don't think that's true for the attention mechanism in transformers,
      unless you are referring to the recurrent-network family that uses timesteps (a context window).

  • @claudiaweisz8129
    @claudiaweisz8129 6 หลายเดือนก่อน

    Awesome Video!!!! 👌👌👌 Earned a new subscriber from Europe > Austria 👋😁 🇦🇹

    • @theAIsearch
      @theAIsearch  6 หลายเดือนก่อน +1

      Thank you!

  • @lucidglobalwarning8707
    @lucidglobalwarning8707 6 หลายเดือนก่อน +1

    In response to your question at approx. 28 minutes: yes, I would like to see more on spiking neural networks!

  • @macmaniac77
    @macmaniac77 6 หลายเดือนก่อน

    Sooo, where is an example I can test and train?

  • @exilibris
    @exilibris 3 หลายเดือนก่อน +1

    The patterns that the "liquid neural network" layer reproduces will depend directly on the rules by which it operates when excited by the input layer. Most likely, the most effective approach in the future will be to represent the "liquid neural network" layer as a collection of qubits. Such a network would also have the properties of a spiking neural network, but with zero delay, since a spike in that network would not be an accumulation of potential but the determination of a quantum state with a certain probability; that is, delay is replaced by probability. Repeated stimuli would then be needed only to gather the final statistics, which, incidentally, can also define a pattern for the recognition network being trained.

    • @marcosny2010
      @marcosny2010 3 หลายเดือนก่อน

      Yes, you are right, but there is always a limitation: computers are based on a discrete state space (ON/OFF). This will improve a lot with quantum computing, but energy is still a problem.
      All the best to you.

  • @kimcosmos
    @kimcosmos 7 หลายเดือนก่อน

    when will we get neural network chips in our phones? I want a layer of capacitors on my transistors

  • @bmobert
    @bmobert 6 หลายเดือนก่อน

    It seems to me a liquid neural network (LNN) could be superimposed over a spiking neural network (SNN). The liquid would be a permanent or semi-permanent pattern of timed pulses that would morph depending on the input. At the same time, the same nodes would also implement SNN protocols, including backpropagation. LNN patterns would automatically affect, and be computed by, the SNN as it learned.

  • @austinrusso2178
    @austinrusso2178 7 หลายเดือนก่อน

    Thank you so much for this information video. It was great to learn what is happening and research in AI. At the current rate, do you think we could see AGI robots in the coming years?

    • @theAIsearch
      @theAIsearch  7 หลายเดือนก่อน +1

      Thanks. I'd guess before 2030

    • @gpt-jcommentbot4759
      @gpt-jcommentbot4759 6 หลายเดือนก่อน +1

      2050, if computing doesn't reach its end and AI continues expanding

  • @cesarlagreca8076
    @cesarlagreca8076 6 หลายเดือนก่อน

    Excellent summary for understanding the software and hardware difficulties of building these "liquid" networks. Thank you.