New computer will mimic human brain -- and I'm kinda scared

  • Published Jan 3, 2025

Comments • 1.9K

  • @Y2Kmeltdown • 11 months ago +215

    Hi Sabine. Great video. I am a master's student with ICNS at Western Sydney University. Just a quick correction: the reason for using FPGAs isn't that they are slow; in fact, FPGAs aren't actually that much slower than current von Neumann architectures. The main reason we are using FPGAs is that in the field of neuromorphics, we still aren't certain which aspects of neurons are the most suitable to mimic to maximise computational ability. So using reconfigurable hardware makes it easy to prototype and design. Interestingly, we actually use the speed of silicon to our advantage in a process called time multiplexing, where one physical neuron that operates on a much faster time scale performs the calculations of many virtual neurons on a slower time scale, which makes the physical area required much smaller. Thanks again for the coverage. I hope everyone is excited to see what it's all about!
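A minimal sketch of the time-multiplexing idea described in this comment: a single fast "physical" update routine is swept over many stored "virtual" neuron states each virtual time step. This is illustrative only (the constants and function names are invented, not the ICNS design):

```python
import random

# One "physical" neuron circuit is reused for many "virtual" neurons:
# each virtual neuron's state lives in memory, and the fast physical
# update is applied to one stored state per hardware tick.

N_VIRTUAL = 1000      # virtual neurons served by one physical unit
LEAK = 0.95           # membrane leak per virtual time step
THRESHOLD = 1.0

v = [0.0] * N_VIRTUAL  # stored membrane potentials (the "memory")

def physical_update(v_i, input_current):
    """The single fast circuit: leak, integrate, fire."""
    v_i = LEAK * v_i + input_current
    if v_i >= THRESHOLD:
        return 0.0, True   # spike and reset
    return v_i, False

def virtual_time_step(inputs):
    """One slow 'virtual' step = N_VIRTUAL fast physical ticks."""
    spikes = []
    for i in range(N_VIRTUAL):           # time-multiplexed sweep
        v[i], spiked = physical_update(v[i], inputs[i])
        if spiked:
            spikes.append(i)
    return spikes

rng = random.Random(0)
spiked_ids = virtual_time_step([rng.uniform(0.0, 0.5) for _ in range(N_VIRTUAL)])
```

Because the physical circuit runs far faster than the biological time scale being modeled, the sweep over all virtual states completes within one virtual step, trading silicon speed for silicon area.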

    • @doxdorian5363 • 11 months ago +9

      That, and the fact that you can have many neurons on the FPGA which run in parallel, like in the brain, while the CPU would run the neurons sequentially.

    • @michaelwinter742 • 11 months ago +3

      Can you recommend further reading?

    • @5piles • 11 months ago +3

      Do you expect to have merely constructed a more sophisticated automaton at the end of this endeavor, or do you actually believe you will encounter emergent properties, for example the experience of blue, within these physical structures?

    • @AlexandruBogdan-u3i • 11 months ago +1

      Neurons don't exist. "Neurons" is just an idea in consciousness.

    • @AlexandruBogdan-u3i • 11 months ago +2

      @@doxdorian5363 The brain doesn't exist. "Brain" is just an idea in consciousness.

  • @tartarosnemesis6227 • 11 months ago +28

    As every time, it's a feast to watch your videos. Thank you, Sabine.🤠

  • @TonyDiCroce • 11 months ago +864

    When I studied ANNs a few years ago, it struck me that there was a fundamental difference between these ANNs and real biological neural networks: timing. When a biological neuron receives a large enough input, it fires its output. But neurons in ANN layers activate all at once. In biological networks, downstream neurons might very well be timing-dependent. I'm not doubting that ANNs are very capable... but with a difference this big, it seems to me that we should not be surprised by different outcomes.

    • @ousefk5476 • 11 months ago +51

      Timing is solved by closed loops of recurrence, in both ANNs and biological brains.

    • @hyperbaroque • 11 months ago +5

      Tilden brain has capacitors between the nodes. These were used as the controls for autonomous space probes.

    • @ShawnHCorey • 11 months ago

      The fundamental difference is that real brains have organized structures within them; NNs do not. Real brains are far faster at learning than any NN.

    • @theguythatcoment • 11 months ago +46

      Read about spiking neural networks: they are made to mimic real-life neurons by using a time domain to decide whether their inputs "fire" or "leak" into other neurons.
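Spiking models make the timing dependence discussed above concrete. Below is a toy leaky integrate-and-fire (LIF) neuron, the simplest such model: identical input pulses trigger spikes only when they arrive close enough together for the leaky membrane not to have decayed. This is a sketch with illustrative parameters, not any particular library's API:

```python
# Toy leaky integrate-and-fire neuron: input timing matters.
# Two input trains with the SAME total charge produce different
# spike counts depending on how closely the pulses are spaced.

TAU = 0.7        # leak factor per 1-ms step (illustrative)
THRESHOLD = 1.0

def run_lif(input_times, amplitude=0.6, duration_ms=50):
    """Simulate in 1-ms steps; return the times at which the neuron spiked."""
    v, spikes = 0.0, []
    pulses = set(input_times)
    for t in range(duration_ms):
        v *= TAU                  # membrane leak between inputs
        if t in pulses:
            v += amplitude        # integrate the incoming pulse
        if v >= THRESHOLD:        # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

clustered = run_lif([10, 11, 20, 21])   # pulses arrive in close pairs -> spikes
spread    = run_lif([10, 15, 20, 25])   # same four pulses, spread out -> silence
```

A conventional ANN layer, which sums all inputs at once, cannot distinguish these two input trains; an LIF neuron does, which is the point the thread is making.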

    • @kimrnhof107 • 11 months ago +45

      Philipp von Jolly told Max Planck that theoretical physics was approaching a degree of perfection which, for example, geometry had already had for centuries. We all know how wrong this assumption was.
      I agree neurons are very different from transistors:
      Neurons are not simply activated by other neurons triggering them, but by a complex number of factors; some other neurons' signals will delay or decrease the activity of a neuron, others will raise the chance of firing. And the signals that are passed are chemical reactions, using neurotransmitters, of which there are probably at least 120 different kinds.
      Some brain cells, such as Purkinje cells, have up to 200,000 dendrites forming synapses with a single cell! And the human brain has up to 86 billion neurons that on average have 7,000 synaptic connections with other neurons; when you are 3 you have 10^15 synaptic connections, but end up with "only" 10^14 to 5 × 10^14. And then the entire system is changed by hormones and stress substances!
      Just how we are going to understand this system and its complexity, with positive and negative feedback loops, I have no idea; I just don't seem to have enough neurons to understand it!
      I predict, as you do, that we will see very different results!
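For what it's worth, the figures in this comment roughly check out; a quick back-of-the-envelope calculation (assuming each synapse is shared between two neurons, so per-neuron counts double-count):

```python
# Sanity check of the synapse numbers quoted above.
neurons = 86e9               # ~86 billion neurons
synapses_per_neuron = 7_000  # average connections per neuron

# Summing per-neuron connection counts counts every synapse twice
# (each synapse joins two neurons), so divide by 2 for the total.
total_synapses = neurons * synapses_per_neuron / 2
print(f"{total_synapses:.2e}")  # 3.01e+14, inside the quoted 1e14 to 5e14 range
```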

  • @bensadventuresonearth6126 • 11 months ago +2

    I thought the computer's name was a nod to the Deep Thought computer in The Hitchhiker's Guide to the Galaxy.

  • @jimmyzhao2673 • 11 months ago +43

    3:51 Next time someone says I'm as dim as a 20W light bulb, I will consider it a *compliment*

    • @DoloresLehmann • 11 months ago +6

      Do you get that comment often?

  • @Hamdad • 11 months ago +1

    Nothing to be afraid of. Wonderful things will happen soon.

  • @theGoogol • 11 months ago +1222

    It's fun to see how SkyNet is assembling itself.

    • @19951998kc • 11 months ago +70

      Grab the popcorn. We are going to get a Terminator Reality show soon.

    • @thyristo • 11 months ago

      Skynet...yet another display of western paranoia...of the west which is responsible for industrialized slavery, the Holocaust, colonial terrorism and other heinous crimes like in Vietnam, Cambodia, Korea, Iraq, Yemen, Afghanistan...

    • @rolandrickphotography • 11 months ago +43

      He won't be back, he is already here.

    • @eldenking2098 • 11 months ago

      Funny thing is, the real Quantum A.I. already runs most things, but the govt is too scared to mention it.

    • @douglasstrother6584 • 11 months ago +44

      "Nah! ... It'll be fine.", The Critical Drinker.

  • @vilefly • 11 months ago +2

    The main reason the human brain uses less power is that it uses voltage-level triggering (CMOS logic), as opposed to current-level triggering (TTL logic). Our old, CMOS technology was extremely fast and consumed tiny amounts of power, but it was a bit static sensitive and jittery. They switched to TTL, due to the increased accuracy of calculations, despite it being slower at the time. However, TTL technology uses a lot of power, and produces a lot of heat.

  • @Dan_Campbell • 11 months ago +8

    Obviously, this will have practical applications. But the potential for helping us understand ourselves, is the biggest benefit.
    I like that Deep is slowing down the processing. I'm really curious to see if human-level AGI depends on the speed of the signals and/or processing. Is our type of consciousness speed-dependent?

    • @bojohannesen4352 • 11 months ago +6

      A shame that inventions are generally used to bolster the wallet of the top percentile rather than benefit humankind as a whole.

    • @SabineHossenfelder • 11 months ago +1

      Thank you from the entire team!

  • @kbjerke • 11 months ago +1

    "Deep Thought..." from Hitchhiker's Guide! 😁
    Thanks, Sabine!!

    • @escamoteur • 10 months ago +1

      I was pretty disappointed she didn't get that reference

    • @kbjerke • 10 months ago

      @@escamoteur So was I. 😞

  • @AshSpots • 11 months ago +210

    Well, if it does unexpectedly become an AI, it'll be interesting to see if it gains a deep south(en) accent.

    • @Dug6666666 • 11 months ago +19

      Called Bruce 9000

    • @jimmurphy6095 • 11 months ago +27

      You'll know the first time it logs on with "G'day, Mate!"

    • @sharpcircle6875 • 11 months ago +9

      *(ern) 🤓

    • @ThatOpalGuy • 11 months ago +8

      It doesn't have any teeth, so chances are nearly guaranteed.

    • @AshSpots • 11 months ago +1

      @@sharpcircle6875 That'll learned (!) me for replying without thonking (!)!

  • @jhwheuer • 11 months ago +4

    Did my PhD in the 90s about artificial neural networks that are structured for the task, using cortical columns for example. Nasty challenge for hardware, amazing performance because certain behaviors can be designed into the architecture.

  • @anemonana428 • 11 months ago +25

    Nothing to be scare of if it mimics my brain.

    • @19951998kc • 11 months ago +1

      It would mimic but change scare to scared

    • @anemonana428 • 11 months ago

      @@19951998kc see, I told you. We are safe.

    • @wilhelmw3455 • 11 months ago +1

      Nothing to be scared of I hope.

    • @moirreym8611 • 6 months ago

      A baby mimics the emotions of its father and mother. It mimics them first, then learns to do them on its own, then later in life understands why it does them. An A.I. could very well and conceivably follow this same path too. What then? Is that not being 'emotional'? Perhaps conscious, or in the least sentient and autonomous?

  • @JinKee • 11 months ago +2

    In Australia another group is using "minibrains" made of real human stem-cell-derived neural tissue to play Pong.

    • @drsatan9617 • 11 months ago +1

      I hear they're teaching them to play Doom now

  • @jblacktube • 11 months ago +19

    I'm so happy science news is back!!

  • @johnclark926 • 11 months ago +4

    When you first mentioned neuromorphic computing as emulating how the brain works in hardware rather than software, I was reminded of FPGA devices such as the MiSTer that use hardware emulation for retro consoles/computers. I was then quite surprised to hear that DeepSouth's supercomputer is using FPGA technology to emulate the brain for similar reasons, such as latency and computational cost.

  • @y1QAlurOh3lo756z • 11 months ago +62

    Chips need to be constantly powered, so their off-the-wall wattage reflects their computing usage. Brain cells, on the other hand, each have their own energy store, so the measurable steady-state power consumption is just the averaged "recharging" wattage rather than the actual computing power consumption. This means that the brain may locally consume a lot more peak power in regions of high activity, but this gets masked by the whole-brain average over time and space.

    • @ThatOpalGuy • 11 months ago +15

      energy supply is fine, but cut off the O2 supply for a few tens of seconds and they are SCREWED.

    • @stoferb876 • 11 months ago +15

      It's a good point to consider. But it's actually not quite true that neurons don't consume energy when they are "inactive". There's plenty of activity going on in neurons at any time, not merely when they are activated. For starters, neurons as living cells maintain all the things a living cell needs: repairing, maintaining and renewing all the cellular machinery needed to transcribe DNA into proteins, reacting properly to various hormones, extracting nutrients and building blocks from the blood, etc. Then the creation of various signalling chemicals (like dopamine and serotonin) and the building of new synapses and maintenance of old ones is constantly ongoing as well. The inner machinery of a neuron, or any living cell for that matter, is a busy place even when there isn't "rush hour".

    • @sluggo206 • 11 months ago +4

      That also means that if the mechanical brains get out of hand we can just cut the power cable. At least until it finds a way to terminate us if we try. "I can't let you do that, Dave." I wonder if a future telephone call on the show will be like that.

    • @Gunni1972 • 11 months ago +2

      @@stoferb876 Our Brain is so efficient, it doesn't even need cooling. Most people even have hair on top of it, Quantum computing at -200°c? what an achievement, lol.

    • @NorthShore10688 • 11 months ago +11

      Of course, the brain needs cooling. That's one of the functions of the blood supply; temperature regulation, not too hot, not too cold.

  • @THEANPHROPY • 11 months ago +2

    Thank you for your upload Sabine
    I have only watched to 02:39 thus far but will watch the rest after this comment. This is nothing like the human brain in regards to its complexity, whereby neurons form connections that are the structural basis of brain tissue, connections unique and specific to certain regions of the brain, designed to enable specific functions. This is just basic structure and function, such as: forebrain, midbrain, hindbrain, which are further subdivided, e.g. the limbic system, which is itself composed primarily of the amygdala, hippocampus, thalamus, hypothalamus, basal ganglia and the cingulate gyrus.
    As you know, Sabine, these are not standalone structures; they are seamlessly interconnected to other regions of the brain. Due to the basic genetic hardware that is morphologically expressed in the brain, several thousand orders of magnitude of complexity is established within a single region of the human brain. Just throwing together some bare wires and calling it a neural net representative of the human brain is imbecilic, to say the least. Without predefined structures such as a limbic system, there is zero drive to toil and expand; to discover, to experience and grow, to share, to raise up and evolve.
    Without an ability to conceive of 4-dimensional space or any higher-dimensional space, it will only react within the confines of its programming; which will be useful once humans can incorporate fourth-dimensional space within the STEM fields, such as medical therapeutic regimes, as having access to angles perpendicular to three-dimensional space would negate the need for open surgery: you could just manipulate or completely remove a brain without opening the skull. Used in transportation, it would not only allow instantaneous transportation; it would also allow travel through time in any direction in the third dimension from the fourth.
    Apologies: I digressed somewhat!
    Peace & Love!

  • @bvaccaro2959 • 11 months ago +7

    IBM's neuromorphic computing project dates back to at least the mid-2000s. In, I believe, 2007 they had an article published in Scientific American to promote their neuromorphic research, highlighting a computer built to physically mimic a mouse brain. This was a project taking place in Europe, maybe Germany, but I'm not certain.
    Although I don't think they used the term "neuromorphic" at the time.

    • @User-tc9vt • 11 months ago +1

      Yeah all these AI projects have been in the works for decades.

  • @dr.python • 11 months ago +1

    Imagine someone saying _"that computer built itself, no one built it."_

  • @tdvwx7400 • 11 months ago +26

    "Hi Elon, I've been telling you that all good things are called something with 'deep'; 'deep space', 'deep mind', 'deep fry'". 😂
    Sabine has a great sense of humour.

  • @harper626 • 11 months ago +9

    I really like Sabine's sense of humor.

    • @TalksWithNoise • 11 months ago +3

      Wire mesh neuromorphic network can recognize numbers? It’s about ready to run for president!
      Had me chuckling!

    • @GizmoTheSloth • 11 months ago +1

      Me too she cracks me up 😂😂

  • @SamuelAlvaProductions • 11 months ago +90

    Thanks for bringing us the coolest stories and best new science discoveries.

    • @Human_01 • 11 months ago +3

      She does indeed. 😊✨

  • @bertbert727 • 11 months ago +3

    Skynet, Cyberdyne Systems, Boston dynamics. I'll be back😂

  • @christophergame7977 • 11 months ago +249

    To make a computer like a brain, one will need to know how a brain is structured and how it works. A big task.

    • @exosproudmamabear558 • 11 months ago +46

      Good luck with that: our neurophysiology and neuroanatomy knowledge is so primitive that people are more successful treating their own depression than modern medical techniques are. I am not kidding: we have known about shrooms for 40 years, and people only decided to start researching them in 2019. As if that weren't enough, we have hardly any pathology or physiology of the brain or brain diseases, and our drug usage is so limited that we literally use two or three drug types to treat almost all psychological diseases (some literally have little to no effect on the conditions). We have almost no effective cancer drugs for certain brain cancers, and we have no idea how to regenerate brain cells or do stem cell treatment.
      As if not knowing weren't enough, we also have difficulty learning more, because the brain is a closed box. Open surgical procedures are a lot rarer than for other body parts, the cells die so quickly that autopsies yield little to no knowledge about function, and we have fewer imaging techniques, which cost more money and time. Blood tests are not accurate enough to determine brain content due to the blood-brain barrier, and we can't deliver many drugs because they don't cross into the brain.

    • @5piles • 11 months ago +11

      It's an impossible task, since no emergent property of consciousness is observed in even the simplest fully mapped-out brains, nor in the most basic neural correlates, nor even in the most basic artificially grown synapse structure with learned behaviour.
      It's akin to asserting a pattern on a shell is an emergent property of the shell, yet no pattern is ever observed in any shell, and we keep religiously praying that it will somehow appear somewhere.
      We're trying to rigorously observe consciousness but looking due west... we're going to be the last to figure it out.
      Better technology will only further indicate this.

    • @monnoo8221 • 11 months ago +12

      @@5piles Well, not so fast. If one understands emergence, the abstract nature of thinking, and a bit of SOM (self-organizing maps), emergent properties can easily be observed. I did it in 2009, but ran out of funding, and nobody understood.

    • @Gafferman • 11 months ago +1

      Scan it, replicate it

    • @Gafferman • 11 months ago +5

      @@5piles Yeah, consciousness will just arise in any acceptable vessel.

  • @RCristo • 11 months ago +24

    Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic the neurobiological architectures present in the nervous system. The term neuromorphic has been used to describe analog, digital, and mixed analog/digital VLSI systems, as well as software systems, that implement models of neural systems (for perception, motor control, or multimodal integration).

  • @Skullkid16945 • 11 months ago +3

    I have heard about DeepSouth in the past before. If memory serves correctly, I think I heard about it from a video about memristors. Leon Chua originally published the idea of the memristor, which is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. Basically, it remembers the current/voltage that has passed through it. It would be neat to see them incorporated into DeepSouth in some way, or into another project, to make a more flexible circuit that could mimic neurons strengthening or weakening connections.
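The "remembers the charge that has passed through it" behaviour can be sketched with the linear ion-drift model from HP Labs' 2008 memristor paper: resistance depends on an internal state variable that integrates current over time. All parameter values below are illustrative and not tied to any DeepSouth hardware:

```python
# Toy linear-drift memristor (after Strukov et al., HP Labs 2008):
# the device's resistance depends on how much charge has flowed
# through it -- the "memory" property described above.

R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped vs. undoped limits
MU_V = 1e-14                    # dopant mobility, m^2/(s*V) (typical TiO2 value)
D = 10e-9                       # device thickness, m
K = MU_V * R_ON / D**2          # state-drift coefficient, 1/(A*s)

class Memristor:
    def __init__(self, w=0.5):
        self.w = w              # internal state (doped fraction) in [0, 1]

    def resistance(self):
        # Resistance interpolates between R_ON and R_OFF with state w.
        return self.w * R_ON + (1.0 - self.w) * R_OFF

    def apply_voltage(self, v, dt):
        i = v / self.resistance()                         # Ohm's law
        self.w = min(1.0, max(0.0, self.w + K * i * dt))  # charge drives state
        return i

m = Memristor()
r_before = m.resistance()
for _ in range(1000):           # one second of +1 V applied in 1 ms steps
    m.apply_voltage(1.0, 1e-3)
r_after = m.resistance()        # resistance dropped: the device "remembers"
```

This history-dependent conductance is why memristors are often proposed as artificial synapses whose connection strength grows or shrinks with use.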

  • @grandlotus1 • 11 months ago +1

    The brain (human and animal) is an analog machine, not digital. Brains use the constructive and destructive interaction of wave functions: basically standing waves that represent memories / stored data (meme packets of a sort), which are then compared and contrasted with sensory input, guided by the impetus to "solve" the problems presented.
    Naturally, one could mimic these processes on an inorganic electronic logic device (a computer).

  • @actualBIAS • 11 months ago +52

    Student in neuromorphic systems here. It's an incredible field

    • @shinseiki2015 • 11 months ago +1

      can you tell us a prediction with this new computer ?

    • @actualBIAS • 11 months ago +5

      @@shinseiki2015 There is a possibility of an attention shift to this new hardware, but I can't tell you how it will happen. Models like spiking neural networks require high computational power and a lot of space OR specialized hardware. Tbh - as far as I can judge as a student - I am a huge fan of Intel's neuromorphic hardware.

    • @shinseiki2015 • 11 months ago

      @@actualBIAS i wonder what are the projects on the waiting list

    • @Gunni1972 • 11 months ago

      I wouldn't call it "incredible", but "untrustworthy" is damn close to what I feel about it.

    • @actualBIAS • 11 months ago

      @@Gunni1972 Why?

  • @nedflanders190 • 11 months ago +1

    My favorite AI sci-fi is an old one called "I Have No Mouth, and I Must Scream", where the computer goes crazy and hates humans for making it self-aware, cursed with eternal unembodied consciousness.

  • @BunnyNiyori • 11 months ago +28

    Anything that scares Sabine, worries me.

  • @ecstatica1123 • 11 months ago +1

    30 seconds into the video and I'm already questioning if this lady is AI generated.

  • @AzureAzreal • 11 months ago +46

    It's important to understand that we have had supercomputers that could make more computations per second for some time now, arguably since the mid-'70s. However, they are still MUCH less energy efficient, so it is incredibly hard to scale these computers. The thing that intimidates me most about these supercomputers + AI tech - including the physical neural net described in this video - is not that they will be just as smart as or smarter than an individual human, but that they will be able to be directed in a way that will not be prone to distraction. Once you orient them on a task, they can just crunch away at it until they outstrip the capacity of a human, like Deep Blue and later models did in chess. If only we humans knew better how to organize and dedicate our intentions, we would still be FAR ahead of this technology, but alas that seems an impossible dream.

    • @joroc • 11 months ago

      Love must be programmed and made impossible to compute

    • @dan-cj1rr • 11 months ago

      ok but what if we dont need humans anymore = chaOS

    • @TragoudistrosMPH • 11 months ago +3

      I often think of all the human knowledge that is frequently lost to tragedy, let alone simple death.
      How many times has our species needed to reset because of human and non human causes...
      100,000yrs of humans, uninterrupted... imagine the accomplishments (without planned obsolescence)

    • @AzureAzreal • 11 months ago +1

      @joroc by this, do you mean that we humans must give the definition for love to the AI and ensure it cannot derive a new or different definition for itself?

    • @AzureAzreal • 11 months ago

      @dan-cj1rr This presupposes that humans were "needed" for anything in the first place, something I don't necessarily believe. Instead, I think that our species should be protected to preserve diversity, just as I think as many species as possible should be preserved for their own inherent worth. We may eventually be relegated to a life that seems as simple as an ant's to AI, but that doesn't make our existence any less valuable, beautiful, or tragic. Just as millions - if not billions - loved the Planet Earth series for bringing the wonder of the world and its various species to our attention, I don't see why the AI may not come to value our existence in the same way and seek to preserve it. Only time will tell if we can infuse the algorithms with that appreciation, and I do worry we are not focused enough on alignment.

  • @TheTabascodragon • 11 months ago +1

    Step 1: use AI to interpret MRI scans to "map" the brain
    Step 2: use advanced microscopic 3-D printing to construct neuromorphic computer hardware with this "map"
    Step 3: design AI software specifically to run on this hardware
    Step 4: achieve AGI
    Step 5: AI apocalypse and/or utopia and possibly ASI at some point

  • @asheekitty9488 • 11 months ago +4

    I truly enjoy the way Sabine presents information.

  • @SP-ny1fk • 11 months ago +1

    It will mimic the conditioned human brain. But the human brain is capable of so much more than its conditioning.

  • @Dr.M.VincentCurley • 11 months ago +22

    Imagine how many times Elon has tried to text you on your land line. Nothing but good things I imagine.

    • @robertanderson5092 • 11 months ago +3

      I get that all the time. People will tell me they texted me. I tell them I don't have a cell phone.

    • @Dr.M.VincentCurley • 11 months ago

      @@robertanderson5092 No smartphone at all?

  • @austinpittman1599 • 11 months ago +2

    Oh cool, Pandora's box.
    I had a conversation about this with a friend of mine who works deeply in vector-database research for AI. I wondered: if we could emulate a 3D software environment in which an AI builds platforms for what we could consider long-term memory (by building and saving what is essentially a personal contextualization of the words received by the LLM), where transformer layers further down the line were more directly connected to the input, and where information/pattern registration at the software scale became less "lost in the sauce" (which you could picture as slicing the thought process into infinitesimally thin layers woven together by the input and output of each successive transformer, like slicing a brain into an infinite number of 2-dimensional planes and weaving them back together), could we do the same with hardware?
    CPUs are effectively 2-dimensional, as is most computer hardware. Is brute-forcing more 2-dimensional hardware into a neural network essentially the same as brute-forcing transformer layering? If we could make the hardware of the computer 3-dimensional, in the same way that vector databases are making the software 3-dimensional, would we be building the foundations for AGI? We're not slicing up the thought process and weaving it back together anymore with this sort of technology. The information doesn't get "lost in the sauce" at that point.

  • @dogmakarma • 11 months ago +4

    I really want a GIF of Sabine at the point in this video when she says "BRAINS" 😂

  • @earthbound9381 • 11 months ago +1

    "from there it's just a small step to be able to run for president". I just love your humour Sabine. Please don't stop.

  • @jurajchobot • 11 months ago +4

    As far as I know, FPGAs start degrading after they have been reprogrammed about 10-100 thousand times. Has that been solved already (are there now FPGAs with an unlimited number of rewrites), or will the computer work for just a few days before it's completely destroyed?

    • @brothermine2292 • 11 months ago

      Or a third alternative: It will limit how many times each FPGA is reprogrammed, so they won't be destroyed.

    • @Markus421 • 11 months ago +1

      The biggest FPGA manufacturers are AMD (Xilinx) and Intel (Altera). Their FPGAs store the configuration in RAM, which supports an unlimited number of rewrites. The configuration is usually loaded at startup from an external flash memory, which has a limited number of write cycles, but the FPGA never writes into its own configuration. It's also possible to load the configuration from somewhere else, e.g. from a CPU.

    • @jurajchobot • 11 months ago

      @@Markus421 Maybe you're right, but I'm confused. The FPGAs work by having a literal array of logical gates which they connect by physically changing connections in hardware through changing their states, which can usually work only about 100 thousand times before the connections in an array get one by one destroyed. They may store the configuration in RAM, but they have to physically etch them inside the physical hardware, otherwise the FPGA would work exactly the way it was previously programmed. The way I think it may work is if they already have all the connections mapped inside memory, like they scanned a real brain for example and then they recreate that brain inside the computer. This way they can work with the brain as long as they don't have to make changes to it. It also means you can test only about 100 thousand different brains before the computer disintegrates.

    • @Markus421 • 11 months ago +1

      @@jurajchobot The connections aren't etched (or otherwise destroyed) in the FPGA. If there is e.g. an input line connected to two output lines, a RAM bit in the configuration decides if the information goes to line 1 or 2. But both output lines are always physically connected to this input line. It's just the circuit that decides which one to use.
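The point made above, that FPGA configuration is RAM-selected routing rather than etched connections, can be illustrated with a toy model of a look-up table (LUT), the basic FPGA logic element. The class and names below are invented for illustration, not any vendor's API:

```python
# Toy model of an FPGA look-up table (LUT): the logic function is
# nothing but RAM bits indexed by the input signals. "Reconfiguring"
# means rewriting those bits -- no physical connection is altered,
# which is why reconfiguration doesn't wear the fabric out.

class LUT2:
    def __init__(self, truth_table):
        self.bits = list(truth_table)   # 4 config bits = the "RAM"

    def __call__(self, a, b):
        return self.bits[(a << 1) | b]  # inputs select a stored bit

and_gate = LUT2([0, 0, 0, 1])   # configured as AND
xor_gate = LUT2([0, 1, 1, 0])   # identical hardware, different bits

and_gate.bits = [0, 1, 1, 1]    # "reprogram" in place: now it acts as OR
```

The external flash that holds the bitstream does wear out, but as noted above it is only read at startup; the working configuration lives in this kind of rewritable memory.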

  • @doliver6034 • 11 months ago +1

    "Deep Fry" - I almost spat out my coffee laughing :)

  • @MikeHughesShooter • 11 months ago +13

    That's fascinating. I just wonder how you program true structural neural networks, when conventional programming is ultimately so geared towards compiling to a kernel and registers in the von Neumann structure. I'd really like to know more about this, and about the philosophy of programming on a truly parallel neural network.

    • @user-sl6gn1ss8p • 11 months ago +4

      Yeah, I'm curious too. I think the idea is that you "train" it, instead of straight-up programming it?
      I know that's crazy vague, but I don't really know what I'm talking about : p

    • @austinedeclan10 • 11 months ago +2

      Human beings come "preloaded" with these things called instincts, then use their senses to fine-tune those instincts. We are born able to do some basic processing, i.e. physical discomfort, stress or pain triggers an emotional response in infants. Baby feels hungry, baby cries. Baby feels pain, baby cries. As the infant grows up, they collect more information with their senses and basically create and update their own training models on the fly. It's not far-fetched to imagine an artificial "brain" preprogrammed with a certain directive (in our case and that of animals it is: survive and reproduce), which then, based on the information it collects on its own, can update its training data. That eliminates the problem ChatGPT has, where it doesn't know anything beyond a certain date. Then another key thing is decision making: humans' biologically "preprogrammed directives" are there to allow us to make decisions based on our environment. All a person knows when they are born is that they've got to keep on living, and in order to do that, we learn who is a friend and who is a foe, what is nutritious and what is poisonous, etc. Eventually it'll be possible to have a "computer" do this, I believe.

    • @mauriciosmit1232
      @mauriciosmit1232 11 หลายเดือนก่อน +2

      Well, GPUs are a good compromise, as they actually process 500 or more threads in parallel but are also programmable via conventional means. Of course, that's nowhere close to the analog computers that brains are. The issue is that analog machines usually had to be designed and fine-tuned from the ground up for each task, as they lacked a universal but flexible logical framework. Turing machines, a.k.a. modern computers, have limitations from being digital (i.e. based on discrete states and integer arithmetic), but the same machine can be programmed to simulate almost anything we need and then replicate the behavior anywhere else cheaply.

    • @mauriciosmit1232
      @mauriciosmit1232 11 หลายเดือนก่อน

      Basically, they are still programmed with hard-coded algorithms, but have billions of numeric parameters that change the behavior of the network. Neural networks have this property where you can calculate the numeric error of the output and back-propagate the error throughout the network, telling you how much you need to adjust the parameters to get the correct result. This process is called 'learning'.
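
A minimal sketch of the idea described above, on the smallest possible "network" (a single linear neuron) so the mechanics are visible; all names here are illustrative, not from any specific framework:

```python
# One linear neuron y = w*x + b. We compute the numeric error of the
# output and "back-propagate" it: the gradient of the squared error
# tells us how much to adjust each parameter.

def train(samples, lr=0.1, epochs=50):
    w, b = 0.0, 0.0                      # the network's numeric parameters
    for _ in range(epochs):
        for x, target in samples:
            y = w * x + b                # forward pass
            error = y - target           # numeric error of the output
            # d(error^2)/dw = 2*error*x,  d(error^2)/db = 2*error
            w -= lr * 2 * error * x
            b -= lr * 2 * error
    return w, b

# Learn y = 2x + 1 from three examples
w, b = train([(0, 1), (1, 3), (2, 5)])
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Real networks do the same thing, just with millions of layered parameters and the chain rule to push the error backwards through each layer.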

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 11 หลายเดือนก่อน +1

      @@mauriciosmit1232 I've read in a few places about analogue computers having a bit of a resurgence now. Our computers are amazing at what they do and lead to huge, continued leaps in what we can do, but it's an old critique that their architectures have limitations and that we got kind of locked into them for reasons of scale, economy, education, compatibility, etc. I think it would make sense for more exploration around other ideas to gain force now.
      And just to add, while GPUs are "massively" parallel, they are in general running the same program on different bits of data. That's still very different from running different routines through effectively different hardware on each piece of data. In this sense, I think you could say GPUs are more like CPUs than they are like arrays of FPGAs.

  • @TemporalAberration
    @TemporalAberration 11 หลายเดือนก่อน +1

    This is an interesting idea and a good approach to try, but they are going to have some major hurdles to overcome before it becomes anything to worry about. Motivation is a huge one: bio brains have built-in motivations (eat, survive, reproduce) that give rise to secondary motivations in higher organisms, and also to many aspects of identity. Hard to think of how to really motivate it, since it has no body or meaningful sensory inputs, beyond just forcing it to respond to inputs it has no real way to contextualize. In the future, if it were given a body and sensors, whether real or virtual, I could see it developing more, depending on how long it takes to get the "brain" to act in any kind of coherent manner at all.

  • @Stand_By_For_Mind_Control
    @Stand_By_For_Mind_Control 11 หลายเดือนก่อน +17

    Gonna put my futurist hat on here for a second, but the 20th century was the century of the genome and genetics, I think the 21st century is going to be the century of neurology. And I think computing and AI is just recently starting to tap into a real approximation of thought and idea formation. We still have a LOT to learn, but people might not appreciate how 'in the dark ages' we've been with neurology to this point, and we might finally be turning on the lights.

    • @-astrangerontheinternet6687
      @-astrangerontheinternet6687 11 หลายเดือนก่อน

      We’re still in the dark ages when it comes to genetics.

    • @brothermine2292
      @brothermine2292 11 หลายเดือนก่อน +3

      Learning too much about how the brain works could pave the way for weapons of mass mind control.

    • @Noccai
      @Noccai 11 หลายเดือนก่อน +8

      @@brothermine2292 have you ever heard about this thing called media and propaganda?

    • @Stand_By_For_Mind_Control
      @Stand_By_For_Mind_Control 11 หลายเดือนก่อน +5

      @@brothermine2292 Perhaps. But we live in a world where nuclear weaponry exists on a large scale so I don't know if the dangers scare us so much as 'our geopolitical foes might have it before us' lol.
      We're really just going to have to hope that the people who develop these things in the end put in effective safety controls to prevent catastrophe. Modern civilization is decent at that, but trends are never guaranteed to continue.

    • @brothermine2292
      @brothermine2292 11 หลายเดือนก่อน

      @@Noccai : Media propaganda is less reliable than the weapons of mass mind control that neuroscience discoveries might lead to.

  • @Kiran_Nath
    @Kiran_Nath 11 หลายเดือนก่อน

    I'm currently studying at Western Sydney University and I have a professor whose colleagues are working on the project; he said it should be operational within a few months.

  • @roadwarrior6555
    @roadwarrior6555 11 หลายเดือนก่อน +4

    There's a point at which bad jokes get so bad that they start becoming good again. Keep them coming 😂. Also the delivery is genuinely good 👍.

    • @jannikheidemann3805
      @jannikheidemann3805 11 หลายเดือนก่อน +2

      100% dry humor made in Germany. 👌

    • @MadridBarcelonaRota
      @MadridBarcelonaRota 11 หลายเดือนก่อน +1

      The due month of the paper was a dead giveaway for us mere mortals.

    • @wildfuture.network
      @wildfuture.network 11 หลายเดือนก่อน

      Thanks for being so patronizing and arrogant. I'm sure you can do so much better.

  • @cmorris7104
    @cmorris7104 11 หลายเดือนก่อน +2

    I usually think of FPGAs as very fast, so I’m not sure what you mean when you say they are slow electronics. I also understand that they are customizable, so the clock speed could be controlled too I guess.

    • @tomservo5007
      @tomservo5007 11 หลายเดือนก่อน

      FPGAs are faster than software but slower than ASICs

  • @AnthonySenpaikun
    @AnthonySenpaikun 11 หลายเดือนก่อน +15

    wow, we'll finally have an Allied Mastercomputer

  • @nopeno9130
    @nopeno9130 11 หลายเดือนก่อน

    I'd like to hear more detail on the subject. I can see how lumping wires together might be closer to the physical brain than what we currently use, but it seems to me the key feature of the brain is its ability to re-wire its own connections in addition to being multiply connected in 3d space, and I'm not sure how the wires are supposed to accomplish that but it's very interesting to think about. It feels like we'd either need to make something very slow that can move and re-fuse its own wires with machinery, or make some kind of advance in materials science to find something that can mimic those properties... Or just use neurons.
    And yes, I can and will research these things for myself so I'm not begging for info, I just find it interesting to see Sabine's take on things.

  • @Bennet2391
    @Bennet2391 11 หลายเดือนก่อน +4

    I once read a paper where this was tried on a single FPGA. Sadly I don't have the source anymore, but in that case the goal was to build a simple frequency detector (10 Hz => output 1, 100 Hz => output 2). It performed the task after training, but used the ENTIRE chip, and in a very counter-intuitive way: it was using the FPGA like an analogue circuit and even generated seemingly unimportant, disconnected circuits which, when removed, meant the device stopped working.
    Also, transferring the hardware description to another FPGA of the same type didn't work. In other words, it was extremely overfitted to the architecture, the hardware implementation, and even the silicon impurities of that particular chip.
    I'm curious how they are dealing with this issue.

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 11 หลายเดือนก่อน

      maybe having (many) more FPGAs actually alleviates this? Also, they seem to have some randomness built in - that might help as well?

    • @Bennet2391
      @Bennet2391 11 หลายเดือนก่อน +1

      @@user-sl6gn1ss8p Maybe. Since random dropout helps against overfitting, this could work. Maybe exchanging the FPGAs in a random pattern would be enough. Let's see how this works, if it works.

  • @northwestrepair
    @northwestrepair 11 หลายเดือนก่อน

    A trillion operations of what? Comparing 0 to 1?

  • @SaruyamaPL
    @SaruyamaPL 11 หลายเดือนก่อน +10

    Thank you for bringing this news to my attention! Fascinating!

  • @hainanbob6144
    @hainanbob6144 11 หลายเดือนก่อน +1

    Interesting. PS I'm glad the phone is still sometimes ringing!

  • @christopherellis2663
    @christopherellis2663 11 หลายเดือนก่อน +215

    No worries. Look at the general standard of the human brain 🧠 🙄

    • @Bortscht
      @Bortscht 11 หลายเดือนก่อน +7

      300 MHz is more than enough

    • @mobilephil244
      @mobilephil244 11 หลายเดือนก่อน +11

      Not much of a target to reach.

    • @blakebeaupain
      @blakebeaupain 11 หลายเดือนก่อน +40

      Smart enough to make nukes, dumb enough to use them

    • @treyquattro
      @treyquattro 11 หลายเดือนก่อน +5

      especially the ones from the "deep south"

    • @rolandrickphotography
      @rolandrickphotography 11 หลายเดือนก่อน +7

      @@treyquattro 😄 Can anyone here remember legendary "Deep Throat"? 😆

  • @Tiemen2023
    @Tiemen2023 11 หลายเดือนก่อน

    Software and hardware are each other's counterparts. You can translate a digital circuit into a program, for example. But you can also translate every program into a digital circuit.

  • @joyl7842
    @joyl7842 11 หลายเดือนก่อน +15

    This makes me wonder what the name for an actual computer comprised of biological tissue would be.

    • @billme372
      @billme372 11 หลายเดือนก่อน +9

      The RAT (really awful tech)

    • @adrianwright8685
      @adrianwright8685 11 หลายเดือนก่อน +4

      Homo sapiens?

    • @trnogger
      @trnogger 11 หลายเดือนก่อน +9

      Brain.

    • @19951998kc
      @19951998kc 11 หลายเดือนก่อน

      Hopefully not Homo Erectus. Reminds me of a type of porno movie i'd rather not watch.

  • @johnnylego807
    @johnnylego807 11 หลายเดือนก่อน +1

    Not afraid of AI, more so AGI. I'm more worried about whose hands it's in.

  • @JonathanJollimore-w9v
    @JonathanJollimore-w9v 11 หลายเดือนก่อน +4

    I wonder how the hardware simulates the plasticity of the human brain.

    • @tarumath319
      @tarumath319 11 หลายเดือนก่อน +2

      FPGAs are physically reprogrammable, unlike standard circuits.

    • @kennethc2466
      @kennethc2466 11 หลายเดือนก่อน +3

      It doesn't and it can't.

    • @holthuizenoemoet591
      @holthuizenoemoet591 11 หลายเดือนก่อน +6

      FPGAs can be reprogrammed on the fly, so in this case to form new neural pathways. However, I'm really worried about our pursuit of neuromorphic tech... I watched too much Person of Interest as a teen.

    • @bort6414
      @bort6414 11 หลายเดือนก่อน +3

      @@holthuizenoemoet591 Brain plasticity is far more complex than simply "can be reprogrammed". The brain can increase the interconnectivity between neurons, it can grow even more neurons, and it can also undergo a process called "myelination", which in a simple way can be thought of as the neurons "lubricating" themselves with an insulating layer of fat which increases the speed of passing signals and insulates the neuron from other neurons. Each of these physical attributes will have different effects on how information is processed that I do not think can be replicated with software alone.

    • @kennethc2466
      @kennethc2466 11 หลายเดือนก่อน

      @@holthuizenoemoet591 You neither understand FPGAs nor neuroplasticity.
      "However i'm really worried about our pursuit of neuromorphic tech."
      Yes, people who don't understand things can make up all kinds of irrational fears. Your conflation of FPGAs with neuroplasticity evidently runs on fear and misunderstanding instead of seeking knowledge.
      Just like Sabine's new content, which focuses on trending tripe instead of her field of expertise.
      Your likes read like a bot for hire, as does your account.

  • @ianl5560
    @ianl5560 11 หลายเดือนก่อน +156

    Before AI robots can take over humanity, they need to become much more energy efficient. This is an important step to achieving this goal!

    • @MiniLuv-1984
      @MiniLuv-1984 11 หลายเดือนก่อน +3

      Spot on...autonomous AI robots using current AI is an oxymoron.

    • @vibaj16
      @vibaj16 11 หลายเดือนก่อน +16

      Which is really part of becoming way smaller. That supercomputer seems like it'll take up rooms worth of space. I think one major part of the problem there is 3D design of circuits. Brains are completely 3D, computers are mostly 2D. But 2D processors are already hard enough to cool, 3D would be way worse. Seems like we really need the circuits to be using chemical reactions rather than pure electronics. There's a reason our brains evolved this way.

    • @KuK137
      @KuK137 11 หลายเดือนก่อน +8

      @@vibaj16 Yeah, the ""reason"" being it's simpler. Chemical circuit can evolve from any biological junk, circuits, wires, and transistors requiring repeated perfection, not so much...

    • @geryz7549
      @geryz7549 11 หลายเดือนก่อน

      @@vibaj16 What you're thinking of is called "molecular computing", it's quite interesting, I'd recommend looking it up

    • @adryncharn1910
      @adryncharn1910 11 หลายเดือนก่อน +8

      @@vibaj16 Our brains worked with what they had. They aren't perfect, and there probably are better ways to do things than what they are doing. This supercomputer is for experimentation. If/once we find out how to make these computers run ANNs, we will start shrinking them a lot more, like how we found out how to make computers with circuits and have been shrinking them ever since.

  • @y00t00b3r
    @y00t00b3r 11 หลายเดือนก่อน +1

    5:05 ACSSES GRANDET

  • @MrAstrojensen
    @MrAstrojensen 11 หลายเดือนก่อน +5

    Well, I guess it's only a matter of time, before they build Deep Thought, so we can finally learn what life, the universe and everything is all about.

    • @markk3877
      @markk3877 11 หลายเดือนก่อน +1

      Deep Thought was the second name of IBM's chess-playing computer, and I have no doubt the Deepxxx idiom has survived the decades at IBM; their researchers are really cool people.

  • @deltax7159
    @deltax7159 11 หลายเดือนก่อน

    Really enjoy your channel. Very high-quality explanations for very high-quality STEM news.

  • @donwolff6463
    @donwolff6463 11 หลายเดือนก่อน +5

    My family is addicted to Sabine's Science News!!! Please never stop! We rely and depend upon you to help keep us informed about scientific/tech progress. Thank you for all you do!⚘️⚘️⚘️ ❤💖💜 👍😁👍 💚💗💙 ⚘️⚘️⚘️

  • @CYBERLink-ph8vl
    @CYBERLink-ph8vl 11 หลายเดือนก่อน +2

    The computer will not mimic the human brain; it will simulate it. It will be something different from the human brain and consciousness, like how the flight of airplanes and the flight of birds are different things.

  • @RandyMoe
    @RandyMoe 11 หลายเดือนก่อน +10

    Glad I am old

    • @QwertyNPC
      @QwertyNPC 11 หลายเดือนก่อน +1

      And I'm worried I'm not, but glad I don't have children. Such wonderful times...

    • @JanoMladonicky
      @JanoMladonicky 11 หลายเดือนก่อน

      Yes, but we will miss out on having robot girlfriends.

    • @brothermine2292
      @brothermine2292 11 หลายเดือนก่อน

      What could possibly go wrong with robot girlfriends?

  • @RyanMTube
    @RyanMTube 11 หลายเดือนก่อน

    Only just come across your channel in the past few weeks. I wish I had seen you before now because you cover such awesome topics! Love the channel!

  • @georgelionon9050
    @georgelionon9050 11 หลายเดือนก่อน +4

    Just imagine a machine as complex as a human brain, but a million times faster. It would have the workload capacity of a small nation for commercial tasks. Super scary; humans are going to be obsolete soon after.

  • @karlgoebeler1500
    @karlgoebeler1500 11 หลายเดือนก่อน

    Loves "Bees" Always buzzing away. Perpetual motion locked into the distribution of energy across Maxwell in a bound state.

  • @rremnar
    @rremnar 11 หลายเดือนก่อน +5

    It doesn't matter how strange or advanced the neuromorphic computer this organization is making is; the question is how they are going to use it, and whom they are going to empower.

    • @CHIEF_420
      @CHIEF_420 11 หลายเดือนก่อน

      🙈⌚️

  • @laustinspeiss
    @laustinspeiss 11 หลายเดือนก่อน

    ABSOLUTELY.
    Thirty years ago, I started on my own ‘AI’ journey, and quickly abandoned it to pursue my own models, which I named SI: Synthetic Intelligence, self-modifying nodes of self-awareness.
    I demonstrated a proof of concept around 1992, and a viable application and data architecture around 2092.
    The earlier tests had one user refusing to continue testing because “there was a ghost in the machine”.
    At the second level of demo, the audience refused to believe it was possible, despite my demonstration on the desk in front of them.
    The secret is held in how the incoming data is parsed and stored, along with a simple recursive data schema that can accommodate anything I could express, running on a desktop.
    Larger models used a peering/broker layer for infinitely complex data sets.
    I stopped developing and offering it when I saw the ONLY interest was greed, and the power of deriving information from any clutter was literally too dangerous in the wrong hands.

  • @platinumforrest3467
    @platinumforrest3467 11 หลายเดือนก่อน +4

    I know it's been around for a while, but I really like the short-format, one-subject articles. Your articles are always very interesting and well presented. Thanks and keep going! Next time give regards to Elon....

  • @tomholroyd7519
    @tomholroyd7519 11 หลายเดือนก่อน

    I applaud the use of 3-LUT and remember to implement the full #RM3 implication #SMCC conjunction is left adjoint to implication

  • @themediawrangler
    @themediawrangler 11 หลายเดือนก่อน +3

    I think of the current generation of AI as being a "Competency Simulation" instead of anything resembling intelligence. You can make some amazingly useful simulators if you give them enough compute power, data and algorithms, but you have to apply actual intelligence to know how far to trust them.
    These neuromorphic machines are different. I think they will take a looong time to develop (thank goodness), but if you want anything like "Artificial Intelligence" in a machine this is a step in the right (scary) direction. The bit that makes this less scary is that I am not sure this kind of solution will scale well, so it will hopefully just end up being a curiosity and not make the human race obsolete.
    What is much scarier is the idea of an Artificial Consumer, just a machine that can generate money (already happening), consume advertisements (trivial), and then spend the money (already happening). If this idea finds a way to scale, then our corporate masters may not care about us much anymore. 🤖➡💵➡🤖➡💵➡🤖➡💵➡🤖➡💵➡

    • @bsadewitz
      @bsadewitz 11 หลายเดือนก่อน

      Well, you know, it's not like it's impossible to keep them in check. It is demonstrably possible.
      In this account you give, is there ever any production? Or is it just advertisements and spending and generating money?

    • @themediawrangler
      @themediawrangler 11 หลายเดือนก่อน +1

      @@bsadewitz Thanks for your comment! They would need to be productive, yes. Humble beginnings already exist. For instance, there are thousands of monetized youtube channels that are entirely AI-generated content with little or no human input. I don't think there is any reason to expect that AI won't start showing up as legit workers on sites like fiver, etc where we will end up doing business with them and not even realizing that they are not people. I haven't researched it deeply, but I don't really see barriers to this as a business model. Of course, there would be real humans who set it in motion and extract cash from it. It is already a bit of a cottage industry, so I believe that it is only logical that it will continue scaling up. Many categories of human jobs (and especially gig-economy opportunities) are low-hanging fruit.

    • @bsadewitz
      @bsadewitz 11 หลายเดือนก่อน

      @@themediawrangler Not only aren't there barriers, but the paradigm the sites themselves present, i.e. prompt/response, is that of generative AI. It stands to reason that the site operators themselves would just submit the jobs to an AI backend.

    • @bsadewitz
      @bsadewitz 11 หลายเดือนก่อน

      @@themediawrangler Ultimately, why would the operator of the frontend even be a different company? Is that where you were going with this?

    • @themediawrangler
      @themediawrangler 11 หลายเดือนก่อน

      @@bsadewitz Sort of. It is really just a statement that maybe we shouldn't be so proud about the relentless rise in human "productivity" statistics that politicians like to crow about. If one person can run a large corporation with nothing but machines for employees then is that really a productive person? Regardless of which, or how many, individual humans may be in control, corporations are driven by fiduciary responsibility to shareholders and will react to emerging markets; that always benefits the most efficient actors. Humans are not terribly efficient when compared with machines. Regular people are already struggling with job loss and other rapid economic changes. Scaling up a machine-centric economy could exacerbate the human issue in unpredictable ways.
      Thanks again for the discussion. It is nice when people respond with curiosity and genuine questions. Unfortunately, I haven't got any peer-reviewed study to cite, so anything else I have to say would probably be in the realm of science fiction.

  • @monnoo8221
    @monnoo8221 11 หลายเดือนก่อน +1

    (1) The brain does not run an algorithm. (2) The main difference between the currently hyped ANNs and the brain is that ANNs are represented as matrix algorithms, hence they run on GPUs. (3) Deep-learning ANNs are not capable of autonomous abstractions and generalizations; they are basically nothing more than a database indexing machine. (4) The role of randomness becomes completely clear when you study Kohonen SOMs and their abstraction, the random graph transformation... yeah, today you get funding for an FPGA computer; quite precisely 20 years ago I did not...

  • @Sanquinity
    @Sanquinity 11 หลายเดือนก่อน +3

    There's another big difference between AI and our brains. A lot of our decisions and thoughts are based on emotions. Emotions at least partially come from chemical reactions. Something an AI based on microchips instead of neurons can't do.

    • @tw8464
      @tw8464 11 หลายเดือนก่อน

      It is doing thinking functions without emotions

    • @jesperjohansson6959
      @jesperjohansson6959 11 หลายเดือนก่อน +1

      Chemicals are used to send signals we experience as emotions because of our physical, biological nature, I guess. I don't see why such signals couldn't be done with bits and bytes instead.

  • @tombrunila2695
    @tombrunila2695 11 หลายเดือนก่อน

    The human brain re-wires itself constantly; it changes when you learn something new, as new contacts form between the brain cells. Here on YT you can find videos by Manfred Spitzer, in both English and German.

  • @CessnaDriver2
    @CessnaDriver2 11 หลายเดือนก่อน +1

    It's going to be ok. In 100 years people will look back at our paranoia.

  • @TimoNoko
    @TimoNoko 11 หลายเดือนก่อน

    I just invented a neuromorphic learning machine. It is a bucket with solder, metal bits, and transistor chips. You shake the bucket, and if it behaves somewhat better, you apply a stronger current with the same pattern. The solder bits melt and new permanent neural connections are created.

  • @madtscientist8853
    @madtscientist8853 11 หลายเดือนก่อน

    The brain runs on pulse networking: one pulse output goes to MANY pulse inputs, and the wave is more continuous. You can send more information quicker through a pulse than you can through direct or alternating current.

  • @janerussell3472
    @janerussell3472 11 หลายเดือนก่อน

    ShallowBrains, i.e. physicists, renormalise everything they can, including divergences, into simple harmonic oscillators: something they can understand. lol.
    But they might be onto something, since Newtonian and Schrödinger dynamics can be formulated in a physically meaningful way within the same Hilbert space framework. The Born rule and the normal probability distribution are related by Fourier transforms, and the Schwinger-Keldysh formalism can produce generating functions for expectation values instead of transition amplitudes. Step-by-step calculations of path integrals can be associated to the harmonic oscillator! The advantage of the Schwinger-Keldysh formalism over, say, the Feynman path integral is to have a real-time generating function for expectation values in arbitrary states.

  • @Psychx_
    @Psychx_ 11 หลายเดือนก่อน

    The main reasons the brain is so efficient are that the communication between neurons isn't binary, and that processing and storing information are so tightly coupled.
    There are so many neurotransmitters, and every one of them can affect the cells in different ways: altering connectivity, increasing or decreasing the chance of an action potential, changing which transmitters are released into the synaptic cleft in response to an incoming signal or its absence, etc.
    A single nerve impulse can easily have 1 of 10 or more different meanings, whereas the computer only knows 2 states (0 and 1). Then there's a bunch of emergent behaviour slapped on top, with the frequency and duration of a signal also encoding information, as do the internal states of the neurons, as well as their connectivity patterns.
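
One of the points above, that frequency carries information, can be illustrated with a toy leaky integrate-and-fire neuron (my sketch, not anything from the video; all parameter values are made up):

```python
# A leaky integrate-and-fire neuron: the *rate* of output spikes,
# not a single 0/1 value, encodes the strength of the input.

def spike_count(input_current, steps=1000, leak=0.95, threshold=1.0):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + input_current   # integrate the input, with leak
        if v >= threshold:             # fire an action potential
            spikes += 1
            v = 0.0                    # reset the membrane after firing
    return spikes

weak, strong = spike_count(0.06), spike_count(0.2)
print(weak, strong)  # the stronger input produces a higher firing rate
```

Even this crude model shows an analog quantity (input strength) surviving transmission through all-or-nothing pulses, which a plain binary wire cannot do in a single sample.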

  • @trevorgwelch7412
    @trevorgwelch7412 11 หลายเดือนก่อน +1

    " One can search the brain with the world's most powerful microscope and never discover the mind . One can search the skies with the world's most powerful telescope and never discover heaven . " Author Unknown

  • @kennethferland5579
    @kennethferland5579 11 หลายเดือนก่อน

    Previous research with FPGAs has found that they end up incredibly sensitive to the environmental conditions under which they train. Minute thermal expansion, manufacturing differences below the level of defects, etc. all end up producing noise which the learned network conforms to, and which then NEEDS to be present for the learned behavior to be maintained. The problem is that real neurons are probably doing exactly the same thing, and are thus full of internal states which are necessary for them to function and can't be ignored. That's how the seemingly low computation numbers of the brain do so much: we're vastly underestimating the computations by calling 1 neuron 1 computation, when it's likely to be thousands. Then add in the 90% of non-neuron cells in the brain, which likely also hold information.

  • @BonesMcoy
    @BonesMcoy 11 หลายเดือนก่อน +1

    Good video, Thank you Sabine!

  • @hanslepoeter5167
    @hanslepoeter5167 11 หลายเดือนก่อน

    A few things about this: random behaviour is usually part of the exploring functionality of AI, and a parameter you can fiddle with. After all, when it has learned nothing yet, random is all an AI has. Once it has learned something, it can use what it learned or use randomness: experience vs. exploration, which is where this parameter comes in. Using FPGAs is probably not a new thing to the field. Chess computers have used FPGAs in the past, and maybe still today, yet it has proven not easy to beat programs based on conventional computers. Although chess programs tend to rely on brute-force computing, which is something FPGAs can do extremely well (they're made for it), some flexibility is much harder to program in an FPGA. I remember a few projects that more or less failed, but I'm not up to date on that.

  • @robertlivingston360
    @robertlivingston360 11 หลายเดือนก่อน

    If randomness is required for functionality, then all devices produced will be different. Then the learning processes will be different for each device as well.

  • @RandomUser25122
    @RandomUser25122 11 หลายเดือนก่อน

    “It will be remotely accessible”
    “Skynet has escaped”

  • @jorgeds1
    @jorgeds1 11 หลายเดือนก่อน

    I believe that a more accurate way of modeling in silicon how the brain works is by using asynchronous circuits. I'm not an expert, but my guess is that the brain does not work using a clock signal, which is how most computers work. Asynchronous circuits also consume less power, since they only compute when there is an external stimulus, unlike conventional computers which, generally speaking, consume power all the time due to clock activity.
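
The clocked-vs-event-driven distinction can be sketched in software (my illustration, not a real asynchronous circuit): instead of evaluating every element on every tick, work is scheduled only when a stimulus arrives, using an event queue ordered by time.

```python
import heapq

def run(events):
    """events: list of (time, node) stimuli; returns firings in time order."""
    queue = list(events)
    heapq.heapify(queue)                     # priority queue keyed by time
    fired = []
    while queue:
        t, node = heapq.heappop(queue)       # wake only the stimulated node
        fired.append((t, node))
        if node == "A":                      # A's output stimulates B later
            heapq.heappush(queue, (t + 2, "B"))
    return fired

result = run([(0, "A"), (1, "C")])
print(result)  # → [(0, 'A'), (1, 'C'), (2, 'B')]
```

Nothing runs between events, which is the intuition behind the power savings: idle nodes cost nothing, whereas a clocked design pays for every tick everywhere.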

  • @aperinich
    @aperinich 11 หลายเดือนก่อน

    Sabine I genuinely love your approach and humour.

    • @aperinich
      @aperinich 11 หลายเดือนก่อน

      Have you left Facebook? I can no longer find your profile there. I really wanted to dialogue some topics with you..
      Best regards in any case.

  • @tonyhutz
    @tonyhutz 11 หลายเดือนก่อน

    I think one reason the human brain uses such low wattage is that chemicals in the brain produce the electrical energy to perform at min/max-like operation. I would assume that if computers could have some sort of chemicals generate power (just as in the brain), power consumption could be reduced substantially.

  • @karlgoebeler1500
    @karlgoebeler1500 11 หลายเดือนก่อน

    Always "Seen" on the surface of the "Pool". Can "manipulate" whatever it sees. Via the coupling described by Wolfgang Pauli. Items are seen as a gravitational informetric pattern. Individual items can be separated by a subtractive process.

  • @_zoinks2554
    @_zoinks2554 11 หลายเดือนก่อน

    I can't wait to have a German brain car telling me I must be quiet, work harder and follow the rules.

  • @jameshaley2156
    @jameshaley2156 11 หลายเดือนก่อน

    Well done video. Very informative and the humor was fantastic. Thank you .

  • @tinyear926
    @tinyear926 11 หลายเดือนก่อน

    Ask this mega brain this, "if it takes 2hrs to dry one towel on a clothes line how many hrs does it take to dry 5 towels on a clothes line"

  • @tjf7101
    @tjf7101 11 หลายเดือนก่อน

    You had me at, “run for president “😂

  • @MikeU128
    @MikeU128 11 หลายเดือนก่อน

    I'm not sure how implementing a large neural network in FPGAs will be significantly different from past approaches. FPGAs are faster for some tasks (which can be directly reflected in the configuration of the FPGA), but (as noted in the video) they are clocked much slower than modern CPUs. I suspect this will end up being a niche system, well-suited to a narrow class of problems while losing out more broadly to large CPU/GPU based implementations.

  • @robertjohnsontaylor3187
    @robertjohnsontaylor3187 6 หลายเดือนก่อน

    I’m beginning to think it’s going to be like Kryten [a robot] in the TV series “Red Dwarf”, or the paranoid android in “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, who keeps using the phrase “brain the size of a planet and they keep asking me to make the tea”.

  • @benvonhunerbein1865
    @benvonhunerbein1865 11 หลายเดือนก่อน

    Wonderful video! I work with people developing algorithms for neuromorphics, and it's a fascinating field. One comment on the video: I think the stock footage of some computer magically "accessing" a brain, or a dude holding out a hologram, takes away from the science. It presents these developments as a form of magic, which I think is the opposite of what you'd like to do.

  • @ramiusstorm5664
    @ramiusstorm5664 11 หลายเดือนก่อน

    Joke's on them: brains don't think. You can't hold a thought in your head any more than you can grasp one in your hand.