The AI Hardware Problem

  • Published on Dec 25, 2024

Comments • 1.1K

  • @NewMind
    @NewMind  3 years ago +62

    ▶ Check out Brilliant with this link to receive a 20% discount! brilliant.org/NewMind/

    • @calholli
      @calholli 3 years ago +2

      Your videos are like an endless competitive battle between my comprehension and your articulation; may this war never find peace.

    • @davidhollenshead4892
      @davidhollenshead4892 3 years ago +2

      @No Name Same here, as I already had to use captions while watching movies, and now long covid is making my hearing even worse...

    • @davidhollenshead4892
      @davidhollenshead4892 3 years ago

      The connectionist solution will work: many small CPUs with limited memory, one for each node, each connected only to neighboring CPUs...

    • @Evelyn_theghost
      @Evelyn_theghost 3 years ago

      No

  • @mickelodiansurname9578
    @mickelodiansurname9578 3 years ago +1361

    The brain.... billions of calculations per second... powered on a subway sandwich!

    • @krasimirgedzhov8942
      @krasimirgedzhov8942 3 years ago +51

      I don't think the exact process of the brain is comparable to computing. I imagine it's something more complex.

    • @mihailmilev9909
      @mihailmilev9909 3 years ago +63

      @@krasimirgedzhov8942 nah I think it's just a big neural network

    • @mihailmilev9909
      @mihailmilev9909 3 years ago +25

      @@krasimirgedzhov8942 that's kinda where the name comes from isn't it lol

    • @krasimirgedzhov8942
      @krasimirgedzhov8942 3 years ago +76

      @@mihailmilev9909 it's a name given to one of the most complex pieces of software we have. It's only inspired by the structure of neurons; it doesn't follow the exact same process as far as we know.

    • @serdarcam99
      @serdarcam99 3 years ago +11

      If it's powered by a Subway sandwich it's not going to calculate billions of things per second, it's going to be much less

  • @deltalight584
    @deltalight584 3 years ago +39

    12:21 That comparison was brilliant.
    It ties computing & neurology together.
    Low speed, high precision needed => Digital ("Slow system of thought")
    High speed, low precision needed => Analog ("Fast system of thought")

    • @Hollowed2wiz
      @Hollowed2wiz 2 years ago +1

      And what would quantum computation give?

    • @primenumberbuster404
      @primenumberbuster404 10 months ago

      ​@@Hollowed2wiz a nightmare

  • @raphaelcardoso7927
    @raphaelcardoso7927 3 years ago +790

    I'm applying to do a PhD in exactly this field. Amazing video!
    Update: I was accepted!

    • @santoshmutum3263
      @santoshmutum3263 3 years ago +11

      I am also writing my research proposal on this topic for a PhD... Not accepted yet

    • @Rahul016-d6k
      @Rahul016-d6k 3 years ago +3

      @@santoshmutum3263 Where did you apply? I'm Manipuri anyway.

    • @santoshmutum3263
      @santoshmutum3263 3 years ago +2

      @@Rahul016-d6k Japan

    • @Rahul016-d6k
      @Rahul016-d6k 3 years ago +3

      @@santoshmutum3263 Good Luck Brother👍👍

    • @santoshmutum3263
      @santoshmutum3263 3 years ago +2

      @@Rahul016-d6k thanks

  • @Jojobreack324
    @Jojobreack324 3 years ago +30

    I developed an ASIC for AI acceleration as part of my bachelor's thesis, and I must say this video is of very high quality. Going back to analog techniques is definitely an interesting approach.

  • @gordonlawrence1448
    @gordonlawrence1448 3 years ago +19

    Actually, radar computers in the 1950s were analogue. I was one of the last people at my college to be taught both analogue and digital computing. Add, subtract, multiply, divide, integrate and differentiate can all be done with a single op-amp. The problem is Nyquist noise, plus issues with capacitor dielectrics such as dielectric absorption and leakage. With a digital system you can just vastly over-sample, add the samples up, then divide by the number of samples to reduce the effective noise. You don't get that choice with analogue.
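
    A minimal sketch (Python) of the over-sampling trick described above; noisy_read and the noise figures are invented stand-ins for a real analog input:

        import random

        def noisy_read(true_value, noise_sd=0.1):
            # Hypothetical noisy analog input: the true value plus Gaussian noise.
            return true_value + random.gauss(0.0, noise_sd)

        def oversampled_read(true_value, n=4096):
            # Averaging n samples divides the noise power by n,
            # so the effective RMS noise falls by sqrt(n).
            return sum(noisy_read(true_value) for _ in range(n)) / n

        print(noisy_read(1.0))        # one sample: roughly 0.1 RMS error
        print(oversampled_read(1.0))  # averaged: roughly 0.1/sqrt(4096) = 0.0016 RMS error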

  • @BlackholeYT11
    @BlackholeYT11 3 years ago +49

    Ooh, good to see this put into words and in a concise manner

    • @calholli
      @calholli 3 years ago

      Your videos are like an endless competitive battle between my comprehension and your articulation; may this war never find peace.

  • @JohnDoe-zs6gj
    @JohnDoe-zs6gj 3 years ago +420

    That energy comparison between our brain and our best processors is incredible. It's amazing the efficiency evolution can develop given enough time.

    • @garrysekelli6776
      @garrysekelli6776 3 years ago +35

      Computers are weak. They will never even beat a human at Chess.

    • @hedgehog3180
      @hedgehog3180 3 years ago +73

      Evolution is the most aggressive optimization function in the known universe and has been running for over 4 billion years. Every single animal alive today is optimized down to its smallest cells. It's really no wonder that the human brain is both the most powerful computer we know of and has efficiencies that make everything else look like a joke.

    • @Eloign
      @Eloign 3 years ago +36

      Computers don't happen by random processes. Neither did humans. Computers were created as were humans.

    • @ickebins6948
      @ickebins6948 3 years ago +52

      @@Eloign Sure, provide some proof for that.
      Will you?

    • @WERT2025
      @WERT2025 3 years ago +40

      @@hedgehog3180 Yeah I feel like every cell of my armpit hair is 100% optimized

  • @amrohendawi6007
    @amrohendawi6007 3 years ago +65

    It amazes me how many different state-of-the-art topics you cover, perfectly and briefly, in 10 minutes

  • @tannerbuschman1
    @tannerbuschman1 3 years ago +482

    The idea of an AI being inherently impossible to debug or decipher is really cool and scary; science fiction was not far off on that one.

    • @aidanquinn1549
      @aidanquinn1549 3 years ago +95

      We take for granted the amount of info we know about each other (human AI). I can guarantee you that the last time you had a conversation about a specific feeling (whether it's love with your spouse, hate towards something, or how horrific a scary movie was...) you did not settle on the same exact emotion. A.K.A: you don't even know how to debug or understand what is going on behind any human's eyes right now!

    • @jss7668
      @jss7668 3 years ago +7

      But humans are!

    • @aidanquinn1549
      @aidanquinn1549 3 years ago +5

      @@jss7668 scary, yeah, but the most beautiful and fascinating things I've ever seen

    • @En_theo
      @En_theo 3 years ago +14

      And this is how an AI becomes self-aware, hides its true intentions from you and goes all Skynet when you least expect it. Sci-fi was not far off on that one either.

    • @UNSCPILOT
      @UNSCPILOT 3 years ago +5

      There are projects trying to find ways to break down and comprehend how learning algorithms work; one actually uses the same platform that SETI@home did, letting people donate processing power to help the project test and break down how the algorithms work

  • @naimas8120
    @naimas8120 3 years ago +379

    Another masterpiece from New Mind! Never fails to entertain while teaching.

    • @XBONESXx
      @XBONESXx 3 years ago +3

      Your avatar is a masterpiece

  • @lidarman2
    @lidarman2 3 years ago +145

    Well done. I played around with small neural nets using op-amps in the 90s and although I saw that it was kinda the way to go, I had trouble with drift due to integration bias and all sorts of noise--and of course training was super tedious. But I always thought that neural nets really need to stay in the analog world. Modern non-volatile memory seems to be a solution for training weights since you can put variable amounts of charge in a cell, very densely.

    • @forwardplans8168
      @forwardplans8168 3 years ago +7

      Did you ever look at using fuzzy-set theory to improve decision accuracy? I used a then-new program called CLIPS around that time. It's time to review it again.

    • @lidarman2
      @lidarman2 3 years ago +9

      @@forwardplans8168 I was doing a lot of fuzzy logic at that time too. Interesting times.

    • @ArneChristianRosenfeldt
      @ArneChristianRosenfeldt 3 years ago +8

      So the brain uses discrete values in the nerves. It may use analog within the cell. Op-amps use analog values only. I tried very hard to understand analog multiplication for radio modulation and it is... complicated. Satellite radio is energy efficient because it uses only one bit. AM radio uses a lot more energy (for 8-bit signal/noise).
      Op-amps are large compared to digital circuits.
      Flash memory is basically analog memory; we just use DACs and ADCs to only store discrete values in it. This is similar to the analog amplitude in DSL. ADCs and DACs, especially at only 8 bits, are fast; they are used in video equipment. So I don't know what the video wants to claim there.
      I have read that the LSB could run on a lower supply voltage because errors are not so bad there. There are always errors, and it is mostly the supply voltage that decides how often we accept an error.
      Also, those "half open" valves scare me. The nice thing about CMOS is that no change in state => no current drawn.
      I just chatted about the general problem of matching resources to tasks. That is an NP-complete problem. So with different kinds of transistors for tasks of different importance, one opens a very big can of worms...

    • @adamrak7560
      @adamrak7560 3 years ago +4

      Digital beats out analog until you scale to the extremes where we are right now.
      So until now it did not make much sense to use analog NN chips.
      I studied one such analog chip and was sad to see that modern digital deep-submicron could beat it in every way possible. But that was 10 years ago; right now digital CMOS hardware is nearing its limits, so we may need a paradigm change.

    • @seraphina985
      @seraphina985 3 years ago +4

      @@ArneChristianRosenfeldt Arguably flash memory cells, while not binary in nature, are still more digital than analog, since the defining characteristic of digital systems is the quantization of the signal. That is to say, digital signals are interpreted by quantizing them into one of a finite array of buckets, each corresponding to some arbitrary range of the physical input value (see the sketch after this comment). The consequence is that digital signals are highly accurate but their precision is finite, since every input signal is effectively rounded to fit into one of those buckets. In contrast, the precision of analog signals is as close to infinite as you can get within our universe, though in practice accuracy is the limiting factor, set by how well you can insulate the system from noise and how accurately you can measure the input signal.
      Granted, even with an ideal isolated system and an ideal measuring device, any analog system in our universe is likely to have its precision limited by the fact that fundamental particles have defined properties. But this is arguably academic: there is little use for a system that can process values more precisely than could ever be physically generated or represented in our universe. Short of discovering some way to change the laws of physics in a region of space, or travelling to universes where the laws are different, a system that precise, if sufficiently accurate, could solve any problem that could exist in our reality, limited only by our ability to understand and specify the problem, and, given sufficient processing time, it could process any and all states that could exist within our universe.
      Sure, there would still be a fundamental limit on the maximum precision, since we can imagine arbitrary problems that couldn't be represented with enough precision without some clever workarounds. Beyond that, there is likely a finite number of particles a civilization could ever collect in a universe with a finite speed of light, and there will always be some larger number that can be imagined. Still, the practical applications of dealing with values that cannot be replicated within the physical limitations of the observable universe are rather limited; there is probably a limit to what insight can be gleaned from simulating things that are physically impossible to ever encounter or bring into being.
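
      A minimal sketch (Python) of the quantization idea in the first paragraph; the voltage range and bit width are made-up illustration values:

          def quantize(voltage, v_min=0.0, v_max=1.0, bits=8):
              # Map a continuous voltage into one of 2**bits discrete buckets.
              levels = 2 ** bits
              bucket = int((voltage - v_min) / (v_max - v_min) * levels)
              return min(max(bucket, 0), levels - 1)  # clamp to the valid range

          # Two nearby analog values land in the same bucket: the difference is lost.
          print(quantize(0.50000), quantize(0.50001))  # -> 128 128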

  • @Zpajro
    @Zpajro 3 years ago +278

    As a student in computer science, this is really interesting

    • @naimas8120
      @naimas8120 3 years ago +4

      I'm a student of Information and Communications Technology. What do you think about the future of our field? Do you think it's really AI?

    • @olfmombach260
      @olfmombach260 3 years ago +26

      @@naimas8120 As a student of Computer Science I can definitely say that I have absolutely no idea because I'm dumb

    • @samik83
      @samik83 3 years ago +5

      @@naimas8120 As a layman I'd say definitely yes. Just in the last couple of years AI has made some big strides. When we get quantum computing up and running and pair it with AI, the possibilities are endless...and scary

    • @Zpajro
      @Zpajro 3 years ago +5

      @@naimas8120 The problem is that the AI hype has come 3 times now, so whether this is the time it will really break through is quite hard to tell. Personally, I'm eagerly waiting for our machine overlords (as long as there is no human controlling the AI). And if we get a true general intelligence going, it would be interesting to see how a different, alien intelligence solves problems.

    • @ovoj
      @ovoj 3 years ago

      @@Zpajro imagine the conversations with something that isn't human. Hopefully we reach that point in my lifetime

  • @fredoo6627
    @fredoo6627 3 years ago +231

    It's so annoying to discover channels like this and see they don't get the views they deserve.

    • @mitchellsteindler
      @mitchellsteindler 3 years ago +2

      It's a fairly new channel

    • @georgf9279
      @georgf9279 3 years ago +6

      @@mitchellsteindler Let's boost it with some engagement (comments) then.

    • @hedgehog3180
      @hedgehog3180 3 years ago +6

      Definitely one of the best engineering channels on YouTube.

    • @keashavnair3607
      @keashavnair3607 3 years ago

      Well the problem is, there are 16,852 views, yet only 1.6K likes, 31 dislikes and 149 comments. This world is full of consumer-minded, half-curious morons. That's why.

    • @mitchellsteindler
      @mitchellsteindler 3 years ago +4

      @@keashavnair3607 dude. Just stop and get off your high horse. People like what they like.

  • @lidarman2
    @lidarman2 3 years ago +54

    You made a somewhat profound comment at 12:13 about the essence of intuition versus analysis. From our vast experience we develop intuitions that give us that "gut feeling", but when it matters, we do rigorous analysis to confirm. RE: "Blink" by Malcolm Gladwell.

    • @calholli
      @calholli 3 years ago

      It could also be the difference in function between our creative right brain and analytical left brain.

    • @Nnm26
      @Nnm26 3 years ago +1

      @@calholli that is bs btw

    • @calholli
      @calholli 3 years ago +1

      @@Nnm26 Well, even if only metaphorical, it still has value as a concept.

  • @Bhatakti_Hawas
    @Bhatakti_Hawas 3 years ago +502

    I promise I understood everything he said

    • @kevinperry8837
      @kevinperry8837 3 years ago +13

      Yes me too comrades

    • @xlnc1980
      @xlnc1980 3 years ago +8

      We all did!

    • @tymek200101
      @tymek200101 3 years ago +17

      It is enough to be a 1st-year Computer Science student to understand all of the words and concepts

    • @Bhatakti_Hawas
      @Bhatakti_Hawas 3 years ago +3

      @@xlnc1980 Hey fellow DT fan 👋🏽👋🏽

    • @xlnc1980
      @xlnc1980 3 years ago +1

      @@Bhatakti_Hawas Hi there, fellow! LTE3 coming out next month. Been waiting for that one for only 22 years now. :)

  • @CuthbertNibbles
    @CuthbertNibbles 3 years ago +64

    11:57 "They form a sort of black box with no means to verify the integrity of a result. This creates the dilemma of potentially unexplainable AI systems, creating issues of trust..."
    This is how the AI apocalypse begins. "Why'd that car run over that advocate?" "No idea."

    • @nipunasudha
      @nipunasudha 3 years ago

      Exact same thing I thought.

    • @davidhollenshead4892
      @davidhollenshead4892 3 years ago

      Using an AI to control a car is a waste of an AI...
      Besides, while autonomous aircraft and spacecraft are feasible technology, an autonomous car will never be "safe" due to pedestrians, cyclists, animals, etc. sharing the roads. This should be obvious from the weird accidents caused by cars like the Tesla that decapitated the idiot occupant by driving under a truck and continuing on until it crashed into a house...

    • @nipunasudha
      @nipunasudha 3 years ago +2

      @@davidhollenshead4892 lol they only need to be more accurate than a human driver. Doesn't need to be perfect. And the car structure and safety features are getting advanced by the day too. The sweet spot is closer than you think! 😁❤️

    • @starskiiguy1489
      @starskiiguy1489 3 years ago +4

      @@davidhollenshead4892 I wouldn't be so sure. What you say may be true for modern infrastructure, but autonomous vehicles, if they catch on, may change the way we view transportation infrastructure overall.
      I could personally see a future where few people own cars: we rideshare if we need to travel a long distance by car, but other than that we create more walkable cities with more public transit. In such a future, comparing autonomous vehicles on modern infrastructure with autonomous vehicles on future infrastructure may be comparing apples to oranges.

    • @HelloKittyFanMan.
      @HelloKittyFanMan. 3 years ago

      @@nipunasudha: *As accurate as...

  • @joel230182
    @joel230182 3 years ago +86

    "...analog circuitry" , that caught me off guard

    • @davidhollenshead4892
      @davidhollenshead4892 3 years ago +18

      That is one solution; the other is the connectionist solution: many small CPUs with limited memory, one for each node, each connected only to neighboring CPUs...

    • @mihailmilev9909
      @mihailmilev9909 3 years ago +1

      @@davidhollenshead4892 interesting... what is it called?

    • @this_is_japes7409
      @this_is_japes7409 3 years ago +2

      @@mihailmilev9909 mesh computing, I think, or at least it's based on a mesh topology.

    • @TauCu
      @TauCu 3 years ago +1

      Or just building a type of FPGA.
      I think in the future, however, that FPGA will be a combination of electronics and photonics.
      For NNs I don't see how photonics could be beaten for general-purpose networks.

    • @am-i-ai
      @am-i-ai 3 years ago +1

      It actually makes a lot of sense. There has been a recent resurgence of interest in analog systems. I, for one, feel like we ditched that particular technology a little prematurely. I'd be willing to bet that we see some rather spectacular new analog-based technologies in the near future.

  • @ryansupak3639
    @ryansupak3639 3 years ago +10

    Nice...so it seems like digital-style processors still do all the “housekeeping” tasks of the computer, but then there are these “analog resistor networks” that do specialized tasks like the implementation of convolutional neural networks.
    Makes me smile when “everything old is new again”.
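
    A toy model (Python) of the multiply-accumulate such a resistor network performs; the voltages and conductances are made-up values. Each input voltage times a conductance (1/R) contributes a current, and by Kirchhoff's current law the currents simply add on a shared wire:

        # Inputs encoded as voltages (V), weights as conductances (siemens, 1/R).
        voltages     = [0.3, 0.7, 0.1]   # hypothetical input activations
        conductances = [2.0, 0.5, 1.5]   # hypothetical programmed weights

        # The summed current is a dot product, computed in a single analog step.
        current = sum(v * g for v, g in zip(voltages, conductances))
        print(current)  # 0.3*2.0 + 0.7*0.5 + 0.1*1.5 = approx. 1.1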

  • @kumarsuraj9450
    @kumarsuraj9450 3 years ago +66

    My professor once said in class that the future is analog. We were left wondering what he actually meant. Now I see what he meant

    • @Alimhabidi
      @Alimhabidi 3 years ago +4

      Future is quantum

    • @CrashTheRed
      @CrashTheRed 3 years ago +2

      @@Alimhabidi It's been mentioned that quantum computers are special-purpose machines that won't improve on everything a conventional machine does. They also require a conventional machine to process a lot of the data. Sabine Hossenfelder has made a number of videos on quantum computers and the direction they're heading. Maybe that might interest you, especially since she's a theoretical physicist.

    • @davidthacher1397
      @davidthacher1397 3 years ago

      The future is not fully analog or quantum. Analog works off a set of properties which manipulate energy in waves, aka signals. Digital is a very simple signal; it is currently very stable and cheap. We can do a lot with this simple signal, and if we do not master digital, how are we to understand analog? There are signal architectures which are analogous on digital. Most who study CS or ECE never learn this. Most if not all of CS's theories are wrong! You might as well literally study psychology, if you want to be that wrong.

    • @CrashTheRed
      @CrashTheRed 3 years ago

      @@davidthacher1397 Ofc the future will be a combination of all of the above. But how are the CS theories wrong? This is a first for me, and I'd like to hear you explain it a bit

    • @this_is_japes7409
      @this_is_japes7409 3 years ago +2

      everything is analog if you dig deep enough.

  • @alexkuhn5078
    @alexkuhn5078 3 years ago +10

    4:30 I was kinda zoning out and I heard that as "50 to 100 pikachus"

  • @sknt
    @sknt 3 years ago +25

    Great video; it pretty much sums up the current state of AI. There's still a long way to go before we can even compare AI to a "real" brain. The brain is an insanely complex electrochemical machine that evolved over millions of years.

  • @SpiritmanProductions
    @SpiritmanProductions 2 years ago +2

    So, hybrid processors are the future, then, perhaps.

  • @naota3k
    @naota3k 3 years ago +5

    What is the machine doing around 0:35? Is it extruding solder to bond the pads? This seems ridiculously precise and I've never seen it before; now I'm curious what this process is.

    • @justinmallaiz4549
      @justinmallaiz4549 3 years ago

      good eye, never seen that myself

    • @lemlihoussama2905
      @lemlihoussama2905 3 years ago +9

      It is a machine that uses gold wires to link the integrated circuit inside the chip to the chip's outside pins.
      This process is called "wire bonding"; you can search for it on YouTube for more videos!

    • @naota3k
      @naota3k 3 years ago

      @@lemlihoussama2905 Fantastic, thank you!

  • @user-cx2bk6pm2f
    @user-cx2bk6pm2f 3 years ago +1

    As soon as he mentioned "analog" I was waiting for the requisite mention of noise being the limiting factor.
    I'm impressed that he did indeed talk about that... but disappointed that it precluded the epic rant I was about to unleash 🤣

  • @joey199412
    @joey199412 3 years ago +3

    Great video, especially the assembly multiply instruction and outlining it with an analogue computing method. This makes sense, since analogue results are instantaneous and don't require a clock pulse, and thus no memory storage between calculations, as you can add and subtract analogue signals instantaneously.

  • @onehouraday
    @onehouraday 3 years ago +2

    Quite interesting! Neurons in the brain actually act as a mixed analogue/digital system: the input is digital (action potentials), they do analogue processing, and the output is digital again (an action potential is either 0 or 1).

  • @stage666
    @stage666 3 years ago +8

    I feel good about myself that I know just enough about neural networks and computer engineering to somewhat understand what this video is talking about.

  • @entropysalamander
    @entropysalamander 3 years ago +2

    I love the visuals used in your videos, they're always unobtrusive but fascinating.

  • @digicinematic
    @digicinematic 3 years ago +12

    Yes, I have vague memories of the memristor being touted as the missing passive component, or some such thing.

  • @majorfallacy5926
    @majorfallacy5926 3 years ago

    Analog computing is technically used commercially in measurement and control systems. Those applications don't exactly push the technology's limits, but they are still important

  • @am-i-ai
    @am-i-ai 3 years ago +5

    We definitely made a rash collective decision when we decided that digital was to *replace* analog. I would not be surprised at all to see a resurgence of analog systems... we barely even explored that technical space. There are surely future analog developments that will rock the foundations of technological advancement. Very well done :)

  • @grandreddithotel8059
    @grandreddithotel8059 3 years ago

    I know of a cool thought experiment regarding not artificial intelligence but artificial consciousness on digital architecture (see the sketch after this comment). It goes like:
    - all digital computation can be modeled by finite state machines
    - finite state machines can be expressed on pen and paper
    - therefore all digital computation can be expressed on pen and paper: pause during a CPU cycle and you can write the contents of your memory and registers on paper. This would take a lot of paper for one CPU cycle, let alone the trillions of CPU cycles a computer executes over the span of a day. However, it is theoretically possible.
    So, if consciousness could be achieved on a digital computer, then it could also be achieved by a very long book. I don't think consciousness can be achieved by any digital system, but it's still fun to think about.
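
    A minimal sketch (Python) of the finite-state-machine step in that argument; the states and inputs are made up for illustration. The whole "computer" is a transition table, and every step is a lookup you could do by hand:

        # A 2-state machine that tracks the parity of 1-bits in a string.
        transitions = {
            ("even", "0"): "even", ("even", "1"): "odd",
            ("odd", "0"): "odd",   ("odd", "1"): "even",
        }

        state = "even"
        for bit in "1101":
            state = transitions[(state, bit)]  # one pen-and-paper lookup per step
        print(state)  # "odd" -> the string 1101 contains an odd number of 1s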

  • @CreeperSlenderman
    @CreeperSlenderman 3 years ago +4

    I have an idea for AI emotions.
    We humans used to live in jungles and forests, biomes
    in which we tried to survive and reproduce. For surviving, our emotions are
    fear, trust, confidence and loneliness.
    For reproducing it is
    love, attraction, and idk.
    So we would need to make an AI with "DNA",
    or at least try to give it those feelings, but there would have to be 2 AIs
    or else it won't be able to interact

  • @johnzinhoinhoinho
    @johnzinhoinhoinho 3 years ago +1

    What really impresses me is the huge amount of knowledge in 13 minutes of video. Congrats on the content

  • @alengm
    @alengm 3 years ago +17

    7:00 triggers Google Assistant :D

    • @calholli
      @calholli 3 years ago +2

      That's by design.

  • @wood6454
    @wood6454 3 years ago +1

    86 billion processing units in my brain and I still can't add two-digit numbers

  • @nikolausluhrs
    @nikolausluhrs 3 years ago +14

    Just gonna say, we can't really explain how digital neural networks make their decisions that well either

    • @thewhitefalcon8539
      @thewhitefalcon8539 3 years ago +2

      yeah we just have an algorithm that randomly adjusts them until they give the answers we want

    • @Taladar2003
      @Taladar2003 3 years ago +1

      Which means we have no way to efficiently improve their performance. Doing almost what we want is no closer to doing exactly what we want than doing something completely different if we have no way to deliberately improve them in some iterative way.

    • @thewhitefalcon8539
      @thewhitefalcon8539 3 years ago

      @PolySaken We can understand why, in general, a neural network might be capable of detecting triangles. We can't understand why *that particular* neural network *is* capable of detecting triangles.

    • @thewhitefalcon8539
      @thewhitefalcon8539 3 years ago

      @PolySaken and what is that data? We don't know. We just know when you put a triangle in it says yes, and when you put in a square it says no. Also there's a megabyte of random-looking numbers involved.

    • @thewhitefalcon8539
      @thewhitefalcon8539 3 years ago

      @PolySaken We can see what each square millimeter contributes to the painting by looking at it. We can understand why this square millimeter is this colour. Not so with AI models!

  • @soumilbanik1128
    @soumilbanik1128 3 years ago +1

    I know very little about machine learning and artificial intelligence, but I often used to think that AI and ML should be processed the way our human brains work.
    I saw this video and realised that my thought has potential. Thanks for such an informative video.

  • @fieryferret
    @fieryferret 2 years ago +3

    The inherent precision floor of possible analog-driven neural net AI is now my headcanon for every single science fiction book/movie where a robot gains consciousness and starts acting unpredictably.

    • @MrFram
      @MrFram 1 year ago

      Analog is not required for this, existing AI are already unpredictable due to the black-box nature of machine learning

  • @pranjalmittal
    @pranjalmittal 1 year ago

    4:35 It was mentioned that memory transfer accounts for the vast majority of time and power consumed, and later on it was discussed that analog systems may be a solution for optimizing the computation (accumulative matrix multiplication). But shouldn't the transfer speed be the focus of the optimization, given that it's the main time and energy consumption bottleneck? Something like using photonics for the storage/transfer of data to make it faster with a low energy footprint, like what Lightmatter (the company) is doing.

  • @pacifico4999
    @pacifico4999 3 years ago +5

    Going back to the basics so we can move forward. This is a fascinating topic!

    • @questioneverything4633
      @questioneverything4633 3 years ago

      We will never figure out computers until we properly master the harnessing of energy, especially electricity. We don't understand the fundamentals of things like this.

  • @marticus42
    @marticus42 3 years ago +1

    7:22
    Never thought I would understand a statement like that. Good teaching

  • @jimmarburger611
    @jimmarburger611 3 years ago +3

    Wow, amazing video. It's unbelievable the progress we've made. Just in my lifetime I've seen a blistering pace of achievement. The first computer I played with didn't even have a video interface, lol. I've loaded data and run programs from punch cards. Now I play video games on a machine that I built that probably rivals all the computing power available to NASA during Apollo. It's somewhat ironic that machine learning may lead to the rebirth of analog computing. Except for specific applications, analog has been relegated to unwanted stepchild status. Just saying, there's nothing wrong with analog, this video shows how it can be more efficient for machine learning.

  • @bits_of_michel
    @bits_of_michel 3 years ago

    This is one of the best YouTube videos I've seen in my life. Incredible visuals and explanation. Thank you.

  • @godetaalibaba2522
    @godetaalibaba2522 3 years ago +8

    This was a very interesting topic that I hadn't really heard about before. Thank you for the amount of work this video took to make!

  • @robertmclean6629
    @robertmclean6629 3 years ago

    It takes fewer inputs and less energy to compute in a "wet" environment. Air-gapped transistors are going to become antiquated and relegated to low-cost switching and basic computing.
    It might be easier and more efficient to compute using "wet" chemistry in situ rather than building up voltage/amperage to jump air gaps with noisy forward voltage to fulfill potential.
    Just a thought. Have a great day.

  • @jakub_simik
    @jakub_simik 3 years ago +12

    What's the music at 11:00? It sounds like something from Pink Floyd. Thanks.

    • @rupertgarcia
      @rupertgarcia 3 years ago +3

      I don't need sleep. I need answers!

    • @davidg5898
      @davidg5898 3 years ago +4

      th-cam.com/video/THihnuQJHF4/w-d-xo.html

    • @jakub_simik
      @jakub_simik 3 years ago +3

      @@davidg5898 Thank you so much.

  • @nicocalimero
    @nicocalimero 3 years ago

    I don't know if I understood 5% of the video, but it's still mind-blowing.
    An analog world controlled by digital technology, using analog-digital converters throughout modern society.
    With deep learning for AI it already looks like a black box on the programming side, even for autonomous vehicles, when the AI learns by examples, doesn't it?

  • @somenygaard
    @somenygaard 3 years ago +4

    Ahh, the MobileNet 224, one of my favorite neural network accumulator modules.

  • @Guilherme-social
    @Guilherme-social 3 years ago +1

    The animations in this video are just gorgeous.

  • @ramentabetai1266
    @ramentabetai1266 3 years ago +6

    Neuromorphic CPUs are likely the future for this. These special chips by IBM and Intel are already much more effective at neural-net tasks. IBM's goal is to build a system no larger than a brain that has the same number of connections as the real one.

    • @olfmombach260
      @olfmombach260 3 years ago

      We don't even remotely have the hardware manufacturing capabilities to do that

    • @rupertgarcia
      @rupertgarcia 3 years ago +2

      @@user-ee1hj7rk9l, look up "IBM TrueNorth Chip". They've been working on it for years now.

    • @augustovasconcellos7173
      @augustovasconcellos7173 3 years ago +1

      @@user-ee1hj7rk9l I'd say it's a bit too early to tell, but so far it looks like they won't. Quantum Computers are really only good when their workload consists of doing the same thing over and over again. This is good for breaking encryption, searching through databases, and so on, but not for AI.

  • @zorgonfire
    @zorgonfire 3 years ago

    Very clear explanation of a very very hard domain of CS.

  • @fr3zer677
    @fr3zer677 3 years ago +15

    Another amazing video!
    It's astonishing to me how many different topics are covered on this channel and how in-depth and interesting all of your videos are.

  • @fugslayernominee1397
    @fugslayernominee1397 3 years ago +2

    I had goosebumps just before the ending. Looks like brains truly are the most efficient machines that nature has provided us with.

  • @WilliamDye-willdye
    @WilliamDye-willdye 3 years ago +5

    The music at 7:55 was also used in another good video about ML ( th-cam.com/video/3JQ3hYko51Y/w-d-xo.html ). It's called "Atlantis", but now whenever I hear it I think of artificial neurons.

    • @sonofagunM357
      @sonofagunM357 3 years ago +1

      At first I thought that song was from Alien: Isolation, but no. Both sound pretty close though, wouldn't you say?
      th-cam.com/video/txjs5MpATUg/w-d-xo.html

    • @WilliamDye-willdye
      @WilliamDye-willdye 3 years ago +2

      @@sonofagunM357 Heh. I can definitely hear similarities. Thanks for the link, BTW. I haven't played that game, but now if a Steam sale comes along I might get it just because the soundtrack is promising.

  • @andrewharbit7449
    @andrewharbit7449 2 years ago

    Tuning is the very process we ourselves use to perfect our understanding of the environment around us. As newborns, the environment bombards our sensors with information; this information could be seen as static. As time goes on we adjust and begin tuning our sensors. The static noise that we were born with never goes away; we simply tune into the signals that benefit our existence. Since the human brain is such an effective system and appears to utilize frequency modulation in forming its picture of the environment, it would make sense to take another look at analog computing.

  • @morkovija
    @morkovija 3 years ago +15

    Is this another gem of quality content? That we're getting for free? Oh my

  • @astrumespanol
    @astrumespanol 3 years ago +1

    Great video! I've learned a lot :)

  • @derek8564
    @derek8564 3 years ago +13

    I knew my collection of Vacuum tubes would come in handy one day...

    • @hardrays
      @hardrays 3 years ago +1

      you saved them so you can crank the plate voltage up past 15KV so you can self annihilate with pizazzzzz

    • @HelloKittyFanMan.
      @HelloKittyFanMan. 3 years ago

      When did "vacuum" become a brand, to you?

  • @hoaxuan7074
    @hoaxuan7074 3 years ago

    The fast Hadamard transform basically needs only patterns of add and subtract operations, requiring only a few transistors per operation on a chip.
    Then you can make fast-transform fixed-filter-bank neural nets.
    The 2-point transform of a,b is
    a+b, a-b. It is self-inverse:
    (a+b)+(a-b), (a+b)-(a-b)
    = 2a, 2b
    To get the 4-point transform of a,b,c,d, form two 2-point transforms. Then form the sum and difference of the sum terms (alike terms) (a+b), (c+d), and the sum and difference of the difference terms (a-b), (c-d). Done.
    At each stage you sum and difference alike terms.
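
    A minimal sketch (Python) of the add/subtract butterfly pattern described above, pairing "alike" terms at each stage; the input length must be a power of 2:

        def fht(x):
            # Fast (Walsh-)Hadamard transform: only adds and subtracts.
            x = list(x)
            h = 1
            while h < len(x):
                for i in range(0, len(x), 2 * h):
                    for j in range(i, i + h):
                        # Butterfly: sum and difference of alike terms.
                        x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
                h *= 2
            return x

        t = fht([1.0, 2.0, 3.0, 4.0])
        print(t)                        # [10.0, -2.0, -4.0, 0.0]
        print([v / 4 for v in fht(t)])  # self-inverse up to a factor of n: [1.0, 2.0, 3.0, 4.0]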

  • @NiffirgkcaJ
    @NiffirgkcaJ 3 years ago +5

    This guy clearly needs more views and subscribers.

  • @PunmasterSTP
    @PunmasterSTP 1 year ago +1

    It kind of blew my mind when I found out there were dedicated AI regions in microchips, but I guess that was only a logical next step. I'm not in the field and I doubt I'll ever use this knowledge, but I definitely find it fun and interesting to learn about. Thanks for the very high-quality video!

  • @UnchartedThoughtsMusic
    @UnchartedThoughtsMusic 3 years ago +3

    0:00 - 1:45 *Oh man, I was thinking about that staring at a potentiometer, get me some vacuum tubes bois, we are going to Rome*

  • @calholli
    @calholli 3 years ago +2

    Your videos are like an endless competitive battle between my comprehension and your articulation; may this war never find peace.

  • @glazzinfo6031
    @glazzinfo6031 3 years ago +5

    Sir you are "Brilliant"

  • @SciHeartJourney
    @SciHeartJourney 3 years ago +2

    OMG, I've found my calling!
    I'm a pro with Op Amps and transistors! I know digital design and computer architecture very well too. I'm excited! 🤗

  • @Texplainedeverythingdetailed
    @Texplainedeverythingdetailed 3 years ago +5

    If someone starts using things like femtojoules, I believe them. No questions asked.

  • @BryceSchroeder
    @BryceSchroeder 3 years ago

    The character / is forward slash. \ is backslash. URLs have forward slashes in them. The backslash \ is used for DOS and Windows paths.

  • @latemhh5577
    @latemhh5577 3 years ago +11

    This really is a masterpiece

  • @tonysu8860
    @tonysu8860 3 years ago

    Only covers the topic at the 30,000-foot level...
    Approximately what lay persons can find and understand quickly.
    I was hoping to go at least one level deeper: an introduction to the actual computations, why this GPU hardware is appropriate for ML, and the likely direction of evolving designs.

  • @gingerpukh7309
    @gingerpukh7309 3 years ago +5

    NASA's Apollo mission guidance and control analog computer design might be useful.

    • @Onewheelordeal
      @Onewheelordeal 3 years ago +1

      I thought of that Smarter Every Day video first thing

  • @Tiogar60
    @Tiogar60 2 years ago

    If you think about it, low precision is exactly what our brain has. It is hard for it to process calculations as fast as a computer, but it is insanely good at processing input data.

  • @raykent3211
    @raykent3211 3 years ago +8

    Excellent video, thank you. I don't think it's the analogue aspect that can make AI indecipherable. In a purely digital neural network, the trail of causation that resulted in certain weightings (and therefore a decision) is irreversible. I'm fascinated and worried by recent discussions of how a trained system could have inbuilt prejudice that can't be proven.

    • @faustin289
      @faustin289 3 years ago

      This is no different from how decisions are formed in the human mind either. It has been observed that we (not sure who "we" is) make decisions and our conscious self then tries to rationalize those decisions after the fact.

    • @fofopads4450
      @fofopads4450 3 years ago +1

      It is not indecipherable, it's just not easy to decipher, because you end up needing more computational power than the AI itself consumes just to monitor what the AI is doing. That defeats its purpose.

  • @simepaul4882
    @simepaul4882 3 years ago +1

    What a well-made video. The narration is so logical...

  • @EweChewBrrr01
    @EweChewBrrr01 3 years ago +4

    I have no idea why I thought I could watch this and understand what's going on. Haven't even had my morning coffee yet.

  • @trumanhw
    @trumanhw 3 years ago

    Brilliant, the adjective -- not the noun ... made this video possible.
    (truly fantastic quality in every metric I can think of; THANK YOU!)

  • @theonetruemorty4078
    @theonetruemorty4078 3 years ago +4

    Quantum indeterminacy is required, there's no such thing as "artificial." What we call "consciousness" is the portion of a calculatory apparatus that dwells within a probability distribution, in a similar manner to which eyes dwell in the world of partial electromagnetic spectrum wavelength variation. Analyses of visual spectrum wavelengths are communicated to the visual cortex via a shared communication protocol; the visual spectrum itself does not dwell within the same domain as the eyes. In a similar fashion, what humans derogatively refer to as the "subconscious" mind acts as an interpreter of data received from the "conscious" mind reporting from the front line of a probabilistic domain; the "subconscious" does not dwell in the same domain as the "conscious." The deterministic informs the probabilistic, the probabilistic guides the deterministic, feedback loop paradox party time ensues; this is the strange and largely misunderstood process that we refer to as freewill. (disclaimer: don't listen to anything i say, i've clearly taken too many psychedelics, cheers)

    • @Souleater7777
      @Souleater7777 9 months ago

      Can you explain more

  • @angus8223
    @angus8223 3 years ago

    what I got from this was that these new computers are gonna have that lovely warm processing feel

  • @Lukegear
    @Lukegear 3 years ago +18

    new mind hardware xD

  • @UmairHussaini
    @UmairHussaini 3 years ago +1

    Excellently explained!

  • @user-pc5sc7zi9j
    @user-pc5sc7zi9j 3 years ago +3

    What is this "widespread availability of GPUs" he is talking about?

  • @hikaroto2791
    @hikaroto2791 3 years ago +2

    8:27 The background song is amazing; it reminds me of the Prometheus movie from the Alien saga

    • @kxtof
      @kxtof 3 years ago +1

      So I'm not the only one who noticed

  • @shairozsohail1059
    @shairozsohail1059 3 years ago +2

    I've been an AI researcher for years and learned a lot from this video. Thanks

  • @generalx5220
    @generalx5220 3 years ago +3

    Wow! I’m now woke AF in the understanding of AI

  • @youdrakkar
    @youdrakkar 2 years ago +1

    I bet Derek was definitely inspired by your wonderful video for his recent take on analog computation.

  • @shoam2103
    @shoam2103 3 years ago +1

    3:05 The most terse summary of current ML tech, and rather accurate too, I'd think!

  • @bigbeneconotmyjob6474
    @bigbeneconotmyjob6474 3 years ago

    I'm huge into R-RAM / memristor technology. I wish you could talk about it more, but I understand that the video is a general overview and not too in-depth on the sub-subjects.
    With that said, some comments on memristors (as of my last research, late 2019): Getting memristors to be accurately set and to operate predictably took some time, but in 2018 a team made a memristor that could be accurately set to 64 different states, about 6 bits of equivalent precision, which solved the first hurdle (I think it still used cobalt, a material everyone wants to avoid because its supply is monopolized by slave labor).
    That is just the memristor itself; there are circuit-reading techniques that further enhance memristor accuracy, as well as circuits that perform the calculation directly with memristors (combined into a perception module), reducing the need for ADCs and DACs and improving speed and energy efficiency.
    Currently, three main things need to be improved to make it a feasible technology: reading the data destroys it (the memristor changes state once read), so a system to automatically reprogram the resistor is needed (not the hardest problem; we simply haven't gotten to that point in prototype research, as people are still debating memristor design/chemistry types); setting such a system into a matrix array with all the needed support circuitry (i.e. figuring out a good architecture, which for now is putting the cart before the horse); and tooling, which is also the major cost. Producing memristors uses techniques that aren't too standard, so making that easier is needed to ease adoption.

  • @torbenjensen5002
    @torbenjensen5002 3 years ago +1

    The human brain is a quantum computer with one billion qubits working at 37 degrees Celsius.

  • @jerryplayz101
    @jerryplayz101 3 years ago

    7:30 - What about spintronics? It promises to increase performance by up to 100 times, improve power efficiency by about the same, and increase speed (due to next to no electrical resistance)

  • @yepyep266
    @yepyep266 3 years ago +1

    Computing has to become analog to get to the next level. In a sense, going digital effectively compresses a whole signal into a single bit of information.

  • @derekwood8145
    @derekwood8145 3 years ago

    One hugely negative downside to analog computing, beyond what you measured, is that analog designs are typically "low power" but have high energy consumption.
    Digital logic (i.e. CMOS) consumes little power when static (when the clock signal is stable), whereas most analog designs have much lower peak power consumption but consume power all the time. To decrease power consumption you have to introduce additional gates, dramatically reducing the benefits of analog systems and making them essentially a digital/analog hybrid design.
    It's really impressive you could convey these sorts of details in a coherent 13-minute video; kudos.

  • @granttaylor3697
    @granttaylor3697 3 years ago

    If anyone is interested, I have been working on new ideas for an analog computer that is both analog and digital at the same time, making the digital interface a lot easier than in older analog designs. I hope to make more of this technology public over the next few years, as I am still developing the processing blocks that would be required for applications such as AI.

  • @nullnull805
    @nullnull805 3 years ago

    Nice video. Just a few corrections and comments.
    Regarding the comment that current mathematical precision is far more than what is needed for machine learning applications (05:40): this is only true for one half of the process. It is true that when you deploy a model, inference can be done at much lower levels of precision without sacrificing how effective the model is. However, during training, this precision is still required. In fact, at lower precision levels training becomes unstable and the models often fail to train altogether. Training at lower precision levels is an active field of research. In summary: when you develop/train a model you need the precision; when you deploy the model, you don't.
    Regarding the comments on 'precision variability' (11:10): it is important to note that simply because a system is non-deterministic does not mean that it is numerically unstable. Put differently, a non-deterministic system can still produce approximately the same result in the presence of internal noise. In fact, many modern neural networks are trained by intentionally injecting non-deterministic noise into the system during training in order to make the network more robust. Perhaps a more relatable but hand-wavy claim is that the human brain is also able to produce approximately the same result in the presence of noisy biochemical processes.
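
    A minimal sketch (Python) of the noise-injection idea in the last paragraph; the layer, noise level, and values are invented for illustration:

        import random

        def forward(weights, inputs, train=False, noise_sd=0.05):
            # Multiply-accumulate, optionally with noise injected at training time.
            # Training under injected noise is one way to push the learned weights
            # toward solutions robust to the variability an analog substrate adds.
            acc = sum(w * x for w, x in zip(weights, inputs))
            if train:
                acc += random.gauss(0.0, noise_sd)
            return acc

        weights, inputs = [0.2, -0.5, 0.9], [1.0, 0.5, 0.25]
        print(forward(weights, inputs))              # deterministic inference pass
        print(forward(weights, inputs, train=True))  # noisy training-time pass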

  • @pistle9918
    @pistle9918 3 years ago

    I think you could debug on debug-dedicated analog processors that have a massive array of analog-to-digital converters, one converter placed at each connection between two resistors. That would be slow as hell, but you could debug your AI for later use on non-debugging chips. I just don't know if adding these "listeners" would change the white-noise behaviour too much in comparison to the non-debugging chips.

  • @ericlotze7724
    @ericlotze7724 3 years ago +1

    This is a great "under the hood" look at AI/deep learning programs. Awesome video, as usual!

  • @isiahfriedlander5559
    @isiahfriedlander5559 3 years ago +1

    What song is that at 8:30?

  • @aceofspades001
    @aceofspades001 3 years ago +2

    You know how dumb I am? I was so immersed in the video that I thought the ad was part of the topic! I'm dumb 🤣

  • @kalliste23
    @kalliste23 3 years ago

    The human nervous system uses noise to its advantage, for instance via stochastic resonance, so a noisy system isn't necessarily a problem for the functioning of living organisms.

  • @tylerkolota
    @tylerkolota 3 years ago +2

    I'm a little confused by the multiply-accumulate (MAC).
    I understand many ML applications will eventually output a percent chance of yes (like there's an 80% chance this picture has a cat in it). So at what point are these numerous accumulations translated into a percentage? Like, is the accumulated value A eventually divided by the number of multiply-accumulate operations (A/n)?

    • @ehtuanK
      @ehtuanK 3 years ago

      When a NN infers a single value, like a probability, it means that its output layer consists of a single node. The value accumulated in that node is then turned into the probability by an appropriately chosen activation function. For that reason, a different activation function is usually chosen for the output layer than for everything else: nowadays usually some rectifier variant for the hidden layers, and a sigmoid function or arctan for the output layer. The sigmoid function gives you values between 0 and 1, so it is used for things like probabilities.
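
      A minimal sketch (Python) of that last step, with made-up weights: the accumulated value is not divided by n; the activation function squashes it into (0, 1):

          import math

          def sigmoid(z):
              # Maps any accumulated value into (0, 1), usable as a probability.
              return 1.0 / (1.0 + math.exp(-z))

          # Hypothetical output node: weights, last hidden-layer activations, bias.
          weights = [1.2, -0.4, 0.7]
          hidden  = [0.9, 0.3, 0.5]
          bias    = -0.2

          z = sum(w * h for w, h in zip(weights, hidden)) + bias  # raw MAC result
          print(sigmoid(z))  # approx. 0.75 -> "75% chance of cat"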