Your Next CPU is Bigger Than Your HEAD 🤯 Cerebras Wafer Scale 2

  • Published 20 Sep 2024

Comments • 675

  • @SteelSkin667
    @SteelSkin667 3 years ago +172

    It took me a minute to realize that the corners of the die are rounded because it's reaching the edge of the wafer. That is insane.

    • @Outland9000
      @Outland9000 3 years ago +1

      Oh really? That's why?

    • @johnsmithe4656
      @johnsmithe4656 3 years ago +14

      @@Outland9000 Well, wafers are about that big, but in circle form. They must just trim the edges. I wonder why they don't just leave the CPU in a perfect circle shape, no cutting required.

    • @mastaw
      @mastaw 3 years ago +13

      @@johnsmithe4656 You can reuse the cut material

    • @TechTroppy
      @TechTroppy 3 years ago +5

      @@mastaw oh I didn't know that

    • @-yttrium-1187
      @-yttrium-1187 3 years ago

      @@johnsmithe4656 Because a wafer has defects, so cutting and pasting dies back together vastly increases yield.

  • @yanniskouretas8688
    @yanniskouretas8688 3 years ago +117

    One chip to rule them all,
    one chip to find them,
    One chip to bring them all
    and in the darkness bind them.
    I don't know why this made me remember the inscription on Sauron's Ring .....

    • @buggerlugz6753
      @buggerlugz6753 3 years ago +7

      If only it had integrated graphics. It'd be a bit of a git to spend millions on this chip and still not be able to use it because no one has a graphics card to sell you!

    • @manum5998
      @manum5998 3 years ago +2

      ITS SKYNET

  • @blackasthesky
    @blackasthesky 3 years ago +33

    I stumbled across your channel today. I am a CS student and I am mainly interested in that very low-level stuff, barely above the electrical engineering level. This channel is just what I didn't know I was looking for. YouTube recommendations aren't always bullshit.

    • @Myrddnn
      @Myrddnn 3 years ago

      Are you looking into Arduino and other micro-controller types?

  • @Apocalymon
    @Apocalymon 3 years ago +87

    Finally, niche science-fiction stories from the 70s & 80s are becoming a reality.

  • @BarryTheCougar
    @BarryTheCougar 3 years ago +49

    "If a core has a defect on it, then it can just be bypassed". Reminds me of the 80's and low-level formatting a ST-506 interface HDD and flagging bad sectors.

    • @theantipope4354
      @theantipope4354 2 years ago +1

      Oh jeez, I don't miss those days at all.

  • @bobafruti
    @bobafruti 3 years ago +51

    I remember in the 80s when the media first started talking about making supercomputers parallel instead of just faster. People said one day you might have 4000 CPUs in a supercomputer....
    We’ve had that for years now, but 4,000 on a single wafer is next level

    • @McOuroborosBurger
      @McOuroborosBurger 3 years ago +5

      @@caedenw OP is probably a Boomer or X'er cut them a little slack lol. :)

    • @insane_troll
      @insane_troll 3 years ago

      It is still only one CPU, just lots of cores.

    • @McOuroborosBurger
      @McOuroborosBurger 3 years ago +3

      @@insane_troll For a long time a CPU was just one core

    • @Yetoob8lWuxUQnpAahSqEpYkyZ
      @Yetoob8lWuxUQnpAahSqEpYkyZ 3 years ago +2

      @@micaheiber1419 nice try snp

    • @jeffwells641
      @jeffwells641 3 years ago +6

      @@micaheiber1419 The definition had to shift, because a "core" is no longer a Central Processing Unit. This should be obvious - a single processing unit (core) cannot be the Central processing unit if there is another processing unit just like it in the package. The CPU is now the package of all the cores together - a multi-core processor. So core and CPU are not interchangeable terms, even though what we now consider a Core was in fact the CPU prior to the mid-2000s.

  • @keyserxx
    @keyserxx 3 years ago +82

    I love your videos, my IQ went up a notch, and now for balance I'm off to watch an LTT video.

    • @larsalfredhenrikstahlin8012
      @larsalfredhenrikstahlin8012 3 years ago +13

      @I trigger people I feel like their content is aimed more towards those who want to keep up with the latest tech, consumer-wise. Not going balls deep in physics and maths, but just knowing which graphics cards are out and stuff like that. I think they are good at doing what they are aiming to do.

    • @peetiegonzalez1845
      @peetiegonzalez1845 3 years ago +7

      Lol, imagine Linus getting his hands on one of these. *whoops* there goes a $2.5m wafer. OK now it's 2 wafers...

    • @dickJohnsonpeter
      @dickJohnsonpeter 3 years ago +3

      Jeez, that guy is annoying, thankfully I haven't seen one of those in a couple years.

    • @daometh
      @daometh 3 years ago +3

      I like Linus's videos. They're a bit more dumbed down so I'm more able to understand.
      I'm just a designer, I like to keep up with the new tech but this is way over my head. It's still interesting but I don't understand much of it.

    • @manum5998
      @manum5998 3 years ago +3

      LINUS WILL install win 11 on it and Play Games @ 400Fps 16K

  • @liaminwales
    @liaminwales 3 years ago +25

    2:50 version 1 = 23 kW!
    4:20 version 2 = 23 kW, amazing how they keep the same power use with a die shrink
    WOW that brings a smile to my face.

    • @michaelrenper796
      @michaelrenper796 3 years ago +8

      You misunderstand the problem statement. 23 kW IS the design goal and the limit. You simply cannot dissipate more heat away from that surface. A chip like this is always running well below its maximum performance, that is, at a lower clock frequency than the silicon can support (rough numbers in the sketch at the end of this thread).

    • @liaminwales
      @liaminwales 3 years ago +4

      @@michaelrenper796 I have to assume most server hardware is set closer to a more power-efficient point, which is why server CPUs are mostly 2-4 GHz and not 5 GHz.
      My original intent was to communicate
      'wow, die shrink = power savings'
      (and yes, as you point out, the core speed may be set to hit the power target)
      &
      'wow, 23 kW is a lot of power for a small box'
      I did think about trying to make a joke about the back of the box being like a jet engine shooting out 23 kW of heat but wanted to keep it short.
      I did make that joke under someone else's comment, about how if TechTechPotato got one at home, the first thing that would happen is the breaker blowing when he turns it on.
      The idea of 23 kW is mind-bending to me.

    • @michaelrenper796
      @michaelrenper796 3 years ago

      @@liaminwales True, this topic is close to my heart since I own a poorly designed Razer gaming laptop (it's my work laptop, gaming is secondary), which has roasted its second battery in two years.
      Fortunately Alibaba is your friend when it comes to spare parts.

    • @liaminwales
      @liaminwales 3 years ago

      @@michaelrenper796 I gave up high-end laptops years ago, the last one I got was in 2008.
      They just do not last well. I really think a desktop at home and a light/OK laptop for when I'm mobile is the best way to go.
      Unless you really need the mobile power, I really think it's the best way (for me at least).

    • @ThePC007
      @ThePC007 3 years ago

      @@liaminwales Server CPUs typically work on tasks that are very parallelizable. Therefore, it makes sense for them to sacrifice high clock frequencies for a higher core count.
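
    A rough back-of-the-envelope check of the 23 kW point above. Only the 23 kW figure comes from the video; the die size (215 mm × 215 mm) and the desktop-CPU comparison numbers are assumptions, so treat this as illustrative:

        # Power density of a wafer-scale part vs. a typical desktop CPU (illustrative).
        die_area_mm2 = 215 * 215          # assumed WSE-2 die area, ~46,225 mm^2
        system_power_w = 23_000           # power budget stated in the video

        wse_density = system_power_w / die_area_mm2    # ~0.50 W/mm^2
        desktop_density = 300 / 200                    # assumed 300 W CPU on a ~200 mm^2 die

        print(f"wafer-scale power density : {wse_density:.2f} W/mm^2")
        print(f"desktop-CPU power density : {desktop_density:.2f} W/mm^2")

    In other words, 23 kW is the ceiling the cooling loop is designed around; spread over the whole wafer it is a lower per-mm² load than a desktop chip, which is why the clocks sit below what the silicon could do rather than the power scaling down with the die shrink.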

  • @henrysalayne
    @henrysalayne 3 years ago +64

    If GPU prices rise even higher, this might be a valid option on the second hand market in a few years.
    Love the new logo!

  • @oiytd5wugho
    @oiytd5wugho 3 years ago +33

    5:24 "a few that cerebras don't wanna talk about" yeah, this stuff is always exciting and then I go "oh, someone is using that for spooky military things"

    • @Vasharan
      @Vasharan 3 years ago +7

      Or facebook creating "pre-click" algorithms so they'll know what you'll click on before they even show it to you.

    • @xmlthegreat
      @xmlthegreat 3 years ago +6

      Nuclear weapons modeling still takes up a lot of US government compute time, but I'm sure there are also hypersonic flight regime research programs. Beyond that, maybe creating an ML model for air combat maneuvering, or using AI for acoustic signature detection in submarines. Of course the Facebook thing is true as well: create an ML model that can predict with increasing accuracy what a human would click on next.

  • @gulllars4620
    @gulllars4620 3 years ago +23

    I first came across Cerebras at STH, and then I fell down the rabbit hole a bit. The number of new and novel things they had to invent to be able to run a wafer-scale chip is extremely impressive. I'd liken it to almost an Apollo mission of silicon and systems engineering.
    From what I've read, training models is extremely fast, but the size of the onboard SRAM is a constraint on the model size (rough numbers in the sketch after this thread). I wonder if they will later make versions stacking some kind of memory under the compute cores, or using a denser type of memory than SRAM. Alternatively, an edition trading some cores for more memory.
    It is definitely a new paradigm of computing.
    I'd be curious to hear more about non-AI use cases here as well, and how well it works for those compared to the alternatives.

    • @martylawson1638
      @martylawson1638 3 years ago

      As the cores are probably ~1 GHz in-order processors, maybe move to a DRAM process for lower cost at higher memory density?
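
    A very rough capacity sketch for the SRAM point above. The ~40 GB of on-wafer SRAM is the publicly quoted WSE-2 figure; the bytes-per-parameter values are assumptions and ignore activations, so the practical limit is smaller still:

        # Crude upper bound on the model size that fits entirely in on-wafer SRAM.
        sram_bytes = 40 * 1024**3                    # ~40 GB on-wafer SRAM (quoted for WSE-2)

        for label, bytes_per_param in (("fp16 weights only        ", 2),
                                       ("fp32 weights + Adam state", 16)):
            max_params = sram_bytes / bytes_per_param
            print(f"{label}: ~{max_params / 1e9:.1f}B parameters")

        # ~21B parameters if only fp16 weights had to fit, but only ~2.7B once
        # optimizer state is included - hence the interest in stacked or denser memory.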

  • @nukedathlonman
    @nukedathlonman 3 years ago +9

    I find it impressive that they found a way to make the chip without having to worry about yields.

  • @fotmheki
    @fotmheki 3 years ago +20

    Amazing work from them. 100% yield on those beasts is wonderful! No silicon will be wasted.

    • @deth3021
      @deth3021 3 years ago +1

      Technically they still have silicon waste, as each defect makes a core useless. It's just that, due to the process, they leave the wasted silicon in the chip instead of recycling it. Also look at 7:47 to see all the wasted silicon around the edges of the wafer (see the yield sketch at the end of this thread).
      As such, this is arguably more wasteful, not less.

    • @radnukespeoplesminds
      @radnukespeoplesminds 3 years ago

      @@deth3021 I think that space around the edges is still usable for other chips though.

    • @deth3021
      @deth3021 3 years ago

      @@radnukespeoplesminds It could technically be, but that isn't how the TSMC business model works. As he said, the minimum order is a batch of silicon wafers. Also, due to the up-front cost of creating the masks etc., it's not like you would randomly add chips for a small run; it doesn't make economic sense.
      Long story short, I would be extremely surprised if they did that.
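
    To put a number on the yield discussion in this thread, here is a minimal sketch using the classic Poisson yield model. The defect density is a made-up illustrative value, not a TSMC or Cerebras figure; the die area and core count are the commonly quoted WSE-2 numbers:

        import math

        # Poisson model: P(a region is defect-free) = exp(-defect_density * area)
        defect_density_per_mm2 = 0.001        # assumed: 0.1 defects per cm^2
        die_area_mm2 = 215 * 215              # ~46,225 mm^2 wafer-scale die
        n_cores = 850_000                     # core count quoted for WSE-2

        # One monolithic die with no redundancy: essentially never defect-free.
        perfect_die_probability = math.exp(-defect_density_per_mm2 * die_area_mm2)

        # With per-core redundancy, a defect only disables the core it lands on.
        expected_dead_cores = defect_density_per_mm2 * die_area_mm2

        print(f"chance of a defect-free wafer, no redundancy: {perfect_die_probability:.1e}")
        print(f"expected dead cores with redundancy         : {expected_dead_cores:.0f} "
              f"of {n_cores:,} ({expected_dead_cores / n_cores:.4%})")

    Routing around a few dozen dead cores out of 850,000 is a rounding error, which is what "not having to worry about yields" means in practice, even though the defective silicon itself is still wasted, as noted above.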

  • @johnluuee
    @johnluuee 3 years ago +102

    When both kidneys aren't cutting it and you need an arm and a leg plus your firstborn.
    This sounds like a TV drama.

    • @Reth_Hard
      @Reth_Hard 3 years ago +5

      Yeah but being able to play Minecraft with shaders at a steady 60 Fps, it's priceless...

    • @raven4k998
      @raven4k998 3 years ago +1

      @@Reth_Hard bet you could play minecraft at a steady 240 fps and even the biggest explosions wouldn't even slow it down in the slightest

    • @Reth_Hard
      @Reth_Hard 3 years ago +3

      @@raven4k998
      I'm currently using an old dual-core ThinkPad with 1 GB of RAM, I literally have to reload Firefox between each video on YouTube... Are you sure you want to bet with me?

    • @raven4k998
      @raven4k998 3 years ago +1

      @@metanumia yeah it's the American dream

  • @ChrisDupres
    @ChrisDupres 3 years ago +123

    The new channel icon is excellent

    • @pytordk
      @pytordk 3 years ago +3

      Yes, but I miss the old potato 😕

  • @alexhaddock4554
    @alexhaddock4554 3 years ago +7

    Great video and good to see Natalia flash up. She came to talk to us when she was in HPE Labs and my brain melted! Good to see we are working on Neocortex with them.

  • @mt-qc2qh
    @mt-qc2qh 3 years ago +2

    Great info as always. Glad I found your channel. Frankly I'm amazed at how well you know the chip industry, and your presentations are spot on! Keep up the good work...

    • @TechTechPotato
      @TechTechPotato 3 years ago

      I have been Senior CPU Editor for AnandTech for a decade ;)

  • @drjacksparrow
    @drjacksparrow 3 years ago +10

    First-time commenter... I have been following you from AnandTech... I like your style. Keep up the good work. I am a tech enthusiast; you helped me learn something new today, thanks.

  • @leinadreign3510
    @leinadreign3510 3 years ago +5

    "The more gpus you buy, the more money you save"
    Damn, that quote is dangerous in these times....

  • @BRUXXUS
    @BRUXXUS 3 years ago +15

    The amount of work that's gone into this whole concept and project is mind-boggling. The software side alone, making it "just work" with a few lines of code almost seems impossible.
    I also adore the PotatoChippy character for the channel. 🥰

  • @TauxWau
    @TauxWau 3 years ago +2

    I read your new Anandtech article on Cerebras and was quite amazed. Thanks for bringing people's attention to this really cool piece of tech :)

  • @woolfel
    @woolfel 3 years ago +4

    One important thing to note about Cerebras' approach is that it's fundamentally different from a GPU. Cerebras doesn't have much on-board memory because it's streaming-centric and doesn't need 12 or 24 GB of memory like RTX cards. It's much closer to an FPGA's massively parallel I/O approach.

    • @Anon.G
      @Anon.G 3 years ago

      Yeah this guy is completely confusing gpu for graphics card

  • @-Kerstin
    @-Kerstin 3 years ago +26

    Jesus, I would love to see benchmarks and teardowns of this monster. It's off the charts

    • @benjaminfacouchere2395
      @benjaminfacouchere2395 3 years ago +18

      Yes, maybe in 20 years we can get a "vintage AI supercomputer" teardown by some guy in his garage of this beast :)

    • @gandautama4141
      @gandautama4141 3 years ago +6

      I am curious in hashrate LOL.. ^_^

    • @DigitalJedi
      @DigitalJedi 3 years ago +1

      I like to think this thing could emulate something that smashes benchmarks. A chip like this doesn't toy around with the benchmarks made for mere-mortal hardware.

    • @pcnazillpg5065
      @pcnazillpg5065 3 years ago +1

      @@gandautama4141 same

    • @bloodypommelstudios7144
      @bloodypommelstudios7144 3 years ago +1

      I'm not even sure HOW you'd benchmark something like this, but on paper it's an absolute monster.

  • @shawncarroll5255
    @shawncarroll5255 3 years ago +3

    "In a human, there are more than 125 trillion synapses just in the cerebral cortex alone,"
    It also comes with liquid-cooling, though leakage of coolant in the CPU can lead to permanent damage. The fluid pump is good for approximately three billion cycles, and the system consumes give or take 2,300 watt hours a day in the form of chemical energy, which is partially converted to pulsed chemo-electrical signals. MTBF runs about 60-70 years, and a new system typically takes about 20 years from inception to practical usefulness.

  • @Spikeypup
    @Spikeypup 3 years ago +4

    Love your new potatoes! So cute munching on those wafers! Great coverage here, thanks for the concise yet bountiful report. Hope you are well and staying safe.

  • @PlanetFrosty
    @PlanetFrosty 3 years ago +3

    Excellent presentation, we've looked at their chips for one of our major projects!

    • @TechTechPotato
      @TechTechPotato 3 years ago +5

      I assume you didn't end up with one? Did you find something better or was it too expensive or ?

  • @jwbowen
    @jwbowen 3 years ago +1

    I love extreme solutions like this and the Anton ASICs out of D. E. Shaw Research

  • @Ben-ry1py
    @Ben-ry1py 3 years ago +2

    Great stuff, as always! I really enjoyed your broken silicon too. I listened to all of it and loved it.

  • @ronaldgarrison8478
    @ronaldgarrison8478 3 years ago +1

    I'm older than Jack Kilby's and Bob Noyce's first chips. Older than Sputnik 1, and the ship that guy on your shirt was in. Even older than the Regency TR-1, and the Admiral C1617A (first color TV on the market). Can you imagine how insane this looks to a guy like me?

  • @marioc9168
    @marioc9168 3 years ago +13

    Please don't get bought out. Stay scrappy Cerebras!

  • @herrbert2505
    @herrbert2505 3 years ago +7

    Customers obviously military. Such chips will power battlebots that flickshot you from across the country.

  • @mr.e5988
    @mr.e5988 3 years ago

    Great content. In depth explanation of the who, what, when and why. Cerebras is on my radar thanks to you.

  • @lazyman114
    @lazyman114 3 years ago +3

    Sorry, did I just stumble upon the biggest technological breakthrough of the last decade? This is huge and we will be seeing more of these very soon. I'm surprised that it's possible to cool 21 kW though. That's insane.

  • @herrbert2505
    @herrbert2505 3 years ago +11

    23kW?! The real winners of the future will be energy providers 😅

    • @guytech7310
      @guytech7310 3 years ago

      Consider how much power would be needed to run a datacenter with 850,000 conventional cores. My guess is that it would be about 200 kW, so this is about 1/10 of the power consumption for the same level of compute. Very impressive!

  • @conza1989
    @conza1989 3 years ago

    You are doing nothing wrong. I have been a fan of your work for a long time but have mostly heard of you through other tech YouTubers, so I'll subscribe, because this really is fascinating. I wonder if this method, even if not at this scale, could be applied to chiplets to get better connections in the future? The chocolate-block CPU die approach, it's extraordinary.

  • @rougenaxela
    @rougenaxela 3 years ago +50

    I have to give them points for the audacity of making power-dense wafer-scale monsters like that.

    • @william_SMMA
      @william_SMMA 3 years ago +4

      There are some laughably hard-working and intelligent people roaming this earth.
      So many more intelligent than Einstein but working under a company somewhere in California.

    • @jimjimmy3131
      @jimjimmy3131 3 years ago +3

      @@william_SMMA No, they are simply not. Einstein himself worked as a teacher, but then he went and had an idea about how the universe worked in some sense. So no, they are not smarter than Einstein just because they have more PhDs.

    • @jonnyj.
      @jonnyj. 2 years ago +2

      @@william_SMMA More intelligent than einstein...? Unless they make predictions out of thin air that turn out to be accurate through experimentation 100 years later, I highly doubt that...

  • @celivalg
    @celivalg 3 years ago +2

    This is amazing, it goes so far beyond whatever people were thinking about when engineering AI-specific solutions. I wonder if this technology of creating huge chips like that has other applications in data computing; I assume you can do some fun stuff with something optimized for more general parallel computing...

  • @dylanlac765n6
    @dylanlac765n6 3 years ago +1

    The funny thing is that in probably 20-30 years, we will have the same power in our desktops

  • @TA-eo2ww
    @TA-eo2ww 3 years ago +1

    ABSOLUTELY ASTONISHING!!!!

  • @rubinkatz9850
    @rubinkatz9850 1 year ago

    you answered my burning question at 8:00

  • @-MaXuS-
    @-MaXuS- 3 years ago +1

    Super interesting as always! I have one criticism that I have found a wee bit frustrating, and that would be how quickly the b-roll with stats and pics flashes by. Oftentimes you show cool pics/stats and then I find myself having to go back and pause to actually absorb the stats or whatever. Content like this is why I love your channel, so thanks!! ✌️🖖

  • @Mr_Stone1
    @Mr_Stone1 3 years ago +9

    I'm trying to deduce how big this AIO liquid cooler is... I'd say size of a fridge sideways? And that is half of the system? Or is that only pumps, and the radiators are elsewhere? I want to see an air cooler for this! A kraken of heatpipes!

    • @matty1234a1
      @matty1234a1 3 years ago +1

      Heat pipes actually can't function over about 8 inches, and a wafer is 12 inches, so it wouldn't wick any heat from the center😅

  • @madf00bar15
    @madf00bar15 3 years ago +3

    Yuri Gagarin shirt? Cool!

  • @bmurph24
    @bmurph24 3 years ago +2

    Love the intro and logo lookin slick man!

  • @Alphabass121
    @Alphabass121 3 years ago +10

    This is crazy. I would love to know more about the interconnect they are using.

  • @skyline2203
    @skyline2203 3 years ago +8

    Ian do you see these dedicated AI hardware chip companies (Cerebras, Tenstorrent) taking away market share from Nvidia?

    • @skyline2203
      @skyline2203 3 years ago +3

      @TechTechPotato Could you do a video on how some of the licenses work for AI and the overall costs? I was reading that doing AI training with Nvidia in the data center requires a very expensive license and hardware, and that you cannot do AI training in the data center on regular Nvidia GPUs. It would be helpful to understand where the market is headed and how Cerebras and Tenstorrent solutions fit in compared to Nvidia.

  • @psycronizer
    @psycronizer 3 years ago

    It's all good news for us consumers! This will no doubt translate into competition for the red and green teams...

  • @MatthewHarrold
    @MatthewHarrold 3 years ago

    Yeah ... alright ... sub'd ... old timer (approaching 50 on the back of Wild Turkey) ... have memories of a trumpet player implementing MIDI 1.0 on an Apple 2E before the DX7 was released, this game is never ending! $0.02

  • @MikaelMurstam
    @MikaelMurstam 3 years ago

    I don't follow chips but this is just something else. This makes me hopeful for the future. I've long thought "why don't they make the chips bigger?". I've learned that one answer to that question is actually the speed of light. The information can't be sent that far in one clock cycle. That's why you can't have one chip that's this large for serial processing. However, when it comes to parallel processing you can since they work at the same time. They don't have to communicate as much. So this would never work for a single-core CPU, but it works wonders for multi-core processors.
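
    A quick sketch of that speed-of-light limit (illustrative numbers; real on-chip signals travel well below c, so the budget is tighter still):

        # Upper bound on how far any signal can travel in one clock cycle.
        C = 3.0e8                              # speed of light in m/s

        for clock_hz in (1e9, 3e9, 5e9):
            cm_per_cycle = C / clock_hz * 100
            print(f"{clock_hz / 1e9:.0f} GHz -> at most {cm_per_cycle:.0f} cm per cycle")

        # At 3 GHz that is ~10 cm in vacuum, less than the ~21.5 cm width of this die,
        # which is why one giant synchronous core is out, while an array of mostly
        # independent cores talking to near neighbours works fine.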

  • @justjoeblow420
    @justjoeblow420 3 years ago +6

    We are definitely in one of the most exciting times for computing as a science, as not only are we seeing large-scale AI chips, but there are also now commercial quantum computers out there in the wild.

    • @sunnohh
      @sunnohh 3 years ago

      Happy holidays and damn right

  • @johnpaulbacon8320
    @johnpaulbacon8320 3 years ago

    Thanks for making this video.

  • @tanmayd9873
    @tanmayd9873 3 years ago +1

    This is some great content. It's quite niche, but quite different from most of the other channels I subscribe to. Thanks for the video.

  • @Sam_Saraguy
    @Sam_Saraguy 3 years ago +1

    Fascinating. I'm so glad I caught this one.

  • @marktackman2886
    @marktackman2886 3 years ago +2

    Love this content lately, sorry I was not sub'd earlier, this is my jam.

  • @haydennelson9404
    @haydennelson9404 3 years ago +1

    Awesome channel just discovered it! Extremely interesting!

  • @swipekonme
    @swipekonme 3 years ago

    11:52 hear this sentence first. The first thing that came to mind is cloud availability, so you're buying time slices, not the whole shebang. The other is of course the software; we should be in charge of scaling. They need to segment the cores by the thousand and then let us address each segment independently.

  • @alexanderfrolov5172
    @alexanderfrolov5172 3 years ago +3

    Thank you for the T-shirt!!

  • @senri-
    @senri- 3 years ago +2

    What they're doing is certainly amazing, but seeing how big the final product is makes me feel like there's so much more that can be done in terms of the broader picture of compute.

  • @dynpallomah5918
    @dynpallomah5918 3 years ago +1

    Now that's a true frying pan

  • @johnsaylor1583
    @johnsaylor1583 3 years ago

    That is insane. 2.6 trillion transistors. Human ingenuity is AMAZING. Here's to hoping we make it as a species.

  • @energyexecs
    @energyexecs 3 years ago

    ... Agree with the Cerebras approach where the speaker said: "...you don't need racks and racks and racks of GPUs to do your AI compute". I would be interested in how a Cerebras system compares in a side-by-side business case with an Aurora supercomputer. Or are they complementary? When you're working on budget forecasting, the forecast also needs the "features and benefits" of the investment, especially when obligating taxpayer money. At least that is what happens on the ground floor. At first glance the Cerebras system is the AI optimization that is "complementary" to an Aurora exascale supercomputer.

  • @adamblance3346
    @adamblance3346 3 years ago

    This is absolutely mental, wow.

  • @thomasdavid1160
    @thomasdavid1160 3 years ago

    A 12 inch (300mm) wafer is 70685 sq.mm. This die is ~46K sq mm. They are trashing the rounded portions of the wafer and losing ~24K sq mm...A very interesting take on silicon density.
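
    The arithmetic above checks out; a minimal sketch (the 215 mm × 215 mm die edge is the commonly quoted WSE-2 figure, so an assumption here):

        import math

        wafer_diameter_mm = 300
        die_side_mm = 215                                     # assumed WSE-2 die edge

        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,686 mm^2
        die_area = die_side_mm ** 2                           # ~46,225 mm^2

        print(f"wafer area : {wafer_area:,.0f} mm^2")
        print(f"die area   : {die_area:,.0f} mm^2")
        print(f"trimmed off: {wafer_area - die_area:,.0f} mm^2 "
              f"({(wafer_area - die_area) / wafer_area:.0%} of the wafer)")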

  • @iforth64
    @iforth64 3 years ago +1

    Are they building a hypercube with flying leads? I predict the next step: do this in 3D (or stagger a few dies).

  • @bassmechanic237
    @bassmechanic237 3 years ago

    Great job, thank you for the info

  • @DrWoodyII
    @DrWoodyII 3 years ago

    One more must-have item to my bucket list. I should hurry, time grows short.
    Edit: A stock to watch.

  • @adondriel
    @adondriel 3 years ago

    OK, but a CPU taking a whole kilowatt is fucking amazing. That is SOOOO much power.

  • @FullyBuffered
    @FullyBuffered 3 years ago

    What a marvel of engineering and amazing that they're already profitable. Very informative video!

  • @Karthig1987
    @Karthig1987 3 years ago

    Very cool tech. Nice video. Cool and cute opening and ending graphics lol

  • @gregmaland5318
    @gregmaland5318 3 years ago

    I'm especially interested in the ramifications for GPT-3 (4,5,6). Very interesting video!

  • @TheZenytram
    @TheZenytram 3 years ago

    FINALLY! I've always wondered why a giant freaking CPU was never built.

    • @mysecondaccount7887
      @mysecondaccount7887 3 years ago

      Probably because we lacked the application for such a chip, the field of AI needed time to develop first before anyone would pay for something like this.

    • @TheZenytram
      @TheZenytram 3 years ago

      @@mysecondaccount7887 Yeah I know, but there's always some crazy company that tries something crazy, like this one.

  • @jaykemm3472
    @jaykemm3472 3 years ago

    First off, I bet Dr. Evil has one. And third, I bet you could make a smokin gaming machine with this.

  • @tomwomack3167
    @tomwomack3167 3 years ago +1

    I don't think they are unreasonable at all in cost. I had looked at the CS2 system before seeing this video. I have spent far more on computing systems than they are asking. It is the floor space the other systems sit on that is the issue.

  • @lazypig93
    @lazypig93 3 years ago

    Now the computer is looking nostalgic again...

  • @yarox3632
    @yarox3632 3 years ago

    the new channel icon made me subscribe

  • @TheKutia
    @TheKutia 3 years ago

    I love this. You explained this so well.
    Another big step towards the singularity.

  • @Steamrick
    @Steamrick 3 years ago +2

    In the intro - did you not have any Itanium samples to show off?

    • @TechTechPotato
      @TechTechPotato 3 years ago +3

      I have one, I didn't think to grab it

  • @johnsmithe4656
    @johnsmithe4656 3 years ago +3

    That's a hell of an XMR mining rig.

  • @marksmadhousemetaphysicalm2938
    @marksmadhousemetaphysicalm2938 3 years ago

    Fascinating video...the WSE 2 is just a bit out of my range for my next computer, I'm afraid...as powerful as this new system is for machine learning, it still uses Von Neumann architecture, though...as opposed to combining memory and processing simultaneously like the neuromorphic chips do...I do wonder if the WSE 2 will offer any advantages besides speed for such tasks.

  • @avgStan1234
    @avgStan1234 3 years ago

    The BRAIN! The BRAIN! When will they build Marvin, the depressed Paranoid Android?

  • @djenning90
    @djenning90 3 years ago +1

    I love the title, but somehow I’m doubting this will be my next CPU 😂

  • @butterflyblueshorts
    @butterflyblueshorts 3 years ago +1

    very interesting video Ian. Thanks

  • @randomkitty2555
    @randomkitty2555 3 years ago

    I kinda had a hunch that CPUs have reached a limit because of their size.
    They keep cramming more stuff into current CPUs, but it'll reach a point where they can't anymore.
    So it's obvious that if the limit is being caused by size restrictions, then increase the size restrictions and start a new form factor.

    • @Anon.G
      @Anon.G 3 years ago

      Architecture isn't even close to maxed out so even if CPUs don't get denser they'll get faster

  • @lajosbaranyi7333
    @lajosbaranyi7333 3 years ago

    Thank you! I learned something important again.

  • @powertoker5000
    @powertoker5000 3 years ago

    This video is relevant to my interests.

  • @polydynamix7521
    @polydynamix7521 3 years ago

    It's not likely to be a long-term solution. The only reason Cerebras is more powerful (in a few very specific computing instances) is that it cuts out the communication time between nodes in a larger server. Soon, though, we'll be using fiber to connect between nodes, which cuts that time to next to nothing, once again making modular and scalable computing more viable.
    Sure, Cerebras is cheaper up front, BUT when you've got 500 nodes and a processor burns out, you're still running 499 nodes with a small upkeep to replace the ruined node... when your entire supercomputer has only 1 processor and something happens, then you're coming to a screeching halt, and repairing it would likely make up the lion's share of the cost of a new system entirely.

  • @Psychx_
    @Psychx_ 3 years ago

    Very cool stuff! It will be interesting to see how the competition will react. Wafer-scale general-purpose CPUs and GPUs with a huge amount of on-chip memory and even more external (HBM) cache? Maybe an exascale RISC-V system-on-wafer?

  • @amphem
    @amphem 3 years ago

    Your EXACTLY! call is never gonna get boring :D hahaha. Love the show

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 3 years ago

    thank you and please more

  • @sandwich2473
    @sandwich2473 3 years ago +1

    Great video as always, but I feel like I need to comment that I _really_ like your new profile pic.
    It's the best!

  • @erikschiegg68
    @erikschiegg68 3 years ago

    I would build that beast right into water boilers for hot water production. It's just the right size for a hot air jacuzzi. Form follows function and not the other way around for pretty looks.

  • @Fifury161
    @Fifury161 3 years ago

    Sir Clive Sinclair wanted to do something similar with the 486 way back in the 90s, as in multiple 486s on a single die, however he couldn't secure rights or funding IIRC...

  • @nzoomed
    @nzoomed 3 years ago +1

    I can only assume the "customers" Cerebras doesn't want to talk about are military!

  • @vicmac3513
    @vicmac3513 3 years ago

    Hopefully RISC-V is the future. That would be customized for everyone's needs.

  • @ThunderDraws
    @ThunderDraws 3 years ago

    This massive chip seems to me like engineers shitposting, just trying to fill the entire wafer!
    Somehow they made it work and can sell it to people. Amazing.

  • @MrCakerape
    @MrCakerape 3 years ago

    Now this is the thing that will get Arma running at 60FPS

  • @smifffies
    @smifffies 3 years ago +2

    Ian, could I ask you to clarify your comment "none of the others out there can do CFD"? If by CFD you mean Computational Fluid Dynamics, I would have thought you especially would have known Formula 1 teams have been doing CFD on those "other" chips for quite some time. The current world champs and other top teams use AMD for their CFD requirements. If I remember correctly, Manor Racing entered F1 using solely CFD with no wind tunnel testing in around 2015. Please feel free to correct me if I'm wrong or misinterpreting your comment. Otherwise very interesting and informative video, keep it up.

    • @TechTechPotato
      @TechTechPotato 3 years ago +1

      I'm speaking about AI chips doing CFD for the actual compute. What you're talking about is usually banks and banks of CPU/GPU.

    • @smifffies
      @smifffies 3 years ago

      @@TechTechPotato I would think that for about a million fine British pounds you could get 10 of these with all drives etc.: www.broadberry.co.uk/dual-amd-epyc-7003-series-rackmount-servers/cyberserve-as-4124go-nart These would still be more capable. This is based on watching the auction of the IT gear of Manor Motorsport/Marussia, which was a hell of a lot lower spec than these. The power would be a couple of kilowatts higher, but the systems would be much more versatile. The Cerebras is interesting, but I personally think it will probably remain fairly niche. My opinion is based on being an old git who has been building PCs since the mid-1980s.😊

  • @trevorloughlin1492
    @trevorloughlin1492 3 years ago

    Now they need to make a 3D stacked cube version.

  • @karanbirchahal3268
    @karanbirchahal3268 1 year ago

    your channel is great

  • @usamwhambam
    @usamwhambam 3 years ago

    These assemblies should be named something similar to “CPU matrices” or maybe “CPU blocks”.