We tour the world's fastest supercomputer at Oak Ridge National Laboratory!

  • Published on 28 Jan 2025

Comments • 182

  • @martijnb3381
    @martijnb3381 9 months ago +49

    "And is 2 Exaflops" big smile 😊 Nice to see someone that is passionate about his work!

    • @Alfred-Neuman
      @Alfred-Neuman 8 months ago +1

      It's not even "a" computer, it's basically just a botnet installed locally...
      The only difference I can see between this and a botnet is that this is installed inside a single room, so they'll get better latency between the different RAM modules, CPUs and GPUs. What I'm trying to say is this technology is not very impressive; they're using pretty much the same CPUs and GPUs that we are using in our cheap desktops. Just a lot of them...

    • @NorbertKasko
      @NorbertKasko 8 months ago

      @@Alfred-Neuman For a while they used special processors in these systems, developed directly for them. Cray comes to mind. When clock speeds became less scalable they started to use consumer hardware. This one has 8.7 million processor cores instead of 16 or 64 (talking about high-end desktop machines).

    • @mikmoody3907
      @mikmoody3907 8 months ago +1

      @@Alfred-Neuman Why don't you build a better one and impress us all..................

  • @chuckatkinsIII
    @chuckatkinsIII 8 months ago +13

    One of the coolest aspects of Frontier's network architecture is at the node level. Since all the compute is done on GPUs, the network fabric connects directly to the GPUs instead of something like a PCIe bus. So simulations can transfer directly between GPU memory and the network fabric without involving the CPU or having to move data on or off the GPU to get to the network. It allows for incredibly efficient internode communication.

    • @noth606
      @noth606 8 months ago +1

      So the GPUs have NICs connected directly to them? With some sort of second MMU with its own NIC? It's a tad unclear from the way you describe it, but I wonder how it connects to the GPU since you say it's not using PCIe?

    • @chuckatkinsIII
      @chuckatkinsIII 8 months ago +1

      @@noth606 I slightly misspoke. The NICs use PCIe ESM but are connected directly to a PCIe root complex on one of the GPUs. Each node has 4 GPUs, each with 2 dies (so 8 visible GPUs) and a dedicated NIC, so 4 NICs per node. Thus any CPU operation that has to use the fabric actually traverses one of the GPUs to get to a NIC.
      Source: you can find a bunch of architecture docs for Frontier, but I also worked for several years on developing some of the library and software stack for this machine and a few others that were just beginning to come online.
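
A note on what "the fabric connects directly to the GPUs" buys you in practice: with a GPU-aware MPI build, application code can hand device pointers straight to MPI and let the NIC move data to and from GPU memory without a host-side staging copy. A minimal sketch, assuming GPU-aware MPI and HIP; the buffer sizes and ring exchange are illustrative, not taken from Frontier's actual codes:

```cpp
// Sketch: GPU-aware MPI, assuming an MPI build with GPU support.
// Buffer sizes and the ring exchange pattern are illustrative only.
#include <mpi.h>
#include <hip/hip_runtime.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                    // 1M doubles per buffer
    double *d_send = nullptr, *d_recv = nullptr;
    hipMalloc(&d_send, n * sizeof(double));   // device (GPU) memory
    hipMalloc(&d_recv, n * sizeof(double));
    hipMemset(d_send, 0, n * sizeof(double));

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    // Device pointers go straight into MPI; with GPU-aware MPI the fabric can
    // read from / write to GPU memory without bouncing through the CPU.
    MPI_Sendrecv(d_send, n, MPI_DOUBLE, right, 0,
                 d_recv, n, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    hipFree(d_send);
    hipFree(d_recv);
    MPI_Finalize();
    return 0;
}
```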

  • @JamiesHackShack
    @JamiesHackShack 9 months ago +10

    Great video. Thanks for taking us along with you all!

  • @kellymoses8566
    @kellymoses8566 9 months ago +24

    The biggest difference between HPC networks and corporate networks is the lack of security in favor of performance at all costs. The compute nodes directly access remote memory over the network (RDMA, e.g. RoCE).
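
To make "directly access remote memory" concrete: one common pattern in HPC codes is one-sided MPI (RMA), which on RDMA-capable fabrics typically lets a rank read another node's exposed memory without the remote CPU running any receive code. A minimal sketch; the window size and values are illustrative:

```cpp
// Sketch: one-sided MPI (RMA). MPI_Get pulls data from another rank's exposed
// window; on RDMA-capable fabrics the remote CPU does not service the read.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::vector<double> local(1024, static_cast<double>(rank)); // memory to expose
    MPI_Win win;
    MPI_Win_create(local.data(), local.size() * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    std::vector<double> pulled(1024, 0.0);
    MPI_Win_fence(0, win);                 // open an access epoch
    if (rank == 0 && size > 1) {
        // Rank 0 reads rank 1's window; rank 1 posts no matching receive.
        MPI_Get(pulled.data(), 1024, MPI_DOUBLE, 1, 0, 1024, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);                 // close the epoch; data is now valid

    if (rank == 0 && size > 1)
        std::printf("first value pulled from rank 1: %f\n", pulled[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```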

  • @knewdist
    @knewdist 9 months ago +4

    Awesome tour/interview. Dan seems like a real genuine dude. 👍

  • @udirt
    @udirt 9 months ago +10

    incredibly good interview you did there.

    • @artofneteng
      @artofneteng  9 months ago

      Thank you!

  • @trebabcock
    @trebabcock 9 months ago +36

    ORNL is my dream job. I'd honestly sweep floors just to be in the building.

    • @iamatt
      @iamatt 8 months ago

      It isn't all rainbows and unicorns

    • @jinchey
      @jinchey 8 months ago

      @@iamatt did you have a traumatic experience at Oak Ridge National Laboratory?

    • @pipipip815
      @pipipip815 8 months ago +1

      Good attitude. Doesn’t matter where you get on the ladder, just get on, work hard, learn and be agile.

    • @stevestarcke
      @stevestarcke 8 months ago +1

      I visited there long ago. It was the most awesome place I have ever seen.

    • @iamatt
      @iamatt 8 months ago +1

      @@jinchey it's an interesting place to work when you get in the mix and actually see how the politics are, let's just say that.

  • @artysanmobile
    @artysanmobile 9 months ago +8

    Take note of the power cables for each rack; the draw is similar to what a large house might use, per rack. Removing the heat from those racks is a big part of the design. Air flows from the floor and out the top through active exhausts. A little hard to believe, but compactness is a top priority.

    • @mikepict9011
      @mikepict9011 8 months ago +1

      Could you imagine their hvac systems!!!! Chillers rated in swimming pools per min

    • @artysanmobile
      @artysanmobile 8 months ago +3

      @@mikepict9011 They have so much heat to get rid of, the concept of blowing cold air is no longer valid. Fluid is far more effective at conducting heat away from a metal structure and processors are manufactured with built-in liquid cooling. Each rack is built for purpose with an exchanger which takes it directly out of the room, then returns cold for the next batch. If you work on your home’s HVAC unit, you’re familiar. A widely distributed system like that can be monitored and adjusted for best efficiency.

    • @mikepict9011
      @mikepict9011 8 months ago +2

      @@artysanmobile Yeah, that's part of a larger cascading system when you consider the envelope, usually. The liquid usually and ultimately needs to be rejected outside. And that's called a chiller in a liquid system and a condenser in a direct exchange system. But yeah, vapor compression, pipe joining. It's what I do.

    • @mikepict9011
      @mikepict9011 8 months ago +3

      @@artysanmobile i serviced the mini chillers that cool MRI machines, they still had a 1 air to refrigerant dx hx and 2 coaxial heat exchangers ( hx ) with 2 pumps . Simple systems compared to real low temp refrigeration

    • @hamsolo8165
      @hamsolo8165 7 months ago +1

      Frontier is water cooled. You have water doing the heat exchange, not your traditional HVAC. There is still HVAC in the room since there are other non water cooled systems in the same room (storage and commodity gear). The switches, controllers and nodes in Frontier are all water cooled.

  • @alexanderahman4884
    @alexanderahman4884 9 months ago +17

    Sorry for nitpicking, but he got one thing wrong. The reason you don't use electrical network cables for longer distances is not primarily interference from the power cables; it has everything to do with attenuation.
    At these speeds it is very hard to get the signal more than a few meters; it will be heavily attenuated and very hard to distinguish a 1 from a 0. The solution to the problem is to use fibre optics instead.
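
A rough feel for the numbers, with an assumed (not measured) loss figure: at the multi-GHz signalling rates behind 200 Gb/s links, copper loss on the order of several dB per metre adds up quickly, and loss in dB converts to remaining amplitude as 10^(-dB/20).

```cpp
// Sketch: how quickly an assumed per-metre copper loss eats the signal.
// The 6 dB/m figure is an illustrative assumption, not a measured cable spec.
#include <cmath>
#include <cstdio>

int main() {
    const double loss_db_per_m = 6.0;  // assumed insertion loss at the signalling frequency
    const double lengths_m[] = {1.0, 3.0, 10.0, 30.0};
    for (double metres : lengths_m) {
        double total_db  = loss_db_per_m * metres;
        double amplitude = std::pow(10.0, -total_db / 20.0);
        std::printf("%5.1f m: %6.1f dB loss -> %.6f of the launched amplitude remains\n",
                    metres, total_db, amplitude);
    }
    return 0;
}
```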

    • @grandrapids57
      @grandrapids57 8 months ago +2

      This is correct.

    • @steveking7719
      @steveking7719 4 months ago

      It's about impedance matching and Standing Wave Ratio that causes the attenuation when sending pulses down copper. The copper needs to be matched at both ends so you don't get reflections. Stick to fiber.

    • @alexanderahman4884
      @alexanderahman4884 4 months ago

      @@steveking7719 No, attenuation is caused by the cable impedance and skin effect (and dielectric losses).
      Impedance mismatch creates reflections that make it harder for the receiver to receive the signal correctly.
      It is true that both the transmitter and receiver must be impedance matched, but that is already the case. No high speed copper networking would be possible if that wasn't the case.
      But still, even if you have matched tx/rx, you still get very high attenuation that makes copper networking unusable for lengths over just a few meters.

    • @steveking7719
      @steveking7719 4 months ago

      @@alexanderahman4884 rut row... here we go again... two people disagreeing and neither will back down from their position.

    • @alexanderahman4884
      @alexanderahman4884 4 months ago

      @@steveking7719 I'm just stating facts.

  • @goutvols103
    @goutvols103 8 months ago +3

    What happens to the hardware after they are removed from Oak Ridge? Is there still some value in them besides recycling?

    • @hamsolo8165
      @hamsolo8165 7 months ago +1

      It can be sold as spare parts, and/or recycled.

  • @iamatt
    @iamatt 8 months ago +4

    You can tell when the machine is running some serious workloads because the lights flicker in the offices next to it.

  • @mattaikay925
    @mattaikay925 8 months ago +3

    Did I see Cray - oh my - that is just awesome

    • @iamatt
      @iamatt 8 months ago

      And AMD not NVDA 😂

  • @jfkastner
    @jfkastner 9 months ago +4

    Great video, thank you. It would have been interesting to hear about the types of failures they see: overheating, bad solder, failed caps, fan/plumbing failures, etc.

    • @artofneteng
      @artofneteng  9 months ago +3

      We did learn that they have full service staff provided by OEMs of the supercomputer. They were there performing maintenance that day. Our POC didn't have specifics on hardware failures of the HPC environment, I'll see if he has anything on the networking components.

    • @iamatt
      @iamatt 8 months ago +1

      L3 cache errors, for one

    • @iamatt
      @iamatt 8 months ago +1

      @@artofneteng MTBF was 1 hour at first

    • @iamatt
      @iamatt 8 months ago +1

      @@artofneteng blue jackets are pushing carts all day

  • @glennmorrissey5309
    @glennmorrissey5309 19 days ago

    Great podcast, thank you.

  • @TaterRogers
    @TaterRogers 8 months ago

    I am in Security now but I really miss being a network engineer. Thank you for sharing on this platform.

  • @tsoldrin-TF
    @tsoldrin-TF 9 months ago +5

    This is amazingly quiet for a system of that size

    • @artofneteng
      @artofneteng  9 months ago +8

      Water cooled! The other half of the data center not shown in the video was all storage and that side was LOUD!

  • @roberthealey7238
    @roberthealey7238 9 months ago +5

    No matter how big or small:
    The network IS the computer…
    For the past few decades, outside of embedded applications (and even in many situations there), computers have to be connected to a network to have any practical value; every piece of software, and most if not all its data, is sent over a network at some time in its lifecycle.

    • @dougaltolan3017
      @dougaltolan3017 9 months ago +2

      Never underestimate the bandwidth of a FedEx truck.
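
The joke holds up numerically. A worked example: 18 TB native per LTO-9 tape is a real spec, while the tape count and drive time are made-up illustrative numbers.

```cpp
// Sketch: effective bandwidth of a truck full of tapes ("sneakernet").
// 18 TB per LTO-9 tape is real; 1000 tapes and a 24-hour drive are assumptions.
#include <cstdio>

int main() {
    const double bytes_per_tape = 18e12;        // 18 TB native (LTO-9)
    const double tapes          = 1000.0;       // assumed truckload
    const double transit_s      = 24.0 * 3600;  // assumed 24-hour drive
    const double bits           = bytes_per_tape * tapes * 8.0;
    std::printf("Effective bandwidth: %.2f Tbit/s (latency: one day)\n",
                bits / transit_s / 1e12);
    return 0;
}
```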

  • @robertpierce1981
    @robertpierce1981 8 months ago +1

    I've been in the computer rooms at Fort Meade. Awe-inspiring.

    • @ronaldckrausejr7762
      @ronaldckrausejr7762 8 months ago +1

      Fort Meade is also volumes faster than this system. It’s just the specs are classified - someone will know those specs eventually (perhaps in 20-30 years). Even Snowden knew the NSA has had the best computer in the world since 2002

  • @RyanAumiller
    @RyanAumiller 8 months ago +1

    Why not check out the visualization suite? That's the coolest part.

    • @artofneteng
      @artofneteng  7 months ago +1

      We were there specifically to talk about the supercomputing network. We did have access to and saw other things while onsite, but were only authorized to record the networking piece seen here.

  • @artysanmobile
    @artysanmobile 9 months ago +3

    The power supply behind them is unbelievable. Enough for a town.

    • @chuckatkinsIII
      @chuckatkinsIII 7 months ago +1

      About 10y ago when they built the Trinity supercomputer at Los Alamos they were able to save a bunch on power costs by partially diverting the river that runs through the town into giant pipes that run under the lab's datacenter for cooling. That was a wild one.

    • @artysanmobile
      @artysanmobile 7 months ago

      @@chuckatkinsIII Intelligent use of resources makes me happy.

  • @ssmith5048
    @ssmith5048 9 months ago +1

    Simply Awesome!

  • @blitzio
    @blitzio 8 months ago

    Amazing tour, mind blowing stuff

    • @artofneteng
      @artofneteng  8 months ago

      Glad you enjoyed it!

  • @artysanmobile
    @artysanmobile 9 months ago +6

    I’m surprised they can even talk in there. I’ve been in some major data centers and communication can be difficult.

    • @artofneteng
      @artofneteng  9 months ago +5

      They were water cooled so no fans on that side of the DC. The other side was storage which still had traditional cooling and was very loud!

    • @artysanmobile
      @artysanmobile 9 months ago +1

      @@artofneteng Ah, that makes sense.

  • @woodydrn
    @woodydrn 8 months ago +1

    You know if they switched off all those small diodes on each server, blinking all the time, consuming power, I wonder how many watts that is total. You really only need those lights to debug if something is working right? could be a little switch instead to toggle those on and off

    • @sky173
      @sky173 8 months ago

      You can think of five LEDs as using about 1 watt of power. In the grand scheme of things, if they were switched off, most people would not know that some energy was saved.
      If you look at the home computer, it costs (on average) $35-$40 +/- a year to run a home computer 8 hours a day (possibly much less). Those same five LEDs (diodes) that you mentioned would cost 35-40 cents to run 8 hours a day for a full year (or just over a dollar per year if running 24/7).
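
The arithmetic behind those cents, using the 1 W figure from the comment above and an assumed electricity rate of $0.12/kWh:

```cpp
// Sketch: yearly cost of ~1 W of status LEDs, using the comment's 1 W figure
// and an assumed $0.12/kWh electricity price.
#include <cstdio>

int main() {
    const double watts        = 1.0;   // roughly five indicator LEDs
    const double usd_per_kwh  = 0.12;  // assumed electricity price
    const double kwh_8h_year  = watts * 8.0  * 365.0 / 1000.0;
    const double kwh_24h_year = watts * 24.0 * 365.0 / 1000.0;
    std::printf("8 h/day:  %.2f kWh/yr -> $%.2f\n", kwh_8h_year,  kwh_8h_year  * usd_per_kwh);
    std::printf("24 h/day: %.2f kWh/yr -> $%.2f\n", kwh_24h_year, kwh_24h_year * usd_per_kwh);
    return 0;
}
```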

    • @woodydrn
      @woodydrn 8 months ago

      @@sky173 But it's quite redundant to have them right? You dont need them at all really

    • @artofneteng
      @artofneteng  7 months ago +2

      If they're blinking, you know it's working.

  • @Dasycottus
    @Dasycottus a month ago

    You know you've got a serious lab when it has a SCIF 🥰

  • @glennosborne4555
    @glennosborne4555 9 months ago +2

    After working with one, we heard the gruntiest one is in Japan now rather than Oak Ridge.

  • @greekpapi
    @greekpapi 11 days ago

    I worked in a data center for 10 years, cant tell you how many times I was under that raised floor...lol

  • @calebwyman5510
    @calebwyman5510 8 months ago +1

    Computers are like watches now; we need to start making computers that last hundreds of years, in my opinion.

    • @Dasycottus
      @Dasycottus a month ago

      We can make computers that last hundreds of years already. For most purposes, it's fairly pointless. The technology simply advances too quickly.

  • @indisongsinmyvoice3311
    @indisongsinmyvoice3311 6 months ago

    200 Gbps for the out-of-band network?! I never thought of that... I was wondering if maybe it was 2 Gbps for the management network.

  • @jonshouse1
    @jonshouse1 8 months ago +1

    Not sure I understand the "noise" issue with copper Ethernet? It is transformer-coupled at each end, self-balancing, with common-mode induced noise rejection via the twist. I've seen it run alongside the wiring for 3-phase CNC equipment with no issues. Even at those scales I am not sure I buy that explanation. Length would be a real issue at that scale rather than noise, I would have thought.

    • @hamsolo8165
      @hamsolo8165 7 months ago

      The speed of the link and the acceptable error rates for the encoding on those links are limiting factors. Noise, heat and other factors all play a role in maintaining high-speed links.
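
One way to see why error rates are a first-order concern at these speeds, with an assumed (typical, not quoted) raw bit-error rate:

```cpp
// Sketch: at 200 Gb/s even a tiny bit-error rate means frequent errors, which is
// why high-speed links rely on FEC. The 1e-12 BER is an assumed figure.
#include <cstdio>

int main() {
    const double line_rate_bps = 200e9;  // 200 Gb/s
    const double ber           = 1e-12;  // assumed raw bit-error rate
    const double errors_per_s  = line_rate_bps * ber;
    std::printf("%.2f bit errors per second, i.e. one every %.1f s without correction\n",
                errors_per_s, 1.0 / errors_per_s);
    return 0;
}
```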

  • @vanhetgoor
    @vanhetgoor 8 months ago

    It would have been nice to know a few things about how that plethora of processors is organised, how they work together and, most of all, how the output from all the processors is combined into one result. I can imagine a number of cores each working on part of a programme, but with this enormous number of processors that can't be done by hand any more.

    • @hamsolo8165
      @hamsolo8165 7 months ago +1

      Granted this is a 'network' overview. Go read about MPI. Swear a lot. And it will start to come together. Jobs that run on systems like this can run on hundreds of nodes at the same time. It's not at all impossible. And yes, they can run containers if you wanted to do so.
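
For the "how do they work together" question above, the usual answer is the pattern this reply points at: each MPI rank owns a slice of the problem and a collective combines the partial results. A minimal, self-contained sketch of that pattern (a toy problem, not code from the machine):

```cpp
// Sketch: the basic MPI pattern. Each rank works on its own slice of the data,
// then a collective (MPI_Allreduce) combines the partial results.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Toy problem: sum 0..N-1, split evenly across ranks.
    const long long N = 1000000;
    long long begin = rank * N / size;
    long long end   = (rank + 1) * N / size;

    double local_sum = 0.0;
    for (long long i = begin; i < end; ++i) local_sum += static_cast<double>(i);

    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum computed across %d ranks = %.0f\n", size, global_sum);

    MPI_Finalize();
    return 0;
}
```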

  • @you2be839
    @you2be839 8 months ago

    Fascinating... still don't understand much of what that "time machine" is all about, but fascinating nevertheless... even though I think a DMC DeLorean properly retrofitted for time travel offers a bit more practicality and excitement in terms of time travelling!! Haha

    • @artofneteng
      @artofneteng  8 months ago

      The time machine reference was that the supercomputer has done in a shorter amount of time what would have taken us years to complete without it. It dramatically speeds up research.

  • @brookerobertson2951
    @brookerobertson2951 8 months ago +2

    But can it run doom ?

  • @bits2646
    @bits2646 9 months ago +11

    In supercomputing it's either Network or Notwork :DD

    • @tuneboyz5634
      @tuneboyz5634 9 months ago

      thats really funny lil buddy 😊

  • @grantwilcox330
    @grantwilcox330 8 months ago

    Great video

  • @minicoopertn
    @minicoopertn 8 months ago

    Are these super computers shielded against EMP

    • @artofneteng
      @artofneteng  8 months ago

      Great question! I don't recall if he said whether they are or not.

  • @deeneyugn4824
    @deeneyugn4824 9 months ago +2

    Where does the old system go, eBay?

    • @olhoTron
      @olhoTron 8 months ago

      I think it will go to auction

    • @eliasboegel
      @eliasboegel 8 months ago

      It's usually auctioned off.

  • @youtubeaccount931
    @youtubeaccount931 8 months ago

    How many instances of doom can it load?

  • @switzerland3696
    @switzerland3696 8 months ago +1

    200Gb, lol I have that between the switches at work which I put in like 3 years ago.

    • @stacksmasher
      @stacksmasher 8 months ago +1

      You don’t understand. Each device in this platform has 200Gb so every CPU, GPU and storage connection has 200Gb direct to the platform.

    • @artofneteng
      @artofneteng  7 months ago

      Thank you!

    • @hamsolo8165
      @hamsolo8165 7 months ago

      Yes, you likely did a few uplinks as ISLs. That's cool. However, on a Slingshot switch every single one of those 64 ports does 200 Gbps, to the nodes connected to it and towards the fabric.

    • @gregorssamsa
      @gregorssamsa 7 months ago

      AWS has 400 Gbit/s per GPU over EFA (3.2 Tbit/s in total per host), and can sustain 3.6 Tbit/s on NVLink intra-node, any-to-any.
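
A quick back-of-the-envelope on the figure quoted a few replies above (64 ports at 200 Gb/s per Slingshot switch) shows why the per-port number alone undersells it:

```cpp
// Sketch: aggregate throughput of one 64-port, 200 Gb/s switch, using the
// figures quoted in the reply above.
#include <cstdio>

int main() {
    const double ports         = 64;
    const double gbps_per_port = 200;
    const double total_gbps    = ports * gbps_per_port;
    std::printf("%.0f Gb/s aggregate = %.1f Tb/s per switch\n",
                total_gbps, total_gbps / 1000.0);
    return 0;
}
```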

  • @dougaltolan3017
    @dougaltolan3017 9 months ago +4

    No way do you get access to the world's fastest computer...
    Hypersonic missile systems are classified :P

    • @iamatt
      @iamatt 8 months ago

      Open research; classified work is in other DCs

    • @Dasycottus
      @Dasycottus a month ago

      Oak Ridge houses a SCIF too.
      I assure you, they can do plenty without telling anybody. That said, it wouldn't surprise me if a few facilities had clusters expressly for classified work. Area 51 and plant 42 probably have some spicy CFD simulators with no USB ports...

  • @JustAnotherThisDJ
    @JustAnotherThisDJ 8 months ago

    Where are the NSA stickers?

    • @artofneteng
      @artofneteng  7 months ago

      I'm sure the NSA has their own super computers.

    • @hamsolo8165
      @hamsolo8165 7 months ago +1

      At Fort Meade. :P

    • @JustAnotherThisDJ
      @JustAnotherThisDJ 7 months ago

      @@artofneteng yeah, and theyre plugged into this.

  • @brookerobertson2951
    @brookerobertson2951 8 months ago

    We will have the same processing power in a phone in around 20 years. I watched a documentary about a supercomputer the size of a factory, and it wasn't as fast as a new phone 10-15 years later.

    • @hamsolo8165
      @hamsolo8165 7 months ago

      Moore's law says yes. We just need people learning engineering so that we can push those limits. Innovation and education go hand in hand.
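
A hedged back-of-the-envelope on how long that kind of doubling takes to close the gap; the phone figure and the doubling period are round assumptions, and the 2 exaflop figure is the one quoted in the video:

```cpp
// Sketch: years needed for exponential doubling to close a performance gap.
// The phone figure and doubling period are assumptions for illustration.
#include <cmath>
#include <cstdio>

int main() {
    const double phone_flops    = 2e12;  // assumed ~2 TFLOP/s for a current phone GPU
    const double frontier_flops = 2e18;  // ~2 EFLOP/s, as quoted in the video
    const double doubling_years = 2.0;   // assumed doubling period

    const double doublings = std::log2(frontier_flops / phone_flops);
    std::printf("gap = %.0fx -> %.1f doublings -> roughly %.0f years at one doubling every %.0f years\n",
                frontier_flops / phone_flops, doublings,
                doublings * doubling_years, doubling_years);
    return 0;
}
```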

  • @DMSparky
    @DMSparky 8 months ago

    You're here to look at the networking the way an electrician looks at the electrical.

  • @bmurray330
    @bmurray330 8 months ago

    The guy in light blue needs a trimmer wardrobe.

  • @seanlheeger
    @seanlheeger 3 months ago

    Beowulf Cluster of Doom!

  • @waterdude123123
    @waterdude123123 9 months ago +5

    But can it run crysis?

    • @tuneboyz5634
      @tuneboyz5634 9 months ago +2

      no

    • @drooplug
      @drooplug 9 months ago +1

      You spelled Doom wrong.

    • @munocat
      @munocat 9 months ago +1

      how many chrome tabs can it handle?

    • @TAP7a
      @TAP7a 8 months ago

      In all seriousness, not very well.
      Games have such minuscule latency requirements that any distributed system is immediately going to fall on its face. Even chiplet-to-chiplet within the same CPU package has proven to be enough to affect game experience - reviews of the R9 7950X all identified that frame pacing was affected dramatically when threads moved between CCDs, let alone moving between entire racks.
      Now, playing Crysis on a single unit, especially if it has both CPU and GPU compute...

    • @id104335409
      @id104335409 8 months ago

      Nothing can.

  • @nowhearthis5067
    @nowhearthis5067 2 months ago

    ..."the types of problems" they are targeted to solve is the real answer. The problem dictates the architecture. It's still built on geometries and silicon, "Same - Same" but targeted differently. Tools for science not tools for enterprise etc.

  • @robeigner4390
    @robeigner4390 a month ago

    Not any more. LLNL's El Capitan is now the leading supercomputer.

  • @Terost36
    @Terost36 8 months ago +1

    I can't imagine working there with all these computers and so much electric field energy; hopefully it is not affecting people's health. Any EMI/EMF Faraday cage?

  • @robgandy4550
    @robgandy4550 8 months ago +1

    I would love to work there. Tired of making 10 Gb go as fast as possible. Mind you, I got into a teraflop

  • @ml.2770
    @ml.2770 8 months ago

    But can it run Crysis?

  • @tironhawk1767
    @tironhawk1767 8 months ago +3

    So Skynet is a Tennessean.

  • @BreakpointFun
      @BreakpointFun 8 months ago

    7:50 his head got a head 😂 i cant stop seeing this

  • @Gumplayer2
    @Gumplayer2 8 months ago

    Did he really say what these machines are used for?

  • @Derekbordeaux24
    @Derekbordeaux24 8 months ago

    But can it run doom

  • @technicalthug
    @technicalthug 7 months ago

    I didn't enjoy this as much as I thought I would. My criticism and suggestions below:
    1. It would have made more sense to plan/script some of these conversations (e.g. the explanation of what a FLOP/exaflop is was pointless). Even saying: "Add 1.5 + 1.5": that's a FLOP; if everyone on Earth did that calculation at the same time, that's 8 gigaflops; now if we had 125,000,000 planet Earths each doing a calculation per second, that would be about the same power.
    2. Really didn't talk much about the network; very wishy-washy and high level. A 5-minute whiteboard session could have added more detail. It seems there was another team who managed the HPC fabric vs. the "classic network" that could have been consulted. Really not much detail; most of it had to be gleaned from the b-roll.
    3. The question about the applications used on the HPC would have been better asked of a user/scientist.
    4. A tour of the operations centre/NOC and other supporting areas (HVAC, power) would have been interesting.
    5. This could have been 10 minutes long.
    6. The guest mentions early in the video that it's 200 Gbps Ethernet, yet the question is asked again later in the video.
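
For what it's worth, the planet-Earth analogy in point 1 does land in the right ballpark; checking the arithmetic with the numbers from the comment:

```cpp
// Sketch: checking the "everyone on Earth" FLOP analogy from point 1 above.
#include <cstdio>

int main() {
    const double people_on_earth = 8e9;    // ~8 billion people, one FLOP each per second
    const double earths          = 125e6;  // 125,000,000 planet Earths
    const double flops           = people_on_earth * earths;
    // The video's ~2 exaflop figure would need roughly twice as many Earths.
    std::printf("%.2e FLOP/s = %.0f exaflop/s\n", flops, flops / 1e18);
    return 0;
}
```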

    • @artofneteng
      @artofneteng  7 months ago

      Great feedback, thank you!

  • @Spya777
    @Spya777 8 months ago +2

    OMG he asks so many stupid and repeated questions about the network cables....

    • @artofneteng
      @artofneteng  8 months ago +1

      Some clarifying questions never hurt, and this channel is Network Engineering focused.

  • @bmiller949
    @bmiller949 8 months ago

    I would hate to see their electric bill.

  • @djtomoy
    @djtomoy 8 months ago

    Can it play Minecraft?

  • @illegalsmirf
    @illegalsmirf 6 months ago

    love the mildly autistic and awkward geeky conversations

  • @josephconsuegra6420
    @josephconsuegra6420 8 months ago +1

    Quantum computers are exponentially faster.

    • @ionageman
      @ionageman 4 months ago

      Very limited though

  • @switzerland3696
    @switzerland3696 8 months ago

    Those poor bastards having to deal with AMD GPU drivers in HPC.

    • @evanstayuka381
      @evanstayuka381 8 months ago

      Is that a problem? Can you elaborate?

    • @switzerland3696
      @switzerland3696 8 months ago

      @@evanstayuka381 Driver and firmware stability vs nvidia, look at the drama around Geohot / tinygrad / tinycorp having to abandon AMD as their primary platform due to the lack of stability.

    • @gunturbayu6779
      @gunturbayu6779 8 months ago

      None of that matters when they use custom software and write their own code; the HPC is used for open computing, so CUDA won't matter. Now you'd better go back to your GTX 1650, fanboy.

    • @switzerland3696
      @switzerland3696 8 months ago

      @@gunturbayu6779 Your statement makes no sense, and why the hate? As they say, when you do not have a good argument you resort to personal attacks.

    • @switzerland3696
      @switzerland3696 8 months ago

      @@evanstayuka381 I thought I already replied to this, or perhaps the post got deleted because the truth was too hard to handle.
      The AMD driver and firmware stack is noticeably less stable than the NVIDIA driver and firmware stack.
      Look at the drama that Geohot / Tinygrad / Tinycorp had trying to go with AMD GPUs; they had to abandon AMD as the tinybox standard and offer the NVIDIA option as the primary one, as they could not get the driver / firmware stability required for a shippable platform. Let's see if this post gets deleted.

  • @josephmills9031
    @josephmills9031 8 months ago

    Asks what's an exaflop, proceeds not to explain an exaflop.
    EXA FLoating point OPerations per second: one quintillion floating-point operations per second.

  • @antonjaden2482
    @antonjaden2482 8 months ago +1

    Bitcoin miners😂

  • @j0hnny_R3db34rd
    @j0hnny_R3db34rd 7 months ago

    Not impressed. It was outdated as soon as it was deployed.

    • @artofneteng
      @artofneteng  7 months ago

      Thanks for your feedback

  • @javiermac381
    @javiermac381 8 months ago

    For playing games?

  • @Thalarian-vo9gg
    @Thalarian-vo9gg 3 months ago

    But can it run Minecraft? 😂

  • @inraid
    @inraid 8 months ago +1

    Horrible soundtrack!

    • @artofneteng
      @artofneteng  7 months ago

      We appreciate your feedback

  • @detectiveinspekta
    @detectiveinspekta 9 months ago +1

    Panduit.

  • @evileyemcgaming
    @evileyemcgaming 8 months ago

    Heheh, all I'm thinking is how cool it would be to play Minecraft on it

  • @inseiin
    @inseiin 8 months ago

    The balder dude has been wearing headphones for too long....

    • @artofneteng
      @artofneteng  7 months ago

      Well, he's a podcaster too so...

  • @kennethwren2379
    @kennethwren2379 8 months ago

    Are you sure it's the fastest super computer in the world? China has come a long way in this field and would be very competitive.

    • @artofneteng
      @artofneteng  7 months ago

      At the time we recorded this video, which was August of 2023, Frontier held the title of fastest supercomputer in the world.

    • @hamsolo8165
      @hamsolo8165 7 months ago

      You build the worlds fastest car and run it on your own private hidden race track. How do you then convince the world your car is faster than the fastest car out there when you refuse to show it to anyone? That's what China is doing at the moment. They very likely have some fast system(s) hidden somewhere. Maybe. :)

  • @TabulaRasa001
    @TabulaRasa001 8 months ago

    This guy doesn't seem like he's ever seen the inside of a data center before. What embarrassingly basic questions; they didn't even get to what's special about their setup or capability.

  • @cod4volume
    @cod4volume 8 months ago +1

    Chose amd to save money, could be faster with intel omegalul

    • @channel20122012
      @channel20122012 8 months ago

      Faster with Intel ? Are you living under a stone? Lol

    • @kilodeltaeight
      @kilodeltaeight 8 months ago +1

      The real crunching of data here is happening on GPU cores, not the CPUs. Those are just managing the GPU cores, effectively. With a system like this, your biggest concern is power and cooling, so efficiency is what matters. AMD very much wins there, and has the experience with building large systems like this - ergo, they won the contract.

    • @gunturbayu6779
      @gunturbayu6779 8 months ago

      The funny thing is, this Oak Ridge machine will be number 2 once the El Capitan project is done, and AMD will hold number 1 and 2 for fastest supercomputer. Intel will have number 3 with 80% more power usage, lmao.

    • @hamsolo8165
      @hamsolo8165 7 months ago

      @@gunturbayu6779 You forgot Aurora!

  • @Kenneth_James
    @Kenneth_James 8 months ago

    Get that man clothes that don't look like he was just shrunk into them

  • @rtz549
    @rtz549 8 months ago

    They need to generate crypto to pay for future machines and upgrades.

    • @kilodeltaeight
      @kilodeltaeight 8 months ago

      lol. No. Who needs crypto when you can literally just print more dollars? DoE has a massive budget regardless.

    • @rtz549
      @rtz549 8 months ago

      @@kilodeltaeight Then they could construct a supercomputer that had no final size or limitations.

  • @europana7
    @europana7 9 months ago +1

    it should mine BTC :P

  • @EvoPortal
    @EvoPortal 8 months ago

    It's just a lot of servers clustered together.... whats the big deal? Server clustering has been around for decades....

    • @artofneteng
      @artofneteng  7 months ago

      Right, but this particular cluster of servers has a MASSIVE amount of compute that has done amazing things for us!

    • @EvoPortal
      @EvoPortal 7 months ago

      @@artofneteng Who cares? its the exact same thing, its just a bunch of servers clustered together. With enough money you can make one twice the size. There is nothing revolutionary here.

    • @hamsolo8165
      @hamsolo8165 7 months ago

      @@EvoPortal to a certain degree you are correct. HPC systems are just "bigger meaner" versions of a traditional cluster. Your traditional cluster could be optimized for a multitude of things where most HPC clusters are optimized for parallelism where speed is the key to running and processing massive sets of calculations. Every bit of the HPC cluster is optimized for speed. A traditional cluster is usually not built explicitly for speed. It's built for function. HPC is built for both speed and function. The innovation these days is coming (mostly, and imo) from faster/better connected fabrics. And with PCIe 7.0 soon to come out too. Evolution... with the occasional revolution thrown in there for fun.

  • @tahersadeghi6773
    @tahersadeghi6773 7 months ago

    You talk like a child!

  • @ATomRileyA
    @ATomRileyA 8 months ago

    What a great video, so informative; it must be a real privilege to work on that system.
    Reading about it here as well, so impressive.
    en.wikipedia.org/wiki/Frontier_(supercomputer)