Well... Yeah. CS2 is DX11 whereas Doom Eternal is straight-up Vulkan, so of course its multi-threaded performance is out of this world. The problem is most people don't really like dealing with DX12 or Vulkan just for the performance gains of a game they're not putting special attention into. It's why most indie games stick to simpler, lightly multi-threaded APIs like OpenGL 4 and DX11.
6000 points in CB2024 is 6x that of my 5900X, but it's also only 60% of my 4060, which can break 10k points at 100W. I think the biggest thing CB2024 taught us is that CPUs need to leave the rendering to GPUs.
…unless you want to render a real scene for a big budget movie, in which case the textures alone might be in the range of 100 GB - good luck fitting that into any GPU memory.
...till you realize that GPUs with a ton of memory cost a whole lot more than this CPU. Decked out, 24GB of VRAM costs you 2 grand USD, and that translates into roughly 144GB of VRAM worth of cards, while this CPU can do well over 1TB. For smaller scale, GPUs might be fine, but for 100+GB projects CPUs start to be king due to capacity, unless you get a board which can support that many cards.
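A quick back-of-the-envelope check of that capacity-per-dollar argument, using only the round numbers from the comment above (roughly $12,000 for the CPU, $2,000 per 24GB card); these are the comment's assumptions, not current prices. A minimal Python sketch:

# Capacity-per-dollar comparison using the figures quoted above (assumptions, not real quotes).
CPU_PRICE_USD = 12_000        # ballpark price quoted for the 128-core Epyc
GPU_PRICE_USD = 2_000         # "decked out" 24 GB card
GPU_VRAM_GB = 24

cards_for_same_money = CPU_PRICE_USD // GPU_PRICE_USD
total_vram_gb = cards_for_same_money * GPU_VRAM_GB

print(f"{cards_for_same_money} cards for the money -> {total_vram_gb} GB of VRAM total")  # 6 cards -> 144 GB
print("versus a single-socket Epyc board that can hold 1 TB+ of system RAM")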
My primary workstation has the 64 core 3990x which I use primarily with Arnold Render in Cinema 4D and Houdini. Been curious how much faster the 128 core CPU is. Looks like about 30% faster based on your test numbers. Great video! Cinebench 2024 is using Redshift XPU which until recently was a GPU only renderer which is quite a bit different than the old CPU based physical renderer on previous cinebench tests.
This is why you should have pallets of spray air on hand. Turn it upside down and go for gold. I used to do that back in the early 2000s as the only way to try and keep up with Gibbo and Marshy when doing overclocking comps at work.
Please make a test with Crysis, CPU only. Yes, you did the test with a 64-core Epyc years ago; I'm just curious how the performance has increased. Thank you
I literally won't ever use or touch most of, if any, of the stuff reviewed in LTT videos - I mainly watch for the sheer excitement and human that is Linus. You, fun sir, are a treasure ❤️ and are cherished as one, thank you for the joy you bring and share in each video and thing you do ❤️
@@LordApophis100 16 core CCD with 3D Vcache is exactly what I would want as a content creator, and give me another 16-32-48 without it for rendering processes.
Linus is making me feel old.. i had an opteron and a 939 mobo, we OC'd the heck out of it, even had a friend break and keep the world record for OC on it on air. twas quite the time to be alive. technology was literally leaping ahead in terms of pc components.
What would be interesting is to somehow put a waterblock on the VRM and the CPU and let it rip. Wonder if you could get a custom BIOS to allow overclocking?
The M1 Ultra comparison is pretty interesting when you consider it’s a 20 core/20 thread chip vs a 128 core/256 thread chip. I know Apple’s cores are pretty great in terms of IPC but I would have expected more than a 4x difference (and this is as an M1 Ultra owner). I wonder if there’s some other bottleneck in the system or in Cinebench r24 that’s limiting the performance
What is the TDP of the M1 Ultra? 50-60 W? Let's use 60. It has 20 cores, which means one core has a 3 W TDP. So 128 such cores should add up to a 384 W TDP, yet this only has 250/300W. That means it's roughly equivalent to 100 of those cores in power terms, but it also has multithreading (2 threads/core), which makes the comparison a bit more difficult.
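Spelling out that rough per-core scaling; the 60 W package figure and the 3 W/core result are the commenter's assumptions, not measured numbers:

# Naive per-core power extrapolation from the assumed figures above.
m1_ultra_tdp_w = 60          # assumed M1 Ultra package power
m1_ultra_cores = 20
bergamo_cores = 128

watts_per_core = m1_ultra_tdp_w / m1_ultra_cores          # 3.0 W per core
scaled_tdp_w = watts_per_core * bergamo_cores             # 384 W for 128 such cores
print(f"{watts_per_core:.1f} W/core x {bergamo_cores} cores = {scaled_tdp_w:.0f} W")
# versus the 250-300 W quoted above, before even counting SMT (2 threads per core)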
@@LeLe-pm2pr Where's the problem? Those cores perform pretty similar to modern x86 cores and due to the higher power efficiency of ARM there are far fewer issues with boosting.
@@fireeraser2206 a huge amount of applications are built to run on x86, and not ARM. don't get me wrong, i'm all for ARM being the next big thing in the cpu market, but, as of right now, it cannot be considered a viable alternative for a lot of things
Hardware in another ~5 or so years is gonna be wild. With AI in rapid development we are probably going to start seeing a lot more consumer AI tools which need powerful hardware to run. Maybe in 10 years or so something like this will even be commonplace.
Us tech enthusiasts are an odd bunch, we'd think nothing of dropping loads of money on a 128 core CPU just to run benchmarks and be like burrrrrrrr squares go quick
But think about the potential! Then I go open a browser and vscode and the computer basically sleeps the rest of the day while I type into a text editor
i want a video of you guys touring a movie/vfx studio to know what they use. do they use this stuff for rendering. could be a fun video instead of boring data centers
@@bigpodno you would have a VFX or movie studio talking about how long it took to render your favorite scenes/movies, and talking about the progression from now versus ten years ago. There is a lot more juicy content there than a regular data center that just crunches numbers.
@@vampirefox7 Hardware at those studios is still just a datacenter, and honestly big datacenters can talk about much more interesting stuff. And honestly cloud datacenters are really interesting.
Like others have said, for Minecraft you'd prefer a high clocked CPU. Minecraft server providers often use servers based on consumer CPUs like AMD Ryzen and Intel Core. I went a bit wrong route back in 2013 and bought an old dual socket server with 2x Xeon E5420 2.5GHz (think of a slightly lower clocked Core2 Quad Q9450 but two of them) and 32GB RAM. It did fine and it was fun to play around with real server hardware, but in reality it would have been better to go with a modern, at the time, i5-2500K or i5-3570K with 8GB RAM. A modern i5 system would have cost more but I ended up having to buy two different servers in 2013 because the first one died due to a lightning strike. The money I spent on the two systems combined would have been enough for a custom built i5 system.
128 "execution units", 400W power draw, eye-watering amounts of RAM support, decent rendering capacity, ~$8000 MSRP... this is just a modern-price GPU that goes in a processor socket :P
96GB is actually almost consumer-grade density these days. You can get up to 512GB per stick. These sticks were favoring speed, hence the 5600MT/s, while a 512GB system would use something like 3400MT/s on DDR5.
I think the real reason why it “eventually” gets to consumers is because it becomes too expensive to separate the processes and is cheaper to move consumers to what they already use.
Stephen Chow movies are legends of my childhood. This takes me back to watching Gintama in my teens. Too bad it's hard to find anything like them these days. Sixty Million Dollar Man, Fight Back to School, CJ7, KFH etc. are also must-watches. They're full of parodies sourcing from anything anyone might encounter in life. This is why I like his movies.
At 6:17 Linus uses the unit 'foot pounds' where he should have used 'pound force inches.' We're working to update the video.
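For reference, the standard conversion factors between the units involved (1 N·m ≈ 8.851 lbf·in ≈ 0.7376 lbf·ft); a small sketch applying them to a 1-6 N·m torque driver range:

# Convert a 1-6 N·m torque range into pound-force inches and pound-force feet.
NM_TO_LBF_IN = 8.8507   # 1 N·m ≈ 8.8507 lbf·in
NM_TO_LBF_FT = 0.73756  # 1 N·m ≈ 0.73756 lbf·ft

for nm in (1.0, 6.0):
    print(f"{nm:.0f} N·m = {nm * NM_TO_LBF_IN:.1f} lbf·in = {nm * NM_TO_LBF_FT:.2f} lbf·ft")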
ok
ok
ko
SP5 waterblock needed, the ThermoFlex 5000 hungers!
Thanks Steve
I was looking for a CPU to run Cities Skylines II at a least 15 fps, I think this one will do.
nah this prob gonna give you max 10 fps
Might need a better graphics card tho
It's going to be 10 fps after they release new patch
You'll need three of these. Probably.
The problem with Cities Skylines is simply that it relies on single-core performance and just isn't optimised at all.
Meaning more cores won't give you any more performance; rather, it will hurt performance because each single core won't turbo as high.
Love the game but pls just optimise for multithreading.
The most insane feature of Bergamo is its power efficiency. Wendell from Level1Techs mentioned that Zen4c is even more power efficient than most currently available ARM CPUs
What!!!
@@User-dd2xv you can compare it to an Ampere Altra Max, which also has 128 cores. The Altra consumes about 130W in idle and 350-400W under load. Bergamo consumes about 120W in idle and 500-600W under load. But Bergamo is up to three times faster than the Altra Max
@@romanpul Power consumed per unit of work done. For a server chip, it kills :)
dang, very cool
Comparing to ARM only on watts/power, nah, it sucks.
However, with 3x the power? It's EPYC.
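Using only the rough figures quoted in this thread (350-400 W vs 500-600 W under load, and "up to three times faster"), the perf-per-watt gap works out to roughly 2x in Bergamo's favour; a minimal sketch treating those numbers as assumptions:

# Perf-per-watt comparison from the load-power figures quoted above (assumptions, not measurements).
altra_load_w = (350 + 400) / 2        # midpoint of the quoted Altra Max load power
bergamo_load_w = (500 + 600) / 2      # midpoint of the quoted Bergamo load power
bergamo_speedup = 3.0                 # "up to three times faster"

perf_per_watt_ratio = bergamo_speedup * (altra_load_w / bergamo_load_w)
print(f"Bergamo perf/W vs Altra Max: ~{perf_per_watt_ratio:.1f}x")   # ~2.0x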
I can’t wait to spec out a full system I’ll never afford just to see how insane the performance would be😂
i literally would do the same
Cheaper than a fully specced out iMac desktop
@@jackhemsworth7515 definitely not lmao there's NO way
I did this in 2006 a lot. Dreaming of the best gaming PC money could buy.
@@beeseechurger It's cheaper than a whole iMac Pro
Linus - buys and implements full workshop with cnc
Also Linus - only has this one heatsink
We do cloud computing and have been discussing and testing vCPU ratios and core contention. The performance effects of overlapping VM vCPU and VMs with more vCPUs is something we are always tweaking and eventually resolving with better hardware. It’s something that is surprisingly more strange than straightforward, since VMs experience slowdowns from neighboring VMs on the same CPUs/clusters/cores/caches. The perception of performance is related more to cumulative contention at any given instant than it is to peak capability (specifically in reference to user-facing experiences, not infrastructure which can often be measured in terms of load metrics).
In many cases, there is no direct software solution, which leads to scale up or scale out.
For us, the only way to see how the hardware performs is to benchmark and create these scenarios to see how we can load balance physical and virtual systems. My hope is that the 4c provides better small-to-mid-scale VM performance and responsiveness in a way that is affordable for our needs.
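A minimal sketch of the oversubscription bookkeeping described above, for a single 128-core/256-thread host; the VM mix and the 4:1 warning threshold are invented for illustration, not recommendations:

# Toy vCPU:pCPU oversubscription check for one Bergamo host (illustrative values only).
PHYSICAL_THREADS = 256                          # 128 cores x SMT2

# Hypothetical VM fleet: (name, vCPUs per VM, count)
fleet = [("web", 8, 20), ("db", 16, 4), ("batch", 32, 2)]

total_vcpus = sum(vcpus * count for _, vcpus, count in fleet)
ratio = total_vcpus / PHYSICAL_THREADS
print(f"{sum(c for *_, c in fleet)} VMs, {total_vcpus} vCPUs -> {ratio:.2f}:1 oversubscription")

if ratio > 4.0:   # arbitrary example threshold; the tolerable ratio depends entirely on the workload mix
    print("heavily oversubscribed: expect neighbour-induced latency under load")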
Generally speaking, those "cloud native workloads" are based around virtualization and/or containers. For example, you generally give a container soft/hard CPU and memory limits, then run lots of them per system. The more CPU and memory you have, the more you can allocate. Linus made a comment about them not needing to share memory between cores, and that is generally correct in this type of workload.
Openshift benchmark when (not sure how that would work and what the meaning of such a test would be) 3000 apache pods on a single cpu let's gooo
Serverless L
That's what I was thinking too. That's what this CPU makes sense for anyway. Usually the only workloads that benefit from more cores (even if the cores themselves are weaker) are anything to do with containerized workloads.
So it's like VirtualBox but in a bigger package
@@quanghuyvu2649 VirtualBox is an interesting example for this processor. It could very easily utilize all of those cores with VMs. Outside of a lab, dev, or test setup it will rarely be used, though; on the path to prod you'd use something like OpenStack, Docker, K8s, or something similar.
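As a rough illustration of the "lots of containers per box" point, here's the packing arithmetic for a 128-core / 768 GB host; the per-container limits are hypothetical placeholders for whatever you'd set in a container runtime or Kubernetes resources block:

# How many identically-sized containers fit on one host, naively (no oversubscription).
HOST_CORES = 128
HOST_MEM_GB = 768

container_cpu_limit = 2.0        # hypothetical per-container CPU limit, in cores
container_mem_limit_gb = 8.0     # hypothetical per-container memory limit

fit_by_cpu = int(HOST_CORES // container_cpu_limit)
fit_by_mem = int(HOST_MEM_GB // container_mem_limit_gb)
print(f"CPU-bound: {fit_by_cpu}, memory-bound: {fit_by_mem}, usable: {min(fit_by_cpu, fit_by_mem)}")
# -> CPU-bound: 64, memory-bound: 96, so 64 containers before any oversubscription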
What an epic analysis! I can't wrap my processor around the fact that this beast accommodates 128 cores. Good point about the tech in high-end chips eventually trickling down to our home PCs. From raid controllers to cloud-based workloads, it's fascinating how our everyday tech is influenced by these monstrous CPUs. Thanks for the thorough walkthrough, I was on the edge of my seat - almost fell off when you started to spray the coolant! Maybe next time you can try gaming with a liquid nitrogen cooling setup, just to keep things chill. Looking forward to seeing more from the wild world of CPUs!
An "EPYC" analysis if you will ;D
@@Licher_ Dang it you got there first xD
They could have named this entire lineup EPYC Legion cause of the 6096 socketpins
And if today's programmers weren't paid off by hardware manufacturers to put out even worse code that uses all 128 cores fully while providing the same performance as a 20-year-old processor :)
@@DeviloftheHelll ???
I was genuinely shocked at Linus using a non-LTT screwdriver.
Torque extension on the store when ?!?!?
Came looking for a comment to figure out why?
haha same
@@kazoolians Torque specifications.
@@kazoolians It's a Gear Wrench 1/4" Drive Torque Screwdriver 1-6Nm
...a ~$220 screwdriver! Perfect for not overtightening the CPU screw and costing yourself $12,000.
Comparing a server CPU to an enthusiast CPU is like comparing a semi truck to a sports car. It'll pull way more weight, but it's not built for speed (frames in games)
This! A lot of people really don't realize this kind of information. It's also why Threadripper cpu's are more pants at gaming than a lower core count CPU, but are WAY better for content creation.
Thank You Dear Sir, You just solved my issues in terms I can understand.
It would be awesome to see such a CPU compiling a big project like the Linux kernel or something else to see how fast it is. The clocks may not be that high, but with so many threads it could do it so fast I can't even imagine.
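For anyone wanting to try that themselves, a minimal sketch, assuming a Linux kernel source tree in the current directory and GNU make on the PATH; -j just sets the number of parallel compile jobs:

import os
import subprocess
import time

jobs = os.cpu_count() or 1                           # 256 logical CPUs on a Bergamo box
subprocess.run(["make", "defconfig"], check=True)    # generate a default kernel config

start = time.perf_counter()
subprocess.run(["make", f"-j{jobs}"], check=True)    # build with one job per logical CPU
print(f"kernel built with -j{jobs} in {time.perf_counter() - start:.1f} s")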
I used to work in a factory that made telecom servers. And one day, coming in to work, there was a batch from night shift where every single server blade off the production line had overheated.
Opened up the first: the plastic cover was still on the CPU. The second... well, every one of them.
Someone asked the dude that assembled them, and he claimed that there was no step in the instructions to remove the plastic cap.
The person who made the instructions thought it was so obvious that he didn't write it.
He probably knew but wanted to make an example for why you idiot proof your instructions
This is a prime example of exactly why an instruction manual must contain the most blatantly obvious instructions possible. Because someone, somewhere, is going to be brilliant enough to mess it up.
When you write manuals or instructions, you have to think of all the possible stupid things that another person could do...😅
@@B0B_BELCHER no matter how well you write instructions there will always be an idiot who will beat it.
"Use your brain to command your arm to move towards the CPU, use your brain to command your hand to grab the CPU..."
I always think about how sooner or later they'll just run out of Italian cities and they'll have to use smaller and smaller towns. Like, imagine AMD EPYC BASSANO DEL GRAPPA
Feels good
Unexpected comment of the day ahahah
AMD LAN party at Nardini's 😎
We have 8000 towns and villages, not happening anytime soon, where is my AMD Epyc Maranello? :D
I've been waiting a while for AMD EPYC VILLARICCA
yes mom, I need it for my power point.
1:56 The WinRAR reference is not lost here! Epic!
Linus, the way Maxon's bucket rendering works, the more memory you have, the larger the bucket it uses. Bucket rendering with small cells was designed to prevent out-of-memory errors or slow disk swaps on systems with little memory. The new version automatically optimises bucket size, which you can do manually in, say, Arnold or V-Ray.
As a server technician that works on these daily, I'm happy you mentioned the motherboard CPU pins and what problems just one can cause.
some days, it makes me miss PGA.
some days.
@@chrisbaker8533 Imagine 6000 tiny pins
Maybe pseudo BGA is on the books. Something a bit less delicate than pins
15 years ago, 128 cores were only available in compute clusters with high-speed network interconnects. I remember preparing computational fluid dynamics runs on that type of cluster and waiting a few days in the queue to run a 32-core job...
Even worse than that, it was only 5 or 6 years ago that you couldn't have this many cores in a quad socket system.
Yes. And 30 years ago many PCs still ran on MS-DOS. I guess that's how technology works hey?
@@funbucket09. Did you see that….. it was the point going over your head.
@@robertt9342 People like them can't understand what a KB even truly meant, or how large a system like this would have been 10, let alone 30, years ago. 30 years ago, half a GB of storage cost ~$300; that money now gets you over 20TB of storage. That's roughly 12 million dollars worth of storage in 1993, not accounting for inflation, which makes it 25.5 million worth of storage. 🤯
@@funbucket09 That's actually not how technology works. It doesn't just get better automagically every year. These improvements are the combined result of monumental amounts of research, engineering and manufacturing efforts. Thousands of people are working on the incremental improvements we see every year. It's important to look back and realize that what 15 years ago would have taken hundreds of thousands of dollars and a whole room full of equipment now fits into the palm of a hand. We did this. As a species, we figured out how to do that. It's incredible. It's awe-inspiring. It's not just "how technology works". There's lots of stuff to be cynical and jaded about. Try not to extend it to the things where wonder still exists :)
Some other cloud native workloads, in addition to containers, as another commenter mentioned, are people running bare metal servers and they have their own virtualisation layer on top of it. How this helps is since they own the virtualisation also, they can have multiple different VMs running on it, performing different operations on the same set of data. And since that data resides in the same CPU/Memory space, the latencies are ultra low (~10-100 ns), what is called HPC in the industry.
My org runs simulations on market data, and these are hundreds of files spanning 10s of GBs each. Being able to load all of them into memory and then run parallel simulations, all on 1 single CPU, would be a game changer in terms of performance compared to running 64x2 CPUs in a NUMA config, as currently we are maxed out at the CPU level, limited only by memory latency and memory throughput.
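A minimal sketch of that single-box fan-out; the market_data directory and the simulate() body are stand-ins, and the point from the comment is simply that every worker stays on one socket with no NUMA hop:

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def simulate(path: Path) -> float:
    # Stand-in for a real market-data simulation: here it just counts lines.
    with path.open("rb") as f:
        return float(sum(1 for _ in f))

if __name__ == "__main__":
    files = sorted(Path("market_data").glob("*.csv"))    # hypothetical input directory
    with ProcessPoolExecutor() as pool:                   # defaults to one worker per logical CPU
        results = list(pool.map(simulate, files))
    print(f"ran {len(results)} simulations in parallel")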
@@KushagraJuneja Can you elaborate on what software you use for that simulation, and what OS and runtime?
Glad to see AMD crushing the competition; my 3800X is still killin' it to this day.
I wanna see this CPU run GPU tasks.
Like, i wanna see DirectX ported to the CPU and have it run something like Doom 2016.
God yes, I need to see DX-for-CPU happen lol
If I remember correctly someone ran crisis on the 64 core epyc back then. Would definitely be interesting to see this on the new epyx
Mesa 3D exists, and Doom 2016 runs in OpenGL or Vulkan modes (no D3D if I remember correctly).
So your wish might already be possible today.
LTT make it happen
and if I recall correctly, it got what...10 fps @720p? Which for CPU only is not bad at all @@gabenchrist7331
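For anyone who wants to try software rendering today: Mesa's CPU rasterizer (llvmpipe) can be forced with environment variables. A minimal sketch, assuming a Linux box with Mesa installed; glxgears is just a placeholder for whatever OpenGL game or benchmark you actually want to launch:

import os
import subprocess

env = dict(os.environ)
env["LIBGL_ALWAYS_SOFTWARE"] = "1"     # force Mesa's software OpenGL path
env["GALLIUM_DRIVER"] = "llvmpipe"     # select the multithreaded CPU rasterizer

# Placeholder command; substitute the actual game/benchmark binary here.
subprocess.run(["glxgears"], env=env, check=True)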
It would actually be pretty bad compared to even entry-level GPUs, because GPU cores are so small: they run a very limited number of instructions compared to x86-64, and they perform mathematical calculations with much more precision to make sure things don't look all shimmery/warbley like a Sony PS1. For comparison, any discrete graphics card is going to have (at a minimum) at least 4x the number of cores as this CPU has threads; for the closest comparison you'd have to look at iGPUs, like the Intel UHD 750 that's included with the 11th Gen i5s and up, which has 256 cores. We've reached a point where the most powerful GPUs have a 1:1 core-per-pixel ratio.
So, what you're suggesting might not be as exciting as you think, and might make it look far less impressive than what it actually does, which is perform a ton of complex instructions in parallel and host a ridiculous number of containers and VMs that can all be allocated a reasonable amount of cores and memory; it wouldn't fare too poorly against older workstations that shipped with similar specs years ago, though.
With the setup LTT had, they could set up 16 VMs with 16 cores and 48 GB of RAM each, which is overkill compared to most virtual servers I've ever worked with running on a VMware server farm. If you needed fairly robust virtual servers you could easily provision 32 servers with 8 cores and 24 GB of actual memory. More than likely these are going to be used in a mixture of roles where you've got VMs dedicated to specific virtual systems that will reserve the cores and memory, while others are more dynamically allocated, so that at idle it won't use up very much, but as more demand is placed on the system it will just start allocating more memory and CPU cores and cycles as needed.
If you’ve never seen how VMware server farms show the CPU and Memory usage it’s weird the first time you see it, because it will say goofy shit like 15 GHz CPU used, which is a running total for the usage within a particular timeframe, because that’s how they bill for some of these things, even though it’s not their hardware, they charge licensing fees based on usage, because people are willing to pay that much for their software.
Anyway, as interesting as these CPUs are for the most hyperconverged data centers, it’s still really boring for games because GPUs are just so much more parallelized and have been that way for a much longer time.
The takeaway that’s most interesting is the CCDs being so jam packed, because it means that we’re going to have consumer desktop and workstation CPUs with 32 or even 64 cores on a much smaller package with higher clock speeds within the next couple of refresh cycles, and/or more PCIe lanes for even more I/O and memory capacity. HEDT is just going to be overshadowed by regular desktops, and laptops that will be able to handle more multi-threaded workloads. Also, the power efficiency of those cores were pretty impressive as well.
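The even carve-ups mentioned a few paragraphs up (16x 16-vCPU/48 GB, or 32x 8-vCPU/24 GB) both consume the 256-thread / 768 GB configuration exactly; a quick arithmetic check:

# Sanity-check the example VM layouts against the host configuration from the video.
HOST_THREADS, HOST_MEM_GB = 256, 768

layouts = {
    "16 VMs x 16 vCPU / 48 GB": (16, 16, 48),
    "32 VMs x  8 vCPU / 24 GB": (32, 8, 24),
}
for name, (count, vcpus, mem_gb) in layouts.items():
    total_vcpus, total_mem = count * vcpus, count * mem_gb
    assert (total_vcpus, total_mem) == (HOST_THREADS, HOST_MEM_GB)   # a perfect static carve-up
    print(f"{name}: {total_vcpus} vCPUs, {total_mem} GB")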
I would love to see you attempt some software rendering on one or more of these chips, we are approaching GPU levels of core count here.
Crysis on CPU only
@@PhyrexJ I'd love to see that
@@PhyrexJ that's been a thing for a while.
Yeah, a CPU port of DirectX!
they did a while back with Crysis
With AMD aiming for compacted Zen-C cores and Intel aiming to put something like 300 E-cores in one socket fairly soon, the days of a Kilo-thread box are rapidly approaching.
yep, Sierra Forest's successor is supposed to have 512
It’s crazy. Soon we might be referring to CPUs like: “4 Kilocores and 8Kilothreads” In 20 years.
Depends what you mean by box. Some of the server manufacturers have already done 4 node 2u chassis that support 8x Epyc CPUs across the nodes.
Intel was talking about 144 cores last year and they still haven't done it, meanwhile Bergamo is 6 months old.
Intel are like 3 years behind.
@@tomstech4390 Clearwater Forest is on pace. Intel is behind in server core count overall, yes, but this isn't another Sapphire Rapids situation with a year of delays.
This video was the last video my dad sent to me before he passed away last week. He was a genius with newer PC technology and I relied on him a lot for any computer questions I had. Recently, I had asked him if he knew of any CPUs that I could upgrade to, and he sent me a link to this. I knew he was looking around to find me something powerful for gaming but within budget before he passed. I'm not exactly fluent when it comes to computer parts, and most of the finer details in this video have gone over my head, but someday I want to learn what it all means.
I currently run an AMD FX 8350 but it seems to be somewhat incompatible with my new NVIDIA GeForce 3070- it reaches max CPU usage and gets hot when trying to run games such as Baldur's Gate 3- and now fails to run beyond the title screen. I've tried delegating my GPU to handle my gaming apps with high performance prioritized, but saw no improvement between the games or what task manager clocked my CPU at.
I feel strange asking the youtube comment section, but I was hoping if anybody who knew their computers would be able to tell me if the Bergamo would be worth investing in for an upgrade?
Hi, your father most likely sent this to you just to show where the technology is heading. Instead look at some videos about the ryzen 5600x3d and 5800x3d.
Good luck and keep thinking good thoughts about your father.
No, Bergamo is not the kind of processor you want for a desktop. These are designed to run cloud apps, not super fast but very efficient and many tasks at once. It's not a fast CPU in terms of running single tasks quickly, only that it can run so many at once. And the cost of a cheap car for a single CPU.
Depending on budget you'll be wanting either Ryzen 7000 series or Intel 13/14th gen.
If you need a workstation and want the types of cores this thing packs, then you're after either Intel W-2400 or Threadripper, but don't expect much change from $5k for a board, CPU and memory.
A bit late I think, but I figured I could contribute something, if not to you then to someone reading this. A massive amount of "small" cores is likely not going to give you much joy, certainly not in most games, as @morosis82 says. However you could get something almost as special and very closely related by going for a higher clocked Threadripper.
Your current setup is very very severely hampered by your CPU, those bulldozer chips were not particularly good when new and at this point your 3070 isn't able to do a lot of what it has the potential to, because of your CPU. I had the 9590 or what it was called, the "fastest" of that family of chips and even overclocked to the max it never did games well. Anything Ryzen based will be a huge leap, at the same clockspeed or lower, because it's massively more efficient and optimized, which includes both Epyc like in the video and Threadripper which I suspect would both make you get a large speedboost, as well as stay within what I would guess was the intent he had when he looked for something powerful for gaming.
This one chip has more cores than my entire high school did in 2004.
I'm always astonished how quickly and how far technology has advanced. The first PC I built back in 2003 had 256MB of RAM. This PC has 768GB. Granted, this is not a home PC and it's likely nobody here will ever be running this setup at home, but it's still crazy to me that my first PC, which was only twenty years ago, has 0.3% of the memory that this one has.
You have the decimal point in the wrong place. Your first PC had 0.03% of the memory.
I was talking to a coworker the other day about my first USB thumb drive. A whopping 128MB that probably cost in the neighborhood of $40-$50. A full gig was way out of the question for me at that time. They're sold in packs of 5 for about the same as cigarettes now.
I think some Ivy-bridge-EP processors already support 768GB RAM, those were released back in 2013
Consumer hardware takes 128GB of DDR4 or 192GB of DDR5, but that could go to 256GB if they end up making 64GB udimm sticks (which would already be 1000 times more than your first computer, still on consumer hardware)
@@albertlong3492 yes i remember people buying HP Z 80something workstations with 1.5 TB of ram around 2014, however that is much slower memory than what's available now
You guys should try running a super massive Factorio megabase on it. I wanna see how big one can get before it brings that CPU to its knees.
Factorio is limited by cache and RAM speed and timings, followed by clock speed, for its UPS calculations. Those cores might only be useful for speeding up map generation.
What happened to the blowiematrons? This would have been a perfect scenario for one or 2 of them.
Two super fans on the XE04-SP5 cooler would have been doable. But better would be water cooling, with a ThermoFlex 5000 chiller and the water block taken from the XE360-SP5 😁
I remember how when I was a kid everyone had 128MB or 256MB of ram and when someone had 512MB it was like, whoah what do you need that much for. And once 512MB became normal but 1GB was still kinda extravagant, people would buy a 512MB stick and then wouldn't know what to do with the old 256 stick, and so they'd put both and run 768MB in single channel.
This computer has 768GB.
Similarly, when I was a kid a 1GB disk was considered a big one. Now we have disks that can have several TB each.
It's like, in some 25-ish? years we got to a point when 1GB nowadays feels like 1MB back then, whether storage or memory, and 1TB today feels like 1GB back then. And it's messing with my head now lol
Well, you gotta factor in that eventually advancements will slow down. At some point we will reach hard physical boundaries.
This computer supports way more than 768GB, it's just infeasible to go higher unless you have a reason to spend the money. 256GB single dimms can be had for $3k each.
What's truly crazy is that the 768MB of memory you're talking about is now how much on-die cache these chips have, or a little over 1.1GiB for the X3D Genoa variants.
I'm never going to get over how big the die is on EPYC.
“When your socket is almost as big as your memory slot” #justepycthings
It's the same as Threadripper, in fact Threadripper are just consumer versions of EPYC
Compared to all other CPUs, it is GIGANTIC.@@jfolz
One day we're gonna have Epycs that are gonna be the size of a 2.5" drive :D
@@Qardo The combined size of the dies sure, but individually they aren't that big
Worth noting that we may yet have an interesting tech development in the future. 16 core complexes plus some form of Vcache might not be completely off the table! :D
2030: Budget 128-core Gaming PC using parts I bought from eBay!
I can see this being VERY popular in the HPC industry
Genoa and Bergamo are Italian cities; as an Italian I'm so proud of AMD rn
I suspect most end-users of this monstrous Epyc CPU will be running Linux, so it's sad to see LTT yet again put Windows on their test bench for a server CPU. Phoronix Test Suite is probably something LTT should have looked at, because that has quite a few server-related tests in it.
Especially the gaming benchmarks as always. Yes, those CPUs suck at them, because of the low clock speeds. Just in the last video about a server CPU and the one before it and the one before that.
Would be interesting to go through some real workloads for that, even if they would have to introduce most viewers to some other benchmarks for that.
The plurality will probably be ESXi, followed by Linux and Windows. These really aren't the best option for cloud data centers right now, so for the moment you would be looking at in-house clusters or specialized systems. That means ESXi, followed by Windows, followed by Linux. And before anyone jumps on me, I do prefer KVM over Hyper-V, but Windows also has RDS virtualization, which helps give it implementation numbers.
Thanks for confirming Linux users are the vegans of the PC community. We get it you like Linux.
They said at the end of the video that none of these benchmarks are real tests of what these chips are for, and that the people buying this sort of stuff would either optimise their systems for it or let the customers renting the servers figure it out themselves. I seriously doubt the real end users of this CPU would actually be getting data from LTT to help with their purchase.
The point of running Windows is so that it's a like-for-like comparison. To use Linux, they'd need to go back and test all the others on Linux. And any viewer would need to run Linux to be able to compare it with their own system.
Our render farm back in 2003 had 100 3ghz single core INTELs. This one damn chip has 128.
waiting for a chiller cooling setup with this processor
He could at least grab some delta server fans for it like it is supposed to have. He is so cringe I can't stand him. Seriously I threw up a bit when he went to squirt it with water.
Jesus fuck, the specs on this thing are insane. 128 cores, 256 threads, 128 PCIe 5.0 lanes, 12-channel memory, 1/4 GIGABYTE of L3 cache, and apparently you can slot two of these fuckers into the same motherboard.
The sheer amount of power and data infrastructure needed just to keep one of these systems fed, let alone banks of them, is insane. You can fit a whole damn supercomputer in like one server rack.
12:50 It explains best what a monster of a chip this is.
"But who's gonna spend 12.000$ on a cpu?"
Data centers
Many, many companies will buy hundreds of them, not just one... but I get what you're saying
They missed the sarcasm
People who make more money from having more CPU power.
Intel
can't wait to get my hands on this...
in 20 years time when im ready to build a retro PC
lol
I used to have an Opteron X2 170 with a 1GHz overclock (on stock cooling!) for a gaming rig. Older CPUs of that type had a lot more cache and could hit similar clock speeds to their Athlon 64 counterparts, and that meant a lot more performance back then.
Editing for this video was amazing! Really seems like that new work schedule got everyone more relaxed, which in turn makes far better videos. I'm glad you guys were transparent with your viewers. 😍
13:55 this just made me appreciate modern desktop CPUs even more. The 14900K (a 32T CPU) can hit over 43k points in the same test, which is almost half the score of this 256T CPU (8 times the threads!!!).
And remember, this is a test that can actually use all of those threads. Just insane how far technology has come.
Don't. That cinebench test scene wasn't anywhere near heavy enough for the cpu to shine. The gap is huge.
You should definitely put the giant overkill chiller on the CPU to cool it
Take a look at GPU and cores affinity; in a worst-case scenario, you could potentially reduce GPU performance by a factor of 4. Only a few cores have direct access to the full PCI bandwidth. If the software doesn't manage this properly, it might need to go through another core, causing latency issues that could impact performance significantly. I'm familiar with these kinds of problems as I work with computing clusters.
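A minimal sketch of pinning a process to the cores nearest the GPU, as described above. The assumption that the card hangs off NUMA node 0 and that node's logical CPUs are 0-15 is invented for illustration (the real mapping comes from the PCIe topology, e.g. lstopo or /sys), and sched_setaffinity is Linux-only:

import os

# Hypothetical topology: the GPU sits on NUMA node 0, whose logical CPUs are 0-15.
gpu_local_cpus = set(range(16))

os.sched_setaffinity(0, gpu_local_cpus)                        # pin this process (Linux only)
print("now running on CPUs:", sorted(os.sched_getaffinity(0)))
# Latency-sensitive work feeding the GPU now avoids cross-die/cross-socket hops.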
Not much of a PC guy but, can it run Crysis?
This is iconic for me. I remember watching similar videos on early intels from you and it’s nostalgic.
Do a full build in the Opteron case, PLEASE.
To Linus's point about F1 tech eventually making it into our daily drivers, there's actually a fantastic BBC documentary from 1984/85 that shows the entire process that went into the creation of the Ford Cosworth GBA 1.5L Turbo V6 that went into the 1986 Beatrice/Lola-Haas/Ford (Carl not Gene).
At the end of the first part and beginning of the second it shows the development process of the ECU and they show the electronics engineers from ford motorsports europe working well into the night chasing down problems in the circuitry before the "big box" (the ECU was the size of a toaster oven) went into the car for its first track tests.
At one point they realized that there was a huge issue with detonation that was being caused by electromagnetic interference from the combustion chambers interfering with the ECU's operation, so they had to change the design placement from on top of the engine to inside the tub.
They even showed the IBM 286 desktop that the engineers in europe used to communicate with the head of ford motorsports over in detroit and explained how the multi-layer cryptographic encryption system it used to prevent competitors (like say Bosch) from getting into their system to steal proprietary information worked.
As a 3970X (32c/64t) owner I’ve been waiting a long time for these replacements. I’d be interested to know how reliable they are at hitting their max boost clock, because mine rarely did, even on a single core
Water cool it, bro
@@a564-c3q What about undervolting a Threadripper?
@@a564-c3q Tried that, also tried the IceGiant ProSiphon, in the end none of them allowed it to clock appreciably faster so I've stuck with a NH-U14S. I've tried undervolting too, but it wasn't stable.
Too bad Linus didn't remember the Windows WARP thingy that runs DX11 in software mode; he benched that in the past with a Threadripper on Crysis 1 IIRC... would love to see it on this CPU.
Fun fact! They used a miniature version of the CPU for this entire video, so as to show its size against a normal person.
The real one can be seen in the thumbnail, with Linus standing next to it. Yes, he's short.
8:33 - roooooofl...that was hilarious.
Just some friendly advice from a "thumbnail maker" and someone who loves looking at them. I know this is irrelevant, but the shadow of the CPU is way different from the "drop shadow" on Linus. Anyway, I am a long-time fan and I appreciate how your thumbnails have evolved and how "catchy" they are now.
Magic Effect at 00:21 seconds is just there to hide the fact he already dropped it
I'm going to absolutely spec these for my next vCenter cluster.
Now we just need a new Linus personal server update with this bad boy
I wasn't expecting a big star reference in a video about computer hardware loll
So great that your videos come with a Spanish dub!! You're an excellent YouTuber dedicated to computing technology; what's more, there's hardly any YouTuber in my language who covers topics like this, especially OEM processors.
7:12 -- only 768 GB of RAM. I remember way back around 2003, when building my pc with the (at the time) awesome Abit NF-7 motherboard and getting it up to 768 MB of memory, and that was amazing!
Pathetic. That is only 16 GB more than I have HDD space.
I remember my first PC in 2005 with only 1 GB of RAM and a 16 GB hard disk. I felt so superior at the time.
I checked my AMD CPU and found out that they use SI units on the spec sheet on their website to represent IEC units for L3 (and other) cache. Please start using IEC/binary units where applicable!
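To make the SI-vs-IEC gap concrete, here's a tiny illustration using the roughly 256 MB of L3 mentioned in the video as the example size; the point is only how much the two readings of "256 MB" differ.

```python
# Quick illustration of the SI vs IEC gap the comment is talking about,
# using the ~256 MB L3 figure from the video as the example size.
SI_MB   = 10**6   # 1 MB  (megabyte, SI/decimal)
IEC_MiB = 2**20   # 1 MiB (mebibyte, IEC/binary)

l3_si  = 256 * SI_MB    # what "256 MB" means if read as SI units
l3_iec = 256 * IEC_MiB  # what the hardware has if the figure is really 256 MiB

print(f"256 MB  = {l3_si:,} bytes")
print(f"256 MiB = {l3_iec:,} bytes")
print(f"Difference: {l3_iec - l3_si:,} bytes (~{(l3_iec / l3_si - 1) * 100:.1f}%)")
```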
This is without question my favorite LTT series. Love looking at things I will never afford
I loved my Opteron 190. A dual-core unlocked CPU on AMD's desktop Socket 939 was amazing in its day.
A 190 was super rare! I’ve still got my 185 which was the equivalent of an FX-60. Good times
When x4 Epyc with quad socket motherboard test?
Well... yeah. CS2 is DX11, whereas Doom Eternal is straight-up Vulkan, so of course its multi-threaded performance is out of this world. The problem is most people don't really like dealing with DX12 or Vulkan just for the performance gains in a game they're not giving special attention, which is why most indie games stick to single-threaded or lightly multi-threaded APIs like OpenGL 4 and DX11.
4:25 no love for LTT screwdriver anymore? 😢
If Linus dropped this, he would have to sell the lab to pay it off
This CPU isn't multiple 100k
It's only 12k. Linus could afford it, but he'd cry over it for years.
6000 points in CB2024 is 6x that of my 5900X, but it's also only 60% of my 4060, which can break 10k points at 100 W. I think the biggest thing CB2024 taught us is that CPUs need to leave rendering to GPUs.
Less that and more that GPU architecture is highly optimized for rendering.
…unless you want to render a real scene for a big budget movie, in which case the textures alone might be in the range of 100 GB - good luck fitting that into any GPU memory.
A bit more than the 822 points my i9-10900X gets lol. My GPU, GTX 1080, scores 3993 points
Till you realize that GPUs with a ton of memory cost a whole lot more than this CPU decked out.
24 GB of VRAM costs you 2 grand USD, which translates to roughly 144 GB of VRAM worth of cards at this CPU's price, while the CPU can address well over 1 TB (rough math below).
For smaller scales a GPU might be fine, but for 100+ GB projects CPUs start to be king due to capacity, unless you get a board that can support that many cards.
@@mephistoxd2627 Nvidia H100 says hi
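Back-of-the-envelope version of the comparison above. This is only a sketch of the commenter's own ballpark figures (a ~$2,000 card with 24 GB and a ~$12,000 CPU); none of the prices are verified.

```python
# Rough math behind the VRAM-vs-RAM capacity point above.
# All prices are the commenter's ballpark figures, not verified numbers.
card_price_usd = 2_000      # ~$2k for a 24 GB consumer card
card_vram_gb   = 24
cpu_price_usd  = 12_000     # roughly what this EPYC is quoted at in the video

cards_for_cpu_price = cpu_price_usd // card_price_usd      # 6 cards
total_vram_gb       = cards_for_cpu_price * card_vram_gb   # ~144 GB of VRAM

print(f"{cards_for_cpu_price} cards -> {total_vram_gb} GB of VRAM for ~${cpu_price_usd:,}")
print("versus a single socket that can address well over 1 TB of system RAM")
```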
My primary workstation has the 64 core 3990x which I use primarily with Arnold Render in Cinema 4D and Houdini. Been curious how much faster the 128 core CPU is. Looks like about 30% faster based on your test numbers. Great video!
Cinebench 2024 uses Redshift XPU, which until recently was a GPU-only renderer and is quite a bit different from the old CPU-based physical renderer in previous Cinebench tests.
This is why you should have pallets of spray air on hand.
Turn it upside down and go for gold.
I used to do that back in the early 2000s as the only way to try and keep up with Gibbo and Marshy when doing overclocking comps at work.
Would love to see what something like this would do encoding x264, x265, AV1, etc.
You should have tried the Crysis CPU renderer you showed off with the last high-core-count AMD CPU.
IIRC you ran Crysis on a CPU once. Please do that with this CPU too. I would love to see if the performance improved.
Please do a test with Crysis running CPU-only. Yes, you did the test with a 64-core EPYC years ago. I'm just curious how much the performance has increased. Thank you.
Man, I love server hardware. I'd totally watch a whole other LMG channel about server hardware.
I literally won't ever use or touch most of, if any, of the stuff reviewed in LTT videos - I mainly watch for the sheer excitement and human that is Linus. You, fun sir, are a treasure ❤️ and are cherished as one, thank you for the joy you bring and share in each video and thing you do ❤️
Now this is truly epyc
To think consumers may soon get a 16-core CPU on a single CCD or CCX is impressive, and a worthy upgrade for my 5800X3D.
@@LordApophis100 16 core CCD with 3D Vcache is exactly what I would want as a content creator, and give me another 16-32-48 without it for rendering processes.
Zen4c will come to the next generation of AMD mobile CPUs and APUs, but at lower core counts.
@K.C-2049 I bought a used 3900X for cheap recently and it's so nice. You might be able to upgrade if you're concerned, since they use the same socket.
Let Jordan get his shine on these Linus, he should definitely be building these with you imo.
Linus is making me feel old... I had an Opteron and a 939 mobo; we OC'd the heck out of it, and a friend even broke and held the world record for OC on it on air. 'Twas quite the time to be alive. Technology was literally leaping ahead in terms of PC components.
What would be interesting is to somehow put a waterblock on the VRM and the CPU and let it rip. Wonder if you could get a custom BIOS to allow overclocking?
This is epic; this will defo change web hosting and game servers.
It’s spelled epyc
@@ryanhamstra49 I don't wanna offend Epic Games 🤣
@@thekhanbaby You should absolutely go out of your way to offend Epic Games as often as possible.
And if you do need more cache, Genoa has 3D V-cache CPUs too
22:02 I love the "Colton? Fired" easter egg.
The M1 Ultra comparison is pretty interesting when you consider it’s a 20 core/20 thread chip vs a 128 core/256 thread chip. I know Apple’s cores are pretty great in terms of IPC but I would have expected more than a 4x difference (and this is as an M1 Ultra owner). I wonder if there’s some other bottleneck in the system or in Cinebench r24 that’s limiting the performance
What's the TDP of the M1 Ultra? 50-60 W? Let's use 60 W. It has 20 cores, so that's about 3 W per core. Scaled to 128 cores that would be roughly 384 W, yet this chip sits at 250-300 W, and it also has SMT (2 threads per core), which makes the comparison a bit trickier.
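The same per-core estimate written out. The 60 W and 250-300 W figures are the commenter's rough numbers, not measured values, so this is only an illustration of the scaling argument.

```python
# The per-core power estimate from the comment above, written out.
# The wattages are the commenter's rough figures, not measured values.
m1_ultra_tdp_w = 60            # assumed package power for the 20-core M1 Ultra
m1_ultra_cores = 20
bergamo_cores  = 128
bergamo_tdp_w  = (250, 300)    # TDP range quoted in the comment

w_per_core   = m1_ultra_tdp_w / m1_ultra_cores   # ~3 W per core
scaled_128_w = w_per_core * bergamo_cores        # ~384 W if scaled linearly

print(f"~{w_per_core:.1f} W/core on the M1 Ultra")
print(f"Scaled to {bergamo_cores} cores: ~{scaled_128_w:.0f} W, "
      f"vs Bergamo's {bergamo_tdp_w[0]}-{bergamo_tdp_w[1]} W (with SMT on top)")
```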
M1 is a 5nm chip unlike bergamo.
Have you guys ever tested Ampere's 192 core CPU?
isn't that ARM and not x86 ?
@@LeLe-pm2pr Where's the problem? Those cores perform pretty similar to modern x86 cores and due to the higher power efficiency of ARM there are far fewer issues with boosting.
@@fireeraser2206 a huge amount of applications are built to run on x86, and not ARM.
don't get me wrong, i'm all for ARM being the next big thing in the cpu market, but, as of right now, it cannot be considered a viable alternative for a lot of things
@@LeLe-pm2pr more cores is more cores
This would make an insanely good cloud gaming server or multi-user gaming rig for a lan party, would love to see that!
Until the game's DRM or anticheat realises it doesn't like being virtualized.
Hardware in another ~5 or so years is gonna be wild. With AI in rapid development, we are probably going to start seeing a lot more consumer AI tools that need powerful hardware to run.
Maybe in 10 years or so something like this will even be commonplace.
At 7:45 there is a strange audio issue. I thought LTT checked their own videos now!
The Meta throwback to pre YT is why this YT channel is so meta
Us tech enthusiasts are an odd bunch, we'd think nothing of dropping loads of money on a 128 core CPU just to run benchmarks and be like burrrrrrrr squares go quick
But think about the potential! Then I go open a browser and vscode and the computer basically sleeps the rest of the day while I type into a text editor
I want a video of you guys touring a movie/VFX studio to see what they use. Do they use this stuff for rendering? Could be a fun video instead of boring data centers.
Well, they would also just have a boring old datacenter.
@@bigpod No, you would have a VFX or movie studio talking about how long it took to render your favorite scenes/movies, and about the progression from now versus ten years ago. There's a lot more juicy content there than in a regular data center that just crunches numbers.
@@vampirefox7 Hardware at those studios is still just a datacenter, and honestly big datacenters can talk about much more interesting stuff. Cloud datacenters are really interesting.
I think these chips are great for hosting multiple game servers on one host. Think Minecraft or your Space Engineers-like games.
or multiple whatever on one host
What I'm thinking is office thin clients. With CPU overprovisioning, you could fit several hundred simultaneous users on a single node (rough sizing sketch below).
Minecraft would probably run quite poorly, as even with Paper it is extremely reliant on at least one fast thread.
@@LeLe-pm2pr Right. As much as a lot of cores sounds tempting, this is a job for the frequency-optimized variants.
Like others have said, for Minecraft you'd prefer a high clocked CPU. Minecraft server providers often use servers based on consumer CPUs like AMD Ryzen and Intel Core.
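Here's a rough sketch of the thin-client overprovisioning idea from a few comments up. The overcommit ratio and vCPUs-per-user are illustrative assumptions, not sizing guidance, and the thread count is just this chip's 128 cores × 2 SMT threads.

```python
# Rough sizing sketch for the thin-client / VDI overprovisioning idea above.
# The overcommit ratio and vCPUs-per-user are illustrative assumptions only.
physical_threads = 256   # 128 cores x 2 SMT threads on this chip
overcommit_ratio = 4     # assumed 4:1 vCPU-to-thread overcommit for light office use
vcpus_per_user   = 2     # assumed 2 vCPUs per thin-client session

total_vcpus = physical_threads * overcommit_ratio
users       = total_vcpus // vcpus_per_user

print(f"{total_vcpus} schedulable vCPUs -> roughly {users} concurrent light users per node")
```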
I went a bit wrong route back in 2013 and bought an old dual socket server with 2x Xeon E5420 2.5GHz (think of a slightly lower clocked Core2 Quad Q9450 but two of them) and 32GB RAM. It did fine and it was fun to play around with real server hardware, but in reality it would have been better to go with a modern, at the time, i5-2500K or i5-3570K with 8GB RAM.
A modern i5 system would have cost more but I ended up having to buy two different servers in 2013 because the first one died due to a lightning strike. The money I spent on the two systems combined would have been enough for a custom built i5 system.
I don't know if ARM options will ever catch up in I/O. It is an area where they are constantly behind equivalent x86 chips.
128 "execution units", 400W power draw, eye-watering amounts of RAM support, decent rendering capacity, ~$8000 MSRP... this is just a modern-price GPU that goes in a processor socket :P
I didn't even know 96 GB ram sticks existed. That's insane.
Mainframes have many tb 😊
Saw a Crucial 48 GB DDR5 SO-DIMM stick the other day and I was like "Holy hell!"
@@vujhvjvgvfujk9888 He's talking about per stick... not total system... Did you not watch the video?
96 GB is actually almost consumer-grade density these days; you can get up to 512 GB per stick. Those DIMMs were favoring speed, hence the 5600 MT/s, while a system with 512 GB sticks would run something more like 3400 MT/s on DDR5.
Wow, the commercial/server space for computers is wild. @@pedro4205
Meanwhile my 12th gen i5: 🗿
I think the real reason it "eventually" gets to consumers is that it becomes too expensive to keep the processes separate, and it's cheaper to move consumers onto what they already use.
Stephen Chow movies were legends of my childhood. This takes me back to watching Gintama in my teens. Too bad it's hard to find anything like it these days.
Sixty Million Dollar Man, Fight Back to School, CJ7, KFH, etc. are also great watches.
His work is full of parodies drawn from anything anyone might encounter in life. That's why I like his movies.
Linus: We want to be a serious competitor in product testing and critical analysis
Also Linus: 17:14