@@LiamNajor or ruin the socket as the CPU falls off that heatsink and drops onto the socket... I witnessed enough people ruin desktop and laptop LGA boards by dropping CPU's in them, this just seems scary.
Also Linus: Thaks to Super-Micro for lending us this (insert expsnive sounding name here). Uh-oh I got electrically conductive paste in the socket. Eh shoudl be fine.
it is better to not to touch a socket that have thousands of small pins. there's solid chance linus will bent a pin or two in the process of cleaning it
spilt a lil bit of paste in my socket. Freaked out and tried to get it out. Bent pin. Proceeded to break pin in process of fixing it. Should have left it alone.
Clean it out with what? Qtip is too wide, cloth would drag all the pins and bend a bunch all at once, you can poke at it with a needle but at that point, why? Just send it back and let them figure it out later if it's that much of a problem because that's the best way to not just break the product
I mean it is kinda odd that he suddenly has all these issues with Intel despite the fact that Intel has done an absolutely terrible job since the 7th gen lol Edit: This is entirely sarcasm, sorry I missed the mark on making that clear. Linus has called Intel out literally for every release since 7th gen. I'll leave it as is so people can understand the convo and put a new comment below.
@@explodatedfaces To be fair he has released several very prominent videos bashing Intel in the past few years, like that one where he was ranting in the rain in Taiwan
@@explodatedfaces 12th gen was fine and is still arguably better value on the low end and performing on the high than anything AMD has. But your point still stands, 5 years of felating Intel to suddenly turn around now and (almost unfairly) blasting their product makes me question the writing team more than I already do.
As a private person, yes, of course it is. The problem is that management or procurement departments or even tech departments prefer this shit. You have a fixed budget for something and that's it. Splitting the investment over two or more budget periods makes it WAY easier to buy something you want or need. It's still bullshit that costs the company more in the long run, but that's how most companies work sadly.
To be fair, this practice of modularization of hardware features through on demand or as a service it's becoming the industry trend. Dell did it with their data protection solutions, like IDPA and Data Domain, Dell APEX and PowerStore. IBM does it with Z framework. HPE does it through HPEFS feature as-a-service solution. Equinix does it in general. Unfortunately it's a trend. At least with Dell, they actually sell the hardware for the price of what would cost without the feature, like storage, (in this case, they sell bellow the cost of production), and then they make up for it as the customer licenses the features.
Intel used to do the same in the hardware, essentially the same chip hardwired to disable some functions. If you think about it, once a chip is designed, producing more of them cost much less than making a simpler new design. This is also why Apple put M1 and M2 chips into laptops and tablets. I am sure they could disable features on the iPad to save power. But to design a different chip just for iPad is much more expensive. Because Apple uses its own chip, they don't have to play the marketing naming game of different versions of M1/2 chips. Just use the same chips. Also this is the reason why they put an A13 chip into the Apple monitor, same reason, the chip is cheap enough.
Next episode: How to clean a CPU socket with a pressure washer. But a good team, they learned: They benchmarked everything BEFORE Linus is allowed to do something with this system. :D
@@raylopez99 I just wanna invest in QPUs at this point tbh I may have to invest a little into more companies I loathe for it, but I am not giving Google a cent. IBM was supposed to be working on this, although I question if we reach quantuum thresholds if they're just gonna keep making bigger surface area dies like Intel, or if they're gonna keep trying to stack like AMD until we have CubePUs. Which in fairness, I think would be rather neat putting a borg cube CPU in my computer like some sort of Numenera, complete with nanotube heatpipes running through the die interior just to desperately keep the core cooled.
The issue with those accelerators is that for them to be worth it (to counter the loses form not going AMD), you will need to use them so much that a dedicated accelerator card would be much better than this CPU.
They are putting themselves in a very small box. Not unlike Optane really. The technology was good for specific use cases but it wasn't compelling enough compared with alternatives to be a commercial success. I guess they have to go with what they have.
@@Coockiez-007 an AI accelerator card is exactly what it sounds like: a pcie-based solution to add dedicated AI accelerator hardware to a system. If you're an organization using ai learning on the reg, you're gonna be using several of those and not bothering with Sapphire Rapids, especially since an AI accelerator card will outperform Sapphire Rapids for that use case while consuming less power.
@@Coockiez-007 An accelerator card is a card that specializes in hosting devices called accelerators. These are hyper specialized devices that operate at a hardware level (and are therefore mostly immutable in application). Think of it in terms of if you separated an ALU out into its various functions such as processing only multiplication of 2 integers, you'd get a lot more efficiency out of it due to needing less loads, inputs, gate logic, etc. But it would be useless for anything else. So when a company needs to perform extremely high intensity and highly specialized tasks, such as machine learning, they would load a computer with accelerator cards where the highly specific workloads would get offloaded as much as possible. 'As much as possible' is a key word there, the CPU will still process whatever it can. So there could be a balance struck here for massive servers dedicated to AI, if the AI dedicated CPU is capable of saturating all the accelerator cards while also having large improvements in its own processing then this is a performance increase for these servers. And processing power is one of the main concerns in future AI endeavors. It also shows a movement towards hyper specialized CPUs that we can expect more improvement from over the years. An accelerator card is only superior in scenarios in which there is reason to use it, a CPU processing things directly is always much faster than it offloading it to a card to do so. So there is definitely argument to be made about it being a 'first generation iteration' of what could become a major player in the future as well, unlikely to actually replace accelerator cards anytime soon but could offer a lower-budget option for smaller workloads in the future.
I used to own a cloud computing company - we were not massive but we had racks filled with Intel Xeon CPU. I was often asked why we didn't consider AMD and the reason is one I never hear you mention: you have to cripple CPU features to allow hot transfer of workload between AMD and Intel. So in an environment where we were counting cores in the thousands or higher, introducing new CPU that are not "compatible" with the fleet would be an insane choice. You'd have to prevent live migration of workload between heads or alternatively, modify hypervisors to limit features to guest workloads. Because of this, we were effectively "married" to Intel. Moving some devices to AMD would have required too much development time to modify control systems, limited interoperability or have meant that we would have had to make very large purchases on AMD to essentially split into two, distinct farms, and that would have been quite a bet to have taken.
Couldn't you have created 'trial servers' to test it out with beta testers to streamline the switch? It would work since the Beta testers are willing to risk data loss, and even then they probably have a backup drive somewhere... And then you work with them to finish the optimizations required. That way, at launch, the new servers would be ready and optimized, as well as take a streamlined approach to the switch to the rest of the clients. Heck the Beta testers could accept a steep discount to the first year or so of the new servers. Note- I'm just making a suggestion. If there are issues with this I would totally understand
@@tatsuyahiiragi416 You don't work in the financial side of a data center do you? Do you have any idea on a mass scale how much that would cost in personnel and R&D? Plus finding the right people for the right tasks makes it basically impossible without a HUMANGOUS investment that does not make sense for the very small advantages. Sometimes we tech nerds forget that these are businesses using the tech, that means it needs to turn profit. If it doesn't or the return does not match the input, you do not do it PUNCT
@@Bultizar I have one question - does making a consolidation of servers up to 6:1 is a small advantage? Because yes - one EPYC server can in many cases replace up to 6 old Xeon ones. When you have huge datacenter with millions of cores then being able to shrink the space and manageability so drastically doesn't seem to me as a small advantage... I have talked recently with engineers from MS Azure cloud and this is exactly what they are doing constantly thanks to these EPYC servers - they constantly consolidate their stuff now.
We have another reason to avoid AMD: They do not work as expected. It takes a year before the boards and drivers are really working as intended. It usually comes down on AMD releasing the CPUs earlier than they should, because "they can fix it via driver update". Intel spend way more time in QA and has better support for board partners. You usually can order them and assume they work like described. Also Intel CPUs have more features that became relevant for our customers. We do have a AMD farm, too. Just for offering low cost KVMs for tasks like Webhostig. But even there we noticed that customers ask more often for accelerators for Network and AI. And Intel CPUs brings both with it.
@@andreas7944 that's not really what enterprises using EPYC say. You might think that's the case for desktop motherboards and consumer GPUs. But in server processors? Companies that already switched to AMD say how much easier their tasks is now. The switch is hard. After switching, not anymore. AMD knows servers are paramount, that's why they have all their software support on point there.
To be fair with CPUs that size if you don't torque them properly they can just completely not boot - there's a reason AMD Threadripper and Epyc have their own torque drivers and instructions on what to torque down first.
Many an overclocker has packed a socket with grease and had it be absolutely fine. Large chips absolutely do have connection issues if not torqued properly. My delid 10920x wouldn’t post a mem channel because one size was a fraction too loose.
@@thebludragon8353 Conductivity is not the only issue here!!! Even perfect insulator around pins can change impedances (and electrical length) of the individual signal lines, from/to CPU, inducing skew. Just take a look, how much attention is paid during design of those traces on the MB, where every micron is important.
I love how people are mentioning torque as if they didn't go get the manual, read the manual for the torque specification, then get the torque screwdriver and torque it down to specification, reseat the chip (because they mentioned it wasn't seated correctly at least twice) and still have it not boot.
3:15 I love how Intel not only replaced their XEON naming scheme with metals, but has _already_ completely borked it by having two different 'golds' and now the highest is 'max' lol.
I hope he faces consequences. I hate seeing him play around with expensive stuff a company provided for the video and then break it and be like "upsiii lets just let supermicro figure it out and act like its not our fault". And yes i know it should be an easy fix for someone at supermicro but still not ok how he is treating someone elses property
@@phillips5001 it’s just cost of doing business. You don’t shoot your golden goose, and if Linus messes up a few thousand dollars, it’s no big deal in a multi-billion dollar revenue stream. Linus is just a marketing expense.
Lisa Su might be one of the greatest tech CEOs ever. What she did with AMD, taking it from almost bankrupt to the behemoth it is today is nothing short of a miracle.
You would think people would celebrate Lisa Su as a role model for women and young girls but no. If I had a daughter, I would want her to be like Lisa Su.
Intel On-Demand is also going to really suck for us in like 10 years, when these servers because super cheap on the used market. Are we going to need to start pirating features of the CPU?
I was thinking this, if the DRM needs to phone home all the time you are relying on Intel keeping that switched on. They could purposely make your system redundant by taking down the DRM server and locking those capability's, very wasteful and underhanded.
@@hubertnnn I haven't looked into the actual specifics but I would assume you get some kind of signed license file that can be dynamically loaded through the BIOS or from the OS (like how the platform can verify secure boot images).
The fact these servers can be in service for 10 years is a good incentive to not just close down their drm system if it doesnt take off in 2 generations.
For Benchmarks, can you make it ultra obvious if it is lower is better or higher is better. Change the color scheme of the bars, give it a texture, change the background, ... something like that. Every time you show a graph, i first have to look to the top to see what for (or listen to you) then look at the bottom to read higher or lower (which isn't always in the same position) then look at the data. During that time it is hard to listen to you so i always miss things.
I see where your coming from, at a minimum they should keep that text in the same place in every graph. I also believe they should make the graphs available in the description lol
Damn it's too hard to read a line and then watch the graphs, lol Also, he said they are working on improving the graphs, which is not something you do in 5 minutes of work, and also shows me you probably only come watch videos that allow you to complain, lol
Every time you're looking at a graph, you should actually understand what the graph is testing, otherwise what's the point of even looking at the graph? You might end up looking at a graph comparing model numbers and deduce that 5700XT is better than 4090. So just pause the video if you need to take a closer look.
We've been performance testing these at work recently - definitely better than last gen but we're also testing EPYC for the first time so things may get interesting
Same here in Intel looks like they finally took the lead back in terms of day-to-day use because we have some excellent numbers coming off the new chips.
@Дмитрий Юрьевич maybe you should watch the video, there's a graph showing Intel using almost twice the power of AMD, and loses all but 1 benchmark test.
Your comment was stolen by a 18+ promotion bot account and got 100+ likes. Just thought you might wanna know. Edit: The comment was removed by anti-spam or comment mods L bots
Compared to basically any other LTT video, it really feels weird to not have a clear cut between sponsor spot and content... I really like the more separated version a lot more that this new style. Thanks for the video! :)
As a technical engineer, I just wanna say that installing the CPU in servers is a hell of a lot easier if you just use the protective foam padding that comes in the CPU box to lay the CPU face down on. Then apply the CPU guide (the black plastic that is removable from the heatsink) to the CPU and then simply click the heatsink onto the guide. Boom. All together, no risk of dropping anything and secure
Not to mention that CPU heat sink arrangement has been used since HPE Gen 9 and Dell 13th Gen servers and, presumably, other major manufacturers. Way more convenient than messing about with levers and having to replace the thermal compound after reseating the CPU. See a lot more system board swaps in a week than I do CPU replacements in a year.
The thing about building your hardware for theoretical future software is that by the time use of that software becomes normalized we'll be several hardware cycles down the road. They built a machine today for the tasks of tomorrow, but also expect you to buy a new machine in the next generation or 2, when the architecture they created finally becomes relevant. There's no such thing as "future proofing" with PC hardware.
You have a point. It is likely that intel and or AMD could hit a major breakthrough in the next few years in which one of them will double what their top of the line chip is now along with improve a vast list of things. I also think we are approching the end of gaming graphics cards. We are about half way there right now. That one computer they did not long ago where they rendered that NASA 14gig per second or frame landing model is really where gaming cards will stop becoming a thing. Once you can say have a 48 inch curved computer screen that can display 32k it will almost be not worth it to upgrade to anything beyond that. Once gaming can render 3d models that look like the highest production movies gaming will then become AI based and you will be able to just build a game on site and call for anyone who wants to play. Its likely that gaming will be groups of friends who play games and then add in new players who scan the boards looking for something that fits their needs or makes their own with in a set friend group. I have a feeling that gaming rigs are slowly fading away as more and more people move to console games which are and can be just as in dept as the PC games.
This is exactly what I was thinking! I was figuring that by the time the On-Demand features would be needed, we'll have one to two new generations of CPUs, which can be anywhere from a 15% jump to even as bad as 30%! That's a HUGE computing power downgrade!
Adding extra memory pretty much anywhere extends the usable life of the hardware when those memory requirements inevitably increase, like cache and ram and vram, but really specific hardware solutions looking for a problem just end up looking like a gimmick and even if the use case takes off the next generations of hardware will immediately have so much better support it still looks like a joke to fleece early adoption addicts and koolaid drinkers.
@@josiahsimmons9866 I think cpu chips are moving in the wrong direction. They are trying to get as much as they can in to a single chip when I think they should be building a embedded chip set board. Where you have say tiny 1 inch or less squares of quad or 12 core chips. Then a heat sink housing. Think GPU as a CPU yet double or quad layer where you have dual sided water cooled heat sink. Each embedded cpu can be 4 to 12 cores and think 10 chips per side of a dual layer cpu card would give you at 420 cores and an unknown logical cores. We know CPU chips can be smaller yet because they require some type of cooling embedded cpu boards are better suited. Because who in their right mind is going to open up 1000 servers and upgrade the cpu? Other than someone small. Along with the GPU and CPU board sets you can also do memory the same way. You can either make a controller board and then add in your gpu, cpu, ram, ssd and chipset all cooled via an exchange and using some form of a pcie slot and your chip set also has your IO which can be a pretty large board in its self. Think more along the lines of hot swapped storage or PS. Turn the computer around push in the PS, IO/Chipset, CPU, GPU, Storage. Then on the inside you can either make a auto connect cooling loop and power along with a auto connect IO front panel. The chipset board will contain things like the powerbutton in the event you are using a server. The rgb and keyboard mouse will be a fixed thing for server models.
Unrelated: TPM 2.0 that is required for Windows 11 has security bugs so AMD and Intel are also putting new hardware security logic in future CPUs beside AI. I wonder when Windows 12+1=13+1=14nm+++ will require those instead. Hoping AM5 still supports those future CPUs in that case, from what I hear Windows 11 is a miss :P
@@josiahsimmons9866 Why, because of the security vulnerabilities? They are unfortunate, but caring about them as a user does not make sense, and here's why: On your current Windows 10 installation, you either are using Full Disk Encryption (likely Bitlocker), or you are not. - If you are not, then you have far larger security concerns than the TPM 2.0 vulnerabilities, and therefore shouldn't care about them. - If you are, and you're using the TPM, then you're already subject to the TPM 2.0 vulnerabilities. - If you are, and you have a different solution to Full Disk Encryption (encryption keys stored elsewhere), you can still use it on Windows 11. Just because you have a TPM doesn't mean you have to store the encryption keys on it.
oh wow, such a good surprise to see stuff like postgresql being tested on server centric machines. The lab is making some dope stuff behind the curtains
I hope they start to include something like the OpenAI tols as part of their evaluation suite, but it's hard to find AI workloads and tools that can be run on CPU, as well as both AMD and NVIDIA GPUs.
For the AI benchmark that you performed, there is an important disclosure to make: OpenVino is developed by intel, so it is only logical to be optimized for their hardware (so the AMD results may not be that bad in AI after all, if the customer is using another Artificial Neural Networks inference engine such as ONNX runtime).
intel has for a very long time provided some of the best open-source tools in all fields. In general more often than not using alternatives just means far worse performance. And also no, their code is optimised for general purpose X64 CPUs as it also has to run from their lowest single-thread Atoms to the 288thread lineups.
@@ABaumstumpf You're wrong, there's no such thing as a "general purpose x64 CPU", if you want your code to run fast, AMD and Intel don't have the same feature set, AMD didn't support AVX-512 till Zen 4, virtualization features are implemented differently, and , Intel C++ compiler is both closed source, in some situations faster than open source alternatives like GCC or llvm, and it deliberately generates a binary that *only* enables fast code if it detects an Intel Branded CPU, even if AMD supports that feature. Intels response on the github bugtracker to AMD bugs is: > Thank you for reporting, OpenVINO is officially supported on Intel Processors. Please take a look at the system requirements for additional information. so yes, OpenVINO is crap for a comparison.
@@woobilicious. "You're wrong, there's no such thing as a "general purpose x64 CPU", " NO there absolutely is. that the CPUs are not identical is the reason why you then compile the source for your specific hardware to take full advantage of that, but the code as written does not rely on any of that. "Intel C++ compiler is" Completely irrelevant cause is a source you compile on your own.
@@ABaumstumpf OpenVINO is only supported for intel and (by community) some ARM chips. Code can be written in ways to benefit your architecture, and thus not a competitor. Matrix multiplication for example is probably way more optimized when using AMX, and not care as much to optimize other. Its incredible hubris to think that software made by intel for intel chips will even try to optimize for AMD, their only (main) competitor.
The real test is how does on Intel's CPU AI compare to GPU AI compute. I gather GPU AI still wins by a wide margin for both power efficiency and speed. However, if you really care about performance you go the custom accelerator route.
I mean, after the Ultimate tech upgrades, and Linus going all AMD on his personal box... Sure looks like Intel shafted them, even if that's not what happened really.
Production quality is top notch, but i guess that there is not a better way for linus to move/speed up/slow down or whatever he is doing with the teleprompter instead of using a remote in his pocket and having to put his hand there every now and again. I've noticed it before, but in this video around the 3 minute mark it was very obvious.
He has all ready mentioned multiple times they used remotes for the teleprompters. I believe he even complained that they aren't the best for it but couldn't find better ones. I vaguely remember him saying these remotes were also difficult to get. but I am probably wrong about the last one. Despite Linus's production, his company is mostly focused on TH-cam content, To me, having a TV quality production is very jarring on youtube and it just removes a lot of the personality. This content is great in quality while retaining the scruffiness. He is not afraid to show a bit of behind the scenes, show a bit of jank because ultimately, that doesn't matter. It's part of the content and I think its fair to say most of us loves to see it. He has given multiple behind the scene tours of his studio space, before it was build, while it was being build and after it was complete. Even now with the lab he is doing the same. It doesn't have to be perfect and being perfect might actually be worse. Honestly I wouldn't mind a seeing a video on how there teleprompter stuff works. It'd be great. Could even be a floatplane exclusive and i'd watch it.
when you described AMX, I was hoping they would also be useful for non-ai simulation and animation type work. That involves a lot of matrix math too. But AMX currently only has one matrix math operation, while those non-ai workloads use a diversity of operations. Using current AMX outside of AI would involve lots of shuffling data in and out of the AMX registers, which would severely limit how much those workloads would be improved by the extensions.
I think it's just a matter of building out support, rather than a physical limitation, unless AMX is pretty useless even for ML. The kind of AI/ML matrix operations that need accelerated depending on what kind of algorithm you are using vary incredibly, and have a lot of overlap with animation+simulation.
@@sophiophile I'm not sure what you are saying. I am saying that viewed through the lens of what AMX stands for or through the lens of the applications that the name reminded me of, AMX is horrifically incomplete. I have no knowledge or opinions on the challenges or lack thereof regarding making AMX more fully-featured, and I have no knowledge or opinions on whether the current state of AMX is actually helpful for machine learning.
AMX matrix multiplication is awesome for many ML workloads. Matrix multiplication is also useful for a ton of physical simulations, but I'm not sure how that translates to animations or real time simulations like games. Not sure what Jeffrey Leung is talking about.
@@cobdole9409 The main thing I had in mind when writing that was that matrix-vector multiplication is a major part of sims and animation. A matrix and a vector are conceptually different to me, so I didn't consider that a vector can be represented as a 1-column matrix, allowing AMX to do matrix-vector multiplication just fine. That substantially increases how much I can expect AMX to benefit those use cases.
@@cobdole9409 The OP claimed that it only supports 'one' matrix math operation. I didn't look into the details of AMX, but if it can only perform, for example matrix addition in single steps and not calculate matrix dot/cross products (and by extension tensor products) directly like a GPU, thereby having to perform n^(w+v) operations for a single matrix/tensor product- it is only going to accelerate most ML/AI operations like 5 times compared to a regular CPU in most cases. When the standard for accelerated ML/AI operations is CUDA where you are looking at a *minimum of hundreds of times* faster than typical CPU operation speeds, I just don't see how this is supposed to be seen as a selling point.
Hey guys, from someone working with molecular dynamics, it’s usually run and optimised for GPU. You probably know and use this for testing raw compute performance but it is by no mean a real life application for these chips ;-) keep up the good work with the lab though can’t wait to see what’s coming 😊
Maybe add an arrow next to the graphs indicating whether higher or lower is better. Rather than needing to read and figure out lower or higher is better and then look back at the graph. And with the switching axes, moving between vertical and horizontal graphs, it only ends up more confusing.
That method of cpu installation has been around since first gen xeon scalable. You're also supposed to leave the cpu in the tray and put the heatsync on, then remove the assembly from the tray and install it on the socket. That method keeps you from getting any oils on the contact space since these cpus are very sensitive to pressure and contamination. Granted, I haven't worked on this newest generation but have plenty of experience with most of the previous revisions and have had to troubleshoot multiple installations where people didn't install these cpus correctly
Pretty sure he doesn't have a CPU tray because it came to him as an assembled system. So he had to improvise. That's not an excuse for getting goop in the socket, but it's why he isn't following the proper method.
In all fairness if your compound is non-conductive it's probably fine. Pressure should spread it out and contact should be fine with nothing shorting. But I'd still not recommend doing it =p
Intel has been using the same installation mechanism for several years on their server sockets... I'd recommend watching the STH video regarding that since you don't seem to know the BKMs on that.
@@hypershock4270 IF its not conductive then yeah, should be fine, however that is not the case as some thermal pastes are literarly made out of metal like copper or alluminium... so depends if he used the metal based one or not
Yes most thermal pastes on the market are not conductive, I've seen a guy on youtube who put a whole tube of thermal paste and a cpu on top of a socket and it still worked fine.
I feel like not yet days ago I watched a video featuring Linus all about how awesome Sapphire Rapids workstations were but maybe that was a hallucination
3-4 weeks ago, had to watch it again to clear some doubts. 1) Mainly happy because competition on the field started again. 2) Prices seemed a little strange. 3) Graphics used on Intel presentation never compared their products to AMD. 4) Explains some of the new points of the architecture know at the time. 5)Hopefully expects the product to trade blows on the same level with AMD releases, or introduce a new mechanism on the market. 6) Finishes the video singing possible praises of hope for the reasonable "niche" market that is affordable workstations for individuals. Results came on this video it: - Sadly It was mostly beaten and punched on the gut by AMD in most tests. - Given the results Intel prices are already a somewhat bad financial choice. - New mechanics introduced are accelerators focused on AI. Interesting concept given the recent times but no guarantee of when you can start getting value of it yet. - Some AI accelerators on some products are a "feature" for the Intel on demand deal. - Continues to expect good competition to drive the market.
Yes, because AMD is currently putting 0 effort into their workstation lineup, cancelling non pro threadripper altogether, and Intel is offering a decent solution for people who need a tonne of ram and pcie lanes but not necessarily cores
@@TimSheehan oh yeah that too, far more lanes and more ram are POSSIBLE on Intel side given this new launch, it may save time and headaches and like the video said Intel will try to launch lines a little more specialized in certain areas/applications (can't really say if that will hold its ground on the long run...for workstations) But that power consumption from Intel is also another point that some may find hard to ignore. Truly AMD has been too laid back on this sector for basically 1 to 2.5 years.
Funny how this video comes out right after a sponsored one praising Sapphire Rapids Edit: I'm actually talking about the "Don't ask me the price" video, sponsored by HP/Intel. Linus says that the Xeon W9-3495X is "arguably the best workstation cpu on the market". To be fair, in this video he's talking about server cpus.
What do you think of Intel’s Sapphire Rapids vs. AMD’s offerings? Let us know below! Check out Intel Xeon Platinum 8468 Processors: lmg.gg/nJmkU Check out AMD Epyc 9654 Processors: lmg.gg/NXFm1 Purchases made through some store links may provide some compensation to Linus Media Group.
Is it just me or is it that after 3:16 all the audio is significantly delayed from the footage? The lip movements and sounds dont match up with the audio until like 5 seconds afterwards.
I remember when Intel used to glue CPUs together, it was the Core 2 Quad Q6600. Great CPU, but it was literally two dual core chips on a single package.
It's so much fun watching LTT videos like this, as an IT pro in "training" I have been studying for certifications for about 6 months now, and being able to understand what you are talking about, even though I haven't done any server work, is so cool.
Is anyone else’s viewing experience being weird? It seems like right at the beginning of “Eating Paste” I’m having audio from a different part of the episode playing, I’ve restarted by phone and everything
I hate that the early companies branded this machine learning procedural generation technology with "AI" and now it's just, forever called AI when it's not.
@@upsilondiesbackwards7360 AI is artificial intelligence. Meaning it can think and reproduce a response base on learning/data. ML is nothing more but a process where the software builds a giant database base on input and create a model such that correct response can be reproduced when presented something (e.g. image recognition, show software an image, and it can identify it accurately). these CPUs accelerates the ML part, nothing to do with AI.
As a system builder and someone who works on putting servers together, this isn’t a new way to install the processor, if you Buy OEM then yes you would have to do it the way you did, if you purchased through likes of Hp/Dell etc they’re already fitted to the heat sink.
Idk if it’s just me but this video’s audio is so out of sync with the visual that it’s literally unwatchable for me. It seems like a 3-5 second disparity between the two. Possibly just TH-cam shenanigans, as I haven’t tried a different device yet.
To adress the introduction, TSMC only builds the chips for Intel's GPUs from my understanding. As they catch up to TSMC (which they are doing) they'll likely go back to doing everything themselves. I also would be surprised if their change in profit is a result of their R&D.
In our case, we have VM infrastructure based on Intel Xeon and you can't mix\match different CPU vendors if you want to expand existing clusters. That said, our new clusters and projects are all entirely AMD Epyc.
My company can't use AMD because of Oracle and their licensing. AMD Epyc with 4 chiplets requires a license for 4 CPUs, and that is hundreds of thousands of Euros. Which is why I'm concerned about Intel going the chiplet route as well.
IBM has been doing the On Demand thing in their mainframes since 2004, I think, and later in Power servers as well. It’s a good way to turn capacity on and off without needing to buy new hw (which may take a while to get to you and you may only need for peak usage). The key *is* in the pricing, of course.
6:50 It wouldn't quite be "on a single board", since the 8-socket systems are blade configurations with multiple CPU daughterboards connected to a backplane.
The irony of it is that Intel was actually first with having glued together CPU with the original Core2 Quadro actually being 2 core2duo glued together.. and that was not even that high speed.
But Linus... You also installed the CPU's into the heatsink on the two previous generation of Intel's scalable processors. You probably threw the bracket for it away, I saw you did in an earlier video! However, this new generation has this small tap to get the CPU off the heatsink, very handy. I work with them everyday :)
So I gotta say that the "heat sink adapter" model has been used for several years in data centers with no issues except for human error issues. This also come from experience using these in 2 of the top cloud computing companies. Also the socket with the lever that Linus is freaking out about is a godsend. Prior to this Intel suggested using a screwdriver or a similar flat blade tool to remove the processors from the adapter/heat sink. Also gotta say this is the first Linus tech tips that had me talking back to the video and had my wife asking me if I'm arguing with myself lol. Still great content, but please reach out to individuals that complete break fix with these components. I offer myself as tribute :😜
Yeah before this, the best was an ifixit plastic spudger. 1st and 2nd gen xeon scalable had the carrier clip without the arm. 3rd and 4th now has that handy arm.
Is there any reason this cpu couldn't have a lever-arm socket similar to Intel or the AM4? Because installation the way LTT showed seems like a way to have more people break their servers so they buy more lol
@@aarcxnum Linus just is doing it wrong. It's really easy and less risky than older socket designs. The samples he gets may not have come with instructions but Linus often refuses to read instructions anyway
I think a lot of the reason that people still choose Intel server CPUs is that vMotion and live migration need the vm to be powered off to switch between Intel/AMD processors so it's easier to add more Intel servers than it is to replace all of the servers in an environment.
I use Proxmox with a mix of AMD Epyc and Intel Xeon servers in the same cluster. I did a lot of test live migrations of VMs while running various applications and with a common subset of CPU flags configured for the test VMs and managed to find the right flags to allow live migrations in both directions without it crashing. It does mean you have to sacrifice some CPU flags though, but I did benchmark some applications (running the Stockfish chess benchmark is one example) and barely lost any performance, so I decided to implement it and it's worked well so far.
It'll take Intel years to get their server segment back on track. For now, AMD will continue to accrue marketshare in the data center market, just as they have in the consumer space. They rested on their laurels for far too long, and until Pat took over, at Intel, they remained stagnant even though AMD was making steady gains. Once their new fabs come on line in Ohio and elsewhere, it'll be interesting, but that's long way off.
Did the audio desync a few minutes in for anyone else? Edit: It was desynced on mobile, watching on desktop now and it's fine. Re-checked on mobile and it's also fine.
Watching this video thinking "Man, Linus seems especially hyper and upbeat today..." I got 1/3 of the way into the video before I realized I had the player on 1.25 lol Really interesting video!
5:20 actually Linus, you're supposed to leave the CPU in the plastic tray with the plastic adapter on the CPU... Then you grab the heatsink and set it on top of that and it clips/clicks on the plastic adapter... Then you lift that heatsink and cpu with sandwiched plastic frame adapter off if the tray... Then bolt it to the mobo. Easy peasy.
I have some doubts here as engineer for the benchmarks, as you said the server is meant for virtualization, so what's the NUMA layout etc etc. Don't get me wrong I'm really pro AMD, but a good benchmark would be to spin up on every machine as example 50 mysql/postgresql databases 50 nginx servers And then benchmark all the vm instances at once (doesn't have to full heavy load, but equal between both servers).
That would mean that he needs to know what he is talking about. This channel is ok for commercial tech, but any time they go into the professional sector, it makes me absolutly cringe how bad the information they give out is.
HOW ARE YOU UPLOADING VIDEOS SO FRICKING FAST MY DUDES? I have not finished whatching some and see a new update, kudos to the team really, holy shit is this a lot of work, well done guys
Around -3:17 the audio loses sync with video. Not sure if it’s a TH-cam player issue, or something baked into the video. It happens for me consistently in the TH-cam app for iOS.
It's the lack of PCI-E lanes on desktop CPUs that's really driving me nuts. After a discrete GPU and NVMes, I couldn't even POST after adding a 10GbE nic because I was out of available lanes.
@@NightshiftCustom Hey, I paid for the premium hardware, let me use it lol. For most people the performance difference is negligible but I typically game in 4k ultra settings and have RAID 0 980pros so even 4 more lanes could be a meaningful performance increase.
Not sure if this is fixable by you guys or if this is just a TH-cam thing, but for some reason at about 2 minutes into the video, the audio started lagging behind video playback by about 2 seconds or so
as a lifelong intel supporter (they had a fab in my hometown, even if it was one of their most outdated ones, I felt at least some pride in supporting them for that reason for a long time), I'm feeling pretty good about the first AMD build I've ever done for myself 6 months ago.
As someone who will be purchasing a good few servers later this year I am happy to see the competition. Finally it is not just dump your money in the Intel pit. I look forward to the discussions this will cause with the upper levels who only know Intel 😉
I wish it was that simple. we tried Epyc several times - it was abysmal. From fringe problems to sporadic performance problems to the utter lack of any realtime or longterm support. I still want them for our workstations but as the software will ultimately not run on AMD it also will not be developed on AMD just so we can test on more similar hardware. It is a shame just how bad the support is.
Someone from Super Micro was sweating really hard while Linus was holding their server at an angle.
theyre prolly crying rn after the thermal paste fell in the socket
They know ... Oh ... we send something to Linus, write that off, it's already broken ! 🤪
Holding the server and applying thermal paste.
Haha was thinking the same 😂 I would have love to have been at the meeting where they decided to send this system 😂
Linus needs to do a video with Phil Swift
Linus - "It's a bit of a finicky socket to install in." Also Linus - **Drops thermal paste into the socket**
That seemed to me an obvious design flaw. Human error will cause many people to intuitively drop paste into their socket due to this design.
@@LiamNajor or ruin the socket as the CPU falls off that heatsink and drops onto the socket... I witnessed enough people ruin desktop and laptop LGA boards by dropping CPU's in them, this just seems scary.
@@LiamNajorfr
@@volvo09 I build a few servers with these heatsinks and it is actualy way better than the normal way of droping a cpu into a socket.
Also Linus: Thaks to Super-Micro for lending us this (insert expsnive sounding name here). Uh-oh I got electrically conductive paste in the socket. Eh shoudl be fine.
I'm absolutely astounded that he dropped thermal paste on the socket and didn't bother to even try and clean it out with alcohol...
He probably drank it all, but it didn't make Intels crap look any better
it is better to not to touch a socket that have thousands of small pins. there's solid chance linus will bent a pin or two in the process of cleaning it
spilt a lil bit of paste in my socket. Freaked out and tried to get it out. Bent pin. Proceeded to break pin in process of fixing it. Should have left it alone.
better not touch it
Clean it out with what? Qtip is too wide, cloth would drag all the pins and bend a bunch all at once, you can poke at it with a needle but at that point, why? Just send it back and let them figure it out later if it's that much of a problem because that's the best way to not just break the product
intel: doesn’t renew their Extreme Tech Upgrade contract
Linus: Intel’s New CPUs are Cringe
I mean it is kinda odd that he suddenly has all these issues with Intel despite the fact that Intel has done an absolutely terrible job since the 7th gen lol
Edit: This is entirely sarcasm, sorry I missed the mark on making that clear. Linus has called Intel out literally for every release since 7th gen. I'll leave it as is so people can understand the convo and put a new comment below.
Now the shoes on the other hand. They try to build their house of cards, but they fall like dominos. Checkmate.
First thought. Not lying. First thought...
@@explodatedfaces To be fair he has released several very prominent videos bashing Intel in the past few years, like that one where he was ranting in the rain in Taiwan
@@explodatedfaces 12th gen was fine and is still arguably better value on the low end and performing on the high than anything AMD has.
But your point still stands, 5 years of felating Intel to suddenly turn around now and (almost unfairly) blasting their product makes me question the writing team more than I already do.
The "On Demand" feature feels like a giant middle finger to Intel customers
Because thats exactly what it is lol.
It's a clusterf*ck of a CPU architecture. Alder Lake is like NASA tech compared to this.
As a private person, yes, of course it is. The problem is that management or procurement departments or even tech departments prefer this shit. You have a fixed budget for something and that's it. Splitting the investment over two or more budget periods makes it WAY easier to buy something you want or need. It's still bullshit that costs the company more in the long run, but that's how most companies work sadly.
To be fair, this practice of modularization of hardware features through on demand or as a service it's becoming the industry trend. Dell did it with their data protection solutions, like IDPA and Data Domain, Dell APEX and PowerStore. IBM does it with Z framework. HPE does it through HPEFS feature as-a-service solution. Equinix does it in general. Unfortunately it's a trend. At least with Dell, they actually sell the hardware for the price of what would cost without the feature, like storage, (in this case, they sell bellow the cost of production), and then they make up for it as the customer licenses the features.
Intel used to do the same in the hardware, essentially the same chip hardwired to disable some functions. If you think about it, once a chip is designed, producing more of them cost much less than making a simpler new design. This is also why Apple put M1 and M2 chips into laptops and tablets. I am sure they could disable features on the iPad to save power. But to design a different chip just for iPad is much more expensive. Because Apple uses its own chip, they don't have to play the marketing naming game of different versions of M1/2 chips. Just use the same chips. Also this is the reason why they put an A13 chip into the Apple monitor, same reason, the chip is cheap enough.
Next episode: How to clean a CPU socket with a pressure washer.
But a good team, they learned: They benchmarked everything BEFORE Linus is allowed to do something with this system. :D
As an ex server engineer those power usage costs are a real killer. I expect my former colleagues are buying all AMD now.
Wait until room-temperature superconductivity goes mainstream...
True
Even using Alder Lake for servers lol.
@@raylopez99 I just wanna invest in QPUs at this point tbh
I may have to invest a little into more companies I loathe for it, but I am not giving Google a cent. IBM was supposed to be working on this, although I question if we reach quantuum thresholds if they're just gonna keep making bigger surface area dies like Intel, or if they're gonna keep trying to stack like AMD until we have CubePUs. Which in fairness, I think would be rather neat putting a borg cube CPU in my computer like some sort of Numenera, complete with nanotube heatpipes running through the die interior just to desperately keep the core cooled.
@@raylopez99 won't reduce the power consumption of your transistors though
The issue with those accelerators is that for them to be worth it (to counter the loses form not going AMD), you will need to use them so much that a dedicated accelerator card would be much better than this CPU.
They are putting themselves in a very small box. Not unlike Optane really. The technology was good for specific use cases but it wasn't compelling enough compared with alternatives to be a commercial success. I guess they have to go with what they have.
@@shirro5 Didn't they reuse the tech to make much more practical ssd products then just discontinue them anyways
whats a accelerator card?
@@Coockiez-007 an AI accelerator card is exactly what it sounds like: a pcie-based solution to add dedicated AI accelerator hardware to a system. If you're an organization using ai learning on the reg, you're gonna be using several of those and not bothering with Sapphire Rapids, especially since an AI accelerator card will outperform Sapphire Rapids for that use case while consuming less power.
@@Coockiez-007 An accelerator card is a card that specializes in hosting devices called accelerators. These are hyper specialized devices that operate at a hardware level (and are therefore mostly immutable in application). Think of it in terms of if you separated an ALU out into its various functions such as processing only multiplication of 2 integers, you'd get a lot more efficiency out of it due to needing less loads, inputs, gate logic, etc. But it would be useless for anything else. So when a company needs to perform extremely high intensity and highly specialized tasks, such as machine learning, they would load a computer with accelerator cards where the highly specific workloads would get offloaded as much as possible.
'As much as possible' is a key word there, the CPU will still process whatever it can. So there could be a balance struck here for massive servers dedicated to AI, if the AI dedicated CPU is capable of saturating all the accelerator cards while also having large improvements in its own processing then this is a performance increase for these servers. And processing power is one of the main concerns in future AI endeavors.
It also shows a movement towards hyper specialized CPUs that we can expect more improvement from over the years. An accelerator card is only superior in scenarios in which there is reason to use it, a CPU processing things directly is always much faster than it offloading it to a card to do so. So there is definitely argument to be made about it being a 'first generation iteration' of what could become a major player in the future as well, unlikely to actually replace accelerator cards anytime soon but could offer a lower-budget option for smaller workloads in the future.
I used to own a cloud computing company - we were not massive but we had racks filled with Intel Xeon CPU. I was often asked why we didn't consider AMD and the reason is one I never hear you mention: you have to cripple CPU features to allow hot transfer of workload between AMD and Intel. So in an environment where we were counting cores in the thousands or higher, introducing new CPU that are not "compatible" with the fleet would be an insane choice. You'd have to prevent live migration of workload between heads or alternatively, modify hypervisors to limit features to guest workloads. Because of this, we were effectively "married" to Intel. Moving some devices to AMD would have required too much development time to modify control systems, limited interoperability or have meant that we would have had to make very large purchases on AMD to essentially split into two, distinct farms, and that would have been quite a bet to have taken.
Couldn't you have created 'trial servers' to test it out with beta testers to streamline the switch? It would work since the Beta testers are willing to risk data loss, and even then they probably have a backup drive somewhere... And then you work with them to finish the optimizations required. That way, at launch, the new servers would be ready and optimized, as well as take a streamlined approach to the switch to the rest of the clients. Heck the Beta testers could accept a steep discount to the first year or so of the new servers.
Note- I'm just making a suggestion. If there are issues with this I would totally understand
@@tatsuyahiiragi416 You don't work in the financial side of a data center do you? Do you have any idea on a mass scale how much that would cost in personnel and R&D? Plus finding the right people for the right tasks makes it basically impossible without a HUMANGOUS investment that does not make sense for the very small advantages.
Sometimes we tech nerds forget that these are businesses using the tech, that means it needs to turn profit. If it doesn't or the return does not match the input, you do not do it PUNCT
@@Bultizar I have one question - does making a consolidation of servers up to 6:1 is a small advantage? Because yes - one EPYC server can in many cases replace up to 6 old Xeon ones. When you have huge datacenter with millions of cores then being able to shrink the space and manageability so drastically doesn't seem to me as a small advantage... I have talked recently with engineers from MS Azure cloud and this is exactly what they are doing constantly thanks to these EPYC servers - they constantly consolidate their stuff now.
We have another reason to avoid AMD: They do not work as expected. It takes a year before the boards and drivers are really working as intended. It usually comes down on AMD releasing the CPUs earlier than they should, because "they can fix it via driver update". Intel spend way more time in QA and has better support for board partners. You usually can order them and assume they work like described. Also Intel CPUs have more features that became relevant for our customers. We do have a AMD farm, too. Just for offering low cost KVMs for tasks like Webhostig. But even there we noticed that customers ask more often for accelerators for Network and AI. And Intel CPUs brings both with it.
@@andreas7944 that's not really what enterprises using EPYC say. You might think that's the case for desktop motherboards and consumer GPUs. But in server processors? Companies that already switched to AMD say how much easier their tasks is now. The switch is hard. After switching, not anymore. AMD knows servers are paramount, that's why they have all their software support on point there.
Step 1: Put thermal paste in the socket.
Step Too: Add more torque because that's the problem, clearly.
To be fair with CPUs that size if you don't torque them properly they can just completely not boot - there's a reason AMD Threadripper and Epyc have their own torque drivers and instructions on what to torque down first.
Many an overclocker has packed a socket with grease and had it be absolutely fine. Large chips absolutely do have connection issues if not torqued properly. My delid 10920x wouldn’t post a mem channel because one size was a fraction too loose.
And that was today's edition of Tech Tip of the Day!
@@thebludragon8353 Conductivity is not the only issue here!!! Even perfect insulator around pins can change impedances (and electrical length) of the individual signal lines, from/to CPU, inducing skew. Just take a look, how much attention is paid during design of those traces on the MB, where every micron is important.
I love how people are mentioning torque as if they didn't go get the manual, read the manual for the torque specification, then get the torque screwdriver and torque it down to specification, reseat the chip (because they mentioned it wasn't seated correctly at least twice) and still have it not boot.
Linus’ clumsiness knows no bounds, he always finds a way to give me a tech panic attack.
If your product can withstand Linus's hands expect money coming in
Linus Panic Tips
At least we know how durable a product is after Linus handle them
A panic a-tech
I'm just glad he didn't try to clean it himself.
3:15 I love how Intel not only replaced their XEON naming scheme with metals, but has _already_ completely borked it by having two different 'golds' and now the highest is 'max' lol.
I hear gold pressed latinum is next.
Linus makes me feel better about how I handle my machines when I'm building them.
Me too, especially at work when clients ask us to assemble desktop computers.
I really think that's part of his program, "if MY shop of monkey's can build these,, YOU CAN TOO".
as soon as I saw the uncovered cpu socket I had that bad gut feeling something might happen to it. Luckily he didnt drop the cpu or the cooler into it
I was screaming internally, why are you screwing around with it *over the server*
I hope he faces consequences. I hate seeing him play around with expensive stuff a company provided for the video and then break it and be like "upsiii lets just let supermicro figure it out and act like its not our fault". And yes i know it should be an easy fix for someone at supermicro but still not ok how he is treating someone elses property
@@phillips5001 it’s just cost of doing business. You don’t shoot your golden goose, and if Linus messes up a few thousand dollars, it’s no big deal in a multi-billion dollar revenue stream. Linus is just a marketing expense.
Lisa Su might be one of the greatest tech CEOs ever.
What she did with AMD, taking it from almost bankrupt to the behemoth it is today is nothing short of a miracle.
You would think people would celebrate Lisa Su as a role model for women and young girls but no. If I had a daughter, I would want her to be like Lisa Su.
Good leadership but let's not forget the designers.
Also she made me rich as I bought a bunch of AMD shares back in the 2014 and still holding them.
@@victorcoda I wish I did that.
Lisa Su is my Oshi!!!!!!!!
Intel On-Demand is also going to really suck for us in like 10 years, when these servers because super cheap on the used market. Are we going to need to start pirating features of the CPU?
I wonder what DRM they use.
I was thinking this, if the DRM needs to phone home all the time you are relying on Intel keeping that switched on. They could purposely make your system redundant by taking down the DRM server and locking those capability's, very wasteful and underhanded.
@@jonathanellis6097 Unlikely.
There are too many use cases where specific servers have no internet connection.
@@hubertnnn I haven't looked into the actual specifics but I would assume you get some kind of signed license file that can be dynamically loaded through the BIOS or from the OS (like how the platform can verify secure boot images).
The fact these servers can be in service for 10 years is a good incentive to not just close down their drm system if it doesnt take off in 2 generations.
For Benchmarks, can you make it ultra obvious if it is lower is better or higher is better. Change the color scheme of the bars, give it a texture, change the background, ... something like that. Every time you show a graph, i first have to look to the top to see what for (or listen to you) then look at the bottom to read higher or lower (which isn't always in the same position) then look at the data. During that time it is hard to listen to you so i always miss things.
I see where your coming from, at a minimum they should keep that text in the same place in every graph.
I also believe they should make the graphs available in the description lol
Like a big arrow pointing left or right
Damn it's too hard to read a line and then watch the graphs, lol
Also, he said they are working on improving the graphs, which is not something you do in 5 minutes of work, and also shows me you probably only come watch videos that allow you to complain, lol
@@thunderarch5951 wtf are you on lmao
Every time you're looking at a graph, you should actually understand what the graph is testing, otherwise what's the point of even looking at the graph? You might end up looking at a graph comparing model numbers and deduce that 5700XT is better than 4090. So just pause the video if you need to take a closer look.
We've been performance testing these at work recently - definitely better than last gen but we're also testing EPYC for the first time so things may get interesting
Same here in Intel looks like they finally took the lead back in terms of day-to-day use because we have some excellent numbers coming off the new chips.
@@thecooleststufficouldfind But the numbers that are in the price and in power consumption => inefficient af...
@@cosminmilitaru9920 If you are a fool.
@Дмитрий Юрьевич maybe you should watch the video, there's a graph showing Intel using almost twice the power of AMD, and loses all but 1 benchmark test.
Cringe is one of the oddest ways to describe a CPU.
Your comment was stolen by a 18+ promotion bot account and got 100+ likes.
Just thought you might wanna know.
Edit: The comment was removed by anti-spam or comment mods L bots
@@ryanspencer6778 it was deleted
@@ryanspencer6778 Comment got removed 🎉
@@ryanspencer6778 why would he care?
@@ryanspencer6778 Well...f**k...
Compared to basically any other LTT video, it really feels weird to not have a clear cut between sponsor spot and content... I really like the more separated version a lot more that this new style. Thanks for the video! :)
Agreed. I really didn’t like the lack of clear cut transitions
That's on purpose so you cannot just skip it
lol its on purpose to trick you into watching it. linus loves $$$$
@Shane not exactly a trick when he talks about it openly. The guys got a lab to pay off and tons of new employees to pay. They need to eat too
I agree, but I think it wasn't too bad here, because the sponsor wasn't Intel or AMD for example. I agree that it's a fine line though
Is the audio out of sync for anyone else after about 3:24
As a technical engineer, I just wanna say that installing the CPU in servers is a hell of a lot easier if you just use the protective foam padding that comes in the CPU box to lay the CPU face down on. Then apply the CPU guide (the black plastic that is removable from the heatsink) to the CPU and then simply click the heatsink onto the guide. Boom. All together, no risk of dropping anything and secure
no no you gotta squirt the paste directly into the socket /s
You underestimate Linus's ability to drop anything
No risk of dropping anything? Doesn't sound very Linus-y...
Not to mention that CPU heat sink arrangement has been used since HPE Gen 9 and Dell 13th Gen servers and, presumably, other major manufacturers. Way more convenient than messing about with levers and having to replace the thermal compound after reseating the CPU. See a lot more system board swaps in a week than I do CPU replacements in a year.
It's funny, precision engineering tends to work a lot better when used properly
When Linus was explaining how the CPU is attached to the heat sink, I pictured him dropping it into the server and destroying the pins in the socket 😂
I think we all did lol
I'm sure this is a 3rd or 4th take when making this video.
This video would have gotten 100M views overnight.
You can go to watch the IBM Z16 Mainframe video! He is scared if the operation because the machine cost over 1M! 😂😂😂
somehow video and audio are not synchronized.
The thing about building your hardware for theoretical future software is that by the time use of that software becomes normalized we'll be several hardware cycles down the road. They built a machine today for the tasks of tomorrow, but also expect you to buy a new machine in the next generation or 2, when the architecture they created finally becomes relevant. There's no such thing as "future proofing" with PC hardware.
You have a point. It is likely that intel and or AMD could hit a major breakthrough in the next few years in which one of them will double what their top of the line chip is now along with improve a vast list of things. I also think we are approching the end of gaming graphics cards. We are about half way there right now. That one computer they did not long ago where they rendered that NASA 14gig per second or frame landing model is really where gaming cards will stop becoming a thing. Once you can say have a 48 inch curved computer screen that can display 32k it will almost be not worth it to upgrade to anything beyond that. Once gaming can render 3d models that look like the highest production movies gaming will then become AI based and you will be able to just build a game on site and call for anyone who wants to play. Its likely that gaming will be groups of friends who play games and then add in new players who scan the boards looking for something that fits their needs or makes their own with in a set friend group.
I have a feeling that gaming rigs are slowly fading away as more and more people move to console games which are and can be just as in dept as the PC games.
This is exactly what I was thinking! I was figuring that by the time the On-Demand features would be needed, we'll have one to two new generations of CPUs, which can be anywhere from a 15% jump to even as bad as 30%! That's a HUGE computing power downgrade!
Adding extra memory pretty much anywhere extends the usable life of the hardware when those memory requirements inevitably increase, like cache and ram and vram, but really specific hardware solutions looking for a problem just end up looking like a gimmick and even if the use case takes off the next generations of hardware will immediately have so much better support it still looks like a joke to fleece early adoption addicts and koolaid drinkers.
@@josiahsimmons9866 I think cpu chips are moving in the wrong direction. They are trying to get as much as they can in to a single chip when I think they should be building a embedded chip set board. Where you have say tiny 1 inch or less squares of quad or 12 core chips. Then a heat sink housing. Think GPU as a CPU yet double or quad layer where you have dual sided water cooled heat sink. Each embedded cpu can be 4 to 12 cores and think 10 chips per side of a dual layer cpu card would give you at 420 cores and an unknown logical cores. We know CPU chips can be smaller yet because they require some type of cooling embedded cpu boards are better suited. Because who in their right mind is going to open up 1000 servers and upgrade the cpu? Other than someone small. Along with the GPU and CPU board sets you can also do memory the same way. You can either make a controller board and then add in your gpu, cpu, ram, ssd and chipset all cooled via an exchange and using some form of a pcie slot and your chip set also has your IO which can be a pretty large board in its self. Think more along the lines of hot swapped storage or PS. Turn the computer around push in the PS, IO/Chipset, CPU, GPU, Storage. Then on the inside you can either make a auto connect cooling loop and power along with a auto connect IO front panel. The chipset board will contain things like the powerbutton in the event you are using a server. The rgb and keyboard mouse will be a fixed thing for server models.
@@kameljoe21 seems like you need to become a computer architect
But wouldn‘t the lower power consumption and lower Price of EPYC make it viable to install a pysical AI accelerator?
Or just wait a generation for amd chips with accelerators on chiplets. They're putting ai accelerators in their next gen laptop parts.
Unrelated: TPM 2.0 that is required for Windows 11 has security bugs so AMD and Intel are also putting new hardware security logic in future CPUs beside AI. I wonder when Windows 12+1=13+1=14nm+++ will require those instead. Hoping AM5 still supports those future CPUs in that case, from what I hear Windows 11 is a miss :P
@@Malc180s but if you can get by, then you can wait.
@@mikfhan From what I've also heard it's a mess, that's why I'm still running 10 and won't be upgrading for a WHILE.
@@josiahsimmons9866 Why, because of the security vulnerabilities? They are unfortunate, but caring about them as a user does not make sense, and here's why:
On your current Windows 10 installation, you either are using Full Disk Encryption (likely Bitlocker), or you are not.
- If you are not, then you have far larger security concerns than the TPM 2.0 vulnerabilities, and therefore shouldn't care about them.
- If you are, and you're using the TPM, then you're already subject to the TPM 2.0 vulnerabilities.
- If you are, and you have a different solution to Full Disk Encryption (encryption keys stored elsewhere), you can still use it on Windows 11. Just because you have a TPM doesn't mean you have to store the encryption keys on it.
yo why is all of the top comments completely ignoring the audio sync after the first segment after the ad
oh wow, such a good surprise to see stuff like postgresql being tested on server centric machines.
The lab is making some dope stuff behind the curtains
I never thought I would see LTT do a video on a useful topic.
Yea, usually they just run games and photoshop or someshit lmao
It's almost like there was a reason he invested so much into the lab
I hope they start to include something like the OpenAI tols as part of their evaluation suite, but it's hard to find AI workloads and tools that can be run on CPU, as well as both AMD and NVIDIA GPUs.
yea, as someone who uses PostgreSQL a bit, I was surprised to see such a common yet to the average person completely unfamiliar thing show up
Linus is the only person that can make me watch a 19 minute long pulseway ad ngl
Anyone else have the audio and video off starting at 3:08?
For the AI benchmark that you performed, there is an important disclosure to make: OpenVino is developed by intel, so it is only logical to be optimized for their hardware (so the AMD results may not be that bad in AI after all, if the customer is using another Artificial Neural Networks inference engine such as ONNX runtime).
intel has for a very long time provided some of the best open-source tools in all fields. In general more often than not using alternatives just means far worse performance.
And also no, their code is optimised for general purpose X64 CPUs as it also has to run from their lowest single-thread Atoms to the 288thread lineups.
@@ABaumstumpf You're wrong, there's no such thing as a "general purpose x64 CPU", if you want your code to run fast, AMD and Intel don't have the same feature set, AMD didn't support AVX-512 till Zen 4, virtualization features are implemented differently, and , Intel C++ compiler is both closed source, in some situations faster than open source alternatives like GCC or llvm, and it deliberately generates a binary that *only* enables fast code if it detects an Intel Branded CPU, even if AMD supports that feature.
Intels response on the github bugtracker to AMD bugs is:
> Thank you for reporting, OpenVINO is officially supported on Intel Processors. Please take a look at the system requirements for additional information.
so yes, OpenVINO is crap for a comparison.
@@woobilicious. "You're wrong, there's no such thing as a "general purpose x64 CPU", "
NO there absolutely is. that the CPUs are not identical is the reason why you then compile the source for your specific hardware to take full advantage of that, but the code as written does not rely on any of that.
"Intel C++ compiler is"
Completely irrelevant cause is a source you compile on your own.
@@ABaumstumpf OpenVINO is only supported for intel and (by community) some ARM chips. Code can be written in ways to benefit your architecture, and thus not a competitor. Matrix multiplication for example is probably way more optimized when using AMX, and not care as much to optimize other.
Its incredible hubris to think that software made by intel for intel chips will even try to optimize for AMD, their only (main) competitor.
The real test is how does on Intel's CPU AI compare to GPU AI compute. I gather GPU AI still wins by a wide margin for both power efficiency and speed. However, if you really care about performance you go the custom accelerator route.
This video feels like as if the writers realized intel ain't coming back for a sponsor and went full throttle on the roasts, love it.
I mean, after the Ultimate tech upgrades, and Linus going all AMD on his personal box... Sure looks like Intel shafted them, even if that's not what happened really.
intel fell off, the chances for them to innovate is low whilst AMD is still making stride
Why do the audio and video go out of sync after 3:17 ? Am I the only one that experienced this?
No me too, I thought my Bluetooth ear buds were acting up
No I thought I was crazy but I noticed it too.
Me too.
Production quality is top notch, but i guess that there is not a better way for linus to move/speed up/slow down or whatever he is doing with the teleprompter instead of using a remote in his pocket and having to put his hand there every now and again. I've noticed it before, but in this video around the 3 minute mark it was very obvious.
Put a god damn pa handling the prometer
I interpretted it as him trying to resettle the chubby he was getting from talkiong about the AMD numbers
I don't see the issue with it
He has all ready mentioned multiple times they used remotes for the teleprompters. I believe he even complained that they aren't the best for it but couldn't find better ones.
I vaguely remember him saying these remotes were also difficult to get. but I am probably wrong about the last one.
Despite Linus's production, his company is mostly focused on TH-cam content, To me, having a TV quality production is very jarring on youtube and it just removes a lot of the personality. This content is great in quality while retaining the scruffiness. He is not afraid to show a bit of behind the scenes, show a bit of jank because ultimately, that doesn't matter. It's part of the content and I think its fair to say most of us loves to see it. He has given multiple behind the scene tours of his studio space, before it was build, while it was being build and after it was complete. Even now with the lab he is doing the same. It doesn't have to be perfect and being perfect might actually be worse.
Honestly I wouldn't mind a seeing a video on how there teleprompter stuff works. It'd be great. Could even be a floatplane exclusive and i'd watch it.
when you described AMX, I was hoping they would also be useful for non-ai simulation and animation type work. That involves a lot of matrix math too. But AMX currently only has one matrix math operation, while those non-ai workloads use a diversity of operations. Using current AMX outside of AI would involve lots of shuffling data in and out of the AMX registers, which would severely limit how much those workloads would be improved by the extensions.
I think it's just a matter of building out support, rather than a physical limitation, unless AMX is pretty useless even for ML. The kind of AI/ML matrix operations that need accelerated depending on what kind of algorithm you are using vary incredibly, and have a lot of overlap with animation+simulation.
@@sophiophile I'm not sure what you are saying. I am saying that viewed through the lens of what AMX stands for or through the lens of the applications that the name reminded me of, AMX is horrifically incomplete. I have no knowledge or opinions on the challenges or lack thereof regarding making AMX more fully-featured, and I have no knowledge or opinions on whether the current state of AMX is actually helpful for machine learning.
AMX matrix multiplication is awesome for many ML workloads. Matrix multiplication is also useful for a ton of physical simulations, but I'm not sure how that translates to animations or real time simulations like games. Not sure what Jeffrey Leung is talking about.
@@cobdole9409 The main thing I had in mind when writing that was that matrix-vector multiplication is a major part of sims and animation. A matrix and a vector are conceptually different to me, so I didn't consider that a vector can be represented as a 1-column matrix, allowing AMX to do matrix-vector multiplication just fine. That substantially increases how much I can expect AMX to benefit those use cases.
@@cobdole9409 The OP claimed that it only supports 'one' matrix math operation. I didn't look into the details of AMX, but if it can only perform, for example matrix addition in single steps and not calculate matrix dot/cross products (and by extension tensor products) directly like a GPU, thereby having to perform n^(w+v) operations for a single matrix/tensor product- it is only going to accelerate most ML/AI operations like 5 times compared to a regular CPU in most cases. When the standard for accelerated ML/AI operations is CUDA where you are looking at a *minimum of hundreds of times* faster than typical CPU operation speeds, I just don't see how this is supposed to be seen as a selling point.
Hey guys, from someone working with molecular dynamics, it’s usually run and optimised for GPU. You probably know and use this for testing raw compute performance but it is by no mean a real life application for these chips ;-) keep up the good work with the lab though can’t wait to see what’s coming 😊
They need to do DFT since that is still CPU based.
Maybe add an arrow next to the graphs indicating whether higher or lower is better.
Rather than needing to read and figure out lower or higher is better and then look back at the graph. And with the switching axes, moving between vertical and horizontal graphs, it only ends up more confusing.
I think you're the only confused one here LMAO.
learn to read data dude.
That method of cpu installation has been around since first gen xeon scalable. You're also supposed to leave the cpu in the tray and put the heatsync on, then remove the assembly from the tray and install it on the socket. That method keeps you from getting any oils on the contact space since these cpus are very sensitive to pressure and contamination. Granted, I haven't worked on this newest generation but have plenty of experience with most of the previous revisions and have had to troubleshoot multiple installations where people didn't install these cpus correctly
Pretty sure he doesn't have a CPU tray because it came to him as an assembled system. So he had to improvise. That's not an excuse for getting goop in the socket, but it's why he isn't following the proper method.
Did anyone else get audio that was ahead of the video? It really disrupts the viewing experience.
Did Linux really drop thermal compound in the socket and went "it's probably fine" lmfao
These aren't the droids you're looking for.
Good thing he didn't drop the CPU or cooler into a server LGA socket handwaving them around like that. Fuck that was scary to watch.
No, Linux is an Operating System, not a person.
And when it did not boot, he just blamed it on the design of the mounting mechanism
In all fairness if your compound is non-conductive it's probably fine. Pressure should spread it out and contact should be fine with nothing shorting.
But I'd still not recommend doing it =p
Linus: Drops thermal paste in socket
Computer: Doesn't boot
Linus: 🤷
It seems like the audio and video is out of sync from 3:19
Intel has been using the same installation mechanism for several years on their server sockets... I'd recommend watching the STH video regarding that since you don't seem to know the BKMs on that.
These stupid little CPU carrier frames have been plaguing us for years at work…
Linus drops a glob of thermal paste in the socket & it doesn't boot. Wow, so unexpected!
Thermal paste in a socket isnt as bad as xou probably think. Its not conductive, so its not going to short anything out if im not mistaken
@@hypershock4270
IF its not conductive then yeah, should be fine, however that is not the case as some thermal pastes are literarly made out of metal like copper or alluminium... so depends if he used the metal based one or not
@@robo-suport_czrobofactory3116 noctua's isnt
Yes most thermal pastes on the market are not conductive, I've seen a guy on youtube who put a whole tube of thermal paste and a cpu on top of a socket and it still worked fine.
That's what I said!
Is it just me or is the video and audio not matching at all after the second chapter? 4:03
Same
Me too.
I feel like not yet days ago I watched a video featuring Linus all about how awesome Sapphire Rapids workstations were but maybe that was a hallucination
3-4 weeks ago, had to watch it again to clear some doubts.
1) Mainly happy because competition on the field started again.
2) Prices seemed a little strange.
3) Graphics used on Intel presentation never compared their products to AMD.
4) Explains some of the new points of the architecture know at the time.
5)Hopefully expects the product to trade blows on the same level with AMD releases, or introduce a new mechanism on the market.
6) Finishes the video singing possible praises of hope for the reasonable "niche" market that is affordable workstations for individuals.
Results came on this video it:
- Sadly It was mostly beaten and punched on the gut by AMD in most tests.
- Given the results Intel prices are already a somewhat bad financial choice.
- New mechanics introduced are accelerators focused on AI. Interesting concept given the recent times but no guarantee of when you can start getting value of it yet.
- Some AI accelerators on some products are a "feature" for the Intel on demand deal.
- Continues to expect good competition to drive the market.
Yes, because AMD is currently putting 0 effort into their workstation lineup, cancelling non pro threadripper altogether, and Intel is offering a decent solution for people who need a tonne of ram and pcie lanes but not necessarily cores
@@TimSheehan oh yeah that too, far more lanes and more ram are POSSIBLE on Intel side given this new launch, it may save time and headaches and like the video said Intel will try to launch lines a little more specialized in certain areas/applications (can't really say if that will hold its ground on the long run...for workstations)
But that power consumption from Intel is also another point that some may find hard to ignore.
Truly AMD has been too laid back on this sector for basically 1 to 2.5 years.
He's talking about HEDT
@@paragonwill this launch is server, I'm talking about workstation/HEDT
Funny how this video comes out right after a sponsored one praising Sapphire Rapids
Edit: I'm actually talking about the "Don't ask me the price" video, sponsored by HP/Intel. Linus says that the Xeon W9-3495X is "arguably the best workstation cpu on the market". To be fair, in this video he's talking about server cpus.
Ya that was also what I was thinking and was confused
@@ELDEHIGHNESS same
which video?
Linus genuinely likes Sapphire Rapids based on his comments on the WAN Show.
I would also like to know which video.
Am I the only one with out of sync audio ?
yes
What do you think of Intel’s Sapphire Rapids vs. AMD’s offerings? Let us know below!
Check out Intel Xeon Platinum 8468 Processors: lmg.gg/nJmkU
Check out AMD Epyc 9654 Processors: lmg.gg/NXFm1
Purchases made through some store links may provide some compensation to Linus Media Group.
Ayeee
dont know
bro your killing me
*I think nothing*
I am an NPC
adalinus tech tips
Linus: Drops thermal paste into the socket
Server: Doesn't turn on
Linus: *Shocked pikachu face*
Is it just me or is it that after 3:16 all the audio is significantly delayed from the footage? The lip movements and sounds dont match up with the audio until like 5 seconds afterwards.
I remember when Intel used to glue CPUs together, it was the Core 2 Quad Q6600. Great CPU, but it was literally two dual core chips on a single package.
They did that before with the Pentium D... But I guess it's clear that glueing two Pentium 4's wouldn't result in a great CPU :/
It's so much fun watching LTT videos like this, as an IT pro in "training" I have been studying for certifications for about 6 months now, and being able to understand what you are talking about, even though I haven't done any server work, is so cool.
Is anyone else’s viewing experience being weird? It seems like right at the beginning of “Eating Paste” I’m having audio from a different part of the episode playing, I’ve restarted by phone and everything
I hate that the early companies branded this machine learning procedural generation technology with "AI" and now it's just, forever called AI when it's not.
Define what you think AI is?
Machine Learning is a subset of AI, its literally what's defined in computer science for decades
@@upsilondiesbackwards7360 AI is artificial intelligence. Meaning it can think and reproduce a response base on learning/data. ML is nothing more but a process where the software builds a giant database base on input and create a model such that correct response can be reproduced when presented something (e.g. image recognition, show software an image, and it can identify it accurately). these CPUs accelerates the ML part, nothing to do with AI.
@@nanotub3s Now, please define "thinking". What makes a piece of silicon think? What is thinking? How do we think?
@@upsilondiesbackwards7360 ...
As a system builder and someone who works on putting servers together, this isn’t a new way to install the processor, if you Buy OEM then yes you would have to do it the way you did, if you purchased through likes of Hp/Dell etc they’re already fitted to the heat sink.
Idk if it’s just me but this video’s audio is so out of sync with the visual that it’s literally unwatchable for me. It seems like a 3-5 second disparity between the two. Possibly just TH-cam shenanigans, as I haven’t tried a different device yet.
1:58 So glad he mentioned this lmao. AMD made Intel finally realize how effective "glue" is on CPUs.
To adress the introduction, TSMC only builds the chips for Intel's GPUs from my understanding. As they catch up to TSMC (which they are doing) they'll likely go back to doing everything themselves. I also would be surprised if their change in profit is a result of their R&D.
Is the audio and video off for anyone else? Starting at 3:15, the video is not matched up with the audio
In our case, we have VM infrastructure based on Intel Xeon and you can't mix\match different CPU vendors if you want to expand existing clusters. That said, our new clusters and projects are all entirely AMD Epyc.
My company can't use AMD because of Oracle and their licensing. AMD Epyc with 4 chiplets requires a license for 4 CPUs, and that is hundreds of thousands of Euros.
Which is why I'm concerned about Intel going the chiplet route as well.
@@dustojnikhummer that looks like Oracles fault to me than cpu manufacturers
@@bingchilling177 Yes, it is absolutely Oracle's fault. But right now we are kinda stuck... not sure what we will do
That's why there is such a massive amount of investment going into RISC virtualization, your issue will be non-existing in a few years time :)
If you're stuffing about in Windows VMs then your operation is inefficient and you will be out of business soon anyway.😩
IBM has been doing the On Demand thing in their mainframes since 2004, I think, and later in Power servers as well. It’s a good way to turn capacity on and off without needing to buy new hw (which may take a while to get to you and you may only need for peak usage).
The key *is* in the pricing, of course.
6:50 It wouldn't quite be "on a single board", since the 8-socket systems are blade configurations with multiple CPU daughterboards connected to a backplane.
The irony of it is that Intel was actually first with having glued together CPU with the original Core2 Quadro actually being 2 core2duo glued together.. and that was not even that high speed.
That's not the original glued chip. Pentium D had two P4 cores. Also, it's Core 2 Quad. Quadro is nVidia's branding.
Didn't they pull such a trick even before that with the Pentium D?
@@samiraperi467 well i didn´t want to remember Pentium D
I mean, yes, but actually no
You can see that as the ancestor of what is a chiplet today, cause in practice it worked differently
@@thunderarch5951 Well yes, the method of pairing them is very different but the core premise of gluing them together is pretty similar.
But Linus... You also installed the CPU's into the heatsink on the two previous generation of Intel's scalable processors. You probably threw the bracket for it away, I saw you did in an earlier video!
However, this new generation has this small tap to get the CPU off the heatsink, very handy. I work with them everyday :)
Is it only me or the vid is of sync with the sound?
So I gotta say that the "heat sink adapter" model has been used for several years in data centers with no issues except for human error issues. This also come from experience using these in 2 of the top cloud computing companies. Also the socket with the lever that Linus is freaking out about is a godsend. Prior to this Intel suggested using a screwdriver or a similar flat blade tool to remove the processors from the adapter/heat sink. Also gotta say this is the first Linus tech tips that had me talking back to the video and had my wife asking me if I'm arguing with myself lol. Still great content, but please reach out to individuals that complete break fix with these components. I offer myself as tribute :😜
Yeah before this, the best was an ifixit plastic spudger. 1st and 2nd gen xeon scalable had the carrier clip without the arm. 3rd and 4th now has that handy arm.
linus has also had videos on videos on previous gen servers that had this same cpu heatsink carrier... which he also installed wrong.
… Screwdriver? That sounds like a pretty good way to ruin the chip if you angle it wrong.
Is there any reason this cpu couldn't have a lever-arm socket similar to Intel or the AM4? Because installation the way LTT showed seems like a way to have more people break their servers so they buy more lol
@@aarcxnum Linus just is doing it wrong. It's really easy and less risky than older socket designs. The samples he gets may not have come with instructions but Linus often refuses to read instructions anyway
I think a lot of the reason that people still choose Intel server CPUs is that vMotion and live migration need the vm to be powered off to switch between Intel/AMD processors so it's easier to add more Intel servers than it is to replace all of the servers in an environment.
I use Proxmox with a mix of AMD Epyc and Intel Xeon servers in the same cluster. I did a lot of test live migrations of VMs while running various applications and with a common subset of CPU flags configured for the test VMs and managed to find the right flags to allow live migrations in both directions without it crashing.
It does mean you have to sacrifice some CPU flags though, but I did benchmark some applications (running the Stockfish chess benchmark is one example) and barely lost any performance, so I decided to implement it and it's worked well so far.
Uhhh anyone else’s audio out of sync after the 2nd chapter?
This is genuinely one of the most ham fisted sponsor videos I’ve seen in a while, felt more like an infomercial than the content i clicked for.
It'll take Intel years to get their server segment back on track. For now, AMD will continue to accrue marketshare in the data center market, just as they have in the consumer space. They rested on their laurels for far too long, and until Pat took over, at Intel, they remained stagnant even though AMD was making steady gains. Once their new fabs come on line in Ohio and elsewhere, it'll be interesting, but that's long way off.
Did the audio desync a few minutes in for anyone else?
Edit: It was desynced on mobile, watching on desktop now and it's fine. Re-checked on mobile and it's also fine.
Watching this video thinking "Man, Linus seems especially hyper and upbeat today..." I got 1/3 of the way into the video before I realized I had the player on 1.25 lol
Really interesting video!
Lol the opposite happens to me, as I default to 2x
5:20 actually Linus, you're supposed to leave the CPU in the plastic tray with the plastic adapter on the CPU... Then you grab the heatsink and set it on top of that and it clips/clicks on the plastic adapter... Then you lift that heatsink and cpu with sandwiched plastic frame adapter off if the tray... Then bolt it to the mobo. Easy peasy.
Anyone else’s audio massively de synced from the “eating paste” chapter onwards??
9:26 yeah, its totally the socket design, not the fact that you dropped a cup of thermal paste on the pins.
the audio is unsynced at 3:17 onwords and bugs the hell out of me.
For your compute benchmark using GROMACS, keep in mind that it is optimized for intel CPUs, where it parallelizes much better.
Why is the sound out of sync?!?!
Is the audio out of sync or is my phone slowly dying?
Out of sync on mine as well
On mine to
Am I the only one whose video and audio are de-synced?….I tried to read the comments but no one has mentioned it. I hope I just need to restart
Linus should try working as an Intern in a DataCenter to be tasked with server upgrades and make a video about it.
And give him more hardware to destroy?
I have some doubts here as engineer for the benchmarks, as you said the server is meant for virtualization, so what's the NUMA layout etc etc.
Don't get me wrong I'm really pro AMD, but a good benchmark would be to spin up on every machine as example
50 mysql/postgresql databases
50 nginx servers
And then benchmark all the vm instances at once (doesn't have to full heavy load, but equal between both servers).
not a coincidence that every other video on this channel is sponsored by AMD, and its basically turned into a commercial for them over the past year.
That would mean that he needs to know what he is talking about. This channel is ok for commercial tech, but any time they go into the professional sector, it makes me absolutly cringe how bad the information they give out is.
@@ion1984 that's what happens when AMD has the better product bro. Why cry?
About a third of the way through, my video and audio are out of sync.
HOW ARE YOU UPLOADING VIDEOS SO FRICKING FAST MY DUDES? I have not finished whatching some and see a new update, kudos to the team really, holy shit is this a lot of work, well done guys
They have about a hundred employees. Literally dozens of writers and editors
@@Crusader1089 Yes
@@Crusader1089 Fair, but still it is quite a lot
They are a company now, not a media outlet anymore.
@@keonxd8918 Yes
Around -3:17 the audio loses sync with video. Not sure if it’s a TH-cam player issue, or something baked into the video. It happens for me consistently in the TH-cam app for iOS.
Hi your audio is out of sink with the video futage. Thought I'd let you know as watch your video's regularly for technology info. Cheers Scott
It's the lack of PCI-E lanes on desktop CPUs that's really driving me nuts. After a discrete GPU and NVMes, I couldn't even POST after adding a 10GbE nic because I was out of available lanes.
A frustrating problem that only applies to 0.01% of users unfortunately.
20 lanes on modern CPUs is embarrassing when a GPU can use up to 16 and a single NVMe SSD can use up to 4.
yeah im really hoping to see the re-emergence of HEDT hardware without the "pro" pricing
@@Amehdion the gpu really only needs 8x pci-e lanes anyways
@@NightshiftCustom Hey, I paid for the premium hardware, let me use it lol. For most people the performance difference is negligible but I typically game in 4k ultra settings and have RAID 0 980pros so even 4 more lanes could be a meaningful performance increase.
This is hard to watch the audio is completely out of sync with the video
Somehing about the audio synchronization is wrong in this video. Particularly at ~ 4:00
It's literally multiple seconds off.
Is it me or the audio isn’t on the good frames
Is audio really messed up or its weird on my end
Not sure if this is fixable by you guys or if this is just a TH-cam thing, but for some reason at about 2 minutes into the video, the audio started lagging behind video playback by about 2 seconds or so
Is it just me or is the audio out of sync in this video?
Just You, And I'm Rewatching This a Year Later
Is it just me or is the audio is out of sync 9:38
It’s very out of sync and very maddening. Lol
as a lifelong intel supporter (they had a fab in my hometown, even if it was one of their most outdated ones, I felt at least some pride in supporting them for that reason for a long time), I'm feeling pretty good about the first AMD build I've ever done for myself 6 months ago.
As someone who will be purchasing a good few servers later this year I am happy to see the competition. Finally it is not just dump your money in the Intel pit. I look forward to the discussions this will cause with the upper levels who only know Intel 😉
I wish it was that simple. we tried Epyc several times - it was abysmal. From fringe problems to sporadic performance problems to the utter lack of any realtime or longterm support.
I still want them for our workstations but as the software will ultimately not run on AMD it also will not be developed on AMD just so we can test on more similar hardware. It is a shame just how bad the support is.
You guys have to wait for the SM server to stop blinking that front light (HW check) before it will turn on.
Is it me or is the audio out of sync from about the 4 min mark
Anyone else having problems with dissyncronized Video and audio? I’ve tried multiple networks now, so im pretty sure that it isn’t my connection?
Yes I have the same problem