Are you using DDR5 or DDR4? Heck, maybe you’re still using DDR3! Let us know below! Check out the parts on the rigs we used for testing: Intel Core i9 - 13900K Processor: geni.us/QEC6 AMD Ryzen 9 7950X Processor: geni.us/IQopks ASUS Z790 Hero Motherboard: geni.us/Wkx0 Gigabyte X670E Aorus Extreme Motherboard: geni.us/uUwno MSI Pro Z790-P WiFi DDR4 Motherboard: geni.us/zOPbBUq Nvidia GeForce RTX 4090 FE GPU: geni.us/4v1AZJ And check out some of the RAM we tested: G.Skill Trident Z DDR4 3600 CL14: geni.us/3hU2Gaq G.Skill Trident Z5 5600 CL40: geni.us/41OzVMc G.Skill Trident Z5 NEO DDR5 6000 CL30: geni.us/4OZOQG G.Skill Trident Z5 DDR5 6800 CL34: geni.us/WIuJe G.Skill Trident Z5 DDR5 7200 CL34: geni.us/QrzKq Crucial DDR5 4800 CL40: geni.us/ZmTm Crucial DDR5 5200 CL42: geni.us/ywIZkl Purchases made through some store links may provide some compensation to Linus Media Group.
My laptop runs 2x 8GB Samsung DDR4 3200, but the CPU (Pentium 6405U) only supports 2666. Great! ___ I recently got the computer in our lab working. DDR3 12800 (1600MHz) 4GB x 2. Mixed manufacturers lol. i5 4th gen.
The fact that a company like Intel is pushing frequencies just for the Big Numbers, without working on better timings, gives me that "V8 engine with a bicycle transmission" vibe.
They did the same with their CPUs for years while almost ignoring IPC. And that is exactly how AMD caught up and overtook them and made Intel wake up.
Most of the Ryzen 7000 CPUs I've tested can't run 6400 reliably, so you shouldn't buy anything rated above 6200 for AM5 if you just want to use EXPO/XMP. EDIT: I should also point out that, depending on your luck, even DDR5-7200 might be a massive pain to stabilize with Intel CPUs. Plenty of CPUs and motherboards will straight up not run DDR5-7600 or higher.
@@TranHungDao. I back this; the 1600MHz Kingston Fury (2x4GB) with an Avexir Core Blue (2x4GB) did decently with the i7-4790 that I handed down to my brother. Running 4x8GB G.Skill DDR4 @ 3733MHz right now with my 5700X.
It's worth noting that whilst the 7950X has 64MB of L3 cache vs the 13900K's 36MB, the 13900K actually has a whopping 32MB of faster L2 cache, where the 7950X only has 16MB of L2. The 7950X also has only 1MB of L1 cache, and the 13900K has 2.1MB of L1 cache. It seems that we're seeing something we've seen in the past; more cache means you gain less from fast RAM.
That makes a lot of sense! It's sort of like how a faster SSD would make virtual memory faster, but if you just had more RAM you wouldn't need to use virtual memory as much.
@@johnfrankster3244 Yeah, Intel is definitely more power hungry. I just bought an i7-12700K, but it's $50 CAD cheaper than the 7900X and comes with integrated graphics (you still get display output if a GPU isn't connected, though that obviously doesn't matter as much), so I think it's worth it.
@@DeepfriedBeans4492 You're totally right! Not sure what led me to think it didn't have onboard graphics... was probably looking at the wrong chip. Thanks
I would love to see benchmarks for simulation-type games, like Factorio for example, because timings can bring a lot more improvement in those games than in an FPS that relies mostly on the GPU.
@@phmu144 Factorio's UPS in very large megabase benchmark saves is a good choice for this. Factorio was shown to be significantly improved by the X3D AMD CPUs and is very dependent on RAM performance too.
@@phmu144 Slightly incorrect: Factorio FPS is locked to UPS, which is the game's update cycle, which is highly dependent on your CPU and RAM speeds and is just capped at 60.
_Well, technically_ Yes, UPS is capped at 60, but you can use a console command to speed up the game. Yeah I know you wouldn't do that normally, so it's not a useful test for how well it can run Factorio normally, but might be useful to see how big you can make a base for super-post-endgame, or some of those huge mods
In general, the more an application is bound by the CPU, the higher the chance that better memory speeds will help; it's why AMD's 3D V-Cache has been well received, as more code can be kept closer to the CPU for increased performance.
AMD's X3D parts reduce the benefit of higher RAM clocks. If you have a 5800X3D, you get most of the benefit directly from the cache, not from the RAM; only software and hardware optimization can push both to their limits. Extreme RAM kits are pretty pointless with X3D now.
I'm so glad you included the latency formula around 4:15. That latency actually tells you what kind of chips the memory has, and the timing is basically manufacturing tolerances for the wiring. DDR5 6000 MT/s CL30 has literally the same memory chips as DDR5 6800 MT/s CL34, just a different XMP profile. That's why checking out that CL number is so important while buying RAM. And the general rule is that if the software (game or app) is written so that the most used data fits in the L1+L2 cache, memory latency doesn't matter and bandwidth is more important. If, however, you're running software that needs to access more data than can fit in your CPU cache, higher latency memory will hurt a lot. As most users have high-latency memory because it's cheaper, well-optimized games typically run just fine with high-latency memory. However, if your favorite game happens to be a poorly optimized one, you'll be out of luck with high-latency memory. I'd say go with the cheapest DDR5 RAM that can get you around 12 ns using the formula at 4:15.
This is just not true; 6800MT/s is almost guaranteed to be SK Hynix A-die, but the 6000MT/s kit can be Samsung B-die or Hynix M- or A-die. "First word latency" doesn't matter much, if at all. I would say the only thing tCL is useful for now is checking whether a 5600-rated kit is Samsung or Micron. I would bet you could get considerable gains in some games if you tightened subtimings, but little to none if you tightened primaries.
@@juliuss2056 They also divide by the data-rate, but the command rate is half of the data rate. Thinking the CL is nanoseconds instead of clock cycles when the topic is comparing memory sticks with different clock frequency is also off. The entire formula is questionable at best.
@@hehefunnysharkgoa9515 The formula is correct though? That's what the multiplication by 2000 accounts for. The only mistake is the mislabeling of the CL units.
@@whatanoob96 It's correct, but it's not intuitive. CAS measures command-clock cycles, so doing clock cycles / clock rate makes sense; clock cycles * 2000 / data rate, however, takes a detour, and Linus doesn't explain it well. Ultimately all that formula is really saying is CAS / CR (MHz) * 1000, which is more obvious when it's written as CAS / CR (GHz). For 34 CAS and a 3300MHz command clock that's 34 / 3.3, vs doing (34 * 2000) / 6600.
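To sanity-check the arithmetic in this thread, here's the formula from 4:15 in both forms discussed above; a minimal Python sketch (the function names are mine):

```python
# First-word latency in ns from CAS latency (cycles) and DDR data rate (MT/s).
# The "* 2000" converts the data rate into the command clock period in ns:
# the command clock runs at half the data rate, and 1 / MHz = 1000 ns.
def first_word_latency_ns(cas: int, data_rate_mts: int) -> float:
    return cas * 2000 / data_rate_mts

# Equivalent "CAS divided by command clock in GHz" form from the comment above:
def first_word_latency_ns_v2(cas: int, data_rate_mts: int) -> float:
    return cas / (data_rate_mts / 2 / 1000)

print(first_word_latency_ns(30, 6000))  # 10.0 (DDR5-6000 CL30)
print(first_word_latency_ns(34, 6800))  # 10.0 (DDR5-6800 CL34)
```

Both kits land at the same 10 ns, which is the point the original comment was making about comparing CL numbers across clock speeds.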
The first Zen CPUs weren't the most stable with DDR4 speeds. Zen+ improved that a bit, but only with Zen 2 could AMD really push memory overclocking, stability, and performance all together. As Zen 4 is their first DDR5 CPU, maybe with Zen 5 we'll see 7000+MHz memory working wonders without a hassle and stretching the performance levels.
I believe we will only see the true gains of higher DDR5 speeds a few years from now, when developers have long stopped making games for previous-gen consoles and when graphics cards are even more powerful. Remember that even Hogwarts Legacy is still coming to PS4/Xbox One and even the freaking Switch, which has trouble running Pokemon... I'm thinking about something like Spider-Man 2 with RT on with an RTX 6090 or RX 8900 XT, or Crysis 4, the next Tomb Raider, or maybe Assassin's Creed Codename Red.
AMD actually works better at lower clock speeds. It's better to match the memory speed to the CPU's controller speed than to increase the speed, at least for DDR4; for DDR5 it will probably be the same.
Considering I NEEDED to use DDR5 on my AM5 build, I'm glad I got it for free as a deal Microcenter runs when you buy both the processor and motherboard there. The GSkill Flare X5 is good running at 6000 speed and for free I can't really complain. Glad to see this being done though, I like to see how much I'm being ripped off for in tech 🤣Nicely done Linus!
The free 6000 MT/s gskill ram is what pushed me to go with a 7700x over a 5800x3d. Plus I'm solely a SFFPC person so itx boards were always expensive. I think I paid a whole 20 bucks more for my ASRock b650e compared to my b450 Asus strix.
@@Rational_Redneck AM5 is finally a reality now thanks to those B650 boards... but they're still overpriced. They promised ones hitting as low as $125 US; I haven't seen one under $180 US yet. Still, it's worth it to have future upgradability, at least to me.
What's also interesting is how small the scaling in general is on Intel for raw frequency. I mean, it's almost 3000 MHz higher in some of these benchmarks; that's almost double, and it barely breaks 1-5% total. Whereas on AMD we only saw 4800-6400, which is 2000 MHz, and saw 10%+. It's weird things like that where you often wonder why there would be such a stark change.
Intel doesn't really care about fast RAM as much as AMD; 4000MHz CL15 in Gear 1 beats everything below 7800MHz in gaming, and even more so if you get a lucky bin and are able to run even higher MHz in Gear 1.
Since Ryzen released, it's been known that AMD chips need faster RAM, but no clue whether that means they're being held back by lower speeds or they unlock higher speeds by using faster RAM. The former means Intel chips are more "efficient" at using lower RAM speeds, and the latter means Intel chips aren't optimized for faster speeds.
For Intel, it's literally just the difference from faster RAM. Back in the DDR4 days, AMD's Infinity Fabric would be tied to the speed of the RAM; that's why higher RAM speed outright overclocks the CPU. It is likely that this continues to be the case with DDR5, although I am not sure. You can also see the 5800X3D not caring about higher RAM speeds, because the additional cache reduces the need to care about latency and bandwidth from the RAM.
It absolutely makes sense if you know how RAM works. The chip takes X ns to do something. The module manufacturer chooses the speed at which the chip can run and calculates the number of cycles the operation will take at that speed, so it covers the same amount of time in ns. For example, an operation taking 13.3 ns takes 24 cycles at 3600 MT/s and 48 cycles at 7200 MT/s, and the performance will be more or less identical.
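The vendor-side conversion described above can be sketched in a few lines (a hypothetical helper, assuming the command clock runs at half the DDR data rate):

```python
import math

# How a module vendor turns a fixed chip delay (in ns) into a programmed
# cycle count at a given data rate (MT/s); the count is rounded up so the
# module always waits at least as long as the silicon needs.
def cycles_for_delay(delay_ns: float, data_rate_mts: int) -> int:
    clock_mhz = data_rate_mts / 2          # command clock = half the data rate
    return math.ceil(delay_ns * clock_mhz / 1000)

print(cycles_for_delay(13.3, 3600))  # 24 cycles at DDR4-3600
print(cycles_for_delay(13.3, 7200))  # 48 cycles at DDR5-7200
```

Same ~13.3 ns of real time either way, which is why the doubled cycle count at 7200 MT/s isn't actually slower.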
That was interesting. I have 6000MHz CL36 because it's what was part of a bundle deal when I upgraded to AM5. Didn't think about it much since the CPU improvement was my main goal anyways. GPU bottlenecked for now until the bank account refills.
The subtimings have the biggest impact on performance, far more than the CAS latency has. First word latency is also a bit of a misnomer, it only applies for an already open row, a large proportion of memory accesses require you to open the row first so RCD has to be added before CAS to get the latency of the operation. Also, back to back memory operations are probably the biggest latency penalty, these all are limited by the subtimings.
Subtimings? How about primary timings? This is the obligatory complaint that non-RGB 3600 CL16 isn't on this chart since that's only $100/32GB, much much cheaper than this RGB CL14 stuff which is more than twice the price. That totally nullifies Linus' statement about DDR5 and DDR4 being the same price.
@@Mr.Morden that statement about cheaper ddr4 is true, but primary timings actually don't matter very much on ddr4 and ddr5, subtimings like trrd and trfc matter much more.
@@Savitarax When you have enough memory bandwidth, extra doesn't help at all. Kind of like how a 13900K with a GeForce 2060 doesn't make games run faster than a 12900K with the same card. There are probably some workloads that are really hard to cache and would benefit a ton from higher bandwidth, but those are really rare exceptions.
@@Savitarax Clock speed alone means nothing. I can do one addition in one second (i.e. 1Hz), but I could also do half an addition in half a second (2Hz). That's double the frequency, but the amount of actual work being done is the same. You have to include the timings, as the video says. The timings are (proportional to) how many clock cycles it takes to access memory. To oversimplify, a 2000MHz CL10 kit and a 4000MHz CL20 kit would be about equal in performance, despite one having double the frequency.
I initially had a CL30 5600 kit in my new 13700k build, but the prices dropped a lot while I was still within the return window so I swapped the RAM for a CL32 6000 kit. There was a tiny performance increase (calculated latency is almost the same) but a dramatic consistency improvement -- my tests went from 4% variance between results to 1%. I haven't toyed around with overclocking yet, but will at some point.
@@mikeramos91 The improvement I'm seeing might be due to the bandwidth, might be due to the timing differences, or might be purely lucking out in the silicon lottery. With a sample size of just 1, I cannot say.
@@G0A7 Are you from a third world country? no offense. Besides, DDR3 is more recent than DDR2, it's more believable. Today's software can barely run on DDR2 RAM...
I grabbed a kit of 5600MHz CL28 memory for a great price and it's running flawlessly with my 7900X. 5600MHz seems to be the sweet spot right now for cost to performance, so I went with the lowest-latency kit I could find, and I am very happy with my decision.
What has been the most curious is how RAM can be used to approximate DirectStorage in some games. For example, Returnal on PC is requiring a ton of RAM rather than asking for DirectStorage.
Also, we have to keep in mind that Linus is using a top-end CPU and GPU. For the majority of people, RAM will never be the bottleneck, whether they're using DDR4 or DDR5. Their performance will be limited by the CPU or GPU, and which RAM kit they use will hardly affect performance.
The whole video took me back a bit to the days when I was still going to benching sessions myself, dabbling in XOC, and my main system was based on a Classified SR2, two X5690 and 2 Mo-Ra 2 radiators, and a 60 liter barrel as an expansion tank. Good old days. Back then I bought and sold quite a few Corsair Dominator GT 2x2 and 3x2 GB kits, always looking for Rev. 7.1A kits. They had the good ICs on them. With water cooling (they didn't like sub-zero temperatures, but also not when they got hotter than 50°C) I could run them at about 2050 MHz and CL6-7-6-20. What a great time, and always selecting out which module is now the bottleneck. Such nice memories. Oh how I miss those days when you could afford PC hardware without having to sell your kidney. Today all the junk is so expensive and yet none of the sets shown come close to the 5.85ns of my Dominator GT. I once wanted to sell my whole setup, today I'm glad I didn't, because videos like this show me what sentimental value the system had and still has for me.
Linus, I am a huge fan of the channel. Can you also show how SolidWorks, Ansys, or any CAD and simulation software can take advantage of faster RAM? It would be very helpful to our engineering community 😊
I'm a drafter that works in a large manufacturing plant. We run Solid Edge, but I've used SolidWorks, Inventor, and Fusion 360 a lot. If you're not doing sims or rendering, GPU doesn't matter, CPU doesn't really matter, and RAM is firmly in the "have enough so you don't hit 100% usage" category. Modern CAD programs are horrendously optimized to take advantage of newer technology, and anyone who says otherwise isn't doing modeling and drawings 40-60 hours a week. We've quit ordering workstations with Quadro cards at work because the graphics card is literally never above 2-3% usage at any point.
Subtimings, at least on Intel, make a much bigger difference than they did with DDR4 so this might be something worth investigating. Glad to see a succinct, useful video like this come up.
That is the reason why you should use the XMP2 profile instead of the XMP1 profile. Too many search results are still saying that XMP1 and XMP2 are the same.
Really sad to see the DDR5 Zen 4 stability issues, but glad you called it out. I've run through at least 6 RAM sticks trying to get a stable system. If it's not a boot issue, it's a game/application crash.
I'm also finding that the BIOS on my ASUS ROG X670E-I Gaming doesn't update when I've changed RAM. For example, I'll switch from a CAS 36 stick to CAS 28 and my BIOS will still have CAS 36 timings. I have to completely wipe my BIOS each time. I think that's the source of my RAM issues right now. Might actually be a motherboard issue.
I don't even know why they measured FPS gains in this video. Ram will barely affect your FPS. It's really intended to help with loading times, so it's not likely to make any significant difference on low end or mid tier machines either.
@@TorQueMoD Well, that's just demonstrably false. Faster RAM provides data to the GPU faster, which gets you more frames. Granted, at a certain point there is a dramatic drop-off in gains once you reach the max of what a GPU can handle. However, as more and more games adopt DirectStorage, we'll see that be less and less of an issue. DirectStorage bypasses the RAM altogether.
Could you chart this in 2 dimensions? Price on X axis, 1% lows vs avg could be represented using vertical bars, performance on Y axis. It would be interesting to see cost to performance ratio. Representing data as a big list of specs on the left is not too easy to digest - most people just care about price.
The fact that you managed to run 6400 sticks on AMD means you have a golden chip; AMD recommends staying at 6000 for optimal performance (Infinity Fabric), and sometimes even stabilizing 6133 or 6200 can be a challenge.
Idk, I don't think it's that rare. I'm running 6400 CL32 with my 7700X at 5.3GHz and it runs like a charm; my RAM isn't even EXPO btw, and was meant for Intel CPUs.
Something I wasn't aware of before buying a new Ryzen 7000 system: it is apparently normal to have a 40 second delay when turning on the machine, and this is to do with DDR5 memory training. That's before POST with no video signal.
I'm very new to the channel, and this is the first channel where the sponsorship and ad placement doesn't annoy me or feel rammed down my throat. Whatever their formula is, I hope they keep doing it.
It matters on my 7950X for sure. Getting my timings tightened and optimized for 6000MHz made a big difference. The kit is the G.Skill Neo with SK Hynix dies, but the AMD EXPO mode was trash. Manual subtimings were a game changer.
Same here. My Ryzen 7700X was decent with DDR5 5200MHz; OC'ing the RAM to 5600MHz improved it a little. I did some aggressive timing + sub-timing tuning and it showed some noticeable difference. Couldn't go beyond 5600MHz because Samsung DDR5 is meh, so I just ordered a Hynix 6400 kit that will arrive in 2 days. I'll do some extreme OC and see how it works. I hope I get some Hynix A-die (the best DDR5 die currently); the newest G.Skill kits have A-die in 2023.
I love this type of video, diving a little deeper into how and why ram does what it does. I would love a video (or channel if I may be so bold) that dives really deep into how ram or any tech works. Either way I think the lab will help you guys create videos that scratch that itch for me, looking forward to the upcoming content!
I managed to snag some 32GB DDR5 5200MHz RAM for a grand total of ~$130 during last year's Black Friday, when I was looking into parts for my new AM5 build. This was a serious upgrade from my previous 16GB DDR3 3200MHz. It's kind of hard for me to compare what impact it has had since I did a full platform upgrade, but having an all-around powerful build feels a lot more stable. And I likely won't have to upgrade my RAM for a very long time.
DDR5-5200 is on par with DDR4-2400 memory; it's not quite at the official JEDEC base speed, but close to it. DDR5-6000 would provide a significant performance upgrade.
Great video! I just wanted to chime in on the discussion about RAM speed and latency. It's important to note that RAM speed, measured in MHz, determines the number of transfers of data per second, while latency, measured in nanoseconds, determines the time it takes for the RAM to respond to a request for data. When building a computer, it's crucial to strike a balance between these two factors, as having too much of one and not enough of the other can lead to bottlenecks in performance. Thanks for enlightening us on this topic!
DDR5 is more complex/advanced, with 2x the bank groups compared to DDR4 (a different architecture). Most people don't realize that DDR5 primary timings have a lower impact than DDR4 primary timings; in fact, sub-timings have a bigger impact on DDR5. You can tune a 6000 kit from CL40 to CL30 and barely get a 1% difference, but with tuned sub-timings you'll get a bigger improvement, especially in games. My Samsung 5600 CL38 had about 76ns latency; tuning the primary timings down to CL30 only got it to 73ns (a 3ns difference). I tried sub-timings and got it from 73ns to 64ns (a 9ns difference), and that was only at 5600. With a DDR5 Hynix die you can get around 55ns. Fun fact: Spider-Man with the slowest 4800 DDR5 can match the best 3800 DDR4, and with 6000+ tuned timings it can get something like a 20% boost in lows (smoother).
@carlososwaldocasimiroferna2631 Nearly all of them. Check OC tutorials and try to find out what kind of RAM die you have (Hynix is the best, Samsung comes second, Micron is last, at least for DDR5). Then look for tutorials based on the die manufacturer, do some stability benchmarks, and see.
I fully agree! Especially when a person basically just doesn't really notice the improvement from 120 to 240fps, the stuttering, jittering and latency are really important for immersion
You're better off with a channel that has better knowledge of such things than LTT. Actually Hardcore Overclocking, for example. I don't go to my barber to get dental work done. Don't use LTT for in-depth OC; use an OC-focused info source.
Very nice video on an interesting subject with lots of charts and valuable info, but without a clear, concise, and accurate conclusion for the average customer!
Normally LTTs comparisons are reasonable, however I don't think it's reasonable to use a 3600 CL14 kit to represent price-to-performance for DDR4. Unless something has changed in the last few weeks a 3600 CL16 kit would have been significantly less expensive with minimal overall speed impact.
RAM is definitely used for gaming, but it's mainly for performing multiple tasks at once. In the long run, bigger size and faster speeds are best for a system you don't want to worry about for years.
Fun fact: since AMD's Infinity Fabric bottlenecks the bandwidth of the memory subsystem, the increase in RAM speed is less noticeable. That's why the 5600 CL28 kit performs so "well".
I feel like people keep repeating this without understanding what Infinity Fabric is. Zen 1-2 CPUs have 4 cores per CCX, and Zen 3 has 8 cores per CCX. On CPUs that have more cores than what's on a single CCX is where you see an increased benefit over Intel CPUs when it comes to RAM speed.
With Zen 4, the memory controller clock has been decoupled from the Infinity Fabric clock, so that no longer affects the memory*. By default, the fclock is 2000 MHz and can be raised to about 2133-2167 MHz, from what I've seen. uclock (memory controller clock) runs 1:1 with mclock (memory clock, the MT/s rated speed divided by 2) up to 6200 MT/s and switches to a 1:2 ratio at 6400, but can be switched back to 1:1 (and on my 7900X system, it works without issues). However, there are a few fclock states that do affect the memory, when the memory controller is at certain clock speeds. When running 6000 MT/s memory, fclock at 2033 pushes the uclock and mclock higher by 50 MHz, putting the memory into a pseudo 6100 MT/s state. I first thought it was a bug, but performance improves measurably. The same thing happens at 6400 MT/s and 2167 MHz fclock, which puts the memory into 6500 MT/s mode. For me, that becomes unstable regardless of timings, so I'm sticking with 6400 MT/s and 2133 MHz fclock. (Going from 2000 MHz to 2133 MHz gives about 1ns reduced latency, so it's not a lot, but measurable.)
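As a sketch only, taking the ratios exactly as this comment describes them (the function and its thresholds come from the comment above, not from official AMD documentation), the Zen 4 clock relationships look like:

```python
# Zen 4 clock domains per the comment above (an assumption, not AMD docs):
# fclk is independent; uclk (memory controller) runs 1:1 with mclk
# (half the MT/s data rate) up to 6200 MT/s, then drops to 1:2 at 6400,
# unless manually forced back to 1:1 in the BIOS.
def zen4_clocks(mt_s: int, fclk_mhz: int = 2000, force_1to1: bool = False) -> dict:
    mclk = mt_s // 2
    if mt_s <= 6200 or force_1to1:
        uclk = mclk          # 1:1 with the memory clock
    else:
        uclk = mclk // 2     # automatic 1:2 fallback above 6200 MT/s
    return {"fclk": fclk_mhz, "uclk": uclk, "mclk": mclk}

print(zen4_clocks(6000))                    # uclk 3000, still 1:1
print(zen4_clocks(6400))                    # uclk 1600, auto 1:2
print(zen4_clocks(6400, force_1to1=True))   # uclk 3200, forced 1:1
```

This is just a way to visualize why 6400 MT/s is the point where you either accept the 1:2 penalty or gamble on your memory controller holding 1:1.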
@@lievre460 That's because each CCD has its own link through the IOD, increasing the overall bandwidth. But it's still a pretty noticeable cap on single-CCD CPUs.
@@bradhaines3142 Oh, then your initial comment just sounds like you're telling OP water is wet. I agree with them, would've been interesting to see how much of an uplift you get out of bumping RAM speeds.
@@Gabu_ Yeah and I'm not terribly excited for first gen AM5 APUs. Really high platform entry cost and mediocre RAM support so far on AM5 makes it feel like they'll be duds outside of laptops. Could be sweet on laptops though if they're highly efficient.
I’m so glad you made this video. I’m upgrading to AM5 and I ended up buying 32gb of DDR5 7200Mhz for about $300 but it also included a 2TB M.2 drive so I thought that was worth it cause I needed the extra storage anyway. Even though I won’t be able to hit 7200mhz I still got a pretty good deal. Just keep it at 6400mhz and I’m good to go
I've always gone by the rule of thumb that if it has a 10ns CAS latency, it is probably "good enough" for an average (i.e. non-benchmarking/overclocking) use case.
Cas latency does almost nothing on DDR5. The sub and tertiary timings do which are on auto (garbage) with xmp/expo. The jump from xmp to manual oc on ddr5 is far larger than jedec to xmp.
What does that rule of thumb mean? I have never heard of it. Also, how do you know if your RAM's CAS latency is under 10ns? My 3200MHz CL14 32GB sticks of RAM for my 5800X are finally working stably, set with DOCP and 1.375 DRAM voltage.
@@casedistorted Tight timings are normally an indicator of a good (or at least not shit) stick. Your stick, for example, has a CAS latency below 10ns. CL is the CAS latency in clock cycles, i.e. 14 clocks. RAM is DDR, double data rate, so you have to take half your speed in GHz; i.e., your 3.2GHz data-rate stick is really 1.6GHz clock rate. In those units it gets easy to compare: 1.6GHz and a CL of 14 means below 10ns (8.75ns to be exact). It's just CL divided by half the speed in GHz.
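Putting that arithmetic into code, with the ~10 ns cutoff treated as the commenter's heuristic rather than any official figure (function names are mine):

```python
# CAS latency in ns: CL divided by half the data rate expressed in GHz.
def cas_ns(cl: int, data_rate_mts: int) -> float:
    return cl / (data_rate_mts / 2000)

# The "good enough" rule of thumb from the thread above; the 10 ns
# threshold is the commenter's heuristic, not an official spec.
def good_enough(cl: int, data_rate_mts: int, threshold_ns: float = 10.0) -> bool:
    return cas_ns(cl, data_rate_mts) <= threshold_ns

print(cas_ns(14, 3200))       # 8.75 ns, so this DDR4-3200 CL14 kit passes
print(good_enough(40, 4800))  # False: DDR5-4800 CL40 is ~16.7 ns
```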
Wendell from Level1Techs mentioned that the internal memory controller (IMC) is what Intel binned for with the 13900KS, vs binning for the P-cores in the 12900KS. I wonder if the 13900KS with the better IMC would show bigger gains during gaming at higher RAM speeds. Wendell also mentioned that the 5800X3D didn't see much difference in performance with 3200 vs 3600MHz RAM. I wonder if that will stay the same for the 7000-series X3D CPUs.
Got the "G.Skill Trident Z5 Neo Series 32 GB (2x 16 GB) DDR5 5600 MHz CL28" last December for under 200€. I paired it with a Ryzen 9 7950X. Happy to see I made the right choice. Although it was a pain to have the kit recognized by the MSI motherboard, and I even had to update the BIOS to make it accept a second kit that I bought later to reach 64GB of RAM.
I'm not 100% sure how it is with DDR5 or your mainboard + 7950X, but be aware that it was usually better to have 2x 32GB modules than 4x 16GB modules. Yes, it's a somewhat cheaper upgrade path if you REALLY(!!!) need that amount of RAM, but performance-wise it's better to stick with two sticks: the board is still dual channel either way, and running two DIMMs per channel is harder on the memory controller. In addition, cheaper/mid-range mainboards and CPUs don't always offer the max speeds with 4 sticks. For example: my younger brother's PC has a Ryzen 2700X with an X470 board (MSI Gaming Plus). The board specs say: DDR4 MEMORY: 1866/2133/2400/2667 MHz by JEDEC, and 2667/2800/2933/3000/3066/3200/3466 MHz by A-XMP mode; MEMORY CHANNEL: DUAL; DIMM SLOTS: 4; MAX MEMORY (GB): 64. He has 4x 8GB 3200MHz sticks, and judging by the mainboard specs, XMP mode should work perfectly fine at 3200MHz. But in reality those specs only seem valid for a two-stick configuration. When enabling XMP with 4 sticks at 3200MHz, the system is incredibly unstable and it basically makes no sense to run it in that mode, so you lose performance you paid for with the RAM. Only 2933MHz worked stably with 4 sticks.
I tuned my 3600 cl14 kit for my 5900x. Totally worth the day and a half of crashing, retuning, and trying again. Userbenchmark puts my system in the top 5% for identical builds, and cpu top 1% for 5900x.
I'm just strolling up from the old "RAM speed doesn't matter" video to the "we were wrong, it can matter" video to this "does it REALLY matter now with DDR5" video? lol, I'm just on this DOES IT MATTER?! rollercoaster. Love the segue, favorite to date.
Was about to get 2x16GB 5200MHz CL40 but decided to swap it for 2x32GB 4800MHz CL40, seeing as I don't game as much but run a lot of virtual machines. The latter cost $48 CAD more, so I think that's pretty damn good considering VMs eat RAM for breakfast and the difference in first-word latency only comes out to about 1.3ns; basically, I paid $48 for another 32GB of RAM.
At least for my 5900X, there was a big difference between XMP and the timings from the AMD-specific calculator I used. My experience was that the XMP settings seem very much designed for Intel.
For DDR4 RAM it's best to go with no more than 3000MHz with 16GB or 32GB sticks if you're getting all 4 sticks, and with DDR5 it's best to go with 5000MHz. Unless you are an overclocker, no one can tell the difference anyway.
I chose DDR5 4800 on my Alder Lake setup for the increased bandwidth. Sure, 7000 kits twice as expensive have even better timings, but I only game at 60 fps and will gladly play Elden Ring 2 at 60 as well.
Your memory is supposed to match your CPU bus with the lowest possible latency. Memory faster than your CPU is going backwards because it generally means higher latency for speeds that will never be utilized.
The only time I ever saw an improvement from RAM speed was using the old A10 APUs, which did noticeably better at 2400MHz than at 1600. But that's due to the graphics architecture. Still have that thing. It's a good Blu-ray player.
I am so glad the sponsoring product reaches its read and write performance "respectfully". While I think there are many aspects of Korean culture I don't yet grasp, respect does appear to be a very important concept from what I've experienced. There does seem to be a concept of being respectable (in non-verbal ways indicating your own worthiness/capacity for being respected by others). With respect to this SSD, if the read and write performance has different metrics when it is being respectful and when being respectable, it would be good to know the respective metrics. I mean this in jest, and in a most respectful way; writing copy that will be scrutinized by the internet (and your audience is probably more pedantic than most) seems like it would be a special kind of torture.
FANTASTIC VIDEO! But what about RAM size vs performance? Like 64GB (2x 32GB sticks) seems to come with slower CL than 32GB (2x 16GB stick) kits; does the extra RAM space offset the lower CL in gaming performance? I'm planning on getting a AMD 7950X3D when they come out later this month. I was planning on getting a DDR5 64GB RAM kit, to use in this new rig which is primarily a gaming PC with an RTX4090. Any specific suggestions on the best 64GB RAM kit?
First, why the hell are you asking this question here? And second, if you are planning to just game, 16GB is enough, and I recommend CL28-CL32 RAM at 5200-6000.
RAM capacity doesn't directly impact performance at all, if you have enough RAM you have enough. 64 GB is directly counterproductive if you don't need it for productivity workloads, since as you point out the kits are slower. 16 GB is just barely enough (the minimum requirement for new games), 32 GB is the sweet spot.
On AMD, isn't memory speed tied to Infinity Fabric speed? Is this a major limitation? Some explanation of the other timing numbers (sub-timings?) would be helpful as well. I see a lot of DDR5-6000 kits at CL30 or CL32 but with a ton of variance in the following three values.
That was true for Zen 1 through 3. With Zen 4 (Ryzen 7000), the memory clock is no longer tied to the Infinity Fabric clock, so you can safely run them at different speeds without losing performance from not being synchronized. For example, you can run your memory at 6400 while having the fabric clock at 2000 MHz. With older Ryzens you would have to run your memory at 4000 or 8000 when the fabric is at 2000 MHz, or face performance degradation. BTW, for my 7950X I got a relatively cheap Kingston Fury 5600CL40 kit. I successfully overclocked it to 6400CL32 and got a nice performance bump for free. So don't waste your money on expensive RAM if you know how to overclock it. If your CPU can't run the RAM at 6400CL32, try 6000CL30/CL32 for essentially the same performance.
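The coupling described above can be sketched roughly. This is a simplified model of the 1:1/2:1 rule (DDR's data rate is twice the memory clock), not AMD's actual firmware logic:

```python
def memclk_mhz(data_rate_mts: float) -> float:
    """DDR transfers data twice per clock, so the memory clock is half the data rate."""
    return data_rate_mts / 2

def desynchronized(data_rate_mts: float, fclk_mhz: float) -> bool:
    """Rough pre-Zen-4 model: the memory clock had to run 1:1 or 2:1 with the
    Infinity Fabric clock, or you paid a latency penalty. Zen 4 decouples them."""
    ratio = memclk_mhz(data_rate_mts) / fclk_mhz
    return ratio not in (1.0, 2.0)

# DDR5-6400 with FCLK at 2000 MHz: fine on Zen 4, a penalty on older Zen
print(desynchronized(6400, 2000))  # True  (memclk 3200 vs fclk 2000, ratio 1.6)
print(desynchronized(4000, 2000))  # False (1:1)
print(desynchronized(8000, 2000))  # False (2:1)
```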
CAS latency is one of the least important timings for RAM, but is probably the most easily-marketable. First word latency is rarely what matters in real-world performance. No real workload has purely-random access patterns. AIUI what tends to matter more is the latency between commands (Read to Read, Read to Write, etc.) in the same bank or bank group. It *does* represent worst-case (or near-worst-case) performance so it has *some* utility but real-world applications rarely present a worst-case scenario - RAM, like most hardware, is designed to perform well for typical usage after all.
I really enjoy Linus's content - I come here every time I'm shopping for nearly anything when it comes to technology. But it's so hard to pay attention, and I've figured out it's his voice. It's like one of those tones men can't help but ignore.
Could you please do something more with your tests other than gaming? I'm currently on DDR3 2133MHz running at 1866MHz with a lower CL of 9. I noticed when coming from cheapo CL13 1333MHz RAM that my input latency while using VSTs was reduced, but the thought of going to DDR4/5 with a CL of 34+ scares me. Will this be adversely affected? Have I got the sweet spot for a music rig, and is it pointless now to upgrade, as in my case it would be a downgrade? These are things we need to know! Many thanks
@El Cactuar yes we all saw the video, it wouldn't make sense to use MT/s in this instance because that's not how the ram is marketed, labelled or tuned. The question was "is higher CL RAM going to affect ASIO input" not "what is a good measure of IOPS"
@El Cactuar well the speed of the RAM copying data wasn't what I was asking about, it's the delay of VSTs from external MIDI devices, which I can confirm runs faster on a lower memory speed (as marketed) because CAS latency is reduced. Seeing as CAS latency isn't a consideration within the equation to find MT/s, it would suggest that MT/s doesn't offer a solution to this situation.
I went from 2666 to 3200 and I can feel a drastic difference in daily tasks, especially using Chrome with a lot of tabs open. EDIT: I should probably mention I am using only one 8GB stick, so there is no dual channel going on. CPU is an i3 12100F
@@idkanymore3382 I made a jump like that, and yes, it was enough to feel a difference. Nothing to brag about, but it gave higher 1% lows, which you can feel when gaming
Because it doesn’t work like that on Ryzen 7000 anymore. IF frequency is dynamic up to 4000 MT/s and not higher. What changes over 6000MT/s is that the memory controller runs at half clock, but you can still try running Gear 1, which is apparently what they did here. IFCLK:UCLK:MEMCLK Auto:1:1 On Ryzen 7000 at least.
@@joshuadelaughter yeah, relative to a manual OC, XMP does almost nothing on DDR5, and hardly scales with higher speeds without manual tuning, as the sub and tertiary timings get looser as the speed gets higher unless they're manually configured
Would have liked to see a DDR4 3600 CL16 kit in there as those are like ~$90 less than the CL14 kit…it would be much more competitive on price with low end DDR5 (which is actually more than $135 for 3600 CL16) for sure.
As requested in your pin: using 64GB 3800CL16 DDR4 on my main rig and 32GB 3600CL16 on a secondary Ryzen PC (same kits, they just won't run over 3600 stable on the cheaper mobo in the 2nd). The testbench PC, which I just use to wipe/test drives and test GPUs, has just 8GB 2800CL14 DDR4 installed most of the time, and I've got an old i7 3770K rig lying around which used to be my daily driver with 16GB 2133CL11 DDR3. Honestly 3800CL16 is doing just fine for now and I don't see myself outlaying for a whole new system until it becomes necessary for acceptable performance. It is nice to see DDR5 prices come down finally tho.
My main box is still using DDR3, so having the next box using DDR5 seems like an obvious choice. If not for latency but at least I will be doubling my capacity to at least 32 gigs
Interesting, because when I bought my i7-13700K there was almost no testing done yet. On the MSI board QVL, 5600 DDR5 with CL28 seemed to stack up better than anything else approved for use. I figured I could upgrade later if speeds and latency got better. I am running 64GB and doing primarily rendering and production work (database and site development as well as graphics, but almost no gaming, so I only went with a 3080 Ti). So far the 5600 CL28 has really impressed me.
I thought that AM5 only went to 6000 MT/s, and anything above that is purely silicon lottery, so the lowest-CL 6000 kit is best. To give an all-around score you could take both bandwidth and latency into account, like 1/ns*((write+read)/2), which I think would give a pretty good indication of real-world performance between the kits. So by my math 4000MT CL14 gets 9286 points and 6000 CL30 gets 8950. 6000 CL28 is where DDR5 starts to beat the best of DDR4 all around, with 9589 points.
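That scoring idea can be sketched in code. The average read/write bandwidth figures here are my own rough assumptions (about 65 GB/s for dual-channel DDR4-4000 and 89.5 GB/s for DDR5-6000), chosen only to illustrate the arithmetic behind the quoted points, not measured values:

```python
def latency_ns(cl: int, data_rate_mts: int) -> float:
    # first-word latency: CL cycles * 2000 / data rate in MT/s
    return cl * 2000 / data_rate_mts

def score(cl: int, data_rate_mts: int, avg_bandwidth_mbs: float) -> float:
    # 1/ns * ((write + read) / 2), with the averaged bandwidth (MB/s) passed in
    return avg_bandwidth_mbs / latency_ns(cl, data_rate_mts)

print(round(score(14, 4000, 65_000)))  # DDR4-4000 CL14 -> 9286
print(round(score(30, 6000, 89_500)))  # DDR5-6000 CL30 -> 8950
print(round(score(28, 6000, 89_500)))  # DDR5-6000 CL28 -> 9589
```

With those assumed bandwidths the formula reproduces the commenter's three point totals, which suggests that is roughly how they were derived.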
Are you using DDR5 or DDR4? Heck, maybe you’re still using DDR3! Let us know below!
Check out the parts on the rigs we used for testing:
Intel Core i9 - 13900K Processor: geni.us/QEC6
AMD Ryzen 9 7950x Processor: geni.us/IQopks
ASUS Z790 Hero Motherboard: geni.us/Wkx0
Gigabyte X670E Aorus Extreme Motherboard: geni.us/uUwno
MSI Pro Z790-P WiFi DDR4 Motherboard: geni.us/zOPbBUq
Nvidia GeForce RTX 4090 FE GPU: geni.us/4v1AZJ
And check out some of the RAM we tested:
G.Skill Trident Z DDR4 3600 CL14: geni.us/3hU2Gaq
G.Skill Trident Z5 5600 CL40: geni.us/41OzVMc
G.Skill Trident Z5 NEO DDR5 6000 CL30: geni.us/4OZOQG
G.Skill Trident Z5 DDR5 6800 CL34: geni.us/WIuJe
G.Skill Trident Z5 DDR5 7200 CL34: geni.us/QrzKq
Crucial DDR5 4800 CL40: geni.us/ZmTm
Crucial DDR5 5200 CL42: geni.us/ywIZkl
Purchases made through some store links may provide some compensation to Linus Media Group.
ddr3 with an i5 4690t lol
ddr3 with 4790k
Ballistix sport DDR4 2x8, Ryzen 5 1600. Paid $120, I be ballin all the way to the bank
ddr3 with a Phenom II 1045T
My laptop runs 2x 8GB Samsung DDR4 3200s, however the CPU (Pentium 6405U) only supports 2666. Great!
___
I recently got the computer in our lab working. DDR3-12800 (1600 MHz) 4GB x 2. Mixed manufacturers lol. i5 4th gen.
The fact that a company like Intel is pushing the frequencies just for the Big Numbers without working on better timings gives me that "V8 engine with a bicycle transmission" vibe.
they know big numbers sell better.
I remember back in the day advertisement going: "3.000 MEGAHERTZ !!!! 💪
But it was a Pentium 4 and you could just buy an Athlon 64 🤷♂
Wrong. Higher frequencies make a measurable difference when overclocking and tuning.
@@Felale So has an engine instead of your legs
They did the same with their CPUs for years whilst almost ignoring IPC. And that is exactly how AMD caught up, overtook them, and made Intel wake up.
Most of the Ryzen 7000 CPUs I've tested can't run 6400 reliably, and as such you shouldn't buy anything rated above 6200 for AM5 if you just want to use EXPO/XMP.
EDIT: I should also point out that depending on your luck, even DDR5-7200 might be a massive pain to stabilize with Intel CPUs. Plenty of CPUs and motherboards will straight up not run DDR5-7600 or higher.
I've seen 6000 expo unstable lmao
Thank you for this comment. Do you think RAM speed will matter even less on upcoming Ryzen 7000X 3D CPUs?
@@nepnep6894 Same, we need another BIOS update
6000 seems to be the highest thats stable in my experience
Feels like Ryzen 1000 all over again. CPUs that benefit from faster RAM but can't use it
revisit needed?
@@gershon9600 what’s the latest scoop?
If you're watching this in Q3 2023:
-DDR4 prices are about 30% of what's shown at 2:02
-DDR5-7800 at 5:12 is ~$230 vs ~$370 (~60%)
What do you recommend us NOW
DDR4 still so good with the price now
@@alwaleedalbahri4354 ddr5 7200 cl34 is perfect
@@datnguyen07Lol true ddr4 is completely fine, even ddr3 1600mhz is still ok in 2024 lol
@@TranHungDao.I back this, the 1600mhz Kingston Fury(2x4gb) with an Avexir Core Blue (2x4gb) doing decent with the i7-4790 that I handed down to my brother. Running 4x8gb GSkill ddr4 @3733mhz right now with my 5700x.
It's worth noting that whilst the 7950X has 64MB of L3 cache vs the 13900K's 36MB, the 13900K actually has a whopping 32MB of faster L2 cache, where the 7950X only has 16MB of L2. The 7950X also has only 1MB of L1 cache, and the 13900K has 2.1MB of L1 cache. It seems that we're seeing something we've seen in the past; more cache means you gain less from fast RAM.
That makes a lot of sense! It's sort of like how a faster SSD would make virtual memory faster, but if you just had more RAM you wouldn't need to use virtual memory as much.
Performance per watt AMD is just better, especially when you consider that for things like gaming the X3D models shit on everything else.
@@johnfrankster3244 Yeah, Intel is definitely more power hungry. I just bought an i7 12700K, but it's $50 CAD cheaper than the 7900X and comes with integrated graphics (the display won't blow out if a GPU is not connected, but that obviously doesn't matter as much), so I think it's worth it imo.
@@zxph the 7900X has integrated graphics too, but yeah, Intel is still a decent choice, especially if you already have an LGA1700 mobo.
@@DeepfriedBeans4492 You're totally right! Not sure what led me to think it didn't have onboard graphics... was probably looking at the wrong chip. Thanks
I would love to see benchmarks for simulation-type games, like Factorio for example, because timings can bring a lot more improvement in those games than in an FPS that relies mostly on the GPU.
Factorio's framerate is locked at 60 tho, maybe Satisfactory or Dyson Sphere Program! :p
@@phmu144 Factorio's UPS in very large megabase benchmark saves is a good choice for this. Factorio was shown to be significantly improved by the X3D AMD CPUs and is very dependent on RAM performance too.
@@phmu144 slightly incorrect - Factorio's FPS is locked to UPS, which is the game's update cycle, which is highly dependent on your CPU and RAM speeds, and is just capped at 60.
_Well, technically_ Yes, UPS is capped at 60, but you can use a console command to speed up the game. Yeah I know you wouldn't do that normally, so it's not a useful test for how well it can run Factorio normally, but might be useful to see how big you can make a base for super-post-endgame, or some of those huge mods
I second this, not all games are bottlenecked on GPU, some have heavy memory demands like Factorio
In general, the more an application is bound by the CPU, the higher the chance that better memory speeds will help; it's why AMD's 3D V-Cache has been well received, as more code can be kept closer to the CPU for increased performance.
AMD X3D decreases the benefit of higher RAM clocks.
If you have a 5800X3D you get all the benefit directly from cache, not from RAM; only software and hardware optimization can push both to the limits.
Extreme RAM kits are useless now with X3D.
@@rikycesari6600 cache is just 100 MB, the rest is in the RAM
I'm so glad you included the latency formula around 4:15.
That latency actually tells you what kind of chips the memory has and the timing is basically manufacturing tolerances for the wiring.
DDR5 6000 MT/s CL30 has literally the same memory chips as DDR5 6800 MT/s CL34, just different XMP profile. That's why checking out that CL number is so important while buying RAM.
And the general rule is that if the software (game or app) is written so that the most used data fits in the L1+L2 cache, memory latency doesn't matter and bandwidth is more important. If however, you're running software that needs to access more data than can fit in your CPU cache, higher latency memory will hurt a lot.
As most users have high latency memory because it's cheaper, well optimized games typically run just fine with high latency memory. However, if your favorite game happens to be poorly optimized one, you'll be out of luck with high latency memory.
I'd say go with the cheapest DDR5 RAM that can get you around 12 ns using the formula at 4:15.
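A minimal sketch of that ~12 ns rule of thumb, using the formula from the video (latency in ns = CL x 2000 / data rate in MT/s); the kit list is just a sample:

```python
def first_word_latency_ns(cl: int, data_rate_mts: int) -> float:
    # CL is in clock cycles; DDR transfers twice per clock, hence the 2000 factor
    return cl * 2000 / data_rate_mts

kits = {
    "DDR5-4800 CL40": (40, 4800),
    "DDR5-5600 CL28": (28, 5600),
    "DDR5-6000 CL30": (30, 6000),
    "DDR5-7200 CL34": (34, 7200),
}

for name, (cl, rate) in kits.items():
    ns = first_word_latency_ns(cl, rate)
    verdict = "under ~12 ns" if ns <= 12 else "too slow"
    print(f"{name}: {ns:.2f} ns ({verdict})")
```

By this measure a DDR5-6000 CL30 kit lands at 10 ns, while a budget DDR5-4800 CL40 kit misses the 12 ns target at ~16.7 ns.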
Slight correction on the formula in the video: CAS latency has units of clock cycles, not nanoseconds.
This is just not true; 6800MT/s is almost guaranteed to be SK Hynix A-die, but a 6000MT/s kit can be Samsung B-die or Hynix M- or A-die.
"First word latency" doesn't matter much, if at all. I would say only thing tCL is useful now is checking if a 5600-rated kit is samsung or micron.
I would bet you could get considerable gains in some games if you tightened subtimings, but little to none if you tightened primaries.
@@juliuss2056 They also divide by the data-rate, but the command rate is half of the data rate. Thinking the CL is nanoseconds instead of clock cycles when the topic is comparing memory sticks with different clock frequency is also off. The entire formula is questionable at best.
@@hehefunnysharkgoa9515 The formula is correct though? That's what the multiplication by 2000 accounts for. The only mistake is the mislabeling of the CL units.
@@whatanoob96 It's correct, but it's not intuitive. The CAS measures command-rate clock cycles, so doing clock cycles / clock rate makes sense. clock cycles * 2000 / data rate however takes a detour and Linus doesn't explain it well. Ultimately all that formula is really saying is CAS / CR (mhz) * 1000, which is more obvious when it's written as CAS / CR (ghz). For 34 CAS and 3300mhz CR that's 34 / 3.3, vs doing (34 * 2000) / 6600.
1:19 that didn't age too well XD
You are always on time with these videos/subjects!! Thanks LTT Team!!
The first Zen CPUs weren't the most stable thing with DDR4 speeds. Zen+ improved that a bit, but only with Zen 2 could AMD really push memory overclocking, stability, and performance altogether. As Zen 4 is their first DDR5 CPU, maybe on Zen 5 we'll see 7000+ MT/s memory working wonders without hassle and stretching the performance levels.
I believe we will only see the true gains of higher DDR5 speeds a few years later, when developers have long stopped making games for previous-gen consoles and graphics cards are even more powerful. Remember that even Hogwarts Legacy is still coming to PS4/Xbox One and even the freaking Switch, which has trouble running Pokemon...
I'm thinking about something like Spiderman 2 with RT on with an RTX 6090 or RX 8900 XT or Crysis 4, the next Tomb Raider or maybe Assassin's Creed Codename Red.
@@valentinvas6454 the Switch is weak, but Pokemon running badly is not because of the Switch. Pokemon was released as a broken game.
@@valentinvas6454 It's not the development tools that don't utilize the faster ram. So no. That will not make a difference
AMD actually works better at lower clock speeds.
It's better to match the memory speed with the CPU's memory controller speed than to increase the speed.
At least for DDR4; DDR5 will probably be the same.
It's why I didn't go for their Zen 5; I know the amount of headache Zen 1 had with memory.
Considering I NEEDED to use DDR5 on my AM5 build, I'm glad I got it for free as a deal Microcenter runs when you buy both the processor and motherboard there. The GSkill Flare X5 is good running at 6000 speed and for free I can't really complain. Glad to see this being done though, I like to see how much I'm being ripped off for in tech 🤣Nicely done Linus!
The free 6000 MT/s gskill ram is what pushed me to go with a 7700x over a 5800x3d. Plus I'm solely a SFFPC person so itx boards were always expensive. I think I paid a whole 20 bucks more for my ASRock b650e compared to my b450 Asus strix.
@@Rational_Redneck AM5 is finally a reality now thanks to those B650 boards... but they still are overpriced. They promised ones hitting as low as $125US; I haven't seen one under $180US yet. Still, it's worth it to have future upgradeability, at least to me.
@@Rational_Redneck would have loved to get that deal, but I'm 300 miles away from the closest Microcenter.
Damn, wish i lived near a microcenter
@@adobo777 hm, I live 6004km away from the closest Microcenter... that's 3730 miles :I
What's also interesting is just the scaling in general being so small on Intel for raw frequency. I mean, it's almost 3000 MT/s higher in some of these benchmarks. That's almost double, and it barely breaks 1-5% total.
Whereas on AMD we only saw 4800-6400, which is 1600 MT/s, and saw 10%+.
It's weird things like that where you often wonder why there'd be such a stark, drastic difference.
Intel doesn't really care about fast RAM as much as AMD; 4000MHz CL15 in Gear 1 beats everything below 7800MHz in gaming, and even higher if you get a lucky bin and are able to run even higher frequencies in Gear 1
Since Ryzen released it's been known that AMD chips need faster RAM, but no clue if that means they are being held back by lower speeds, or they unlock higher speeds by using faster RAM. The former means Intel chips are more "efficient" at using lower RAM speeds, and the latter means Intel chips aren't optimized for faster speeds
Intel's gains are literally just the difference from the faster RAM. Back in the DDR4 days, AMD's Infinity Fabric would be tied to the speed of the RAM. That's why higher RAM speed outright overclocks the CPU. It is likely that this continues to be the case with DDR5, although I am not sure. You can also see the 5800X3D not caring about higher RAM speeds, because the additional cache reduces the need to care about latency and bandwidth from the RAM.
It absolutely makes sense if you know how RAM works. The chip takes X amount of ns to do something. The module manufacturer will choose a speed at which the chip can run, and calculate the number of cycles the operation will take at that speed, so it covers the same amount of time in ns. For example, an operation taking 13.3 ns takes 24 cycles at 3600 MT/s and 48 cycles at 7200 MT/s, and the performance will be more or less identical.
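The arithmetic above checks out; a one-liner makes it explicit, using the same cycles-to-nanoseconds conversion as the video's latency formula:

```python
def cycles_to_ns(cycles: int, data_rate_mts: int) -> float:
    # one memory clock cycle lasts 2000 / data_rate ns (DDR: two transfers per clock)
    return cycles * 2000 / data_rate_mts

# the same ~13.3 ns operation expressed at two different data rates
print(cycles_to_ns(24, 3600))  # ~13.33 ns at 3600 MT/s
print(cycles_to_ns(48, 7200))  # ~13.33 ns at 7200 MT/s
```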
@@dex6316 This is one of the wrongest comments I've ever seen on YouTube from start to finish.
That was interesting. I have 6000MHz CL36 because it's what was part of a bundle deal when I upgraded to AM5. Didn't think about it much since the CPU improvement was my main goal anyways. GPU bottlenecked for now until the bank account refills.
I’d love to see how DDR5 kits work with integrated graphics and shared memory GPUs in laptops.
🤣😂
@@G-YEZZUZZ bruv chill
The subtimings have the biggest impact on performance, far more than the CAS latency has. First word latency is also a bit of a misnomer, it only applies for an already open row, a large proportion of memory accesses require you to open the row first so RCD has to be added before CAS to get the latency of the operation. Also, back to back memory operations are probably the biggest latency penalty, these all are limited by the subtimings.
Subtimings? How about primary timings? This is the obligatory complaint that non-RGB 3600 CL16 isn't on this chart, since that's only $100/32GB, much much cheaper than this RGB CL14 stuff, which is more than twice the price. That totally nullifies Linus' statement about DDR5 and DDR4 being the same price.
@@Mr.Morden that statement about cheaper DDR4 is true, but primary timings actually don't matter very much on DDR4 and DDR5; subtimings like tRRD and tRFC matter much more.
Even in this context it still doesn't make sense how an extra 3000 MT/s barely breaks 5%. Like, that's almost double the speed.
@@Savitarax When you have enough memory bandwidth then the extra doesn't help at all. Kind of like having 13900K with Geforce 2060 doesn't make games run faster than having it with 12900K. There are probably some workloads that are really hard to be cached that would benefit a ton from higher bandwidth but those are really rare exception.
@@Savitarax Clock speed alone means nothing. I can do one addition in one second (i.e. 1Hz), but I could also do half an addition in half a second (2Hz). That's double the frequency, but the amount of actual work being done is the same. You have to include the timings, as the video says. The timings are (proportional to) how many clock cycles it takes to access memory. To oversimplify, a 2000MHz CL10 kit and a 4000MHz CL20 kit would be about equal in performance, despite one having double the frequency.
Love to see results from the lab, great work from all the team!
I initially had a CL30 5600 kit in my new 13700k build, but the prices dropped a lot while I was still within the return window so I swapped the RAM for a CL32 6000 kit.
There was a tiny performance increase (calculated latency is almost the same) but a dramatic consistency improvement -- my tests went from 4% variance between results to 1%.
I haven't toyed around with overclocking yet, but will at some point.
be warned tho, ram overclocking can tax your sanity
@@tictechto especially overclocking DDR5
@@Madi_Ernar exactly, lol.
so would u say bandwidth is more noticeable than latency?
@@mikeramos91 The improvement I'm seeing might be due to the bandwidth, might be due to the timing differences, or might be purely lucking out in the silicon lottery.
With a sample size of just 1, I cannot say.
Well, I'm still using my build from 2010 with DDR2. This really helped me pick out what I need for my next build. Thanks for the help!
Don't lie
@@FightRayTV I have a friend with ddr3 so I can believe him easily; not everyone plays AAA games
@@G0A7 Are you from a third world country? no offense. Besides, DDR3 is more recent than DDR2, it's more believable.
Today's software can barely run on DDR2 RAM...
@@FightRayTVWhy would he be lying about DDR2?
@@someoneelse4811 In order to get likes, or just an idiot.
I grabbed a kit of 5600 CL28 memory for a great price and it's running flawlessly with my 7900X. 5600 seems to be the sweet spot right now for cost to performance, so I went with the lowest-latency kit I could find and I am very happy with my decision.
Mind sending me a link to your kit? I am trying to find an AMD set with CL28, and I haven't found a single one.
@@caedon0 Should've gone with a 6000 CL30
does aliexpress have these low cl memories?
How do i know the latency when buying ram?
Which ram did you buy, what brand and from where?
What's been most curious is how RAM can be used to approximate DirectStorage in some games. For example, Returnal on PC requires a ton of RAM rather than asking for DirectStorage.
@@RecRoom_Stuff Wait, did the others delete their comments?
@@Neoxon619 either a chat mod removed them or YouTube's bot-catching system is working for once
It doesn't approximate direct storage
@@inoob26 LTT has a bot made by another YouTuber to detect bots and kill them; that's why they don't get so many of them
@@inoob26 If I had to bet, I think what happened was the first option. YouTube hasn't done anything meaningful to combat bots so far
Would be cool to see 1440p/4K numbers - in my experience, beyond 1080P, the difference between DDR4 and DDR5 in games becomes almost negligible.
So why would you want to test those? Higher resolutions would make it a GPU bottleneck.
Also, we have to keep in mind that Linus is using a top-end CPU and GPU. For the majority of people, RAM will never be the bottleneck, whether they are using DDR4 or DDR5. Their performance will be limited by the CPU or GPU, and which RAM kit they use will hardly affect performance.
probably didn't do it out of laziness lol
Literally pointless to do higher Res. You literally won't learn anything extra than you just did at 1080p.
@@veth10 Because this is a meaningless scenario. Why would you buy super fast ram, a super fast CPU, then use a garbage gpu and a 1080p screen?
0:30 that has to be a war crime
i felt a lot of pain inside after seeing that
0:01 Linus: HEY HEY EVERYONE!! IT'S ME!! THE DROP TIPS MAN!
The whole video took me back a bit to the days when I was still going to benching sessions myself, dabbling in XOC, and my main system was based on a Classified SR2, two X5690 and 2 Mo-Ra 2 radiators, and a 60 liter barrel as an expansion tank. Good old days. Back then I bought and sold quite a few Corsair Dominator GT 2x2 and 3x2 GB kits, always looking for Rev. 7.1A kits. They had the good ICs on them. With water cooling (they didn't like sub-zero temperatures, but also not when they got hotter than 50°C) I could run them at about 2050 MHz and CL6-7-6-20. What a great time, and always selecting out which module is now the bottleneck. Such nice memories. Oh how I miss those days when you could afford PC hardware without having to sell your kidney.
Today all the junk is so expensive and yet none of the sets shown come close to the 5.85ns of my Dominator GT.
I once wanted to sell my whole setup, today I'm glad I didn't, because videos like this show me what sentimental value the system had and still has for me.
Linus, I am a huge fan of the channel. Can you also show how SolidWorks, Ansys, or any CAD and simulation software can take advantage of faster RAM? It would be very helpful to our engineering community 😊
That would be a perfect comprehensive review. My Windows device is primarily for CAD and simulation software, not editing software, too.
No; unless you are running dynamics simulations, pure CAD drawing needs no more RAM or CPU than it did in 2006
I’m a drafter that works in a large manufacturing plant. We run Solid Edge, but I’ve used Solidworks, inventor, and Fusion 360 a lot.
If you're not doing sims or rendering, GPU doesn't matter, CPU doesn't really matter, and RAM is firmly in the "have enough so you don't hit 100% usage" category. Modern CAD programs are horrendously optimized to take advantage of newer technology, and anyone who says otherwise isn't doing modeling and drawings 40-60 hours a week. We've quit ordering workstations with Quadro cards at work because the graphics card is literally never above 2-3% usage at any point.
@@saab9251 Is doing sims not an integral part of CAD work? Or maybe there are specialized servers for running simulations.
Subtimings, at least on Intel, make a much bigger difference than they did with DDR4 so this might be something worth investigating. Glad to see a succinct, useful video like this come up.
With ddr4 they already were the most important timings, now with ddr5 the primaries are literally just for show.
That is the reason why you should use the XMP2 profile instead of the XMP1 profile. Too many search results are still saying that XMP1 and XMP2 are the same.
Correct, and also on AMD Ryzen. Much more than CAS.
Really sad to see the DDR5 Zen 4 stability issues, but glad you called them out. I've run through at least 6 RAM sticks trying to get a stable system. If it's not a boot issue, it's a game/application crash.
@@pixels_per_inch I run memtest86+ on all my memory. I've had one kit that was bad. I'm assuming the rest were mainly because of AM5/DDR5 immaturity.
I'm also finding that my BIOS on the ASUS ROG X670E-I Gaming doesn't update when I've changed RAM. For example, I'll switch from a CAS 36 stick to CAS 28 and my bios will still have CAS 36 timings. I have to completely wipe my bios each time. I think that's the source of my RAM issues right now. Might actually be a motherboard issue.
Would be interested to see how memory speeds help on mid and low end machines; most people don't have a 3090.
I don't even know why they measured FPS gains in this video. Ram will barely affect your FPS. It's really intended to help with loading times, so it's not likely to make any significant difference on low end or mid tier machines either.
@@TorQueMoD well that's just demonstrably false. Faster RAM provides the data to the GPU faster, which gets you more frames. Granted, at a certain point there is a dramatic drop-off in gains, once you reach the max of what a GPU can handle.
However, as more and more games adopt direct storage, we'll see that be less and less of an issue. Direct storage bypasses the RAM altogether.
Could you chart this in 2 dimensions? Price on X axis, 1% lows vs avg could be represented using vertical bars, performance on Y axis. It would be interesting to see cost to performance ratio. Representing data as a big list of specs on the left is not too easy to digest - most people just care about price.
You can't do a Linode spot without saying it like Dawid Does Tech Things....Liiiiinoooooooooooooode! Good stuff guys!
Whenever there is a new episode of Dawid Does Tech Things and I see that "Linoooooooo......d." is not the sponsor my disappointment is immense
Lenooooooooooooooooo... vo
Very informative video, I hope you guys have time to maybe test this for AMD once the X3D parts come out. 👍
The IMC shouldn't change between these launches. The main difference will probably just be the cache.
The fact that you managed to get 6400 sticks to run on AMD means you have a golden chip; AMD recommends staying at 6000 for optimal performance (Infinity Fabric), and sometimes stabilizing even 6133 or 6200 can be a challenge.
fr
Idk, I don't think it's that rare; I'm running 6400CL32 with my 7700X at 5.3GHz and it runs like a charm. My RAM isn't even EXPO btw and was meant for Intel CPUs
They didn't actually run at 6400; in the video they said it couldn't do some of the tasks and crashed.
@@perceptivity_ have you tried any real stability tests? tm5, ycruncher, linpack?
@@hector6264 none of these, still haven't run into any issue with my ram so far
Something I wasn't aware of before buying a new Ryzen 7000 system: it is apparently normal to have a 40 second delay when turning on the machine, and this is to do with DDR5 memory training. That's before POST with no video signal.
yep, you can try to disable memory training with "memory context restore".
I'm very new to the channel, and this is the first channel where the sponsorship and ad placement don't annoy me or feel rammed down the throat. Whatever their formula is, I hope they keep doing it.
The masterful segues literally make me watch the ads bro
Matters on my 7950X for sure. Getting my timings tightened and optimized for 6000 made a big difference. The kit is the G.Skill Neo SK Hynix, but the AMD EXPO mode was trash. Manual subtimings were a game changer.
Same here.. my Ryzen 7700X was decent with DDR5 5200MHz..
OCing the RAM to 5600MHz improved it a little..
I did some aggressive timing + sub-timing tuning & it showed some noticeable difference..
Couldn't go beyond 5600MHz because Samsung DDR5 is meh.. & I just ordered a Hynix 6400 kit.. will arrive in 2 days..
I'll do some extreme OC & see how it works.. I hope I get some Hynix A-die (best DDR5 dies currently); the newest G.Skill kits have A-die in 2023
I love this type of video, diving a little deeper into how and why ram does what it does. I would love a video (or channel if I may be so bold) that dives really deep into how ram or any tech works. Either way I think the lab will help you guys create videos that scratch that itch for me, looking forward to the upcoming content!
Not sure if you meant this detailed, but I gotchu: th-cam.com/video/7J7X7aZvMXQ/w-d-xo.html
@@qvkervox that's exactly what I meant, thank you lol. I probably could have just searched it up myself
I managed to snag 32GB of DDR5 5200MHz RAM for a grand total of ~$130 during last year's Black Friday, when I was looking into parts for my new AM5 build. This was a serious upgrade from my previous 16GB of DDR3 3200MHz. It's kind of hard for me to compare what impact it has had since I did a full platform upgrade, but having an all-around powerful build feels a lot more stable. And I likely won't have to upgrade my RAM for a very long time.
Peasants and their ddr3 16gb, i use ddr2 8gb
DDR5-5200 is on par with DDR4-2400 memory; it's not quite at the official JEDEC base speed, but close to it. DDR5-6000 would provide a significant performance upgrade.
Great video! I just wanted to chime in on the discussion about RAM speed and latency. It's important to note that RAM speed, measured in MHz, determines the number of transfers of data per second, while latency, measured in nanoseconds, determines the time it takes for the RAM to respond to a request for data. When building a computer, it's crucial to strike a balance between these two factors, as having too much of one and not enough of the other can lead to bottlenecks in performance. Thanks for enlightening us on this topic!
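To make the tradeoff described above concrete, here's a minimal Python sketch (the helper names are my own, just for illustration) that computes theoretical peak per-channel bandwidth and first-word latency for two example kits from the video:

```python
def peak_bandwidth_gb_s(transfer_rate_mts):
    # 64-bit (8-byte) bus per channel: bytes moved per second at the rated transfer rate
    return transfer_rate_mts * 8 / 1000

def first_word_latency_ns(transfer_rate_mts, cas_cycles):
    # DDR clock runs at half the transfer rate, so latency in ns = CL / (MT/s / 2 / 1000)
    return 2000 * cas_cycles / transfer_rate_mts

# DDR4-3600 CL14: less bandwidth, lower latency
print(peak_bandwidth_gb_s(3600), first_word_latency_ns(3600, 14))  # ~28.8 GB/s, ~7.8 ns
# DDR5-6000 CL30: more bandwidth, higher latency
print(peak_bandwidth_gb_s(6000), first_word_latency_ns(6000, 30))  # ~48.0 GB/s, 10.0 ns
```

Neither number alone tells you which kit is "faster"; as the comment says, real performance depends on balancing the two.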
Just buy biggest ram speed and lowest latency
DDR5 is more complex/advanced with 2x the bank groups compared to DDR4 (different architecture)
The majority of people don't realize that DDR5 primary timings have a lower impact compared to DDR4 primary timings
In fact.. sub-timings have a bigger impact on DDR5.. u can tune a 6000 kit from CL40 to CL30 & barely get a 1% difference
But with tuned sub-timings you will get a bigger improvement, in games especially
My Samsung 5600 CL38 has like 76ns latency.. tuned the primary timings down to CL30 & only got 73ns (3ns difference)
I tried sub-timings & got it to go from 73ns to 64ns (9ns difference).. & that was with only 5600.. with a DDR5 Hynix die u can get around 55ns
Fun fact: Spider-Man with the slowest 4800 DDR5 can match the best 3800 DDR4.. & with 6000+ tuned timings, it can get like a 20% boost in lows (smoother)
what values u have to change to get less delay
@carlososwaldocasimiroferna2631 nearly all of them.. check OC tutorials & try to find out what kind of RAM die you have (Hynix is the best, Samsung comes second, Micron is last.. at least for DDR5)
Then look for tutorials based on the die manufacturer & run some stability benchmarks & see
@@Jack_Sparrow131 i got a G.Skill Trident Z5 6400MHz CL32, it's Hynix, but do u have a specific tutorial with a good explanation about RAM OC?
Would be interested to see the lowest worst frames like 0.1%. Reducing stutters, latency and smoothness is more important to me than FPS.
Same
I fully agree! Especially when a person basically just doesn't really notice the improvement from 120 to 240fps, the stuttering, jittering and latency are really important for immersion
@@halomika4973 what? people can easily tell the difference between 120 and 240
@@danieltiger8789 Provide good evidence that supports your statement, and I'll believe you, friend. Have a nice day.
@@halomika4973 you probably used to say the eye can't see past 30fps because some scientist said so lol
would like to see a revisit of the potential usefulness of a 'ram drive' implementation with this new gen
Good idea bro.
I wish you guys could do another in depth guide on how to OC and tighten ram timings.
You're better off with a channel that has better knowledge of such things than LTT. Actually Hardcore Overclocking, for example. I don't go to my barber to get dental work done. Don't use LTT for in-depth OC, use an OC-focussed info source.
i mean OC is basically the same everywhere: set values and stability test
@@noxious89123 yeah used to visit overclocker forums back then, spent money to icafes and all, great times
Very nice video on an interesting subject with lots of charts and valuable info, but without a clear, concise and accurate conclusion for the average customer!
Finally. Been waiting for this proper comparison of DDR4 to DDR5.
Normally LTTs comparisons are reasonable, however I don't think it's reasonable to use a 3600 CL14 kit to represent price-to-performance for DDR4. Unless something has changed in the last few weeks a 3600 CL16 kit would have been significantly less expensive with minimal overall speed impact.
You would be happy to find 3200CL18 kits at reasonable prices these days. The cheap DDR4 is all around 3200CL22
@@igelbofh silicon power sells 3200 C16 for dirt cheap, and it's less likely to have a dogshit IC than a jedec kit
no, using that ram was OK because it compares to 4000 CL 18.
@@igelbofh. What’s cheap for you? 32 gb cl18 3600 for $75 US is possible.
RAM is definitely used for gaming, but it's mainly for performing multiple tasks at once. In the long run, bigger size and faster speeds are best for a system you don't want to worry about for years.
Yeah no shit
Games are starting to need more than 16tb too
I might have misunderstood your comment but ram is not only used for gaming
@@Phantom-kc9ly I know you meant 16Gb but 16Tb of memory would be hilarious
@@timbit2121 the day of zero loading screen with games installed on ram
I love the lab. Doing the tests we all need done! About to build a new system with the new ryzen series and now I know the sweet spot.
i told my psychiatrist im afraid of random numbers and letters she told me to never look up Linus Tech Tips videos, i wish i listened to her
I like that y’all are bringing in equations and talking about clock cycles. These shouldn’t be considered advanced topics
10:07 Riley, I believe the word you were looking for is “Respectively”. Unless you actually respect the transfer speeds. 😂
Fun fact: Since AMD's infinity fabric bottlenecks the bandwidth of the memory subsystem, the increase in ram speed is less noticeable. That's why the 5600 C28 kit performs so "well"
I feel like people keep repeating this without understanding what infinity fabric is. Zen 1-2 CPUs have 4 cores per CCX, and Zen 3 has 8 cores per CCX. On CPUs that have more cores than what's on a single CCX is where you see an increased benefit over Intel CPUs when it comes to RAM speed.
With Zen 4, the memory controller clock has been decoupled from the infinity fabric clock, so that no longer affects the memory*. By default, the fclock is 2000 MHz and can be raised to about 2133 - 2167 MHz, from what I've seen. uclock (memory controller clock) runs 1:1 with mclock (memory clock, MT/s rated speed divided by 2) up to 6200 MT/s and switches to a 1:2 ratio at 6400, but can be switched back to 1:1 (and on my 7900X system, it works without issues). *However, there are a few fclock states that do affect the memory, when the memory controller is at certain clock speeds. When running 6000 MT/s memory, fclock at 2033 pushes the uclock and mclock higher by 50 MHz, thus putting the memory into a pseudo 6100 MT/s state. I first thought it was a bug, but performance improves measurably. The same thing happens at 6400 MT/s and 2167 MHz fclock, which puts the memory into 6500 MT/s mode. For me, that becomes unstable regardless of timings, so I'm sticking with 6400 MT/s and 2133 MHz fclock. (Going from 2000 MHz to 2133 MHz gives about 1ns reduced latency, so it's not a lot, but measurable.)
@@lievre460That’s because each CCD has its own link through the IOD, increasing the overall bandwidth
But it’s still a pretty noticeable cap on single-CCD CPUs
RAM also bottlenecks the infinity fabric.
@@TonkarzOfSolSystem care to explain?
RT-heavy games seem to benefit the most. Curious to see how Zen 4 3D does since the cache might do more of the heavy lifting
These are the videos we need more of...what's best, best bang for buck etc
Love the research basis of some of your videos when needed. Looking forward to the future!!
It's mostly a "for science" thing, but I would've liked to see the correlation between RAM speed performance and iGPUs.
@@bradhaines3142 iGPUs do not have their own ram. They use system memory.
@@bradhaines3142 Oh, then your initial comment just sounds like you're telling OP water is wet. I agree with them, would've been interesting to see how much of an uplift you get out of bumping RAM speeds.
Unfortunately, AMD hasn't released their new APUs yet, so it'd be a comparison using their incredibly small 2 CU graphics.
@@Gabu_ Yeah and I'm not terribly excited for first gen AM5 APUs. Really high platform entry cost and mediocre RAM support so far on AM5 makes it feel like they'll be duds outside of laptops. Could be sweet on laptops though if they're highly efficient.
@@UranTCG Platform entry isn't really high, only mildly higher than AM4. RAM support isn't mediocre either, just don't expect to run 6400+ speeds.
It's nice to see LTT finally coming back to actually providing tech tips and not just reviewing various tech products.
I’m so glad you made this video. I’m upgrading to AM5 and I ended up buying 32gb of DDR5 7200Mhz for about $300 but it also included a 2TB M.2 drive so I thought that was worth it cause I needed the extra storage anyway. Even though I won’t be able to hit 7200mhz I still got a pretty good deal. Just keep it at 6400mhz and I’m good to go
Wow that is a big rip off
Definitely need this video to get a follow up in roughly 1 year from now
That last transition to the sponsor segment was so funny. XD
Great video, and very informative, thank you LTT.
I've always gone by the rule of thumb: if it has a 10ns CAS latency, it is probably "good enough" for an average (i.e. non-benchmarking/overclocking) use case.
Cas latency does almost nothing on DDR5. The sub and tertiary timings do which are on auto (garbage) with xmp/expo. The jump from xmp to manual oc on ddr5 is far larger than jedec to xmp.
What does the Rule of Thumb mean? I have never heard of that.. also how do you know if your RAM's CAS latency is under 10ns? My 3200mhz CL14 32gb sticks of ram for my 5800X are finally working stable set with DOCP and 1.375 dram voltage.
@@casedistorted Tight timings are normally an indicator of a good (or at least not shit) stick.
Your stick, for example, has a CAS latency below 10ns.
CL is the CAS latency in clock cycles, i.e. 14 clocks.
RAM is DDR, double data rate, so you have to take your speed in GHz halved; i.e. your 3.2GHz data-rate stick is really 1.6GHz clock rate.
In those units it gets easy to compare.
1.6GHz and a CL of 14 means below 10ns (8.75ns to be exact). It's just CL divided by half the speed in GHz.
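The calculation described above can be sketched in a couple of lines of Python (the function name is mine, just for illustration):

```python
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    # DDR clock runs at half the transfer rate, so:
    # latency = CL / (MT/s / 2 / 1000)  ==  2000 * CL / MT/s
    return 2000 * cas_cycles / transfer_rate_mts

print(cas_latency_ns(3200, 14))  # DDR4-3200 CL14 -> 8.75 ns
print(cas_latency_ns(6000, 30))  # DDR5-6000 CL30 -> 10.0 ns
```

This is why a "slower" DDR4-3200 CL14 kit can answer a first request faster than a DDR5-6000 CL30 kit.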
Wendell from Level1Techs mentioned that the integrated memory controller (IMC) is what Intel binned for with the 13900KS vs binning for the P-cores in the 12900KS. I wonder if the 13900KS with the better IMC would show bigger gains during gaming at higher RAM speeds.
Wendell also mentioned that the 5800X3D didn't see much difference in performance with 3200 vs 3600 mhz ram. I wonder if that will stay the same for the 7000 series X3D CPUs.
Thank you LTT for telling me which DDR5 ram I can get when I currently have ddr3! 👍
this is the type of videos from LTT that i enjoy and educated me for so many years.
Got the "G.Skill Trident Z5 Neo Series 32 GB (2x 16 GB) DDR5 5600 MHz CL28" last December for under 200€. I paired it with a Ryzen 9 7950X. Happy to see I made the right choice. Although it was a pain to have the kit recognized by the MSI motherboard, and I even had to update the BIOS to make it accept a second kit that I bought later to reach 64GB of RAM.
im not 100% sure how it is with DDR5 or your mainboard+7950X, but be aware that it usually was better to have 2x 32GB modules than 4x 16GB modules. Yes, it's kind of a cheaper upgrade path if you REALLY(!!!) need this amount of RAM.
Performance-wise it's better to stick with two sticks, because one DIMM per channel performs better than two DIMMs per channel (four sticks is still dual channel, but it's harder on the memory controller). And in addition, cheap/mid-range mainboards and CPUs don't always offer you the max speeds with 4x sticks.
For example: My younger brother's PC has a Ryzen 2700X with an X470 board (MSI Gaming Plus). The board specs say:
DDR4 MEMORY: 1866/ 2133/ 2400/ 2667 Mhz by JEDEC, and 2667/ 2800/ 2933/ 3000/ 3066/ 3200/ 3466 Mhz by A-XMP mode
MEMORY CHANNEL: DUAL
DIMM SLOTS: 4
MAX MEMORY (GB): 64
He has 4x 8GB 3200mhz sticks inside, and judging by the mainboard specs XMP/EXPO mode should work perfectly fine at 3200mhz. But the reality is that those specs only seem to be valid for a 2x-stick configuration. When trying to enable XMP/EXPO at 3200mhz with 4x sticks the system is incredibly unstable, and it basically doesn't make any sense to run it in this mode, hence losing performance which you paid for with the RAM. Only 2933mhz worked stable with 4x sticks.
I tuned my 3600 cl14 kit for my 5900x.
Totally worth the day and a half of crashing, retuning, and trying again.
Userbenchmark puts my system in the top 5% for identical builds, and cpu top 1% for 5900x.
Thank you LTT, for another banger 😎
stfu you havent even watched the video bitch.
DDR5? Time flies, I'm still playing the original arcade version.
Damn I am still on an abacus with only 6 rows
I'm just strolling up from the old "RAM speed doesn't matter" video to the "we were wrong, it can matter" video to this "does it REALLY matter now with DDR5" video? lol I'm just on this DOES IT MATTER?! rollercoaster. Love the segue, favorite to date
I love the shift to the sponsor, reminds me of Top Gear's "some might say" for the Stig! 😅
Was about to get 2x16GB 5200MHz CL40 but decided to swap it for 2x32GB 4800MHz CL40, seeing as I don't game as much but run a lot of virtual machines. The latter cost $48 CAD more, so I think that's pretty damn good considering VMs eat RAM for breakfast and the difference in first-word latency only comes out to ~1.3ns, so I basically paid $48 bucks for another 32GB of RAM.
What do you do with virtual machines
@@blastyouof use it to play around with Active Directory, mostly.
At least for my 5900XT, there was a big difference between the XMP and the timings from the specific AMD calculator I used.
The XMP settings seem very much designed for Intel, in my experience.
Ah, yes, the fabled 5900XT...
@@selohcin maybe he is in sea and has a 5900x3d sample, but i doubt it.
I vaguely remember you mentioning factorio ran better with better memory, would that have been something valuable to test?
Factorio might benefit from AMD's new V-Cache chips.
For DDR4 ram it's best to go with no more than 3000Mhz at 16GB or 32GB sticks, getting all 4 sticks, and with DDR5 it's best to go with 5000Mhz. Unless you are an overclocker no one can tell the difference anyway.
i chose DDR5 4800 on my alderlake setup for the increased bandwidth. Sure, 7000 kits twice as expensive have even better timings but I only game at 60 fps and will gladly play Elden Ring 2 at 60 as well.
Bro rams getting faster than my dad leaving me
Your memory is supposed to match your CPU bus with the lowest possible latency. Memory faster than your CPU is going backwards because it generally means higher latency for speeds that will never be utilized.
How do you know what speed your cpu bus is running at?
The only time I ever saw an improvement from RAM speed was using the old A10 APU's - which did noticeably better at 2400Mhz than at 1600. But that's due to the graphics architecture.
Still have that thing. It's a good bluray player.
I love you guys. Truly, you have built an amazing team. We know you are pessimistic with selective optimism. That's why we like you.
I am so glad the sponsoring product reaches its read and write performance "respectfully". While I think there are many aspects of Korean culture I don't yet grasp, respect does appear to be a very important concept from what I've experienced. There does seem to be a concept of being respectable (in non-verbal ways indicating your own worthiness/capacity-for of being respected by others). With respect to this SSD, if the read and write performance has different metrics when it is being respectful and when being respectable, it would be good to know the respective metrics.
I mean this in jest, and in a most respectful way - writing copy that would be scrutinized by the internet (and your audience is probably more pedantic than most) seems like it would be a special kind of torture.
FANTASTIC VIDEO! But what about RAM size vs performance? Like 64GB (2x 32GB sticks) seems to come with higher CL than 32GB (2x 16GB stick) kits; does the extra RAM capacity offset the looser timings in gaming performance?
I'm planning on getting a AMD 7950X3D when they come out later this month. I was planning on getting a DDR5 64GB RAM kit, to use in this new rig which is primarily a gaming PC with an RTX4090. Any specific suggestions on the best 64GB RAM kit?
First, why the hell are you asking this question here, and second, if you are planning to just game, 16gb is enough and I recommend CL28-CL32 ram at 5200-6000
RAM capacity doesn't directly impact performance at all, if you have enough RAM you have enough.
64 GB is directly counterproductive if you don't need it for productivity workloads, since as you point out the kits are slower. 16 GB is just barely enough (the minimum requirement for new games), 32 GB is the sweet spot.
2x32gb ddr5 can run at the same speed as 2x16gb. But 128gb with 4 sticks will definitely not run. DDR5 can't handle 4 sticks well for now. Shame.
@@Verroll I'm guessing you didn't see the Linus video on them now recommending at least 32GB for gaming PCs now?
On AMD, isn't memory speed tied to Infinity Fabric speed? Is this a major limitation? Some explanation of other timing numbers (sub-timings?) would be helpful as well. I see a lot of DDR5-6000 kits at CL30 or CL32 but with a ton of variance on the following three values.
That was true for Zen 1 through 3. With Zen 4 (Ryzen 7000), the memory clock is no longer tied to the infinity fabric clock so you can safely clock them at different speeds without losing performance from not being synchronized. For example, you can run your memory at 6400 while having the fabric clock at 2000 mhz. With older Ryzens you would have to run your memory at 4000 or 8000 when fabric is at 2000 mhz or face performance degradation. BTW, for my 7950x I got a relatively cheap Kingston Fury 5600CL40 kit. I successfully overclocked it to 6400CL32 and got a nice performance bump for free. So don't waste your money on expensive RAM if you know how to overclock it. If your CPU can't run the RAM at 6400CL32, try 6000CL30/CL32 for essentially the same performance.
CAS latency is one of the least important timings for RAM, but is probably the most easily-marketable. First word latency is rarely what matters in real-world performance. No real workload has purely-random access patterns. AIUI what tends to matter more is the latency between commands (Read to Read, Read to Write, etc.) in the same bank or bank group. It *does* represent worst-case (or near-worst-case) performance so it has *some* utility but real-world applications rarely present a worst-case scenario - RAM, like most hardware, is designed to perform well for typical usage after all.
Well then, how does one determine which RAM kit performs the best?
I really enjoy Linus’s content- I come here every time I’m shopping for nearly anything when it comes to technology. But it’s so hard to pay attention and I’ve figured out it’s his voice. It’s like one of those tones men can’t help but ignore.
Thanks for checking the RAMs, Tech Tips Man
Could you please do something more with your tests other than gaming? I'm currently on ddr3 2133mhz running at 1866mhz with a lower CL of 9, I noticed when going from cheapo CL13 1333MHZ RAM that my input latency while using VST's reduced, but the thought of going to ddr4/5 with a CL of 34+ scares me. Will this be adversely affected? Have I got the sweet spot for a music rig and it's pointless now to upgrade as in my case it would be a downgrade? These are things we need to know!
Many thanks
@El Cactuar yes we all saw the video, it wouldn't make sense to use MT/s in this instance because that's not how the ram is marketed, labelled or tuned.
The question was "is higher CL RAM going to affect ASIO input" not "what is a good measure of IOPS"
@El Cactuar well the speed of the ram copying data wasn't what I was asking about, it's the delay of VST's from external midi devices, which I can confirm runs faster on a lower memory speed (as marketed) because CAS latency is reduced. Seeing as CAS latency isn't a consideration within the equation to find MT/s, it would suggest that MT/s doesn't offer a solution to this situation.
i went from 2666 to 3200 and i can feel a drastic difference in daily tasks, especially using chrome with a lot of tabs open. EDIT: I should probably mention i am using only one 8gb stick so there is no dual channel going on, cpu is an i3 12100f
I updated a friend's BIOS, and found out he didn't have XMP enabled..
Absolute nonsense
@@idkanymore3382 i made a jump like that & yes it was enough to feel a difference. Nothing to brag about, but it gave higher 1% lows which you can feel when gaming
0:44 Had to do a double take here... for a second I thought Babish was doing a cameo on LTT...
I could use more context and disambiguation of terms to fully appreciate the hard work put into this video.
Should be mentioned that DDR5 6000 runs with AMD's infinity fabric in a 1:1 ratio and will generally be the most stable
Why was this not a focal point in this test? Not even a mention of infinity fabric, strange!?
Because it doesn’t work like that on Ryzen 7000 anymore. IF frequency is dynamic up to 4000 MT/s and not higher. What changes over 6000MT/s is that the memory controller runs at half clock, but you can still try running Gear 1, which is apparently what they did here.
IFCLK:UCLK:MEMCLK
Auto:1:1
On Ryzen 7000 at least.
In my experience, latency matters a lot more than speed. For DDR5, I'd definitely opt for a CL28 kit.
@@ZoneXV Good to know.
@@ZoneXV yep secondaries and tertiaries have a massive impact you can easily beat a 7600 xmp by a decent margin even at 6000
@@joshuadelaughter yeah, relative to a manual OC, XMP does almost nothing on DDR5, and it hardly scales with higher speeds without manual tuning, as the subs and tertiary timings get looser as the speed gets higher unless they're manually configured
Would have liked to see a DDR4 3600 CL16 kit in there as those are like ~$90 less than the CL14 kit…it would be much more competitive on price with low end DDR5 (which is actually more than $135 for 3600 CL16) for sure.
I've got 32GB of DDR4-3600 CL16 in my 5800X system, and I have no issue with the performance.
@@Rac3r4Life I mean it’s definitely top tier RAM, the question is merely, how much does it differ numerically from CL14 in gaming.
@@arjunyg4655 In terms of real world performance I doubt there's even a discernible difference.
Not to mention the boards being cheaper too.
@Linus Tech Tips, I just wanted to say, that was the best segue ever and you got me laughing. Well done
As requested in your pin: using 64GB 3800CL16 DDR4 on main rig and 32GB 3600CL16 on secondary Ryzen PC's (same kits just won't run over 3600 stable on cheaper mobo in 2nd). Testbench PC which I just use to wipe/test drives and test GPU's has just 8GB 2800CL14 DDR4 installed most of the time and I've got an old i7 3770K rig lying around which used to be my daily driver with 16GB 2133CL11 DDR3. Honestly 3800CL16 is doing just fine for now and I don't see myself outlaying for a whole new system until it becomes necessary for acceptable performance. It is nice to see DDR5 prices come down finally tho.
My main box is still using DDR3, so having the next box using DDR5 seems like an obvious choice. If not for latency but at least I will be doubling my capacity to at least 32 gigs
Interesting because when I bought my i7-13700K there was almost no testing done yet. The MSI board QVL said 5600 DDR5 with CL28 seemed to stack up better than anything else approved for use. I figured I could upgrade later if speeds and latency got better. I am running 64GB and doing primarily rendering and production work (database and site development as well as graphics, but almost no gaming, so I only went with a 3080 Ti). So far the 5600 CL28 has really impressed me.
IDK who works at the LTT sponsor search department, but he is an ultra-super-master who convinced Samsung to advertise its 970 Evo SSD.
I have almost no idea what you just said but enjoyed watching. Thanks!
I thought that AM5 only went to 6000MT/s; anything above that is purely silicon lottery. So the lowest-CL 6000 kit is best. To give an all-around score you could take both bandwidth and latency into account, like (1/ns)*((write+read)/2), which I think would give a pretty good indication of real-world performance between the kits. So by my math 4000MT CL14 gets 9286 points, 6000 CL30 gets 8950. 6000 CL28 is where DDR5 starts to beat the best of DDR4 all around, with 9589 points.
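For what it's worth, the commenter's point totals do reproduce if you plug in average read/write bandwidths of roughly 65,000 MB/s for the DDR4-4000 kit and 89,500 MB/s for the DDR5-6000 kits — note those bandwidth figures are back-solved assumptions of mine, not measurements, and this is the commenter's ad-hoc metric, not a standard one:

```python
def score(mts, cl, avg_bw_mb_s):
    # Commenter's ad-hoc metric: average bandwidth divided by first-word latency.
    latency_ns = 2000 * cl / mts  # CL cycles / (real clock at half the transfer rate)
    return avg_bw_mb_s / latency_ns

print(round(score(4000, 14, 65000)))  # 9286
print(round(score(6000, 30, 89500)))  # 8950
print(round(score(6000, 28, 89500)))  # 9589
```

It's a rough heuristic at best, since real workloads don't weight bandwidth and latency equally.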
there is an actual calculation for this...why would you make one up? It is incorrect.