It matters a LOT more on non-X3D chips. That's because fast RAM comes into its own when games have to leave cache and go to RAM. The X3D chips, having more cache, have to go to RAM less frequently.
True, side note though: some games are such an unoptimised mess that their data working set is huge while memory access patterns are random (from the CPU's perspective), which makes RAM timings important regardless of L3 size.
@@eugeneslepov3884 Unoptimised? Or does keeping it in RAM reduce crashes, because pulling it from storage would cause latency-induced crashes? Or is it just more data being needed, and RAM is the only place for it to go?
You should do a segment called "Jay's Sweet Spot" where you choose one extremely popular game and come up with the best hardware to run it at the best cost-to-performance ratio. That way people who are building a system can make informed decisions on hardware and skip unnecessary costs.
It's a good start, Jay! A follow-up would be great later down the line. Using a non-X3D chip for the next test would definitely illustrate that the CL36 kit might be fine for X3D but will show its slowness on a chip without a huge amount of cache. I ran a 4790K at 4.9GHz a long time ago, and Overwatch specifically showed BIG gains going from DDR3-1600 to DDR3-2400. I don't remember the CAS timings, just that the Overwatch engine is/was sensitive. My 9900K I ran at 5.1 all-core, and when I went from a Corsair 4000MHz kit with loose timings to a G.Skill 4133 kit with very tight timings, Overwatch and Destiny 2 felt much smoother and snappier, which I attributed to the tighter timings and not the completely insignificant 133MHz clock increase on the DIMMs. StarCraft 2 might be a game that would really show CAS differences, as it's single-threaded and every ns saved would increase the minimums. Games that are notoriously CPU-bound or single-threaded (Space Marine 2, StarCraft 2, Helldivers 2), where high frame rates are preferred, should show some noticeable differences in the 1% lows. I had a 7700X with the free 6000 36-36-36-whatever Micro Center kit but didn't mess with the timings. I've grown out of having my PC sit and run memtest for 22 hours before hitting an error, just for an additional 3% in the 1% lows. I upgraded to the 9800X3D and the fastest RAM with the tightest timings on the mobo QVL, and I'm just doing that from now on.
I'd agree, but the memory timings need to be tuned down to the tertiaries. Something like (imaginary numbers, but the kits in question here are way too similar) CL38-40-40-40 vs CL36-40-40-40 is not even worth testing.
I got the 9800X3D too. Any speed above 6000 gives me micro-latency, doesn't matter the timings. 6000 is the sweet spot for my motherboard, a Gigabyte X870 WiFi 7.
@@GlennsHardWired The majority of AM5 motherboards put your memory in gear 2 mode when you go above 6000. Gear 2 mode is bad and not worth it on AMD unless your motherboard and memory kit can do DDR5-7600 or higher. With that said, you've got an X3D, so it's not gonna matter all that much. There are even people out there with a 7800X3D running DDR5-6400 in gear 2. They say they experience AMDip and horrible stuttering, but in reality overall performance is still great despite the wrong settings.
I was initially more excited for this video because I'm buying new RAM soon, but I wish some non-X3D parts were tested, because the huge cache buffer of the X3D chips means they don't use the RAM as much. So high-speed, low-latency RAM has a much smaller impact than on CPUs with less cache.
The combined knowledge of 25 years still applies: put as much money into the GPU, CPU, and board as you can, then get the best RAM speed and timings you are comfortable with. If you put $100+ into better RAM, that money should've gone to the CPU, GPU, or something else like the monitor.
Yeah, this was my thought too. The 3D V-Cache really would minimize a memory bottleneck, which is what is being tested here. I would have expected the fastest non-X3D CPU to be better for this test.
As Hardware Unboxed & Buildzoid's collab on RAM timings showed, tuning the secondary & tertiary timings can actually result in a larger performance increase than increasing speed, by a lot in some games. There are very few games that are actually dependent on raw RAM speed. I'm running 6000MHz at CL30-37-32-30 (on a 2x48GB kit) with a lot of the other timings tightened considerably, still at 1.35V VDD & 1.3V VDDIO/VDDQ. I can run that same kit at 6200MHz CL30, but it requires 1.5V VDD & 1.4V VDDIO/VDDQ, and the extra heat isn't really worth that bump in most use cases. You also have to take into account FCLK and whether or not your CPU can do the FCLK required to run in UCLK=MCLK mode, i.e. 6000MHz RAM needs 2000MHz FCLK, 6200 needs 2067MHz, etc. Unfortunately my kit can't do 6400MHz, but it's also a 96GB G.Skill Trident Z Royal kit, so I'm happy these dual-sided DIMMs can run 6200 at those timings.
Steve (that's his name from HUB, right?) later went back to defaults because of stability issues and said it's not worth it. I also have 96GB from G.Skill, XMP at 6800, without RGB, in an Intel system. I want to switch to a 9800X3D; what do I do with this RAM kit? Will my mainboard automatically set good and stable values when I set it to 6000 or 6400? And there's this dynamic RAM stuff from Asus; never tried that or read up on it yet.
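The clock relationships the comment above walks through (6000MT/s needing 2000MHz FCLK, 6200 needing 2067MHz) can be sketched in a few lines. This is a sketch only: the function name is made up, and the 2:3 FCLK:MCLK ratio is the commonly recommended AM5 pairing rather than a hard hardware requirement.

```python
def am5_clocks(mt_s: int) -> tuple[int, int, int]:
    """Hypothetical helper: derive AM5 memory-domain clocks for a given
    DDR5 rating, assuming UCLK=MCLK ('1:1') mode and the commonly
    recommended 2:3 FCLK:MCLK ratio."""
    mclk = mt_s // 2            # DDR transfers twice per clock: 6000 MT/s -> 3000 MHz
    uclk = mclk                 # memory-controller clock matched 1:1
    fclk = round(mclk * 2 / 3)  # Infinity Fabric: 3000 -> 2000 MHz
    return mclk, uclk, fclk

print(am5_clocks(6000))  # (3000, 3000, 2000)
print(am5_clocks(6200))  # (3100, 3100, 2067)
```

Above 6000MT/s, most boards instead drop to gear 2 (UCLK at half MCLK), which is why several commenters here call 6000 the sweet spot.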
More banks always beats timings and speed. And to get more banks, you populate every slot with memory and make sure it's dual-rank. The interleaving and the ability to access so many more ranks is absolutely worth it compared to the brainwashed guy running a single single-rank stick at higher clocks. tFAW, tRCD, and a couple of others are much more important than CAS. It's also shitty that manufacturers sell kits with an artificially lowered CAS compared to the other primaries. For example, 18-18-18 is so much better than 16-22-22.
This is a good set of points, and I did the same for my 5800X system, along with tuning PBO. I spent a while doing a "binary search" to determine roughly where individual timings are generally stable. Once I locked in every subtiming as a good starting point, I did a 24hr memory stress test to ensure stability. After that, pretty much every secondary and tertiary timing was systematically lowered by 1 (except tREFI, which was raised a lot, because bigger is better for that one subtiming) and stress tested for 5-6 hrs before moving on to lowering another subtiming. It took over a week to get to the final result, but my memory is as fast as it can be while being stable. It's also worth noting that Jay doesn't seem to mention whether he locked secondary/tertiary timings. While DDR5 has more secondary/tertiary timings in the XMP/EXPO table, it still takes motherboards a long time on first boot to do memory training if those values aren't known, since the motherboard is trying to guess what works and what doesn't. And those secondary/tertiary timings could have been very different (in good or bad ways) for the different sets of RAM.
X3D chips DO care about timings, as evidenced at 11:00: the X3D chip is relying on RAM because of the 4K resolution, and this manifests as a 10% performance improvement.
Agreed. Most instructions will stay in L3 cache; the CPU will rarely go to RAM to fetch crucial, very frequently used instructions. Hence, if there was ever anything to see, you won't see it with an X3D chip. This was proven in a GN video where Steve had a build with a 7800X3D but with 3000MHz RAM; the performance was very close to 6000MHz RAM.
@@Robbie-mw5uu That just means 4K has more instructions that don't fit in the L3 cache, so some go to RAM, and with slower timings this affects performance.
Yep. This video will confuse people and let them think that RAM speed and timings don't matter that much, when they actually do matter for non-X3D chips. What "great" content.
@@Robbie-mw5uu It's still a better test to use a non-X3D CPU, since those are more sensitive to speed and timings; however, timings have an impact on latency for everything. You could increase voltages and halve the timings on the X3D CPUs, and it would be far more effective than increasing RAM speed alone.
The new methodology and presentation is much better, Jay: explaining the stuff prior to the tests. The exchange of information with Steve (GN) was worth it. Keep it up!
It's a step in the right direction regarding RAM benchmarks, Jay. Your results are with just 2 timings adjusted; on a non-X3D CPU, if you fully tune all the timings, including tertiary timings, you will see a big performance increase in the 1% lows. High FPS doesn't mean anything if your lows are all over the place and cause frametime spikes, and the fastest RAM you can afford helps more than a lot of mainstream techtubers suggest. Gamers know fast, low-latency RAM matters, whilst the mainstream say spending money on 'fast' RAM is a waste of money. I welcome more content like this. Thanks.
It's not only those 2 timings adjusted. He's using the XMP/EXPO profiles and is just highlighting CL and RCD as timings that differ between the kits. Other timings differ as well; on most RAM kits RP = RCD, for example.
I don't trust this test. Use a non-X3D chip; most of the instructions never go to RAM. That's why you don't see anything, if there was ever anything to see lol
Sometimes I come across JayzTwoCents' videos, and while he clearly knows his stuff when it comes to custom water loops, the way he handles technical content is just insane. This time, he took memory modules with different timings and tested them… on a 9800X3D, of all things. And guess what conclusion he reached? That memory doesn't really affect FPS. Well, no kidding! What a revelation. The problem is, in this video, he genuinely seems to think this applies to all CPUs, not just the 9800X3D, which behaves that way because of its massive cache. A video like this is going to lead thousands of people to buy garbage memory because, according to a million-subscriber tech YouTuber, "it doesn't matter."
You're kind of missing the point that even with non-X3D CPUs, the differences would be basically indiscernible in properly controlled double-blind tests, absolutely guaranteed. One person in a thousand (i.e. a true pro-level gamer) _might_ get it right occasionally, but your average gamer would *never* be able to tell. DRAM snobbery is comically stupid...
@awebuser5914 I can buy terrible RAM and cripple performance. It happens to people all the time. Big YouTubers need to help people understand what the right RAM is.
@@artyomexplains _"...cripple performance"_ LOL! Hyperbolic much?? RAM will make virtually no discernible difference in the experience and enjoyment of *any* PC. This PC "expert" obsession with utterly arbitrary "bigger numbers" is laughably idiotic...
@awebuser5914 Get a DDR5 2x8 5200 memory kit and explore your new PC's performance. Share your experience after. Properly tuned RAM can make a huge difference. Not to mention getting a 6400 kit may force 1:2 RAM mode, or just unstable performance at 1:1. Your games crashing every hour would be quite noticeable, even for you. I have personally tested HUNDREDS of RAM configurations on different systems.
FYI, if you're doing repeated experiments, it is possible to determine whether a small variation is run-to-run variance or an actual small effect. If you are comparing A and B and repeat the experiment X times, you can assume a null hypothesis that A and B are equal within variance, i.e. the probability of A beating B is 50%. You can then look up the probability of the observed number of times A beats B on a binomial distribution, and if that probability is sufficiently small, you can claim it's a small, but real, effect rather than run-to-run variance. So, for example, if A beat B 5 times out of 5, then as the two-sided probability of that happening under the null hypothesis is 6.25%, you can probably claim that A is genuinely better than B, but by a small amount. (OK, technically I'd recommend more repeats to get the probability down lower, but there are practical considerations as well.) This probably isn't too useful unless you're trying to identify really small effects, like this CAS latency stuff.
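The binomial check described above can be written out directly. A minimal sketch using only the standard library (scipy's exact binomial test would do the same job); the function name is made up:

```python
from math import comb

def sign_test_p(wins: int, runs: int) -> float:
    """Two-sided sign-test p-value under the null hypothesis that
    A and B are equally likely to win any single run (p = 0.5)."""
    # One-sided tail: P(at least `wins` wins out of `runs`)
    tail = sum(comb(runs, k) for k in range(wins, runs + 1)) / 2 ** runs
    return min(1.0, 2 * tail)  # double it for the two-sided test

# A beats B in 5 of 5 runs: p = 2 * (1/32) = 0.0625, borderline
# rather than conclusive, hence the advice to add more repeats.
print(sign_test_p(5, 5))  # 0.0625
print(sign_test_p(8, 8))  # 0.0078125
```

With 8 wins out of 8 the p-value drops below the usual 5% threshold, which is roughly how many repeats this approach needs to call a tiny CAS-latency effect real.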
It's known that on AM5, sub-timings are more important. So if these kits sacrifice sub-timings to get a lower CAS etc., it throws off the results. Some boards make sub-timing tuning super easy; for other boards you need to look up sub-timing tuning guides.
A flight sim, MSFS or DCS, hits my RAM harder than anything else. One of these in the testing suite would probably be cool, especially with VR considerations.
I needed to upgrade to 64GB of RAM because of modded KSP. It can use 32GB of RAM at most, but it can eat all of that for breakfast if you are not careful. Had no problems with DCS; not playing MSFS though.
96GB of RAM is the sweet spot for Microsoft Flight Simulator 2024. I know Microsoft said 64GB, but everything I've seen suggests 96GB actually improves performance.
Why are you surprised? That's what X3D does: it increases the L3 cache size significantly. Those CPUs don't rely on RAM nearly as much as an Intel part would.
I've been in the I.T. and tech sphere for 10+ years now, and frankly I'm kind of jaded about the industry as a whole. But for whatever reason, finding the best dollar-to-performance balance of RAM speed, timings, and first-word latency is still fun for me to this day. Been doing it since DDR3.
In "the industry" your choices may be set by corporate policy and contracts. Price or performance may have been considered at some point, but perhaps not together. Once decided, those choices may be set in stone until that part is no longer available. No fun. For me the fun is in knowing that I'm working with the platform that best meets the current and future needs, and that each component is the best it can be within the available budget. I know I've extracted the most performance I can from every dollar. The rich boys don't have this. They probably sort by price, high to low, and pick whatever is on top. Or they order some customized prebuilt rig with the integrated fish tank and meteorite fragments. Maybe that has its own kind of thrill, but it's a thrill that a bottom-feeder like me will never know.
Jay, I gotta say, you have drastically improved not only the quality of your content recently, making sure it is highly accurate, but you have also started delving into fun topics that a lot of gamers/overclockers like to draw on when making configuration decisions. I enjoyed that EVGA 4090 video too, and I'm not even a fan of EVGA.
I wish you could've expanded this to include Stalker 2 as well, and dug further by comparing 16 vs 32GB modules. Only reason I say this is because Stalker specifically lists 32GB of RAM as necessary, and I wonder just how big of a difference double the capacity would've made. Regardless, I'm really digging this more refined testing methodology, Jay and team!
*For gaming:*
2015: 8GB is standard, 16GB is recommended, 32GB is overkill
2020: 16GB is standard, 32GB is recommended, 64GB is overkill
2025: 32GB will be standard, 64GB will be recommended, 96GB is overkill
I can't stop thinking about this since I watched it. It would be great to have a lot more data about it. Literally everyone with RAM capable of high speeds has the option of running it at lower frequencies with tighter timings. I imagine each game would have a "curve" of performance across different XMP/EXPO options. It would be really interesting to see where the sweet spot on that curve is for the majority of games.
Since you are working on improving testing, I figured there is something worth discussing. One significant variable that this test didn't account for is the actual bin quality of the chips. Sure, they were binned by Samsung/Hynix and binned again at G.Skill, but they still have a decent variance. As a person with several AMD systems, all with Flare X, I can state I have RAM sticks with tighter timings (CL12 and CL30) that I actually need to run looser than sticks with looser timings (CL14 and CL32). As an example, I have one set (2x16) of 3200MHz CL12 that will appear to run okay at "stock" EXPO; only after gaming for hours and crashing, or running OCCT, will I see they are dealing with errors. Only by loosening the timings to CL16 (CL14 still errors) do I get error-free usage. On the other hand, I have a set (2x16) of CL14 that runs at CL10 without errors. This set is golden, beating supposedly better sticks in real use and testing. Long story short, the "slower" set can actually end up being your fastest, and your "faster" sets could be throwing errors, making them slower. To avoid this, IMO, find your best set (underclock and overclock each respectively, to see which one can actually handle each timing the best) and use that one set as your test subject as you dive deeper into the timings.
I say this on every single RAM video: test games with mods, test online games with you as the host, test games with custom maps. THAT'S where RAM really shines, not as an FPS boost in any scenario.
I was going to say fast RAM is less discernible on X3D CPUs due to the increased cache reducing how hard the RAM is hit. For RAM testing, please retry with a non-X3D CPU like a 9700X/7700X to better see the difference from these timings and fast RAM.
RAS is usually more important than CL, but RAS is still only relevant when accessing data from different rows within the same bank and rank. Modern types of RAM have enough banks and bank groups (16GB DDR5 modules normally have 8 groups with 4 banks each), that data can be interleaved between them very efficiently, so RAS is relatively rarely used. The time taken to switch between banks (RRD_L) or bank groups (RRD_S) and FAW (the minimum time for 4 consecutive switches between banks or bank groups) are therefore often more important than RAS for DDR5. RAS is still important, because it takes a long time (usually around double RCD or RP, and around 10x the RRD values), so when the RAM _does_ need to consecutively access data from the same bank and rank it is the biggest component of latency for that access, and it is a significant component of total memory latency, but it doesn't particularly stand out against RRD and FAW, RP, or RCD.
Jay, great video! This video is a great example of something very technical yet important for me as a customer, to understand what I'm buying, especially when it comes to high-end and sometimes very expensive PC parts. Also, I would really appreciate it if you would highlight in some way the numbers you're currently talking about (more effort in video editing, though); as English is my second language, sometimes I just get lost in what you're talking about.
Did you leave all sub-timings on Auto? Because it would not surprise me at all if the motherboard trained your 36/36 kit with tighter sub-timings and your 28/36 kit with looser ones. There are sub-timings that cut into bandwidth, not just access speed. Even if this video was not a deep dive, at least set every sub-timing on every RAM kit the same, or if you did that here, tell us.
This is not relevant in the real world; selecting by published RAM specs is. You are talking about the 1% of people that mess with sub-timings here, even in this tech-based community. The vast majority of people will be running on Auto.
@@BifsieOfficial I'd have to disagree. One of the secondary/tertiary timings is how long it takes to swap from bank to bank and DIMM to DIMM; they're why single-sided single-stick DDR5 was faster than either dual-sided or dual-DIMM setups for a LONG time after release. DDR4 on AMD also had a bug where tRC was applied waaaay bigger than it needed to be. tRC is by definition tRAS+tRP, and is arguably THE most important timing, as it's how long a full cycle of operations takes; e.g. mine was supposed to be 59, but DOCP set it to 80, so I gained ~30% more RAM performance just tweaking that value. Hopefully it's fixed on DDR5, but I haven't upgraded yet.
@@BifsieOfficial So you are saying this entire video was pointless, since it showed little to no performance difference? Tell Jay that :) No matter, it's beside the point anyway. The point is Jay cannot explain what caused the 36/36 kit to be the fastest kit in F1. Even if it's just variance, that 1% variance should be in favor of the CL28 kit. And as I said, it wouldn't surprise me if the reason was that the motherboard trained looser sub-timings on the 28/36 kit than it did on the 36/36 kit, which may have completely eaten up any performance gains the CL28 kit may have had. For example, you lose ~12% of bandwidth (not access speed) by going from tRDRDSC/tWRWRSC 1 to 2. You would also come to a different conclusion by the end of the video: it's not that CL28 kits might make sense in certain scenarios; it's that IF you do not control sub-timings, don't buy CL28 kits, period! Also, I never said Jay should tune sub-timings. I said he should control them to be the same on every kit! I don't even understand how you can argue against this. The video is literally titled "How much does RAM Timing REALLY matter?" What did Jay show here? That RAM timings may or may not matter, because the motherboard may or may not train way looser timings on one kit versus another. And if you disagree with all of that, please explain why Jay bothered controlling the CPU speed to always be 5.3GHz. Using your words, "the vast majority of people" will never run static clocks on their systems.
On the 9800X3D (or any X3D CPU), RAM has the least effect on performance. Do the same with a non-X3D CPU or an Intel CPU and you will see more of a performance swing.
Factorio is a game where CPU and especially fast RAM are absolute king. Steve was looking into it for that reason, to see if they wanted to add it to their test games, and, well, because of the 9950X3D leak where it came up again. But in general, since AnandTech went belly up, it's sadly gone from test lineups.
@Jayztwocents I want to give some feedback. First, I really like that this video is very informational, with no joking around. But your channel gives me the feeling that it can be both: sometimes a funny video about hardware, and then a good informational video like this without clowning around. Second, concentrating on one aspect of hardware, like the RAM timings this time, is very cool and gives me a lot of insight. Third, putting the list of the RAM modules on screen while you talk about it is very good; I had enough time to read it, which makes it more memorable to me. I really don't like it when videos say "just pause the video if you want to read it" or "google it yourself". I think if a video wants to address something, it should put it in the video for a length everyone can read and understand, otherwise the video is useless. So good job here. Fourth, not only do the benchmark results look very good, the transition animation is very eye-pleasing too. I hope you find this little feedback interesting.
2:26 Isn't DOCP specifically Asus referring to XMP profiles as "DOCP" on non-Intel boards in order to appease Intel, while everyone else seemed to just end up referring to XMP as "XMP" on non-Intel boards too, and seemingly got away with it without serious incident? As opposed to EXPO, which is a separate thing from XMP, actually made specifically for AMD, and not just the Intel XMP timings being repurposed for AMD systems as in earlier generations.
As XMP is Intel's trademark, ASUS came up with "DOCP" as the name for RAM profiles on AMD CPUs, to avoid paying royalties to Intel for using the trademark on AMD systems. Afterward, AMD introduced their own label, EXPO, to solve the problem for all manufacturers.
Great release of the new testing methodologies. The results themselves were what I expected, but it's a great, safe start to implementing these new testing procedures. As a recommendation, I think good benchmarks for CPU/RAM gaming applications would be something like Assetto Corsa Competizione, rFactor 2, flight sims, or X4 strategy games. Every time I've upgraded my CPU/RAM, ACC was the game with the most noticeable changes in FPS and frame times.
They should stop using such lame phrases that aren't even true, and learn to form their sentences properly and according to the facts. It's not "no one" when it's just a "small number of people".
Fetching data from RAM (rather than cache) takes a long time and hence leads to stutter and worse 1% lows, but if that fetching is infrequent then the average FPS will remain mostly unchanged. Some games try to avoid big stutters by fetching the data over multiple frames instead of all at once; this leads to a more even and smaller drop in FPS instead of one large hitch, which will have a greater impact on average FPS and possibly the 95th percentile (depending on how spread out the fetching is) but a smaller impact on 1% lows.
What I really want to know is what's more important for gaming: low CL or high MT/s. Could you compare the performance difference between high-MT/s, slow-CL RAM vs lower-MT/s, fast-CL RAM?
@@ZithisVT It's easy: get as high MT/s as you can with the lowest CL, then tune the secondary timings; that's where most of the performance comes from.
You can't consider them independently. Latency is measured in clock cycles, so you need to consider *both* the transfer rate and CL in order to compare different RAM kits. CL30 at 6000MT/s (3GHz internal frequency) is equal to 30/3 = 10ns latency. CL36 at 7200MT/s (3.6GHz internal frequency) is also equal to 36/3.6 = 10ns latency. But the other primary timings constitute a larger proportion of total latency, so are generally more important than CL, even though CL is the main advertised timing. For example when buying DDR5-6000, CL30-38-38 is often a lot more expensive than CL36-38-38, but not much faster; and CL30-40-40 is often about the same price as CL36-38-38 but is slightly slower in most tasks.
@@dagnisnierlins188 "get as high mts as you can with the lowest cl" No, this is bad advice. Getting higher MT/s isn't useful if it means your memory controller can't support it without running in a higher gear, that it won't be stable in your motherboard without excessive voltage, or (if you have a Ryzen CPU) that your infinity fabric can't synchronise with it. Some RAM kits have very low CL but are slow because their other primary timings and subtimings are crap, and they often use low-quality dies that won't be stable if you manually reduce the other latency timings yourself. They're designed to _look_ fast to buyers who know that low latency is good but don't know a lot about memory timings, not to actually be fast.
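The CL-to-nanoseconds arithmetic from this thread, as a quick sketch (the function name is made up for illustration):

```python
def cas_latency_ns(cl: int, mt_s: int) -> float:
    """True CAS latency in nanoseconds. DDR transfers twice per clock,
    so the internal clock runs at half the MT/s rating."""
    clock_ghz = mt_s / 2 / 1000  # e.g. 6000 MT/s -> 3.0 GHz
    return cl / clock_ghz        # cycles / (cycles per ns)

# Different kits, identical first-word latency:
print(cas_latency_ns(30, 6000))  # 10.0 ns
print(cas_latency_ns(36, 7200))  # ~10.0 ns
```

This is why neither "low CL" nor "high MT/s" wins on its own: the quotient is what matters, and then the other primaries and subtimings decide the rest.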
The only tip I can give you is: if you do indeed test Anno 1800, you also need to test with a big population. I know you can download 1-million-population cities, but that's as far as my knowledge goes, sorry. I just know the game runs without issues at the start. Just to keep in mind.
This is a really good topic and I appreciate the testing. I know it's a very specific thing, but I'd be interested in a similar test with different brands/models of RAM with the same timings, to see if things like integrated cooling and voltage regulators and whatnot make a real difference. But thanks a ton for a great video!
I remember back in the day (early '90s) when timing changes would show notable differences. With today's high clock frequencies and massive data transfers, I see very small changes, if any, especially in (most) games.
SotTR shows pretty solid scaling with tuned timings; that's my "is this doing anything" sanity-check gaming benchmark. +11% FPS on a 12700K going from stock XMP to a manually tuned B-die kit at the same frequency.
I'm curious how much of a difference it would make in Star Citizen. The game is often RAM-intensive, probably due to a lack of optimization, but the server also plays such a large role in performance that measuring it would be nearly impossible.
When a game has a memory leak, the only difference RAM makes is determining how long you can play before you need to restart the game, i.e. the more RAM you have, the longer it takes to get to 99% utilization and need a restart. That said, getting more RAM won't fix Star Citizen's issues, because Star Citizen's issue is that it's an unfinished game being made by incompetent people on a bad engine.
Great video, I learned a lot. Thinking of your charts: maybe adding the standard deviation of the runs would help show run-to-run consistency? It may make them less accessible to the audience, but us data nerds love it, especially with such minute differences as you tested.
MUCH better explanation & presentation of data, without as many rabbit holes as GN tends to go down in their pursuit of complete reviews. In this test, I would've preferred NON-X3D CPUs. I think the 3D V-Cache could be confounding the results, because the CPU could be using its own cache for certain functions, making it less reliant on RAM and therefore making it harder to see actual differences (if they exist).
Super stoked about the new methodologies! Actually more excited than I have been for a YT channel announcement in a while. My only request is some meaningful benchmarks for water-cooling components. GN is never going to do it, and the closest thing to a true comparative tier list is der8auer comparing CPU waterblocks. I, for one, REALLY care about all this new data.
@@manuelp7472 Well, in Black Myth there was about a 5fps difference for the slowest kit tested (36/48). Significant? Not really. But as Jay mentioned, if you're not paying attention while shopping, you could accidentally end up buying something dumb with a latency in the 50+ range, which could be an even bigger difference.
@@manuelp7472 Of course it matters, but not so much for X3D chips, and lower CAS latency doesn't matter when RAM sticks with XMP on have borked secondary timings. Intel 13th and 14th gen keep gaining FPS up to 8000-8200MHz.
There is no guessing really; it all just depends on what's being tested. Some games/tasks are extremely memory-bandwidth limited and others are latency limited, so depending on the game, CAS, speed, both, or neither can matter. This test also wasn't done super well. Jay should've used a CPU with less cache, because the extra cache of the X3D chips dramatically reduces the difference high-performance RAM can even make: with so much cache, the CPU calls on the RAM much less.
1:43 OMG Jay, I've been a subscriber since the old spare-bedroom-office and garage-studio days, and I can't remember ever laughing out loud before. The cache joke got me rolling on the floor... 🤣🤣🤣
So if I'm understanding this right, the best thing overall is to focus on capacity rather than speed. Budget left after capacity can go towards speed: aim for high MT/s, a low CL number, and low first two numbers in the timings. Nice! I'd love to see a larger test list comparing DDR4 and DDR5 across some popular RAM kits, just to see where the major companies overpricing their RAM land alongside the cheaper kits with the same/similar timings.
Not relevant for everyday use. You disable a bunch of power saving methods simply for "umaga I got another 90 points on my Cinebench score!1111" (we are literally talking
Your CPU will always run at turbo speed except in a thermal-throttling situation. It's very good to turn it off for latency, consistency, and sometimes performance, when the Windows scheduler doesn't do its job properly.
The main purpose of C-states is power saving, which for this test was disabled so that the CPU remains in an active state and presumably at a constant voltage. Disabling C-states reduces variables in this test relative to the CPU. Note: enabling C-states allows the CPU to idle and reduce voltage when CPU utilization drops below 100%. Therefore, overall system latency is only increased when the CPU changes states, such as from idle to active, but when a game is running you might assume the CPU should remain active. Whether the CPU's C-state changes while a game is running may depend on the specific game and how it behaves on a specific OS. For general PC purposes, it's fine to enable C-states, as you probably don't need the CPU power maxed out (max frequency and/or max voltage locked) or active at all times that the PC is powered on, but you have the option because it's up to you.
RAM timings and speed can make a 7800X3D unstable at times. Also, not many are talking about the performance impact of Resizable BAR and Above 4G Decoding, which can cause instability in Windows and especially Linux, including with many games and older titles.
Nice video! It would be great to see competitive titles tested one day; the higher the fps, the bigger the difference a factor like RAM timings will show in the results. Competitive titles might also be most of the titles that actually "benefit" from the absolute lowest latencies, which only becomes a factor when you're pushing fps past 300. It's kind of frustrating when similar tests are done and they stick to showing only the most popular titles, despite most of them not being able to reach high enough fps because of their engine, or because the GPU is the real bottleneck.
Instead of testing different kits with uncontrolled secondary and tertiary timings, he should have used the same kit and punched in all the timings manually. This hardly isolates the variables needed to answer the premise of the video; it's more of a review of the kits themselves at their included profiles.
I would suggest doing five test runs, excluding the best and the worst, and averaging the remaining three. I know it's a big time commitment, but it's worth the hassle.
A better title would be: How Important Are Secondary Timings? Primary timings aren't that important without good secondary timings; Buildzoid talks about this all the time.
Very important. I tune RAM all the time for the builds I sell and for a couple of pro gamers here in Canada. The difference in speed is significant; the timings are probably more important than the speed of the RAM. The trick is getting it 100% stable. I just recently tuned a DDR5-5600 kit that now gets 60 GB/s bandwidth and latency below 60 ns. Voltage has to be increased for stability, and it's not just the RAM voltage but also the voltage for the IMC. I tune the primary, secondary, and tertiary timings. It takes one or two days because of testing for errors after every couple of timing changes. All you are doing here is testing different kits with different primary timings; the difference is in tuning the secondaries and tertiaries. The title of the video is somewhat misleading because you are just testing different kits — you're not tuning RAM.
Wholly agree. One timing by one bin nets no difference, really. So many timings to tune and consider. And always hated RAM OC for how hard it is to make sure it's solid.
You ran games, with an X3D chip and a 4090… if you looked at the RAM usage during these runs it likely was at 1% usage the whole time. And no benchmarks beyond games. Load 1,000 RAW 45mp photos into Lightroom and apply a preset and export and you’ll see RAM CL make a difference.
My custom Windows 11S "Dragon" is designed for gaming. This is how it works: I have games on Steam, but I run Steam in the background; I don't open it or use it directly. I set desktop icons to my own library using Fences, making it a dropdown box directly on my desktop, with each fence set to an individual category — four categories in all: C1 action/adventure, C2 platformer, C3 sports, C4 strategy/puzzle. I add my own side panel directly on the desktop that contains settings, profile, graphics, and login, all under user control. It's all very clean with no clutter, and most importantly I don't have apps running in the background slowing down the PC; the user is in full control. I'm using my own custom version of Windows 11. Yes, it has lower latency, achieved by removing autorun and adding a side panel for manual control. I'm now working on my own browser; it's not in a finished state yet. Edition: Windows 11S Dragon, Version 24H2, OS custom build 26120.2705, Experience: Windows Feature Experience Pack 1000.26100.42.0. No update issues, all settings set to privacy, no need for TPM version 2.0, and a lower storage requirement (just a 35 GB or larger storage device) due to the removal of bloatware from Windows 11.
@@sopcannon It's my custom OS designed for gaming: zero bloatware, built under an OEM licence, no need for TPM version 2.0, and a lower storage requirement (just a 35 GB or larger storage device) due to the removal of bloatware from Windows 11. It's a snappier, faster version of Windows 11S.
I was going to mention something about the x3d test, but many other people have already commented on it. It was interesting to see that it did still have some impact on the lows for 4k though. Something I am interested in seeing is a ramdisk versus cached nvme test for gaming.
Jay, I just had an idea, I'm not an app developer but what would be super useful is a phone app that lets you plug in your cpu, motherboard, and ram info and walks you through various bios optimizations like overclocking your ram.
I'm glad someone else took the other timings into account. There are usually five important primary timings, but a kit shows only one, three, or four of them. Most of the time you just see CL30; sometimes 30-36-36, and sometimes 30-36-36-96. But with that last one there's a catch: most of the time the fourth printed number doesn't correspond to the fourth timing. My kit (Lexar Ares) is actually 30-36-36-68-104, while on other kits (usually Trident Z) the printed fourth number is the fifth timing — for example (I don't know their exact timings), 30-36-36-68-96 shown as 30-36-36-96. So the kit shows four numbers, but the fourth number is the fifth timing 🤔. Since AMD prefers low latency over raw speed, it comes down to finding a low-latency kit at 6000 MT/s or tuning a stable profile that doesn't reduce performance (lowering timings manually can work, but won't always increase performance).
I'm rocking a 7800X3D, so it's great to see the X3D side of AMD CPUs demonstrated, virtually cancelling out the need for more expensive binned RAM. But as others have mentioned, non-X3D parts do lean more on faster RAM.
This is the kind of testing I like to see. I've always been curious about these multi-variable component pairings. Most of the time when a reviewer is doing a video on a CPU or a GPU, they only focus on pairing them with top-of-the-line components in order to create a bottleneck just to see it in a drag race against its peers. But there's never much time given to how those components fare when you pair them with cheaper parts. If all I get is 5 or 10 more fps, it doesn't seem like a worthy expense to throw an extra two hundred dollars at fancy RAM. Or a fancy SSD. Or a fancy motherboard. But I gotta see where the diminishing returns start kicking in.
For the sake of consistency, I would recommend to keep the order of the tested components the same on the comparison charts. For the first test set with F1 2024 the order was 36/36 -> 28/36 -> 36/48. I found that a little odd, because the RAM with the tightest timings was placed in the middle. On the BMW test set, the order then switched to 28/36 -> 36/36 -> 36/48, which made intuitively more sense to me, but was different than the other test set.
I’d watched your video with Steve; I'm new to your channel. For what it's worth, I enjoy GN's data-driven evaluations and am happy to see you following suit — the more data the better. Going through your older videos, I'd like to see water-cooling parts evaluated. GN doesn't do this, and you seem to have a lot of experience, so this could be a good niche. I'm adding the channel and hope to see some open-loop content in the future.
tREFI is less important for DDR5 than it is for DDR4, because of DDR5's "same bank refresh" feature, which means that it can still access data stored in other banks while one bank in each bank group refreshes, while on DDR4 every bank within each rank has to refresh at the same time. But it does still make a difference.
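To put rough numbers on why tREFI still makes a difference: the fraction of time a bank is tied up refreshing is about tRFC / tREFI. A minimal sketch; the tRFC ≈ 295 ns and stock tREFI ≈ 3.9 µs figures below are typical illustrative values, not anything measured in the video:

```python
# Fraction of time spent refreshing: roughly tRFC / tREFI.
# Values are illustrative/typical DDR5 numbers, not measurements.
def refresh_overhead(trfc_ns: float, trefi_ns: float) -> float:
    return trfc_ns / trefi_ns

stock = refresh_overhead(295, 3900)    # stock tREFI
tuned = refresh_overhead(295, 15600)   # tREFI raised 4x
print(f"stock: {stock:.1%}, tuned: {tuned:.1%}")
```

With these assumed values, raising tREFI 4x drops refresh overhead from roughly 7.6% to under 2%, which is why it's one of the more rewarding subtimings to tune.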
Finally — I was trying to find videos on this. I have the RAM that comes with the AMD bundle. I was potentially going to get the G.Skill Royale CL28 kit but don't know if it's truly that much better. Currently on a 9800X3D.
3 minutes ago…Never been this early to a video haha, cool. It’s cool to see you expanding the video topics and stuff Jay, I’m excited to see your testing and opinions and objectivity and how you present it (love the talking-head, casual, and dry humor personality you have in the video formats). Good luck and keep at it!
My god, talking about timing: I just upgraded my RAM yesterday and it doesn't run at full speed, and you release a video about it this morning... this is good TIMING 👍
Not many know about it, and very few have done it, but one can do the same with the VRAM and gain a fairly nice boost at stock clocks, at the expense of reduced headroom for overclocking. One can gain up to around 20% at stock clocks, but results obviously vary.
I have some t-create 6000 30-36-36-76 that I got for $85 a month ago (it’s $88 now) when I upgraded from a 12th gen + ddr4 to 9800x3d, and I have to say I’ve been super impressed with it. I’ve only run gskill or kingston since ivy bridge, first time with team. It trained super quickly and hit expo with 0 problems. I don’t play anything competitively enough that I’m going to see any difference tighter or looser, but I was looking primarily for white, non-rgb, lower profile ram & then looked at brand and timings. I think it’s a great kit and for me was the “sweet spot” for the build.
It matters a LOT more on non-X3D chips. That's because fast RAM comes into its own when games have to leave cache and go to RAM. The X3D chips having more cache means they go to RAM less frequently.
True. Side note though: some games are such an unoptimised mess that their data working set is huge while memory access patterns are random (from the CPU's perspective), which makes RAM timings important regardless of L3 size.
@@eugeneslepov3884 yeah man
Was thinking the same thing.
@UTTPBABYKID really...useful 😂
@@eugeneslepov3884 Unoptimised? Or does keeping it in RAM reduce crashes because fetching it from storage would cause latency-induced crashes? Or is it just more data being needed, and RAM is the only place for it to go?
You should do a segment called "Jay's Sweet Spot" where you choose one extremely popular game and come up with the best hardware to run it at the best cost-to-performance. That way people who are building a system can make informed decisions on hardware and skip unnecessary costs.
Like power supply, gpu , cpu and ram. And maybe an ssd.
@@Electrify928 Good idea, but one should never skimp on PSUs; an A-tier 750-850W unit is the sweet spot these days.
@@Maartwo this, get a good psu and it will last for 10 years with no problems.
@@Maartwo I have a dark power 1600w and a thor 1200w. The Dark Power 1600w i bought because of a Jay video.
Pretty cool idea.
I'd like to see a part 2 to this test, but without an X3D chip. See how an Intel 12900K or a 9700X handles the faster RAM.
nobody uses intel
@@Robbie-mw5uu and nobody sane test ram with cpu that got internal cache
😂 you will have to RMA your 12900k first
@@piotrsiemaszko5225 since non x3d chips have no cache as we all know. lmao.
I was going to say the same thing... It might matter more without the extra cache of the X3D chips.
It’s a good first step, Jay! A follow-up would be great later down the line.
Using a non X3D for the next test would definitely illustrate that the CL36 kit might be fine for X3D, but will show its slowness on a chip without a huge amount of cache.
I ran a 4790K at 4.9GHz a long time ago, and Overwatch specifically showed BIG gains going from DDR3-1600 to DDR3-2400. I don't remember the CAS timings, just that the Overwatch engine is/was sensitive.
I ran my 9900K at 5.1GHz all-core, and I went from a Corsair 4000MHz kit with loose timings to a G.Skill 4133 kit with very tight timings. Overwatch and Destiny 2 felt much smoother and snappier, which I attributed to the tighter timings and not the completely insignificant 133MHz clock increase on the DIMMs.
StarCraft 2 might be a game that would really show CAS differences, as it's single-threaded and every ns saved would increase the minimums.
In games that are notoriously CPU-bound or single-threaded (Space Marine 2, StarCraft 2, Helldivers 2), where high frame rates are preferred, the 1% lows should show some noticeable differences.
I had a 7700X with the free 6000 36-36-36-whatever Micro Center kit but didn't mess with the timings. I've grown out of having my PC sit and run memtest for 22 hours before hitting an error, just for an additional 3% in the 1% lows. I upgraded to the 9800X3D and the fastest RAM with the tightest timings on the mobo QVL, and I'm just doing that from now on.
I'd agree, but the memory timings need to be tuned down to the tertiaries.
Say, going from (imaginary numbers, but the kits in question here are way too similar) 38-40-40-40 to 36-40-40-40 is not even worth testing.
This comment is more useful than the entire video.
I got the 9800X3D too. Any speed above 6000 gives me micro-latency issues, no matter the timings; 6000 is the sweet spot for my motherboard (Gigabyte X870 WiFi 7).
@@GlennsHardWired The majority of AM5 motherboards put your memory in Gear 2 mode when you go above 6000. Gear 2 mode is bad and not worth it for AMD unless your motherboard and memory kit can do DDR5-7600 or higher. With that said, you've got an X3D, so it's not going to matter all that much. There are even people out there with a 7800X3D running DDR5-6400 in Gear 2; they say they experience "AMDip" and horrible stuttering, but in reality overall performance is still great despite the wrong settings.
_"Destiny 2 felt much smoother and snappier..."_ 100% confirmation bias, end of story...
I was initially more excited for this video because I'm buying new RAM soon, but I wish some non-X3D parts were tested, because the huge cache buffer of the X3D chips means they don't use the RAM as much. High-speed, low-latency RAM has a much smaller impact than on CPUs with less cache.
The combined knowledge of 25 years still applies:
Put as much money as you can into the GPU, CPU, and board, and get the best RAM speed and timings you are comfortable with.
If you put $100+ into better RAM, that money should've gone to the CPU, GPU, or something else like the monitor.
They still use the RAM, but the speeds don't matter nearly as much because of the larger L3 cache.
Yeah, this was my thought too. The 3D V-Cache really minimizes a memory bottleneck, which is what is being tested here. I would have expected the fastest non-X3D CPU to be better for this test.
Ryzen likes tweaked sub timings.
Sorry, should you care about this if you don't have an X3D chip? I would just get ram that matches my pc's price class in that case.
If you want RAM timings I've got a ton of RAM timings.
Youre a god 🙏
The RAM guy himself! Show them chad!
As Hardware Unboxed & Buildzoid's collab on RAM timings showed, tuning the secondary & tertiary timings can actually result in a larger performance increase than increasing speed, by a lot in some games. There are very few games that are actually dependent on raw RAM speed.
I'm running 6000MHz at CL30-37-32-30 (on a 2x48GB kit) with a lot of the timings besides that tightened a lot, still at 1.35v vdd & 1.3v VDDIO/VDDQ. I can run that same kit at 6200MHz CL30 but it requires 1.5v VDD & 1.4v VDDIO/VDDQ, the extra heat isn't really worth that bump in most use cases.
You also have to take into account FCLK and whether or not your CPU can do the FCLK required to run in UCLK=MCLK mode — i.e. 6000MHz RAM needs 2000MHz FCLK, 6200 needs 2067MHz, etc. Unfortunately my kit can't do 6400MHz, but it's also a 96GB G.Skill Trident Z Royal kit, so I'm happy these dual-sided DIMMs can run 6200 at those timings.
Steve (that's his name from HUB, right?) later went back to defaults because of stability issues and said it's not worth it.
I also have 96GB from G.Skill, XMP at 6800, without RGB, in an Intel system.
I want to switch to a 9800X3D — what do I do with this RAM kit? Will my motherboard automatically set good and stable values when I set it to 6000 or 6400? And there's this dynamic RAM stuff from Asus; never tried it or read up on it yet.
Not being able to do 6400 might have more to do with your CPU’s IMC
More banks always beat timings and speed, and to get more banks you fill every slot and make sure the memory is dual-rank. The interleaving and the ability to access so many more ranks than the brainwashed guy running a single single-rank stick at higher clocks is absolutely worth it. tFAW, tRCD, and a couple of others are much more important than CAS. It's also shitty that manufacturers sell kits with artificially lowered CAS compared to the other primaries; for example, 18-18-18 is so much better than 16-22-22.
This is a good set of points, and I did the same for my 5800x system, along with tuning PBO. I spent a while doing a "binary search" to determine around what individual timings are generally stable. And once I locked in every subtiming as a good starting point I did a 24hr memory stress test to ensure stability. After that pretty much every secondary and tertiary timing was systematically lowered by 1 (except tREFI which was raised a lot, because bigger is better for this one subtiming) and stress tested for 5~6 hrs before moving on lowering another subtiming. It took over a week to get to the final result, but my memory is as fast as it can be while being stable.
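The systematic lower-by-one-then-stress-test loop described above can be sketched like this. `stress_test` is a hypothetical callback standing in for a real multi-hour memtest/OCCT run, and tREFI (which should be raised, not lowered) is deliberately left out:

```python
# Sketch of the "lower one subtiming by 1, stress test, revert on
# failure" loop described above. stress_test is a hypothetical stand-in
# for a real multi-hour memtest/OCCT run, passed in by the caller.
def tune_timings(timings: dict, stress_test, floor: int = 1) -> dict:
    tuned = dict(timings)
    for name in list(tuned):
        while tuned[name] > floor:
            candidate = dict(tuned, **{name: tuned[name] - 1})
            if not stress_test(candidate):
                break              # last value was the tightest stable one
            tuned = candidate      # tighter value passed, keep tightening
    return tuned

# Toy demo: pretend anything at or above these floors is "stable".
limits = {"tRRD_S": 6, "tWTR_L": 10}
demo = tune_timings({"tRRD_S": 8, "tWTR_L": 12},
                    lambda t: all(t[k] >= limits[k] for k in t))
print(demo)  # {'tRRD_S': 6, 'tWTR_L': 10}
```

In practice each `stress_test` call is hours long, which is why the whole process can take over a week, as described above.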
It's also good to note that Jay doesn't seem to mention that he locked secondary/tertiary timings. While DDR5 has more secondary/tertiary timings in the XMP/EXPO table, it still takes motherboards a long time on first boot to do memory training if those values aren't known, since the motherboard is trying to guess what works/what doesn't. And those secondary/tertiary timing could have been very different (in good or bad ways) for different sets of ram.
@@igelbofhthis is NOT the case on DDR5. You’re kind of correct when it comes to DDR4. Different behaviors
This is an X3D chip though, they don't really care that much about ram speeds/timings.
Should've used a 9950X instead or used Intel(RPL or ARL)
X3D chips DO care about timings as evidenced by 11:00
The X3D chip is relying on RAM because of the 4K resolution, and this manifests as a 10% performance improvement.
Agreed. Most instructions will stay in L3 cache; the CPU will rarely go to RAM to fetch crucial, very frequently used instructions. Hence, if there was ever anything to see, you won't see it with an X3D chip. This was shown in a GN video where Steve had a build with a 7800X3D but with 3000MHz RAM; the performance was very close to 6000MHz RAM.
@@Robbie-mw5uu That just means 4K has more instructions that don't fit in the L3 cache, so some go to RAM, and with slower timings this affects performance.
Yep. This video will confuse people and let them think that RAM speed and timings don't matter that much, when they actually do matter for non-X3D. What "great" content.
@@Robbie-mw5uu It's still a better test to use a non-3D CPU since they are more sensitive to speed and timings; however, timings impact latency for everything. You could increase voltages and halve timings on the 3D CPUs and it would be far more effective than increasing RAM speed alone.
The new methodology and presentation are much better, Jay — explaining the stuff prior to the tests. The exchange of information with Steve (GN) was worth it. Keep it up!
Shame he then went and used a chip that nullifies most of the differences.....
It's a step in the right direction regarding ram benchmarks Jay.
Your results are with just two timings adjusted, on an X3D CPU. On a non-X3D CPU, if you fully tune all the timings, including tertiaries, you will see a big performance increase in the 1% lows.
High fps don't mean anything if your lows are all over the place and cause frametime spikes, and the fastest ram you can afford helps more than a lot of mainstream techtubers suggest.
Gamers know fast, low-latency RAM matters, while the mainstream says spending money on "fast" RAM is a waste of money.
I welcome more content like this.
Thanks.
It's not only those 2 timings adjusted. He's using the XMP/EXPO profile, and is just highlighting the CL and RCD as timings that are different between the kits. Other timings are different as well. On most RAM kits RP = RCD, for example.
I don't trust this test; use a non-X3D chip. Most of the instructions never go to RAM — that's why you don't see anything, if there was ever anything to see lol
Thanks
Sometimes I come across JayzTwoCents’ videos, and while he clearly knows his stuff when it comes to custom water loops, the way he handles technical content is just insane. This time, he took memory modules with different timings and tested them… on a 9800X3D, of all things. And guess what conclusion he reached? That memory doesn’t really affect FPS. Well, no kidding! What a revelation.
The problem is, in this video, he genuinely seems to think this applies to all CPUs, not just the 9800X3D, which behaves that way because of its massive cache. A video like this is going to lead thousands of people to buy garbage memory because, according to a million-subscriber tech YouTuber, “it doesn’t matter.”
So true
You're kind of missing the point that even with non-X3D CPUs, the differences would be basically indiscernible with properly-controlled double-blind tests, absolutely guaranteed. One person in a thousand (ie: a true pro-level gamer) _might_ get it right occasionally, but your average gamer would *never* be able to tell. DRAM snobbery is comically stupid...
@awebuser5914 I can buy terrible RAM and cripple performance; it happens to people all the time. Big YouTubers need to help people understand what the right RAM is.
@@artyomexplains _"...cripple performance"_ LOL! Hyperbolic much?? RAM will make virtually no discernible difference in the experience and enjoyment of *any* PC.
This PC "expert" obsession with utterly arbitrary "bigger numbers" is laughably idiotic...
@awebuser5914 Get a DDR5 2x8GB 5200 memory kit and explore your new PC's performance; share your experience after. Properly tuned RAM can make a huge difference, not to mention getting a 6400 kit may force 1:2 RAM mode or just unstable performance at 1:1. Your games crashing every hour would be quite noticeable, even for you. I have personally tested HUNDREDS of RAM configurations on different systems.
FYI - If you're doing repeated experiments, it is possible to characterise if a small variation is run variance or an actual small effect. If you are comparing A and B and repeat the experiment X times, you can assume a null hypothesis that A and B are equal within variance, i.e. the probability of A beating B is 50%. You can then look up the probability of the number of times that A beats B on a binomial distribution, and if that probability is sufficiently small, you can claim it's a small, but real, effect rather than run-to-run variance. So, for example, if A beat B 5 times out of 5, then as the probability of that happening under the null hypothesis is 6.25%, you can probably claim that A is genuinely better than B, but by a small amount. (OK, technically I'd recommend more repeats to get the probability down lower, but there are practical considerations as well)
This probably isn't too useful unless you're trying to identify really small effects, like this CAS latency stuff.
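A minimal sketch of the binomial check described above. With 5 wins out of 5 runs, the one-sided probability under the null hypothesis is 0.5⁵ ≈ 3.1%; doubling for a two-sided test gives the 6.25% quoted:

```python
from math import comb

# Probability of A winning at least `wins` of `runs` head-to-head
# comparisons if A and B are truly equal (null hypothesis: p = 0.5).
def sign_test_p(wins: int, runs: int) -> float:
    return sum(comb(runs, k) for k in range(wins, runs + 1)) / 2**runs

one_sided = sign_test_p(5, 5)        # 0.03125
two_sided = min(1.0, 2 * one_sided)  # 0.0625, the 6.25% figure
print(one_sided, two_sided)
```

As the comment notes, more repeats drive the p-value lower: 8 wins out of 8 gives a one-sided probability of about 0.4%.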
It's known on AM5 that sub-timings are more important. So, if these kits sacrifice sub-timings to get a lower CAS etc, it throws the results. Some boards make sub-timings tuning super easy, for other boards you need to look up sub-timing tuning guides.
A flight sim, MSFS or DCS, hits my RAM harder than anything else. One of these in the testing suite would be cool, especially with VR considerations.
Msfs is definitely an outlier where MT/s matter a good bit.
I gotta upgrade my ram for msfs. I think I need 64 gigs
+1 for DCS
I needed to upgrade to 64 GB of RAM because of modded KSP; it can take 32 GB of RAM at most, but it can eat all of it for breakfast if you are not careful. Had no problems with DCS; not playing MSFS though.
96GB of RAM is the sweet spot for Microsoft Flight Simulator 2024. I know Microsoft said 64 GB, but everything I've seen shows 96 GB actually improves performance.
"Nobody ever talks about timings"
*Hardware unboxed:* "Am I a joke to you?"
And Buildzoid: Hold my beer🍺
literally who
🤣
@@cliffs1965 He has a video on Hynix memory subtimings for ryzen 7000 that increased my 1% lows significantly.
Why are you surprised? That's what X3D does: increase the L3 cache size significantly. Those CPUs don't rely on RAM nearly as much as an Intel part would...
I've been in the I.T. and tech sphere for 10+ years now, and frankly I'm kind of jaded with the industry as a whole. But for whatever reason, finding the best dollar-to-performance balance of RAM speed, timings, and first-word latency is still fun for me to this day. Been doing it since DDR3.
Gskill bdie oyaa
First word problems... This guy b-dies...
@@monochromatech xD
In "the industry" your choices may be set by corporate policy and contracts. Price or performance may have been considered at some point, but perhaps not together. Once decided, those choices may be set in stone until that part is no longer available. No fun.
For me the fun is in knowing that I'm working with the platform that best meets the current and future needs, and that each component is the best it can be within the available budget. I know I've extracted the most performance I can from every dollar.
The rich boys don't have this. They probably sort by price, high to low, and pick whatever is on top. Or they order some customized prebuilt rig with the integrated fish tank and meteorite fragments. Maybe that has its own kind of thrill, but it's a thrill that a bottom-feeder like me will never know.
Been around a while also more interested in data movement than compute these days 😅
Jay, I gotta say, you have drastically improved the quality of your content recently, making sure it is highly accurate, and you have also started delving into fun topics that a lot of gamers/overclockers use when making configuration decisions. I enjoyed that EVGA 4090 video too, and I'm not even a fan of EVGA.
I wish you could’ve expanded this to include Stalker 2 as well, and dove further by comparing 16 vs 32 GB modules. The only reason I say this is because Stalker specifically mentions 32GB of RAM being necessary, and just how big of a difference double the capacity would've made. Regardless, I'm really digging this more refined testing methodology, Jay and team!
*For gaming:*
2015: 8GB is standard, 16GB is recommended, 32GB is overkill
2020: 16GB is standard, 32GB is recommended, 64GB is overkill
2025: 32GB will be standard, 64GB will be recommended, 96GB is overkill
I can't stop thinking about this since I watched it. It would be great to have a lot more data about it. Literally everyone with ram capable of high-speeds has the option of running them at lower frequencies with tighter timings. I imagine each game would have a "curve" of performance across different XMP/EXPO options. It would be really interesting to see where the sweet spot on the bell curve is for the majority of games
Since you are working on improving testing, I figured there is something worth discussing. One significant variable this test didn't account for is the actual bin quality of the chips. Sure, they were binned by Samsung/Hynix and binned again at G.Skill, but they still have a decent variance. As a person with several AMD systems, all with Flare X, I can state I have RAM sticks with tighter timings (CL12 and CL30) that I actually need to run looser than sticks with looser timings (CL14 and CL32). As an example, I have one set (2x16) of 3200MHz CL12 that will appear to run okay at "stock" XMP; only after gaming for hours and crashing, or running OCCT, will I see they are dealing with errors. Only by loosening the timings to CL16 (CL14 still errors) do I get error-free usage. On the other hand, I have a set (2x16) of CL14 that runs at CL10 without errors. That set is golden, beating supposedly better sticks in real use and testing.
Long story short, the "slower" set can actually end up being your fastest, and your "faster" sets could be throwing errors, making them slower. To avoid this, IMO, find your best set (underclock and overclock each respectively) to see which one can actually handle each timing best, and use that one set as your test subject as you dive deeper into the timings.
I say this on every single RAM video: test games with mods, test online games with you as the host, test games with custom maps. THAT'S where RAM really shines, not as an FPS boost in any scenario.
16:28 for the quick answer.
for the quick and wrong answer
Was going to say: fast RAM is less discernible on X3D CPUs due to the increased cache reducing how hard the RAM is hit.
For RAM testing, please retry with non-X3D CPU like a 9700X/7700X to be able to see the difference better from these timings and fast RAM.
The more important number is tRAS; basically the time it takes to get one data beat, then the next.
also read to read is very important if you need more than a few bytes of data
RAS is usually more important than CL, but RAS is still only relevant when accessing data from different rows within the same bank and rank.
Modern types of RAM have enough banks and bank groups (16GB DDR5 modules normally have 8 groups with 4 banks each), that data can be interleaved between them very efficiently, so RAS is relatively rarely used.
The time taken to switch between banks (RRD_L) or bank groups (RRD_S) and FAW (the minimum time for 4 consecutive switches between banks or bank groups) are therefore often more important than RAS for DDR5. RAS is still important, because it takes a long time (usually around double RCD or RP, and around 10x the RRD values), so when the RAM _does_ need to consecutively access data from the same bank and rank it is the biggest component of latency for that access, and it is a significant component of total memory latency, but it doesn't particularly stand out against RRD and FAW, RP, or RCD.
Jay, great video! This is a great example of something very technical yet important for me as a customer to understand what I'm buying, especially when it comes to high-end and sometimes very expensive PC parts. Also, I would really appreciate it if you would highlight in some way the numbers you're currently talking about (more effort in video editing, though); as English is my second language, I sometimes just get lost in what you're talking about.
Did you leave all sub timings on Auto? Because it would not surprise me at all that the motherboard trained your 36/36 kit with tighter sub timings and your 28/36 kit with looser ones. There are sub timings that cut into bandwidth not just access speed. Even if this video was not a deep dive, at least set every sub timing on every RAM kit the same or if you did that here: Tell us.
This is not relevant in the real world; selecting by published specs of RAM is. You are talking about the 1% of people who mess with subtimings, even in this tech-based community. The vast majority of people will be running on Auto.
@@BifsieOfficial I'd have to disagree. One of the secondary/tertiary timings is how long it takes to swap from bank to bank and DIMM to DIMM; it's why single-sided, single-stick DDR5 was faster than either dual-sided or dual-DIMM setups for a LONG time after release. DDR4 on AMD also had a bug where tRC was applied waaaay bigger than it needed to be — tRC is by definition tRAS+tRP, and is arguably THE most important timing, as it's how long a full cycle of operations takes. E.g. mine was supposed to be 59, but DOCP made it 80, so I gained ~30% more RAM performance just by tweaking that value. Hopefully it's fixed on DDR5, but I haven't upgraded yet.
@@BifsieOfficial So your are saying this entire video was pointless since it showed little to no performance difference? Tell Jay that :)
No matter, it's beside the point anyway. The point is Jay cannot explain what caused the 36/36 kit to be the fastest kit in F1. Even if it's just variance, that 1% variance should be in favor of the CL28 kit. And as I said, it wouldn't surprise me if the reason was that the motherboard trained the sub timings looser on the 28/36 kit than it did on the 36/36 kit, which may have completely eaten up any performance gains the CL28 kit may have had.
For example you lose ~12% on bandwidth - not access speed - by going from tRDRDSC/tWRWRSC 1 to 2.
You would also come to a different conclusion by the end of the video: It's not that CL28 kits might make sense given certain scenarios. It's that IF you do not control sub timings don't buy CL28 kits, period!
Also I never said Jay should tune sub timings. I said he should control them to be the same on every kit! I don't even understand how you can argue against this.
The video is literally titled "How much does RAM Timing REALLY matter?"
What did Jay show here? That RAM timing may or may not matter because the motherboard may or may not train way looser timings on one kit and may or may not do that for another kit.
And if you disagree with all of that, please explain why Jay bothered controlling the CPU speed to always be 5.3 GHz.
Using your words "The vast majority of people" will never do static clocks on their systems.
Hey create your own video and following. 😂😂😂
Excited to see your explorations in more data driven content! (Also the eventual use of bookmarks in the timeline? :D)
On the 9800X3D (or any X3D CPU), RAM has the least effect on performance. Do the same test with a non-X3D or an Intel CPU and you will see more of a performance swing.
Factorio is a game where CPU and especially fast RAM is absolute king.
Steve was looking into it for that reason, to see if they wanted to add it to their test games — and because of the 9950X3D leak, where it came up again. But in general, since AnandTech went belly up, it's gone from test lineups, sadly.
I was thinking about Factorio as well. Especially if you have large bases with lots of things moving around, it will absolutely hammer your RAM.
Jay, I love the colors you have for the charts — this is perfect, and the changes definitely show your style, with more "professional" results.
Bro Jay, you better be serious about this RAM timing thing, there is a huge mob that does hardcore RAM overclocking!
So relevant for me right now. Thank you! Would love to see more videos on this! Especially ones that utilize more RAM.
DDR5 6000 CL30 or 6400 CL32 is my go to.
yes got the 2nd one from gskill
waste of money imo
I got 6600 CL32 & it came with a 2nd profile at 6800 🙌🏻
@Jayztwocents I want to give some feedback. First, I really like that this video is very informational — no joking around. Your channel gives me the feeling that it can be both: sometimes a funny video about hardware, and then a good informational video like this without clowning around. Second, concentrating on one aspect of hardware, like RAM timings this time, is very cool and gives me a lot of insight. Third, putting the list of RAM modules on screen while you talk about it is very good — I had enough time to read it, which makes it more memorable to me. I really don't like when videos say "just stop the video if you want to read it" or "Google it yourself"; if a video wants to address something, put it in the video for a length everyone can read and understand, otherwise it's useless. So good job here. Fourth, not only do the benchmark results look very good, the transition animation is also very eye-pleasing.
I hope you find that little feedback interesting.
Love Jay, hate that he never does timestamps.
Really random but some of the best topics that this channel covers . You have helped me quench my curiosity over this topic. Thanks a ton !!
2:26 Isn't DOCP specifically Asus referring to XMP profiles as "DOCP" on non-Intel boards in order to appease Intel, while everyone seemed to just end up referring to XMP as "XMP" also on non-Intel boards and seemingly got away with it without serious incident?
As opposed to EXPO, which is a separate thing from XMP, actually specifically made for AMD and not just the Intel XMP timings being repurposed for AMD systems, as in earlier generations.
Yes. Steve's "research more" before commenting note rings true here.
As XMP is Intel's trademark, ASUS came up with DOCP as the name for RAM profiles on AMD CPUs, to avoid paying royalties to Intel for using the trademark on AMD systems. Afterward, AMD introduced their own label, EXPO, to solve the problem for all manufacturers.
My MSI board now shows A-XMP. I had another MSI Intel board before with the same RAM kit and it just showed XMP. Don't we all love non-standard naming.
Great release of the new testing methodologies. The results themselves were what I expected but it's a great safe start to implement these new testing procedures.
As a recommendation, I think a good benchmark for CPU/RAM gaming applications would be something like Assetto Corsa Competizione, RFactor 2 or flight sims and X4 strategy games. Every time I've upgraded my CPU/RAM, ACC was the game with the most noticeable changes in FPS and frame time.
What do you mean no one talks about RAM timings, Buildzoid literally can't stop talking about timings 😂😂
They should stop using such lame phrases that aren't even true, and learn to phrase things properly and according to the facts. It's not "no one" when it's just a "small number of people".
Fetching data from RAM on a cache miss takes a long time and hence leads to stutter and worse 1% lows, but if that fetching is infrequent then the average FPS will remain mostly unchanged. Some games try to avoid big stutters by fetching the data over multiple frames instead of all at once; this leads to a smaller, more even drop in FPS instead of one large hitch, which will have a greater impact on average FPS and possibly the 95th percentile (depending on how spread out the fetching is) but a smaller impact on 1% lows.
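The trade-off described above can be illustrated with a toy frame-time model. All numbers here are invented; the point is only that the same total streaming cost, taken as one hitch or amortized across frames, moves the 1% low far more than the average.

```python
# 100 frames at a 10 ms baseline. Scenario A pays 40 extra ms in one
# frame (a hitch); scenario B spreads the same 40 ms over 8 frames.
frames_hitch = [10.0] * 99 + [50.0]           # ms per frame
frames_amortized = [10.0] * 92 + [15.0] * 8   # ms per frame

def avg_fps(frames_ms):
    # frames rendered divided by total time
    return 1000.0 * len(frames_ms) / sum(frames_ms)

def one_percent_low(frames_ms):
    # FPS equivalent of the slowest 1% of frames
    worst = sorted(frames_ms)[-max(1, len(frames_ms) // 100):]
    return 1000.0 / (sum(worst) / len(worst))

print(avg_fps(frames_hitch), one_percent_low(frames_hitch))          # ~96.2 fps, 20.0 fps
print(avg_fps(frames_amortized), one_percent_low(frames_amortized))  # ~96.2 fps, ~66.7 fps
```

Same average FPS in both cases, but the amortized version's 1% low is more than three times better — which is why streaming strategy matters for how RAM latency shows up in benchmarks.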
I'm sure it makes a difference with the amd apus.
Great work Jay! Can’t wait to see more experiments like this!
What I really want to know is what's more important for gaming: low CL or high MT/s. Could you compare the performance difference between high-MT/s, slow-CL RAM vs lower-MT/s, fast-CL RAM?
@@ZithisVT It's easy: get as high MT/s as you can with the lowest CL, then tune the secondary timings — that's where most of the performance comes from.
You can't consider them independently. Latency is measured in clock cycles, so you need to consider *both* the transfer rate and CL in order to compare different RAM kits.
CL30 at 6000MT/s (3GHz internal frequency) is equal to 30/3 = 10ns latency.
CL36 at 7200MT/s (3.6GHz internal frequency) is also equal to 36/3.6 = 10ns latency.
But the other primary timings constitute a larger proportion of total latency, so are generally more important than CL, even though CL is the main advertised timing. For example when buying DDR5-6000, CL30-38-38 is often a lot more expensive than CL36-38-38, but not much faster; and CL30-40-40 is often about the same price as CL36-38-38 but is slightly slower in most tasks.
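A minimal sketch of the rule of thumb used in the two examples above: first-word latency in nanoseconds is CAS cycles divided by the internal clock, i.e. half the MT/s rate, in GHz.

```python
# Convert a kit's advertised transfer rate and CAS latency into
# absolute first-word latency in nanoseconds.
def cas_latency_ns(mts: int, cl: int) -> float:
    internal_ghz = mts / 2 / 1000   # e.g. 6000 MT/s -> 3.0 GHz internal clock
    return cl / internal_ghz

print(cas_latency_ns(6000, 30))  # 10.0 ns
print(cas_latency_ns(7200, 36))  # 10.0 ns -- same real latency, more bandwidth
print(cas_latency_ns(5200, 32))  # ~12.3 ns
print(cas_latency_ns(8000, 40))  # 10.0 ns -- why 8000 CL40 beats 5200 CL32
```

This is why comparing CL numbers across different transfer rates is meaningless on its own: the kits in this thread with "worse" CL can have equal or better absolute latency.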
@@dagnisnierlins188 "get as high mts as you can with the lowest cl"
No, this is bad advice. Getting higher MT/s isn't useful if it means your memory controller can't support it without running in a higher gear, that it won't be stable in your motherboard without excessive voltage, or (if you have a Ryzen CPU) that your infinity fabric can't synchronise with it.
Some RAM kits have very low CL but are slow because their other primary timings and subtimings are crap, and they often use low-quality dies that won't be stable if you manually reduce the other latency timings yourself. They're designed to _look_ fast to buyers who know that low latency is good but don't know a lot about memory timings, not to actually be fast.
Amazing video, i thoroughly enjoyed it and i'm well informed on the subject. Thank you Jay.
I've always paid attention to timings. But I don't get upset over cl30 vs cl32.
Same
That's miniscule
Especially if the rest of the timings stay the same
I went from CL32 at 5200 MT/s to CL40 at 8000 MT/s, and the CL40 kit outperformed the ''faster'' CL32.
@@nando03012009 8000 should outperform 5200. Timings are useful for comparing RAM running at the same speed, but 8000 is going to beat 5200 in most use cases.
The only tip I can give you is: if you do test Anno 1800, you also need to test with a big population. I know you can download 1-million-population cities, but that's as far as my knowledge goes, sorry. I just know the game runs without issues at the start. Just keep that in mind.
@@nando03012009 CAS latency is given in clock cycles — 8000 CL40 accesses data faster than 5200 CL32.
This is a really good topic and I appreciate the testing. I know it's a very specific thing but I'd be interested in a similar test with different brands / models of RAM with the same timings to see if things like integrated cooling and voltage regulators and whatnot make a real difference.
But thanks a ton for a great video!
I remember back in the day (early 90's) when timing changes would show notable changes. With today's high clock frequencies and massive data transfers, I see very small changes if any, especially with (most) games.
SotTR shows pretty solid scaling with tuned timings, that's my sanity check "is this doing anything" gaming benchmark. +11% FPS on a 12700k from stock XMP to manually tuned b-die kit at the same frequency.
I've been waiting for a video like this explaining the performance differences in timings.
I'm curious how much of a difference it would make in Star Citizen. The game is often RAM intensive, probably due to a lack of optimization, but the server also plays such a large role in performance that measuring it would be impossible.
Server has no effect on frame rate
All the ram in the world won't turn star citizen into a game.
When a game has a memory leak, the only difference RAM makes is determining how long you can play before you need to restart the game, i.e. the more ram you have the longer it takes to get to 99% utilization and need to restart. That said, getting more RAM won't fix Star Citizen's issues, because Star Citizen's issue is that it's an unfinished game being made by incompetent people on a bad engine.
@@DabutCHeR3000 They never said server has an effect on FPS, they said server plays a large role on performance, which it does.
Great video, I learned a lot. Thinking of your charts, maybe adding standard deviation from the runs may help show us the run to run consistency? It may make them less accessible to the audience, but us data nerds love it. Especially with such minute differences you tested.
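What that suggestion could look like, sketched in Python (the FPS numbers below are invented, purely for illustration):

```python
# Report mean +/- sample standard deviation across runs so that
# run-to-run consistency is visible alongside the headline number.
from statistics import mean, stdev

runs = {
    "CL28/36": [214.1, 216.8, 213.5],   # made-up per-run average FPS
    "CL36/36": [215.0, 217.9, 214.2],
}

for kit, fps in runs.items():
    print(f"{kit}: {mean(fps):.1f} \u00b1 {stdev(fps):.1f} fps")
```

With differences this small between kits, an error bar immediately tells the viewer whether a 1–2 fps gap is signal or just run-to-run noise.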
love you guys!!
Very interesting test and results! Good job on improving your test methodology!
MUCH better explanation & presentation of data, without as many rabbit holes as GN tends to do in their pursuit of complete reviews.
In this test, I would've preferred non-X3D CPUs. I think the 3D V-Cache could be confounding the results: the CPU can serve more data from its own cache, making it less reliant on RAM, and therefore making it harder to see actual differences (if they exist).
True
I do not agree with your first statement.. at all really.
Maybe its just hard for you to understand
@@AugmentedGravity Me either. GN rabbit holes are amazing.
Haven’t heard “FRAPS” in a looong time. Used to use that in early years of WoW.
No Heaven benchmark?
wheeze
Thanks Jay! I love your test videos.
1:43 I swear I felt my brain shrink 😐😂
Super stoked about the new methodologies! Actually more excited than I have been for a YT channel announcement in a while. My only request is some meaningful benchmarks for water-cooling components. GN is never going to do it, and the closest thing to a true comparative tier list is der8auer comparing CPU water blocks. I, for one, REALLY care about all this new data.
My guess is it doesn't make any significant difference.
Turns out my hunch was correct. Still good to have the data now.
Depends on the chip and the load.
@@manuelp7472 Well, in Black Myth there was about a 5 fps difference for the slowest kit tested (36/48). Significant? Not really. But as Jay mentioned, if you're not paying attention while shopping, you could accidentally end up buying something dumb with a latency in the 50+ range, which could make an even bigger difference.
@@manuelp7472 Of course it matters, but not so much for X3D chips — and lower CAS latency doesn't matter when the RAM sticks have borked secondary timings with XMP on.
Intel 13th and 14th gen gain more fps up to 8000–8200 MT/s.
There is no guess really; it all depends on what's being tested. Some games/tasks are extremely memory-bandwidth limited and others are latency limited, so depending on the game, CAS, speed, both, or neither can matter. This test also wasn't done super well — Jay should've used a CPU with less cache, because the extra cache of the X3D chips dramatically reduces the difference high-performance RAM can make; with so much cache, the CPU calls on the RAM much less.
1:43 OMG Jay, I've been a subscriber since the old spare-bedroom office and garage-studio days, and I can't remember ever laughing out loud before. The cache joke got me rolling on the floor...
🤣🤣🤣
So if I'm understanding this right, the best thing overall is to focus on capacity first; budget left after capacity can go toward speed, aiming for high MT/s, a low CL number, and low first two timing numbers. Nice! I'd love to see a larger test list comparing DDR4 and DDR5 across some popular RAM kits, just to see where the major companies overpricing their RAM land alongside cheaper kits with the same or similar timings.
7:00 Does switching off C-states matter for gaming performance and overall system latency?
I'd like to know this too. I heard it's a bad idea to turn off C-states but not sure.
Not relevant for everyday use.
You disable a bunch of power saving methods simply for "umaga I got another 90 points on my Cinebench score!1111" (we are literally talking
Your CPU will always run at turbo speed except in a thermal throttling situation. It's very good to turn it off for latency, consistency and sometimes performance when the Windows scheduler doesn't do its job properly
The main purpose of C-states is power saving, which for this test was disabled so that the CPU remains in an active state and presumably at a constant voltage. Disabling C-states reduces CPU-related variables in this test. Note: enabling C-states allows the CPU to idle and reduce voltage when CPU utilization drops below 100%. Therefore, overall system latency is only increased when the CPU changes states, such as from idle to active — but when a game is running you might assume the CPU remains active. Whether the C-state changes while a game is running might depend on the specific game and how it interacts with a specific OS. For general PC purposes, it's fine to enable C-states, as you probably don't need the CPU power maxed out (max frequency and/or max voltage locked) or the CPU active at all times the PC is powered on — but you have the option, because it's up to you.
I really like the use of colour on your charts - they really pop and are easy to read.
RAM timings and speed can make the 7800X3D unstable at times. Also, not many are talking about the performance impact of Resizable BAR and Above 4G Decoding, which can cause instability in Windows and especially Linux — and with many games, including older titles.
Nice video !
It would be great to see competitive titles being tested one day, as the higher the fps, the bigger the difference a factor like ram timings would show on the results.
Competitive titles might also be most of the titles that would actually "benefit" from the absolute lowest latencies, which only becomes a factor when you're pushing fps past 300.
It is kinda frustrating when similar tests like this are done and they stick to showing only the most popular titles, despite most of them not being able to reach high enough fps due to their engine, or the GPU being the real bottleneck.
Instead of testing different kits with uncontrolled secondary and tertiary timings, should have used the same kit and punched in all the timings manually. This is hardly isolating the variables to answer the premise of the video, this is more of a review of the kits themselves at their included profiles.
This is great! Love the new methodology deep dive
I will ask Buildzoid to do a review of this video 🍿
Umm, anyway
@@nuubialainen
I would suggest doing 5 test runs, excluding the best and the worst, and averaging the remaining 3 (a trimmed mean). I know it's time-consuming, but it's worth the hassle.
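A sketch of the run-reduction scheme described above — five runs, drop the extremes, summarize the middle three (here averaged, i.e. a trimmed mean; taking their median instead would equal the median of all five). The FPS values are invented.

```python
# Drop the best and worst of five benchmark runs, then average the
# middle three. This discards outliers like a background-task hiccup.
def trimmed_mean_of_five(runs):
    assert len(runs) == 5
    middle_three = sorted(runs)[1:4]   # drop min and max
    return sum(middle_three) / 3

# One run (230.9) is an obvious outlier and gets discarded.
print(trimmed_mean_of_five([212.0, 215.3, 214.1, 230.9, 214.8]))  # ~214.7
```

A plain mean of those five runs would be dragged up by the outlier; the trimmed mean stays close to the typical result.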
A better title would be: How Important Are Secondary Timings? Primary timings aren't that important without good secondary timings; Buildzoid talks about this all the time.
He didn't talk about secondary or tertiary timings at all
correct, but they are extremely important
@@Poppaai I'm well aware. Primary timings make almost 0 difference compared to tuning the secondary and tertiary timings.
Dammit Jay! This video's been showing itself to me so much that I HAVE to watch it now!!
Very important. I tune RAM all the time for the builds I sell and for a couple of pro gamers here in Canada. The difference in speed is significant; the timings are probably more important than the raw speed of the RAM. The trick is getting it 100% stable. I just recently tuned a DDR5-5600 kit that now gets 60 GB/s of bandwidth and latency below 60 ns. Voltage has to be increased for stability — and not just the RAM voltage, but the voltage for the IMC too. I tune the primary, secondary, and tertiary timings. It takes one or two days because of testing for errors after every couple of timing changes. All you are doing here is testing different kits with different primary timings; the real difference is in tuning the secondaries and tertiaries. The title of the video is somewhat misleading because you are just testing different kits — you're not tuning RAM.
Wholly agree.
One timing by one bin nets no difference, really.
So many timings to tune and consider.
And always hated RAM OC for how hard it is to make sure it's solid.
LOVING the new test disclosure. Definitely try non-X3D in the future, but otherwise? Awesome improvement!
You ran games, with an X3D chip and a 4090… if you looked at the RAM usage during these runs it likely was at 1% usage the whole time.
And no benchmarks beyond games. Load 1,000 RAW 45mp photos into Lightroom and apply a preset and export and you’ll see RAM CL make a difference.
He said "In gaming". It's a gaming test.
I really appreciate the new testing methodology. I think it's a big step up from what you have been doing. Keep up the self improvement!
OH boy, about to regret my 8000 mhz purchase lol
Surprisingly on Intel 13th/14th gen 8000mhz will give you more fps
@@dagnisnierlins188 ew intel lol
Yeah if your cpu isn't constantly overheating and shutting off maybe.@@dagnisnierlins188
You get 5–8 extra fps for that $150–200+ price premium 👍
@@zeroblade9800 did you buy 8000mhz for ryzen?
Ive been waiting on someone to make this video!!! Thank you Jay!!
It's almost 2025, can you put chapters in the video??? Sitting thru a 19 min video for something that can be summed up in less than 1 min.
Jay just being lazy as usual.
About to buy new DDR5 RAM to go with my -just ordered- Ryzen7 9800x3D. And the info about the timings just came when I needed it. Thank you!
My Custom Windows 11s Dragon Designed for gaming.
This is how it works,
I have games on Steam, but I run Steam in the background — I don't open it or use it directly. I set desktop icons to my own library using Fences and make this a drop-down box directly on my desktop, with each fence set to an individual category, four categories in all: C1 action/adventure, C2 platformer, C3 sports, C4 strategy/puzzle. I add my own side panel directly on the desktop containing settings, profile, graphics, and login, all under user control. It's all very clean, no clutter, and most importantly there are no apps running in the background slowing down the PC — the user is in full control. I'm using my own custom version of Windows 11. Yes, it has lower latency, achieved by removing autorun and adding a side panel for manual control. I am now working on my own browser; it's not in a finished state yet.
Edition Windows 11s Dragon
Version 24H2
OS custom build 26120.2705
Experience Windows Feature Experience Pack 1000.26100.42.0
No update issues, all settings set to privacy, no need for TPM 2.0, and a lower storage requirement — just a 35 GB or larger storage device — due to the removal of bloatware from Windows 11.
wtf is Windows 11 Dragon?
@@sopcannon It's my custom OS designed for gaming: zero bloatware, built under an OEM licence, no need for TPM 2.0, and a lower storage requirement (just 35 GB or larger) due to the removal of bloatware from Windows 11. It's a snappier, faster version of Windows 11.
Is your OS available to download somewhere ? I checked your YT and did a web search but am not finding anything
I was going to mention something about the x3d test, but many other people have already commented on it. It was interesting to see that it did still have some impact on the lows for 4k though.
Something I am interested in seeing is a ramdisk versus cached nvme test for gaming.
You need to do this testing with a non x3d cpu jay…. Ugh he is just so out of touch.
This is the video I have been waiting for! Amazing!
Jay, I just had an idea, I'm not an app developer but what would be super useful is a phone app that lets you plug in your cpu, motherboard, and ram info and walks you through various bios optimizations like overclocking your ram.
I'm glad someone else took the other timings into account.
Usually there are 5 important numbers, but a kit shows only 1, 3, or 4 of them.
For example, most of the time you see just CL30; sometimes 30-36-36, and sometimes 30-36-36-96.
But there's a catch: most of the time, the fourth number shown doesn't correspond to the fourth timing.
My kit (Lexar Ares) is 30-36-36-68-104; the 96 that another kit (usually Trident Z) advertises is actually the fifth timing. Example (I don't know their exact timings): 30-36-36-68-96.
So the kit shows 4 numbers, but the fourth number shown is the fifth latency 🤔.
Since AMD prefers low latency over high frequency, it's worth finding a low-latency kit at 6000 MT/s, or tuning a stable profile that doesn't reduce performance (lowering timings manually can work, but will not always increase performance).
I'm rocking a 7800 X3D, so It's great to see demonstrated the X3D side of AMD CPUs, virtually cancelling out the need for more expensive binned RAM. But as others have mentioned, non-X3D parts do lean more on faster RAM.
This is the kind of testing I like to see. I've always been curious about these multi-variable component pairings. Most of the time when a reviewer is doing a video on a CPU or a GPU, they only focus on pairing them with top-of-the-line components in order to create a bottleneck just to see it in a drag race against its peers. But there's never much time given to how those components fare when you pair them with cheaper parts. If all I get is 5 or 10 more fps, it doesn't seem like a worthy expense to throw an extra two hundred dollars at fancy RAM. Or a fancy SSD. Or a fancy motherboard. But I gotta see where the diminishing returns start kicking in.
This was cool! Would like more highlighting as Jay talks through the charts, just easier to follow.
For the sake of consistency, I would recommend to keep the order of the tested components the same on the comparison charts.
For the first test set with F1 2024 the order was 36/36 -> 28/36 -> 36/48. I found that a little odd, because the RAM with the tightest timings was placed in the middle.
On the BMW test set, the order then switched to 28/36 -> 36/36 -> 36/48, which made intuitively more sense to me, but was different than the other test set.
The charts are a tier list. Whatever is best is on top.
I’d watched your video with Steve. I’m new to your channel. For what it’s worth, I enjoy GNs data driven evaluation, and am happy to see you following suit. The more data the better. Going through your older videos, I’d like to see water cooling parts eval. GN doesn’t do this, and you seem to have a lot of experience. This could be a good niche. I’m adding channel, hope to see some open loop content in future.
tREFI is the really important timing for DDR5. Increase it or max it out and it'll make a bigger difference than CAS/tRCD etc.
tREFI is less important for DDR5 than it is for DDR4, because of DDR5's "same bank refresh" feature, which means that it can still access data stored in other banks while one bank in each bank group refreshes, while on DDR4 every bank within each rank has to refresh at the same time.
But it does still make a difference.
Finally — I was trying to find videos on this. I have the RAM that comes with the AMD bundle. I was potentially going to get the G.Skill Royale CL28 kit but don't know if it's truly that much better. Currently running a 9800X3D.
3 minutes ago…Never been this early to a video haha, cool. It’s cool to see you expanding the video topics and stuff Jay, I’m excited to see your testing and opinions and objectivity and how you present it (love the talking-head, casual, and dry humor personality you have in the video formats). Good luck and keep at it!
My god, talking about timing. I just upgraded my RAM yesterday and it doesn't run at full speed, and you release a video about it this morning... this is good TIMING 👍
Great refresher. Thanks.
Not many know about it and very few have done it, but one can do the same with VRAM and gain a fairly nice boost at stock clocks, at the expense of reduced headroom for overclocking. One can gain up to around 20% at stock clocks, but results obviously vary.
I have some t-create 6000 30-36-36-76 that I got for $85 a month ago (it’s $88 now) when I upgraded from a 12th gen + ddr4 to 9800x3d, and I have to say I’ve been super impressed with it. I’ve only run gskill or kingston since ivy bridge, first time with team. It trained super quickly and hit expo with 0 problems. I don’t play anything competitively enough that I’m going to see any difference tighter or looser, but I was looking primarily for white, non-rgb, lower profile ram & then looked at brand and timings. I think it’s a great kit and for me was the “sweet spot” for the build.