A very VERY informative video! Thank you Roman! I've been waiting to see any new development with PCIe Gen5 drives... they've only gone down a small amount in price, but there hasn't been anything new with them except maybe size and speed... but at that speed, do you REALLY need it to go faster?? Not for gaming, no... But I do like my Gen4 when I have to move my ISOs over from my slow storage HDD to the Gen4... it's slow to move, yeah... but when I'm installing the game it's SUPER FAST!!! lol... I just need to get a bigger Gen4... 1TB isn't cutting it anymore :/
My Adata XPG Gammix S70, which uses the InnoGrit IG5236 (gen4, 7000+ seq reads), heats up way faster than the ones with the equivalent Phison E18; that made me not consider upgrading to a gen5 any time soon. You don't have to look a lot to find some stupidly huge active coolers (yes, with fans) for NVMe... or liquid cooling. For those looking for a good laptop NVMe (without any heatsink besides what the laptop provides), stick to Phison E16 controllers (tops out around 4500 seq reads), which is still fast enough for most situations and will only overheat if you torture it enough
the corsair without the heatsink is even more impressive when you realise it doesn't even have the copper heat spreader (disguised as a sticker) the other one has
I play game titles that read and write to logs every second, so it really does depend on the software you're using. My group recommends antivirus exclusions for the log files, because scanning them introduces stutters
The price is because it's the "latest and greatest"; if you don't want to deal with that, wait 1 to 2 years. This video reminds me of the time you were shocked to find out VRMs need heatsinks. PCIe 5.0 NVMe M.2 drives at full PCIe 5.0 speed require water cooling due to the tight space, if you want to use the drive for long hours at a time.
There is an article titled "Adding ceramic powder to liquid metal thermal paste improves cooling up to 72% says researchers". Please research this topic. Thank you!
The endurance indicated for the Corsair MP700 ELITE 1 TB is 600 TB, so if you do 10 mins of non-stop write testing at 10 GB/s you've already worn out the SSD by 1% 🤔🤔
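A quick back-of-the-envelope version of that math in Python (a sketch only, assuming the 600 TBW rating quoted above for the 1 TB model and a constant 10 GB/s write rate):

```python
# Rough endurance math for the sustained-write test described above.
# Assumptions: 600 TB TBW rating (as quoted for the 1 TB model) and
# a constant 10 GB/s sequential write rate.
tbw_tb = 600          # rated terabytes written
rate_gb_s = 10        # sustained sequential write speed
minutes = 10

written_tb = rate_gb_s * minutes * 60 / 1000   # GB -> TB
wear_pct = written_tb / tbw_tb * 100
print(f"{written_tb:.0f} TB written = {wear_pct:.1f}% of rated endurance")
# -> 6 TB written = 1.0% of rated endurance
```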
Hi. The temperature issue is interesting, but one thing: if it's so fast, you'll also need less time to load the game or the OS or whatever, so with a good heatsink everything should be OK, no? The high price is inevitable
Am I nuts? In all of the systems I've built in recent years (mostly with salvaged parts) the goal is always to have at least two drives, where one is larger, and appreciably slower, than the other. Meaning even while transferring data to or from that 'backup' drive, especially when backups happen automatically, the system is still perfectly responsive and you're unlikely to notice.
The Thermalright heatsinks, HR-10 or HR09 Pro (not 10 Pro), both seem very capable and sensible. I saw an impressive heat difference with the HR10, and from what I can tell from the smaller channels they saw a similar impact. The heatsinks sold with the SSD are mostly placebo.
The x4 mode is likely a result of dual personality - I have recently seen a drive/controller that can do either Gen4 x4 or Gen5 x2, but not Gen5 x4. The tool you used is fishy, as is the whole Windows platform. I would suggest running Linux and checking the drive capabilities in a tool like lspci, which shows exactly what the interface capability and the current negotiated mode are. Furthermore, nvme-cli could show us what the operating points are in terms of power.
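For reference, a minimal sketch of that check on Linux in Python, reading the same link fields that lspci -vv reports (LnkCap/LnkSta) straight from sysfs; it assumes the drive shows up as nvme0, so adjust the path for your system:

```python
from pathlib import Path

# Read the negotiated vs. maximum PCIe link straight from sysfs.
# /sys/class/nvme/nvme0/device is a symlink to the PCI device.
dev = Path("/sys/class/nvme/nvme0/device")

for attr in ("max_link_speed", "current_link_speed",
             "max_link_width", "current_link_width"):
    print(attr, "=", (dev / attr).read_text().strip())
# A dual-personality Gen5 drive stuck in x2 would report something
# like max_link_speed = "32.0 GT/s PCIe" with current_link_width = 2.
```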
Hey @Der8auer, I am only seeing 12GB/s read speeds on my T705. Could this be due to the contact frame still being tightened too much? Or is it just the fault of the 285K? Maybe we need a BIOS or Windows update or something. But it is strange that the read and write are nearly identical; it seems like they are capped. It's not like I am only at 2 lanes, because then I should not even see 12GB/s, but this drive should be capable of 14.5GB/s reads... what gives 🤷‍♂️.
Hmm, just a thought: it could also be the temperature. I will try to cool it with a fan later and see if it remains exactly the same, but so far I have tested the PC from a cold state and after normal use, and every time I get the same speeds. I am using the Z890 Master with the motherboard Gen 5 cooler attached.
They really just need to drop SSDs down to a single PCIe lane. One lane of PCIe 5.0 is the same speed as four lanes of PCIe 3.0, which was already plenty fast for most people, and again doesn't need a heatsink or, worse, active cooling. PCIe 6.0 is already finalized and PCIe 7.0 is already in the works. Again, for consumers the drives are fast enough, and what most people would benefit from is lower cooling requirements and more PCIe lanes available for other devices.
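The per-lane arithmetic behind that claim, as a small Python sketch (theoretical figures after 128b/130b link encoding; real drives land somewhat lower):

```python
# Theoretical usable bandwidth per PCIe lane, by generation.
# Gen3 and later use 128b/130b encoding.
gens = {"3.0": 8.0, "4.0": 16.0, "5.0": 32.0}   # GT/s per lane

for gen, gts in gens.items():
    gb_s = gts * (128 / 130) / 8    # GT/s -> usable GB/s per lane
    print(f"PCIe {gen}: ~{gb_s:.2f} GB/s x1, ~{gb_s * 4:.1f} GB/s x4")
# PCIe 5.0 x1 (~3.94 GB/s) matches PCIe 3.0 x4 (~3.94 GB/s)
```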
It's a good thing motherboard manufacturers keep removing PCIe and SATA connectivity for more NVMe, when the drives are still super expensive and 5.0 is barely usable even with a heatsink... which will interfere with anything else near it on many motherboards. I haven't even researched modern controller/NAND endurance, so lifespan and reliability are another unknown variable, unlike every SATA drive we've been using for over a decade now.
That power draw problem is why I stuck with an efficient, high-IOPS Gen 3 NVMe for the system drive on my game build in 2023 - it's sitting under the GPU and gets hot enough as it is. (But when the SATA 2TB drive I already had started throwing reallocation errors last month, I picked up a WD SN850X with heatsink, as the second NVMe slot isn't under the GPU and I figure the newer Gen 4s aren't so toasty.)
Random question: is it possible to overclock memory controllers, or is the bottleneck the read/write on the flash memory? Have you tried using thermal paste (non-conducting) on the memory/chips even without an IHS/die contact? 😂 The MP700 looks nice, but idle power draw is a big selling point for laptops/handhelds. Hopefully SSD reviews will include that factor, where less DRAM and fewer components are actually a benefit.
"Is it possible to overclock memory controllers" - Sure it is... but do you really want to take the risk with your data? You would also need to modify the firmware, since there is no API for overclocking the controller.
Out of curiosity: does it make a difference if you have an HMB drive vs a DRAM drive if you intend on using it in an external enclosure? (Regardless of gen4 or 5)
I wish Optane didn't have so many drawbacks because we really need to take a break from sequential advancements to work on small file random performance. Every generation gets us double the sequential speed but only 10% more 4KQ1T1 performance.
I think PCIe 4.0 SSDs also have questionable value for many people. I am using 4TB PCIe 3.0 WD SN700 drives in both desktop and laptop computers, and am happy with the disk performance.
I'm in accounting, but in the third world. Everything is mostly still on local machines. Any speed improvement would be awesome for data transformation and analysis. I used to dream about being able to run our scripts on beefy multicore desktops with 64GB DDR5 and PCIe Gen 4.0 instead of an old ThinkPad. At least our clients and competitors are moving to big data.
I have a heatsink on my NVMe and installed a fan in the case side that blows across the back of my graphics card to keep my NVMe cool. I do not know why Gigabyte decided to put the NVMe drive right by the PCIe x16 slot, and have it mounted so close to the motherboard that there is no air gap behind the drive. I was wondering why my games on the NVMe kept giving me issues, then realized the drive was running hot. Now I have seen the drive hit about 45°C, versus having no idea just how hot it was getting before.
I remember the days of trying to RAID 0 my WD Raptor HDDs to get more read and write performance. Wow, 12,500MB/s is pretty amazing. Yes, hot!
Idk how accurate this is. That first drive isn't working very hard. Only 2,000MB/s on a gen 5? That's lightweight, it can do that in its sleep. So it's not going to heat up as much. The 2nd drive was pushing 12,000MB/s... that's why it got hot.
Outside of data centers you really don't need high speeds. While my main gaming rig does use a gen 3 NVME M.2 drive, I have found for most uses old school SATA drives still work just fine. So when building your PC it's still best to focus on price per gigabyte. Unless you do high I/O applications.
I think gen5 storage came a bit too early. Gen4 still has a way to go in improvements, capacity and lower prices. And here comes gen5, way beyond anyone's needs. Nobody uses the speed it provides, it heats up too fast and it's too expensive. Ideally it would just beat gen4 at every measurement and there would be no need to use gen4, but instead it's like a race car that showed up at a street race full of exotic rides. Way out of place and not racing anyone.
To be fair, gen5 is really more about enterprise than consumer; datacenters pushing 100GbE+ is where gen5 NVMe shines. And M.2 is starting to show its limitations with gen4/5 NVMe; it kind of defeats the point of having a small form factor when you have to use it with a huge heatsink to keep it from overheating.
When it comes to storage, capacity is still more important to me than raw speed; gen 4 SSDs are plenty fast and almost never get used at their max speed in the real world. When I can get twice the capacity for the same price as gen 5, it just makes no sense to buy them.
even pcie 3 is generally plenty fast
@@ThylineTheGay even a SATA SSD is good for almost everything
2B-chan 😍😘🤗
@@ThylineTheGay Yes, mine runs at about 3.5 gigs per second and is already very fast. Samsung 970 Evo. However, for AI and large files (10+ GB) that becomes slow. Now running two Lexar NM790s in RAID 0 (~12 GB/s). Now that is back up to speed for the task. In 5 years that will be slow.
I'd love some much slower but larger SSD storage.
still would want at least 5x normal HDD speeds
The speeds are so far ahead of what I need that all I care about is price, low temps on passive cooling, and reliability/longevity.
All I want to do it install my drives under the motherboard heatsinks and forget about them.
Would've been nice if SSDs had a form factor resembling RAM sticks. Or perhaps a dual-purpose standard for DIMM slots supporting both RAM and PCIe/NVMe x8, and double the default number of slots to 8 on a standard ATX board.
Yep, exactly, for now there's no need, but with 1 TB games in a couple of years you will need faster storage
@@paulmichaelfreedman8334 RAM sticks are actually bigger than M.2 SSDs; it's just the M.2's footprint on the motherboard that's wider, because the PCBs lie flat (parallel to the board) instead of standing angled.
@@Saiohleet No need for faster storage. Gen4 x4 is still the best by far if it's a single-sided SSD with 3D TLC flash that has great endurance. I use 2x 4TB Netac or Viper Lite drives and they are just awesome
Price is important, but in my experience as a builder of high-end gaming PCs, the cheap drives (i.e. Kingston NV2) have a much higher failure rate. More specs are required in advertisements. Speed is not a concern of mine either, because the drives are fast enough... but quality of the components is my #1 concern. You just have to wait for the sales and select the highest-quality drive within your budget.
The cat is trying to figure out how to get into the little box.
Those cat breaks are evil, attention-drawing, and manipulate our emotions with cuteness to make us watch even longer! What an insidious business strategy. I love it! 😁👍
I hate them, they pad the video and detract from the flow of the content.
@@IIARROWS No one can ever be happy. Did you get pissed off when a cat sat on his desk? Use the skip button; it's just a waste of bandwidth to post such a rude complaint about content you get for free.
@@razorsz195 A cat in the video is not the same thing as 30 seconds of nothing.
He found the formula and it's adorable 🤣 but it works
One of the good old conspiracy guys who's since disappeared answered the "these guys just want money and attention" comment with "I know how to point my camera at my cat and get 5 million views". Instead he was getting shadowbanned and 5k views for talking about the cabal.
basically PCIe 5.0 SSDs are still questionable for regular use
Depends. I'd say it's a time saver for large backups, especially at the consumer level. IOPS are finally good for consumer-level virtualization, where all the VMs are running from the same drive.
@@EasyMoney322 forsen1 I see bajs.
@@Simon_Denmark LUKA TIM
Unless gen 5 drives are x2 instead of x4
@@Simon_DenmarkforsenBoys 🔭 forsen1
I pressed skip to highlight in sponsorblock and the highlight was the cat playing with the m.2
ahahahahaha
fellow sponsorblock user, hi!
@@anonymoususerinterface hi.. been using it for years now on my phone with vanced and now revanced and on brave browser on pc
Highlight button being put to good use!
Personally, I've been waiting for the Hynix/Solidigm Platinum P51 drives to come out. Hynix has always been late to each PCIe gen, but both efficient & speedy.
I liked Hynix/Solidigm, but they have never corrected the firmware issue on their drives where, after several months of usage, the pSLC cache doesn't flush and your write speeds drop because all writes go direct to TLC; the only solution is to periodically secure erase and reinstall onto the drive. Both the P31 and the P41 Platinum/P44 Pro exhibit this quirk, and Hynix won't do anything about it.
@brandonupchurch7628 Wish I knew this earlier, just bought one because of the positive reviews. Should have gone for the Lexar NM790.
@@brandonupchurch7628 I bought the P41 because Linus said SK hynix was good and it was on sale, and I have regrets
@@brandonupchurch7628 Wow, first I'm hearing of this. Samsung's firmware bug was worse but at least they fixed it.
@@brandonupchurch7628 So I looked up the issue and it seems like people would encounter it only on Windows, while Linux dual-booting on the same drive would be fine. Also may be Windows version dependent. These things suggest a Microsoft issue, and it would hardly be the first. Still, worth avoiding the drives for now, so thanks for the heads-up.
Thanks, Roman! I’m considering upgrading from my Samsung 990 Pro Gen 4 SSD to a PCI Gen 5 SSD. My main use for high-speed SSDs is 3D simulation caching, where each frame can produce files over 2GB. During cache playback, if the SSD's speed isn’t sufficient, it results in glitches and choppy performance which hinders work in some cases.
What kind of work do you do? Genuinely curious, it sounds pretty awesome.
Sounds worth it then, hope everything goes well!
This is where Enterprise-class drives (U.2, but that's basically another form of NVMe, and cheap adapters are available) are a good solution, they're designed for intensive R/W over time, and are physically larger, so they're much easier to cool. The downside is the price. Still, if you need the speed and are able to pay for it, it's a solution.
I would really be interested in your use case. What kind of industry do you work in?
@@greggmacdonald9644 I agree. Enterprise SSDs are worth the look for intensive workloads. U.2 and U.3 are pretty common.
I have a good if very niche use case for a Gen 5 NVMe SSD. I use EVE-NG to build large network simulations. A catalyst 9000v node needs 18BG of RAM. I bought a 1TB Gen 5 SSD and created a 1TB page file on it. I can then add 1TB of RAM to the EVE NG VM and it allows me to create very large networks.
18 BigaGytes of RAM
Is it written in Java, to suck that much memory emulating a single device? 15 years ago there was a Cisco emulator called Dynamips which would run quite a bunch of devices on a 256MB machine. With that said, I agree with you that modern NVMe drives are now getting closer to RAM speed and it's becoming almost acceptable to swap on them!
@@levieux1137 It is indeed approaching DDR3 or even DDR4 speeds, but the latency is still 100 to 1000 times higher. That's where Optane was awesome. Permanent storage at SSD speeds, but only at about 10 times the latency of RAM. I forget the exact numbers, but it was squarely between RAM and SSDs.
@@Winnetou17 yeah, I do have an optane 16GB in M.2 form factor, it's pretty nice as an SSD! Amazingly fast with super low latency. These ones are quite easy to find used for acceptable prices nowadays.
ChatGPT is most likely wrong about the heat-up time.
With a 2 W power draw, 10 g weight and 800 J/(kg·K) specific heat (about the same as aluminum, bricks or glass), a 60 K rise will take 800 × 0.01 ÷ 2 × 60 = 240 seconds
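The same lumped-mass estimate, t = m·c·ΔT/P, as a runnable sketch; all inputs are the assumptions above (2 W, 10 g, 800 J/(kg·K), 60 K rise), and a real drive heats unevenly, so treat it as a rough upper bound:

```python
# Lumped thermal-mass estimate: t = m * c * dT / P
mass_kg = 0.010       # 10 g of PCB and packages (assumption)
c = 800.0             # specific heat in J/(kg*K) (assumption)
delta_t = 60.0        # assumed temperature rise in K
power_w = 2.0         # dissipated power (assumption)

t_s = mass_kg * c * delta_t / power_w
print(f"~{t_s:.0f} s to heat up")  # ~240 s
```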
I assumed 7g PCB material and 3g copper so that's probably where the difference comes from :)
@@der8auer-en just use a tool that'll always give good results then
and not hallucinate trash
@@ThylineTheGay That tool would also be very wrong. Why? Because most silicon thermal sensors sit directly where the heat is an issue, meaning it would heat up way quicker there. ChatGPT is using (depending on the prompt) the same generic calculations most tools would use
@@AlpineTheHusky I wouldn't trust ChatGPT to add 2+2. When I asked it how much TP a person would need for 9 days when making a travel shopping list, it said the average person uses 2-3 rolls a week, so I'd need 18-27. It was unable to reconcile weeks with days: it just multiplied 2-3 by 9 instead of by 1.28 (nine days ≈ 1.28 weeks).
@@dominic.h.3363 Well, a well-written prompt and common sense are still required.
The first gen5 drives were technically illegal: they violated PCI-SIG rules on current delivery per pin. PCI-SIG then did a revision increasing the current per pin, plus a new connector spec. So you have to be careful. It's a mess that hardly anyone noticed, as PCI-SIG hides their spec sheets.
Good video as always der8auer, and it's nice that you mentioned the lack of DRAM, because not all SSDs are equal.
The lack of DRAM cache annoys me on modern SSDs.
Sure, the DRAM cache CAN be used as a write cache, for example, but it was mainly used for storing metadata for quick access.
The lookup table is part of the metadata; it's basically a table that tells the SSD where data is stored in the NAND.
Storing metadata like the lookup table (LUT) and the mapping and addressing information in DRAM makes it much faster to access, because DRAM has so much lower latency than NAND.
HMB will add latency, going to the CPU over the PCIe bus and then to its RAM; no way around the laws of physics there.
There is also some SRAM on the controller, but SRAM is a lot more expensive than DRAM, so there isn't much of it, and DRAM-less SSDs have to depend more on that for performance.
DRAM on an SSD is mainly for metadata, not a read/write cache like on a hard drive, though some part of the DRAM can be (and probably is) allocated for buffering writes.
We think of NAND as very fast, and sure, it can be; the benefit is good random read/write performance. BUT it can also be very slow.
An erase cycle followed by a write can take 20ms; even a 20-year-old SCSI hard drive will laugh at that with its 4-5ms access time.
That's why we have TRIM and garbage collection on SSDs, so erase blocks are ready for writes, and also why we have DRAM cache onboard, to find where data is located faster.
But anyway, I don't like the linear-bandwidth-over-everything trend with SSDs. Low latency is more important, to get access times down for most use cases.
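To put a number on why that lookup table wants dedicated DRAM, here's a rough sketch; the 4 KiB mapping granularity and 4-byte entries are typical rule-of-thumb assumptions (about 1 GB of map per 1 TB of NAND), not the spec of any particular drive:

```python
# Rough FTL (flash translation layer) mapping-table size estimate.
capacity_bytes = 2 * 1000**4   # a 2 TB drive (assumption)
page_bytes = 4096              # mapping granularity (assumption)
entry_bytes = 4                # bytes per table entry (assumption)

entries = capacity_bytes // page_bytes
table_gb = entries * entry_bytes / 1000**3
print(f"{entries:,} entries -> ~{table_gb:.1f} GB mapping table")
# ~2 GB for a 2 TB drive: far too big for controller SRAM,
# hence onboard DRAM, or HMB borrowing host RAM instead.
```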
DRAM is a band-aid anyway and doesn't really make that much of a difference in the absolute majority of desktop use cases. If you really care about random-access IOPS, you have to run 3D XPoint drives, which are a night-and-day difference compared to any other storage device
Yes, lower latency please. Bandwidth is good enough for now.
😮
For those with the same lack of acronym knowledge as me:
HMB stands for Host Memory Buffer, an optional feature in version 1.2 of the NVMe specification, which allows SSDs to utilize the DRAM of the host machine.
And yeah, even though almost everybody agrees that PCIe 5.0 speeds aren't needed for normal folks (including some power users), it is annoying that the rest of the specs are still not promoted enough. I guess they need a new short, catchy way of advertising them. Like DDR5-8200 CL40 is for RAM, we need something like PCIe5-10000 XL4000, where XL is the worst-case latency in microseconds. And maybe have an expanded way of stating several often-used (so relevant), well-defined latencies, in a well-defined order, like the primary timings are for RAM. So something like 50-50-70-4000, which can then be added to the long name of the product.
The motherboard design, and the M.2 location not being next to the GPU heat, is also a big factor. I've got 2x Samsung 990 Pro 2TB, both idle; one's at 52C, the other at 36C :) on a TUF GAMING B650-PLUS.
Does one of them run your OS perhaps? That would explain the difference at idle.
The Samsung 980 Pros run stupid hot in my experience.
Saw one of the temp sensors hit 76C after a long period of gaming.
I saw a video about the thermal pad not making correct contact though; the pad covers 2-3 different chips and they're different heights, so you're better off re-padding the NVMe with your own pads that have a 0.5mm difference.
E.g. 0.5+1mm; I tried that, but 1mm+1.5mm seems to work better. The same gaming scenario brings it to 56C or so now.
@@MDxGano Yeah, one is the C drive, which might well be the reason; the other drive is data. I guess there could be background IO going on, but it looks small in Process Explorer. I wondered about moving them around, but I think with the GPU installed the 3rd M.2 socket is either disabled or runs at half bandwidth, if I recall.
@@DamnationPala ah nice yup could be user error :) maybe some motherboard heatspreader contact needs checking/tightening, I'll take a look.
Exactly, the M.2 above the PCIe x16 connector gets much hotter, as the heat from the backplate of the video card rises up and heats it. Most motherboards have this arrangement.
If only these SSD makers made high-durability units; instead of raw speed they should really be paying attention to latency and mixed-access speed. Optane was simply too early. I don't want big capacity (because it becomes tempting to throw everything on it), but I'd sure appreciate the peace of mind of not worrying about repeated write/erase cycles...
Unless you're moving TBs of data every day, the lifespan of an SSD is functionally limitless. Expect a minimum of 10 years, possibly extending to beyond 100 years of life from an SSD in regular use.
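For a rough feel of the numbers, a sketch using the 600 TBW rating mentioned earlier in the thread and a fairly heavy 50 GB/day of host writes (both illustrative assumptions, not measurements):

```python
# Endurance horizon under a constant daily write load.
tbw_tb = 600            # rated endurance (assumption from thread)
writes_gb_per_day = 50  # fairly heavy desktop use (assumption)

days = tbw_tb * 1000 / writes_gb_per_day
print(f"~{days / 365:.0f} years to exhaust the rating")  # ~33 years
```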
@@wills.5762 After 10 years of taking very good care of my 256GB SSD, the drive was still at 98% life.
The average person should never, ever worry about drive life
@@wills.5762 I agree on that part, if your drive has a good TBW rating. There's no telling how slow it will become in the future once wear leveling comes into play, or how good it really is...
I think for these use cases you need to look at enterprise SSDs and not this consumer stuff.
@@DerIchBinDa If only they came in consumer form factors; heck, even just SATA, but it's not happening...
Indeed, this was needed for PCIe 5 to be of use.
If only PCIe 4 came in 8TB at a cheap price, that would be of more use.
8TB is harder to find now than it was a few years back; we truly have regressed :(
I'm still waiting for a $150 4TB drive, then it's time to buy.
Unless there's a flash memory oversupply like in 2023, you ain't getting anything less than $50 per TB from a reputable brand in the near future.
Don't worry though, I feel your pain.
@@EbonySaints Yeah, I'm not expecting it'd be anything other than like Team Group (the MP33 was already as low as $175 for 4TB). My only requirement is that it is not QLC.
I'm also waiting for that. I don't need gen5 speeds but a large capacity SSD at gen3 or whatever, that does not need humongous active cooling
@@marcogenovesi8570 Crucial P3 Plus: reputable brand, high capacity, good price
If you have the space (i.e. not in a laptop), then look at used enterprise U.2 SSDs; I bought a slightly used 8TB one for around $250. Then all you need is a PCIe-to-U.2 or M.2-to-U.2 adapter, which costs around $20 or less. And they have way better endurance than consumer drives, and usually also power-loss protection, which is nice to have.
Still rocking my Samsung 970 Evo gen 3 drive and have zero reason to desire much more. Does great in daily tasks, great in gaming, great in most workloads that aren't super dependent on massive constant file transfers - and even then it's not bad by any means.
I'd rather have more capacity for cheaper than just a faster spec for higher cost and higher cooling demands.
Indeed - I accidentally used my Gen3 NVMe drive as the boot drive in my last build, and it actually works well as it sits below the GPU and doesn't generate heat. It depends on your use-case, but not everyone wants bigger faster hotter etc. For general boot drives and gaming, Gen3 (maybe even SATA) flash is good enough.
I wouldn't be surprised if NVME drives go back to the 2.5" format because you can have better heat dissipation that way. There's only so much you can do to attempt cooling such a small area.
That's pretty much the U.2 format you're describing. Too bad it's mostly kept outside of the consumer market.
What can be achieved with NVMe is to use PCIe-card form factor SSDs (or at least M.2-to-PCIe adapters) with proper heatsink blocks. I still believe it's a bad design choice to reserve precious PCB area for an M.2 slot instead of putting in a standard PCIe slot.
@PainterVierax Some manufacturers had vertically mounted M.2 slots, then decided not to have them anymore.
I'm glad you said that about the cache. I've run into _so many_ people who say to get a drive with cache for gaming, that it's a must. Eventually (after years) I finally did, only to see what I had expected, no performance difference for that use.
The only thing the DRAM on an SSD does is cache the drive's local map of where data lives; it doesn't work like the cache on an HDD. That's why it's only around 1Gb (134MB) in size: it's just a local copy of what is basically the MBR for the SSD's firmware, so the drive knows where all the used and empty bits are without having to wait for CPU/memory access. The write cache on an SSD, by contrast, usually takes the form of a couple of hundred GBs of flash written one bit per cell instead of 2 or 3; but once that runs out and the data needs to be written to flash normally, it will slow down quite a bit
3:12 sounds like it's time for another Der8auer product ;)
The silver block looks so good 🤤
One thing I would like you to test is absorption of heat from the GPU backplate to the SSD. Heatsinks can also work in reverse, and I find my idle SSD can reach 60+C while gaming just from the passive heat of my GPU.
Once you realize SSD speeds don’t affect performance it’s hard to justify gen 5 drives
what? that doesnt make sense.
Well, you won't notice much difference unless you upgraded from something much slower. It also depends on what you do on your PC.
@@Chicken-o5e Much slower would be an HDD or maybe a SATA SSD; for random loads, one SSD to another is generally the same (unless you're using enterprise/business-class SSDs, as they have QoS which keeps latency below 2ms for reads and 8ms for writes max, but under 0.1ms typically for both under normal loads)
literally a huge difference
you must have RAM caching enabled
when RAM has 200ms at 300MB and a gen5 has 1ms at 5GB
@@croydonzeldra5623 does ram caching decrease speed and/or increase latency? If so, how do you turn it off?
Thanks, I had just ordered some parts for a new build and went with the Crucial T700. After watching this video I looked into it a bit more and cancelled the order, got a Lexar gen 4 instead.
Perfect timing, I'm currently building a PC and was wondering whether to use a PCIe 4.0 or 5.0 drive due to temperature concerns (mini ITX)
If you have a fan cooler for it, you will be fine. Not sure if you will have enough space in a mini ITX, though. I have a HAF 942 case, and the giant heatsink and fan that came with my mobo for my PCIe 5 M.2 slot (aka the blazing-fast slot) barely fit under my NH-D14. I felt kind of lucky that it all fit, literally just a few mm of space left. I would measure the space to see if it can fit before ordering. My temps never go above about 44C and I use it every day. For me the blazing-fast slot sat right under my CPU, so it was a tight fit. If you are making a future-proof build, this is definitely the best way; you won't need to upgrade in the future, and at least you won't have to wait a month like I did. Funny that my mobo had all these 5th gen slots when nothing was even out on the market yet. Also, if you wait long enough for your build you can get a GPU with gen 5 too: the AMD 8000 and Nvidia 5000 series will finally be able to use it.
I needed that cat break. Thank you for sharing. :D
The era of diminishing returns.
Nah, just things are rushed to hit the market too fast instead of resting a few years in between to let the tech cook.
@@thetheoryguy5544 that's how tech has always been, it's just developments are made faster and faster as Moore predicted
thx :) For the next video, if we could have a game-launch speed test vs previous PCIe versions, it would be perfect. :p
I'll add a Big + for this Cat break! Thx Mr Roman.
Practically, the only problem I've had with an M.2 SSD in a PCIe 5.0 M.2 slot was when I did a Clonezilla disk-image restore of my Windows 10 C: drive. The speed went from 17GB a minute down to 3GB, because the gimmicky ASUS motherboard M.2 heatsink couldn't provide enough passive cooling. I got a Thermalright HR10 2280 PRO Black SSD cooler; with a fan, it will keep the SSD cool in any scenario.
I think you meant GB/s, not GB/min, otherwise 3GB/min is the speed of a USB3 thumb drive (50MB/s) and the PCIe generation becomes irrelevant at such speeds (even gen1 x1 was 5 times faster).
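The unit conversion behind that objection, as a one-off sanity check:

```python
# GB per minute to MB per second, for the figures in this thread.
def gb_per_min_to_mb_per_s(gb_per_min: float) -> float:
    return gb_per_min * 1000 / 60

for rate in (17, 3):
    print(f"{rate} GB/min = {gb_per_min_to_mb_per_s(rate):.0f} MB/s")
# 17 GB/min ≈ 283 MB/s and 3 GB/min = 50 MB/s -- both far below
# Gen3 x4 sequential speeds, so the link itself isn't the limit.
```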
@@levieux1137 It is 17GB a minute because it has to read the compressed disk-image files from a mechanical hard drive, then uncompress and write them to the M.2 SSD. So it takes about 30 minutes to restore a Windows 10 C: backup image, which is about a 355GB backup image restoring to 525GB on C:. So one might think my PCIe 5.0 SSD should not slow down much during such a restore. But it did, because the gimmicky ASUS motherboard M.2 heatsink is crap.
@@xav500011 it's just very strange the SSD was hot at this speed which is in the order of 1% of its performance.
@@levieux1137 Clonezilla runs Linux. So when the restore was running I could ALT+F2 to a command line and run the "sensors" program. I could see the PCIE 5.0 M2 SSD temperature rose from 40C to 72C. Again the M2 SSD heatsink that came with my ASUS X670E Tuf Gaming Plus motherboard was a visual gimmick. And the thermal pad was far too thick.
Great video. It's nice to see Gen 5 improving, even if it's slow and steady.
I vote for every video having a Cat Break. We really need it.
Thanks man, just saved me some cash in the new build.
Good video again. I believe that one day your cats will know how to build their own PC setup. Cats have such fast reflexes that you need the fastest stuff for that PC. 😀
Ran CrystalDiskMark while watching this, on a Crucial T705 2TB. Sat at around 55C with a max of 58C during the read/write tests. It's in the Z890 ROG Maximus Extreme with the vapor-chamber cooling block, so the cooling on this motherboard seems to be working well
Nice work Roman. I find it odd that no one talks about the fact that any PCIe 5.0 SSD put in a board that's not Z890 (Intel) will drop your GPU to x8 no matter what. Sadly I found this out myself when I first got my Z790 APEX ENCORE: it says PCIe 5.0, but it dropped my RTX 4090 to x8 :/
It wouldn't with AMD: on their 6xxE and 8xx chipsets, they have a PCIe x4 reserved for the SSD that is wired directly to the CPU, plus x16 for the GPU.
@@igorvidakovic7388 i'm talking about Intel bro
@igorvidakovic7388 I just checked the manual of the ROG X670E boards, and if you install a PCIe 5.0 SSD it's the same as Intel. Do you have a board and a PCIe 5.0 drive to check whether it drops the GPU to x8? The chipset diagrams for both Intel and AMD say one thing, but the actual board implementations are not the same.
If you do a bit more research you will find that it doesn't really matter. I initially also got hung up on that but from reviews and tests done it doesn't matter.
@RoelofaCoetzee I'm a HWBOT overclocker, and x8 on the GPU does matter; it should always be at x16, no matter what
I love the cat breaks. I know they're not the point of the video, or why I watch them, but still :)
I just LOVE the Cat Breaks !!!! Your cats are ADORABLE!!! My cat has become very overweight and doesn't play with things anymore... she just cleans her fur all day (either she likes eating it or she has a skin prob... OFF TO THE VET!!!)
They groom their fur, not clean it. If they can't get actual dirt (like car oil) out of their fur they will pull it out.
The Cat Break is a clever idea for a sponsored segment 😉
Yet here I am, with a SATA SSD and an Optane cache on top lol, I feel like I don't even need a gen4 SSD
i have one too that has lasted me a good 7 years now through very rough usage.
I just want some affordable 16TB SSDs even if they are half as fast. They can be SATA for all I care - I just want to build an affordable and reliable QUIET disk array.
I'm just waiting for someone to say that is a niche market - like BMW, Lexus, Acura, Mercedes, Landrover and other more expensive cars are a niche market - somehow through charitable work the companies survive.
Enterprise drives maybe?
Same tbh. My Steam library is well over 10TB and I live rural, so the internet is awful. I'm not uninstalling games just to redownload them a few months later
@@Hetsu.. They make them - crazy expensive but we were promised 48TB consumer SSDs from Samsung ages ago - I'm sure they are available in the same aisle as my graphite batteries and my flying car - just below the cold fusion generators.
@@wills.5762 I get it - I converted countless discs that were filling my small apartment. I'm at 125TB of movies and TV shows, in a town where the biggest import is manure for the farms.
What was this video about again? Sorry I was distracted by the cats...
Nice video sir 👍
Came for the tests, stayed for the Cat!!
Came for the tests, stayed to remind everyone Trump is president elect. Cry
Glad the SSD didn't suffer CATastrophic failure
Be honest, it was just for the cat…
@@kzip2009 I hope you find more interesting things in life than politics; everyone gets it by now. There are other countries in the world besides the US. I don't go shouting around the internet about who my president is when they get elected.
@@Simon_Denmark Cope champion
For me, the most I'd ever want is PCIe Gen 4.0 x4. If there were more Gen 3.0 x4 drives with DRAM cache I'd probably plump for them every time, but Gen 3 is usually budget drives with questionable controllers and junky sustained writes.
I wish we got more PCIe Gen 5 drives in x2 instead of x4. Having more lanes available is always nice.
der8auer, Corsair makes the MP700 PRO SE Hydro X Series (2TB or 4TB, M.2 PCIe Gen5 x4 NVMe 2.0, M.2 2280, up to 14,000 MB/s sequential read, high-density TLC NAND) and, most importantly, with a pre-installed water block. I'm running it on my new ASUS X870-I ITX mobo along with a delidded 9800X3D under your AMD MYCRO DIRECT-DIE PRO, and my temps are in the mid-40s to low 50s in use. The only drawback is that with the Corsair MP700 PRO SE Hydro X water block installed, I can't use the M.2 slot above it.
I use a WD Black SN850X in my old Lenovo P520, which is PCIe 3.0 only; it's plenty fast and doesn't heat up much while gaming. The small aluminum heatsink from Lenovo gets warm, but not as hot as I expected. I think my NVMe drive is rather unnecessary for gaming, though freeing up a SATA drive bay is a great benefit. At 3.0 speeds it's not much faster than a 2.5" Crucial MX500, with both drives running Steam and games only and Windows on a separate drive. It's wicked fast under Pop!_OS, however, with both Pop and Steam on one drive.
I'm gonna have to convince the boss to let me build something dual PCIE5 for some Photoshop scripts placing images into layers 😅
I'm more interested in idle power consumption: as someone keeping my PC on 24x7 for remote access, I don't want to heat the room or have noisy fans. That's also the problem I'm having with recent CPU reviews (everywhere): for the 285K and 9800X3D you only find figures under full load, not at idle. And that's sad, considering that some previous CPUs (AMD) sucked a lot of power at idle!
TBF, most consumers don't run their PC 24/7, or they have cheap electricity, so almost nobody measures idle power anymore except some rare outlets like TechPowerUp.
AMD CPUs still suck a lot at idle; even the new APUs on a monolithic die are bad compared to those on AM4. Intel, though, has regressed on idle power since Skylake/Kaby Lake.
To me, the best solution is remote access via some cheap ARM SBC that can turn the PC on and off.
@@PainterVierax in fact most gamers don't keep theirs on due to power draw, but many other people I know (developers, techies) do. You have all your apps loaded with your unfinished work and files being edited, everything accessible both locally and remotely, etc. I frequently run builds remotely when I want to test something from my workplace on my PC. This aging PC (Skylake 6700K OC'd to 4.4GHz) idles at only 27W. That's low enough to stay on all the time. I do have other machines that I can turn on when needed - the ARM boards, actually, since I don't use them every day (~70W total). At work we have a 7800X3D and a 14700K for evaluation and already noticed the AMD drew a lot more at idle (I don't remember the exact numbers, but it was almost double). That's why I'm wondering about all this before I make the wrong choice and regret it.
@@levieux1137 well, you're actually well equipped, and I don't think you can do much more, short of using some AM4 APU or splitting those services onto a render farm with WoL.
Mini-PCs using x86 laptop chips are another way to replace that Skylake build, but it's a trade-off you probably don't want to deal with.
@@PainterVierax that's right, I want high performance when needed and low idle consumption the rest of the time. Note that I'm not interested in "efficiency cores", which aren't actually that fast (enhanced Atoms, mostly).
Almost certainly negligible - a couple of watts, maybe. Any RGB lighting probably pulls more power than an idling M.2.
Putting a heatsink on my WD SN770 took it from blazing hot under load to manageably hot.
A very, VERY informative video! Thank you, Roman! I've been waiting to see any new development with PCIe Gen 5 drives... they've only gone down a small amount in price, and there hasn't been anything new with them except maybe capacity and speed... but at that speed, do you REALLY need it to go faster? Not for gaming, no... But I do like my Gen 4 when I have to move ISOs over from my slow storage HDD. The move itself is slow, yes... but installing the game afterwards is SUPER FAST!!! lol... I just need to get a bigger Gen 4; 1TB isn't cutting it anymore :/
Any videos that include your sweet cats = win! :)
My Adata XPG Gammix S70 uses the InnoGrit IG5236 (Gen 4, 7,000+ MB/s sequential reads) and heats up way faster than drives with the equivalent Phison E18, which made me not consider upgrading to a Gen 5 any time soon. You don't have to look hard to find some stupidly huge active coolers (yes, with fans) for NVMe... or even liquid cooling.
For those looking for a good laptop NVMe (without any heatsink besides what the laptop provides), stick to Phison E16 controllers (topping out around 4,500 MB/s sequential reads) - still fast enough for most situations, and it will only overheat if you really torture it.
I got a Lexar NM790 1TB PCIe 4.0 and it runs very cool. Response time is also great. I like my SSD.
the Corsair without the heatsink is even more impressive when you realise it doesn't even have the copper heat spreader (disguised as a sticker) that the other one has
Upvoted, as the cats are freaking amazing! As for PCIe 5, isn't PCIe 6 almost ready, lol? They are so far behind.
I play game titles that read and write logs every second, so it really does depend on the software you're using. My group recommends antivirus exclusions on the log files, because scanning them introduces stutters.
After seeing GPU and CPU launches, it feels weird to hear "insane" power draws in single-digit wattages.
Consider that if you put one of these SSDs into a new MacBook, it would most likely cut the battery life in half.
Interesting show. Thank you.
love the cat break session!
4 out of 5 of my rigs still use Gen 3 boot drives. I really don't see the point of anything faster unless it's for mass storage, and Gen 3 is a hell of a lot cheaper.
Watching your video just reminds me of how much I love cats. I don't care about SSD speed, they're fast enough, but cats? Heck yeah...
They need to start offering PCIe 5 x1 drives; even at x1 speeds they're fast enough for daily use, and it would free up a lot of lanes for more drives.
this is the only reason I'd want them. My PC is heavily PCIe-lane starved.
Cat break was brilliant.
The price is because it's the "latest and greatest". If you don't want to deal with that, wait 1 to 2 years.
This video reminds me of the time you were shocked to find out VRMs need heatsinks...
PCIe 5.0 NVMe M.2 drives running at full PCIe 5.0 speed require water cooling, given the tight space, if you want to use the drive hard for hours at a time.
Please keep the cat breaks in future vids👌
There is an article titled "Adding ceramic powder to liquid metal thermal paste improves cooling up to 72% says researchers". Please research this topic. Thank you!
The rated endurance for the Corsair MP700 ELITE 1TB is 600 TBW, so 10 minutes of non-stop write testing at 10 GB/s has already worn the SSD down by 1% 🤔🤔
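The arithmetic checks out; here's a quick sketch using the figures from the comment above (the 600 TBW rating and 10 GB/s sustained writes, not measured numbers):

    seconds = 10 * 60                       # 10 minutes of sustained writes
    written_tb = 10 * seconds / 1000        # 10 GB/s * 600 s = 6000 GB = 6 TB
    pct_of_rating = written_tb / 600 * 100  # against a 600 TBW endurance rating
    print(written_tb, "TB written =", pct_of_rating, "% of rated endurance")  # 6.0 TB = 1.0 %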
Very insightful cat test.
Hi. The temperature issue is interesting, but one thing: if it's that fast, you also need less time to load the game or the OS or whatever, so with a good heatsink everything should be fine, no? The high price is inevitable.
Am I nuts? In all of the systems I've built in recent years (mostly with salvaged parts), the goal is always to have at least two drives, one larger and appreciably slower than the other. That way, even while transferring data to or from that "backup" drive, especially when backups happen automatically, the system stays perfectly responsive and you're unlikely to notice.
The Thermalright heatsinks, the HR-10 or HR-09 PRO (not the 10 PRO), both seem very capable and sensible. I saw an impressive temperature difference with the HR-10, and from what I can tell from smaller channels, they saw a similar impact. The heatsinks sold with SSDs are mostly placebo.
5,000 MB/s is already crazy fast. Even assuming the entire 64GB needs to be filled, it would only take about 13 seconds (64 GB ÷ 5 GB/s ≈ 13 s).
The x4 mode is likely a result of a dual-personality controller - I recently saw a drive/controller that can do either Gen4 x4 or Gen5 x2, but not Gen5 x4. The tool you used is fishy, as is the whole Windows platform. I'd suggest running Linux and checking the drive in a tool like lspci, which shows exactly what the interface capability and the currently negotiated mode are. Furthermore, nvme-cli could show what the operating points are in terms of power.
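To illustrate the lspci suggestion, here's a minimal sketch (my own, assuming Linux and that the drive shows up as nvme0) that reads the negotiated PCIe link state straight from sysfs, which is the same information lspci reports:

    import os

    # the "device" symlink resolves to the PCI device behind the NVMe controller
    dev = os.path.realpath("/sys/class/nvme/nvme0/device")
    for attr in ("max_link_speed", "current_link_speed",
                 "max_link_width", "current_link_width"):
        with open(os.path.join(dev, attr)) as f:
            print(attr, "=", f.read().strip())  # e.g. "32.0 GT/s PCIe", "4"

A Gen4x4/Gen5x2 dual-personality drive would show max_link_speed at 32 GT/s but current_link_width stuck at 2, or 16 GT/s at width 4, depending on which mode it negotiated.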
Hey @Der8auer, I'm only seeing 12 GB/s read speeds on my T705. Could this be due to the contact frame being tightened too much? Or is it just the fault of the 285K?
Maybe we need a BIOS or Windows update or something. But it's strange that the read and write speeds are nearly identical; it seems like they're capped. It's not like I'm only on two lanes, because then I shouldn't even see 12 GB/s, but this drive should be capable of 14.5 GB/s reads... what gives 🤷♂️.
Hmm, just thought it could also be the temperature. I'll try to cool it with a fan later and see if the result stays exactly the same, but so far I've tested the PC from a cold state and after normal use, and every time I get the same speeds. I'm using the Z890 Master with the motherboard's Gen 5 cooler attached.
They really just need to drop SSDs down to a single PCIe lane. One lane of PCIe 5.0 matches the speed of four lanes of PCIe 3.0, which was already plenty fast for most people, and it doesn't need a heatsink, let alone active cooling, which is worse. PCIe 6.0 is already finalized and PCIe 7.0 is in the works. Again, for consumers the drives are fast enough; what most people would benefit from is less cooling and more PCIe lanes available for other devices.
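The bandwidth claim holds up; a quick back-of-the-envelope sketch with approximate per-lane rates (my figures, not from the video):

    pcie3_lane = 0.985   # GB/s per lane (8 GT/s, 128b/130b encoding)
    pcie5_lane = 3.938   # GB/s per lane (32 GT/s, same encoding)
    print(1 * pcie5_lane)   # ~3.9 GB/s over one Gen 5 lane
    print(4 * pcie3_lane)   # ~3.9 GB/s over four Gen 3 lanes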
It's a good thing motherboard MFGs keep removing PCIe and SATA connectivity for more NVMe, when the drives are still super expensive and 5.0 models are barely usable even with a heatsink... which will interfere with anything else above it on many motherboards.
I haven't even researched modern controller/NAND endurance, so lifespan and reliability are another unknown variable, unlike every SATA drive we've been using for over a decade now.
30 minutes of cat test feels great
Love the Cat Breaks :)
That power-draw problem is why I stuck with an efficient, high-IOPS Gen 3 NVMe for the system drive in my 2023 gaming build - it sits under the GPU and gets hot enough as it is.
(But when the SATA 2TB drive I already had started throwing reallocation errors last month, I picked up a WD SN850X with heatsink, as the second NVMe slot isn't under the GPU and I figure the newer Gen 4s aren't so toasty.)
Random question: is it possible to overclock the memory controllers, or is the bottleneck the read/write speed of the flash memory itself?
Have you tried using (non-conductive) thermal paste on the memory/chips, even without IHS/die contact? 😂
The MP700 looks nice, but idle power draw is a big selling point for laptops/handhelds. Hopefully SSD reviews will start including that factor, where less DRAM and fewer components are actually a benefit.
"Is it possible to overclock memory controllers?" - Sure it is... but do you really want to take the risk with your data? You would also need to modify the firmware, since there is no API for overclocking the controller.
I just wish for SSDs that last longer, man
Good to know about PCIe 5.0... obviously just a little too early. It will probably take 2-3 years, especially once the new processors mature.
And now a word from our sponsor: Cats! Who needs Xanax when you've got cats!
I just got the MP600 CORE XT; it's really good back here in normie land.
It's a good TDP for an NVMe drive, given how much performance it delivers.
Out of curiosity: does it make a difference whether you have an HMB or a DRAM drive if you intend to use it in an external enclosure? (Regardless of Gen 4 or Gen 5.)
11/10 for the cat test!
I wish Optane didn't have so many drawbacks, because we really need a break from sequential advancements to work on small-file random performance. Every generation doubles sequential speed but only gains about 10% in 4K Q1T1 performance.
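To illustrate why QD1 4K barely moves: at queue depth 1 the drive is latency-bound, not bandwidth-bound, so doubling the link speed does almost nothing. A quick sketch with purely illustrative latency numbers (my assumption, not measured):

    block_bytes = 4096
    for latency_us in (100, 90):               # ~10% latency gain, one gen to the next
        iops = 1_000_000 / latency_us          # one request in flight at a time
        mb_s = iops * block_bytes / 1_000_000
        print(f"{latency_us} us -> {iops:.0f} IOPS, {mb_s:.1f} MB/s")
    # 100 us -> 10000 IOPS, ~41 MB/s; 90 us -> 11111 IOPS, ~45.5 MB/s (+11%)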
I think PCIe 4.0 SSDs also have questionable value for many people. I'm using a 4TB PCIe 3.0 WD SN700 in both my desktop and laptop computers and am happy with the disk performance.
I'm in accounting, but in the third world, where everything is mostly still on local machines, so any speed improvement would be awesome for data transformation and analysis. I used to dream about running our scripts on beefy multicore desktops with 64GB of DDR5 and PCIe Gen 4.0 storage instead of an old ThinkPad. At least our clients and competitors are moving to big data.
The AIDA64 Linear Write test is missing.
Yep, one of the most important things
Years later and I'm still happy with my two Seagate 530s in RAID 0... LOL!!!
I have a heatsink on my NVMe and installed a fan in the case side that blows across the back of my graphics card to keep the NVMe cool. I don't know why Gigabyte decided to put the NVMe slot right by the PCIe x16 slot, mounted so close to the motherboard that there's no air gap behind the drive. I was wondering why games on the NVMe kept giving me issues, then realized the drive was running hot. Now the drive peaks at about 45°C, versus who knows how hot it was getting before.
I remember the days of trying to RAID 0 my WD Raptor HDDs to squeeze out more read and write performance. Wow, 12,500 MB/s is pretty amazing. Yes, hot!
Kinda wish U.2 drives were mainstream. We could really use a bigger form factor.
Idk how accurate this is. That first drive isn't working very hard - only 2,000 MB/s on a Gen 5? That's a lightweight load it can handle in its sleep, so it's not going to heat up as much. The second drive was pushing 12,000 MB/s... that's why it got hot.
Bauu is the sound a cat in heat makes. Bauer is someone who also makes that sound 😹
Would love to see some temps/testing done with a PCIe Gen 5 drive in DirectStorage games, where the speed is actually required/needed.
Outside of data centers you really don't need high speeds. While my main gaming rig does use a Gen 3 NVMe M.2 drive, I've found that for most uses old-school SATA drives still work just fine. So when building your PC, it's still best to focus on price per gigabyte, unless you run high-I/O applications.
I think Gen 5 storage came a bit too early. Gen 4 still has a way to go in improvement, capacity, and price, and here comes Gen 5, way beyond anyone's needs: nobody uses the speed it provides, it heats up too fast, and it's too expensive. Ideally it would simply beat Gen 4 on every measurement and there would be no need for Gen 4, but instead it's like a race car showing up at a street race of exotic rides - out of place and not racing anyone.
To be fair, Gen 5 is really more about enterprise than consumers; datacenters pushing 100GbE+ are where Gen 5 NVMe shines. And M.2 is starting to show its limitations with Gen 4/5 NVMe - it kind of defeats the point of a small form factor when you have to use a huge heatsink to keep it from overheating.
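For a rough sense of why 100GbE pushes you toward Gen 5, a back-of-the-envelope sketch (nominal figures, my assumptions; real throughput is a bit lower after protocol overhead):

    line_rate = 100 / 8   # GB/s needed to saturate one 100GbE port = 12.5
    gen4_x4   = 7.9       # GB/s, practical ceiling of a Gen 4 x4 drive
    gen5_x4   = 14.0      # GB/s, top-end Gen 5 sequential reads
    print(gen4_x4 >= line_rate, gen5_x4 >= line_rate)  # False True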
We need more space. When do we get more storage for the same price, like 2x the capacity at the same price?