Hope you enjoyed the video and found it useful. If you did, don't forget to LIKE and SUBSCRIBE to the channel 👍
The quality of your work is very nice, thanks for the great video!
Thanks for watching and the feedback, appreciated 👍
I don't often add comments, but as an engineer who lives in the networking, storage, and server space, I have to call out a materially significant testing mistake. I immediately suspected that other environmental factors were influencing your results when I saw your "no cache" file transfers. 113 MB/s is 904 Mbps, which means you are pretty close to maxing out a gig link (~90%). Depending on the quality of the equipment in your network (home vs true enterprise), you are most definitely maxing out your network equipment. For example, I use Cisco 3850s at home (a very capable, datacenter-grade switch). I can easily push more than you are on my 10-year-old Synology and max out a gig link at 100% (on a RAID 1 non-SSD drive, no less). Home routers and equipment are not quite as capable (I have a pile from every vendor you could imagine). You may also have a PC issue, and while the exact environmental issue is hard to tell from a video without seeing more stats, one does exist in your case. That was further confirmed once I saw your "cached" results: nothing above 113 MB/s, with really no variance jumps or deviations above that regardless of caching status.

Also, your recommendations on when a user would get a benefit are simplified at best, inaccurate at worst. In general, file transfers like the ones you did see a benefit at home only when two conditions both occur: 1) repetitive access of the same data, and 2) the primary drive is significantly slower than the cache drive. Cache drives also help with concurrency, which your simplified test did not account for and which, to be fair, probably won't occur that much in home use or around file transfers. However, home users notoriously have cheaper and slower drives as their main drives, so your simplified test has to account for that; slower drives often make it easier to get more storage at a cost-effective price. Cache drives are designed around slow drives, reducing random operations hitting the primary drive, and concurrency: those are the areas they help with. You also have mixed drives from different vendors in your primary storage pool. That will affect the way information is delivered into the cache.
Appreciate the detailed response Sidney, however I think you're missing a few of the points of the video. Firstly, the DS920+ is a prosumer-grade NAS; it's prosumer because it's got a couple of extra drive bays, double the memory and a slightly faster CPU than some of the entry-level NAS units from Synology. It also lacks many of the enterprise-level features that are available in the enterprise gear Synology sells.
Whilst your Cisco 3850 switch is impressive, it's highly unlikely that your average consumer will have anything other than their ISP-supplied modem/router or, if they have the funds, something like a Linksys or Netgear switch, so their setup will have all the overhead you refer to. They'll be wanting to back up photos from mobile phones, stream films from the NAS and basically dump files from computers around their home onto the NAS.
Now I appreciate some more technical users will have a DS920+ in a home lab setup and their usage may vary from the above, however the fact remains that most users will use the NAS for its fundamental function, and that's to store files. The cache will have ZERO benefit here, as will having a mix of drives, because ultimately the unit is LAN-port limited most of the time.
So I stand by the test: it's based on what most people will use it for, rather than what a minority of tech experts (who in reality should be using something more enterprise grade) would use it for.
I encourage discussion on my channel, it's healthy and helps inform 🙂
I agree, even on a Synology NAS with SSD-only drives the throughput is still around 113 MB/s max. It sucks that 2.5Gbit switches with a decent number of ports haven't really taken off, even in Nov. '22.
Some of your points are true, but let me tell you about your C3850. It's a scalable campus aggregation switch, not really a DC switch; I won't even get into Nexus and the like. The other point is that the C3850 has basically the same guts as the C3650 if we compare default models like the C3850-24T vs C3650-24T. I just want to say these switches are Cisco mid-range, and the bad news for you is they can basically be beaten by, or are equal to, my C1000-24T-4X-L running simple IOS rather than the complex IOS XE, which is kernel based and not a fit for home use with a NAS interconnection in my view. Why? IOS is simple and easy to recover, and the C1000 series has similar L3 features to your C3850, except EIGRP, which the C1000 does not support. Don't get me wrong, but I doubt EIGRP is the key benefit for such usage. I can imagine EIGRP is fine for transfer networks with dual routers for redundancy in combination with HSRP/VRRP, but here, for what? You get my point? My uplinks are 10Gb, and yours? Do you have the 10Gb uplink modules for your C3850, since your switch is modular and by default does not include the module? And 10Gb GBICs as well? ;) Have you seen the pricing for original Cisco SFPs? On my home network I run 10Gb over Cat6A cabling for the home office, and I've also aggregated 2x 1Gb into a 2Gb etherchannel on the switch towards the NAS. All running well: 4x 6TB Seagate IronWolf Pro 7200RPM drives (all the same), plus 2x 1TB NVMe and 4+4 GB RAM on the DS920+, running SHR-2 RAID which helps with performance and redundancy even more. Anyway, these features are more of a performance improvement than the Cisco gear on my network. Even though I love configuring Cisco, the C3850 doesn't mean anything in this case, sorry, especially if we're talking about switching performance. Forwarding performance is another topic...
I think a few of you who commented are missing the real picture: the claim of performance, or lack thereof, is faulty in this video because of outside influences. It doesn't matter if a home user buys something they can't fully utilize; that is user error, not a hardware issue. When you rate performance, you rate based upon what the hardware can do in an ideal environment, or you list the limitations of your own environment that you can't/won't solve along with your performance results. Be honest and upfront, that is what makes for a great review.
And for the guy talking about EIGRP, I am unsure where you are going, but I suggest you rewrite your comment to make it more coherent if you want a real conversation from me. From what I understood, I think you have been misinformed about a few things.
@Byteofgeek I appreciate the encouragement around discussion. Two thumbs up for that sir!
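To make the bandwidth arithmetic in this thread concrete, here's a minimal sketch in Python. The 113 MB/s, 100 MB/s and 11.1 MB/s figures are ones quoted in these comments; the ~95% TCP payload efficiency constant is an assumption based on standard 1500-byte MTU framing and ignores SMB overhead, so treat the output as a rough guide rather than a hard ceiling.

```python
# Minimal sketch: how close is an observed SMB transfer rate to saturating 1 GbE?
# The 0.949 payload efficiency figure assumes 1500-byte MTU TCP/IP framing and
# ignores SMB protocol overhead, so it is an approximation, not a hard limit.

LINK_RATE_MBPS = 1000            # 1 GbE line rate in megabits per second
TCP_PAYLOAD_EFFICIENCY = 0.949   # approx. usable TCP payload share at 1500 MTU

def utilisation(observed_mb_per_s: float) -> float:
    """Return observed throughput as a fraction of a realistic 1 GbE TCP ceiling."""
    observed_mbps = observed_mb_per_s * 8                  # MB/s -> Mbit/s
    return observed_mbps / (LINK_RATE_MBPS * TCP_PAYLOAD_EFFICIENCY)

if __name__ == "__main__":
    for rate in (113, 100, 11.1):                          # figures quoted in this thread
        print(f"{rate:6.1f} MB/s -> {rate * 8:5.0f} Mbit/s "
              f"({utilisation(rate):.0%} of a realistic 1 GbE ceiling)")
```

The output backs up the point being made here: 113 MB/s is effectively a saturated gigabit link, so no amount of caching could have shown up as a higher transfer rate in this particular test.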
It made my NAS much quieter; it now uses the HDDs much more rarely, so there isn't always that HDD read/write noise. I think that's worth mentioning.
OK, this is not something I noticed, but I find my drives fairly quiet anyway. Good feedback though, and definitely worth considering if you're using noisier drives 👍
That's why I'm looking into this
The bottleneck is not the disks, it's the network connection (1Gb NICs). Just buy a NAS with at least 2.5Gb NICs if you want faster network transfers.
Depends; data at the outer edge of the disk will be much faster and will almost certainly exceed a 1Gb NIC, but start filling up the disk and the transfer rate will start to fall off. A NAS with 2.5Gb NICs would be ideal, maybe Synology will release one in the new 2022 models 🤔
I've seen some redditors mentioning decent results with a USB 2.5Gb NIC. True, it is a shame this isn't standard on mid-level/'prosumer' Synology units, as it is a bottleneck.
@@ThurstanHethorn The 2.5Gb dongles don't work with Synology; they don't support the drivers.
Or just connect two gigabit ports to a gigabit switch and combine them into a 2Gbit bond (Synology DSM can do it), then do the same bond with two gigabit ports on your PC/server (a sketch of the PC side follows below). After that you will be able to test file transfers over a 2-gigabit network connection. You do have to sacrifice four ports on the gigabit switch to use it this way, though.
@@MykolaKushnirUA It's just a waste of money to install SSDs in a NAS with 1Gb NICs. He bought the wrong equipment; he should have got a NAS with 2.5Gb NICs.
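For the bonding approach suggested a couple of comments up, this is a minimal sketch of the PC side on a Linux client using NetworkManager (run as root). The interface names enp3s0/enp4s0 are placeholders, the switch ports must be configured for LACP (802.3ad), and the NAS end is assumed to have a matching 802.3ad bond created in DSM. Bear in mind that a single file copy is one flow, so it will still only ever use one 1Gb link.

```python
# Minimal sketch (assumptions: Linux client with NetworkManager, root privileges,
# placeholder NIC names enp3s0/enp4s0, and an LACP-capable switch).
import subprocess

def run(*cmd: str) -> None:
    """Print and execute one nmcli command, stopping on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the bond itself with LACP and a layer3+4 hash, so separate TCP flows
# can land on different member links.
run("nmcli", "connection", "add", "type", "bond", "con-name", "bond0",
    "ifname", "bond0", "bond.options",
    "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4")

# Attach both gigabit NICs to the bond.
run("nmcli", "connection", "add", "type", "ethernet", "con-name", "bond0-port1",
    "ifname", "enp3s0", "master", "bond0")
run("nmcli", "connection", "add", "type", "ethernet", "con-name", "bond0-port2",
    "ifname", "enp4s0", "master", "bond0")

# Bring the bond up (it will pick up an address via DHCP by default).
run("nmcli", "connection", "up", "bond0")
```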
Exactly the information I was looking for. Cheers!
Great, thanks for the feedback 👍
Great advice. I'm off to order a new hard drive for the empty bay in my NAS, as it's apparent from your video that I will not benefit from installing an M.2 cache. Excellent video, thank you!
Thanks for watching and glad you found it useful 👍
Great video. What about upgrading the memory? I saw the 920+ has one slot?
Thanks for the feedback. Yes, only 1 slot; it officially supports 8GB total, but I've seen reports of 12GB working. I have an extra 4GB installed and it's plenty at the moment.
I upgraded mine to 8GB, but I can’t say I’ve noticed any difference. I think it really only helps if you’re using resource intensive applications like Plex, or are running Docker containers or virtual machines. I use mine as a file server, use the Photos app on several devices, Synology Drive on 3 home computers, Hyper Backup, and I use my doorbell camera on the Surveillance app. I never get above 25% memory usage.
I repurposed an 8GB 2400MHz SODIMM I had in my drawer after a laptop memory upgrade to upgrade my DS920+ to 12GB. So far it works great even though it's not officially supported.
I put a 16GB in for a total of 20GB. Works flawlessly
@@markuslommer7324 was that a Samsung based SODIMM?
Seems like your 1GbE is the bottleneck.
Hi. I just purchased a Synology 920+, and along with it I purchased 2 M.2 drives and a 4GB memory stick. Is it okay to install the extras on the initial setup, or should I wait to install the extra memory and the 2 M.2 drives? I probably didn't need to buy the extras, but since it was reasonably priced I purchased everything in one shot. Nice video. Thanks for the help.
I would get set up as is and then add in the ram and m.2 drives, that's what I did and had no problems whatsoever 👍
Hello, I enjoy your presentations. I would like to be able to read the software screens you work with when selecting the options. Thanks for sharing your knowledge.
Thanks for the advice.
Quick question: what would be the ideal size of NVMe, 256GB, 512GB or 1TB?
I have a DS920.
Depends on your total disk capacity. You can use the cache analyser function in the cache setup process to determine how much you need; it might be worth doing this before buying any NVMe drives.
Great presentation and highly informative. Thank you!
Wow! Thank you. I assumed that the SSD cache would make a significant difference, much like the difference I experienced taking my desktop operating system from an HDD and placing the OS on an SSD.
I have a Samsung 970 Evo 500GiB that isn't getting utilized. I placed it in my Synology NAS because I wanted to use that SSD instead of it just sitting in my desktop doing nothing. (500GiB isn't enough space for my scratch drive, so I replaced it with a 4TiB SSD.)
In your opinion, would using the Samsung 970 Evo 500GiB as a volume for the DSM 7 operating system be more beneficial than using that drive for caching?
Another great video. I was going to pick up a couple of NVMe drives but now realize that for my use (mainly movies and TV shows) I probably won't see much, if any, improvement.
Thanks. If it's just streaming then you're probably better off spending the money on ensuring you've got a good solid network, especially if the devices are connected via WiFi.
But the cache isn't only for the sake of speeding up transfers, surely... I mean, once a transfer has started the network is always the bottleneck; even if you're using both RJ45 ports you're going to struggle to push the speed a HDD will read or write at. I've yet to get one of these, but I've been assuming that the main purpose of the cache, as well as serving multiple users more efficiently than spinning drives can manage alone, is for the scenario where the device is mostly on standby, spun down, so that when you jump on your most-used Synology apps it'll have something to serve instantly while it takes its time spinning up the drives in the background, and equally so it can start receiving whatever you send it immediately, again while it spins up the drives. That way your experience is a lot more seamless even if it's been hibernating, as opposed to having a loading screen/hanging app for around 10 seconds (a timing sketch for this follows below). Especially when it comes to spinning up a full array, that tends to take significantly longer than spinning up one drive on its own, and is really where the value of a cache comes in.
Surprised that you need dual SSDs to have a write cache though.
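If anyone wants to put numbers on the spin-up effect described above, here's a minimal sketch: time the first byte of a file after the NAS has been idle long enough for the drives to hibernate, then time it again immediately afterwards. The mount path and file pattern are placeholders, and note the second read may be served from the client's own page cache, so use a different file (or drop the client cache) for a fairer comparison.

```python
# Minimal sketch: time-to-first-byte from a NAS share whose drives may be spun
# down. SHARE is a placeholder for wherever the NAS is mounted on the client.
import time
from pathlib import Path

SHARE = Path("/mnt/nas/photos")          # placeholder: SMB/NFS mount of the NAS

def first_byte_latency(path: Path) -> float:
    """Open a file on the share and read one byte, returning the elapsed time."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read(1)                        # forces the NAS to touch the disk (or its cache)
    return time.perf_counter() - start

if __name__ == "__main__":
    target = next(SHARE.rglob("*.jpg"))  # any small file on the share
    print(f"cold access: {first_byte_latency(target):6.2f} s")   # drives possibly spun down
    print(f"warm access: {first_byte_latency(target):6.2f} s")   # drives now spinning
```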
Hi, great info... what's the maximum capacity of cache drive that can be used?
Someone else might know the hard limit, but above 1TB NVMe drives start to get expensive.
Looks like you're limited by your network speeds now. 1Gb I assume?
Could possibly be; there's nothing faster on the Synology. I could set the ports to LAG, but I won't push that much from one PC.
The restriction is definitely down to your single NIC. If throughput is your main goal you need to install a 10Gb ethernet NIC, or a dual-port version with aggregation (provided you have a capable switch), and you'll see those throughput rates climb significantly.
@@Byteofgeek It is, and you can push more than that from one PC. My new build just tested at 6-7 GB/s random write speed and 13 Gb/s sequential read speed. If you are going to provide performance tests, then having a proper test environment seems pretty critical in my opinion. For a LAG to one PC to work, you must ensure that your LAG hashes source & destination ports, otherwise you'll only use one link in the LAG. Then you must open up two different connections to the file share.
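To illustrate the "two different connections" point above, here's a minimal sketch that runs two copies to the NAS share in parallel. The source files and destination paths are placeholders, and the big caveat is the one the commenter is making: with plain SMB both copies may still be multiplexed over a single TCP connection (and therefore hash onto a single LAG member) unless the share is mounted as two separate sessions or SMB multichannel is enabled, so treat this as the shape of a concurrency test rather than a guaranteed way to fill both links.

```python
# Minimal sketch: two parallel copies to the NAS so a LAG has (potentially) two
# distinct flows to hash across its member links. All paths are placeholders.
import shutil
import time
from concurrent.futures import ThreadPoolExecutor

COPIES = [
    ("big_file_1.iso", "/mnt/nas/incoming/big_file_1.iso"),
    ("big_file_2.iso", "/mnt/nas/incoming/big_file_2.iso"),
]

def timed_copy(src: str, dst: str) -> float:
    """Copy one file and return how long it took."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

if __name__ == "__main__":
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(COPIES)) as pool:
        durations = list(pool.map(lambda pair: timed_copy(*pair), COPIES))
    print("individual copies:", [f"{d:.1f}s" for d in durations])
    print(f"wall clock total : {time.perf_counter() - wall_start:.1f}s")
    # If the wall clock time is no better than running the copies back to back,
    # both streams are almost certainly sharing one 1Gb link.
```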
Does the SSD cache help in terms of phone access speed? Currently my HDDs go into idle mode after some time for power saving, but if I'm accessing the NAS from the phone outside of my network, it feels like it needs some time.
Good videos!
Keep up the good work!
Thanks for watching and the feedback 👍
I was thinking about it, but now I'd rather spend the money upgrading the memory on my Ugreen NAS.
Thanks for your honest review! I was thinking of upgrading from an old Synology 2-bay NAS to a 4-bay one with SSD cache; now perhaps I'll just spend my money on two 2TB SSDs in my current (old) NAS for better performance.
I assume the SSD cache will benefit more if there are two or more users accessing the NAS at the same time, instead of a single user like me.
More users accessing the same files would almost certainly see an increase in performance; for a single user the benefit is more likely with things like indexing photos and suchlike.
The 2TB SSDs sound like a good option but go for pro/enterprise grade drives 👍
Does a heatsink on the M.2 2280 NVMe fit inside?
This video shows a very narrow perspective on the matter. It should seem pretty obvious without any tests that if your hard drive provides data transfers at about 900 Mbps (i.e. pretty much maxes out your 1 gig ethernet connection), then adding cache would not result in any meaningful improvement. What this video is missing is the effect on latency/responsiveness (like when your Plex server needs to snoop around the drive to find thousands of small files while the UI is loading), the effect on concurrent data transfers to several clients when the NAS is using two ethernet ports with link aggregation, as well as the effect on the noise level.
The tests just show that the cache doesn't enhance file transfers. Things that get orders of magnitude better are, for example, the Plex UI, the Synology Photos UI, indexing, browsing files in SMB or DS file, general OS snappiness, etc.: anything that requires access to a lot of small files like thumbnails, or even more so SQL DB operations. I remember how the Plex UI used to beach-ball every time there was a background task updating metadata; that kind of stuff is just gone. Whenever you run something on the server, an SSD cache transforms the experience like it did when you first put an SSD in your laptop. But yes, file transfers are not faster.
I get your point, and I think in a way your point also reinforces what the tests showed: small files, whatever the action, are where the cache mainly benefits.
A lot of people will just rush to put a cache in because they see it as being better, however more RAM and filling up the drive bays should be where the money is spent first IMO.
@@Byteofgeek For me it depends what you want. Most people access a NAS over WiFi, so more drives are pretty much useless for them; even a single drive can do 50MB/s over a good WiFi connection. And RAM? If you don't run VMs or Docker you never use more than 1-2GB on DSM regardless of how many apps you run. Sure, more RAM can cache the filesystem as well, but an SSD cache does this much better because it's bigger and holds more of the warm data. So if you want a NAS as a media server / Photos server / Plexamp server, an SSD cache is the best investment you can make IMO. It just makes everything soooooo much smoother and nicer.
No member of the pcmasterrace ever uses WiFi to access a NAS 😂 But seriously, sure, everyone is different and that's the point: the cache won't be beneficial for many. Agreed, things like photos/thumbnails will benefit with frequently accessed data, but I think we're at cross purposes here. My reference to the drives was not from a performance perspective but from the point of view that if you have spare capacity in your NAS, then purely from a redundancy perspective you should be spending money there first before trying to speed things up by spending money on the cache. No point having a cache on a single drive, because your cache is worth nothing on a dead drive 👍
My experience on my 1819+ as well. Don't regret installing 2x 2.5" SSDs into it for this. Do regret having to use 2 of the drive bays though. The Synology M.2 add on card for the 1819+ is pretty pricey!
Thanks for the video. It was very well produced and it showed me how to install a cache in my NAS. I am guessing this cache will improve database performance, given the high number of reads/writes that occur. The DS920+ is more of an SMB or prosumer device. This is our office's choice of NAS, as it handles VPN, file serving, minor web serving, and some database functions. I think in these capacities the SSD cache will shine. Now I'll go pop our new 1TB in and see what we get.
Thanks for the feedback. I think the cache will work well for your usage; 1x 1TB should be enough if the use is read-heavy.
This looks like a maxed out network link.
Quite obviously, there's no benefit to using SSDs for large files. A good test might be something like generating thumbnails from a large collection of small JPEG files (a sketch of one way to time this follows below). So using SSD cache indeed doesn't make sense for a NAS that is used as a video library.
Good point; it's probably more beneficial if used with, say, a gallery app or similar, which the NAS does have.
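For anyone who wants to try the small-file test suggested above, here's a minimal sketch: read a directory of small JPEGs off the share twice and compare the first pass with the repeat pass, which is the repeated-access pattern a read cache is meant to accelerate. The mount path is a placeholder, and the client's own page cache will flatter the second pass unless you drop it (or run the repeat from a second machine) between runs.

```python
# Minimal sketch: time two passes over a directory of small files on the NAS.
# The repeat pass is where an SSD read cache should show up. PHOTO_DIR is a
# placeholder for wherever the share is mounted on the client.
import time
from pathlib import Path

PHOTO_DIR = Path("/mnt/nas/photos")      # placeholder: SMB/NFS mount of the share

def read_all(files: list[Path]) -> tuple[int, float]:
    """Read every file fully and return (total bytes, elapsed seconds)."""
    start = time.perf_counter()
    total = sum(len(f.read_bytes()) for f in files)
    return total, time.perf_counter() - start

if __name__ == "__main__":
    files = sorted(PHOTO_DIR.rglob("*.jpg"))[:5000]   # cap the sample size
    for label in ("first pass (cold)", "repeat pass (warm)"):
        size, secs = read_all(files)
        print(f"{label:20s}: {size / 1e6:8.0f} MB in {secs:6.1f}s "
              f"({size / 1e6 / secs:.0f} MB/s)")
```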
Curious as to why you chose not to "pin all BTRFS metadata to the SSD cache"? Every video I have seen ticks that box. 5:05
Very much depends on use case. The Synology website details some usage types that would benefit, but additionally it takes up more space in the cache and I didn't feel my use justified the extra space.
I was thinking of a cache SSD because I have burned Blu-rays onto my Synology and they keep stuttering during playback and I can't figure out why. HDD read speed should be plenty for a Blu-ray, because an HDD is around 100 MB/s and a Blu-ray is about 10 MB/s I think. IDK why this is happening though.
Are you using Plex?
The face-camera framing is bad in an otherwise informative video!
Thanks for watching and the feedback 👍
Subbed mate!
I have an old MacBook Pro 256GB (2015 model); will it work with this DS920+?
I don't see why not, it's just network storage 👍
It's the gigabit LAN.
I connect my DS920 NAS to a switch and connect my computer to the switch through Cat6 cables, and I get only 11.1 MB/s. Very disappointed, I don't know why.
Does your switch have gigabit ports?
I have the same problem, and after thorough troubleshooting I can pretty much guarantee the issue is that your network bandwidth is only 100Mb someplace in the path. You need a 1000Mb (1Gb) path end to end; if any piece is slower for any reason, that's the best you can expect.
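A quick way to confirm the 100Mb-somewhere-in-the-path diagnosis above is to time a single large sequential read from the share: roughly 11 MB/s points at a Fast Ethernet hop (cable, port or a failed gigabit negotiation), while roughly 113 MB/s means the gigabit path is clean. A minimal sketch, with the file path as a placeholder:

```python
# Minimal sketch: measure raw sequential read throughput from the NAS share.
# Around 11 MB/s suggests a 100 Mbit hop somewhere in the path; around 113 MB/s
# means the gigabit path is clean. TEST_FILE is a placeholder for any multi-GB
# file on the mounted share.
import time

TEST_FILE = "/mnt/nas/video/some_large_file.mkv"   # placeholder path
CHUNK = 4 * 1024 * 1024                            # read in 4 MiB chunks

total = 0
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.perf_counter() - start
print(f"read {total / 1e6:.0f} MB in {elapsed:.1f}s -> {total / 1e6 / elapsed:.0f} MB/s")
```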
A shame Synology is still not including 802.3bz (NBASE-T, autonegotiated 1Gb → 5Gb) ethernet, in a lame attempt to sell a few 802.3an (10Gb-only) expansion cards.
These are totally inappropriate for a SOHO environment where most nodes are consumer appliances with 1Gb or 2.5Gb ports, with maybe a few 100Mb holdouts, all incompatible with an 802.3an switch. (A few high-end switches can combine bz and an... for a price.)
You haven't mentioned whether the test was done under DSM 6 or 7.
Good point 👍, this was DSM 7
Wouldn't cache potentially prolong the life of your hard drives? It's probably the only thing I care about for home use
It's highly unlikely; you're more likely to wear out the cache unless you're using enterprise-grade NVMe.
4:50 CONTAINS CRITICAL INFORMATION. A read-write cache without a UPS can get you into big trouble and major data loss. It can even CRASH your Synology. DSM 7 is better than 6.2 at protecting you from this, but be aware it's dangerous. A better option is to run 2x SSDs in RAID 0: it's faster than a single SSD, and makes Photos and the GUI of programs like Plex MUCH snappier. It doesn't affect your video playback speeds, but the interface is very noticeably quicker. Repeat: VERY NOTICEABLY quicker. The benchmarks in this video don't illustrate this.
Yes, I felt it was important to point this out, as people will almost certainly populate the cache without a UPS and use it this way. Agreed, a read-only cache would be better in that circumstance.
Can you elaborate on this? In the video, it looks like only RAID 1 was an option. Do you see the RAID 0 option if you select read-only cache?
Basically you recommend a read cache in RAID 0, right?
Rule of thumb: no UPS = read cache only; with UPS = read/write cache.
Thanks for doing these tests…I probably would have thought it was worth it…now I can see it’s not 🤓
Thanks for watching 👍
A 10GbE use case is a must for cache; otherwise it's useless.
And this NAS only has 1GbE ethernet ports.
@@Byteofgeek Have you tried hooking up both network cables with aggregation and SMB multichannel enabled on your 920+?
Yes, I have mine set up with aggregation; that at least enables you to max out a 1GbE client.
@@Byteofgeek That's good. Enable SMB multichannel and you will see slightly snappier file transfers.
Thanks 👍 I'll give it a try
This test is not relevant. The cache is good for small files, not for large files; your test is bottlenecked by the gigabit LAN. The cache is meant for people who run web servers and databases that need quick access to lots of small files, or other applications that work with a large number of small files. The latency of an NVMe drive is lower than a SATA SSD or HDD. For example, if you host many websites with large databases, page load times are much lower with an NVMe cache.
Plenty of small files in the test (MP3s and images) which won't be bottlenecked by the NIC. Only the ISOs and DVD images would be impacted by that.
You are right on, Mr. Serban. 113 MB/s is 904 Mbps, and on home-grade equipment that is really, really good and pretty much the top that you are going to get without significant tweaks. As I pointed out in my post, my 10-year-old Synology on RAID 1 SATA III can max out a gig link on my Cisco 3850 and outperform his newer setup. Also, MP3s and images can absolutely bottleneck a NIC; if they can't, your PC or system is the bottleneck instead. I have performance-tested distributed storage systems going up to 60 GB/s sustained for hours, and installed and tested links up to 100 Gb/s. I built my own bandwidth speed tests, and the new desktop PC I just built can do sequential transfers at 13 Gb/s along with random writes at 6-7 Gb/s.
One qualification: I haven't done Synology caching, but I have done many enterprise-grade caching systems, including distributed caching, and the concepts are the same. Caching falls into two camps: blocks and files. File size is usually irrelevant in caching unless you are using a caching system that looks at the entire file (there is more overhead in such a caching system, though). Even then, the argument about whether caching large or small files has benefit only has merit if you have limits on the file sizes allowed to be cached. In most cases large files do equal sequential operations, so they transfer faster from the primary drive and by comparison don't see as dramatic performance improvements as smaller files. Ultimately the speed/caching difference really comes down to the differences between the specs of the cache drive and the primary drive. If the cache drive has the same specs as the primary drive, it will appear the same, but not for the reason many think. If the specs are radically different, the large files will show radical performance improvements too.
I installed an NVMe cache so I could get higher performance on small files being transferred; it doesn't do shit and these guides are all bullcrap. Small files still run at 9MB/s, large files at 100MB/s.
Why do you pronounce NAS as NAJ? The S is for Storage, not for Jordon)) The 256GB cache is too small; you need a one or two TB cache to really use it. For example, if you have active big torrents, they will be downloaded to and seeded from the cache, and only after you are done with them will they be transferred to the "cold storage" of the HDD volume. Thereby your HDD usage rate will fall drastically (= longer HDD lifetime, less noise). Further, there is absolutely no reason to test an SSD cache with a 1-gigabit connection between the NAS and the PC. Put a 10GbE expansion card in the NAS and the PC and connect them directly; now there is no 100MB/s bottleneck. Cheers!
Never once called it a NAJ, guess you're using CC on the video? 😀 Also, the DS920+ doesn't have expansion to add a 10GbE card - Synology could have done a lot better here.
You did the wrong tests. Please try RANDOM disk operations and you will see a HUGE difference. What you tested is your gigabit interface, nothing more.
I've replied to comments like this multiple times now as to why it's tested this way and WHY it's not worth it.
LOL. Are you deleting my replies? I'm happy to point out why this is a silly test.
Nope, YouTube does it automatically with spammy comments. Your comments aren't even in my 'held for review' queue, so it really doesn't like what you're trying to say.
@@Byteofgeek It must be a character in my text then. I'll try again.
Bro, what's even the point of this test? That's not what the cache is for...
You know the moment when you click on your network drive to access the NAS and it takes a few seconds before you can access the files, while the drives spin up again?
That's what the cache is for, so you can instantly access it all the time without the annoying waiting seconds...
What's the point? It's what a NAS is used for: storing files and transferring them across the network. The drives in my NAS are spinning all the time, and it goes to show that the cache has limited, if any, benefit on the DS920+. Forums are full of the same comments; maybe on an enterprise-level NAS it might be beneficial, but on this NAS it's a waste of money.
Repeated file transfers. That is all I am going to say about a proper SSD cache test.
Thanks, this is what the video demonstrates though 🤔
To spend cash on your cache or not.... Lol thank you
Glad you enjoyed it 👍
Poor quality audio in this video. Had a hard time hearing you.
Thanks for the feedback Dana. Interested to know what device you were playing back on, as this is the first comment regarding audio volume 👍
Your test is flawed and pointless in multiple ways. You really should take this video down until you understand why.
I doubt it's pointless, Jake, as it reflects real-world consumer usage on a consumer-grade NAS, so I won't be taking it down. But I'm happy to hear your views on why it's pointless if you'd like to enlighten me?
@@Byteofgeek
1. It appears you are saturating your network bandwidth. How can you expect to measure for improvements when starved for headroom?
2. It appears you are copying files back and forth to test cache. The vendor states this type of operation is not what the cache is for. See Synology doc titled "Important considerations when creating SSD cache" under section "Unsuitable Applications"
OK, so before we look at your points let's just reflect on what a NAS is. It's a bunch of disks in a box whose main purpose is to store and retrieve files for one or more users. In the instance of this NAS, it's ideally suited to home or SOHO use, but NOT enterprise and/or large teams.
So just think about that for a second, and the type of content such a user is likely to store: photos, documents, films maybe, ISOs, backups perhaps. After all, it's just an extension of their computer, so it could be anything.
So saturating the 1gig port is a very likely scenario, and most users are extremely unlikely to buy a USB 2.5gig adapter, so unless you're copying very low volumes of data/files the cache is pointless and you might as well just buy an external drive.
Regarding your second point, you are correct, the Synology article does state that. It specifically mentions file upload, download and large files as well as video streaming, all of which just happen to be pretty much the most common things home users want to do with the NAS.
So IF this was an enterprise-grade NAS, undoubtedly the test would have been different and so would the results, but it's not, it's prosumer at best, and for the vast majority of users it will be more beneficial putting in an extra disk or two than paying out to fill the cache for very little benefit.
@@Byteofgeek So, the vendor specifically says that SSD cache is for features you label as "enterprise". IE database, mail server, heavily accessed shared files, etc. The vendor specifically says for the type of use you define as "home or SOHO" that the cache is nearly pointless. And you are arguing that proving they are correct that it is basically pointless is in and of itself not.... pointless?
I am a home/SOHO user. I have a similar model with a 2x10Gb NIC installed. I run iSCSI, Docker, network backups, and the occasional server application. I could not get anything done with a 1Gb connection, or even a 2.5Gb connection. A lot of video editors use these devices and they would tell you the same thing. I believe there are far more of us power users than you give credit for. This is a power-user feature you are reviewing.
There are two types of people seeking out information on this subject: those like me who are looking for real technical analysis of the cache feature being used under circumstances where it should make a difference, and users who should not be considering cache because it will do them no good, as the vendor and common sense dictate. So who exactly is this content for? I'm trying to tell you that you are missing an opportunity to speak to those that found you and actually understand the feature.
Jake, can you read the title of the video again please and then come back and read what you have commented?
This test is not useful without a 10Gb network to test!! What are you doing!? This speed is limited by your 1Gb ethernet port!!
That's the exact point of the video, everyone is missing this apart from one person who pointed it out!! The cache is pretty much useless on the DS920+ because of the NIC
@@Byteofgeek Yes, but I am interested to know if it's worth spending money on CACHE if I have 10Gbit: does the cache make a difference?
If you're talking about the DS920+ then forget 10GbE. If you're talking about another device then you're watching the wrong video, as this is SPECIFICALLY about whether the cache is worth it on the DS920+.
@@Byteofgeek Oh I see, I thought the DS920+ had an optional 10Gbit port like the DS923+.
No, you can either bond the 2 ports to get 2GbE or you can try a USB adaptor that can give 2.5GbE, but that's it: no other ports and no add-in card options, hence why the cache is really pointless on this NAS.
No it's fucking not, because Synology is garbage. Every now and then you will see that for whatever reason it's not connecting remotely, something needs to be dropped, re-configured etc. Fuck this shit basically. Just set up your own FreeBSD/Linux server for whatever stuff you need and run it.
It's a valid option, not for everyone, but an option, along with things like Unraid, TrueNAS, OMV etc. 👍
I am all for calling out garbage; Synology is far from it. Most other home NAS vendors are, though!
@@Byteofgeek Great review mate!
I've been using Unraid for a while (I had issues with cache basically all the time), then tried TrueNAS Scale (dropped it after a few hours; this thing is a resource hog) and came back to Synology. It's not the fastest NAS ever and transfer rates suck, but it's simple and it does work.
Pointless not doing it on a 10GbE model.
What? When the video specifically says DS920+, which is not 10GbE??