Looking forward to the NAS server build! Also maybe cover server security, i.e. ensuring that only those authorized to have access do, and securing the box so it can be internet-facing (accessible via VPN from a remote location).
Synology has a guide on how to recover drives by plugging them into a regular PC and using a Linux Live CD. It does work with their two-bay NAS units; not sure how easy it is to do this with their bigger units.
Several years back I had a Seagate Blackarmor NAS which stopped recognizing the RAID array even though the drives were fine. Pulled the drives, connected them to my desktop, and ran a RAID recovery program to get the data back. Much easier than trying to fix or replace the NAS itself.
I moved off QNAP/Synology/etc. a long time ago. I went to FreeNAS via iXsystems. I did buy their actual hardware, but all of it is non-proprietary. You can get all the parts from the manufacturers (you don't have to rely on eBay). The devices are also user-serviceable, and iXsystems will send you the part, with a 14-day return allotment. It can take time to RMA, as they do so back to the original manufacturers, but you can get that advance replacement to keep you from being down. I decided to buy an extra board from them at about cost, so now I have a spare. It's almost like a DIY server, except you can buy it all as a package. Another thing with using a NAS is having a local backup that's up to date. While not affordable for everyone, I have a second NAS (slightly smaller in TB), which is synced every 6 hours with high compression. If the first NAS fails, my downtime is only the replication schedule time. Remote backup can then be used for the smoking-hole scenario, with a slightly larger sync window.
I'd like to point out that Synology uses Linux LVM and mdadm for storage, so if you plug these drives, just as they are, into a Linux server, they should be detected without problem. Just like in the case of ZFS, LVM uses unique IDs for the drives, so it should pair them up right even if you mix them up. Maybe send an email to Wendell from Level1. I'm sure he can help you out; also... maybe a nice video for the both of you.
Curious what your backup solution for that much data winds up being. I have about 40TB active on Synology NAS's, but haven't found a price-reasonable remote backup solution outside of just buying duplicate NAS's and drives.
We use CrashPlan for Business in some places with success. Quite inexpensive. Uploading 40 TB for the first time (or restoring) could take a long, long time depending on your internet, though. But you will have that problem with any cloud service. Their plans are per device with unlimited storage, so you just need to concentrate everything on a single machine and then send it all to the cloud. Their system does incremental backups, versioning and other good stuff.
That's the problem with services like that and BackBlaze; a NAS server doesn't count and I can't "concentrate everything on a single machine" lmao. I will look into it in case they allow NAS's, but I doubt it
I have it running on a few headless Debian servers, so it runs fine on Linux. I just open the CrashPlan Desktop app on my local machine through X11 redirection over an SSH connection (works fine even on Windows). You can always pay for 2 subscriptions for 2 machines (or more) if you need it; concentrating on a single machine is not really a must, but many times it does make sense to have a big backup machine that can be replicated to the cloud. I am not sure what you mean with "NAS server doesn't count", since it runs on any Linux. I guess you could even mount a shared storage to the machine and include the mount in the backup if you want > EDIT (which means that you can also run it on any Windows/Mac/Linux machine that has access to the shared storage)
try unRAID; it doesn't stripe your data across multiple drives and you can have two parity drives. So you won't lose data if one or two drives fail, and even if more than 2 drives failed, you'd only lose the data on the failed drives, because your data is not striped across the drives.
Well with Backblaze, they specifically don't allow NAS's and thus don't allow network-mapped drive letters to keep that from being abused. I have Synology NAS's, much like what's shown here, so unless they have a specific Synology app I pretty much have to back up via other PCs.
Synology has very good support in my experience. I had an issue and they quickly shipped a replacement. It's possible to have hardware failures with anything. If you want very generic commodity hardware, use FreeNAS. Then if you have a failure, you can rebuild the whole system or just replace the component.
You would be a terrible person if you took the needed parts out of the new one to fix the old one and then RMA'ed the new one... but only a terrible person would do such a thing to an obviously defective product.
This all day. Bought a Synology because I wanted the set-it-and-forget-it, but the limited space (50GB) and the gimped CPU are killing me. I need to move on to a custom solution.
Each of the parts probably has its own serial number to avoid things like that; the m/b could have it in the BIOS. But if not, yes, I would have done the same.
@Giorgos - While I would never encourage such a terrible, terrible deed, let's be honest: they are not nearly organized enough to have all the serial numbers for each product cross-matched with the internal serial numbers of each component.
I run a 12TB RAID 5 DIY NAS server, and I was pretty baffled when I saw the title. I was happy to hear that your criticism was aimed at those crappy prebuilt systems. I never liked those.
**CRT scope producing screen burn in the background.** BTW, why not remove all the HDDs and mount them in Linux on another device to copy the data off it? Am I missing something or what?
ZFS > RAID IMO. Having the filesystem handle the majority of the replication tasks (and not separate hardware), while being an open standard, makes things a lot safer to handle.
When I started the 14 min video, I thought "The Problem" would be much bigger. But basically the video just shows how to unplug and plug a power supply and lasts 14 min. And if you mean the problem is that a product (like a NAS) can fail, wow didn't know that...
So basically "the cause of problem was unknown until I tried the most obvious thing"? Wouldn't he have done exactly the same thing if he had built the box himself? (Yes, I know there's a small difference between "checking if a PSU is defective" and "checking if another PSU powers the system", but come on! He sounds like Synology was the biggest crap this side of the Atlantic.)
I have a Synology 1815+ that suffered the exact same failure. I got it to come to life using the double power supply trick then created (soldered up) an adapter for a SeaSonic SS-250SU power supply ($43 each). Mine has been happy for over a year now ;)
Synology (and others) have many data recovery methods for various scenarios. If you bought another NAS (you said you did), you could have used that for the recovery (by following correct steps). Electronics fail, no matter what the make/model. Don’t blame the tools for you not having backed up critical data.
I understand your frustration, especially since your small business depends on this system to operate. My experience in enterprise systems management has taught me about the concept of "lifetime spares". The idea is that if you have a critical system that you know will become unsupported by the manufacturer, you buy duplicate parts or complete spare systems to continue to operate for the lifetime of the project that these systems support (some of the systems I supported had been in operation more than 30 years). Depending on how standard those parts are, you may be able to hedge with an alternative supplier (as demonstrated) or wait until the products are EoL and stock up on used stuff (testing it before you need it). A warranty is the first defense, a service contract is the second, and stocking your own spares is the third; the worst case is board-level repair. All of these options have differing costs and applicability. You have to look at it from the perspective of an insurance underwriter, being critical of potential failure points and associated costs. You always have the option of transitioning to a different system, but in many cases the cost of engineering and implementing a new solution will far exceed proper maintenance and stocking spare parts. In your case, it may be as simple as replacing bad capacitors in the power supply, but your tester doesn't display ripple or noise like proper electronics test tools do. Your tester probably doesn't supply enough load to simulate the connected system either.
Maybe it's time to buy another NAS the same size and sync all of your data across as another point of redundancy? Or you could always sync it first then keep it at home as an offsite backup :)
The funny thing is I was just looking into buying a NAS, but I am very much considering just buying a Threadripper CPU + 1180, and just making a hybrid nas/plex server/gaming PC and not spending hundreds on completely separate devices, when I can do all in one with overkill.
Honestly that's basically what I did. I'm actually doing all my gaming in a Windows 10 VM running on my Threadripper box with a dedicated GPU for the VM, but it's got Linux as the base system for everything else. It works pretty darn well.
This is what I did. I consolidated my computer and ZFS NAS together into one new beefy box running Proxmox. Proxmox handles all the ZFS storage for me, and for VMs I'm running Windows 10 (for gaming) and macOS High Sierra (for everything else) on it.
You could easily do that. I have a backup server running in a VM on my Ryzen 7 1700 system, with an ICY DOCK 5-bay hotswap unit in the case (it's an older case that actually has 5.25" bays...). I run it under VirtualBox with a CentOS VM which has ZFS. The drives are attached via raw disk access, and the VM runs in headless mode. So it just sits in the background, using very little overhead, and provides ZFS-based storage.
With a Threadripper I would honestly run Linux as the base OS, run a virtual Windows, and pass that 1180 to the Windows VM so you get 99.5% of bare-metal performance. Then you can pass your SATA controller to a BSD VM and set up a ZFS storage array, all on one box.
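[Editor's note] A minimal sketch of that GPU-passthrough launch under QEMU/KVM, purely illustrative: the PCI address, VM sizing, and disk image name are hypothetical, and the GPU must already be bound to the vfio-pci driver.
# hypothetical GPU address (find yours with lspci); the Windows image name is made up too
qemu-system-x86_64 -enable-kvm -machine q35 -cpu host \
  -smp 8 -m 16G \
  -device vfio-pci,host=0000:0a:00.0 \
  -drive file=win10.qcow2,format=qcow2
In practice most people set this up through libvirt/virt-manager rather than raw QEMU flags.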
The Synology issue with the Atom processor came out a year ago. It is an Intel issue with the processor. Synology immediately fixed the issue on all new units and offered a 1-year warranty extension on all those units. The Atom issue does not affect all units, only some. Having said that, I have used Synology boxes for years and they have been pretty rock solid for me and my customers.
If you have a good, solid backup system, then this problem you are having would be a minimal thing. I have 1 NAS back up to a 2nd NAS twice daily, then have a 3rd one that I update monthly and keep in a safe. In addition, I use cloud backup.
I can tell you from experience that the "end-all solution" to your problem is not building out a server with a lot of drives. Been there, done that, and sold the t-shirts. You can get into the same issue there. Go a few years down the road and see if that motherboard is still being made, or, if you are going to use a separate controller, whether it will function with whatever operating systems are out then. This is the problem with using a PC/server to do this: you add complication and more components, and you run a full-blown operating system that needs to be kept up to date and patched too. That is the beauty of a NAS: it is a small box, no frills, no graphics, and it just does one thing. I have done both for many years (or I should say, decades) and there is always a compromise or trade-off. I can tell you of servers failing and having the same issue you are having now.
Your ultimate solution, whatever way you go, is to shore up your backup systems so that you don't have this problem the next time a NAS/server/PC/SAN fails. AND... if you are using anything but, say, a RAID 10 in your NAS or server, you are asking for problems. RAID 5 is no longer recommended in the industry, and RAID 6 will be there by 2020.
I built a FreeNAS box (4+2 4TB) about 4 years ago. About a year after that, I got paranoid, so I built another identical box to rsync the whole thing onto. So I have a RAIDZ2 4+2 setup, *times two.* Talk about overkill. LOL But seriously, what better way is there to back up a huge NAS box? I certainly couldn't think of one, other than ANOTHER NAS BOX.
Not overkill at all, as long as they are geographically separated. Wouldn't regularly backing up one box to image files be better than rsync, though? That gives you the option to restore from an earlier point instead of eventually getting AIDS synced to both boxes.
Hmm interesting. I did use ECC memory on both builds, and ZFS does check-sum all data, so I'd think the data corruption is unlikely. But I see there are other ways to do things to be even more.... paranoid. However, I'm not interested in cloud type storage. My data stays with me, physically. Meltdown and the likes should have given people second thoughts about the cloud.
It's rare, but RAID cards/controllers can die and cause corruption. Same for bad areas on a disk (again rare, as there are protections against it). The reason I mention it is I have seen it happen and cause an outage before.
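[Editor's note] Since both boxes in the thread above run ZFS, snapshots plus send/receive give exactly that point-in-time safety; a minimal sketch, assuming a pool named tank on the primary and a host named backupbox with a backup/tank dataset (all names hypothetical):
# take a dated snapshot, then replicate it to the second box
zfs snapshot tank@2018-07-01
zfs send tank@2018-07-01 | ssh backupbox zfs recv -F backup/tank
# later runs only ship the delta between snapshots
zfs send -i tank@2018-07-01 tank@2018-07-02 | ssh backupbox zfs recv backup/tank
Old snapshots remain restorable on both ends, which plain rsync mirroring doesn't give you.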
I was the Editor for several tech magazines and Synology had always been very supportive of media. I'm sure they can express ship a new PSU to you or even a new NAS. Did you ask them?
You made the point in the video: regular PC parts or server-grade parts: we literally have shelves, by the metric ton, of spare components. Anyone who has worked with computers will be able to fix any x86-based machine in under 10-20 min unless it has a specific form factor. These NAS units can't even provide a proper PSU. At least this one uses a standard ATX 24-pin for the mobo; that's still a good point. Most of those toys run a soft RAID on md, so hopefully recovery is never a big deal.
Yes, you have to buy another Synology box, but if it's a power supply/motherboard issue, you should be able to take the drives out of the old Synology, move them to a new Synology box and migrate the RAID. Synology provides documentation on how to do this.
Using commodity hardware is great... what I value my Synology for is the software, though. Sure, if all you use it for is file storage, a custom server is an easy call. But if you run a bunch of other stuff on it, you can easily spend more in time just configuring everything, keeping up with updates and security patches, etc than the cost of a box that handles all that for you.
You can build a custom server with XPEnology. Basically their DSM on your own server. I agree their software is amazing, so that's the route I went as their boxes are unreliable from my experience.
Those Synology NAS devices are brilliant. All you have to do is put those hard drives in to another one and it'll pick up all the settings for you. Magic.
This is actually a reason I like using Windows RAID. If something goes wrong with my hardware, the drives are easy to pull out and put in any other PC. If your storage needs are under 12TB (the current largest drives, I think), just run 3 12TB drives in RAID 1 for 12TB of redundant, safe storage, easily readable on any PC!
I am a novice, planning to build my first system mainly for genealogy ... archive family records going back to the 1700s. Are you referring to raid as available in Windows 10 (or whatever the current version is)? Offsite cloud backups are a given, but what hardware + software is reasonable for a raid 1 setup?
I am referring to RAID in Windows 10. It's software RAID, running on your CPU, so you'll want a decent CPU in your system. But Windows RAID works well whether you're aiming for redundancy (RAID 1) or speed (RAID 0). I have personally tested both setups and speeds scaled just as well as hardware RAID.
If you want a more modern file architecture with things like protection from bit rot, you could always get a copy of Windows 10 Pro for Workstations and use the ReFS file system, which is very similar to ZFS in terms of data protection.
One of the beauties of ZFS, or of using an enterprise-grade RAID controller like an LSI, is that the RAID/pool information is stored ON THE DISKS! So if you need to replace the controller, OS, or machine as a whole, a new compatible ZFS install or RAID controller will pick up the array and let you import it just fine.
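[Editor's note] A minimal sketch of that recovery path on a replacement machine, assuming the pool was named tank (hypothetical):
# list any importable pools found on the attached disks
zpool import
# import by name; -f forces it if the old host never cleanly exported the pool
zpool import -f tank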
This is surprisingly more informative than expected from the title... which is why I took forever to go back and watch this episode. Title wasn't clickbaity enough, I guess :/ Great video though.
Great video. Definitely the final nail in the coffin for getting a prebuilt NAS. I had been leaning towards a DIY file server for a while and now I'm committed. Looking forward to seeing more vids about it.
No way in Hell I'd buy another one of those. Back it up and load it onto a newly built system through your 3rd-party backup company. Seriously, their lack of any timely support for an emergency situation for an existing customer just screams "We don't want your continued business!"
That's really more of a price point issue than anything else. At the price one of these units go for you're not going to get emergency support or even especially competent regular support. No company selling units like these at the price they are would be able to afford to. There's a reason the top end of the storage market operates at several thousands of dollars per TB.
I love their DSM, but hate their boxes (bought a second-hand one which worked for about two weeks... then, like yours, totally stopped booting). My solution was to use XPEnology, which is amazing!! You could have done the same, and migrated your content easily. I just built my own with my old i7 4790K, 16GB RAM, and a nice ITX board with 6 SATA ports that I purchased, all in a sexy-looking white Node 304. Installed XPEnology, and voila!! Now I have a very easily serviceable DIY Synology NAS. Very cheap to build... a prebuilt proprietary one would have been £1.5k, and it'd be less powerful... Cool to see you troubleshoot it so quickly.
If you lose data from having 1 NAS go down, you don't have your systems set up correctly. Have backups of backups, and offsite storage. What happens if there is a fire at your office? Then even if you have two NAS units, you lose all your data. That's why offsite is so important.
Rewatched it; I think I had too many beers last night and missed it at the start. What software will you use to run your backups of new files? Hope you can do a video on it.
Nice temporary fix! I've been running a Synology NAS without any problems for 4 years (which is already backed up to another unit) but this got me thinking that it could happen to me as well.
This exact issue is why I am a big fan of ZFS. A storage server using it can totally crap itself, and as long as the drives are not damaged, they can be imported into any other machine running a ZFS file system.
I have two DS1815+ units, both with two expansion units, and have never had 1 second of problems with either of them. 170 tb of raw storage between the two. They are so much better than a server (I migrated to Synology from a storage server) that there is no comparison, particularly when it comes to the software available. If it is the power supply, order a power supply. Easy peazy. Your work-around will suffice until the replacement power supply comes.
I built myself a little OpenMediaVault server and over the years have thrown tons of other features on it: FTP, Plex server, torrent client, web server, Samba share, Mumble, TeamSpeak, Omada Controller (EAP) for home wifi control, OctoPrint, weather logger with time-lapse, etc. Services I might need to access outside of my home are routed through a simple webpage (easy-to-remember domain) directly to my machine. Yes, there is a long random pass for each individual service.
If you get a somewhat beefy server, it's pretty easy to throw Xen on it and then FreeNAS as a VM with the disks passed-through. That way you can have a bunch of VMs effectively directly attached to your storage server and free up your real network (if you have any use for VMs that is)
It's a known issue caused by the Intel C2000 series failures. All the NAS units with this Intel C2000 series processor will experience the same issue eventually, usually within 2 years of use. Synology extends the warranty on those affected units to two years. We got one with the same issue after about 20 months of use and got an RMA replacement from Synology. While we were waiting for the replacement to arrive, we managed to use two power supplies (the original plus an additional PC ATX unit) to power up the defective unit and performed a data transfer/backup. Contact Synology to get your unit replaced if it's still under warranty.
That's exactly why I do a DIY server for bulk storage. If you are an enterprise with a service contract for a NAS or SAN, that's different, but for the average home user the pain of having to build a server (or have someone do it for them) is more than offset by what happens if your storage device dies. This does nothing for bad luck with HDDs, though; that's a whole different matter.
I had the same problem with my DS415+. It basically means that the Intel processor has died. The issue seems to be that two clock lines that are needed for booting are wearing away over time due to incorrect voltages being applied (you can find a more accurate description if you search for information on the Intel C2xxx series bug). Synology has extended the warranty for all the devices using this processor, so mine was replaced under warranty and after inserting the NAS drives in the correct order in the new one, my RAID array came back up as if nothing had happened. So moral of the story: contact Synology for an RMA and have this one replaced with a fixed one.
I have had my SynoNAS since they first came out. I have the original DS1811 (8-bay) and it's still going strong in 2019. I have also bought dozens of different Synology models and generations for my customers, friends and family, and I have never had an issue with any of them. I have heard of people having issues, but I think implying that it's common for them to die is a bit misrepresentative. This is NOT the case. Synology is a great company that makes a great product designed mainly for homes and SMEs. It is useful in cutting down IT costs, as its OS, DiskStation Manager (DSM), is really easy to use and keep up to date. Plus all the extra applications make it not just a NAS, but a fully functioning server, without having to pay for any licensing or have any Linux knowledge. I don't use the server features on mine, as I have a rack with actual servers, but I have for a lot of customers with little technical expertise and no budget for multiple servers just to run a shop or small office etc. I primarily use my SynoNAS as my home media server and to store my game backups, photos, music, software etc.
I think the main precaution you should take when dealing with any NAS, server, PC, or other expensive hardware is ensuring that you have it plugged into a decent UPS (not a cheap one). That will prevent power fluctuations, surges and unintended power losses. Make sure it's in a clean, dry environment and keep it as cool as you can.
Also, when you are storing critical or important information, relying on a single solution isn't smart. PSUs, motherboards and embedded CPUs failing is not a new issue. This is why you should always have other backups and, if possible, multiple backups: cloud storage, remote backups, other NASes, external drives etc. This also ensures you are protected from fire and theft, not just hardware failure. You should also make sure to use quality drives that are designed for NAS use. You might think you can save a bit of money by using ordinary cheap drives, but at minimum you will just have to keep replacing them, which WILL cost more in the long run. And at worst you will lose all your data.
Also, I noticed that you booted the NAS and then shut it down before it finished booting. That is not good practice for any computer, especially proprietary devices that have special start-up sequences.
Anyway, I like your channel man. Between you, LTT and Jayz2cents, I don't really need to watch any other tech channels. You guys gimme everything I need.
It's easy. 1. Buy another 1515+ and use the built-in backup vault or high availability features. 2. Don't run out-of-warranty (3 years for the 1515+, and I believe they extended this a year because of this issue) servers in production. 3. Spin up an Ubuntu server and recover that RAID. There is information online, and it's not too risky.
I own the same DS1515+ unit. Mine died on me (CPU failed). I ordered a power supply that took two days to show up, not weeks as suggested in the video. When that didn't fix it, I RMA'd the unit through Synology (a painless process that didn't cost me anything other than time). Within just over a week I had my replacement unit, and within an hour it was up and running as if nothing had happened. To head off a future issue, I bought the RS818+ and moved my DS1515+ to my parents' home as a remote backup storage site. Now they have backup storage for their computers and I have a remote site to access in case of failure of my current RS818+. All my data was fine, and the only impact was buying the power supply and waiting just over a week for the RMA unit.
My experience with Synology was much better than theirs, but with that said, hearing him complain as much as he does, I would argue that there is a bit of exaggeration happening on his part. If you treat people well (email/phone support personnel), they will treat you well in return. It also helps not to come across as whining about your situation. When I emailed Synology, I explained all the steps I took to resolve the issue, said that I had exhausted all my available options, and noted that I had enjoyed the service received up to that point and looked forward to hearing from them in return.
Imo you should just find an electronics repair shop in your vicinity and have the power supply repaired for around €120, and you should be good to go again. Keep up the good work. Love your content. Regards from The Netherlands.
I will just point out something; I don't know if it is still relevant or not. If you jump-start the internal PSU, it will start as soon as you plug in the AC cord, so the HDDs will get power, but the board won't start up until the power switch is pressed, because the external PSU is still waiting for the signal to turn on. Jump-starting this way actually caused one of my motherboards to go bad in a two-PSU setup. So what I did was use a relay to short the green and black wires (PS_ON to ground) as soon as the main +12V rail is active; the delay is like 10 ms or less, and it worked fine for me. This way you are not powering up the HDD or ODD or other peripherals before the board is powered, plus as soon as the system is off, both PSUs turn off as well.
Best solution is to have two NAS devices: one for actual use, one for backups, nightly or similar. The backup NAS doesn't have to be very powerful, just support enough disks for the job. Of course, that's easier and more affordable if you have something like 2x4 TB drives in RAID1 rather than 14 TB of usable space :) Also, since DSM (and QTS, for that matter) is built on Linux, you can use Linux recovery tools on another computer to get the data out. RAID in general scares the crap out of me though, because it's so easy to corrupt or lose the array if you don't know what you're doing.
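[Editor's note] A minimal sketch of that nightly sync between the two boxes, assuming the backup NAS is reachable as backup-nas with a /volume1/backup share (both names hypothetical):
# /etc/crontab entry: mirror the primary share to the backup NAS at 02:00 nightly
0 2 * * * root rsync -a --delete /volume1/data/ backup-nas:/volume1/backup/
rsync -a preserves ownership and timestamps; --delete keeps the mirror exact, so pair it with snapshots on the backup side if you also want history.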
I'm a big fan of your work, but I hate to see local data setups like this. Super happy you invested in offsite backup; more people should do that. Even something simple like BackBlaze is just such a huge savings with one of the low-fee, pay-to-recover setups.
Failure history of Synology aside: you started off calling this proprietary and fixed it with a standard power supply. Let's call apples apples and acknowledge that standard DDR3 stick on the back, and the fact that the CPU supports the standard instruction set as well as basic server functionality like ECC and virtualization. The C2538 is a pretty sweet chip for a home NAS; these Atom chips are nothing to scoff at for this use case. I personally owned the C2750 for home use on a FreeNAS box and it was fantastic (motherboard and CPU for ~$330 with ECC support and embedded SATA RAID for 6-12 disks... yes).
You demonstrated that the failure appears to be simple and power-related. Honestly, you could have alleviated the downtime completely by following high-availability best practices and picking up a second one for redundancy. The native software supports a highly available failover mode, described on page 2 of the product documentation.
Don't get me wrong, I don't care for the home NAS-in-a-box thing; building your own for home use is great. But anyone who cares about their data enough to demand 100% uptime should look at a proper commercial solution, one that provides service and support (NetApp, ESX, etc.), or invest more time into learning how to implement local redundancy properly. I'm a big fan of yours and I sincerely hope that you put some solid research behind this upcoming storage server(s). My advice: if you want to build your own, invest in 2 servers, have your working share actively mirrored, and load the second server up with archival stuff to do your offsite backup.
TLDR: please don't just take a Linus NAS box and call it a day.
That's why people should just build a cheap system with NAS drives in it, and use that. If it breaks, just buy the part you need! It might be considerably harder to set up, but in the end it's more reliable and easier to fix!
FYI, I got a non-working DS1010; all it was was a defective power switch. A new COB from Synology was $40. To test, I even just tinkered with the switch pins and it worked. Good luck!
I so appreciate this. I had someone in IT suggest this to me because I wanted a file/email server of my own, and I was considering it, maybe dropping the email part or making it work. I won't be doing that now. Either building a cheap server or buying a used one.
Hey everyone! We just shot a great news video and are working on a ton of testing for this week. Things are ramping back up now! Expect lots of charts over the next two weeks.
You might also like our RAM timings explained video: th-cam.com/video/o59V3_4NvPM/w-d-xo.html
Grab a GN Modmat here: store.gamersnexus.net/products/modmat
Also did it for Deadmau5.
You do know that this PSU exists as a spare part, right?
Check your tone. The system doesn't boot with or without that PSU, but it does boot with an external one, so the point is irrelevant. All that matters is that we revived the thing for long enough to migrate cleanly.
I don't get it. If you have a temporary fix and have it running well enough to migrate the data off it, why would you buy a new Synology NAS when you intend to shortly switch to a custom-built, server-style NAS? You could either:
1. Just build the new server-style NAS asap; you could probably do that before the new Synology NAS even got there and migrate the files directly. As you say, you have all the parts you would need, except maybe the drives themselves, which I'm sure you could pick up from a local PC store today.
OR
2. Just throw some extra drives in an existing render or work machine temporarily and migrate the files you need now, plus the files you don't have backed up, just in case running it with the external PSU becomes a problem. Then you have more time to build the new server-style NAS.
Then if something happens and it dies running like that, you have the files you need for now and can re-download the rest from the offsite backup; but if it keeps working fine, just do the same as in 1. and directly migrate everything.
I don't understand why you "HAD" to buy it straight away, unless it was more of a panic "order it right now so we can get back to work asap, just in case" kind of thing, without thinking too much about it or even trying to diagnose first. Which I kinda understand, but don't at the same time, since the fix you found should have only taken 30 minutes to an hour max, and an extra 1-hour lead time doesn't seem worth panic-ordering a $1200 NAS.
BTW I'm not trying to hate or say you're dumb or anything, just curious what your reasoning was.
I couldn't find a way to message you, but I had the same problem with my Synology NAS and I safely managed to get all the data back and migrate it using a rather easy piece of software.
It was kind of funny; even the Synology support staff didn't know about it.
It's non-destructive and auto-recognises the Synology format as well as some others. You simply hook the drives up to any standard Windows PC and select the drives in the software (you don't even need the right order), and it scans the drives, figures the rest out itself, and then mounts the array for read access.
PM me and I will point you in their direction.
A couple things:
Your data is easily accessible without a Synology chassis. Take the drives out, connect them to another machine (order doesn't even matter), and boot Ubuntu from a USB stick. You can then mount the array and copy all your data off. Synology has a knowledge base article on their website with all the details. (Under the hood, the array is just standard Linux stuff... mdraid and LVM2. A sketch of the exact commands follows this thread.)
Those models, because of the Atom CPU bug, all had their warranties extended. Call them and open an RMA for it.
I realize it makes for more TH-cam clicks to claim the sky is falling, but it's not. Yes, it stinks to have a system go down. But that's why you should have redundancy, especially for a business...
I'm fairly sure it will use MDADM, I'm really surprised this hasn't been mentioned more!
yeah. it just seems like pure hate for something stupid....
testify!
@chicken it does use mdadm and lvm
@Chicken42069p a 2 sentence definition/explanation of mdadm & lvm would educate us noobs a bit. Yes, we can google it, but...
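[Editor's note] For the folks asking above: mdadm is the Linux tool that manages software RAID (mdraid) arrays, and LVM pools those arrays into flexible logical volumes; Synology layers its volumes on both. A minimal recovery sketch from an Ubuntu live session, assuming the volume appears as /dev/vg1000/lv (the typical Synology name, but verify with lvdisplay; the mount point is hypothetical):
sudo apt-get install -y mdadm lvm2    # mdadm = software RAID tool, lvm2 = volume manager
sudo mdadm --assemble --scan          # detect the Synology array on the attached disks
sudo vgchange -a y                    # activate the LVM volume group layered on top
sudo mount -o ro /dev/vg1000/lv /mnt  # mount read-only so nothing gets modified
# then copy everything off, e.g.: rsync -a /mnt/ /path/to/rescue/disk/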
When you do build your own NAS in a server chassis, please record it and show the install procedure, what parts, and what RAID level you plan on using. Most likely ZFS RAIDZ2?
anders gjerløw I hope so too
I definitely want to see that
You should in all honesty not put all your disks in one vdev like one huge raidz: not only are you screwed if you want to add more storage, you also lose massive amounts of performance. What you should be doing is putting multiple vdevs into a single pool. If you do this, ZFS will stripe the vdevs and you get much better performance. If you need more space, you can just create another vdev, add it to the pool, and you're done. It has to be the same type of vdev (6-disk raidz2, or a mirror, etc.), but still. I mean, technically you can add a non-matching vdev to a pool, but this is a VERY VERY bad idea and honestly you're on your own.
Now you may say, AHH, but with stripes you have no redundancy. Well no, you do: in ZFS a vdev is responsible for its own redundancy, not the pool. So you can stripe as many vdevs as you like into a pool.
You also get cool things like lz4 compression to save space. Now you may think the compression will slow down the pool, right? But no; think about it: the slowest part of your NAS is the hard disks, and you have loads of RAM and CPU cycles sitting there 99% of the time doing fuck all, so why not put them to use? Say you have a 100 MB file and you want to write it to a disk that can do 200 MB a second; you can in theory write that in 0.5 s. Now, if you can get a 50% compression ratio on your data, you only have to put 50 MB on the disk, so the file lands in 0.25 s. Your 200 MB/s disk now effectively does 400 MB/s!
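[Editor's note] A minimal sketch of that layout with hypothetical device names: two 6-disk raidz2 vdevs striped into one pool, lz4 enabled, and a third matching vdev added later.
# create a pool striped across two 6-disk raidz2 vdevs
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
zfs set compression=lz4 tank   # cheap inline compression, often a net throughput win
# grow the pool later by adding another vdev of the same type
zpool add tank raidz2 sdm sdn sdo sdp sdq sdr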
Please do this! Because I need NAS and would rather build something than just buy it.
Building it is surprisingly easy. You can get used servers off eBay pretty cheaply, then load them with drives. Install Linux and ZFS, configure it, and put it in your basement. I have a 20TB ZFS setup downstairs; serious overkill, but I had some of the parts lying around. Dual Xeon L5506 hex cores, 40GB ECC RAM, and the chassis w/ power supply was about $300 shipped. Then eight 4TB Hitachi drives (2 RAIDZ1 vdevs of 4 drives each) give me 24 TB after parity and about 20TB of formatted storage.
Do a build video when you build your server! I love that sort of thing!
LowSpecActionSquad yes!
This would be awesome!
Yes please
Agreed ... I would love to see such a video.
Sorry dude, you are wrong about the Synology stuff being proprietary and needing a lot of time to recover.
Synology uses mdadm, you just need an Ubuntu server and you can rebuild the RAID and recover your files. That is the biggest advantage for me of Synology over proprietary RAID like in the Drobo...
I'm a bit surprised that comment was even made by Steve. I mean, full credit to the guy for being a legend but not knowing that before making the video seems weird. Oh well.
I have used this process to recover my system... Interesting that he does not even mention it.
Luckily my 1813+ is still good, but anyone worth their salt has backups for important data. A single copy of anything is never good. Yes, you have RAID, but it's their own fault for not having a backup for the RAID. I may have 24 TB of RAID 6 fun stuff, but I realize it's not safe without a real, actual backup.
This video unfortunately is a repeating, rambling rant... no one should ever expect priority treatment just because they run their business off of a particular product, especially after a few years. Motherboards easily go obsolete within that time, and are hard to find even for a custom setup.
That being said, since even my 1813+ is by this point nearly 4 years old, I'm working on preparing its transition array. Yes, it's costly, but when you can't afford to lose it, you'd better have a backup. Crying because you lost something irreplaceable in tech because you were too cheap to have a redundant system is pointless. RAID only protects against X# of drive failures, not X# of fires, or hurricanes, or tornadoes, or motherboard failures of said RAID system.
/end ranter's rant. No sympathy for this here... normally I'm not such an ass, but everyone learns the hard way about data backup. It's not entirely Synology's fault.
Far as I was aware, Steve was referring to the hardware? The motherboard and drive power delivery is custom - hence why he said he couldn't really do much of anything around ~3 minutes in, unless he said it elsewhere referring to their software/firmware that I missed?
Just unplug the drives and connect them to another SATA controller. Boot from USB and mount the drives; the RAID is autodetected by Ubuntu and the filesystem is ext4 or BTRFS. Synology is just a wrapper on top of Linux. This video is just a lot of non-information. A quick Google on this model shows what the issue is (dead CPU due to an Intel manufacturing mistake). So instead of doing all this, contacting Synology for a swap or getting a mobo and an Ubuntu stick would have cost the same amount of time and shown some insight.
"We're gonna need another Storage Server" Better Call Linus and 45 Drives!
Defiantly call Wendell! (not joking, seriously call Wendell)
who's wendell?
Wendell is Level1tech
ArhGee you don't have to be defiant, I'm sure Wendell would be happy to help
Booo Linus. Lame
Our DS1815+ just died last week, likely from the same issue as your unit. Thankfully we were within the warranty period and Synology drop-shipped us a new one. (They actually extended the original warranties by a year for a number of commonly affected models.)
Synology uses mdadm, boot up Ubuntu and rebuild the array.
Did not realize that! So the array could be dragged over to an Ubuntu system with mdadm installed, and discovered/imported there? Awesome......
Actually it isn't that simple, and if the data is encrypted, then you're lost in the woods.
This is exactly the reason I went with Synology. Wanted an out of the box, no tinker NAS solution that's still built on linux in case the hardware craps out.
It should just be:
mdadm --assemble --scan   # find and assemble the RAID from its member disks
vgscan --mknodes          # scan for LVM volume groups and recreate missing device nodes
vgchange -a y             # activate the logical volumes so they can be mounted
This is exactly what I did when I upgraded from a DS1010+ with a 5-disk RAID5 to a self-made Ubuntu-based NAS. Just move the disks to the new server, mdadm scan + assemble, and voila: the RAID was up and running with all of my data.
GN petabyte project?
We don't have that kind of need or money! But also, to be fair, we'd need a zettabyte just to be better than Linus.
Gamers Nexus Just go to the shelf, and grab *all* the storage drives. Who needs extras on hand? (/s)
Just get Western Digital or Seagate to "donate" the drives like Linus does!
Do a Ceph cluster with initially three machines and slowly creep up to that golden zettabyte mark over time.
August Svensson - lol, Ceph and the XFS filesystem for small business storage. That is a next level hyperscale data-center solution... ZFS on steroids.
Just finished watching Ask GN 89. Went back to sub page. New GN video has appeared. It doesn't get much better than this 😁
Okay. I guess it could get better if your NAS hasn't just shit the bed and taken all of your projects with it.
Great ethical content maker, his review of the PC-O11D was the one that sold me.
Update: the issue you face can be solved by replacing the transistor in the Q2 area of the motherboard (Q4 area for 1815+ models). This transistor, once burnt, prevents the power switch of the MB from working.
What is the name of the transistor? I cannot read anything on the tiny head of the transistor, so I have no idea what to replace it with.
You could have easily taken the drives out and pulled that data off. Plenty of posts on how to get your data off a dead synology
The power supply can be ordered directly from Synology's website. Delivery times are around 4 to 7 days to Europe.
I'm going to be disappointed if your new server isn't called "The Nexus" or "The Gamer's Nexus". I was actually recently looking at one of these myself; kinda wary now.
Gamer's NASxus...
I'm all for stupid names. My NAS is called Nassie. So cute.
QNAP and Synology NAS units are good to have; this guy is just being a total pussy about this.
He just needs a new PSU, FFS, which is super easy to buy for these NAS boxes, but he's making out like the world has ended and he's going to be dead tomorrow and he can't get a PSU, when QNAP and Synology don't change these PSUs; they keep the parts the same across the range.
The QNAP boxes have an external PSU, so you just replace the external PSU and it's up and running again.
I can see this guy trying to swap to a FreeNAS OS, not getting it set up right, trying to keep it going, then ending up giving up and using this box.
Just to let you know, QNAP and Synology use Linux as well.
I have had this happen to me. Synology replaced the device quickly, no problem. The older devices have the power supply integrated; the newer ones have a separate power brick, which is more easily replaceable.
Worst case, buy or borrow another (it can be a newer device too) just to get the files off it. Just mark the drives like you said and install them in a new device.
Finally, backup backup backup! Any single point of failure will bite you, be it off the shelf or self built.
Synology's RAID is LVM at its heart; a standard Linux install can read the entire RAID set.
I have to thank you. My Synology DS1515+ shut off and would not turn on. Your suggestion of removing the drives and power worked. I am now trying to get it all backed up. I was stupid and didn't plan for failure.
Just use FreeNAS and a custom build
Not particularly familiar with the different NAS/RAID solutions. What does everyone think of FreeNAS vs. the alternatives? Upvote the comment above so we can get some good suggestions, please!
Gamers Nexus FreeNAS should be your only choice for a home brew NAS. It uses a superior file system, ZFS, and has amazing support with constant updates. Having said that, I still prefer Synology because BTRFS is still a great file system, and DSM is very easy to use.
From what I have seen it is usually Freenas vs. Unraid vs Windows Server with some form of raid.
Linus answered the question in his forum and looks like he uses a few different solutions based on his needs.
"The high speed nvme storage server is using Windows Storage Spaces
So is its near realtime backup
The vault is running cluster over zfs on centos
My server is running unraid with a couple of VMs and a plex docker container.
Per the above comment I'm probably not doing it right, but it works for me"
linustechtips.com/main/topic/940503-what-system-does-linus-use-for-his-server/
FreeNAS, as the name implies, is free, and you should really look into it. ZFS is really robust and scales well for the future.
tbh it's all based on the kind of redundancy you want, and FreeNAS has only ZFS, which isn't very flexible. I went with Windows 10 Storage Spaces, which is more adaptable, and the system is easy to use for a home server since it's the classic one.
not directed at anyone in particular but... obligatory "RAID IS NOT A BACKUP" comment.
Yes ... and you're totally right to remind everyone about that.
Please keep us in the loop on your NAS build, as well as the why behind your choices of OS and file system.
Keep up the good work.
The problematic x(x)15 models: my local dealer was quite open about it when I asked her which Synology models are more prone to power failure, and the few models she mentioned were all from the 2015 series, perhaps none after that. Hope Synology learns from their mistake.
Have a DS1813+ running for 9 years now. Always running. Replaced all the hard drives with bigger ones (from 4TB 5400 to 8TB 7200) a couple of months ago after one of them failed.... No problem at all.
Was gunna comment on GN pumping out the content aaaannnnd.....video about y'all nearly losing a crap ton of data you need to do so. Glad you got it working temporarily!
3:55 - NO. This is not a problem with a NAS. The potentially catastrophic loss of data is *entirely* because you had a deficient backup strategy. If you can't tolerate a piece of hardware dying, you need a backup. It's really as simple as that. All hardware fails eventually.
For a critical system design, a client actually had a whole spare server in a box in storage. If any one component or a whole server died, near zero wait time to replace any part or a whole server. Swap and go.
Expensive solution? No. Downtime? Six figures per hour $$$$$$.
So it was worth it to have extra parts nice and new on site. Offsite backup out of state, in case some disaster took out the building! :-O
If something took out both locations in both states, well - not their lucky day. :-/
I believe he says somewhere that he does have a backup.. just saying.
Bummer. I ended up buying a 24tb NAS because of the size/cost convenience, but I trust it as far as I can throw it. This is a great example of RAID is not a backup (which you know and had good mitigation in place).
Looking forward to the NAS Server build!
Also maybe cover server security, i.e. ensuring that only those authorized to have access, do, and securing the box so it can be internet-facing (accessible via VPN from a remote location).
I love it when you have an issue and just take us on a trip through the diagnostics of the issue.
Synology has a guide on how to recover drives by plugging them into a regular PC and using a Linux Live CD. It does work with those two-bay NAS units; not sure how easy it is to do this with their bigger ones.
Several years back I had a Seagate Blackarmor NAS which stopped recognizing the RAID array even though the drives were fine. Pulled the drives, connected them to my desktop, and ran a RAID recovery program to get the data back. Much easier than trying to fix or replace the NAS itself.
How are you NOT doing backups every 24hrs?! o.O
I moved off QNAP/Synology/etc. a long time ago. I went to FreeNAS via iXsystems. I did buy their actual hardware, but all of it is non-proprietary. You can get all the parts from the manufacturers (don't have to rely on eBay). The devices are also user-serviceable, and iXsystems will send you the part, with a 14-day return time allotment. It can take time to RMA, as they do so back to the original manufacturers, but you can get that advance replacement to keep you from being down. I decided to buy an extra board from them at about cost, so now I have a spare. It's almost like a DIY server, except you can buy it all as a package.
Another thing with using a NAS is having a local backup that's up to date. While not affordable for everyone, I have a second NAS (slightly smaller in TB), which is synced every 6 hours with high compression. If the first NAS fails, my downtime is only the replication schedule time. Remote backup can then be used for the smoking-hole scenario, with a slightly larger sync window.
I'd like to point out that Synology uses Linux LVM and mdadm for storage, so if you plug these drives, just as they are, into a Linux server, they should be detected without a problem. Just like in the case of ZFS, LVM uses unique IDs for the drives, so it should pair them up right even if you mix them up. Maybe send an email to Wendel from Level1. I'm sure he can help you out; also... maybe a nice video for the both of you.
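For anyone trying this, a quick non-destructive way to confirm the Linux box actually sees the array before assembling anything (the device name below is hypothetical; your NAS disks may enumerate differently, and the data partition number varies by model):
lsblk                        # see how the drives and their partitions enumerated
cat /proc/mdstat             # check whether any md arrays were auto-assembled at boot
mdadm --examine /dev/sdb3    # inspect one member's RAID superblock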
My word this guy can ramble on, 30 seconds of useful footage
Curious what your backup solution for that much data winds up being. I have about 40TB active on Synology NAS's, but haven't found a price-reasonable remote backup solution outside of just buying duplicate NAS's and drives.
We use Crashplan for Business in some places with success. Quite inexpensive. Uploading 40TB for the first time (or restoring) could take a long, long time depending on your internet, though. But you will have that problem with any cloud service. Their plans are per device with unlimited storage. So you just need to concentrate everything on a single machine and then send it all to the cloud. Their system does incremental backups, versioning and other good stuff.
That's the problem with services like that and BackBlaze; a NAS server doesn't count and I can't "concentrate everything on a single machine" lmao. I will look into it in case they allow NAS's, but I doubt it
I have it running on a few headless Debian servers, so it runs fine on Linux. I just open the CrashPlan Desktop app on my local machine through X11 redirection over an SSH connection (works fine even on Windows). You can always pay 2 subscriptions for 2 machines (or more) if you need to; concentrating on a single machine is not really a must, but many times it does make sense to have a big backup machine that can be replicated to the cloud. I am not sure what you mean by "NAS server doesn't count" since it runs on any Linux. I guess you could even mount shared storage on the machine and include the mount in the backup if you want > EDIT (which means that you can also run it on any Windows/Mac/Linux machine that has access to the shared storage)
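A rough sketch of that X11 trick, assuming a stock CrashPlan install on Linux; the launcher name and path vary by version, so treat both as placeholders:
ssh -X admin@backup-server                  # X11 forwarding; needs X11Forwarding enabled in the server's sshd_config
/usr/local/crashplan/bin/CrashPlanDesktop   # hypothetical launcher path; the GUI then renders on your local display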
Try unRAID; it doesn't stripe your data across multiple drives and you can have two parity drives.
So you won't lose data if one or two drives fail, and even if more than 2 drives fail, you'll only lose the data on the failed drives, because your data is not striped across the drives.
Well with Backblaze, they specifically don't allow NAS's and thus don't allow network-mapped drive letters to keep that from being abused. I have Synology NAS's, much like what's shown here, so unless they have a specific Synology app I pretty much have to back up via other PCs.
Synology has very good support in my experience. I had an issue and they quickly shipped a replacement. It's possible to have hardware failures with anything. If you want very generic commodity hardware, use FreeNAS. Then if you have a failure, you can rebuild the whole system or just replace the component.
You would be a terrible person if you took the needed parts out of the new one to fix the old one and then RMA'ed the new one, but only a terrible person would do such a thing to an obviously defective product.
This all day. Bought a Synology cause I wanted the set-it-and-forget-it, but the limited space (50gb) and the gimped CPU are killing me. I need to move on to a custom solution.
Probably each of the parts has its own serial number to avoid things like that; the m/b could have it in the BIOS, but if not, yes, I would have done the same.
mochi kun Wait, 50GB? When was this?
50Tb, lol, my bad
@Giorgos - While I would never encourage such a terrible, terrible deed, let's be honest: they are not nearly organized enough to have all the serial numbers for each product cross-matched with the internal serial numbers of each component.
Wow, I'm glad I knew I could set up a home server back when I wanted a NAS and went with the DIY solution.
While watching the intro. Me: "yeah, drop the NAS a bit more...." 😂
I run a 12TB RAID 5 DIY NAS server, and I was pretty baffled when I saw the title. I was happy to hear that your criticism was at those crappy prebuilt systems. I never liked those.
**CRT scope producing screen burn in the background.**
BTW, why not remove all the HDDs and mount them in Linux on another device to copy the data off it?
Am I missing something or what?
This is why my 'NAS' lives in a DS380 case and its array is built using mdadm. Makes transferring the drives so much easier.
ZFS > RAID IMO. Having the filesystem handle the majority of the replication tasks (and not separate hardware), while being an open standard, makes things a lot more safe to handle.
If you don't care about performance, ZFS is great
Glad you didn't lose all that data! Looking forward to the server build.
When I started the 14 min video, I thought "The Problem" would be much bigger. But basically the video just shows how to unplug and plug in a power supply, and it lasts 14 min.
And if you mean the problem is that a product (like a NAS) can fail, wow didn't know that...
the problem was apparently unknown at the time until it powered on from the second psu
Yeah, but why does it need to be 14 min?
Everybody knows a TH-cam rant shorter than 10 minutes is not a proper rant.
So basically "the cause of problem was unknown until I tried the most obvious thing"? Wouldn't he have done exactly the same thing if he had built the box himself? (Yes, I know there's a small difference between "checking if a PSU is defective" and "checking if another PSU powers the system", but come on! He sounds like Synology was the biggest crap this side of the Atlantic.)
I have a Synology 1815+ that suffered the exact same failure. I got it to come to life using the double power supply trick then created (soldered up) an adapter for a SeaSonic SS-250SU power supply ($43 each). Mine has been happy for over a year now ;)
Synology (and others) have many data recovery methods for various scenarios. If you bought another NAS (you said you did), you could have used that for the recovery (by following correct steps). Electronics fail, no matter what the make/model. Don’t blame the tools for you not having backed up critical data.
I understand your frustration, especially since your small business depends on this system to operate. My experience in enterprise systems management has taught me about the concept of "lifetime spares". The idea is that if you have a critical system that you know will become unsupported by the manufacturer, you buy duplicate parts or complete spare systems to continue to operate for the lifetime of the project that these systems support (some of the systems I supported had been in operation more than 30 years). Depending on how standard those parts are, you may be able to hedge with an alternative supplier (as demonstrated) or wait until the products are EoL and stock up on used stuff (testing before you need it). A warranty is the first defense, a service contract is the second, and stocking your own spares is the third; the worst case is board-level repair. All of these options have differing costs and applicability; you have to look at it from the perspective of an insurance underwriter, being critical of potential failure points and associated costs. You always have the option of transitioning to a different system, but in many cases the cost of engineering and implementing a new solution will far exceed proper maintenance and stocking spare parts. In your case, it may be as simple as replacing bad capacitors in the power supply, but your tester doesn't display ripple or noise like proper electronics test tools. Your tester probably doesn't supply enough load to simulate the connected system either.
Maybe it's time to buy another NAS the same size and sync all of your data across as another point of redundancy? Or you could always sync it first then keep it at home as an offsite backup :)
Fridgemusa - at home is on-site, in his case. With gigabit internet he does have the possibility of off-site sync of large 4K video though.
Yep.
As my luck would have it, just today my Synology 916+ died on me. Thank you for this video, gives me some guidelines and tips to recover my data.
DIY servers
Well done on the troubleshooting stuff, lesson learned on not following the 3-2-1 backup protocol. Great channel.
"These NAS tend to die randomly so we decided to put all on data on it without a complete backup solution"... oook
This was a really interesting video guys. Legit, those mod mats are actually really useful! I think you've got a good product on your hands there guys.
The funny thing is I was just looking into buying a NAS, but I am very much considering just buying a Threadripper CPU + 1180 and making a hybrid NAS/Plex server/gaming PC, and not spending hundreds on completely separate devices when I can do it all in one with overkill.
Honestly that's basically what I did. I'm actually doing all my gaming in a Windows 10 VM running on my Threadripper box with a dedicated GPU for the VM, but it's got Linux as the base system for everything else. It works pretty darn well.
This is what I did. I consolidated my computer and ZFS NAS together into one new beefy box running Proxmox. Proxmox handles all the ZFS storage for me, and for VMs I'm running Windows 10 (for gaming) and macOS High Sierra (for everything else) on it.
You could easily do that. I have a backup server running in a VM on my Ryzen 7 1700 system. I have an ICY DOCK 5-bay hotswap unit in the case (it's an older case that actually has 5.25" bays...). I run it under VirtualBox with a CentOS VM which has ZFS. The drives are attached via raw disk access, and the VM runs in headless mode.
So it just sits in the background, using very little overhead, and provides ZFS based storage.
With a Threadripper I would honestly run Linux as the base OS, run a virtual Windows, and pass that 1180 to the Windows VM so you get 99.5% of bare-metal performance. Then you can pass your SATA controller to a BSD VM and set up a ZFS storage array, all on one box.
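The GPU half of that idea, stripped to the essentials: a hypothetical QEMU invocation, assuming the card sits at PCI address 0a:00.0 and is already bound to the vfio-pci driver (in practice most people set this up through libvirt instead):
# GPU passthrough sketch; the PCI address, sizes and disk image are made up
qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -m 16G -smp 8 \
  -device vfio-pci,host=0a:00.0 \
  -drive file=win10.qcow2,if=virtio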
Done this with an unRaid build. 1950x + 1060/Titan Xp.
Super flexible and does everything I need... and my needs are pretty diverse haha
The Synology issue with the Atom processor came out a year ago. It is an Intel issue with the processor. Synology immediately fixed the issue on all new units and offered a 1-year warranty extension on all those units. Now, the Atom issue does not affect all units, but only some. Having said that, I have used Synology boxes for years and they have been pretty rock solid for me and my customers. Now, if you have a good solid backup system, then this problem you are having would be a minimal thing. I have 1 NAS backup to a 2nd NAS twice daily and then have a 3rd one that I update monthly and keep in a safe. In addition, I use cloud backup. I can tell you from experience that the "end all solution" to your problem is not building out a server with a lot of drives. Been there, done that and sold the T-shirts. You can get into the same issue there. Go a few years down the road and see if that motherboard is still being made, or, if you are going to use a separate controller, will it function with whatever operating systems are out then? This is the problem with using a PC/server to do this. Now you add complication and more components and run a full-blown operating system. These need to be kept up to date and patched too. That is the beauty of a NAS: it is a small box, no frills, no graphics, and it just does one thing. I have done both for many years, or I should say decades, and there is always a compromise or trade-off. I can tell you of servers failing and having the same issue you are having now. Your ultimate solution, whatever way you go, is to shore up your backup systems so that you don't have this problem the next time a NAS/Server/PC/SAN fails. AND..... If you are using anything but, say, a RAID 10 in your NAS or server, you are asking for problems. RAID 5 is no longer recommended in the industry and RAID 6 will be there in 2020.
I built a FreeNAS box (4+2 4TB), about 4 years ago.
About a year after that, I got paranoid, so I built another identical box, to rsync the whole thing onto.
So I have ZFS2 4+2 setup, *times two.*
Talk about overkill. LOL
But seriously, what better way is there to back up a huge NAS box? I certainly couldn't think of one, other than ANOTHER NAS BOX.
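For what it's worth, that mirror can be a nightly cron one-liner. A sketch, assuming both pools are mounted at /mnt/tank and SSH keys are set up; the hostname and paths are made up:
rsync -aHAX --delete /mnt/tank/ backupbox:/mnt/tank/   # preserves hardlinks/ACLs/xattrs; --delete makes it a true mirror, which is also exactly why it's not a backup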
Not overkill at all, as long as they are geographically separated. Wouldn't regularly backing up one box to image files be better than rsync though? Giving you the option to restore from an earlier point instead of possibly getting AIDS synced to both boxes.
rsync is not a backup. If data gets corrupted somehow you could end up rsync'ing the bad data
There are projects on github that can get you backing up to Amazon S3 from a ZFS volume using ZFS send and ZFS receive, so that's an option.
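The primitive those projects wrap looks roughly like this; the pool/dataset names are invented, and incremental sends just add -i with the previous snapshot:
zfs snapshot tank/data@2018-07-01                                          # point-in-time snapshot
zfs send tank/data@2018-07-01 | ssh backupbox zfs receive -F backup/data   # the S3 tools replace the ssh half of the pipe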
Hmm interesting. I did use ECC memory on both builds, and ZFS does check-sum all data, so I'd think the data corruption is unlikely. But I see there are other ways to do things to be even more.... paranoid.
However, I'm not interested in cloud type storage. My data stays with me, physically. Meltdown and the likes should have given people second thoughts about the cloud.
It's rare, but RAID cards/controllers can die and cause corruption. Same for bad areas on a disk (again rare, as there are protections against it). The reason I mention it is I have seen it happen and cause an outage before.
I was the Editor for several tech magazines and Synology had always been very supportive of media. I'm sure they can express ship a new PSU to you or even a new NAS. Did you ask them?
Synology has a procedure for how to pull that data using Linux.
Synology uses Linux, it's mdadm. Super standard Linux stuff.
You made the point in the video: regular PC parts vs server-grade parts. We literally have shelves, by the metric ton, of spare components. Anyone who has worked with computers will be able to fix any x86-based machine in under 10-20 min unless it has a specific form factor. These NAS units can't even provide a proper PSU. At least that one is using a standard ATX 24-pin for the mobo; that's still a good point.
Most of those toys are running a soft RAID on md, so hopefully this is never a big deal to recover.
Yes, you have to buy another Synology box, but if it's a power supply/motherboard issue you should be able to take the drives out of the old Synology, move them to a new Synology box and migrate the RAID. Synology provides documentation on how to do this.
Using commodity hardware is great... what I value my Synology for is the software, though. Sure, if all you use it for is file storage, a custom server is an easy call. But if you run a bunch of other stuff on it, you can easily spend more in time just configuring everything, keeping up with updates and security patches, etc than the cost of a box that handles all that for you.
You can build a custom server with XPEnology. Basically their DSM on your own server. I agree their software is amazing, so that's the route I went as their boxes are unreliable from my experience.
Time to call Wendell
Those Synology NAS devices are brilliant. All you have to do is put those hard drives in to another one and it'll pick up all the settings for you. Magic.
This is actually a reason I like using Windows RAID. If something goes wrong with my hardware, the drives are easy to pull out and put in any other PC. If your storage needs are under 12TB (current largest drives, I think), just run 3 12TB drives in RAID 1 for 12TB of redundant, safe storage, easily readable on any PC!
I am a novice, planning to build my first system mainly for genealogy ... archive family records going back to the 1700s. Are you referring to raid as available in Windows 10 (or whatever the current version is)? Offsite cloud backups are a given, but what hardware + software is reasonable for a raid 1 setup?
RAID on Windows is utter trash. It does not have any of the modern features that something like ZFS has.
Unless, of course, your Windows FakeRAID (TM) just loses its mind, resulting in total loss, a standard feature of S2D! :)
I am referring to RAID in Windows 10. It's software RAID, running on your CPU, so you'll want a decent CPU in your system. But Windows RAID works well whether you're aiming for redundancy (RAID 1) or speed (RAID 0). I have personally tested both setups and speeds scaled just as well as hardware RAID.
If you want a more modern file architecture with things like protection from bit rot, you could always get a copy of Windows 10 Pro for Workstations and use the ReFS file system, which is very similar to ZFS in terms of data protection.
One of the beauties of ZFS, or of using an enterprise-grade RAID controller like an LSI, is that the RAID/pool information is stored ON THE DISKS! So if you need to replace the controller/OS/machine as a whole, a new compatible ZFS install/RAID controller will pick up the array and allow you to import it just fine.
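Concretely, moving a pool to a new machine is about two commands; the pool name tank is hypothetical:
zpool import        # with no arguments, lists importable pools found on the attached disks
zpool import tank   # import by name; add -f if the pool wasn't cleanly exported from the old box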
#LinusSaveGN let's give GN a badass server, they deserve!
This is surprisingly more informative than expected from the title... which is why I took forever to go back and watch this episode. The title wasn't clickbaity enough, I guess :/ Great video though.
Storinator to the rescue
Great video. Definitely put the nail in the coffin of getting a prebuilt NAS. I had been leaning towards a DIY file server for a while and now I'm committed. Looking forward to seeing more vids about it.
No way in Hell I'd buy another one of those. Back it up and load to a new built system through your 3rd Party back up company. Seriously, their lack of any timely support for an emergency situation for an existing customer just screams "We don't want your continued business!"
That's really more of a price point issue than anything else. At the price one of these units go for you're not going to get emergency support or even especially competent regular support. No company selling units like these at the price they are would be able to afford to. There's a reason the top end of the storage market operates at several thousands of dollars per TB.
I love their DSM, but hate their boxes (bought a second-hand one which worked for about two weeks... then, like yours, totally stopped booting). My solution was to use XPEnology, which is amazing!! You could have done the same and migrated your content easily.
I just built my own with my old i7 4790k, 16GB RAM, and a nice ITX board with 6 SATA ports, all in a sexy-looking white Node 304. Installed XPEnology, and voila!! Now I have a very easily serviceable DIY Synology NAS. Very cheap to build... prebuilt proprietary would have been £1.5k, and it'd be less powerful...
Cool to see you troubleshoot it so quick.
If you lose data from having 1 NAS go down, you don't have your systems set up correctly. Have backups of backups, and offsite storage.
What happens if there is a fire at your office? Even with two NAS units you lose all your data. That's why offsite is so important.
? That was already discussed in the video. Watch it.
Gamers Nexus lol
Rewatched it; I think I had too many beers last night and missed it at the start.
What Software will you use to run your backups of new files? Hope you can do a video on it.
Nice temporary fix!
I've been running a Synology NAS without any problems for 4 years (which is already backed up to another unit) but this got me thinking that it could happen to me as well.
FreeNAS. :)
Just built an Unraid server myself and loving it so far.
This exact issue is why I am a big fan of ZFS. A storage server using it can totally crap itself, and as long as the drives are not damaged they can be imported into any other machine running a ZFS file system.
I have two DS1815+ units, both with two expansion units, and have never had 1 second of problems with either of them. 170TB of raw storage between the two. They are so much better than a server (I migrated to Synology from a storage server) that there is no comparison, particularly when it comes to the software available. If it is the power supply, order a power supply. Easy peazy. Your workaround will suffice until the replacement power supply comes.
I was about to buy one of these for my video production uses; glad I got to check this out before that.
I built myself a little Open Media Vault server and over the years have thrown tons of other features on it. For instance FTP, Plex server, torrent client, web server, Samba share, Mumble, TeamSpeak, Omada Controller (EAP) for home wifi control, OctoPrint, weather logger with time-lapse, etc. Services I might need to access outside of my home are routed through a simple webpage (easy-to-remember domain) directly to my machine. Yes, there is a long random pass for each individual service.
If you get a somewhat beefy server, it's pretty easy to throw Xen on it and then FreeNAS as a VM with the disks passed-through.
That way you can have a bunch of VMs effectively directly attached to your storage server and free up your real network (if you have any use for VMs that is)
It's a known issue caused by the Intel C2000 series failures. All the NAS units with this Intel C2000 series processor will experience the same issue eventually, usually within 2 years of use. Synology extends the warranty on those affected units to two years. We got one with the same issue after about 20 months of use, and got an RMA replacement from Synology. While we were waiting for the replacement to arrive, we managed to use two power supplies (one original and one additional PC ATX) to power up the defective unit and performed a data transfer/backup. Contact Synology to get your unit replaced if it's still under warranty.
That's exactly why I do a DIY server for bulk storage. If you are an enterprise with a service contract for a NAS or SAN that's different, but for the average home user the pain of having to build a server (or have someone do it for them) is more than offset by what happens if your storage device dies. This does nothing for bad luck with HDDs though; that's a whole different matter.
I had the same problem with my DS415+. It basically means that the Intel processor has died. The issue seems to be that two clock lines that are needed for booting are wearing away over time due to incorrect voltages being applied (you can find a more accurate description if you search for information on the Intel C2xxx series bug). Synology has extended the warranty for all the devices using this processor, so mine was replaced under warranty and after inserting the NAS drives in the correct order in the new one, my RAID array came back up as if nothing had happened. So moral of the story: contact Synology for an RMA and have this one replaced with a fixed one.
One could make a drinking game out of this video: drink every time he mentions his Modmat and that you can buy it.
This video was brought to you by Synology. Hours of fun for the troubleshooter in you. Corrupting data since the year 2000!
I have had my SynoNAS since they first came out. I have the original DS1811 (8-bay) and it's still going strong in 2019. I have also bought dozens of different Synology models and generations for my customers, friends and family. I have never had an issue with any of them. I have heard of people having issues, but I think implying that it's common for them to die is a bit misrepresentative.
This is NOT the case. Synology is a great company that makes a great product designed mainly for homes and SMEs. It is useful in cutting down IT costs as its OS, DiskStation Manager (DSM), is really easy to use and keep up to date. Plus all the extra applications make it not just a NAS, but a fully functioning server without having to pay for any licensing or have any Linux knowledge. I don't use the server features on mine, as I have a rack with actual servers, but I have for a lot of customers with little technical expertise and no budget for multiple servers just to run a shop or small office etc. I primarily use my SynoNAS as my home media server and to store my game backups, photos, music, software etc.
I think the main precaution you should take when dealing with any NAS, server, PC, or other expensive hardware would be ensuring that you have it plugged into a decent UPS (not a cheap one). That will prevent power fluctuations, surges and unintended power losses. Make sure it's in a clean, dry environment and keep it as cool as you can.
Also, when you are storing critical or important information, relying on a single solution isn't smart. PSUs, motherboards and embedded CPUs failing is not a new issue. This is why you should always have other backups and, if possible, multiple backups: cloud storage, remote backups, other NAS units, external drives etc. This ensures you are also protected from fire and theft, not just hardware failure.
You should also make sure to use quality drives that are designed for NAS use. You might think you can save a bit of money by using ordinary cheap drives, but at minimum you will just have to keep replacing them, which WILL cost more in the long run. And at worst you will lose all your data.
Also, I noticed that you booted the NAS and then shut it down before it finished booting. That is not good practice for any computer, but especially for proprietary devices that have special start-up sequences.
Anyway, I like your channel man. Between you, LTT and Jayz2cents, I don't really need to watch any other tech channels. You guys gimme everything I need.
Custom FreeNas box is the way to go. Been running one for a while now and it is a wonderful OS.
It's easy.
1. Buy another 1515+ and use the built-in backup vault or high availability features.
2. Don't run out-of-warranty (3 years for the 1515+, and I believe they extended this a year because of this issue) servers in production.
3. Spin up an Ubuntu server and recover that RAID. There is information online, and it's not too risky.
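For step 3, if you're nervous about any writes touching the member disks, recent mdadm versions can bring the array up read-only; a conservative variant of the usual command:
mdadm --assemble --scan --readonly   # assemble but refuse writes until explicitly switched to read-write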
I own the same DS1515+ unit. Mine died on me (CPU failed). I ordered a power supply that took two days to show up, not weeks as suggested in the video. When that didn't fix it, I RMA'd the unit through Synology (a painless process that didn't cost me anything other than time). Within just over a week I had my replacement unit, and within an hour it was up and running as if nothing had happened. To head off a future issue I bought the RS818+ and moved my DS1515+ to my parents' home as a remote backup storage site. Now they have backup storage for their computers and I have a remote site to access in case of failure of my current RS818+. Also, all my data was fine and the only impact was buying the power supply and waiting just over a week for the RMA unit. My experience with Synology was much better than theirs, but with that said, hearing him complain as much as he does, I would argue that there is a bit of exaggeration happening on his part. If you treat people well (email/phone support personnel) they will treat you well in return. It also helps to not come across as whining about your situation. When I emailed Synology, I explained all the steps I took to resolve the issue, said that I had exhausted all my available options and that I had enjoyed the service received up to that point, and looked forward to hearing from them in return.
Imo you should just find an electronics repair shop in your vicinity and have the power supply repaired for ±120,-. Then you should be good to go again.
Keep up the good work. Love your content. Regards from The Netherlands.
My DS1515+ died a few months ago with exactly the same problem... sent it to them, they replaced it a few days later with a DS1517+ at no cost.
I have been using a DS918+ for 1 year. No issues at all!!! Very stable!!
And this is why I run a home backup server and not a backup nas. Reliability and ease of repair
I will just point out something, don't know if it is still relevant or not: if you jump-start the internal PSU, it will start as soon as you plug the AC cord in, so the HDDs will get power, but the board won't start up until the power switch is pressed, because the external PSU is waiting for the signal to turn on. Jump-starting this way actually caused one of my motherboards to go wrong in a two-PSU setup.
So what I did was use a relay to short the green and black wires (PS_ON to ground) as soon as the main +12V rail is active; the delay is like 10ms or less and it worked fine for me. This way you are not powering up the HDDs, ODD or other peripherals (if you have any) before the board is powered, plus as soon as the system is off, both PSUs turn off as well.
Best solution is to have two NAS devices, one for actual use, one for backups, nightly or similar. The backup NAS doesn't have to be very powerful, just support enough disks for the job. Of course, that's easier and more affordable if you have like 2x4 TB drives in RAID1, rather than 14 TB of usable space :)
Also since DSM (and QTS for that matter) is built on Linux, you can use Linux recovery tools on another computer to get the data out. RAID in general scares the crap out of me though, because it's so easy to corrupt or lose the array if you don't know what you're doing.
I'm a big fan of your work, but I hate to see local data setups like this. Super happy you invested in offsite backup; more people should do that. Even something simple like BackBlaze's is just such a huge saving with one of the low-fee, pay-to-recover setups.
Failure history of Synology aside: you started off calling this proprietary and fixed it with a standard power supply. Let's call apples apples and just acknowledge the standard DDR3 stick on the back and the fact that that CPU supports the standard instruction set as well as basic server functionality like ECC and virtualization. The C2538 is a pretty sweet chip for a home NAS. These Atom chips are nothing to scoff at for this use case. I personally owned the C2750 for home use on a FreeNAS box and it was fantastic (motherboard + CPU for ~$330 with ECC support and embedded SATA RAID for 6-12 disks... yes).
You demonstrated that the failure appears to be simple and power-related. Honestly, you could have alleviated the downtime completely by following high-availability best practices and picking up a second unit for redundancy. The native software supports a highly available failover mode, described on page 2 of the product documentation.
Don't get me wrong, I don't care for the home NAS-in-a-box thing; building your own for home use is great. But anyone who cares about their data enough to demand 100% uptime should look at a proper commercial solution, one that provides service and support (NetApp, ESX etc.), or invest more time into learning how to implement local redundancy properly. I'm a big fan of yours and I sincerely hope that you put some solid research behind this upcoming storage server(s).
My advice is that if you want to build your own, invest in 2 servers: have your working share actively mirrored, and load the second server up with archival stuff to handle your offsite backup.
TLDR: please don't just take a Linus NAS box and call it a day
Really interested in seeing the next video on you guys making a new custom server for storage. I'll likely follow up on it myself
that's cool mat with the Power supply info... dig it!
That's why people should just build a cheap system with NAS drives in it and use that. If it breaks, just buy the part you need! It might be considerably harder to set up, but in the end it's more reliable and easier to fix!
FYI I got a non-working DS1010; all it was was a defective power switch. A new cob from Synology was $40. To test, I even just tinkered with the switch pins and it worked. Good luck!
I so appreciate this. I had someone in IT suggest this to me because I wanted a file/email server of my own, and I was thinking about it, either dropping the email part or making it work. I won't be doing that now. I'll either build a cheap server or buy a used one.