1. Jeff checks Linus' systems before passing them on to others. Linus built a new system for a big guy from Oregon and because of Corona and as Jeff lives practically (!) next door to the guy, Jeff delivered it in person instead of Linus. 2. Linus has visited a data recovery company, so he is aware just how much he can screw up before actually losing data. LMG actually suffered a fire in the server room. There was something about a bolt. Not lightning but a regular bolt.
@@mikkelbreiler8916 The closest LMG came to losing all their data was a raid controller dying while he was prepping the new backup solution. There's a very interesting video about that. But yeah, the UPS frying is also something that happened.
@@gari5961 that one definitely wasn't the reason for me to automate my server backups for the server I got at that company in that city and luckily didn't burn down. Nope.
My backup setup is very similar to yours. Pro-tip: change the theme on your backup truenas gui, so it's immediately obvious which system you're looking at.
Suggest for off site, duplicate the server you just built, have it replicate the backup server on site initially then move it to a friend's house and ssh tunnel back
if your car is in a closed storage...same building... get a ssd server for your car and a network port in the garage... if something happens im sure i rush away with my car. its not the best but also a solution to help the family with backups if you do a visit. :) greatz from Germany have a nice Day
Tape drive storage is an ol' reliable and you can store the tapes off site at a friend's house or a climate controlled storage unit (nearby or by mail for geographically separated backup) or a safety deposit box at a bank. Tapes aren't that big in physical volume each, but this is a case where sneakernet may still outstrip network duplication for offsite backup. Plus it's a true offline storage which would protect you against certain types of malware. That said, enterprise grade tape drives can get expensive fast. A newer LTO7 or LTO8 drive is north of $3,000. An LTO5 drive on eBay is like $400 reliably, not counting the HBA or rack mounting, but the LTO5 tapes are like $15 each if you buy a 5 pack - though the newer varieties of tapes are cheaper per terabyte yet more expensive per tape. I'll use the old LTO5 tape format for the rest of this reply, since that's what I'm using. Bear in mind you likely won't get the full capacity claimed on tapes if you write video files. That's because video files compress poorly to tape media and video formats already try really hard to be efficient plus when you add in a modest capacity loss if you enable LTFS and an LTO5 tape will hold a bit more than 1.2 TB. (Or 1.5 TB if you leave LTFS off, but then you need to do more tracking of what's on a tape rather than just mounting and reading the filesystem.) A tape rotation plan would be needed too if you plan to reuse tapes... Aaaaand then you're back on human scheduling. A harder problem for a TH-cam channel or someone running a home lab than a general user, clearly. Source: I have an LTO5 external SAS tape drive hooked up to my home/lab server. It's enough for me, but my RAID 5 data array is only 2.7 TB in usable size, so... Your problem is significantly bigger than mine. Though watching you gives me the itch to buy more servers. 😉 Some other people have suggested you can try network duplication to an offsite location, but maybe you can try with some rate limiting to try and share your uplink bandwidth with your other use cases? The initial sync will still be crazy long (or even get longer), but it might not necessarily cause you daily discomfort while it runs in the background. You might also need to break that rsync job into batches. 😬
As an lto6 user, lto7+ for for 40tb+ arrays is my vote. Especially if using a simple tape drive, and not an autoload library. I'd love to go that route, but would probably need some friends willing to split the cost of the drive, and only make periodic tape backups
I calculated rhat for this exact use case (off site raw video backup) AWS Glacier is the cheapest option (cheaper than buying your own lto drive and managing everything). You'll need these Backups close to never. I'm just not sure if AWS Glacier stores the data redundant. I think I wouldn't even use on site backup If the videos are years old if Glacier is redundant.
If both TrueNAS machines have the dual 10gig SFP+ cards in them you could make use of LAGG to increase the throughput between the servers. Since the traffic is being handled on the same switch it's not going to affect your network that much. Plus added bonus of fault tolerance if one network cable or port should go bad.
@@mormantu8561 I'd be surprised if the switches aren't full fabric switches, ie, they can switch 100% of the aggregate port bandwidth simultaneously, and that's almost all switches nowadays. That means that "keeping data off the network" is completely irrelevant, because the network as a whole will never be congested. Additionally, none of these arrays can saturate a 10gbps link, let alone two, so aside from whatever overhead SSH would cause you (shouldn't be much), aggregating the dual 10gbps links via LACP 801.11ad gives you the same performance, plus the benefit of fault tolerance.
@@jttech44 But we don't know anything about his networking infrastructure. And I agree fully with what you are telling. But without knowing the rest there's really not a lot we can remark on.
@@jttech44 what switches? One link _is direct_ = cable from one server to the other. If the cable somehow fails, or the NIC's (not very likely) he can reconfigure it to go via the LAN instead. That makes sense, shoving it onto the LAN does not make sense since there is a whole crapton of other traffic there, and it is simple to set up the way he did and eliminate all potential problems affecting the LAN. This way the backup is only detectable as a slight decrease in disk performance if at all, since the 10gbe to LAN is likely to be the limiting factor rather than the disks feeding data to two NIC's at the same time, but it would depend on the situation.
SysAdmin of a small to midsized local construction company here. Currently we have two ReadyNAS's onsite that are roughly mirrored (say roughly as our security footage is only on one NAS, which is fine as it comes from a different box). We then have a third NAS at our ISPs Colo Facility that the local data gets backed up to nightly. Our two physical servers which are mirrored are also backed up nightly to those NAS's and we also use some national cloud backup providers for that data as well. As for those ReadyNAS boxes, we were original customers to Infrant Technologies before Netgear bought them about. We bought replacements 4 years ago and while they've been pretty rock solid, when I replace these, it will probably be with TrueNAS on some barebones Dell or other boxes. Netgear seems to have ReadyNAS in maintenance only mode these days, which is sad to see.
For off-site backup, you can backup to backblaze b2 - they can even send you a drive array push the bulk of your data to, so that you don't have to upload it all over the internet. I recommend giving that a go :)
You might want to keep an eye on the temperatures of those HP (made by QLogic) 10Gb adapters. I recently put two of those same adapters in my homelab servers. One of them hit 108C and shut down! After a bit of reading, I discovered they have known overheating issues. I replaced them with Emulex-based cards and the thermals are much better (one running at 47C, the other at 57C as I write this, with no other changes to the configuration).
Watching this again is just making me remember writing little scripts to Cron. So nice having these appliances to do this stuff with now a days. Finally got all the bits for my first server, now to start saving for some 10gb networking hardware and a second server.
For my backups, I run Synologys Active Backup For Business, along with snapshots every 5 mins. AB4B is amazing, I can do a bare metal restore of any machine in my network, and along with the snapshots, and full shared folders backup, to the backup NAS, along with uploading to the cloud, I really have piece of mind of my data. Synology's AB4B is the best backup system I've come across in 25years of IT
Hi Jeff, fantastic reviewo on servers, not only entertaining but so information packed that its a super pleasure to listen to you and a great source of knowledge you share - not stops barred.
If you had checksum errors on a disk they were corrected by ZFS, and the counter isn't going up, you should just do a zpool clear. That will make the pool healthy again, but be aware that if errors appear again the disk should be replaced.
Even with the bad counters he should be fine. With the error rate of 1 per 10^14 bit (all consumer hdds), and a pool made of 10TB+ drives having the checksum errors after a full scrub is within drives specs, normal behaviour really. Hence the one needs zraid2 indeed.
Thank you so much for this tutorial. I have just set up a proper backup server for my main truenas and this worked without a hitch. I even set up the connection on a direct link between each server so it's not hogging my network. I've got about 60+TB to transfer and that will take awhile on my simple 1 gig network.
I get where you're coming from. I'm at 90TB of usable storage on my unRAID server with 60.6TB being used along with 2x14TB WD Ultrastar parity drives. I'm using a Netgear ReadyNAS RN516 with a 5-bay expansion to back up the unRAID server. Rsync is a wonderful thing. I'm also using Duplicati to send encrypted backups to Google Drive of important data.
Basically the same. Had a single FreeNAS with raidz-1 and one drive failed and that was scary 😱️ Had a media player in my living room, a PC with Ubuntu that could house two drives I had laying around so I immediately set it up with raid-0 and rsync:ed my NAS to that one. I've had two FreeNAS for a couple of years now, with the type of setup you describe in the video. So worth it, so relaxing. The plan for this summer is to create a new TrueNAS with server grade hardware and also upgrade the old ones to TrueNAS, once the new one is up and running. I have other things to do first, creating a couple of Raspberry Pi clusters so that I can, finally, decommission one of my rather old and very power hungry 1U servers.
To backup my truenas box off site I use backblaze B2 as you can set up a sync in truenas for it. The monthly cost for storage can get a little steep but it's cheaper than anywhere else I've found. The initial data upload sucks but not much to do about that with the amount of data you've got.
Hi Jeff, at 11:01 when you were replicating the second backup job you narrated that you don't need encryption but you left it on the default "encryption" option instead of clicking "no encryption". I don't know if you caught this later on, but just a heads up. Thanks for your videos as always.
I have a NAS clone of another NAS, then a same sized 3rd NAS to handle a temp-swap-over when one NAS needs a new set of bigger drives. Then a 4th NAS does an offsite backup. Its a pretty NASty set of chores when doing a file integrity check on all three locals.
Working for a cloud-provider, and being insanely lucky with local purchases (and just basically people calling me, asking if I need "stuff"...) I have a HP DL380 G9 with a 40 TB diskstorage, this is connected to two (yes, two) taperobots, providing a total of 6 drives and 80 tapes online. You can say I'm pretty much set on backups ;)
Jeff, I already did the first chenbro on your last video...Thanks the learning experience was great..cost me a fortune in drives..the 32GB mem upgrade cost more than the server as well...I still have a problem with the rails...cant find 'em...how u doing on the rail situation?.... I also setup the proxmox server March 30 im turning another year older and getting screwed by being a disabled vietnam war veteran. I gotta figure this out and get a job before I am sleeping in a cardboard box. Keep these videos coming so I am not homeless...Computers are the only thing giving me the will to live. I am not as sophisticated as you so I am just drinking a Bud...been giving that up to buy some more 4GB WD red cmr disks Thank You
I run 321, with non redundant disk sets, on a ~20TB dataset, and low bandwidth. The main server has good hardware and good quality disks but no redundancy. Second, there's a 486-looking beige box in an outbuilding stuffed full of older drives, also non-redundant. That machine wakes up on schedule, rsyncs data off of the main server over a 50Mbps wireless link, and shuts itself down. Third, there is a machine in a colo doing the same. To complicate it slightly, I have had machines stolen, and so I run (and did run) encrypted drives. As a minimum, I have the machines pull the keys from an unrelated server where I can delete them.
I've recently configured UrBackup on my LAN for backing up my desktop and laptops and I'm really enjoying it. It will take hot system image backups on windows for the system volume and also does an incremental backup of all your data
Very helpful. I was reminded shortly again (during end of February) about importance of backups, because my File Server failed. I thought that I'd lost most of my stuff, or, at least, everything since August of 2020 -- when I backed-up shortly before the last Hurricane in Houston, Texas. But, I do believe in backups, but I was just backing-up within the same server. YEAH, DUMB MISTAKE. However, I never thought that Windows would start damaging data, and I copied my Master Copy onto the Backup Copy. But, eventually, I got it figured out, but it scared me quite a lot. Never been in that situation, so I guess (sadly) that I will have to advance my skillset to setup something like you have. Not exactly, but more like another computer on network just for backups. Thanks for your videos. Helps show us the best (or better) practices than what a lot of us have, right now.
The off site backup is something I would advocate implicitly.... I've seen firsthand the devistating effects of a workplace fire and the data loss that nearly resulted in bankruptcy...even if it's a case of periodically physically taking a NAS off site and bringing it back when you need to run another backup is it's a must have ...we used to call them the deep sleep backups....I've worked in rural locations where they had only
Just getting started with TrueNAS, and this is similar to my local backup solution. I'm still working on the remote backup. My current plan is to create a third TrueNAS system as a backup, and take it to my father's house in another state after the initial backup is done. My rate of data generation is fairly slow, so the 6 Mb/s I get outbound will not be too much of a problem.
If you use BackBlaze B2 to backup offsite, the price is pretty good, and to not use up your bandwidth for 3 years trying to create the initial backup they can send you a device that you copy the initial data to and they will then use their internal network (there is a separate charge for this, but it is a one-time cost for that). You can't beat the bandwidth of a UPS truck.
lol... perfect timing on this video. I am just about to set up a second nas in my shed for "offsite" backup. I looked through all the settings last night and had a rough idea how to create my replication tasks. I was pretty close but this will definitely help! Thanks Jeff! I am loving proxmox so much too! I will never go back to VMware... Proxmox just works.
You seed your backup that is offsite and calculate a monthly different to off load, if the monthly difference is small enough that you can spread it across the month at night, you should be able to stay within your bandwidth limits. I was a IT Backup & Storage Administrator in a past job. (my suggestion is an over simplification but you get the idea i hope) Check out transferring delta difference and building a Synthetic full.
I do it the same way here at home. My one Proxmox server is backing up to my main FreeNAS server and that one is doing a replication to my backup FreeNAS server. Both FreeNAS server are using raidz1 pools so only after the 3rd drive fails I loose data. Because the Proxmox and main FreeNAS servers already use 60$ electricity per month I wrote a script that uses IPMI and the FreeNAS API to do the backup. Once a week the script boots up the backup NAS using IPMI, unlocks the encrypted pools, waits until all replication and scrub tasks are finished and shuts down the backup NAS again. That way the backup NAS only needs to run for 1-2 hours per week (or 20 hours if the monthly scrub is also running) and beside the electricity bill the drives should last longer.
This video was very educational. As a uni student living in a university dorm I can’t have more than one server and the university blocks VPNs so I can’t offsite backup to my parents house. So at the moment I encrypt my data on my server and back it up to Backblaze.
Hi Jeff, my 3.2.1 Backup Solution is essentially the same as yours... but with only 25 MBit upload (and a larger Dataset) :-) i am running a small Atom (10W) board with plenty of SATA connections and i am doing the backup to my brothers home using FreeNas/Truenas Cloud Sync Tasks... they are working more reliable in my case (especially you can set up the cloud sync task not to consume your full internet connection during the day) i made the initial replication (using a replication task) via my network, then i drove the BackupNAS to my brothers house and changed to cloud sync tasks. between both homes i use a VPN tunnel.
Personally, I'm not backing up my full NAS - much of it is recoverable through other means (re-ripping movies, re-downloading Linux ISOs, etc) so I'm running Duplicati. It's running on each desktop via SSH to my NAS (OMV with ZFS) on a backups dataset, then running Duplicati in Docker on the NAS itself and using it to back up critical datasets (photos, personal info, other machine backups, VMs, and Docker container configs) to upload to OneDrive on Office365. Works fairly well, and the only time I've tried to recover off-site data I was able to do so relatively easily, so I'll take it. My critical data comes in at under a terabyte though, so OneDrive won't work for everyone.
For your off site backup: If you happen to have a (very) roomy garden and if you don't live in an area that floods now and then: How about a garden shed (if you are allowed to build one) as far away from your house as possible? Installing power and networking to it is not that big of a deal. That way you would have to rely on your 50MBit uplink...
I use Duplicity, augmented by the "Duply" script, to Amazon S3 several states away from my house. That first upload definitely smarts -- our upload taps out at 10 MBits -- but it works well. (Edit: Duplicity uses GPG to encrypt the data, so make sure you have another copy of your key!!) If I were to set it up today, I'd probably use another service for storage, like Backblaze, or Wasabi. Not because S3 isn't fine, but because the Amazon monster really doesn't need more of my money.
I just came across you video. My setup is similar with a direct connection 10Gig card in server and backup. I've been unable to create a push replication task but pull works. The ssh connection gives an error of connection refused. Is there any reason I can't create the ssh connection in a push?
I also create pull-replication jobs on the backup server. Because doing a push from the main server would send a bunch of mails when the backup server is down, and mine is only powered on once a week. But make sure to monitor it regularly, because one could very easy overlook when the backup server is turned off (even accidentally) and thus not making any backups at all. And in your configuration when the backup server doesn't run for over a week you would need to fully replicate everything again because the latest snapshot on the backup server was already deleted on the main server and thus zfs can't calculate which data differs and shows an error about an unrelated dataset.
When you have shit internet speeds, then the best thing to do is simply to duplicate your backup server onto a remote location and just upload the snapshot differences during the night
That's also what I am doing. First upload took a while but now my off site backups are safe in Norway :p
3 ปีที่แล้ว +3
Or If you have physical access the offsite location and server, you make the first "big" replication on your local LAN, then move the server to the intended location and then start copying snapshots during the night xD
Hi Jeff. Great video as always. Obviously I don't have the amount of data you do. I use a Synology with cloudsync installed. So I use synology's backup agent on my laptop which backs up my data to the synology. Then every 24 hours all the data is backed up to a S3 Bucket. .
For my personal server, I have the DigitalOcean backup feature enabled and I have a simple rsync that runs daily and replicate the files in a incremental way to my home NAS.
I have a r710 as my local backup, then some USB HDs I store "critical" data on. Whenever I visit my mom I take the current ones and grab the ones I previously left with her. Then I have a B2 account that everything is backed up on ... pricey but I sleep better because of it. I'm only in the 12TB range right now so this wouldn't scale well but it works for me ... for now.
You need a backup buddy. A good solution if you both have enough free space for each other. Or if you setup an off-site box set it up with Zerotier either on both boxes or on your router as a network bridge for all you LAN devices. Then do the initial backup locally over Zerotier interface. They ship it off. Won't matter what public IP they give you.
i dont have this much important data, so i have just a Cloud Sync task to a Google Drive that runs once a week, since i have currently only 34GB of data to backup it works for me.
Great video man! I definitely do not have a backup system as intricate as yours but, for my newbie backup I am using a repurposed Dell T5400 workstation with two Intel Xeon E5430 CPU's, 64GBs of ECC memory, five 3TB HDD in Raid-Z2 with a 120GB SSD cache drive and 4 1Gb ports configured as a LAGG LACP running TrueNAS-12.0-U2.1. Not that great but enough to back up the data on my home network as well as teaching myself how to us Truenas with tutorials such as yours. Keep up the good work!
> I have only 15 megabit upload. Ouch, I feel you. My 23 TB online backup has been running for EIGHT months (rsync, restarted weekly to accomodate new changes). More often than not data is being added to the pool faster than it is being backed up. It is still running.
I'm lucky in that I don't generate lots of new data on a daily basis and if I do, it's generally a movie rip, which I don't include in my backup strategy. With only ~15Mb upload, I schedule an every 30 minute backup from my NAS to CrashPlanPro. Even then, I limit the bitrate to 256Kb. That single $10/month subscription means all my data from all my devices eventually makes its way to the cloud, because I have my laptops and PC's backing up to my NAS.
You could have installed proxmox backup server as a vm or on the proxmox host itself and save it on trueNAS again. PBS does deduplicated backups and are much faster than the built in full backup.
My Homeserver does not have VMs. I mostly just run Docker on NixOS on there. I have a Microsoft 365 Family subscription so my backup solution consists of 1 Onedrive user. I back up my few files to Onedrive over restic with rclone. Nothing fancy, but it works well for the few MB i need to back up. There is only the NixOS config that needs backing up and the Docker containers.
I have something set up basically the same way you have, though the backup server is in a different building so it cant be direct connected. The backup server was supposed to be the primary server, i was going to do one full replication to the new all flash server, then swap them. But for some reason no matter what i do, the pools seem to be read only, even if i remove the replication tasks, and set permissions recursively afterwards I've even reinstalled trunas and destroyed the array and reconfigured ther replication. I need to just bite the bullet, use the RAID0 working pool on my workstation for the backup and re-configure my arrays on the new server and do a manual copy of everything.
15:30 Why not backup to B2 and use a Fireball for the initial upload? Both at home and at work I use Arq Backup + B2, works pretty flawlessly for a few years now, and B2 is about as cheap as it gets for cloud storage (without gambling on an "unlimited" service).
How loud is this chassis model when filled? like, comfortable to work next to, or ok to have in a storage closet, or is it relegated to garage only/need some proper noise protection during idle?
Just a query you didnt mention any logging report, so does the system ping you if any back up fails for whatever reason or do you hope it faithfully works? Oh and the old question of how do you know the back ups are good, do you test them?
I'm still figuring out backups in zero trust environment. The dataset is encrypted, meaning if the server gets restarted, I have to manually mount the encrypted dataset first (this is what I want), but then I'm fighting syncthing and rsync to run those periodically but only if the dataset is mounted.
Jeff, is there any way of getting in touch with someone at Starlink? Your 50 Mb up is your limiting factor as you pointed out... Maybe talking to them to show a use case of something like an external backup product showcase to someone interested in contracting their services.
I only subscribed for the drinks until this video. I have a 48TB TrueNAS Dell R720dx. I'm planning to build another similar system to install at my son's house. He has 1Gb eithernet . I will have 1Gb soon. I'm planning to connect via Wireguard. The Snapshot scheme sounds promising for backing up.
**** Hi Jeff. Would you please show a video of your maintenance on the HDD showing an error and how you received notification of the error such as an automated generated email or message? Thank you for all your videos.
I use Duplicati on my NAS to backup stuff like music, pictures, etc to opendrive. I want to backup some of the videos I've got in my plex collection but I don't consider cloud storage to be practical cost-wise.
Why not offsite? What happens in case of catastrophe, such as fire? What's the cheapest solution for 40TB you have there 'in the cloud?' EC2? Backblaze? Homebrew is good, but this is your business! Good videos. Keep it up.
Thank you for all this great info! If I wanted to have my backup stored at another location, couldn’t I use this method? But I’d need to open a port on the backup server’s router and port forward to the truenas backup?
Me too mate, my Nextcloud is on my 3TB HDDs lol, although im in the process of setting up a back up system for that pool alone, should the worst happen, as Jeff says, RAID is not a back up lol
Obviously I wouldn't recommend trying but I'd bet this server would survive getting a drink spilled on it (provided it's unpowered at the time). Unlike normal drives the helium-filled drives are hermetically sealed so the platters shouldn't be ruined.
JEFF PLEASE HELP! I bought the chenbro nr12000 from watching your videos. Which thank you for the recommendation it's practically brand new. But I bought SAS drives knowing the motherboard supports SAS and I knew SAS has a different protocol but didnt realize they have different connectors. Will sata to SAS adaptors work that you know of or do have to get sata drives? Good video as always. Tech yes Brian got me back into pc gaming but you've introduced me to the ENTERPRISE.
I really would like to see your network setup, how do you separate your servers from normal PCs and other smart home appliances. I like your server setups and this would help me figure things out in my network.
I love that you're using starship registry numbers as the names for your pools. Although it seems somewhat ominous that you're using the registries for two ships that were both destroyed ;-)
Thank you very helpful. Question. Do you now have a 3-2-1 backup system in place? I have just a 3-2 backup for now. Using Unifi, Synology NAS, & 3 WD My Cloud EX2 Ultra backing up 3 different groups of data. No loss so far. By the end of 2025 I may need to add a Synology Extender with a new pool of drives. Also an off site Synology NAS as well.
Thank you for the entertaining and educating video as always! What should be the capacity of a back up server that is meant to be used for backing up another TrueNAS server that has a pool size of about 35 TB? Should the capacity be comparable, or can one get away with a smaller capacity due to some ZFS magic?
Have you thought about storing some of the data offline? (Especially old videos.) SSDs are cheap enough now that I treat them the way I did floppies 30 years ago. I use an IcyDock and have the SATA ports set as hot swappable so I can easily stick an SSD into the dock, drop some files on it, then pull it back out again and toss it on my shelf. Just a thought. :)
I have 1 Synology DS720+ that im using for backups with active backup for business to backup al my desktops/notebooks, and i have a Synology DS418 for the data. For that im using hyper backup to the DS720+
I like the way you open your video's and your videos are a source of inspiration for me. One thing i want to ask though is this, do you have solar panels on the roof of your house or do you generate your own energy in another way? I saw a video of you when you talked about an 800 watt PSU for a server and i thought that must be a huge source of energy consumption if you don't have one but several units of this kind in your house.
Backup process I use...Backing up my VMware es I host server with Nakivo to S3 Backup to Wasabi cloud. Solid reliable low economical solution. 5TB solution for 4 vms
Suggestion: if you have friends with their own large NASes perhaps they could be your offsite backup and vice versa. It would be mutually beneficial but it wouldn't really address the problem of your slow uplink.
Nice video Jeff. I have QNAS and I backup around 6TB to external hard disks. I think the 3-2-1 philosophy is near impossible for home lab'ers. Our data size has grown exponentially so it is not uncommon for home backups in the 10-30TB size range. So the only medium I can think of for holding that sort of data is disk. If you look at tape, it is very expensive (thousands). So that rules the '2' out of 3-2-1. Automating a backup offsite is tricky also. Unless you have a buddy who will let you host a backup and fast upload speeds, you're only other real option is cloud storage. Again expensive with recurring costs. What to do?
I've been using 40Mbit/s upstream for remote backups for over a year, I have no impact from this. The job needs about one and a half hours and is done while I sleep, so it'S really not a problem
Yeah, most but not all were unreliable. The only two 3TB drives (Seagate Barracuda) i ever bought are now about 9 years old and still working in my backup NAS. I guess i was really lucky there. They aren't constantly powered on though, so power-on hours sum up to
@@Charlie8913 Absolute terrible experiences with 3TByte HDDs from Seagate. Over the years I have collected an impressive amount of broken hard drives (100+....have been collecting for quite some time) and there are at least 5x a 3 TByte drive. Almost all of them died within 1 year. Only one lasted about 1,5 year. They left such a sour taste that I don't want them, even when offered for free. In my personal (n = 1) experience 1, 2, 4, 6, 8 and 10 TByte drives are far, far more reliable than those dreaded 3 TByte drives.
@@geroldmanders9742 I've never had a hard drive fail yet, but I don't put constant wear and tear on them and I buy slightly used enterprise drives with minimal hours on them. I only back up the data I truly need.
seeing the price of this server... i have 2 similar cpu/mobo setups in my rack but the years of enjoyment softens the blow for what they are now valued at
@craft computing, you will wanna do a zpool error clear to get rid of the unhealthy status, I just did the same with mine, I don't remember the exact process but i think it was in the shell u enter zpool clear *name of pool*, if you Google zpool clear it's all in an Oracle post
"What's your backup solution?" Prayers
So freaking relatable, bro
I liked this comment and I have full knowledge that those prayers do, literally, nothing
White wine.
Lol 😆 ✅️
*Thoughts and prayers
Good that Jeff isn't clumsy like Linus, Open glasses of Liquid near the Open server chassis.
And Jeff does a lot of hand talking
😂😂
Not switched on so 0 risk lol
1. Jeff checks Linus' systems before passing them on to others. Linus built a new system for a big guy from Oregon and because of Corona and as Jeff lives practically (!) next door to the guy, Jeff delivered it in person instead of Linus.
2. Linus has visited a data recovery company, so he is aware just how much he can screw up before actually losing data. LMG actually suffered a fire in the server room. There was something about a bolt. Not lightning but a regular bolt.
@@mikkelbreiler8916 The closest LMG came to losing all their data was a raid controller dying while he was prepping the new backup solution. There's a very interesting video about that. But yeah, the UPS frying is also something that happened.
There's absolutely no reason this video arrived just after the basement flooded, right? :-P
Those 2 events are 100% unrelated for sure
LOL
Insurance is sometimes nice!
Or the data center fire in france recently ... Coincidence for sure
@@gari5961 that one definitely wasn't the reason for me to automate my server backups for the server I got at that company in that city and luckily didn't burn down. Nope.
love how your data sets are named with the registry numbers of the Valiant and Defiant
My backup setup is very similar to yours. Pro-tip: change the theme on your backup truenas gui, so it's immediately obvious which system you're looking at.
I'm definitely looking forward to video covering how you decide to handle off-site backup as I'm in a similar boat.
Stumbled across your channel as I'm starting to build my own home lab, have to say, your videos are super informative and easy to follow, keep it up!
Suggest for off site, duplicate the server you just built, have it replicate the backup server on site initially then move it to a friend's house and ssh tunnel back
alternatively, pay for 1RU of space in a nearish DC or setup a "cloud" sync to somewhere like backblaze B2 or amazon glacier.
if your car is in a closed storage...same building...
get a ssd server for your car and a network port in the garage...
if something happens im sure i rush away with my car.
its not the best but also a solution to help the family with backups if you do a visit. :)
greatz from Germany
have a nice Day
Tape drive storage is an ol' reliable and you can store the tapes off site at a friend's house or a climate controlled storage unit (nearby or by mail for geographically separated backup) or a safety deposit box at a bank. Tapes aren't that big in physical volume each, but this is a case where sneakernet may still outstrip network duplication for offsite backup. Plus it's a true offline storage which would protect you against certain types of malware.
That said, enterprise grade tape drives can get expensive fast. A newer LTO7 or LTO8 drive is north of $3,000. An LTO5 drive on eBay is like $400 reliably, not counting the HBA or rack mounting, but the LTO5 tapes are like $15 each if you buy a 5 pack - though the newer varieties of tapes are cheaper per terabyte yet more expensive per tape. I'll use the old LTO5 tape format for the rest of this reply, since that's what I'm using.
Bear in mind you likely won't get the full capacity claimed on tapes if you write video files. That's because video files compress poorly to tape media and video formats already try really hard to be efficient plus when you add in a modest capacity loss if you enable LTFS and an LTO5 tape will hold a bit more than 1.2 TB. (Or 1.5 TB if you leave LTFS off, but then you need to do more tracking of what's on a tape rather than just mounting and reading the filesystem.) A tape rotation plan would be needed too if you plan to reuse tapes... Aaaaand then you're back on human scheduling.
A harder problem for a TH-cam channel or someone running a home lab than a general user, clearly.
Source: I have an LTO5 external SAS tape drive hooked up to my home/lab server. It's enough for me, but my RAID 5 data array is only 2.7 TB in usable size, so... Your problem is significantly bigger than mine. Though watching you gives me the itch to buy more servers. 😉
Some other people have suggested you can try network duplication to an offsite location, but maybe you can try with some rate limiting to try and share your uplink bandwidth with your other use cases? The initial sync will still be crazy long (or even get longer), but it might not necessarily cause you daily discomfort while it runs in the background. You might also need to break that rsync job into batches. 😬
As an lto6 user, lto7+ for for 40tb+ arrays is my vote. Especially if using a simple tape drive, and not an autoload library. I'd love to go that route, but would probably need some friends willing to split the cost of the drive, and only make periodic tape backups
I calculated rhat for this exact use case (off site raw video backup) AWS Glacier is the cheapest option (cheaper than buying your own lto drive and managing everything). You'll need these Backups close to never. I'm just not sure if AWS Glacier stores the data redundant. I think I wouldn't even use on site backup If the videos are years old if Glacier is redundant.
If both TrueNAS machines have the dual 10gig SFP+ cards in them you could make use of LAGG to increase the throughput between the servers. Since the traffic is being handled on the same switch it's not going to affect your network that much. Plus added bonus of fault tolerance if one network cable or port should go bad.
As he states in the video. One port is used for direct connection to one another.
@@mormantu8561 I'd be surprised if the switches aren't full fabric switches, ie, they can switch 100% of the aggregate port bandwidth simultaneously, and that's almost all switches nowadays. That means that "keeping data off the network" is completely irrelevant, because the network as a whole will never be congested. Additionally, none of these arrays can saturate a 10gbps link, let alone two, so aside from whatever overhead SSH would cause you (shouldn't be much), aggregating the dual 10gbps links via LACP 801.11ad gives you the same performance, plus the benefit of fault tolerance.
@@jttech44 But we don't know anything about his networking infrastructure. And I agree fully with what you are telling. But without knowing the rest there's really not a lot we can remark on.
@@jttech44 what switches? One link _is direct_ = cable from one server to the other. If the cable somehow fails, or the NIC's (not very likely) he can reconfigure it to go via the LAN instead. That makes sense, shoving it onto the LAN does not make sense since there is a whole crapton of other traffic there, and it is simple to set up the way he did and eliminate all potential problems affecting the LAN. This way the backup is only detectable as a slight decrease in disk performance if at all, since the 10gbe to LAN is likely to be the limiting factor rather than the disks feeding data to two NIC's at the same time, but it would depend on the situation.
SysAdmin of a small to midsized local construction company here. Currently we have two ReadyNAS's onsite that are roughly mirrored (say roughly as our security footage is only on one NAS, which is fine as it comes from a different box). We then have a third NAS at our ISPs Colo Facility that the local data gets backed up to nightly. Our two physical servers which are mirrored are also backed up nightly to those NAS's and we also use some national cloud backup providers for that data as well.
As for those ReadyNAS boxes, we were original customers to Infrant Technologies before Netgear bought them about. We bought replacements 4 years ago and while they've been pretty rock solid, when I replace these, it will probably be with TrueNAS on some barebones Dell or other boxes. Netgear seems to have ReadyNAS in maintenance only mode these days, which is sad to see.
For off-site backup, you can backup to backblaze b2 - they can even send you a drive array push the bulk of your data to, so that you don't have to upload it all over the internet. I recommend giving that a go :)
how much tho? i thought about using it and my array is 2 TB of 12 TB and growing
@@someguy9321 $5 per month per TB. (USD)
You might want to keep an eye on the temperatures of those HP (made by QLogic) 10Gb adapters. I recently put two of those same adapters in my homelab servers. One of them hit 108C and shut down! After a bit of reading, I discovered they have known overheating issues. I replaced them with Emulex-based cards and the thermals are much better (one running at 47C, the other at 57C as I write this, with no other changes to the configuration).
Watching this again is just making me remember writing little scripts to Cron. So nice having these appliances to do this stuff with now a days. Finally got all the bits for my first server, now to start saving for some 10gb networking hardware and a second server.
I picked up 2 Chenbro's because of you....one for a new plex build and one just because I wanted to tinker.....damn you and your amazing channel
For my backups, I run Synologys Active Backup For Business, along with snapshots every 5 mins. AB4B is amazing, I can do a bare metal restore of any machine in my network, and along with the snapshots, and full shared folders backup, to the backup NAS, along with uploading to the cloud, I really have piece of mind of my data. Synology's AB4B is the best backup system I've come across in 25years of IT
Hi Jeff, fantastic reviewo on servers, not only entertaining but so information packed that its a super pleasure to listen to you and a great source of knowledge you share - not stops barred.
How about using AWS for your 3-2-1 offsite solution? You can get the Snowball to make the first transfer and then just sync
I think this guy likes servers
you spelled licks wrong
If you had checksum errors on a disk they were corrected by ZFS, and the counter isn't going up, you should just do a zpool clear. That will make the pool healthy again, but be aware that if errors appear again the disk should be replaced.
Even with the bad counters he should be fine. With the error rate of 1 per 10^14 bit (all consumer hdds), and a pool made of 10TB+ drives having the checksum errors after a full scrub is within drives specs, normal behaviour really. Hence the one needs zraid2 indeed.
Thank you so much for this tutorial. I have just set up a proper backup server for my main truenas and this worked without a hitch. I even set up the connection on a direct link between each server so it's not hogging my network. I've got about 60+TB to transfer and that will take awhile on my simple 1 gig network.
I get where you're coming from. I'm at 90TB of usable storage on my unRAID server with 60.6TB being used along with 2x14TB WD Ultrastar parity drives.
I'm using a Netgear ReadyNAS RN516 with a 5-bay expansion to back up the unRAID server.
Rsync is a wonderful thing. I'm also using Duplicati to send encrypted backups to Google Drive of important data.
Basically the same. Had a single FreeNAS with raidz-1 and one drive failed and that was scary 😱️
Had a media player in my living room, a PC with Ubuntu that could house two drives I had laying around so I immediately set it up with raid-0 and rsync:ed my NAS to that one.
I've had two FreeNAS for a couple of years now, with the type of setup you describe in the video. So worth it, so relaxing.
The plan for this summer is to create a new TrueNAS with server grade hardware and also upgrade the old ones to TrueNAS, once the new one is up and running.
I have other things to do first, creating a couple of Raspberry Pi clusters so that I can, finally, decommission one of my rather old and very power hungry 1U servers.
To backup my truenas box off site I use backblaze B2 as you can set up a sync in truenas for it. The monthly cost for storage can get a little steep but it's cheaper than anywhere else I've found. The initial data upload sucks but not much to do about that with the amount of data you've got.
Hi Jeff, at 11:01 when you were replicating the second backup job you narrated that you don't need encryption but you left it on the default "encryption" option instead of clicking "no encryption". I don't know if you caught this later on, but just a heads up. Thanks for your videos as always.
Yep, I noticed that as well. D'oh!
@@KaneHusky That was so he could show his wife the video for plausible deniability.
I have a NAS clone of another NAS, then a same sized 3rd NAS to handle a temp-swap-over when one NAS needs a new set of bigger drives. Then a 4th NAS does an offsite backup. Its a pretty NASty set of chores when doing a file integrity check on all three locals.
Working for a cloud-provider, and being insanely lucky with local purchases (and just basically people calling me, asking if I need "stuff"...)
I have a HP DL380 G9 with a 40 TB diskstorage, this is connected to two (yes, two) taperobots, providing a total of 6 drives and 80 tapes online.
You can say I'm pretty much set on backups ;)
Jeff, I already did the first chenbro on your last video...Thanks the learning experience was great..cost me a fortune in drives..the 32GB mem upgrade cost more than the server as well...I still have a problem with the rails...cant find 'em...how u doing on the rail situation?....
I also setup the proxmox server
March 30 im turning another year older and getting screwed by being a disabled vietnam war veteran.
I gotta figure this out and get a job before I am sleeping in a cardboard box.
Keep these videos coming so I am not homeless...Computers are the only thing giving me the will to live.
I am not as sophisticated as you so I am just drinking a Bud...been giving that up to buy some more 4GB WD red cmr disks
Thank You
This BUDS for you!
I run 321, with non redundant disk sets, on a ~20TB dataset, and low bandwidth. The main server has good hardware and good quality disks but no redundancy. Second, there's a 486-looking beige box in an outbuilding stuffed full of older drives, also non-redundant. That machine wakes up on schedule, rsyncs data off of the main server over a 50Mbps wireless link, and shuts itself down. Third, there is a machine in a colo doing the same. To complicate it slightly, I have had machines stolen, and so I run (and did run) encrypted drives. As a minimum, I have the machines pull the keys from an unrelated server where I can delete them.
I've recently configured UrBackup on my LAN for backing up my desktop and laptops and I'm really enjoying it. It will take hot system image backups on windows for the system volume and also does an incremental backup of all your data
Very helpful. I was reminded shortly again (during end of February) about importance of backups, because my File Server failed. I thought that I'd lost most of my stuff, or, at least, everything since August of 2020 -- when I backed-up shortly before the last Hurricane in Houston, Texas. But, I do believe in backups, but I was just backing-up within the same server. YEAH, DUMB MISTAKE. However, I never thought that Windows would start damaging data, and I copied my Master Copy onto the Backup Copy. But, eventually, I got it figured out, but it scared me quite a lot. Never been in that situation, so I guess (sadly) that I will have to advance my skillset to setup something like you have. Not exactly, but more like another computer on network just for backups. Thanks for your videos. Helps show us the best (or better) practices than what a lot of us have, right now.
The off site backup is something I would advocate implicitly.... I've seen firsthand the devistating effects of a workplace fire and the data loss that nearly resulted in bankruptcy...even if it's a case of periodically physically taking a NAS off site and bringing it back when you need to run another backup is it's a must have ...we used to call them the deep sleep backups....I've worked in rural locations where they had only
Just getting started with TrueNAS, and this is similar to my local backup solution.
I'm still working on the remote backup. My current plan is to create a third TrueNAS system as a backup, and take it to my father's house in another state after the initial backup is done.
My rate of data generation is fairly slow, so the 6 Mb/s I get outbound will not be too much of a problem.
If you use BackBlaze B2 to backup offsite, the price is pretty good, and to not use up your bandwidth for 3 years trying to create the initial backup they can send you a device that you copy the initial data to and they will then use their internal network (there is a separate charge for this, but it is a one-time cost for that). You can't beat the bandwidth of a UPS truck.
lol... perfect timing on this video. I am just about to set up a second nas in my shed for "offsite" backup. I looked through all the settings last night and had a rough idea how to create my replication tasks. I was pretty close but this will definitely help! Thanks Jeff! I am loving proxmox so much too! I will never go back to VMware... Proxmox just works.
You seed your backup that is offsite and calculate a monthly different to off load, if the monthly difference is small enough that you can spread it across the month at night, you should be able to stay within your bandwidth limits. I was a IT Backup & Storage Administrator in a past job. (my suggestion is an over simplification but you get the idea i hope)
Check out transferring delta difference and building a Synthetic full.
I do it the same way here at home. My one Proxmox server is backing up to my main FreeNAS server and that one is doing a replication to my backup FreeNAS server. Both FreeNAS server are using raidz1 pools so only after the 3rd drive fails I loose data. Because the Proxmox and main FreeNAS servers already use 60$ electricity per month I wrote a script that uses IPMI and the FreeNAS API to do the backup. Once a week the script boots up the backup NAS using IPMI, unlocks the encrypted pools, waits until all replication and scrub tasks are finished and shuts down the backup NAS again. That way the backup NAS only needs to run for 1-2 hours per week (or 20 hours if the monthly scrub is also running) and beside the electricity bill the drives should last longer.
This video was very educational. As a uni student living in a university dorm I can’t have more than one server and the university blocks VPNs so I can’t offsite backup to my parents house. So at the moment I encrypt my data on my server and back it up to Backblaze.
Hi Jeff,
my 3.2.1 Backup Solution is essentially the same as yours... but with only 25 MBit upload (and a larger Dataset) :-)
i am running a small Atom (10W) board with plenty of SATA connections and i am doing the backup to my brothers home using FreeNas/Truenas Cloud Sync Tasks... they are working more reliable in my case (especially you can set up the cloud sync task not to consume your full internet connection during the day)
i made the initial replication (using a replication task) via my network, then i drove the BackupNAS to my brothers house and changed to cloud sync tasks. between both homes i use a VPN tunnel.
Thank you for sharing. I would like to have seen how things were setup on the TrueNAS side. Eg. how to configure TN as the backup target.
Did you turn off encryption for all your transfers? I think you might have missed out that step after the first one.
Could you do a video about properly setting up a web server?
Personally, I'm not backing up my full NAS - much of it is recoverable through other means (re-ripping movies, re-downloading Linux ISOs, etc) so I'm running Duplicati. It's running on each desktop via SSH to my NAS (OMV with ZFS) on a backups dataset, then running Duplicati in Docker on the NAS itself and using it to back up critical datasets (photos, personal info, other machine backups, VMs, and Docker container configs) to upload to OneDrive on Office365. Works fairly well, and the only time I've tried to recover off-site data I was able to do so relatively easily, so I'll take it. My critical data comes in at under a terabyte though, so OneDrive won't work for everyone.
For your off site backup: If you happen to have a (very) roomy garden and if you don't live in an area that floods now and then: How about a garden shed (if you are allowed to build one) as far away from your house as possible?
Installing power and networking to it is not that big of a deal.
That way you would have to rely on your 50MBit uplink...
Hello From New Zealand. Love your videos. Just asking whats your power bill like?
I use Duplicity, augmented by the "Duply" script, to Amazon S3 several states away from my house. That first upload definitely smarts -- our upload taps out at 10 MBits -- but it works well. (Edit: Duplicity uses GPG to encrypt the data, so make sure you have another copy of your key!!) If I were to set it up today, I'd probably use another service for storage, like Backblaze, or Wasabi. Not because S3 isn't fine, but because the Amazon monster really doesn't need more of my money.
I just came across you video. My setup is similar with a direct connection 10Gig card in server and backup. I've been unable to create a push replication task but pull works. The ssh connection gives an error of connection refused. Is there any reason I can't create the ssh connection in a push?
**** Jeff, sorry I forgot something. Perhaps you might like to discuss using Linode as your offsite backup strategy!
I also create pull-replication jobs on the backup server. Because doing a push from the main server would send a bunch of mails when the backup server is down, and mine is only powered on once a week. But make sure to monitor it regularly, because one could very easy overlook when the backup server is turned off (even accidentally) and thus not making any backups at all. And in your configuration when the backup server doesn't run for over a week you would need to fully replicate everything again because the latest snapshot on the backup server was already deleted on the main server and thus zfs can't calculate which data differs and shows an error about an unrelated dataset.
When you have shit internet speeds, then the best thing to do is simply to duplicate your backup server onto a remote location and just upload the snapshot differences during the night
That's also what I am doing. First upload took a while but now my off site backups are safe in Norway :p
Or If you have physical access the offsite location and server, you make the first "big" replication on your local LAN, then move the server to the intended location and then start copying snapshots during the night xD
I'm thinking that Backblaze offers that as an option for the initial backup.
Hi Jeff. Great video as always. Obviously I don't have the amount of data you do. I use a Synology with cloudsync installed. So I use synology's backup agent on my laptop which backs up my data to the synology. Then every 24 hours all the data is backed up to a S3 Bucket. .
For my personal server, I have the DigitalOcean backup feature enabled and I have a simple rsync that runs daily and replicate the files in a incremental way to my home NAS.
I have a r710 as my local backup, then some USB HDs I store "critical" data on. Whenever I visit my mom I take the current ones and grab the ones I previously left with her. Then I have a B2 account that everything is backed up on ... pricey but I sleep better because of it. I'm only in the 12TB range right now so this wouldn't scale well but it works for me ... for now.
I was really hoping that you were putting that nr12000 in a DC for an off-site backup
Once your backup is complete, take it out of the rack an drive it to a nearby datacenter for them to host it. In case of fire or such.
You need a backup buddy.
A good solution if you both have enough free space for each other.
Or if you setup an off-site box set it up with Zerotier either on both boxes or on your router as a network bridge for all you LAN devices.
Then do the initial backup locally over Zerotier interface. They ship it off.
Won't matter what public IP they give you.
i dont have this much important data, so i have just a Cloud Sync task to a Google Drive that runs once a week, since i have currently only 34GB of data to backup it works for me.
Great video man! I definitely do not have a backup system as intricate as yours but, for my newbie backup I am using a repurposed Dell T5400 workstation with two Intel Xeon E5430 CPU's, 64GBs of ECC memory, five 3TB HDD in Raid-Z2 with a 120GB SSD cache drive and 4 1Gb ports configured as a LAGG LACP running TrueNAS-12.0-U2.1. Not that great but enough to back up the data on my home network as well as teaching myself how to us Truenas with tutorials such as yours. Keep up the good work!
Cheers to another informative #TrueNAS video, Craft!
> I have only 15 megabit upload.
Ouch, I feel you. My 23 TB online backup has been running for EIGHT months (rsync, restarted weekly to accomodate new changes). More often than not data is being added to the pool faster than it is being backed up. It is still running.
Can you explain further? 1:56 .
Why are 3tb hdd's unreliable?
I've just had a second HDD fail since 2017, and would like to know why :)
When the youtuber has more expensive equipment, redundancy and data integrity , than the public sector organization i work for
Wouldn't borgbackup be better suited so you can backup all your servers regardless of the distribution and easily to offsite over ssh.
I'm lucky in that I don't generate lots of new data on a daily basis and if I do, it's generally a movie rip, which I don't include in my backup strategy.
With only ~15Mb upload, I schedule an every 30 minute backup from my NAS to CrashPlanPro. Even then, I limit the bitrate to 256Kb.
That single $10/month subscription means all my data from all my devices eventually makes its way to the cloud, because I have my laptops and PC's backing up to my NAS.
You could have installed proxmox backup server as a vm or on the proxmox host itself and save it on trueNAS again. PBS does deduplicated backups and are much faster than the built in full backup.
My Homeserver does not have VMs. I mostly just run Docker on NixOS on there. I have a Microsoft 365 Family subscription so my backup solution consists of 1 Onedrive user. I back up my few files to Onedrive over restic with rclone. Nothing fancy, but it works well for the few MB i need to back up. There is only the NixOS config that needs backing up and the Docker containers.
I have something set up basically the same way you have, though the backup server is in a different building so it cant be direct connected. The backup server was supposed to be the primary server, i was going to do one full replication to the new all flash server, then swap them. But for some reason no matter what i do, the pools seem to be read only, even if i remove the replication tasks, and set permissions recursively afterwards I've even reinstalled trunas and destroyed the array and reconfigured ther replication. I need to just bite the bullet, use the RAID0 working pool on my workstation for the backup and re-configure my arrays on the new server and do a manual copy of everything.
15:30 Why not backup to B2 and use a Fireball for the initial upload?
Both at home and at work I use Arq Backup + B2, works pretty flawlessly for a few years now, and B2 is about as cheap as it gets for cloud storage (without gambling on an "unlimited" service).
I can recommend the Proxmox Backup solution. Works a treat... Great video again Jeff. Just wish I could do gin...
OMV - OPEN MEDIA VAULT... I USE AND IT HAS BEEN AWESOME FOR 3BYRS NOW. CAN RUN ON SBC, SO EFFICIENT!
This is a fantastic video - thanks for your hard work! Question: how do you then restore a snapshot that's been transferred to the backup server?
How loud is this chassis model when filled? like, comfortable to work next to, or ok to have in a storage closet, or is it relegated to garage only/need some proper noise protection during idle?
Just a query you didnt mention any logging report, so does the system ping you if any back up fails for whatever reason or do you hope it faithfully works?
Oh and the old question of how do you know the back ups are good, do you test them?
The start of the video :)
Gah, if Rambo had jumped on the desk...
I'm still figuring out backups in zero trust environment. The dataset is encrypted, meaning if the server gets restarted, I have to manually mount the encrypted dataset first (this is what I want), but then I'm fighting syncthing and rsync to run those periodically but only if the dataset is mounted.
Jeff, is there any way of getting in touch with someone at Starlink? Your 50 Mb up is your limiting factor as you pointed out... Maybe talking to them to show a use case of something like an external backup product showcase to someone interested in contracting their services.
I only subscribed for the drinks until this video. I have a 48TB TrueNAS Dell R720dx. I'm planning to build another similar system to install at my son's house. He has 1Gb eithernet . I will have 1Gb soon. I'm planning to connect via Wireguard. The Snapshot scheme sounds promising for backing up.
Pfsense just removed Wireguard due to critical security issues. You might wanna wait on Wireguard.
**** Hi Jeff. Would you please show a video of your maintenance on the HDD showing an error and how you received notification of the error such as an automated generated email or message? Thank you for all your videos.
I use Duplicati on my NAS to backup stuff like music, pictures, etc to opendrive. I want to backup some of the videos I've got in my plex collection but I don't consider cloud storage to be practical cost-wise.
Why not offsite? What happens in case of catastrophe, such as fire? What's the cheapest solution for 40TB you have there 'in the cloud?' EC2? Backblaze? Homebrew is good, but this is your business! Good videos. Keep it up.
Thank you for all this great info! If I wanted to have my backup stored at another location, couldn’t I use this method? But I’d need to open a port on the backup server’s router and port forward to the truenas backup?
"3tb drives are inherently unreliable"
*eyes his NAS at home, and gulps*
i know, right, one of mine has 9 3TB drives ranging from 6 to 7.5 years of age.
Me too mate, my Nextcloud is on my 3TB HDDs lol, although im in the process of setting up a back up system for that pool alone, should the worst happen, as Jeff says, RAID is not a back up lol
Obviously I wouldn't recommend trying but I'd bet this server would survive getting a drink spilled on it (provided it's unpowered at the time). Unlike normal drives the helium-filled drives are hermetically sealed so the platters shouldn't be ruined.
JEFF PLEASE HELP! I bought the chenbro nr12000 from watching your videos. Which thank you for the recommendation it's practically brand new. But I bought SAS drives knowing the motherboard supports SAS and I knew SAS has a different protocol but didnt realize they have different connectors. Will sata to SAS adaptors work that you know of or do have to get sata drives? Good video as always. Tech yes Brian got me back into pc gaming but you've introduced me to the ENTERPRISE.
I really would like to see your network setup, how do you separate your servers from normal PCs and other smart home appliances. I like your server setups and this would help me figure things out in my network.
I love that you're using starship registry numbers as the names for your pools. Although it seems somewhat ominous that you're using the registries for two ships that were both destroyed ;-)
Thank you very helpful. Question. Do you now have a 3-2-1 backup system in place?
I have just a 3-2 backup for now. Using Unifi, Synology NAS, & 3 WD My Cloud EX2 Ultra backing up 3 different groups of data. No loss so far. By the end of 2025 I may need to add a Synology Extender with a new pool of drives. Also an off site Synology NAS as well.
Thank you for the entertaining and educating video as always! What should be the capacity of a back up server that is meant to be used for backing up another TrueNAS server that has a pool size of about 35 TB? Should the capacity be comparable, or can one get away with a smaller capacity due to some ZFS magic?
Have you thought about storing some of the data offline? (Especially old videos.) SSDs are cheap enough now that I treat them the way I did floppies 30 years ago. I use an IcyDock and have the SATA ports set as hot swappable so I can easily stick an SSD into the dock, drop some files on it, then pull it back out again and toss it on my shelf. Just a thought. :)
I have 1 Synology DS720+ that im using for backups with active backup for business to backup al my desktops/notebooks, and i have a Synology DS418 for the data. For that im using hyper backup to the DS720+
I like the way you open your video's and your videos are a source of inspiration for me. One thing i want to ask though is this, do you have solar panels on the roof of your house or do you generate your own energy in another way? I saw a video of you when you talked about an 800 watt PSU for a server and i thought that must be a huge source of energy consumption if you don't have one but several units of this kind in your house.
Backup process I use...Backing up my VMware es I host server with Nakivo to S3 Backup to Wasabi cloud. Solid reliable low economical solution. 5TB solution for 4 vms
Yeah backup solution but could you make video with restore ? The only value of backup is can you restore ?. Thank's again.
i moved to LTO, it's fun, i got good deal on second hand LTO-5 and 6.
Drink after backup replication - made my day :)
Suggestion: if you have friends with their own large NASes perhaps they could be your offsite backup and vice versa. It would be mutually beneficial but it wouldn't really address the problem of your slow uplink.
Nice video Jeff. I have QNAS and I backup around 6TB to external hard disks. I think the 3-2-1 philosophy is near impossible for home lab'ers. Our data size has grown exponentially so it is not uncommon for home backups in the 10-30TB size range. So the only medium I can think of for holding that sort of data is disk. If you look at tape, it is very expensive (thousands). So that rules the '2' out of 3-2-1. Automating a backup offsite is tricky also. Unless you have a buddy who will let you host a backup and fast upload speeds, you're only other real option is cloud storage. Again expensive with recurring costs. What to do?
wasabi as offsite backup site with 6$/TB is good alternative.
I've been using 40Mbit/s upstream for remote backups for over a year, I have no impact from this.
The job needs about one and a half hours and is done while I sleep, so it'S really not a problem
Jeff: 3TB drives are inherently unreliable
me with my main nas made of 3TBx4 drives: 😎👌🏻
Yeah, most but not all were unreliable. The only two 3TB drives (Seagate Barracuda) i ever bought are now about 9 years old and still working in my backup NAS. I guess i was really lucky there. They aren't constantly powered on though, so power-on hours sum up to
I think he said “SAS” disks?
My 2x 3TB SATA disks run just fine. Purple series.
@@Charlie8913 Absolute terrible experiences with 3TByte HDDs from Seagate. Over the years I have collected an impressive amount of broken hard drives (100+....have been collecting for quite some time) and there are at least 5x a 3 TByte drive. Almost all of them died within 1 year. Only one lasted about 1,5 year. They left such a sour taste that I don't want them, even when offered for free.
In my personal (n = 1) experience 1, 2, 4, 6, 8 and 10 TByte drives are far, far more reliable than those dreaded 3 TByte drives.
@@geroldmanders9742 I've never had a hard drive fail yet, but I don't put constant wear and tear on them and I buy slightly used enterprise drives with minimal hours on them. I only back up the data I truly need.
seeing the price of this server... i have 2 similar cpu/mobo setups in my rack but the years of enjoyment softens the blow for what they are now valued at
Hey Jeff, after watching this video again, I just noticed your "coaster"! 3.5" FLOPPY DISK! Love it!!! LMAO!!!
@craft computing, you will wanna do a zpool error clear to get rid of the unhealthy status, I just did the same with mine, I don't remember the exact process but i think it was in the shell u enter zpool clear *name of pool*, if you Google zpool clear it's all in an Oracle post