An old PC makes a great backup server, particularly if it supports wake-on-lan.
I have a two-node proxmox cluster, one node of which spends most of its time asleep but is woken to perform backups. The backup node runs a truenas scale VM with a disk passed through to an 8TB hdd.
A backup hook script on the master wakes the backup node (wake-on-lan), waits, mounts the directory, and enables a proxmox storage unit that is then used to back up all my VMs and CTs. At the end it puts the backup server back to sleep.
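For reference, the wake-on-lan step needs nothing more than a magic packet; here is a minimal Python sketch a hook script could call (the MAC address is a placeholder):

```python
# Minimal wake-on-lan sketch: a magic packet is 6 bytes of 0xFF followed
# by the target MAC repeated 16 times, sent as a UDP broadcast (port 9 here).
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the backup node
```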
A cron task on the master proxmox node wakes the proxmox backup server at a certain time. Replications are scheduled to run a little time later. Then a cron job sleeps the backup server.
A cron task on a truenas VM on the master proxmox node wakes the backup server. A scheduled backup on that truenas comes along a little later and does a zfs incremental backup to the backup truenas. A cron task then sleeps the backup server.
For completeness, a cron job running in a VM on the master proxmox node does an incremental backup of the proxmox system itself to the master truenas VM. This backup is also copied to the backup server by one of the backups above.
As a result of all the above, the backup server is awake for a couple of hours a night.
I also have another 8TB hard disk in another PC that contains a much earlier iteration of the above. I deliberately keep it in case I want to go back to where I was before I started with proxmox.
I have two reasons for wanting to sleep my backup server:
1. To save power. OK, it's only about £100 a year.
2. My backup server is in my summer-house, which I also use to record audio-book narration (it comes with lots of hanging duvets and hot-and-cold running squirrels, but that's another story). I can't have anything with a fan or a rotating hard disk running when I'm recording. At least I can sleep a PC. I'm having less success with the squirrels.
If the above sounds hopelessly over-the-top, it probably is. But I have two reasons for doing this:
1. I'm retired. I get to play with toys, OK?
2. Before I retired, I was a software engineer. I still have indelibly ingrained in my psyche a memory of the time 40+ years ago when I did a disk copy of the only disk (8" floppy) in existence containing an important demo, half an hour before the potential customer arrived to see it - except I did the disk copy in the wrong direction and overwrote the demo disk from the blank disk. I can't remember exactly how we covered that, but I can remember that I was not software engineer du jour. The moral of this story is that you can never have too many backups, but you can have too few.
Lovin' your work. I was working on a project that was almost identical to yours here and I needed some guidance with the special disks. Thanks!
Another great video, and I enjoyed seeing the whole story, from assembly to software setup. Lol, I actually followed your suggestion and got a Terramaster F2-223 as a backup server too (mostly for 4 other Proxmox hosts and a Windows client). While I already have a NAS system for larger files, on learning about the Terramaster I couldn't resist the idea - got to back up the backups.
That should be able to run PBS easily as well
Incredible work, very nice! 😍
Just one improvement needed: Please use a low-noise keyboard, it’s insanely loud 😂
I have the Gen8 also, with 16 gigs of RAM and a dual-port SFP+ card in it. Runs TrueNAS. In fact I have 2 of them: one is here at my house as a file server, the other is in another city at a friend's house and receives replication from the local one :) Wicked solid boxes.
It's definitely a great chassis for storage, not particularly useful for much else though
You can also boot from the internal ODD SATA port if you enable B120i and then in SSA create a single drive raid-0 for port-5 and set it as primary and backup bootable. This works with or without a separate HBA (I tried it with HPE P222).
I also read reports that fan noise is an issue when not using an HPE HBA, because non-HP/HPE HBAs don't communicate with iLO, so fan optimization falls into a sort of crippled mode. When using the P222 + FBWC, my fans were quiet most of the time; the exceptions were when booting or when the server was under heavy drive usage.
The fans seem to settle down once Linux boots, it's really only an issue during the many minutes of booting.
As to the boot drive, I still have two SATA ports from the SFF-8087 on the motherboard which are bootable as well, but the uSD card is perfectly adequate for PBS and PBS stores virtually everything in the backup datastore anyway, so recovering from a failed SD card isn't a big task.
I was waiting for your LTO5 tape drive and was happy when you carted that out. You can get a proper 3-2-1 solution. I set up a tape pool called tapepool ;-). Proxmox Backup Server makes it really easy.
Yeah, Proxmox is one of the only open-source systems I found that supports tape easily / natively
Good video. I have an HP Microserver N40L (the model before yours I think). They are fantastic devices and it's a shame that HP don't make a cost effective equivalent nowadays. I use mine as an Unraid NAS. To mitigate the power use I leave it shutdown most of the time and bring it up with WoL when needed. Then shut it down again.
Working on a good way to shut down when backups and scrubs and verifies are done; starting up is the easy part.
@@apalrdsadventures I have used Ansible to do something similar. I basically have a playbook that will start the NAS, enable the storage in proxmox, then run the backup command. Once completed, it shuts down the NAS. You could then use cron to run this; I am using Semaphore for Ansible. I am using a QNAP NAS as NFS storage.
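For anyone without Ansible, the same wake → enable → backup → shutdown flow can be scripted directly. A rough Python sketch, assuming the common wakeonlan CLI is installed; the MAC, storage name nas-nfs, and host nas.lan are all hypothetical:

```python
# Sketch of the flow described above: wake the NAS, enable the PVE storage,
# back up everything, then disable the storage and power the NAS off again.
import subprocess, time

subprocess.run(["wakeonlan", "aa:bb:cc:dd:ee:ff"], check=True)   # placeholder MAC
time.sleep(120)                                                  # wait for boot + NFS export
subprocess.run(["pvesm", "set", "nas-nfs", "--disable", "0"], check=True)
subprocess.run(["vzdump", "--all", "--storage", "nas-nfs", "--mode", "snapshot"], check=True)
subprocess.run(["pvesm", "set", "nas-nfs", "--disable", "1"], check=True)
subprocess.run(["ssh", "root@nas.lan", "poweroff"], check=True)  # shut the NAS down
```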
I've got a Microserver Gen8 which I took out of service at home due to power use. Did you find a good way to hibernate it when not in use?
This gave me a starting homelab migration path idea. Could start with a single NAS server, and then as one migrates and builds up a ceph cluster, one could set up the original NAS as a backup server. I wanna make a homelab and decided on making a ceph cluster, but I have honestly been too lazy to do it so far. So I may build a single NAS server and run it as a hypervisor.
Great work.
The fingerprint is also available from the UI. Don't forget the backup verification and cleaning. Update: I see others also mentioned these.
I do have backup verification set to run as well, just didn't show that in the video.
@@apalrdsadventures Roger that :) Seems you have triggered the nerd herd.
Yeah, I'm used to it lol
I agree with you about those developers...
Great video. Very informative. Please could you tell me how you connected two SSDs to the 5th internal SATA port?
I added an LSI SAS card to get SAS out to the tape drive (which I added in another video), which needed an SFF-8088 connector. But the LSI-9211-4i4e is an 8ch SAS card which has 4x on SFF-8088 external and 4x on SFF-8087 internal connectors.
I moved the SFF-8087 cable for the drive backplane from the motherboard SATA to the LSI card; the connector is in a really good place to make this switch. Then I used an SFF-8087 to 4x SATA breakout cable to connect the two SSDs to the motherboard SFF-8087 header. So I could stuff 5 SSDs in if I didn't want a mess of power splitter cables.
Because of the garbage collection and verify jobs my backup server runs 24/7; I make about 60 backups daily.
Be careful with ZFS special metadata vdevs. I had a mirror on my TrueNAS Core server; one drive died and the other got corrupted, and afterwards I was told you should use 3 drives for special metadata for just that reason. I ended up losing all of my data and had to rebuild. I decided to set up my pool this time without the special metadata and went from RAID10 to RAIDZ2 with 4x 18TB EXOS drives. I also run PBS and have a RAID10 of 480GB Intel Enterprise SSDs for a total of 960GB usable space, and I can't complain. I don't have massive VMs/CTs so I don't need as much space to store backups. I really like having my PBS server. I know you will enjoy it as well.
It's always a tradeoff depending on the usage patterns. In the PBS case, it relies on the filesystem to store very large directories of chunks and rapidly return whether a file of a given name exists for deduplication, so the special device will end up with a lot of read IO on directories, which directly affects backup performance. I wouldn't use a special device with a normal zfs pool of backup data stored normally. I also don't have any PCIe slots left to add NVMe or Optane special devices.
So far I like PBS with PVE integration, wish it had a bit better file backup integration but that's solvable for me.
No way! I have two of these HP MicroServers Gen8 for my business backup, one at my office, one offsite. they run proxmox with LXC environment set to backup my main servers daily and sync each other weekly.
Nice!
For standard backups of homelab stuff, is there an advantage to Backup Server vs backing up archives to a shared folder on a TrueNAS server (or any other share)?
PBS does incremental backups, which saves space. You can run PBS in a container, even on the same machine.
It depends on if you are a heavy PVE user or not.
PVE backups are always full backups, so doing a backup to TrueNAS relies on ZFS deduplication (which does work well), but also compressing the entire disk image and copying the entire backup from the PVE node to the TrueNAS server over the network every time. With PBS + PVE, the PVE client will use a dirty-bitmap in qemu to know which blocks have even changed, and when sending blocks to the PBS server it will first query the server for the block hash and see if it already has it before sending the contents, so network utilization is a lot lower.
For non-PVE data, it depends on how much you like the backup client and how much you are willing to tolerate a slightly worse file-backup experience for a better VM/CT-backup experience.
I'm planning on migrating to Ceph anyway so TrueNAS's native zfs sync wouldn't be an option, and I can cron my way to backing up CephFS on the PBS node. PBS also has native tape support, which is something I wanted that is probably not nearly as important to other people.
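To make the query-before-send idea above concrete, here is a toy Python illustration of content-addressed deduplication - not the actual PBS wire protocol, just the principle:

```python
# Toy dedup: hash each fixed-size chunk, ask the "server" whether it already
# has that hash, and only upload the chunk contents when it doesn't.
import hashlib

server_chunks: set[str] = set()          # stand-in for the server's chunk store

def backup(blocks: list[bytes]) -> int:
    sent = 0
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in server_chunks:  # server doesn't have it yet
            server_chunks.add(digest)    # "upload" the chunk
            sent += len(block)
    return sent                          # bytes actually transferred

print(backup([b"A" * 4096, b"B" * 4096, b"A" * 4096]))  # 8192: third chunk dedups
```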
@@apalrdsadventures As a next step, let me suggest backing up the /mnt directory of PBS to a cloud provider. In my case I have the Backblaze CLI dockerized, running on the PBS host, and it gets triggered once a day via Ofelia to upload all the new .chunks into a Backblaze bucket.
The LSI SAS9207-4i4e is a newer controller, which supports multiple MSI vectors.
I wouldn't use single parity over HDDs bigger than 2TB: drives of that size run a high enough risk of an unrecoverable read error that if a single drive fails, you are practically guaranteed to lose some data during the rebuild. You should run double parity over drives bigger than 2TB.
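The back-of-envelope math behind that advice, assuming the common 1-per-1e14-bits spec-sheet unrecoverable-read-error rate (an assumption; real drives vary):

```python
# Probability of hitting at least one URE while reading a whole drive back
# during a rebuild: 1 - (1 - p)^bits.
import math

URE_PER_BIT = 1e-14                      # typical spec-sheet rate (assumption)
for tb in (2, 8, 18):
    bits = tb * 1e12 * 8                 # bits read to rebuild one drive
    p = 1 - math.exp(bits * math.log1p(-URE_PER_BIT))
    print(f"{tb:>2} TB drive: {p:.0%} chance of a URE during a full read")
# -> roughly 15% at 2TB, 47% at 8TB, 76% at 18TB
```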
If hardware capacity permits, I install PVE and PBS on the same machine. In this case I install PBS from the repository added to PVE; after that it simply serves its web interface on port 8007. And on the same host I use not only PBS but also NAS/SAN virtual machines like ESOS, OviOS Linux, and OpenMediaVault. Thus I transfer the load of working with files from thin PVE servers to the backup box. This option suits me as an all-in-one solution. Sometimes I deploy PBS as a virtual machine on a thin node with an NFS or iSCSI disk from a SAN connected to it. This allows you to utilize the storage server for a SAN solution. Yes, in this case PBS will not back up itself - but you can sometimes stop it and make a manual backup to a local disk and then transfer it to a safe place.
I'm really looking forward to the tape drive video. No one seems to have any info on the usability of tape backup in proxmox beyond saying it's possible.
There’s very little info on tape in general really, especially with auto loaders. Eventually I’ll probably get one if I like the single drive, I want a system where I can keep my video footage archived on tape (so no standby power / very high scalability) and pull it back into the file system namespace on command.
@@apalrdsadventures It's because tape is mostly passé. It's too expensive for all but the largest companies. Between the price of tapes, drives, a proper off-site rotation plan, speed, and lifecycle... you're in for hundreds of thousands of dollars before you achieve critical mass.
I laughed WAY too hard at 11:22
them and netgate
I also use the cheap 250GB Kingston SSDs for these kinds of things; works perfectly, no problem.
I'm not sure how long their life will be, but that's why I have two of them
You will find that on any real server with IPMI/iLO/iDRAC, the fans go nuts when first booted. This is because the fans are controlled by the IPMI; once that has booted, they calm down. The fan ramp-up on start is from the old days of servers - it was meant to force any settled dust out. Also, as a real (but small) server, the BIOS does a lot more checks for stability etc. than consumer machines.
Back when this server was new, a rack server with maxed-out memory could take 10 minutes to boot, all down to the basic memory checks.
Hehehe, you had the same reason as me for leaving truenas 😂
You should name your PBS as "Big_Bird"! 😆
My Microserver Gen8 is sitting here with 16GB ECC and two SSDs, using 20W (iLO open and system fairly busy). What are you using that does more for less power? Lots of interesting content on your channel, which I found this evening, so you'll probably get bored of me.
The PBS server in the video is drawing around 50-55W idle and 70W during backups (limited by the 1G NIC). It's a Xeon E3-1265L V2, 16G RAM as well, the 4 spinning SAS drives, SAS card, and two sata SSDs.
Ahhhh! You baited me. Here we go:
1: Used hard drives. Yikes. Wipe the drives with wipefs or shred. Use badblocks to test that the drives aren't junk (SMART info can't be trusted)
2: Tapes suck. 30TB of backup will take 20 LTO-5 tapes, which cost about $300, and will take 60 hours to write (rough math after this list). Never mind the cost of the tape drive and controller. Even LTO-9 only holds 18TB.
3: Booting from SD (and USB) is deprecated. It's a bad idea as they aren't very reliable.
4: ZFS special devices aren't going to be of much benefit in a backup scenario. A better use for those would be as boot/system drives.
5: RAIDZ1 with used drives. You're flying awfully close to the sun. Make that RAIDZ2 and suddenly you are in the capacity range where a single 20TB HDD can be used for backup.
But maybe figuring it out (the hard way) is the adventure.
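Checking point 2's arithmetic against LTO-5's published specs (1.5 TB native per tape, ~140 MB/s native write, compression aside):

```python
# Rough tape count and pure write time for a 30 TB backup set on LTO-5.
backup_bytes = 30e12
tapes = backup_bytes / 1.5e12            # native LTO-5 capacity per tape
hours = backup_bytes / 140e6 / 3600      # native LTO-5 write speed
print(f"{tapes:.0f} tapes, about {hours:.0f} hours of writing")  # 20 tapes, ~60 hours
```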
I don't have any hardware currently to setup/do backups on, what would you recommend?
Two weeks ago I had hope that PBS could replace my Bareos (Bacula fork) and make my life easier. For my use case, file backup is crucial. Unfortunately there are no RPM packages available, and the debs have dependency problems on Ubuntu 22. A Bareos replacement is still not found :(
I think long-term I'll end up mounting network storage on the PBS system and doing backups via systemd timers *on the PBS system*
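That plan could be as simple as a timer-triggered call to proxmox-backup-client on the PBS host itself; a sketch, with a hypothetical mount point and datastore name:

```python
# Back up a mounted network filesystem as a pxar archive to the local
# datastore; a systemd timer (or cron) would run this on the PBS host.
import subprocess

subprocess.run(
    ["proxmox-backup-client", "backup",
     "cephfs.pxar:/mnt/cephfs",                        # hypothetical mount
     "--repository", "root@pam@localhost:datastore1"], # hypothetical datastore
    check=True,
)
```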
@@apalrdsadventures Network storage is always problematic. E.g., it's not a workaround for the lack of a file backup agent.
For me, CephFS will connect to all the backend OSDs client-side, so running the client on the backup machine means I'm not funneling all of the CephFS traffic into one node and then on to the backup server from there.
For actual hosts it's not the best idea.
Yea, truenas is kinda not important anymore; proxmox does it all better at the moment.
Excited to see how you set up the schedule to have the backup server shut down and start back up. Maybe looking at doing the same.
I got one Gen8 micro but it didn't come with the HDD trays... is there any other compatible model, or only the ones that came with it?
What would you suggest for backing up Windows workstations on the same LAN as my Proxmox cluster? I used UrBackup in a container for some years, but it has always seemed a little unstable, and there is the big issue of non-incremental Outlook PST files. I don't want to use the integrated Windows backup software because it won't coordinate which machines are backing up at any given moment - if 100 workstations run backups at the same time it's a big issue - and furthermore I don't have any centralized interface.
Proxmox's client doesn't natively support Windows, although I think it's on the roadmap.
My favorite solution is to give users robust network storage and tell them if they lose data on their workstation it won't be recovered, then have good snapshots and backups server-side. Assuming they actually check in their work daily they shouldn't be able to lose much. Of course someone will keep the whole accounting excel 'database' on their desktop and mess this up though.
I don't really do Windows work though, so not good ideas for that.
You mentioned that the microserver draws too much power, what did you replace it with out of interest to solve that issue?
I still use the Microserver as the backup server but I shut it down sometimes.
What if you have over 500TB of data needing backup? How would you do that?
more drives.
You can fit 60 drives in the biggest Storinator, so in raidz2 or raidz3 stripes you can get to a PB in a single chassis, and PBS will be very happy.
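The rough math, assuming hypothetical 20 TB drives in that 60-bay chassis:

```python
# Usable capacity for a 60-bay chassis split into raidz2 or raidz3 stripes.
DRIVES, TB = 60, 20
for width, parity in ((10, 2), (15, 3)):
    vdevs = DRIVES // width
    usable = vdevs * (width - parity) * TB
    print(f"{vdevs} x {width}-wide raidz{parity}: {usable} TB usable")
# -> 960 TB either way, right at the ~1 PB mark (1.2 PB raw)
```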
Your keyboard is making me jealous
It's from Unicomp - www.pckeyboard.com/page/category/UKBD
I've been using Proxmox Backup Server, but my question is: can NFS be set up on the server? I'd like to use it for more than just Proxmox backups - is there a way?
I mean it's a Debian system that you can modify as you want, so yes. You could follow my Proxmox NAS guide (but on the PBS host) to get a file sharing UI.
We tried the ZFS special metadata devices on our hardware and they didn't improve performance with Proxmox backup transfers. We are getting a quote to go full SSD for our Proxmox and off spinning rust. I did notice our CPU doesn't have SHA hardware acceleration, so I'm hoping that will help there, but ZFS doesn't appear to use the in-kernel SHA hashing - I think I read that was due to portability and compatibility. Oh well!
I wonder if it will matter more with heavily deduplicated backups (like file backups), since that should require a lot more directory listing of the chunk store.
A special vdev won't help much in terms of backup speed, but for any read-related jobs it will do wonders. Especially the cleanup jobs benefit heavily from a special vdev. Prune and garbage collection are probably the tasks that are most impacted.
@@SveinErikLund That makes sense. Proxmox does deduplication by writing blob hashes in nested directory structures and won't write repeat blobs, so when indexing those directory structures you'd think the special devices would accelerate the listing of the filesystem tree. The deduplication in Proxmox is filesystem agnostic. In our tests it wasn't an improvement with spinning rust; maybe the bottleneck is so great it's not perceptible?
@@VexMage You won't see much difference with 10-50 VMs, but it really starts to get noticeable once you pass about 100 VMs. We had about 20TB of data, and GC + prune took about 3 days without the special vdev. With the special vdev it took about 10 hours.
VMs can make use of the qemu dirty bitmap to know which blocks are unchanged and shouldn't even be attempted (so the listing of each unmodified chunk doesn't happen), but file based backups don't have a dirty bitmap so the client will need to check every chunk against the server even if it hasn't changed, resulting in a lot more listings than VM backups from PVE.
I've been setting up PBS as well, and as far as I remember you can get that secret from the GUI of PBS as well - no need for the CLI ;)
But I'm more surprised that you use... refurbished HDDs - they're the only mechanical part, and you decided on used ones? Don't you care about the quality/reliability of the backed-up data?
Regarding tapes: I used to work as an admin and we used to do backups on tapes and also MODs - probably nobody remembers those times anymore.
Key word: tar
Without tar knowledge nothing could have happened ;-)
Yeah, I realized later it wasn't where I was looking for it. The docs point to the cli lol.
As to the drives, they actually have less hours on them than my current TrueNAS server, which I bought new drives for when I built it. I'm not particularly worried about them, as I'll still have both the current TrueNAS server and the backup server active during the Ceph transition, and each of them has redundancy via zfs (mirror on TrueNAS and RAIDZ on PBS).
Those Kingston SSDs have an extremely high failure rate. Good video other than that.
How much power does it use?
11:19 🤣
them and pfsense devs have angered me lol
How did you route power to those dual ssd's?
sata power splitter from the DVD drive power
@@apalrdsadventures cool, I have a single fdd -> sata power adapter, but wasn't sure if there's a fdd -> dual sata that would be enough power for the drives/won't burn my house down
I think you'll need two cables, fdd -> sata + sata -> two sata.
Worst case the Microserver is all metal, right?
How would you configure an offsite pbs?
As a second pbs or the main pbs server being offsite?
@@apalrdsadventures main one, if you have a pve outside your network and want to add a backup to your pbs
The backup protocol uses HTTPS, so you can backup to the remote PBS system over the internet on port 8007.
@@apalrdsadventures That's what I did, but I read that you should never open a port that is high risk.
It's not that HTTPS is high risk per se, but you need to be aware of the login methods you are allowing. So disabling root@pam for the web ui and using two-factor on the @pbs realm would improve security.
Exactly the same reason I dropped TrueNAS... the developers behind that thing clearly do not understand open source 🤷♀
are those WD red or red plus?
These are Seagate enterprise SAS drives (refurbished). The ones that came out were 2TB WD Reds.
Hey dude, maybe you know if this enclosure "HP microserver" can deal with 20TB disks?
The backplane or the internal controller? The backplane can do SAS/SATA just fine, in my case I replaced the internal sata controller with a SAS card. Not sure what the limit is of the stock controller, but it's easily replaced.
the truenas guys ARE assholes and never wrong or capable of change and I hate it so damn much
the whole hiding the package manager is maddening
Do you use the snapshot backup method in your production?
Yes, but make sure the qemu guest agent is installed so the VM can flush its write cache on snapshot.
You really need 2 NAS boxes for mission-critical data like this. Also, you could add them to their own cluster and double throughput as well as have failover - why stop at one cluster? You also really want to boot from NVMe and not an SD card, and to have an NVMe cache tier (ZIL/SLOG), although Optane may be the best option for cache. The commercial NAS devices are just like the routers you get from the ISP - you can do much better and you don't have to spend a lot. Really surprised this device uses so much power. HA OPNsense is another good option to have, and it is also pretty cheap to get going - good future content. Lastly, RAID cards are just a bad idea - go with software RAID, so if an array or a card goes poof it is easy to move the array to another machine; software RAID runs anywhere. On further reading, it turns out Proxmox Backup Server does not support clustering; ideally you still want a couple of low-power NAS/backup instances. Consider rsnapshot to a network filesystem - then you get much more incremental, granular snapshots, and it is automagic.
re: LTO-5
What software are you planning on using to manage your tape backup system?
I'm using LTO-8, and I just have an Ubuntu 16.04 LTS system (technically it was actually CAELinux 2018, but that's kind of beside the point), where I installed IBM's LTFS single drive edition package, compiled it, and got my tape drive up and running that way. (My LTO-8 tape drive actually goes through an external 6 Gbps SAS interface to an ATTO Express H680, I think.)
There IS supposed to be a way to add the tape drive directly to Proxmox, but I haven't messed with that yet.
And if I am being honest, I've been having some system stability issues with my Proxmox server, probably because the server is responsible for so many different things simultaneously. So for the backup tasks, I have a separate system that connects to my Proxmox server over 100 Gbps InfiniBand (it doesn't need it, but I bought it for my CAE/micro-HPC cluster), and I'm using that as I don't have any 10 GbE hardware. With my Proxmox server I was able to consolidate everything onto one system, such that I don't NEED additional networking gear. This was one of the great advantages of virtualising so many of my systems and putting them all into a single, consolidated Proxmox server: now I just have a single 16-port GbE switch that I use for management, and all of the inter-server communication happens within the consolidated server itself, whether through NFS (for guests that don't support virtio-fs) or through virtio-fs (for those that do). For example, SLES12SP4 doesn't support virtio-fs, so that goes through NFS, while Windows 10 supports and loves virtio-fs. The virtualised NIC in Win7 reports back as being a 100 Gbps virtual NIC.
Proxmox Backup Server (PBS) has native tape support for both single drives and libraries (it always assumes a library, so a single drive is a library where tapes are changed by sending an email and waiting for it to be done), so that's what I'm planning on using.
Proxmox VE doesn't do native tapes, but that's why I have a PBS server.
@@apalrdsadventures
Gotcha.
That would be interesting to see how well that will work.
True, when I need a quiet place to write something and never get answers: the TrueNAS forum.
When I don't want updates: pfSense CE.
Why bother with installing Proxmox on top of Debian when it's faster to just use the ISO?
In this case, I wanted a nonstandard partition layout, and the debian installer lets you setup partitions manually.