My NAS has a problem, it's working perfectly fine 😂 great intro!
So, next stop of your journey to storage nirvana has to be now the FULLY equipped 19" server rack with all JBODs! Let's see if that might last more than 2-3 years. 😉🤔😇
solid video content - hope to see you continue to upgrade and get faster
I moved my ZFS NAS to Ceph and it's the best decision I ever made.
I have the 6800 pro and started with proxmox as well, but ended up just moving to Truenas directly installed. Now I have an awesome Truenas box with 96GB Ram (lol) and using those dual 10Gb NICs. CPU still gets toasty because the heatsink is very undersized.
Given the CPU, which in this case isn't bad for a NAS, what sort of VMs and containers would you, and wouldn't you run? Do things run better as containers? What are you using the NVMe for vs the HDDs?
So CPU virtualization is really a mature thing in Linux/KVM, but you have a huge memory footprint running a VM kernel vs a container. Also, it's way easier to do hardware pass-through to a container in most cases, especially when you need to share hardware (like using the iGPU for rendering).
In my case, the NVMe drives hold the bootloader, EFI partitions, and boot filesystem (ZFS), which also contains the VM disks and container root filesystems. I then add mount points to the containers for bulk data off the HDD pool.
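If it helps anyone copy that layout, a rough sketch of the container side on Proxmox looks like this (the container ID, pool name and paths below are placeholders, not my actual config):
```
# give container 101 a bind mount of a dataset on the HDD pool
pct set 101 -mp0 /tank/media,mp=/mnt/media

# and in /etc/pve/lxc/101.conf, hand it the iGPU render/display nodes
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```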
Very Good!
Love your videos - have learnt a lot. I also have the same UGreen NAS. I was wondering if you were able to get the Coral TPU working in your new / UGreen setup? Or will that be covered in another video? Thanks again and keep up the great work. 👍
I didn't have any issues with my old instructions, except that Google has completely dropped the ball on Coral TPU builds, but thankfully the community has compiled their deb packages and you can follow the link on my Frigate blog post for newer kernels. It basically just worked.
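For anyone stuck at that step, the usual sanity check after installing the driver packages (e.g. a community-built gasket-dkms; the exact package names are on the blog post, not guaranteed here) is just:
```
lspci | grep -i coral     # the M.2/PCIe Coral should be listed
ls -l /dev/apex_0         # device node created by the gasket/apex driver
dmesg | grep -i apex      # confirms the driver built and loaded for the running kernel
```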
shouldn't you use /dev/disk/by-id instead of sd* for disks?
Probably, but ZFS is really not that picky. You import pools by name and not by disk IDs, so ZFS will scan the disks and find the ones that are members of the pool it wants.
Doesn't matter for ZFS, as it uses the GUIDs of the hard drives internally.
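If you want the stable names anyway, you can re-import while pointing ZFS at the by-id directory (`tank` being a placeholder pool name):
```
zpool export tank
zpool import -d /dev/disk/by-id tank   # zpool status now shows ata-.../wwn-... names
```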
What happens if there are 2 pools with the same name? Like say recovering a proxmox root pool?
`zpool import -f -t rpool rpool2` will temporarily (that's the `-t`) import `rpool` as `rpool2` to avoid conflicts
@Cynyr You can get the ID number of the pool if you need it.
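For reference, something like this handles the name-clash case:
```
zpool import                                 # no argument: lists importable pools with name and numeric id
zpool import -f 1234567890123456789 rpool2   # import by id under a new name (id made up here)
```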
on proxmox did you update the cpu microcode for the intel hybrid CPUs?
You keep having to reshuffle the deck to meet your needs. Why not consolidate at a larger scale instead of the mish-mash of 'little' devices and buy yourself some headroom while staying close to the same power envelope? Seems that for a moderate up-front outlay of cash, you could have an 8-drive ZFS RAIDZ2 array running on a more capable system which could pull double duty for all your various separate appliances/systems while having 'wire speed' interconnect between storage and VMs. Your whole setup could easily be hosted on a used 8th/9th-gen Intel, AM4, or AM5 system with 64-128GB RAM.
Also no mention of doing an extensive disk test of used drives before putting them in production. That's called living dangerously... but at least you are mirrored.
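For reference, a typical used-drive burn-in looks something like this (destructive, so only on empty disks; /dev/sdX is a placeholder):
```
smartctl -a /dev/sdX        # baseline: reallocated/pending sector counts
smartctl -t long /dev/sdX   # extended self-test, results show up in -a output later
badblocks -wsv /dev/sdX     # destructive full-surface write/verify, takes many hours
smartctl -a /dev/sdX        # compare counters after the abuse before trusting the drive
```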
Damn, your CPU was pegging itself!? I didn't know CPUs were into that sorta thing. 😂 I just had to. Lol.
1.2%?!
This is why I use btrfs.
Hey kernel, how much space left on my NAS? ... "Yes"
Yeah, but like a number? ... "6"
Guess it's fine then!
It shows remaining capacity per dataset (which may be the total free space or may be less if the dataset has a quota).
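Something like this shows it per dataset (pool/dataset names are just examples):
```
# AVAIL is per dataset: the pool's free space, or less if a quota/refquota caps it
zfs list -o name,used,avail,quota -r tank
zfs get quota,refquota tank/backups
```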
Usually I just do a sanity check when I get warned space is running low, then add more storage. ;-) More often than not, it's because I configured something wrong and it's chewing up space.
I'd love to play around with ZFS more, but btrfs-support in kernel is SUPER convenient.
I also rely on raidz for some of my systems, so btrfs is a no-go for those.
BTRFS has a lot more issues, trust me I tried :( ZFS is a more solid choice all round
Why did you not just zfs send your old pool to a new pool? I have migrated many times just using zfs send.
You read about this technique a lot, also for migrating btrfs or LVM storage. The goal is to keep storage downtime to a minimum and minimize the risk of disk failure. While copying/sending the data you cannot change anything on the source, or you'll end up in a split-brain situation where those changes never make it to the target disk.
You can change data on the source while using send: zfs send sends a snapshot. Once the send is complete, you take a final snapshot, send just the delta, and then take the original offline; there is no resilver. You can also keep your old pool as a backup and send deltas to it to bring it up to date.
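Roughly, the flow looks like this (pool and snapshot names are placeholders):
```
# initial full copy while the old pool stays in use
zfs snapshot -r oldpool/data@migrate1
zfs send -R oldpool/data@migrate1 | zfs receive -F newpool/data

# quiesce writes briefly, then send only what changed since migrate1
zfs snapshot -r oldpool/data@migrate2
zfs send -R -i @migrate1 oldpool/data@migrate2 | zfs receive -F newpool/data
# switch over to newpool; keep oldpool around and send it deltas as a backup
```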
Ok, this isn't exactly on par with your video, but also kind of is. I just started playing around with Proxmox, and the true nerd in me really thinks it's fun to play with!! I just seem to get hung up on this ZFS thing: while I loosely understand its benefits, is a ZFS boot system really needed at the home-lab level? As of now I only have a single boot disk. I keep reading stories of it chewing through your SSD writes and finding more things about how to keep that in check. I just get hung up on people preaching the need for ZFS, but I've been without it up to this point, so maybe I'm missing something I'm not thinking about.
@NeverEnoughRally ZFS boot can make things more difficult when you run into boot problems. But using ZFS snapshots and replicating them to a backup pool makes backup and recovery easier. The ZFS command line is nice, as it is mostly readable English. I use ZFS at home and in production; I back up my data and VMs using ZFS snapshots and zfs send. ZFS is worth learning. The most difficult part is setting up your pool, since there are a lot of options and choices that need to be made, such as block size, RAID type, encryption, etc. Once the pool is made, ZFS makes it simple to create your logical volumes and block devices for your data and virtual machines.
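As a rough illustration of those up-front choices (disk paths, pool name and values are examples, not recommendations):
```
# 4K alignment, sane dataset defaults, native encryption, double-parity raidz
zpool create -o ashift=12 \
  -O compression=lz4 -O atime=off \
  -O encryption=aes-256-gcm -O keyformat=passphrase \
  tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# after that, datasets and zvols are the easy part
zfs create -o recordsize=1M tank/media
zfs create -V 32G -o volblocksize=16k tank/vm-101-disk-0
```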
Adding the drives to the pool as /dev/sdaX might come back to bite you in the ass some day mate.
Ditching Proxmox would be a great start. YMMV
I would use libvirt because it uses less RAM than Proxmox and because I do not need Proxmox's web interface or its other features.
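For comparison, a minimal libvirt/KVM sketch of defining a VM (image path, sizes and os-variant are placeholders):
```
# define and boot a VM from an existing disk image with plain libvirt/KVM
virt-install --name nas-vm --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/nas-vm.qcow2,format=qcow2 \
  --import --os-variant debian12 \
  --network bridge=br0 --graphics none
```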
XCP-ng is an alternative, but yeah, YMMV.