Use code christianlempa at the link below to get an exclusive 60% off an annual Incogni plan: incogni.com/christianlempa Thanks to Incogni for sponsoring this video.
Hey Christian. Just so you know, you don't need a dedicated NIC for Proxmox clustering, just a dedicated network; a VLAN works more than fine. Using a VLAN also means you can bond your NICs, giving you better overall performance and redundancy. In your case I'd recommend bonding the SFP ports, then bonding that bond with the onboard Ethernet for failover. At that point you can VLAN out to your heart's content :). This is also how I handle my Ceph network. (My team and I manage a Proxmox cluster at work; I only watched this video because I really enjoy your content.)
It is possible to change node names after the cluster is formed, you just have to edit like... 3 files. Just did it in my cluster last night (v8.2.7): /etc/pve/corosync.conf, /etc/hostname, and /etc/hosts.
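For anyone who wants to sanity-check that rename before touching a live node, here is a rough dry run against throwaway copies of those three files (the hostnames and IP are made up; on a real node you would edit the live files, bump config_version in corosync.conf, and restart the cluster services afterwards):

```shell
# Dry run of a node rename (pve-old -> pve-new) on throwaway copies of the files
mkdir -p demo/etc/pve
printf 'pve-old\n' > demo/etc/hostname
printf '192.168.1.10 pve-old.lan pve-old\n' > demo/etc/hosts
printf 'node { name: pve-old }\n' > demo/etc/pve/corosync.conf
# The same substitution you would apply on the real node
sed -i 's/pve-old/pve-new/g' demo/etc/hostname demo/etc/hosts demo/etc/pve/corosync.conf
cat demo/etc/hostname   # pve-new
```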
I run Proxmox at work, with a big cluster and multiple shared storage devices. It works great, and I'm happy to see you added a QDevice to your cluster, because it really is needed to maintain cluster integrity. As far as shared storage goes, any cheap NAS will work; I'd recommend something that offers NFS shares. For a homelab, gigabit is mostly fine unless you have a ton of storage. The networking price jump to 10 Gbps isn't bad, but storage that can actually utilize that network speed can be quite expensive.
@@jonathandavis4711 Depends on how you build things. Just saying you have something doesn't imply it works well or badly; how was it set up and engineered? There are people using Raspberry Pis as a NAS. You can also run the Synology OS on x86.
FYI, check that your VM CPU type is not "host": both PVE hosts will definitely need the same CPU if "host" is chosen, but you should be fine with most of the other CPU types.
I am wondering: I have 2 machines, one a Ryzen 7 and the other a Ryzen 9. Would "host" be OK in this case or not? I understand it is about the architecture, not really the exact CPU model.
I vote for a Ceph cluster! I would love to see you set up a Ceph object store and filesystem on your 3 Proxmox servers instead of relying on a TrueNAS server or other NAS. I understand that YouTube content creators need a lot of storage and high-speed networking, but not the average person running a home lab. Having a "local" filesystem for your workload that is mirrored to other servers would be very appealing.
I have successfully run a Proxmox cluster with 2 nodes and a quorum device in my small home lab, with a simple storage sharing solution and no external network shares. Both nodes have a dedicated SSD of the same size just for this purpose. How to set it up?

On the first node, go to Disks, click Create: ZFS, select the disk, and give it a name. This name should be a general name, not tied to a node, like 'pve-data' or something similar. Select Add Storage, keep everything else on default, check the box next to the disk you want as storage in the device list, and click Create. The first node now has a new storage. On node 2, do the same: go to Disks, Create: ZFS, and, importantly, use the same name as on node 1 ('pve-data' in my example); it is also very important to deselect the 'Add Storage' checkbox this time. Select the disk from the device list and click Create. After that, go to Datacenter -> Storage, select the ZFS storage 'pve-data', and click Edit. In the dialog, select all nodes in the Nodes dropdown. The storage will then be listed as local storage on each node.

With replication tasks for each VM or LXC you can set how often a guest's storage is synchronized between the nodes. In HA you can also set the list of containers or VMs you want migrated between the nodes. You have to do this for each VM or LXC container, but it works very well for me. Thank you for covering this Proxmox topic. Hope to see more of this.
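A rough CLI sketch of the same idea, for reference (the disk path, pool name, node names, and replication job ID are all examples, so adjust before running anything):

```sh
# Node 1: create the pool and register it cluster-wide, limited to both nodes
zpool create pve-data /dev/sdb
pvesm add zfspool pve-data --pool pve-data --nodes pve1,pve2
# Node 2: create a pool with the SAME name, but don't register the storage again
zpool create pve-data /dev/sdb
# Replicate VM 100 to pve2 every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"
```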
Christian, you can do a poor man's HA with local disks by scheduling replication of your VMs. That way a copy of each VM is standing by on the failover node. Also, I didn't have luck with SSD RAID for hosting VMs: the cost was exorbitant, the capacity was too low for my purposes, and the I/O wasn't much better. I went with more HDDs and an SSD ZFS read/write cache instead. P.S. You're doing great work. Thank you for sharing and for your joyful presentation style.
Exactly! I used to have a 3rd server running TrueNAS Scale with NVMe shared storage. Eventually I retired it, since I got a little tired of the extra high-pitched 1U noise. Now I just have NVMe drives in the two Proxmox servers for VM storage and have set up replication every 5 minutes. Works fine for my needs.
@christianlempa I recently created my Proxmox cluster and had the same problem when migrating VMs, but I fixed it by going to the VM configuration > Hardware > Processors > Type and selecting x86-64-v2-AES. This works flawlessly between my Intel N100 and my i7-6700T.
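The same change from the CLI, assuming a VM ID of 100 (the GUI path above does the same thing):

```sh
# Set a common CPU baseline so live migration works across different host CPUs
qm set 100 --cpu x86-64-v2-AES
```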
I was going to mention this too, but you beat me to it. This is the way for different CPUs: just find out the common instruction sets and use the highest x86-64-vX level, so both CPUs can use those instruction sets.
I run a two-node Proxmox cluster at home with one of the VMs being a router. Nothing beats migrating the router to the second node while performing upgrades on the first one 😀
Christian, I have been running Proxmox for ~3 years now in my home lab and used a TrueNAS shared storage solution. But mid-year last year I decided to add a 3rd node to my setup and implemented an all SSD Ceph Storage. So far it has been flawless.
You should also be able to set the "CPU TYPE" under your VM hardware to a version supported by both processors (like x86-64-v2/3/4 processors) to improve live migration compatibility. PVE has a ton of options for each generation of chips going back to 486, so you can really tune in the instruction sets made available to the VM this way. Might be worth testing if you have a mixed cluster environment.
I've been running a 3-node Proxmox cluster with Ceph HA for over 6 months now and it's been stable so far. I even upgraded from PVE 7 to Proxmox 8.1 without any issues. Curious why you decided not to go this route. Maybe an opportunity to make this topic a 2-part series? 😅
Probably due to the cost of electricity and hardware. He might pick up another AMD motherboard to run as a third node in the future. He's also thinking of what home lab users would do: we tend to reuse what we can find. I mostly use old enterprise hardware from work that was going to be e-wasted anyway; figured I'd give it a second lease on life. Plus I source old gear from eBay.
Currently I am not running a cluster, but I have plans for this in the future. I was planning on failing services over, but I see that is going to take more than I initially expected. At least I can do it like you are doing now.
I was running a 3-node cluster but have put them back to standalone now. After ~12 months it ate through the always-on node's OS SSD to around 97% wearout. This was a cheap consumer SSD, so I would definitely recommend either an enterprise SSD for the OS, or an HDD configured as a mount point for the logs and quorum DB to write to.
Hi @ChristianLempa, one note: when you use a QDevice with HA, you need to ensure the QDevice is set up with root access, which I had to do to get my cluster working. Great video! I use a Dell 5090 and a Lenovo in my HA Proxmox cluster, with a Raspberry Pi 4 (which also runs 24/7/365) as my QDevice. Great content!
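For reference, the rough QDevice setup looks like this (the package names and pvecm subcommand come from the Proxmox docs; the IP is an example, and yes, the setup step needs root SSH access to the external device):

```sh
# On the external device (e.g. a Raspberry Pi running Debian):
apt install corosync-qnetd
# On every cluster node:
apt install corosync-qdevice
# From one cluster node, pointing at the external device:
pvecm qdevice setup 192.168.1.50
```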
You don't actually NEED a super fast storage for the CTs/VMs for migration. It can help when/if you are booting up/shutting them down regularly, but if they're running pretty much 24/7, once the CT/VM is booted up, it doesn't really need much (unless you are doing I/O intensive tasks). I have three N95 Mini PCs and run Ceph between them over GbE, and it works just fine. If the CT/VM disks are actually on the shared storage, I have live-migrated my CTs/VMs between the nodes in as little as 8 seconds (because it doesn't need to move the CT/VM disks around since it is already on shared storage).
Yes, there are some quirks with a 2-node Proxmox cluster. I run the same, but with no QDevice; I just have a direct 10Gb connection between them. I did it for a similar reason as you: I wanted fast migration between the servers. Fun fact: I'm running PBS in HA with the virtual disk on a 1Gb link to my Synology NAS, and the PBS datastore is on the Synology too. It works quite well considering it's going over 1Gb. I've backed the Zima Cube Pro and am waiting for it to arrive, as it will become my 3rd Proxmox node, running a TrueNAS VM configured for my HA storage with a 10Gb link and NVMe storage available.
You can do HA with two nodes by setting up a replication job of the VM between the two nodes. That way there is an (older) copy of the disks on the other node so HA will be able to start up on the 2nd node.
ALL cluster nodes need to have the latest live data for it to be classed as HA. Otherwise you could run into some nasty split brain situations depending on what you are running in the VMs.
You should definitely set the shutdown_policy to "migrate". If you then shut down one node, it will automagically migrate all VMs to the remaining nodes based on your HA group priorities. I've never used VM replication in my prod environment because it's running Ceph, but maybe it would solve your problem with the missing disk after shutdown.
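If I read the docs right, that is a one-liner in the datacenter config (a sketch, not verified on this exact setup):

```
# /etc/pve/datacenter.cfg
ha: shutdown_policy=migrate
```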
Been following your vids for a while and they just keep getting better. Finally have the confidence to press forward with my homelab thanks to you. Rock on brutha!
Christian during a stream: "you know, energy is really expensive here, I was thinking about downscaling" Also Christian: "wanna see my new 2-node PVE cluster" 😂😂😂 PS: Love your content
Great video as always. To address your shared-disk-with-2-nodes issue, you could use StorMagic SvSAN, which takes local drives from each node and presents them as an iSCSI shared disk to the cluster. They also directly support Proxmox. Give it a try.
@@danilfun I currently run Ceph on Proxmox and my VMs are slow. I am using all-SSD disks and think it may be the 1 Gig network and how Ceph replicates the data. Reads are fine, but writes are really slow; I get like 25 MB/s writes.
I am using replication to make sure that my VMs/containers are migrated automatically if one of the nodes goes down. That way I don't have to deal with network storage, which brings its own problems. Also, you can change the votes in the corosync config. That's how I ran my cluster for a while, where my main node had 2 votes and the secondary had 1 vote.
Nice work. Proxmox is such a nice project. I came here attracted by the 2-node title and imagined the voting issue and the QDevice to help out. HA is really nice. As you said, a 3rd fully capable node allows you Ceph, among other things, and that is another rabbit hole. Regards, Christian!
I am running a similar setup in my homelab: 2 NUCs, and 1 Pi Zero with an Ethernet HAT as the QDevice. Also configured HA. I've set up ZFS on the NUCs with replication, therefore not requiring shared storage. The NUCs are on a 1 Gbps connection, but that's fast enough for my lightweight VMs and containers.
So I run a two-node cluster. One machine is a custom build used for things like media streaming / homelab and NAS duties (a TrueNAS Scale VM with SSDs passed directly into it) and is only powered on when I need it (WOL); the second is a Zimaboard that runs a Docker LXC for my 24/7 services and a Pi-hole DNS LXC, which is all working great. I didn't need (or want!) HA services on Proxmox (I disabled them, as they seem to wear SSDs out super fast for some reason?), and since one of the nodes would be switched off most of the time, I looked into getting a QDevice, but found out you don't need one if you set "two_node: 1" and "wait_for_all: 0" in the "quorum" section of "/etc/pve/corosync.conf". If one of the nodes is down, the VMs all boot like normal and you can still edit VM configs, run backups, etc.
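For anyone wanting to try this, the quorum section of /etc/pve/corosync.conf would end up looking roughly like this (remember to bump config_version when editing, and note this trades away some split-brain protection):

```
quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}
```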
@@gilfreund That applies to any storage type. The name of the storage that was saved in the VM config is what the hypervisor will be looking for to load the disk image from. That applies to local, shared or replicated storage.
Thanks for the video. I have a proxmox cluster running, but I need to reinstall proxmox on the cluster. There are a few things I didn’t take care at the first time. Hopefully with your video it will be better :)
Really good content. Take this to its ultimate conclusion is all I would say: think about a 2-node 25 or 40G cluster, no switch needed. You could use the 56G dual-port cards (40GbE) for the cluster and then a 10G management network. The network filesystem is a good one to dig into: Ceph/NFS/ZFS/Gluster? I think you are going to have to experiment; maybe try some NVMe arrays? Thanks for the content!
For HA you definitely have to assess what you want vs. what you actually need. With an HA cluster backed by only single-node shared storage, you are back to square one on being fully redundant.
I'm running TrueNAS with 4x 4TB NVMe drives just for VM migration. It works very well for VM migration, and very quickly. If you can, it's definitely worthwhile to have a look.
I will also be rebuilding my Proxmox setup with the hardware I recently got from work: Nutanix G6 nodes, 3.8TB RAM, 60 cores, 120TB storage. It will be an epic homelab.
Very fun! Thanks for the great video! I have a Beelink mini PC (Intel) as primary and an old 2012 MacBook Air (Intel) that I recently put in a cluster. They share storage for backups. Migration works great so far, even with the CPUs being different models. Looking forward to your SSD NAS storage in the next half of the year.
8:17 It is still possible to change the IP address of the server. You just have to be very careful because you have to change many different things, not just one file or one adapter!
One more thing that will bite you is that your storage has to be the same on both machines. If a VM is on local-zfs on one node, you have to have a local-zfs on the second node in order to migrate.
I use replication with HA and have different sync schedules depending on VM importance. All of my hosts have storage pools with the same name, and I've been able to fail over pretty effortlessly. The main issue I have with HA is that when I do a switch update, I have to disable HA, otherwise all of my HA hosts will shut down when a total loss of connectivity is detected, bringing down my whole lab even though I have failover connections on a separate switch. Still trying to figure that out. lol.
Hi Christian, thanks a lot - another great video! 👍 - Would you please also share which NIC you installed and if you are satisfied with it (for Proxmox usage)? Thank you.
That's so cool! I enjoy seeing your setup grow. I'm considering building such a cluster myself. Given the date/time synchronization requirement, I wonder if it would be worth implementing PTP with something like a Time Card.
My home lab is simple: 3x N100 mini PCs in a Proxmox HA config, with a Synology DS423+ (NFS) used as shared storage. This HA thing has saved me so many times when a node goes down. Even on my local 1 Gbps network, everything feels snappy.
(A bit late to the party, but...) My experience is that I went to three nodes immediately, since two nodes require a bit of work to keep quorum, and now I'm at 11 nodes, including 5 Pi 4Bs to try some lightweight ARM images. Others have mentioned Ceph as a storage fabric, and yeah, I recommend it highly: I have three nodes dedicated to just a Ceph cluster, and they operate in HA to keep my Pi-hole running nicely. So that's 5 Pis; three low-power Xeon Ds with 128GB RAM for Ceph and HA; an MS-01, because; an older NUC as a low-power Docker home; and a mega server with dual Xeon Es and some GPUs for AI playing, and also because it's an I/O monster with 8x 10Gb ports and 16x 2.5Gb ports. Outside of that I have a Proxmox Backup Server (do this!) and some NAS boxes for NFS storage and local Windows shares. You could run the backup server as a VM on the NAS too, which I do; it's not the best performance, but it runs pretty well.
Wow that is a crazy setup!!! :D I’d love to build a 3 node cluster but I’m a bit worried about the power usage even with the Minisforum, but maybe that’s a good one for a future project :)
@@christianlempa The Ceph cluster is three Xeon D servers and 11 SSDs for about 30TB of storage, of which of course 1/3 is usable due to three copies of the data. Total power consumption is 40W each, or 120W total. Three Minisforums would be about 14W idle each with three NVMe drives apiece, or about half that total based on my measurements here. Hardware costs would be similar too, and performance better, but maximum RAM about half as well, which is why I went with the Xeons; plus I wanted Supermicro IPMI control.
You don't need shared storage or Ceph for HA. It can be done like you have it now, but you need to enable VM replication, for example replicating each VM every minute. I'm not sure if this only works on ZFS, because I have only done it on ZFS. Also check out HA groups: if you plan on turning off one node periodically, you can make 2 HA groups and set which host you prefer each VM to run on, for example groups run_on_pve1 and run_on_pve2, then assign each VM in HA to one of the groups. When you want to turn off or do maintenance on host pve2, you edit the HA group run_on_pve2, select pve1 instead of pve2, and save. The system will live-migrate all VMs running on pve2 to pve1, and you can turn off pve2 when it finishes. I think this can also be done with API calls (edit the HA group, check whether the VMs have migrated, then turn off the host) on a schedule (cron job) on your Zimaboard. After the host is turned off, you edit the run_on_pve2 HA group back to pve2, and when the host comes back up the VMs will be migrated back to it.
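The HA-group dance described above could be scripted along these lines (the group names, VM IDs, and priorities are invented for the sketch; ha-manager syntax per the PVE docs):

```sh
# Two groups, each preferring one node (higher number = higher priority)
ha-manager groupadd run_on_pve1 --nodes "pve1:2,pve2:1"
ha-manager groupadd run_on_pve2 --nodes "pve2:2,pve1:1"
# Pin guests to their preferred node
ha-manager add vm:100 --group run_on_pve1
ha-manager add vm:101 --group run_on_pve2
# Before maintenance on pve2: flip the preference, PVE live-migrates the VMs
ha-manager groupset run_on_pve2 --nodes "pve1:2,pve2:1"
```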
If you use 3 nodes with Ceph/Gluster or a network filesystem, the migration happens almost instantly, because only the RAM and the config file need to move between nodes.
Quick question: I have 2 hosts as well, and I don't really want to get a 3rd just for HA to work better. Can a Raspberry Pi 5, per your video, be used as a voting member, so that spinning up the VMs on the other host works a bit better with the Pi being up? The one I was going to get is the 8GB RAM model with a 128GB microSD card. Thanks, just looking for general feedback on this idea.
To get live migration working more reliably, this can help: PVE 8 has CPU types which weren't available before. Something like x86-64-v3 covers, for example, Intel Skylake and newer and AMD EPYC (I assume Zen 1); just click Help in the CPU menu of a VM. And as others pointed out, HA is possible with replication, but you lose data up to the replication interval. Otherwise, do a bulk migration before shutdown?
My first foray with Proxmox has been less than fun. I migrated from ESXi; the first few days were great, then my TrueNAS VM started doing some weird stuff, like randomly rebooting. Seriously considering going back to ESXi.
Hey Chris, take a look at Ceph, and maybe you can build a network over USB4 Thunderbolt? ;) At least this is what I run on a 3-node cluster with 13th-gen NUCs. But even then, it takes around 2-3 minutes after a node dies to run the HA migration, and not because of a slow network 😂 I'm getting around 20-30 Gbit over the TB4 ports.
I tried a two-node Proxmox cluster too a couple of months ago, but I ran into some weird unstable situations when one of the nodes goes off. I think the second node freezes until the first comes back online, or something like that, so I decided it is not a good idea.
FYI, I tried to run a QDevice inside of a Docker container and it was a mess. I ended up messing up my entire cluster: I got it added to the cluster, but it was non-voting, and while messing around with it I borked the whole cluster. So I probably won't be doing that again.
Hi @Christian Lempa, would you mind sharing the model of the be quiet! power supply you used in this build, and confirming whether you used a 120mm fan on top of the CPU heatsink? Would greatly appreciate it 😊
You don't need shared storage; you can also set up replication to run every 15 minutes, or less if you want. Then you don't have to worry about HA failing when you pull the plug.
So I've been wanting to give this a try, but after reading some of the comments below regarding hardware requirements, I'm thinking it may not be worth the effort. For the average home-lab guy who has a windows SFF pc (NTFS file sharing / WSL2 for linux docker stuff), an HP micro-mini pc (Proxmox) and a raspberry pi4 (docker) all making up the environment, do you think this is something that is workable or possible by adding an additional Proxmox node running on a capable Dell laptop?
You don't "need" a separate device for shared storage. You "could" use CephFS or GlusterFS to create a shared storage pool from the disks/partitions in your servers. I'm using quotes because it's possible, but a separate device is probably still better.
I recommend 3 nodes at minimum, though you can get away with 2 and a QDevice. Even for a local cluster you need 3 nodes, or a QDevice running outside the nodes; 2-node clusters are only for homelabs and experiments. If one node in such a cluster goes down, the other will be in read-only mode or will fence itself. A cluster must have quorum: in a two-node cluster, quorum is 2. In a 3-node cluster it's also 2, so either one node plus the extra vote survives, or both prod nodes (if the vote holder is down).
I use application-level high availability, btw. I mean: I have three pfSense virtual firewalls in HA (not the recommended scenario, they usually go in pairs, but meh, it works). I have three VMs running MariaDB Galera in a cluster, so an HA database. I'm learning Kubernetes just for this reason (I will have 3 master nodes and 6 worker nodes, 3 VMs on each of my Proxmox hosts). I don't need shared storage, LOL. I also containerize everything, so kube it is.
@@kevinneufeld3195 If it is for services that don't need top performance (IOPS), running Ceph in VMs is more than enough. You can also get almost bare-metal performance if you pass through the SSDs or NVMe drives instead of using virtual disks.
Two nodes plus a QDevice (Pi 5) seems perfect for me. They can each back up to the other node. I tried Ceph, but my SSDs were too slow, so I just have redundant services on each node: a Pi-hole LXC and a Tailscale LXC on each one.
@@rogertan1130 You want IOPS, but consumer SSDs also typically slow down as the drive fills up, which is not ideal for Ceph (initially a NAND cell operates in SLC mode; when the drive fills up, it divides that cell into smaller chunks), and due to the way write caching works, write performance falls off a cliff as well. However, a lot of people also misconfigure Ceph. You can have a Ceph cluster with HDDs that performs well enough as long as you have enough OSDs, but you actually need the network speeds and latency to support this. 10 Gbit is often recommended, but I see it more as a minimum to get any decent sort of performance with Ceph. Getting Ceph onto its own network should help a lot with this as well.
The SSD or the network interface? A typical NVMe SSD has a speed of about 3500 MB/s. A 1-gigabit network is 1 Gbps = 1000 Mbps = 125 MB/s; a 10-gigabit network is 10 Gbps = 10,000 Mbps = 1250 MB/s, which is about 1/3 of the typical SSD speed...
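The arithmetic above, as a quick shell check (decimal units, ignoring protocol overhead):

```shell
# Link speed in Mbit/s divided by 8 gives MB/s
echo "$((1000 / 8)) MB/s on 1GbE"     # 125 MB/s
echo "$((10000 / 8)) MB/s on 10GbE"   # 1250 MB/s
# How much faster a 3500 MB/s NVMe SSD is than a 10GbE link
awk 'BEGIN { printf "%.1fx\n", 3500 / 1250 }'   # 2.8x
```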
Yes, I just did that, but there's very little space, so I'm not sure how much difference it will make. I have also added 2x 80mm fans at the front of the case, and I believe that helps much more than the CPU fan...
Each node in the cluster requires the same storage names. Ensure they are visible on all nodes in the datacenter, activate replication, and enjoy high availability without the need for centralized storage.
The problem with replication is that it can cause inconsistencies between Kubernetes nodes, databases, etc. it might be a solution for some workloads, but I'd prefer something like Ceph ideally, which unfortunately requires 3 nodes. But I'm still researching what option would be the best for my setup.
Just add dedicated HDDs to both Proxmox nodes just for VMs, and format them with ZFS. Then you can add those pools as storage on both nodes and have ZFS replication run every XX minutes from one node to the other. No need for external storage. The Craft Computing YouTube channel has a nice tutorial.
Yep, that is currently what I do for home and at work. Yes I could have used CEPH for work but I don't have time to troubleshoot cluster issues. ZFS is easier to deal with and fix.
Good question, I've done it because in the future I might use a switch or another network instead of a direct link, but yea... it doesn't have to be that big :D
I've been thinking about doing something similar but have been struggling with shared storage since I don't want to use the slow HDD on my NAS and don't want to build a SAN. I didn't catch what you decided to use for shared storage. Do you mind clarifying?
@@xConundrumx iSCSI seems a bit dated at this point, and there is very little used gear available on eBay that isn't ancient. I'm trying to avoid using my NAS because I'm running Unraid, and without the right RAID configuration it's not really fast enough to be a target.
I don’t understand why you would want to go for a 2U case instead of a 3U case especially if you are planning to use an ATX PSU. That’s still gonna occupy 3U in your rack because you can’t put another server directly above as the PSU needs to suck in air from above. You could put a full height PCIe card in the 3U case without additional riser cables and you get proper front to back airflow…
I wanted to have a little more space in the rack, and since Silverstone doesn't make a 3U case, I went with 2U instead of 4U. But yeah, it has some airflow concerns that I didn't think about at first.
Hi all, I want to purchase an ASUS NUC (Intel) for my homelab. My concern is about running the box 24/7. Is the ASUS NUC stable and long-lasting enough to run all the time? I appreciate all your feedback.
Hm, that's a good question. Technically, industrial-grade devices are more robust and work better in 24/7 scenarios, but honestly, I'm running all my home servers on desktop PC components and have never had any issues with it. Even the SSDs are consumer quality. Maybe they don't last as long as enterprise-grade devices, but if you take backups and replace your setup from time to time, I personally can't say anything "bad" about consumer products in a homelab.
One tip: do not run it on an SD card, because it writes a lot of data for the quorum DB. I blew up an expensive SD card within 2 weeks. You can also use a cheap Orange Pi, as long as it can run a Linux derivative, preferably Debian, because Proxmox is Debian-based. A 1 Gb LAN port works fine as well.
7:00 Not sure which would be better: a direct connection between the two nodes, or a bonded interface from the two 10-gigabit ports through a switch. Quick question I can think of: if you decide to add a third node, what do you do with the direct links? BTW, you should look at the new v8 settings for VLANs under Datacenter/Network.
I just went to 2.5GbE. It was unimpressive, but switching to jumbo frames adds a 35% boost.
A 10 gig card is dirt cheap
@@kristopherleslie8343 Yes, as I said, networking isn't too bad, getting storage fast enough to actually use it is fairly expensive.
Isn't any cheap NAS a massive failure point that can take out the entire cluster?
This is what I do. I can migrate a VM from an Intel-based node to an AMD-based node.
@@joost00719 *Live migrate*
@@joost00719 Same here, and it works fine. I also have a mix of Intel and AMD CPUs.
@@zyghom Just use kvm64
I vote for a CEPH cluster! I would love to see you set up a CEPH Object Store and File System on your 3 ProxMox servers instead of relying on a TrueNAS server or other NAS. I understand that TH-cam content creators have a need for a lot of storage and high speed networking but not the average person running a home lab. Having a “local” file system for your work load that is mirrored to other servers would be very appealing.
I have successfully running a Proxmox cluster with 2 nodes and a quorum device in my small home lab with a simple storage sharing solution without external network shares. Both nodes have a dedicated SSD with the same size just for this purpose. How to set it up? On the first node you go to disks, create ZFS, select the disk and give a name (this name should be a general name, not dedicated to a node, like 'pve-data' or something similar), select add storage, everything else on default and check the box on the disk you want to have as storage in the device list, click on create. The first node now have a new storage. On node 2 do the same, go to disk, create zfs, now important use the same name like on node 1 (in my example 'pve-data') and also it is very important to deselect the checkbox 'Add Storage', select the disk from the device list and click on create. After that you can go to Datacenter -> Storage, select the ZFS storage 'pve-cluster' and click on Edit. In the dialog you select all nodes in the dropdown menu named Nodes. After that the nodes will be come the storage listed as local storage on each node. With replication tasks for each VM or LXC you can set how often a storage of an VM or LXC will be synchronized between the nodes. In HA you can also set the list of containers or VMs you want to have migrated live between the nodes. You have to do this for each VM or LXC container, but it works very well for me. Thank you for covering this Proxmox topic. Hope to see more of this.
Christian, you can do a poor man's HA with local disk by scheduling replication of your VM's. That way a copy of your VM is standing by on the failover node. Also, I didn't have luck with SSD RAID for hosting VM's. The cost was exorbitant, the capacity was too low for my purposes, and the i/o wasn't much better. I went with more HDD's and an SSD ZFS read/write cache. P.S. You're doing great work. Thank you for sharing and for your joyful presentation style.
Exactly! I used to have a 3rd server running TrueNAS Scale with NVMe shared storage. Eventually I retired it, since I got a little tired of the extra high-pitched 1U noise. Now I just have NVMe drives in the two Proxmox servers for VM storage and have set up replication for every 5 minutes. Works fine for my needs.
@christianlempa I recently created my Proxmox cluster and had the same problem when migrating VMs, but I fixed it by selecting, in the VM configuration > Hardware > Processors > Type, x86-64-v2-AES. This works flawlessly between my Intel N100 and my i7 6700T
I was going to mention this also but you beat me to it.
This is the way for different CPUs. Just find out the common instruction sets and use the highest x86-64-vX so your CPUs can use those instruction sets.
Thanks! That's a good tip, I will try that :)
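The setting this thread describes can also be applied from the CLI; a minimal sketch, where VM ID 100 is just a placeholder:

```shell
# Set a common baseline CPU model so live migration works across different CPUs
qm set 100 --cpu x86-64-v2-AES
```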
I run a two-node Proxmox cluster at home with one of the VMs being a router. Nothing beats migrating the router to the second node while performing upgrades on the first one 😀
Agree. I have the same setup
nothing beats router to be on ... dedicated machine ;-)
Agree. Same setup here too.
Christian, I have been running Proxmox for ~3 years now in my home lab and used a TrueNAS shared storage solution. But mid-year last year I decided to add a 3rd node to my setup and implemented an all SSD Ceph Storage. So far it has been flawless.
You should also be able to set the "CPU TYPE" under your VM hardware to a version supported by both processors (like x86-64-v2/3/4 processors) to improve live migration compatibility. PVE has a ton of options for each generation of chips going back to 486, so you can really tune in the instruction sets made available to the VM this way. Might be worth testing if you have a mixed cluster environment.
Thanks! good tip!
i’ve been running proxmox 3 node cluster with ceph ha for over 6 months now and it’s been stable so far. i even upgraded to proxmox 8.1 from pve 7 without any issues. curious why you decided not to go this route. maybe an opportunity to make this topic a 2 part series? 😅
Probably due to cost in electricity and hardware. He might pick up another AMD motherboard to run as third node in the future. He's also thinking of what home lab users would do. For us home labs we tend to reuse what we can find. I use mostly old Enterprise hardware from work as it was going to be e-wasted anyway. Figure I give it a second lease on life. Plus I source old gear from e-bay.
Currently I am not running a cluster, but I have plans for this in the future. I was planning on failing services over, but I see that it is going to take more than I initially expected. At least I can do it like you are doing now.
I was running a 3-node cluster but have put them back to standalone now. After ~12 months it ate through the always-on node's OS SSD to around 97% wearout. This was a cheap consumer SSD, so I would definitely recommend either an enterprise SSD for the OS, or an HDD configured as a mount point for the logs and quorum DB to write to.
Hi @ChristianLempa, one note: when you use a qdevice while running HA, you need to ensure that the qdevice has root access, which I had to set up to get my cluster working. Great video! I use a Dell 5090 and a Lenovo in my HA Proxmox cluster, and a Raspberry Pi 4 (which also runs 24/7/365) as my qdevice. Great content!
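For reference, the qdevice setup mentioned above usually looks like this; the root-access requirement comes from `pvecm qdevice setup` copying certificates over SSH as root (the IP below is a placeholder):

```shell
# On the external qdevice host (e.g. a Raspberry Pi running Debian):
apt install corosync-qnetd

# On the cluster nodes:
apt install corosync-qdevice

# On one cluster node, pointing at the qdevice host:
pvecm qdevice setup 192.168.1.50
```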
You don't actually NEED a super fast storage for the CTs/VMs for migration.
It can help when/if you are booting up/shutting them down regularly, but if they're running pretty much 24/7, once the CT/VM is booted up, it doesn't really need much (unless you are doing I/O intensive tasks).
I have three N95 Mini PCs and run Ceph between them over GbE, and it works just fine.
If the CT/VM disks are actually on the shared storage, I have live-migrated my CTs/VMs between the nodes in as little as 8 seconds (because it doesn't need to move the CT/VM disks around since it is already on shared storage).
Yes, there are some quirks with a 2 node proxmox cluster. I run the same, but no qdevice. I just have a direct 10Gb connection between them. I did it for a similar reason as you, I wanted fast migration between the servers.
Fun fact, I'm running PBS in HA with the virtual disk running on a 1Gb link to my Synology NAS, and the PBS datastore is on the Synology also. It works quite well considering it's going over 1Gb.
I've backed the Zima Cube Pro and am waiting for it to arrive, as it will become my 3rd Proxmox node, running a TrueNAS VM configured for my HA storage with a 10Gb link and NVMe storage available.
You don't necessarily need a shared storage for HA. You can also replicate the VM disk on both nodes. Nevertheless, a very nice video for beginners!
Got a cluster and am quite happy with it. I've changed hostnames on them as well, without much trouble, so it can be done.
You can do HA with two nodes by setting up a replication job of the VM between the two nodes. That way there is an (older) copy of the disks on the other node so HA will be able to start up on the 2nd node.
ALL cluster nodes need to have the latest live data for it to be classed as HA. Otherwise you could run into some nasty split brain situations depending on what you are running in the VMs.
You should definitely set the shutdown_policy to "migrate". If you then shut down one node, it will automagically move all VMs to the remaining nodes based on your HA group priorities.
I've never used the VM replication in my prod environment because it's running Ceph, but maybe it would solve your problem with the missing disk after shutdown.
Thanks, that's a good tip!
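The two suggestions above can be sketched on the CLI like this (VM ID, target node name, and schedule are placeholders; replication requires ZFS-backed storage):

```shell
# Replication job: mirror VM 100's disks to the other node every 5 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/5"

# Shutdown policy: add this line to /etc/pve/datacenter.cfg so HA-managed VMs
# are migrated away automatically when a node is shut down:
#   ha: shutdown_policy=migrate
```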
Been following your vids for a while and they just keep getting better. Finally have the confidence to press forward with my homelab thanks to you. Rock on brutha!
Thank you so much for your support 👊 bro! Glad it helps you with your homelab 🫶
Christian during a stream: "you know, energy is really expensive here, I was thinking about downscaling"
Also Christian: "wanna see my new 2-node PVE cluster" 😂😂😂
Ps. Love your content
:D :D :D
Great video as always. To address your shared disk with 2 nodes issue, you could use StorMagic SvSAN, which takes local drives from each node and presents them as an iSCSI shared disk to the cluster. They also directly support Proxmox. Give it a try.
Do they have a free/community edition?
What are the advantages compared to CEPH?
@@danilfun I currently run Ceph on Proxmox and my VMs are slow. I am using all SSD disks and think it may be the 1 gig network and how Ceph replicates the data. Reads are fine, but writes are really slow, I get like 25 MB/s writes.
@@danilfun I'm pretty sure you need 3 nodes to use ceph
Thank you! Great tip :D
I am using cloning to make sure that my VMs/containers are migrated automatically if one of the nodes is down. That way, I don't have to deal with network storage, which brings its own problems.
Also, you can change the votes in the corosync config. That's how I ran my cluster for a while: my main node had 2 votes and the secondary had 1 vote.
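A minimal sketch of what that corosync.conf change looks like; node names and addresses are placeholders, and config_version must be incremented whenever the file is edited:

```
# /etc/pve/corosync.conf (fragment)
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 10.0.0.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}
```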
Nice work. Proxmox is such a nice project. I came here attracted by the 2-node title. I imagined the voting issue and the qdevice to help out. HA is really nice. As you said, a 3rd fully capable node allows you Ceph among other things. And that is another rabbit hole.
Regards, Christian!
Thank you! :D
I am running a similar setup in my homelab: 2 NUCs, and 1 Pi Zero with an Ethernet HAT as qdevice. Also configured HA.
I've set up ZFS on the NUCs with replication, therefore not requiring shared storage. The NUCs are connected at 1Gbps, but that's fast enough for my lightweight VMs and containers.
So I run a two-node cluster. One machine is a custom build used for things like media streaming / home lab and NAS (a TrueNAS Scale VM with SSDs passed directly into it) and is only powered on when I need it (WOL); the second is a Zimaboard that runs a Docker LXC for my 24/7 services and a PiHole DNS LXC, which is all working great. I didn't need (or want!) HA services on Proxmox (I disabled them as they seem to wear SSDs out super fast for some reason?), and since one of the nodes would be switched off most of the time, I looked into getting a qdevice, but found out you don't need one if you set "two_node: 1" and "wait_for_all: 0" in the "quorum" section of "/etc/pve/corosync.conf". If one of the nodes is down, the VMs will all boot like normal, and you can edit the VM configs, backups run, etc.
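For reference, those quorum settings sit in the quorum section of /etc/pve/corosync.conf; a sketch:

```
# /etc/pve/corosync.conf (fragment)
quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}
```

Note that two_node: 1 implicitly turns on wait_for_all, which is why the comment sets it back to 0 explicitly, so a single node can boot and reach quorum on its own.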
You can use the Replication Service, so e.g. every 5 min the disk is mirrored.
Came here to say the same. But it requires ZFS on your Proxmox hosts.
@@EBlom666 and the storage names should be the same, as should the network bridges
@@gilfreund That applies to any storage type. The name of the storage that was saved in the VM config is what the hypervisor will be looking for to load the disk image from. That applies to local, shared or replicated storage.
Yes, this works well for static-type VMs where the data isn't changing very often.
Don't have zfs 🤷♂️
Thanks for the video. I have a Proxmox cluster running, but I need to reinstall Proxmox on the cluster. There are a few things I didn't take care of the first time. Hopefully with your video it will go better :)
You're welcome! Hope it helps :)
Really good content. Take this to its ultimate conclusion is all I would say: think about a 2-node 25 or 40G cluster, no switch needed. You could use the 56G dual-port cards (40GbE) for the cluster and then a 10G management network. The network FS is a good one to dig into: Ceph/NFS/ZFS/Gluster? I think you are going to have to experiment, maybe try some NVMe arrays? Thanks for the content!
For HA you definitely have to assess what you want vs. what you actually need. Having HA cluster with only single node shared storage, you are back to square one on being fully redundant.
True! That's why I see HA in a Homelab only as a fun/hobby/testing project, not really a prod environment.
I'm running a TrueNAS machine with 4x4TB NVMe drives just for VM migration. It works very well for VM migration, and very quickly. If you can, it's definitely worthwhile to have a look.
Thank you for this! This was exactly what I needed as it's my exact use case!
I will also be rebuilding my Proxmox setup with the hardware I recently got from work:
Nutanix G6 nodes
3.8TB RAM
60 cores
120TB storage
It will be an epic homelab
bloody hell: 3.8TB RAM....;-)
Very fun! Thanks for the great video! I have a Beelink mini PC (Intel) as primary, and an old 2012 MacBook Air (Intel) that I recently put in a cluster. They are sharing storage for backups. Migration works great so far, even with the CPUs being different models. Looking forward to your SSD NAS storage in the next half of the year.
Awesome! Thanks, and stay tuned for the storage server video :D
8:17 It is still possible to change the IP address of the server. You just have to be very careful because you have to change many different things, not just one file or one adapter!
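As a rough checklist of the "many different things" mentioned above (a sketch; the exact files can vary by setup):

```shell
# Places a node's IP address usually appears on a Proxmox host:
#   /etc/network/interfaces   -> the address on the NIC/bridge itself
#   /etc/hosts                -> the hostname-to-IP mapping
#   /etc/pve/corosync.conf    -> ring0_addr (increment config_version too)
#   /etc/issue                -> the login banner showing the web UI URL
# then restart networking and the cluster services, or simply reboot
```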
One more thing that will bite you is that your storage has to be the same on both machines. If a VM is on local-zfs on one node, you have to have a local-zfs on the second node in order to migrate.
How did you manage to access the cluster domain directly, where you were able to choose which node to log in to?
I use replication with HA and have different sync schedules depending on the VM's importance. All of my hosts have storage pools with the same name, and I've been able to fail over pretty effortlessly. The main issue I have with HA is that when I do a switch update, I have to disable HA, otherwise all of my HA hosts will shut down when a total loss of connectivity is detected, bringing down my whole lab, even though I have failover connections on a separate switch. Still trying to figure that out. lol.
Hi Christian, thanks a lot - another great video! 👍 - Would you please also share which NIC you installed and if you are satisfied with it (for Proxmox usage)? Thank you.
That's so cool! I enjoy seeing your setup grow. I'm considering building such a cluster myself. Given the date/time synchronization requirement, I wonder if it would be worth implementing PTP with something like a Time Card.
Awesome!
My home lab is simple: 3x N100 mini PCs in a Proxmox HA config, with a Synology DS423+ (NFS) used as shared storage. This HA thing has saved me so many times when one node goes down.
Even with my local 1Gbps network everything feels snappy.
Wow, that is awesome to hear! I didn't know if the 10G would be fast enough :D Do you use an SSD in your NAS btw?
(A bit late to the party but...) My experience is that I went to three nodes immediately, since two nodes require a bit of work to keep quorum, and now I'm at 11 nodes, including 5 Pi 4Bs to try some lightweight ARM images. Others have mentioned Ceph as a storage fabric and yeah, I recommend that highly. I have three nodes dedicated to just a Ceph cluster, and they operate in HA to keep my PiHole running nicely. So 5 Pis, three lower-power Xeon Ds with 128GB RAM for Ceph and HA, an MS-01 because, an older NUC as a low-power Docker home, and a mega server with dual Xeon Es and some GPUs for AI playing, and also because it's an I/O monster with 8 10Gb ports and 16 2.5Gb ports. Outside of that I have a Proxmox Backup Server (do this!) and some NAS boxes for NFS storage and local Windows shares. You could run the backup server as a VM on the NAS too, which I do. It's not the best performance, but it runs pretty well.
Wow that is a crazy setup!!! :D I’d love to build a 3 node cluster but I’m a bit worried about the power usage even with the Minisforum, but maybe that’s a good one for a future project :)
@@christianlempa the Ceph cluster is three Xeon D servers and 11 SSDs for about 30TB storage, which of course is 1/3 usable due to three copies of the data. Total power consumption is 40W each, or 120W total. Three Minisforums would be about 14W idle each with three NVMe drives each, or about half that based on my measurements here. Hardware costs would be similar too, and performance better, but maximum RAM about half as well, which is why I went with the Xeons, plus I wanted Supermicro IPMI control.
I was just looking at this case, glad to see it in use
Cool :D
You don't need shared storage or Ceph for HA... It can be done like you have it now, but you need to enable VM replication, for example to replicate the VM every minute. I'm not sure if this only works on ZFS, because I have only done it on ZFS. Maybe also check HA groups: if you plan on turning off one node periodically, you can make 2 HA groups and set which host you prefer each VM to run on, for example groups run_on_pve1 and run_on_pve2; in HA you then set which VM uses which group, with the pve1 group using host pve1 and the pve2 group using host pve2. When you want to turn off or do maintenance on host pve2, you edit the HA group run_on_pve2, select host pve1 instead of pve2, and save. The system will live-migrate all VMs running on host pve2 over to host pve1, and you can turn off host pve2 when it finishes.
I think this can also be done with API calls (edit the HA group), checking whether the VMs are migrated and then turning off the host on a schedule (cronjob) from your Zima board. After the host is turned off, you can edit the run_on_pve2 HA group back to host pve2, and when the host comes back up the VMs will be migrated back to it.
Try a VM-based cluster for first steps, with a separate virtual network segment. Works fine as a teaser.
You can try running DRBD Linstor on your cluster for online replicated storage (block devices). I'm running a few clusters this way ;)
I've tried CEPH. So looking into using Linstor.
If you use 3 nodes with Ceph/Gluster or a network filesystem, the migration happens almost instantly, because only the RAM and config file need to migrate between nodes.
So... if my two Proxmox versions are the same, but the kernel is different, might run into issues then?
With 2 nodes + a qdevice, you can run HA using the replication function built into Proxmox; just set how often to sync the VM disk.
Quick question: I have 2 hosts as well, and I don't really want to get a 3rd for HA to work better. Per your video, can a Raspberry Pi 5 be used as a voting member, so that spinning up the VMs on the other host works a bit better with the Pi being up? The one I was gonna get is the 8GB RAM model with a 128GB microSD card for storage. Thanks, just looking for general feedback on this idea.
To get live migration to work more reliably, this can help: PVE 8 has CPU types which weren't available before. Something like x86-64-v3 covers, for example, Intel Skylake and newer, and AMD Epyc (I assume Zen 1). Just click Help in the CPU menu of a VM.
And as others pointed out, HA is possible with replication, but you lose data up to the replication interval.
Otherwise, do a bulk migration before shutdown?
Thanks! Good tip with the cpu type
Don't think I haven't noticed you installed the motherboard without the I/O shield plate despite your efforts to hide it.
My first foray with Proxmox has been less than fun. I migrated from ESXi, and the first few days were great, but then my TrueNAS VM started doing weird stuff, like randomly rebooting. Seriously considering going back to ESXi.
Doing similar at the moment, just with Intel NUCs: a pair of NUC6i7s with a smaller NUC as the third node, all using my Synology NAS for NFS storage...
Nice! Do you have any performance problems over network? And do you use 10G?
Nice video! How about a 5-node Proxmox cluster, using a NAS for VM backups!
That would be a little... crazy :D
Hey Chris, take a look at Ceph, and maybe you can build a network over USB4 Thunderbolt? ;) At least this is what I run on a 3-node cluster with 13th-gen NUCs.
But even then: it takes around 2-3 min after a node dies to run the HA migration. And not because of a slow network 😂 I'm getting around 20-30 Gbit over the TB4 ports.
Sounds nice! Maybe i'll do ;)
I tried a two-node Proxmox cluster too a couple of months ago, but I ran into a weird unstable situation when one of the nodes goes off: I think the second node freezes until the first goes back online, or something like that. So I concluded it's not a good idea.
FYI, I tried to run a qdevice inside of a Docker container and it was a mess. I ended up messing up my entire cluster. I got it added to the cluster, but it was non-voting, and while messing around with it I borked my entire cluster, so I probably won't be doing that again.
Hi @Christian Lempa, would you mind sharing the model of the be quiet! power supply you used in this build, and confirming whether you used a 120mm fan on top of the CPU heatsink? Would greatly appreciate it 😊
You don't need shared storage; you can also set up replication to run every 15 mins, or less if you want. Then you don't have to worry about HA failing when you pull the plug.
So I've been wanting to give this a try, but after reading some of the comments below regarding hardware requirements, I'm thinking it may not be worth the effort. For the average home-lab guy who has a windows SFF pc (NTFS file sharing / WSL2 for linux docker stuff), an HP micro-mini pc (Proxmox) and a raspberry pi4 (docker) all making up the environment, do you think this is something that is workable or possible by adding an additional Proxmox node running on a capable Dell laptop?
I am running a 3 node cluster which shared NFS storage on a NAS
To make HA work, you can use replication of the VMs if the storage names on both nodes are the same.
so many good pointers Christian. thank you so much for this valuable information
Thank you so much! :D
You don't "need" a separate device for shared storage. You "could" use CephFS or GlusterFS to create a shared storage pool from the disks/partitions in your servers.
I'm using quotes because it's possible, but a separate device is probably still better.
If you don't have shared storage do you still need a cluster to migrate to another node? Seems like a cluster should always have shared storage
I think it makes a lot of sense, but it's not a requirement.
Haven't seen the video... Already "LIKED" it! You KNOW it's good stuff when mr. Lempa uploads! THX for the TOP QUALITY content, as always!! 👌💯
Haha thanks :D
@christianlempa are you still using the same Terraform provider "Telmate" that you used before in PVE 7?
Yes
hi great video! just wondering, what sfp nic are you using there?
I'm using the intel X520-da1 and da2, they're not the newest tbh, but work pretty well.
I recommend 3 nodes at minimum, though you can get away with 2 and a qdevice. Even for a local cluster you need 3 nodes, or a qdevice running outside the nodes. 2-node clusters are only for home labs and experiments.
If one node in this cluster goes down, the other will be in R/O mode or will fence itself. A cluster must have quorum; in a two-node cluster, quorum is 2. In a 3-node cluster it's also 2, so either one node + the vote survives, or both prod nodes (if the vote is down).
I use application high availability btw
I mean, I have three pfSense virtual firewalls in HA (not the recommended scenario, they go with 2 but, meh, works)
I have three VMs running mariadb galera in a cluster, HA database
I'm learning kubernetes just for this reason (will have 3 master nodes and 6 worker nodes, 3 VMs for each of my proxmox hosts)
I don't need a shared storage, LOL
I also containerize everything so, kube it is
Hi Christian, nice tutorial, but I have a question: what about the case where you migrate and then shut down the master after migrating?
You should give Ceph a try. Go for a couple of SATA SSDs, install one in each node and configure it. You'll see the difference in HA.
But don't you need 3 nodes for Ceph?
As well as for HA
@@kevinneufeld3195 If it is for services that don't need top performance (IOPS), setting up Ceph in VMs is more than enough. You can also get almost bare-metal performance if you pass through the SSDs or NVMe drives instead of using virtual disks.
Two nodes plus a q device (pi 5) seems perfect for me. They can each backup to the other node. I tried ceph but my ssds were too slow.
So i just have redundant services on each node. So a pihole lxc and a tailscale lxc on each one.
hi, how can you test or tell if your ssd was too slow to support ceph?
@@rogertan1130 You want IOPS, but also consumer SSDs typically slow down as the drive fills up, which is not ideal for Ceph (initially a NAND cell operates in SLC mode; when the drive fills up, it divides that cell into smaller chunks), and due to the way write caching works, write performance falls off a cliff as well.
However, a lot of people misconfigure Ceph as well. You can have a Ceph cluster with HDDs that performs well enough as long as you have enough OSDs, but you also actually need the network speeds and latency to support this. 10Gbit is often recommended, but I see it more as a minimum to get any decent sort of performance with Ceph. Getting Ceph onto its own network should help a lot with this as well.
SSD or network interface?
A typical NVMe SSD has a speed of around 3500 MB/s.
A 1 gigabit network does 1 Gbps = 1000 Mbps = 125 MB/s.
A 10 gigabit network does 10 Gbps = 10,000 Mbps = 1250 MB/s,
which is ~1/3 of the typical SSD speed...
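The back-of-the-envelope numbers above are easy to check (line rate only, ignoring protocol overhead):

```shell
# Convert link speed in Mbit/s to MB/s: divide by 8
gbe_mbs=$((1000 / 8))       # 1 GbE  -> 125 MB/s
ten_gbe_mbs=$((10000 / 8))  # 10 GbE -> 1250 MB/s
echo "1GbE: ${gbe_mbs} MB/s, 10GbE: ${ten_gbe_mbs} MB/s"
```

In practice, TCP/IP and storage-protocol overhead shave a further 5-10% off these figures.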
@christianlempa can you add a 120mm fan on top of the heat sink? Is there enough room in the case?
Yes, I just did that, but there's very little space, so I'm not sure how much of a difference it will make. I have also added 2x 80mm fans at the front of the case, and I believe that helps much more than the CPU fan...
Each node in the cluster requires the same storage names. Ensure they are visible on all nodes in the datacenter, activate replication, and enjoy high availability without the need for centralized storage
The problem with replication is that it can cause inconsistencies between Kubernetes nodes, databases, etc. it might be a solution for some workloads, but I'd prefer something like Ceph ideally, which unfortunately requires 3 nodes. But I'm still researching what option would be the best for my setup.
Using a Proxmox-Cluster with 3 Nodes and Ceph as a common Storage-Pool for real HA.
Just add dedicated HDDs to both Proxmox nodes just for VMs, and format those using ZFS. Then you can mount those drives as shared and have ZFS do replication every XX minutes from one node to the other. No need for external storage.
Craft computing YT channel has a nice tutorial
Yep, that is currently what I do at home and at work. Yes, I could have used Ceph for work, but I don't have time to troubleshoot cluster issues. ZFS is easier to deal with and fix.
@@Darkk6969 I started with ZFS, but I am planning to test Ceph at home and see if it's worth switching.
The dedicated network between the 2 Proxmox machines: why did you set it to /16 instead of, e.g., /31 or /30? There are only 2 IPs there, right?
Good question, I've done it because in the future I might use a switch or another network instead of a direct link, but yea... it doesn't have to be that big :D
@@christianlempa your confirmation means one thing for me: I am getting better at these THINGS ;-) thank you ;-)
@@zyghom awesome :D
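For reference, a point-to-point link like the one discussed above can be addressed with a /30; a sketch of /etc/network/interfaces, where the interface name and IPs are placeholders:

```
# /etc/network/interfaces (fragment) on node 1
# A /30 leaves exactly two usable host addresses, enough for a direct link.
auto ens1f0
iface ens1f0 inet static
    address 10.10.10.1/30

# node 2 would use: address 10.10.10.2/30
```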
If you set the CPU type to QEMU64, live migration tends to work "most of the times". Not 100% though.
I've been thinking about doing something similar but have been struggling with shared storage since I don't want to use the slow HDD on my NAS and don't want to build a SAN. I didn't catch what you decided to use for shared storage. Do you mind clarifying?
NAS with iSCSI or NFS?
@@xConundrumx iSCSI seems a bit dated at this point and there is very little used gear available on ebay that isn't ancient. I'm trying to avoid using my NAS because I'm using Unraid and it's not really fast enough without the right RAID configuration for a target.
@@ryanmalone2681 Literally every SAN uses iSCSI, not dated at all.
Why don't you use the replication feature from Proxmox?
I don't have ZFS
I don’t understand why you would want to go for a 2U case instead of a 3U case especially if you are planning to use an ATX PSU. That’s still gonna occupy 3U in your rack because you can’t put another server directly above as the PSU needs to suck in air from above. You could put a full height PCIe card in the 3U case without additional riser cables and you get proper front to back airflow…
I wanted to have a little more space in the rack, and since Silverstone doesn't have a 3U case, I went with 2U instead of 4U. But yeah, it has some airflow concerns that I hadn't thought about at first.
Minimum of 3 nodes + ceph cluster with 3 OSD’s = Happy high availability Proxmox manager.
First things first: Do not create VMs or LXCs on node #2 _before_ it joins the cluster! 🙂
Correct! It also states that in the cluster docs.
True, just keep in mind prx-prod-2 is the first node in the cluster, due to historical reasons
Which low profile dual SFP card have you found to work well with proxmox?
I'm using an Intel X520-DA2
Kool. Thanks. Will check to see if they are available in my area.
@@DuncanUnderwood1 check out ebay, or even amazon ;)
@@christianlempa will do. Thank you
Proxmox has glusterfs built in to solve the shared storage issue
Hi all, I want to purchase an Asus NUC (Intel) for my homelab. My concern is about running the box 24/7. Is the Asus NUC stable and long-lasting enough to run all the time? I'd appreciate all your feedback.
Hm, that's a good question. Technically, industry-built devices are more robust and work better in 24/7 scenarios, but honestly, I'm running all my home servers on desktop PC components and have never had any issues with it. Even the SSDs are consumer quality. Maybe they don't last as long as industrial-quality devices, but if you take backups and replace your setup from time to time, I personally can't say anything "bad" about consumer products in a homelab.
Please tell us about SDN for the whole cluster
Nice videos! Is it just me, or do the latest videos have a small ms audio lag?
Could be wrong, but I always get that feeling when watching them.
Maybe, I still have audio problems with my recording setup sometimes 🙈
some baits are worded like
= one-VM on one-cluster with multiple-nodes equals one-video-editing-desktop
without the overheads of high-availability of more-than-one cluster
That's a good Mandalorian T-shirt!
Thanks :D
Hey! Great video. @HA: How about VM replication? Wouldn't that circumvent your need for network storage?
Which network card did you choose?
It's an Intel X520-DA2 chip
It will only get bigger, I can assure you: more clusters. I am in that hole too :(
I don't know why I feel Mr Lempa is Lex Luthor.
One tip: do not run it on an SD card, because it writes a lot of data for the quorum DB. I blew up an expensive SD card within 2 weeks. You can also use a cheap OrangePi, as long as it can run a Linux derivative, preferably Debian, because Proxmox is Debian-based. A 1 Gb LAN port works fine as well.
Thanks for sharing!
7:00 Not sure which would be better: a direct connection between the two nodes, or bonded interfaces from the two 10 gigabit ports through a switch.
A quick question I can think of: if you decide to add a third node, what do you do with the direct links?
Btw, you should look at the new settings in v8 for VLANs under datacenter/network.
For the network aspect, I prefer configuring the bridge as OVS (Open vSwitch) and then creating a separate VLAN for the corosync link.
Linux bridge interfaces fully support VLANs, you don't need to use OVS for something so simple.
No I don't use OVS