David, Really a good video, but then all your videos I have liked a lot. A while back I had built a test cluster on some temporary hardware and I decided to remove one of the nodes. I saw the warning about not rebooting the node that was killed. Since this was all a test, I tried it, and what a mess. The other two nodes lost their NFS shares, and just trying to connect to the GUIs was miserable. I appreciated hearing all the warnings you gave in your video about making sure you are moving forward appropriately. I truly applaud the effort you put into this video. Anyone building a Proxmox cluster that might need to remove or replace a node needs to see this video.
Thanks for the feedback, much appreciated.
I've had comments in the past, and this is one of those videos that definitely needed a warning.
It's not a video you can just follow along with and make changes as you go.
I've not long actually had a server failure in my cluster. I was just about to do the re-install of the host (R610, not your fancy R620 :D), and this is exactly the guide I needed to ensure I had thought of everything and could do this properly. So thank you!
Good to know the video was useful.
Normally I don't upgrade or rebuild servers; I phase them out with newer ones.
And it's the first time I've done this with SDN involved, as it's still quite new, or at least part of the main code.
So it was interesting to see that although the server was connected to an NFS share, SDN had to be redeployed.
Timing - David, this dropped at the perfect time - thanks! I messed up my Ceph and needed to re-install ALL the nodes, one by one, using this method, and everything is great again. In my setup, each node also requires the re-issue of the SSL cert, since we're reverse proxying DNS with Cloudflare.
Keep 'em coming, matey. PS: Open a Discord server - your community would enjoy meeting you!
Good to know the video was helpful.
I would like to see a cleaner removal method, a bit like removing a Windows computer from a domain.
But I have to say, just removing the node and doing a clean rebuild isn't difficult, and it doesn't take much time.
Great pace, great detail. Thank you as always.
Glad you liked the video
Great video! I embarked on something similar when I broke up a 6-node cluster into two different clusters, of 2 and 4 nodes. Went into it flying blind, because YOLO and backups, I guess! Came out the other end in one, or actually two, pieces, and both clusters are doing fine. Keep up the good work!
I quite like these clusters, as they're pretty easy to set up and maintain.
Information does get left behind, I see, when you start removing things.
Not a problem, I suppose, as I have another cluster that had all its servers replaced over a year ago.
And even though I found details about those older servers, it's just information, and the cluster still works fine.
Thanks for the helpful information.
Thanks for the feedback
Good to know the video was helpful
Hopefully they'll put this in the GUI to make it easier.
Rushed to subscribe. Thank you.
Thanks for the sub