RKE2: One-Click Deployment - Time To Switch From K3S!?
- Published 1 Jul 2024
- K3s isn't secure by design like RKE2 is. Both products are created by Rancher (SUSE) but with competing objectives. However, given that RKE2 is simple to deploy, is lightweight like K3s, and comes with a raft of security benefits, isn't it time to switch?
This video provides an automated install and walkthrough for RKE2.
RKE2 Instructions:
github.com/JamesTurland/JimsG...
Rancher Page:
docs.rke2.io/install/quickstart
Recommended Hardware: github.com/JamesTurland/JimsG...
Discord: / discord
Twitter: / jimsgarage_
Reddit: / jims-garage
GitHub: github.com/JamesTurland/JimsG...
00:00 - Introduction to RKE2 & Security Benefits
03:03 - Prerequisites
05:20 - Script Walkthrough
15:52 - Deploying RKE2
18:05 - Accessing Rancher
20:40 - Outro
Category: Science & Technology
Nice 😊. I had to try it. Works on the first try. Good job again, Jim 👍
Awesome, thanks for confirming 😄
Great video! Tested today and fully working. Good job Jim!!
Glad to help
Thanks for the demo and info, have a great day
Thanks, Chris. You too.
Well delivered and easy to comprehend, thanks again for awesome content!
Thanks 👍
Interesting. I'll be giving this a shot, I hadn't heard of RKE2 before. Thanks for the video!
You're welcome, it's the hidden gem
Thank you and happy new year! 🥂🍾
I can confirm that both RKE2 and Longhorn work even on the Debian 12 generic cloud image (with a little bit of tuning of the script, like the SSH part, and the installation of open-iscsi on the workers).
That's good news, thanks for confirming.
There aren't enough likes for your video; the amount of work that you put into this is incredible. Thanks, I'm waiting for my new homelab server to try all of this.
Thanks so much, really appreciate the feedback. Exciting times getting your new homelab, jump on Discord if you need any help.
@@Jims-Garage Now that I have my Proxmox server, I tried this script, but at the end kubectl does not connect to the VIP address. I did the complete process 3 times with fresh VMs and it still gives the same error. Any ideas?
@@raulgil8207 If you can come on Discord and show the output of your logs, that would help. I suspect it's failing early on. Are you able to manually SSH with certificates?
@@Jims-Garage Thanks, I will do that. And yes, I was able to SSH with the certificate into the VIP IP.
Thank you! Will give this a try
Did it work?
Great one Jim. Thanks for this great video. I was just about to hack your k3s script to use RKE2. There is already lots of content about this version. There's a big move going on from K3s to RKE2.
Thanks, that's good to know. It seems like an obvious migration given the benefits and similarities with K3S. I'm going to dual cluster for a while in case of issues (so far, so good).
Hey Jim, great video and script again. I'm on my own homelab journey too and your videos have helped me so much, as I'm also a Linux newb (know enough to be dangerous). I'm late to this video because I had some issues with some equipment. I thought I'd just jump in at the deep end with this, as I'd already followed your K3s setup but figured I'd keep up to date. The script worked perfectly after I figured out an issue with something two feet in front of the keyboard: I copied and pasted your script, like you did, into WinSCP, but could not get it to run, with the error message "/bin/sh^M: bad interpreter", until I worked out it needed Unix line endings. Hope you are still using RKE2 as I am following along. Keep up the good work.
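For anyone hitting the same "/bin/sh^M: bad interpreter" error: it comes from Windows (CRLF) line endings being pasted into the script. A minimal sketch of the fix, using `rke2.sh` as a placeholder filename for the pasted install script:

```shell
#!/bin/bash
# Recreate the symptom: a script saved with Windows (CRLF) line endings.
printf '#!/bin/bash\r\necho hello\r\n' > rke2.sh

# Strip the trailing carriage returns in place (same effect as dos2unix).
# Note: GNU sed syntax; on macOS/BSD use: sed -i '' 's/\r$//' rke2.sh
sed -i 's/\r$//' rke2.sh
```

If `dos2unix` is installed, `dos2unix rke2.sh` does the same thing.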
I know that must have taken quite a lot of time, getting that script to work as expected. There are always things that we overlook, hehe... Appreciate all you do; it is very helpful indeed!
You're welcome, yes it took quite a while 😂
Hi Jim, great video, and very high success rates from the looks of the feedback. Although I do have one concern, and that is combining RKE2 & Longhorn all on a single network. I built a K3s/Longhorn cluster and experienced huge performance issues due to Longhorn replication and automatic snapshotting processes. How difficult would it be to segregate the storage network from the RKE2 pod and ingress network? Cheers
I love this series, and it's very good for learning about kubernetes in all shapes and sizes. Excellent to see someone go through it and have an opportunity to play along.
I'm wondering, though: why not create a download-and-run script embedded in an image, like with cloud-init? Have your own GitHub repo host the version of the script that each node needs to run, and then have an image for every master/worker that you can apply and copy. On startup it would fetch the GitHub script and run it on first boot to set itself up within the cluster. This makes everything much more parallel, since the scalability of this script ends if you want to do, say, 10 workers and masters, as you have to wait for each one before going on to the next.
Thanks. End goal is to have ansible which should address your point through the use of parallelism.
Great video!!!😉Jim, are you planning on doing a tutorial of how to deploy RKE2 cluster using an Ansible playbook?
I am, it'll be the climax of the Ansible series
Thanks for the video, I'm really looking forward to deploying it. Do you have any video/guidance on how to setup the SSH certificates to make sure your script works as intended?
It uses SSH keys. You can generate them using ssh-keygen, then copy them to your home directory on the admin server.
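To expand on that: a minimal sketch of generating a passphrase-less key pair with ssh-keygen (the directory, key name, user, and node address below are all placeholders, not from the video):

```shell
#!/bin/bash
set -e
# Demo in a throwaway directory; in practice you would use ~/.ssh.
keydir=$(mktemp -d)

# Generate an ed25519 key pair with no passphrase (-N "").
ssh-keygen -t ed25519 -N "" -f "$keydir/rke2-key" -C "rke2-admin" -q

# Push the public key to each node so the script can SSH non-interactively.
# (Placeholder address; run against your real nodes.)
# ssh-copy-id -i "$keydir/rke2-key.pub" user@192.168.1.101
```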
You are the boss!
Thanks once again for the great videos ❤. A little request: please zoom in more when viewing the scripts (the text, I mean), as I am watching you from mobile 😅. Thanks
Thanks, I'll try to do that. It's difficult as zooming in too much looks bad on PC...
@@Jims-Garage not too much just a little bit
This is cool! Would be nice to automate this with something like ansible as well
Thanks. My plan is to use Jetporch in the near future.
Wow, I can't wait to build a lab to try all of this!
It's a pretty awesome set-up. Hop onto Discord if you need any help 😊
@@Jims-Garage I will download it on the phone and see if I can add the channel.
Hello Jim,
Great video. Do you know if it's possible to change the cluster IP from the default 10.43.x.x to something else, in case that range is already in use on the network?
I don't believe so. However, it's an internal Kubernetes range; it will not conflict with existing external networks (much like how Docker works). You expose services through the load balancer, defining the network range you want to use.
I believe you can alter the internal networks through the cluster.yaml
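For reference, RKE2 does expose flags for the internal ranges at install time via its config file; they must be set before the first server starts (the replacement CIDRs below are examples, not recommendations):

```yaml
# /etc/rancher/rke2/config.yaml - set before the cluster first starts;
# changing these on an existing cluster generally means rebuilding it.
cluster-cidr: "10.52.0.0/16"   # pod network (RKE2 default: 10.42.0.0/16)
service-cidr: "10.53.0.0/16"   # ClusterIP range (RKE2 default: 10.43.0.0/16)
```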
I saw your script and the only thing I could think was; ANSIBLE :)
Ansible is great, I just wanted to do something as simple as possible for people to get started.
I've used Ansible, and while I love the capabilities, I prefer your script as it has a lower bar to execute. Ansible requires learning the syntax and structure, while I already understand scripts well enough.
RKE2 Hype! RKE2 Hype!
Script worked perfectly right away, and yeah took maybe 10 min max to install
How many of Jim's videos do I need to search before I find where he generates the cert files? I have plain old KVM/QEMU, not Proxmox. I can SSH into all of my nodes using SSH keys (passwordless) from the KVM hypervisor host. What sort of cert files are expected?
I simply use the certs generated by Proxmox. You should be able to use the ones you already have (or generate some new ones and use ssh-copy-id; I cover that in my Ansible series).
Caused myself extra problems by using two sets of SSH keys: one from my main PC to the admin VM, and one from the admin VM to the RKE2 cluster nodes. Had to do a round-robin public key authorization on the admin node for the script to work. As I said, my fault. Script worked flawlessly once I figured that out. Only took me 3 months to figure out. 😅
Great, glad to hear that you made it work.
Thanks, Jim, for the informative videos. Does the script work with Red Hat-based OSes?
Not sure, I haven't tested it with Red Hat. Let me know? 😁
Would you be able to create a video showing how to set up RKE2 on a Raspberry Pi cluster?
I would advise against it, RKE2 is too heavy for a Pi IMO
The script also contains MetalLB (not mentioned in the video). What's the reason to include both MetalLB and kube-vip?
Yes, I've added MetalLB since, as kube-vip would not honour the source IP.
Just a quick thought: any reason why this doesn't deploy as an LXC on Proxmox, other than "security"?
Not off the top of my head, although there are many reasons that could interrupt deployment (VMs are fundamentally different to LXCs). I hope to do some testing in future to enable LXCs.
What tools did you use to scan the vulnerabilities?
CIS Benchmark 1.7
Should the local cluster not be left for Rancher management, and a new cluster with workers etc. be deployed separately, so you aren't giving local access to all your services?
In a proper production environment you want to separate clusters. In a homelab I think this is an acceptable tradeoff given most will be running Docker in a single machine.
@@Jims-Garage thanks for your insight :)
Great video, worked first time. I struggled a bit on the first go; I realised the RAM needed to be at least 5 GB and the disk space 30 GB to finish the cluster setup comfortably. My setup is behind pfSense, and I use HAProxy to offload certs and redirect ports to access all apps on the network. However, there is some extra setup needed with MetalLB in BGP mode. I have the pfSense side ready to accept requests from MetalLB using the FRR plugin. But I am not sure how/what to modify in MetalLB to advertise the load balancer IP to pfSense. Any help?
Thanks. The lbrange should be a shared VIP that is dynamically assigned on service request. I haven't tested with OpnSense, but it works out of the gate with Sophos. What have you tried?
@@Jims-Garage I have it fixed and working now; every IP given out by MetalLB now advertises to pfSense. I had to deploy two more config files, BGPAdvertisements.yaml and BGPPeers.yaml, which define all the details, and IPAddressPools.yaml has to be edited to add protocol: BGP. After that everything should work, in case anyone is wondering.
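For anyone following that fix, the MetalLB BGP resources look roughly like this (ASNs, the peer address, and the pool name are placeholders; adjust to your own pfSense/FRR setup):

```yaml
# BGPPeer: tells MetalLB's speakers where the BGP router (pfSense/FRR) is.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: pfsense
  namespace: metallb-system
spec:
  myASN: 64512           # ASN MetalLB announces as (placeholder)
  peerASN: 64513         # pfSense FRR ASN (placeholder)
  peerAddress: 192.168.1.1
---
# BGPAdvertisement: advertise an existing address pool over BGP.
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: lb-pool-bgp
  namespace: metallb-system
spec:
  ipAddressPools:
    - first-pool       # must match an existing IPAddressPool name
```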
@@Jims-Garage What would be the command to expose an app without any certificate? My pfSense HAProxy handles all HTTPS/HTTP offloading for domain pointing. I think the self-signed certificate is the reason why HAProxy doesn't work and I am not able to point any domain to the IP address. Thanks for your help.
@@NoBiggi In the service section of service.yaml you need to specify an IP in the loadBalancerIP range. Then you should be able to access it the same as you would with Docker.
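A minimal Service manifest along those lines (the name, label, ports, and IP are examples; loadBalancerIP must fall inside the MetalLB lbrange):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # example name
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.3.60    # pick an address from your lbrange
  selector:
    app: my-app                   # must match your pod labels
  ports:
    - port: 80                    # port exposed on the LB address
      targetPort: 8080            # port the container listens on
```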
I found that as well; had to up the VMs from 20 to 30 GB. Thank you!
Hi Jim, maybe it's time for Terraform and Ansible to automate creating the VMs :) Or maybe cloud-init templates created by scripts?
Yes, I want to use Terraform and Jetporch soon. Just so much to do...
I like your vids; my Traefik now just works with Docker. Thanks to you!
Next step is Kubernetes :)
@@Jims-Garage Traefik and Docker work great, but what about when I want to add a separate domain with Proxmox, not in Docker? How do I do that with your Traefik template?
Hi, I've been trying out your Cilium version; however, it does not work. The lb-range does not exist in your Cilium config, and the VIP does not get created either. Any fixes for this?
Not yet; that's why it's labelled "do not use". I am going to move to Cilium in the near future.
@@Jims-Garage Oh wow, I don't know how I missed that. Well, thank you, lol. I do hope you'll release a video on it soon.
Hey Jim, thanks so much for the video series, super helpful! I'm having a weird issue with the script, however. It's asking for the password for the admin box while running. It appears to be happening during step 3, at lines 147-149. When I start typing the admin password, it displays the typed text in clear text. Am I missing something obvious here? Testing using all Ubuntu 22.04 server nodes on top of an ESXi cluster.
Actually, correction: I was able to modify the script by installing sshpass on all my nodes and passing through the password during that command during the install. Probably not the "right" way to do it, but it seems to be working now. Strange, haha.
Be sure to remove passwords on the ssh keys.
Yup. From what I'm seeing here, the entire
ssh -tt $user@$master1 -i ~/.ssh/$certName sudo su
block, the entirety of Step 3 (lines 137-151), results in a prompt for the password on the admin box, then echoes that password to the screen, and the ssh -tt section is never executed on master1.
I am trying to run this on Synology Ubuntu VMs, all 6 created from the one image, with names and IPs changed as appropriate. The SSH keys have no passphrase.
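For anyone stuck at that password prompt: the script expects keys without a passphrase. If yours already have one, it can be stripped in place with ssh-keygen rather than regenerating the key. A sketch (run in a throwaway directory here; paths are examples):

```shell
#!/bin/bash
set -e
# Demo: create a key WITH a passphrase, then strip it, which is the
# state the deployment script expects.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "oldpass" -f "$keydir/id_demo" -q

# Remove the passphrase in place: -P is the old passphrase, -N the new
# (empty) one, -f the key file to modify.
ssh-keygen -p -P "oldpass" -N "" -f "$keydir/id_demo" >/dev/null

# Derive the public key without being prompted, proving the private key
# is now unencrypted.
ssh-keygen -y -f "$keydir/id_demo" > "$keydir/check.pub"
```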
Hello Jim, does your script to install RKE2 with Cilium work?
I would like to do some tests, but I am not sure whether it is finished or still a work in progress (since there are some comments about the kube-vip installation without it really being installed).
No, it doesn't work. Still on the to do list
@@Jims-Garage Thanks 👍 Hope it will soon be at the top of the list 😅
About kube-vip: do you think it could make sense to use it at least as a service LB, even with Cilium?
@@crc-error-7968 hoping to do it with Ansible. It should replace kube-vip
@@Jims-Garage Ciao Jim, just one last question to let me better understand. Do you know if, with Cilium, it is possible to assign a VIP for the master nodes (to allow communication between the admin machine and one random master node) as you did in your scripts for the RKE2/K3s installations? Or, to control the cluster from the admin VM, do I still need kube-vip (or something similar)? So Cilium will manage the cloud-provider side of the cluster?
Just ran the script; after 25 mins it ended with "[::1]:8080: connect: connection refused. The connection to the server localhost:8080 was refused - did you specify the right host or port?"
Sounds like there's an issue with your kubeconfig. Can you run kubectl on one of the nodes?
Also, what OS?
@@Jims-Garage I ran kubectl get nodes on master1 and I get this error: "Command 'kubectl' not found, but can be installed with: sudo snap install kubectl". I've been trying this since yesterday afternoon; after I checked your GitHub I thought I was doing something wrong, so I waited for the video. Still the same error. I even spun up new nodes at least 3 different times.
@@addesigns2121 hop on Discord so I can see some error messages. Sounds like something quite simple as it appears the script is failing
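Both symptoms in this thread ("kubectl not found" and "connection refused on localhost:8080") usually mean kubectl is being run without RKE2's bundled binary and kubeconfig on the PATH. On an RKE2 server node the standard locations are:

```shell
# Run on an RKE2 server node (these paths are RKE2's defaults).
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# rke2.yaml is root-readable by default, so use sudo (or copy the file
# to your user and chown it).
kubectl get nodes
```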
0po82.00 98😊ppoooo😊😊😊pp8😊
Btw, how much CPU and RAM did you finally give to RKE2? Looks like it is more resource-intensive.
It is. CPU about the same, but about 50% more RAM in my experience.
@@Jims-Garage Thanks. I was following your script to install Rancher, but somehow Rancher got installed on only a worker node, while I wanted to install it on the master nodes instead. Is there a way to specify some parameters to make Rancher only live on the master nodes? Thanks a lot!
@@looper6120 yes, remove the non-schedulable tag
@@Jims-Garage Got it, thanks! But removing the tag would allow all pods to get moved to the masters as well.
I kinda just want Rancher to be on the masters; I was trying to play with the taint and toleration stuff but no luck yet... not sure if I'm doing it wrong.
@@looper6120 watch my videos again. Workers are tagged with worker=true and deployments reference this.
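To pin a single workload such as Rancher to the masters without opening them up to every pod, the usual approach is a toleration plus a nodeSelector on just that deployment. A sketch, assuming the masters carry the common CriticalAddonsOnly taint and the standard control-plane role label (adjust to whatever your nodes actually carry):

```yaml
# Pod-spec fragment for one deployment: tolerate the masters' taint and
# schedule only onto control-plane nodes. Other pods, lacking the
# toleration, still stay off the masters.
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: "true"
  tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
```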
Why use a script? Why not Ansible? I know bash scripts are bread and butter for us, but Ansible is clean and idempotent.
That's why I'm doing the Ansible series now. The script helps people understand what is happening.
Certificates? You mean SSH keys, specifically the public key.
Not sure exactly what part you are referring to (you might be right). SSH keys are certificates though.
@@Jims-Garage sure, but no one calls them certificates. They are typically referred to as keys or collectively as a key pair. This is most likely where some of the viewers confusion is coming from.
@@jsross33 Fair enough, good to hear some feedback. I'll be sure to explain terms clearly in future to avoid possible confusion.
@@Jims-Garage That did confuse me also. But yes, SSH keys, got it.
Can I suggest changing the following line as indicated (to pick up the actual certName)?
Current: ssh-copy-id $user@node
Changed: ssh-copy-id -i $certName $user@node
Thanks, I think that might be updated already on GitHub, I'll double check
@@Jims-Garage It wasn't a few hours ago when I copied the script.
@@SMBICommunity OK, in that case I'll take a look - thanks
Homelab -> HashiCorp Nomad
I have seen that, I'll try to visit in the near future. I don't believe it has the security credentials of RKE2 though.
@@Jims-Garage By default you're right. But to be honest, by default the credential management of Kubernetes is poor too. You end up using Vault on both platforms, and that is the same level of security, am I wrong? By the way, you can use Boundary to get more security in Nomad.
Thanks for the demo and info, have a great day
Cheers, have a good one.