Did you try Talos? Did you consider using it?
IMPORTANT: For reasons I do not comprehend (and Google support could not figure out), YouTube tends to delete comments that contain links. Please do not use them in your comments.
Rancher until now. I'll take it for a spin.
I love Talos. It has been a great replacement for Kubic and k0s.
I tried it on local virtual machines. I'm kind of a beginner, though, and failed on two points:
- wasn't able to set up an NFS storage class (an OS dependency was missing, and since the OS is immutable I couldn't install it)
- wasn't able to set up a load balancer for external access to my apps (probably due to being a beginner)
Switched to k3s for now until I have more experience. Cool project, though.
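[Editor's note: for the NFS issue above, one common workaround on immutable operating systems is the Kubernetes CSI NFS driver (csi-driver-nfs), whose node plugin ships its own mount tooling instead of relying on host packages. A minimal StorageClass sketch, assuming a hypothetical NFS server at 192.168.1.10 exporting /srv/nfs:]

```yaml
# StorageClass for the csi-driver-nfs provisioner.
# The server address and export path below are hypothetical placeholders;
# the driver itself must already be installed in the cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.1.10   # hypothetical NFS server
  share: /srv/nfs        # hypothetical export path
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

With this in place, PersistentVolumeClaims that reference `storageClassName: nfs-csi` are provisioned as subdirectories of the export.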
@@m.sierra5258 I don't think there is a good use case that would justify Talos as a solution for local Kubernetes clusters, and it has questionable value as a replacement for managed k8s. It shines (or could shine) as a solution for self-managed Kubernetes (e.g., on-prem, or for those who do not want managed k8s services).
That being said, storage and load balancers are painful to set up no matter which self-managed solution one chooses. Those are among the reasons I prefer managed k8s. It works out of the box.
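[Editor's note: for the load-balancer half of that pain, MetalLB is the usual answer for `LoadBalancer` Services on bare metal or local VMs. A minimal sketch of its CRD-based configuration (MetalLB 0.13+), where the address range is a hypothetical free range on the local network:]

```yaml
# MetalLB layer-2 configuration. Assumes MetalLB >= 0.13 is already
# installed in the metallb-system namespace; the address range is a
# hypothetical placeholder and must be free on your LAN.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

Once applied, Services of type `LoadBalancer` receive an external IP from the pool and are announced via ARP on the local network.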
@@DevOpsToolkit thanks, appreciate the advice :)
Appreciate the video! You're completely correct about the documentation - it's being worked on as a priority now! (And PRs are appreciated!)
We have been using Talos in our environment with 100+ workers for about a year, and it has solved some pretty big problems we had with kubespray-deployed clusters.
Thanks for another great presentation, Viktor. Many moons ago I was involved in working with OpenShift running on Fedora Atomic Linux. It seemed like a great idea at the time, until it wasn't! The very things it achieved (security, a read-only filesystem, etc.) ended up causing a fair amount of frustration among those assigned to manage it. Having said that, we were all finding our k8s wings back then ;-)
I personally ended up settling on kOps after employing a myriad of different day-1 deployment options, but I will definitely take Talos for a spin in my scratch environment.
The pros of Talos reminded me of NixOS, but once you go into using the OS... Slick!
Stumbled upon Talos back in 2020 when they released 0.7. The idea and the project are really cool, and I am planning to use it for my home lab project. Can't wait to see what they will do when they reach 1.0.0.
1.0.0 should be coming out this week, so you won't have to wait long!
This video is an instant keeper. Thank you!
Great! I was waiting for this one :)
You are the best!
Very cool! Like CoreOS, but it lacks CRDs for extending the Kubernetes API and creating objects like MachineSet and MachineConfig to create nodes. Great video!!!! (As always!!!)
True. No Cluster API or anything similar that would allow us to manage hardware as well.
@@DevOpsToolkit That is exactly what Sidero Metal does - it's a cluster API provider for bare metal that manages the complete hardware/K8s lifecycle...
Actually, there is a Talos provider for Cluster API; I'm currently testing it myself.
@@wollginator That's great to know. I wasn't aware of it.
I like the overlay CLI screen.
Thanks
I personally use kOps for k8s cluster creation, and I have created quite a few clusters with it so far. Very convenient, and the product keeps improving over time. This one looks like too much manual work, to be honest. kOps even supports templates using Go templates, very similar to Helm, which makes provisioning multiple clusters consistent and easy. It also comes with addons, if you don't want to install cert-manager, an ingress controller, the metrics server, and so on yourself.
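[Editor's note: as an illustration of the Go templating mentioned above, kOps can render a cluster spec from a template plus a values file via `kops toolbox template`. A heavily trimmed, illustrative fragment; the field values and file names are placeholders:]

```yaml
# cluster.tmpl.yaml -- illustrative kOps template fragment.
# Rendered with something like:
#   kops toolbox template --template cluster.tmpl.yaml --values values.yaml
# All template variables here are hypothetical names.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: {{ .clusterName }}.{{ .dnsZone }}
spec:
  kubernetesVersion: {{ .kubernetesVersion }}
  networkCIDR: {{ .networkCIDR }}
```

The same template rendered against different values files is what keeps multiple clusters consistent.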
I am using RKE to bootstrap Kubernetes clusters. It does not use kubeadm or any of those, plus it has a good Terraform provider.
I love Talos!!!!
Thanks. Talos could be great for Kubernetes on edge clusters... Harvester from Rancher uses K3OS, which is a very tiny Linux distribution built basically for Kubernetes.
I don't see the advantages of Talos over K3OS, which is more mature. That ISO is up to 500 MB already, indicative of its maturity. Talos is tiny (~70 MB) because there isn't much in it yet. So it's a bit like reinventing what is already out there, but their approach appears to be creating a control node from which a fleet of servers running Talos can be managed.
K3OS is abandoned, with the last release made over a year ago.
Talos is tiny by design. It's meant to contain only what is really needed to run Kubernetes, as opposed to "traditional" operating systems. That is not an indication of maturity but a design choice.
Hey Viktor, I would love to see you create a video about Grafana Labs' Tanka. What are your thoughts about it, and how does it compare to Helm and Kustomize?
Good idea. Adding it to my TODO list... :)
Done: th-cam.com/video/-qpcsUXElYc/w-d-xo.html
@DevOpsToolkit So ClusterAPI, Crossplane and Talos seem like a likely combination for an IDP PaaS?
CAPI can bootstrap Crossplane, and then Crossplane can bootstrap Talos, wouldn't this become a golden path to an IDP/PaaS?
I'm not sure that you need CAPI to bootstrap Crossplane. Talos could become one of the Crossplane providers, though.
@@DevOpsToolkit Yes, I concur.
What if I already have a cluster? Can I join my cluster to Talos, or can we mix the two?
My best guess is that you cannot do that, but I haven't tried that scenario so I might be wrong.
You have skipped ROSA (OCP on AWS, fully managed: masters and workers) and ARO (OCP on Azure, fully managed: masters and workers), which come with CoreOS as the host OS.
Indeed. That video is focused only on Talos. One of these days, I should do videos on those as well.
12:29 Is it a bit disingenuous to say you are not doing anything except creating a new node and it's intelligent enough to join the cluster, when you are passing in worker.yaml as the userdata? All the logic for adding the node to the cluster is in that file.
You're right. I should have been clearer on that one.
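[Editor's note: for context, the worker.yaml in question is a Talos machine configuration generated by `talosctl gen config`, and it does carry the join details. A heavily trimmed, illustrative shape; the endpoint and tokens below are placeholders, and real generated files also contain certificates:]

```yaml
# worker.yaml (trimmed illustration). Real files are generated with
#   talosctl gen config <cluster-name> https://<endpoint>:6443
# and include PKI material; every value here is a placeholder.
version: v1alpha1
machine:
  type: worker
  token: <machine-token>          # placeholder join token
cluster:
  controlPlane:
    endpoint: https://<endpoint>:6443   # placeholder control-plane address
  token: <bootstrap-token>        # placeholder cluster token
```

So the "intelligence" is really the node booting with a config that already points it at the control plane.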
How is Talos different from K3OS?
K3os is mostly abandoned. The last commit was pushed over a year ago.
Sounds very similar to openSUSE's Kubic, except that it doesn't have a cool CLI like talosctl.
Viktor, did you try doks_debug? :)
Talos is an in-memory OS. When upgrading, it switches from one in-memory to another in-memory OS.
I haven't (yet) tried doks_debug :(
k3sup accomplishes the same.
Actually, this is a combination of k3sup and K3OS. Could you elaborate on the differences between Talos and K3OS?
They're different. K3sup bootstraps Kubernetes on top of an OS while Talos is an OS with Kubernetes baked in.
@@DevOpsToolkit slight tradeoffs. Will take it for a spin. Thanks for sharing it.
@@rodrigito78 I am curious as to why you consider the tradeoffs "slight." (Wondering what we can do better).
@@andrewrynhard1926 giving up access to the underlying OS is one major tradeoff. Another would be losing the ability to install agents and ship system logs to a central syslog server (mandatory in highly regulated on-prem environments).
But this is definitely a next-level immutable approach to K8s, for sure. Toil does shift a bit, from using configuration management (Ansible) to managing OS images as code using Packer and whatever provisioner.
Any progress on Longhorn ?
Can you elaborate on how Talos differs from K3OS? Is it the control node where talosctl runs and manages the Talos fleet?
What is that mentioned that it runs on me? Does it ONLY run on me? Really? It seems quite weird
Not sure I follow what you mean by 'me'.
@@andrewrynhard1926 heh, sorry, phone autocorrect. It was mem, that it runs on RAM
@@rodrigocc_rata Ah gotcha. Talos is a ~40 MB squashfs and is unpacked into RAM. The core of Talos is all reproducible, so it can be "ephemeral". There is one partition in Talos where data survives reboots, and that is mounted at /var. On upgrades we wipe the disk entirely, so we call that partition "EPHEMERAL". That partition is where etcd, containerd, and Kubernetes store their data. Does that help?
@@andrewrynhard1926 yes, thanks! So, you got me more curious now. Two more questions if you don't mind:
* How does Talos boot again if I reboot or there's a power outage or something? Do I need to iPXE boot every time, or boot from a pen drive?
* How do you upgrade? Do you reboot and start a new version of Talos, or do it without a reboot?
@@rodrigocc_rata Good question. We still have a boot loader (GRUB). Talos is just a vmlinuz + initramfs.xz. So I suppose we still have another persistent partition, but that is for the boot loader. We do have yet another partition that is writable, but it is only for the configuration of Talos itself.
Meh. No ingress, no volumes. Not much of an improvement. The base install of kubernetes is easy.
That's the problem that all similar tools designed to be on-prem first are facing. Since you do not know what the underlying infra is, you cannot design a solution that works everywhere. That's why managed Kubernetes solutions give a much better first-run experience.
That being said, Talos should do better on that front. Since it uses images already created for specific vendors, those images should have basics like ingress and storage drivers baked in. The situation is similar with, let's say, Rancher RKE. It does not come with those either, and it should. So yeah, I agree with you. The first-run experience should include more stuff instead of keeping it as addons.
@@DevOpsToolkit over the last couple of days I've been trying to install Kubernetes on bare metal, and I still don't have a solution for volumes. Kubernetes itself has nothing out of the box. So now I'm looking into Gluster. And to get the load balancing to work, I had to get help from a GitHub issue that told me to add a config on line 500 or so of some YAML file I needed to run. This stuff needs to be easier.
@@jurgen9568 I agree. It definitely needs to get easier, especially now that we experienced how good it can be when using managed Kubernetes services like GKE, EKS, AKS, etc.
@@jurgen9568 For storage you could take a look at Rook, Longhorn, or OpenEBS; they manage replication of volumes across the Kubernetes nodes and more.
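[Editor's note: as a sketch of the Longhorn option mentioned above, once Longhorn is installed (e.g. via its Helm chart) dynamic provisioning comes down to a StorageClass; the parameter values below are illustrative choices, not requirements:]

```yaml
# StorageClass backed by Longhorn. Assumes Longhorn is already installed
# in the cluster; parameter values are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"        # replicas are spread across nodes
  staleReplicaTimeout: "2880"  # minutes before a failed replica is cleaned up
reclaimPolicy: Delete
```

PVCs referencing `storageClassName: longhorn-replicated` then get replicated block volumes without any external storage appliance.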
@@jurgen9568 We used to have GlusterFS and it was always a pain. We switched to a professional NetApp storage solution with NFS volumes, which also offers CSI via Trident. That was much more stable. Rook and Longhorn sound interesting, but I haven't had a look yet.