Excellent as always!
I agree with this; it lines up with my experience working with ArgoCD too.
What I'd like to know is how you use CUE to manage cluster configurations (if you still do). I'm curious how you a) combine it with ArgoCD and b) layer your CUE files to reduce repetition and handle differences between environments (e.g., 1 replica in dev vs. 3 in prod) or during promotion of changes.
That's a good one. I'll add it to the list of questions I'll publish in a format like this video (probably in 3-4 weeks).
GitOps pull vs. push. Push (a single Argo) in the sense of central management capabilities and control. Pull (multi-Argo) in the sense of domain segregation and distributed management. It all depends on how you want to address your scale needs versus your operational roles and responsibilities.
Great insights, thanks!
An argument I've seen in favor of a single Argo/Flux instance on the control plane cluster is the ability to centralize the source of truth for everything a team requests, not just workloads running on Kubernetes. That way, the control plane cluster's etcd basically becomes the database for the developer portal, or for anything happening on the platform, be it workflow processes or actual workloads. I feel the two approaches and their distinct tradeoffs will keep competing, without a clear winner in the short term.
I agree with that, with a small correction. Manifests in Git or records in etcd are not the source of truth. They are the desired state. The truth is the actual state, that is, the actual resources. A Kubernetes Deployment is the desired state, a means to manage the actual state, which is the containers in Pods. The same can be said for most other resources.
We have a single monorepo and a single Argo CD instance to manage around 20 clusters (across multiple environments), which gives us around 1000 apps. We use ApplicationSets with cluster and git generators to dynamically deploy different stuff to different clusters based on labels and file location within our monorepo. We've had no issues so far, neither technical nor social.
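For anyone unfamiliar with that pattern, here is a minimal sketch of an ApplicationSet that combines a cluster generator with a git directory generator in a matrix; the repo URL, cluster label, and paths are made up for illustration and are not taken from the comment above.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-apps
  namespace: argocd
spec:
  generators:
    # Matrix: every app directory in the monorepo, combined with every
    # registered cluster that carries a matching label.
    - matrix:
        generators:
          - git:
              repoURL: https://example.com/org/monorepo.git   # hypothetical repo
              revision: main
              directories:
                - path: apps/*
          - clusters:
              selector:
                matchLabels:
                  env: production                             # hypothetical label
  template:
    metadata:
      name: '{{path.basename}}-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/org/monorepo.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: '{{server}}'
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

Each app directory becomes one Application per matching cluster, which is what makes the "labels plus file location" routing work.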
@iUnro That's interesting. I've had colleagues tell me about experiences with noisy neighbours effectively freezing ArgoCD. Do you pull in configuration from other teams in your setup, or just from your own team?
I tend to split by environments rather than teams.
There can be performance issues, especially with etcd. The fix is sometimes on the control plane nodes rather than in Argo CD itself.
How is the UI experience? Sounds like a nightmare with 1000 apps
It greatly depends on how you organize Argo CD Application resources.
@DevOpsToolkit Can you group applications in the UI?
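For context (not part of the thread): the Argo CD UI filters the application list by project and by labels on Application resources, so grouping largely comes down to how those are assigned. A minimal sketch with hypothetical names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: billing-api
  namespace: argocd
  labels:
    team: payments          # hypothetical label to filter on in the UI
    env: production
spec:
  project: payments         # AppProject acting as a coarse-grained group
  source:
    repoURL: https://example.com/org/monorepo.git
    targetRevision: main
    path: apps/billing-api
  destination:
    server: https://kubernetes.default.svc
    namespace: billing
```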
I have 1 ArgoCD per cluster (~50 clusters).
I made a Crossplane composition to bring up all resources necessary to configure SSO for each ArgoCD (and any other app needing SSO).
I then use a central ArgoCD to deploy ArgoCDs into the individual “child” clusters, along with the claim for the SSO composition. Each cluster then deploys its own apps.
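A rough sketch of what that central-to-child wiring could look like; the chart version, cluster URL, and the claim's group/kind are hypothetical and depend on the actual XRD, so treat this as an illustration rather than the commenter's setup.

```yaml
# In the central Argo CD: install Argo CD itself into a child cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd-child-01
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: 7.3.3                             # hypothetical chart version
    helm:
      values: |
        configs:
          cm:
            url: https://argocd.child-01.example.com  # hypothetical URL
  destination:
    server: https://child-01.example.com:6443         # hypothetical child cluster
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
# Claim for the Crossplane Composition that wires up SSO for that Argo CD.
# Group, kind, and fields are hypothetical; they depend on the XRD.
apiVersion: platform.example.com/v1alpha1
kind: SSOClaim
metadata:
  name: argocd-child-01-sso
spec:
  application: argocd
  cluster: child-01
```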
Nice!
Hi, Viktor!
One question I have is how we can properly manage multi-cluster, multi-tenant, multi-environment setups using GitOps.
For some context, I'm using FluxCD and I'm currently migrating from one cluster that was doing everything to dedicated production, preproduction, and development clusters.
For prod and preprod, they're both syncing the same manifests, yet preprod has to somehow auto-update itself when a new image is released, while keeping prod the same (we do manual promotion for those).
Then on the development cluster we have what I'd consider 3 tenants, or rather 3 groups of apps:
- CI apps (GitHub runners)
- dev apps (these would be 3rd-party applications that developers might need, for example Jenkins or Artifactory)
- dev environments for developers
Everything is currently in a single monorepo, since we previously only had one cluster, and I'm curious to see how you'd architect this!
Thanks in advance!
I would have a separate Flux manifest for each environment and, in that manifest, I would overwrite whatever differences there should be between envs. So the manifests of the application would be the same, and the Flux manifests would act in a similar way to Helm's values.yaml. In your case, that would mean changing the tag.
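A minimal sketch of that idea with Flux Kustomization resources, assuming made-up paths, image names, and tags: both environments sync the same application manifests, and only the per-environment Kustomization differs, here by patching the image tag.

```yaml
# preprod: same source path as prod, different tag (this is the value an
# automated process, e.g. Flux image automation, would bump).
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app-preprod
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: monorepo
  path: ./apps/my-app                            # shared manifests
  prune: true
  patches:
    - target:
        kind: Deployment
        name: my-app
      patch: |
        - op: replace
          path: /spec/template/spec/containers/0/image
          value: ghcr.io/example/my-app:1.4.2    # hypothetical preprod tag
---
# prod: identical except for the tag, bumped only during manual promotion.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app-prod
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: monorepo
  path: ./apps/my-app
  prune: true
  patches:
    - target:
        kind: Deployment
        name: my-app
      patch: |
        - op: replace
          path: /spec/template/spec/containers/0/image
          value: ghcr.io/example/my-app:1.3.9    # hypothetical prod tag
```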
This is how I do it as well: a dedicated Argo CD per EKS cluster.
I'd like to see a refreshed look at Kargo v1, and how to configure it for cross-cluster promotion when there is an ArgoCD instance per cluster. (I don't think this is a trick question.)
On it.
Here it goes: th-cam.com/video/RoY7Qu51zwU/w-d-xo.html
Kargo 1.0 is en route... Thoughts?
It's maturing, and that's great news.
Here it goes: th-cam.com/video/RoY7Qu51zwU/w-d-xo.html
Hi Viktor,
I have a question regarding Crossplane & co. How are backup, disaster recovery, and security handled in this case?
Also, have you faced a situation where, when a resource is updated, it must be destroyed?
Thanks in advance
Backup, DR, security, etc. work with Crossplane in the same way they work with any other type of Kubernetes resource. That's the beauty of it. It is Kubernetes-native, meaning that all the solutions you might already use with other resources work with Crossplane as well.
You can, for example, use Argo CD as a solution for backups, or Velero if you prefer a more traditional approach. Security is partly solved with GitOps since you can lock down your cluster and deny access to anyone or anything. There is Kyverno for policies that solve other types of security issues. And so on and so forth.
The point is that there is no need to think about those things as Crossplane issues, but rather as Kubernetes issues in general.
As for resource update/destroy... It all depends on the API on the other end. Crossplane will always send instructions to the destination API to update a resource if the desired state differs from the actual one. Now, whether invocation of that API endpoint will result in deletion followed by creation is beyond Crossplane's control (or that of any other tool that talks to that API).
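As one concrete illustration of "treat it as a Kubernetes problem, not a Crossplane problem", a Velero backup schedule covering Crossplane resources looks like any other backup; the name and schedule below are hypothetical.

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: crossplane-nightly            # hypothetical name
  namespace: velero
spec:
  schedule: "0 2 * * *"               # every night at 02:00
  template:
    includedNamespaces:
      - "*"
    includeClusterResources: true     # Crossplane XRs and managed resources are cluster-scoped
    ttl: 168h                         # keep backups for a week
```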
Would you also install ArgoCD instances in each cluster if you also have a management Kubernetes cluster, or would you then use the management cluster?
If it's a management cluster (e.g., Crossplane), then having ArgoCD in each cluster would be a part of the Composition that created that cluster, so the overhead of managing Argo CD instances is even smaller (even though it wasn't big in the first place).
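A rough sketch of what baking Argo CD into such a Composition can look like, using provider-helm; the composite type, chart version, and patch target are hypothetical and only meant to show the shape of it.

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: cluster-with-argocd
spec:
  compositeTypeRef:
    apiVersion: platform.example.com/v1alpha1       # hypothetical XRD
    kind: XCluster
  resources:
    # ... resources that create the cluster itself (control plane, node pools, etc.)
    - name: argocd
      base:
        apiVersion: helm.crossplane.io/v1beta1
        kind: Release
        spec:
          forProvider:
            chart:
              name: argo-cd
              repository: https://argoproj.github.io/argo-helm
              version: 7.3.3                        # hypothetical chart version
            namespace: argocd
      patches:
        # Point the Helm provider at a ProviderConfig named after the new cluster.
        - fromFieldPath: metadata.name
          toFieldPath: spec.providerConfigRef.name
```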
Could you give your opinion on k0s vs k3s for small on-prem deployments? I was wondering if I should migrate from k3s.
Unfortunately, I rarely run on-prem, so I might not be the best person to answer that question. In the past, I was very happy and impressed with Talos but, since I'm not doing much on-prem, I haven't followed it closely, so I'm not sure how good it is today or how it compares with k0s and k3s.
Why not k8s? vCluster, for example, supports all of them; it originally defaulted to k3s but now defaults to k8s.
We use k8s for on-prem as well. That lets cloud prod, on-prem prod, and test be 99% the same.