GitOps: How Many GitOps (ArgoCD) Instances Are Recommended for Multiple Environments?

  • Published Dec 11, 2024

Comments • 35

  • @cedriclamalle
    @cedriclamalle a month ago +1

    Excellent as always!

  • @DryBones111
    @DryBones111 a month ago +4

    I agree with this, lines up with my experiences working with ArgoCD too.
    What I'd like to know is how you use CUE to manage cluster configurations (if you still do). I'm curious to know a) how you combine it with ArgoCD and b) how you layer your CUE files to reduce repetition and configure differences between environments (e.g., 1 replica in dev vs 3 in prod) or during promotion of changes.

    • @DevOpsToolkit
      @DevOpsToolkit a month ago +5

      That's a good one. I'll add it to the list of questions I'll answer in a video like this one (probably in 3-4 weeks).

  • @fenarRH
    @fenarRH a month ago +2

    GitOps pull vs push: push (a single Argo) in the sense of central management capabilities and control; pull (multi-Argo) in the sense of domain segregation and distributed management. It all depends on how you want to address your scale needs versus operational roles and responsibilities.

  • @ChewieBeardy
    @ChewieBeardy a month ago +1

    Great insights, thanks!
    An argument I've seen in favor of a single Argo/Flux instance on the control plane cluster is the ability to centralize the source of truth for everything that a team requests, not just workloads running on Kube. That way, the control plane cluster's etcd basically becomes the database for the developer portal, or anything happening with the platform, be it workflow processes or actual workloads. I feel the two approaches and their distinct tradeoffs will keep competing going forward without a clear winner in the short term.

    • @DevOpsToolkit
      @DevOpsToolkit a month ago

      I agree with that, with a small correction. Manifests in Git or records in etcd are not the source of truth; they are the desired state. The truth is the actual state, meaning the actual resources. A Kubernetes Deployment is the desired state, which represents the means to manage the actual state, which is the containers in Pods. The same can be said for most other resources.
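
A minimal illustration of that distinction, with hypothetical names: the Deployment manifest below is the desired state (kept in Git and, once applied, in etcd), while the Pods the Deployment controller creates from it are the actual state.

```yaml
# Desired state: what we declare in Git and apply to the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                # hypothetical name
spec:
  replicas: 3                                 # we *want* three replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: ghcr.io/example/my-app:1.2.3 # hypothetical image
# Actual state: the Pods the controller creates from this declaration.
# `kubectl get pods -l app=my-app` shows what is really running, which can
# temporarily (or, if something is broken, permanently) differ from the above.
```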

  • @iUnro
    @iUnro a month ago +5

    We have a single monorepo and a single Argo CD instance managing around 20 clusters (across multiple environments), which gives us around 1,000 apps. We use ApplicationSets with cluster and git generators to dynamically deploy different things to different clusters based on labels and file location within our monorepo. We've had no issues so far, neither technical nor social. (A sketch of this kind of setup follows the replies below.)

    • @DryBones111
      @DryBones111 a month ago +1

      @iUnro That's interesting. I have had colleagues tell me about experiences with noisy neighbours effectively freezing ArgoCD. Do you pull in configuration from other teams in your setup, or just from your own team?

    • @DevOpsToolkit
      @DevOpsToolkit a month ago +2

      I tend to split by environments rather than teams.
      There can be performance issues, especially with etcd. The fix is sometimes on the control-plane nodes rather than Argo CD itself.

    • @pabloaltobelli8506
      @pabloaltobelli8506 28 days ago +1

      How is the UI experience? It sounds like a nightmare with 1,000 apps.

    • @DevOpsToolkit
      @DevOpsToolkit 28 days ago

      It greatly depends on how you organize Argo CD Application resources.

    • @pabloaltobelli8506
      @pabloaltobelli8506 28 days ago +1

      @DevOpsToolkit You can group applications in the UI?
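
A rough sketch of the kind of setup described at the top of this thread: a single ApplicationSet whose matrix generator combines a cluster generator (targeting clusters by label) with a git generator (one directory per app in the monorepo). The repo URL, labels, and paths are hypothetical, not the commenter's actual configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monorepo-apps                  # hypothetical
  namespace: argocd
spec:
  generators:
    - matrix:
        generators:
          # 1) Clusters registered in Argo CD, selected by label.
          - clusters:
              selector:
                matchLabels:
                  env: dev             # hypothetical label
          # 2) One entry per app directory in the monorepo.
          - git:
              repoURL: https://github.com/example/platform-monorepo  # hypothetical
              revision: main
              directories:
                - path: apps/*
  template:
    metadata:
      # One Application per (cluster, directory) combination.
      name: '{{name}}-{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/platform-monorepo
        targetRevision: main
        path: '{{path}}'
      destination:
        server: '{{server}}'
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

On the grouping question just above: the Argo CD UI can at least filter Applications by project, cluster, namespace, and label, so consistent, generated names and labels (as in the template above) are what keep a large number of Applications navigable.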

  • @Dalswyn
    @Dalswyn a month ago +2

    I have 1 ArgoCD per cluster (~50 clusters).
    I made a Crossplane composition to bring up all resources necessary to configure SSO for each ArgoCD (and any other app needing SSO).
    Then, I use a central ArgoCD to deploy ArgoCDs in the other individual “child” clusters along with the claim for the SSO composition. From there, each cluster deploys its own apps. (See the sketch after this thread.)

    • @yol1982
      @yol1982 a month ago +1

      Nice!
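
A minimal sketch of the pattern described above, assuming the child clusters are already registered in the central Argo CD and that the SSO composition exposes a claim kind along the lines of `SSOClaim` (the API group, kind, fields, and names here are hypothetical):

```yaml
# On the central Argo CD: one Application per child cluster that installs
# that cluster's own Argo CD from the official Helm chart.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: child-1-argocd                          # hypothetical
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-cd
    targetRevision: 7.7.0                       # pin whichever chart version you use
  destination:
    server: https://child-1.example.com:6443    # hypothetical child cluster
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
---
# Alongside it, a claim for the (hypothetical) Crossplane SSO composition,
# applied where Crossplane runs, so the child Argo CD gets SSO configured.
apiVersion: platform.example.org/v1alpha1       # hypothetical XRD group
kind: SSOClaim                                  # hypothetical claim kind
metadata:
  name: child-1-argocd-sso
spec:
  cluster: child-1
  application: argocd
```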

  • @mirceanton
    @mirceanton a month ago +2

    Hi, Viktor!
    One question I have is how we can properly manage multi-cluster, multi-tenant, multi-environment setups using GitOps.
    For some context, I'm using FluxCD and I'm currently migrating from one cluster that was doing everything to dedicated production, preproduction, and development clusters.
    For prod and preprod, they're both syncing the same manifests, yet preprod has to somehow auto-update itself when a new image is released, while keeping prod the same (we do manual promotion for those).
    Then on the development cluster we have what I'd consider 3 tenants, or rather 3 groups of apps:
    - CI apps (GitHub runners)
    - dev apps (these would be 3rd-party applications that developers might need, for example Jenkins or Artifactory)
    - dev environments for developers
    Everything is currently in a single monorepo, since we previously only had 1 cluster, and I'm curious to see how you'd architect this!
    Thanks in advance!

    • @DevOpsToolkit
      @DevOpsToolkit a month ago

      I would have a separate Flux manifest for each environment, and in that manifest I would overwrite whichever differences there should be between environments. So, the manifests of the application would be the same, and the Flux manifests would act in a similar way to Helm's values.yaml. In your case, that would mean changing the tag.
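
A minimal sketch of what that could look like with Flux, assuming a shared `apps/my-app` directory in the monorepo (the repository name, paths, image, and tags are hypothetical): two Kustomization objects point at the same manifests and only override the image tag per environment. Automatic updates for preprod could then come from Flux's image automation controllers writing new tags back to Git, while prod's tag only changes on manual promotion.

```yaml
# Preprod: same manifests as prod, but the tag follows new releases.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app-preprod                 # hypothetical
  namespace: flux-system
spec:
  interval: 5m
  prune: true
  sourceRef:
    kind: GitRepository
    name: monorepo
  path: ./apps/my-app                  # shared application manifests
  images:
    - name: ghcr.io/example/my-app
      newTag: 1.4.0-rc.2               # updated automatically
---
# Prod: identical except for the pinned tag, changed only on promotion.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app-prod                    # hypothetical
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  sourceRef:
    kind: GitRepository
    name: monorepo
  path: ./apps/my-app                  # same manifests as preprod
  images:
    - name: ghcr.io/example/my-app
      newTag: 1.3.7                    # pinned until manually promoted
```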

  • @civilapalyan6253
    @civilapalyan6253 a month ago +2

    This is how I also do it: a dedicated Argo CD per EKS cluster.

  • @joebowbeer
    @joebowbeer a month ago +1

    I'd like to see a refreshed look at Kargo v1, and how to configure it for cross-cluster promotion when there is an ArgoCD instance per cluster. (I don't think this is a trick question.)

    • @DevOpsToolkit
      @DevOpsToolkit a month ago

      On it.

    • @DevOpsToolkit
      @DevOpsToolkit 16 days ago +1

      Here it goes: th-cam.com/video/RoY7Qu51zwU/w-d-xo.html

  • @fpvclub7256
    @fpvclub7256 a month ago +4

    Kargo 1.0 is en route... Thoughts?

    • @DevOpsToolkit
      @DevOpsToolkit a month ago

      It's maturing, and that's great news.

    • @DevOpsToolkit
      @DevOpsToolkit 16 days ago

      Here it goes: th-cam.com/video/RoY7Qu51zwU/w-d-xo.html

  • @AnasSAHEL
    @AnasSAHEL 2 days ago +1

    Hi Viktor,
    I have a question regarding Crossplane & co. How are backup, disaster recovery, and security handled in this case?
    Also, have you faced a situation where, when a resource is updated, it must be destroyed?
    Thanks in advance!

    • @DevOpsToolkit
      @DevOpsToolkit a day ago

      Backup, DR, security, etc. work with Crossplane in the same way they work with any other type of Kubernetes resource. That's the beauty of it. It is Kubernetes-native, meaning that all the solutions you might already use with other resources work with Crossplane as well.
      You can, for example, use Argo CD as a solution for backups, or Velero if you prefer a more traditional approach. Security is partly solved with GitOps since you can lock down your cluster and deny access to anyone or anything. There is Kyverno for policies that solve other types of security issues. And so on and so forth.
      The point is that there is no need to think about those things as Crossplane issues but rather as Kubernetes issues in general.
      As for resource update/destroy... it all depends on the API on the other end. Crossplane will always send instructions to the destination API to update a resource if the desired state differs from the actual state. Now, whether invoking that API endpoint will result in deletion followed by creation is beyond the control of Crossplane (or of any other tool that talks to that API).

  • @chrisre2751
    @chrisre2751 a month ago +1

    Would you still install an ArgoCD instance in each cluster if you also have a management Kubernetes cluster, or would you then use the management cluster?

    • @DevOpsToolkit
      @DevOpsToolkit a month ago

      If it's a management cluster (e.g., Crossplane), then having ArgoCD in each cluster would be a part of the Composition that created that cluster, so the overhead of managing Argo CD instances is even smaller (even though it wasn't big in the first place).
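
A fragment of what such a Composition could contain, sketched in the classic `resources` mode and assuming Crossplane's provider-helm is installed (the XRD group, composite kind, and versions are hypothetical): next to whatever resources actually provision the cluster, one more managed resource installs Argo CD into it.

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: cluster-with-argocd                    # hypothetical
spec:
  compositeTypeRef:
    apiVersion: platform.example.org/v1alpha1  # hypothetical XRD group
    kind: XCluster                             # hypothetical composite kind
  resources:
    # ...resources that provision the cluster itself would go here...
    - name: argocd
      base:
        apiVersion: helm.crossplane.io/v1beta1 # provider-helm
        kind: Release
        spec:
          forProvider:
            chart:
              name: argo-cd
              repository: https://argoproj.github.io/argo-helm
              version: 7.7.0                   # pin whichever version you use
            namespace: argocd
          # The ProviderConfig would typically be patched to point at the
          # kubeconfig of the newly created cluster (e.g., via a patch on
          # spec.providerConfigRef.name).
```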

  • @slim5782
    @slim5782 a month ago +1

    Could you give your opinion on k0s vs k3s for small on-prem deployments? I was wondering if I should migrate from k3s.

    • @DevOpsToolkit
      @DevOpsToolkit a month ago +1

      Unfortunately, I rarely run on-prem, so I might not be the best person to answer that question. In the past, I was very happy and impressed with Talos but, since I'm not doing much on-prem, I haven't followed it closely, so I'm not sure how good it is today or how it compares with k0s and k3s.

    • @joebowbeer
      @joebowbeer a month ago +1

      Why not k8s? vCluster, for example, supports all of them and originally defaulted to k3s, but now defaults to k8s.

    • @DennisHaney
      @DennisHaney a month ago +1

      We use k8s for on-prem as well. Let's have cloud prod, on-prem prod, and test be 99% the same.