Really glad to have found your channel! Great content!
Welcome aboard!
@@DevOpsToolkit something for the TODO list - how to do backups for static resources or databases in some GitOps way?
@@n4870s I'm not sure that backups of databases should be done in the GitOps way, except for the definitions of the processes that perform the backups. In any case... Adding it to the TODO list... :)
Thanks, this knowledge is making me a better cloud professional
Awesome!! Really interesting to see the mix between all the tooling in the ecosystem around Kubernetes
Another great 😃 video 📸! Thanks Viktor 🙂
What do you think about the Argo CD and Crossplane combination? Would you like me to show a similar process using a different toolset?
I think it has a very big pro: auditing. Who did what? It works better than a lot of protocols and checks to resolve anything.
Our issue now is working with GitHub. We use our CSV system, so to package and deliver our Ansible and kustomization scripts, we use Docker images.
Why do you talk theory so much, sir? Please create a proper playlist in sequence, from newbie to expert, or arrange the existing videos in sequence. You are good, but the structure is very poor; it's hard to identify the sequence, it's a mess.
@@user-bv6il2nk4t This channel is not a tutorial on how to work in Kubernetes. I already published books and courses with such a goal (devopstoolkitseries.com/). Instead, this channel is about exploring different tools and practices individually and most of them assume some basic knowledge of Kubernetes and many other concepts.
That being said, I do think that your comment is valid. Would you like to help organize the videos?
Please send the ePDF link to crushpak@gmail.com, it's not available in my country. I am also looking for a hands-on type of book on k8s and CI/CD usage.
Thanks Victor, awesome video and channel! I would be interested in seeing the same with argocd + pulumi.
Hi @Viktor, the way you explain Crossplane, isn't that something the Argo CD self-heal feature can do?
It can't. Argo CD will make sure that what is in git is what is in Kubernetes. Crossplane makes sure that what is defined as Kubernetes resources is in sync with "real" resources (e.g. AWS, azure, etc.).
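To make that split more concrete, here is a minimal sketch of a Crossplane managed resource (assuming the Upbound AWS S3 provider is installed; the bucket name is a placeholder). Argo CD only makes sure this manifest from Git exists in the cluster; Crossplane's controller makes sure a matching bucket actually exists in AWS and reverts drift there.

```yaml
# Synced from Git into the cluster by Argo CD; reconciled against AWS by Crossplane.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: my-bucket               # placeholder name
spec:
  forProvider:
    region: us-east-1
  deletionPolicy: Delete        # delete the real bucket if this object is deleted
```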
I have 2 questions if I may. Assuming one should separate app source repos from the GitOps repo storing manifests for sync with the cluster, do you also have separate per-app GitOps repos for their app manifests, or do app manifests get stored in the one GitOps repo along with infra manifests? Secondly, is it an anti-pattern to keep base + inner-loop overlay manifests in an app source repo for funneling into a GitOps CD pipeline, or should those manifests only live in a separate GitOps repo?
1. It depends on what that infra is for. For example, if it's a DB used by a single app, I would keep it in the repo of that app. If it's used by multiple apps I would keep it separate. In my case it is mostly about logical groups and their lifecycles.
2. I tend to keep them in app source repos when they are a part of that app.
@@DevOpsToolkit thanks. That makes sense to me. Sounds simple now that you mention it 😁
This is awesome Viktor! Thanks.
One question. In a real production case, would you have one Argo CD instance in the management cluster for everything, like you did here? Or do you think it's better to have one Argo CD in the management cluster for infra resources and one in the application cluster for application resources?
Thanks!!
Personally, I prefer an Argo CD instance in each cluster. That way they can be locked down (no external access to them).
Can the argocd instances be federated?
@rmca11 as in running an argo CD instance in each cluster?
@@DevOpsToolkit yeah, so we have the separation but a single pane of glass
@rmca11 if by separation you mean a different argo CD instance in each cluster, that's the way I tend to run it but that means no single pane of glass (no single UI). On the other hand, you can have a single argo CD instance managing resources in all the clusters and that would give you a single UI.
The current challenge is how to automatically convert the secret from the Crossplane-generated cluster into Argo CD's cluster secret...
-> In order to have that, it needs to create a service account & role binding...
-> Then it comes down to how to create a service account & grab its token remotely via that automatically generated kubeconfig secret...
A bit hard, as the client cert is disabled per security recommendations.
Hi Viktor,
This is a great video on using Crossplane for infra/service management and Argo CD for app management, except for the part where you need to manually put the new cluster address into the Argo application destination. Is there a way to put some placeholder/variable in that destination server value so that the value is fetched from the newly created cluster? Thanks anyway.
Crossplane does create a secret with kubeconfig so that other tools that need to use that cluster can use it. However, Argo CD does not use kubeconfig (like, for example, Flux does) but expects cluster connection in a different format. Theoretically, one should be able to create a secret in that "specific" format but I haven't tried it myself. Instead, I tend to have Argo CD in each cluster (instead of a central one) and that approach, besides other benefits, does not need connections to remote clusters.
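For reference, the "specific" format mentioned above is a plain Secret labeled as a cluster; something (a controller, a job, or a Composition) would have to translate the kubeconfig into roughly this shape (all values below are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster                       # placeholder
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster  # tells Argo CD this Secret describes a cluster
type: Opaque
stringData:
  name: remote-cluster                       # display name in Argo CD
  server: https://1.2.3.4                    # API server of the remote cluster
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-certificate>"
      }
    }
```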
@@DevOpsToolkit If not using a central Argo CD, what do you suggest in order to deploy apps to dev, stg, and prod environments, where each is isolated by running on a separate k8s cluster? Thx
@@TankaNafaka If you install Argo CD in each of those clusters, they will be truly isolated (having central Argo CD means that it needs to access other clusters). From there on, workloads in each of those clusters would be managed by local Argo CD instances with `Application` CRs pointing to whichever repo/directory contains manifests for that cluster/environment.
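A minimal sketch of such an `Application` in, say, the staging cluster (the repo URL and path are placeholders; `https://kubernetes.default.svc` is how Argo CD refers to the cluster it runs in):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                                  # placeholder
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/gitops   # hypothetical GitOps repo
    path: envs/staging                          # manifests for this cluster/environment
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc      # the local cluster this Argo CD runs in
    namespace: staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```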
Awesome! I would like to see a video on canary deployments for infrastructure based on BDD testing. If the BDD test cases pass, then the changes should apply.
Adding it to my TODO list... :)
Good content, thanks for sharing
Viktor - So to host Argo CD & Crossplane, did we have to create one k8s cluster and then operate from there?
Yeah. You need a kubernetes cluster for those tools. That can be a dedicated cluster or one shared with other "stuff". It depends on the scale.
@@DevOpsToolkit Okay then… I am planning to follow your tutorial & perform this on Azure. So basically one AKS cluster for Argo + Crossplane. Use that as the "control plane" or "brain of this automation" to deploy either a VM/DB/k8s cluster or any application+infra stack. I hope my logic is correct?
@soumadeepbhattacharya2778 yep. That's the logic.
Is the scenario of installing Crossplane on an on-premises k8s cluster and managing resources (DB, cert-manager, a CA, and other needed things that are not apps) viable? Let's say we want to replicate our cluster installations for clients: they have the k8s cluster, and they just install Crossplane and use our predefined configurations?
That's what Compositions do. They are, in my opinion, the most important feature of Crossplane.
@@DevOpsToolkit thank you for your prompt response! I have another question: do you think it's best to use Crossplane for my scenario, or something else like Ansible, since in my case I will only be using on-premises clusters and will not be interacting with cloud providers?
@mmt4984 if you are starting today, I cannot imagine a reason why not to extend kubernetes API with CRDs and use controllers to do the work. Anything else would be archaic. That is the important decision you should make. If you do choose CRDs and controllers and adopt kubernetes as a standard for managing resources no matter where they are, the second decision is how to do that. Crossplane is a great choice but there are others as well.
Amazing! Thanks Victor.
Can there be a security concern with the delete flow? Assuming I wouldn't want my database deleted by someone mistakenly deleting a Git file.
Normally, I work around that by enforcing PRs and PR reviews. That does not guarantee that a reviewer would not miss it, though. It's close to impossible to be 100% sure that no one will do something that shouldn't be done, no matter the tech. PRs, at least, reduce that possibility, especially if PRs are spinning up temp envs and running tests.
@@DevOpsToolkit Thanks for the response. It may be nice to have some form of failsafe on ArgoCD/Crossplane without affecting the GitOps flow in any way, but these are just thoughts :)
There is always the option to disable auto-sync but I don't think that helps much. Ultimately, it all depends on testing. One can easily "destroy" a DB without deleting it. A single change in params could bring it down or corrupt data so it's not only about the ability to delete resources.
Still, you are right that additional mechanisms to be "protected" would be good to have.
AFAIK, you can change the deletion policy of Crossplane resources.
That's true. You can do that at the Argo CD level as well.
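As a sketch of those two knobs on a single managed resource (reusing the illustrative S3 Bucket kind from above as a stand-in for something precious like a database):

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: precious-data                            # placeholder
  annotations:
    # Argo CD: even with auto-sync and prune enabled, never prune this resource.
    argocd.argoproj.io/sync-options: Prune=false
spec:
  forProvider:
    region: us-east-1
  # Crossplane: if the Kubernetes object is deleted anyway,
  # keep (orphan) the external AWS resource instead of deleting it.
  deletionPolicy: Orphan
```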
Victor. I know this is 2 years later; however, re-watching this video and lining up the Git repo for this, it all looks like it matches the video, except the Argo CD Application resource for your `devops-toolkit` app is missing. Or renamed? I see the Application for GKE and Argo CD, but not one for the app you have in the devops-toolkit directory.
Please let me know which file is missing and I'll do my best to fix it.
As a side note, a lot changed since I created that video so I'm not sure that's the best Gist to follow.
@@DevOpsToolkit Victor. Thank you. I followed the actual Git repo the Gist was referring to. I will post the file in question soon. These videos are spectacular.
Victor. All good. It was pilot error. I followed your Gist to where it points to the source repo github.com/vfarcic/combine-infra-services-apps/blob/master/controller/devops-toolkit.yaml and I see the Application CR for your devops-toolkit app. Also, a future video recommendation, since the books out there on Amazon for Argo CD are not too good: a two-part series on using the Argo CD App-of-Apps pattern, covering the installation patterns of installing the parent in a control-plane cluster and the child apps on the child (target) clusters, the configuration of having Argo CD manage itself (the Application CR for Argo CD) vs. a centralized install with the parent, child apps, and Argo CD all on the same cluster, and how ApplicationSets change the game vs. App-of-Apps. This would be a great video. Thanks Victor
Adding it to my TODO list... :)
In the demo you don't use XRDs or Compositions when deploying from Helm charts. When does it make sense to use XRDs/Compositions with Helm charts?
That was my first contact with Crossplane back in the days when XRDs were badly documented. Since then, I moved everything to XRDs. Unless you are doing something very simple (e.g., a few resources in total), I'd say that XRDs are (almost) always recommended. Together with Compositions they allow us to define what something is in an organization (e.g., this is a cluster instead of those 25 resources constitute a cluster).
@@DevOpsToolkit what is the advantage of using XRDs/Compositions when you can just wrap all the individual resources in a Helm chart and deploy that? is there something else offered by it? Thanks.
Dependencies and value injection are much better since those can depend on live resources. Then there is discoverability through the API that allows querying, dynamic UIs, etc.
You can also think of it this way. Knative is superior to Deployment+Ingress+Service+VirtualService+... Compositions allow you to do something similar but, instead of relying on a third party, that can be done by yourself and act as a tailor-made service for others.
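To illustrate that last point, a hypothetical claim (the API group, kind, and fields are made up; they would be whatever your XRD declares "a cluster" to mean in your organization):

```yaml
# What a team applies: one object that represents "a cluster" in this organization.
# The Composition behind it expands into the dozens of managed resources
# (network, control plane, node groups, Argo CD install, ...) that actually make one.
apiVersion: example.org/v1alpha1    # hypothetical group defined by your XRD
kind: ClusterClaim                  # hypothetical claim kind
metadata:
  name: team-a-dev
spec:
  parameters:                       # hypothetical fields exposed by the XRD
    provider: aws
    nodeSize: small
    minNodeCount: 3
```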
What happens when some config parameter of a cloud resource is not available in Crossplane, or doesn't work as intended, and you had to manually update it in the console? K8s will revert the console change based on the code, which is not the intended behavior.
You can consider it a bug and open an issue or a PR.
I am working on a composition to create an EKS cluster and deploy Argo CD in it, with Argo CD configured to monitor a Git repository. For that, I need to add the Git repository to Argo CD. I want to do that using sealed secrets. How can I do that?
Right now I have used the Helm provider to install the Sealed Secrets controller on the EKS cluster in the composition, but how I can create a sealed secrets YAML using kubeseal is something that I can't get my brain around.
Do you have an idea on that?
You need to use kubeseal CLI to create secrets. You would need to create your own controller for that.
So a new Crossplane provider needs to be created to run kubeseal commands, is that it?
Since the cluster itself will be created by claiming the composition, how can I create the sealed secret manifest for it beforehand and use it as a template in an Object kind for the Kubernetes provider in my composition?
I don't understand how to automate the flow of creating the cluster, installing Argo on it, and configuring it to monitor a private Git repository.
I can do this by putting the Argo CD secret for the private repo directly in the composition, but I want to use sealed secrets.
@UtkarshMishra-it4oc yes. Something needs to execute kubeseal commands.
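For context, what kubeseal ultimately has to produce for an Argo CD repository credential looks roughly like this (structure per the Sealed Secrets CRD; names and ciphertext are placeholders). Producing it requires the public certificate of the Sealed Secrets controller running in the target cluster, which is exactly why something has to run kubeseal after that cluster exists:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: private-repo                 # placeholder
  namespace: argocd
spec:
  template:
    metadata:
      name: private-repo
      namespace: argocd
      labels:
        argocd.argoproj.io/secret-type: repository  # marks the unsealed Secret as a repo credential
  encryptedData:                     # ciphertext produced by `kubeseal` with the target cluster's cert
    url: AgB3...
    username: AgA9...
    password: AgC1...
```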
Hey Viktor, as always great content. What would be the best and easiest way to replace current Terraform infrastructure with Crossplane (e.g. provision EKS cluster)? Thank you.
I just found this video: th-cam.com/video/e3vkZtdwZJk/w-d-xo.html
The end goal, I believe, should be to use Crossplane Compositions (not Crossplane resources directly). The Terraform provider will not help with that. However, it might take a while to get there so moving Terraform to Kubernetes (more or less) as-is through the Crossplane Terraform provider might be a good intermediary step. I haven't used it much myself so I cannot say whether there are some serious issues to look for.
What do you think about the Terraform operator?
By the way, thank you for all your work on this channel.
I never got convinced that I should try it out. It sounds like a weird combination and I'm not sure what is the value proposition behind it instead, let's say, running Terraform from pipelines. I'm probably wrong. I should take another (deeper) look into it.
@@DevOpsToolkit Running terraform from a pipeline is pretty useful IMO. I've done it for years.
That is true. I have been doing that for at least 20 years now (not always with Terraform). All I'm saying is that there are cases when using the kube API and scheduler, pulling instead of pushing manifests, and having auto-sync and drift detection could be useful. That does not mean that it is always true, nor that Terraform or running stuff from pipelines is bad. Instead, it means that if GitOps is the goal, Crossplane is the IaC tool that is closest to it.
@@DevOpsToolkit :thumbsup: that makes absolute sense in that case. My biggest issues with TF in a CI/CD pipeline are definitely addressed when moving to more of a pull auto-sync model than using CI/CD, because it could be a year or two since the pipeline for something in TF was last run. Of course, I am also just talking in the context of building out cloud infrastructure only and not over-leveraging TF to do more than that. Cough using helm providers cough!
Thank you all. I'm asking because I'm looking for something more in-house, with OpenStack as the "cloud" provider, to manage dev and test clusters for a lot of teams. As far as I understand, that is more of a pipeline use case.
Thank you 🎉
How do you fetch the ARN of a role created by Crossplane and reference it in another managed resource of a composition? In Terraform we can simply reference it using (resource.name.id) or (module.resourcename.id), or we can use a data source.
How can we achieve such a thing in Crossplane?
You do that in Crossplane through patches. You'll find plenty of examples of it in github.com/vfarcic/crossplane-kubernetes/blob/main/package/all.yaml. The crossplane.io docs have it relatively well documented.
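The usual patch-and-transform pattern looks roughly like this (resource kinds and field paths are illustrative; the status field on the composite must be declared in the XRD schema): one resource copies its ARN up to the composite with `ToCompositeFieldPath`, and another reads it back down with `FromCompositeFieldPath`.

```yaml
# Fragment of a Composition's spec.resources (patch-and-transform style).
- name: role
  base:
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    # ... role definition omitted ...
  patches:
    # Copy the ARN of the created role up to the composite resource's status.
    - type: ToCompositeFieldPath
      fromFieldPath: status.atProvider.arn
      toFieldPath: status.roleArn              # must exist in the XRD's status schema
- name: addon
  base:
    apiVersion: eks.aws.upbound.io/v1beta1     # illustrative resource that needs the ARN
    kind: Addon
    # ... addon definition omitted ...
  patches:
    # Read the ARN back down from the composite into this resource's spec.
    - type: FromCompositeFieldPath
      fromFieldPath: status.roleArn
      toFieldPath: spec.forProvider.serviceAccountRoleArn
```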
Thanks Victor
Actually, I was trying to create this eks-csi-addon using the composition. The CRDs of eks-csi-addon don't have matchLabels in RoleSelectorARN; they only have RoleARN. So I cannot use the label-matching technique here. Hence my question: how can I pass the ARN of a role created by one resource to another resource of the same composition?
@UtkarshMishra-it4oc there is serviceAccountRoleArnSelector. Is that the one you're looking for?
I wish I could do this with our on-prem infra.
Crossplane will not work on-prem, unless you extend it. But that might change soon. As far as I know, IBM is working on Crossplane CRDs for on-prem. Still, I doubt that Crossplane will ever get to the same level with on-prem as, let's say, Terraform. Actually, even Terraform is not exceptionally good with on-prem, so you might want to stick with something like Ansible.
That being said, you could make other tools do something similar to what I did. You could make a k8s CronJob that runs periodically (e.g., every 3 minutes). It would clone a repo with definitions of, let's say, Ansible and execute commands that would apply changes if there are any. Actually, it would not have to be a k8s CronJob. It could be a "traditional" CronJob as well.
That would not get you to the same place as Crossplane/ArgoCD combination but, at least, it would provide some semblance of GitOps and auto-sync.
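A minimal sketch of that CronJob idea (the image, repo URL, and playbook name are placeholders; in practice you would also need credentials and idempotent playbooks):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ansible-sync                      # placeholder
spec:
  schedule: "*/3 * * * *"                 # every 3 minutes
  concurrencyPolicy: Forbid               # don't start a new run while one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: ansible
              image: my-registry/ansible-runner:latest   # placeholder image with git + ansible
              command:
                - sh
                - -c
                - |
                  git clone --depth 1 https://github.com/my-org/infra-definitions repo \
                    && cd repo \
                    && ansible-playbook site.yml          # apply whatever changed; must be idempotent
```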
@@DevOpsToolkit @socialFlasher Well, this is exactly what I care about too. I am only here for a short time now, and I am not a pro in GitOps, DevOps and Cloud stuff. What I need is Crossplane working with local clusters or just VPS in a given datacenter running OpenStack. This discussion was held 2 years ago. So maybe I'm lucky and they have made something for us.
@@MarkusEicher70 I don't think that there is a Crossplane provider for OpenStack :( It all depends on whether someone from the OpenStack community will contribute it.
Can we link variables for an app from Crossplane resources, instead of modifying IPs or any specific resource values manually?
::: This channel deserves 10x more views
I don't think that we should ever use IPs. Internal (in-cluster) communication is normally handled through service discovery and external communication normally goes through pre-defined domains.
Now, that might not fit your use-case. If you can describe a bit more what you're trying to do...
@@DevOpsToolkit I mean, there could be a situation when we are not sure what specific output our new infrastructure will produce. For example, if we are dealing with DNS and network routes and rules.
But maybe that is not what Crossplane is supposed to be managing.
I don't think there's much Crossplane can do in that area. You would need to update the definitions and re-apply them. You could, potentially, do that through some custom steps that would be executed through CI/CD pipelines...
How do I access the UI of the Argo CD server deployed on the EKS cluster? I have tried port forwarding but can't get the password right. The password I am trying is the one I found in composition.yaml in the patches section, and the username is admin.
Password should be in a secret created in the namespace where argo CD is running.
Yeah, that's the same password I am trying to log in with, but it's not working. I have base64-decoded it, but it's still not working.
@UtkarshMishra-it4oc not sure why that is so. That worked for me every time during the initial setup. Afterwards, if you change the password, it will not match the one from the secret but, I guess, that's not your case.
I haven't changed it yet; this was the first time I was trying to access the UI. Normally with an Argo CD deployment the first password is stored in a secret called argocd-initial-admin-secret, but in this case Argo CD in the EKS cluster has a secret called argocd-secret. Is it because we are doing the initial setup and configuration also using the Crossplane resource called Gitops? And if yes, then the password should be the one that we are providing, right?
@UtkarshMishra-it4oc it should not matter how it was applied to kube api. I'm not sure why it's behaving like that in your case...
I've implemented this already, having a root cluster on top of everything to manage all clusters' Argo CD installation & configuration.
The sub-clusters work on their own things...
Just the manual "argocd cluster add" part is not good enough, though.
You're right that `argocd cluster add` executed manually is not good enough. Creating a script that would do that is probably not a good idea either since it would be hard to make sure it's always running and idempotent. We need something better.
@@DevOpsToolkit I would agree however I think this is certainly the best we've got at present. It's not a huge stretch to imagine defining all your clusters in crossplane, they get added to argoCD which with some other features (like sync waves) can onboard your cluster with other tools (i.e. loki/prometheus/istio etc) before deploying any applications, then (in theory) should you nuke a cluster, Argo has got your back :)
@@TheEmc1992 The current issue is that GCP's GKE cluster doesn't allow an anonymous user to use the kubeconfig to deploy & create any resources. That makes the auto-generated kubeconfig useless. If we can bypass that, we'll be good for the full automation part.
It's sad you don't talk about the limitations of Argo CD, which are pretty severe if you go into more complex examples.
I'm curious what would be the limitations you're referring to. Can you share a few examples?
First comment