How do you manage dependencies and ordering of resources managed by Kubernetes?
ArgoCD with sync waves 🎉
Helm + Taskfiles with built-in dependencies and outputs (similar to Gradle tasks)
ArgoCD rules for whatever is not offered by K8s out of the box
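The Helm + Taskfile combination mentioned above might look something like this — a sketch only, with hypothetical task and chart names, using go-task's `deps` to express ordering:

```yaml
# Taskfile.yml (go-task); task names and chart paths are hypothetical
version: "3"

tasks:
  operator:
    cmds:
      - helm upgrade --install my-operator ./charts/operator

  app:
    deps: [operator]   # runs only after the operator task succeeds
    cmds:
      - helm upgrade --install my-app ./charts/app
```

Running `task app` would then install the operator first and the app only after it, which is the Gradle-like dependency behavior the comment refers to.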
Anyway, following GitOps is the right way to go... using ArgoCD sync waves is just a little trick that makes deployments converge faster by avoiding many failures in the reconciliation loop...
ArgoCD sync waves + hook Jobs: put a hook Job in a specific sync wave that validates whether a dependency is satisfied before the sync progresses.
For example, you install an operator and then, in the next sync wave, run a hook Job that validates whether the CRD has already been created.
The script exits with 0 if the CRD is there; otherwise it sleeps for 120 seconds and exits with 1.
On the Job you set `backoffLimit` to 15, and now you have a gate that checks for roughly 30 minutes whether the CRD was created. Once it's there, your custom resources sync.
Quite a handy pattern. I used it for entire day-2 OpenShift setups, like moving ingress routers to infra nodes after the infra nodes finished installing, but the sky is the limit here 😊
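A minimal sketch of such a gate Job, assuming a hypothetical CRD named `examples.example.com` and a pre-existing `crd-checker` ServiceAccount with RBAC to read CRDs (the hook and sync-wave annotations are the standard Argo CD ones):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: wait-for-crd                      # hypothetical name
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
    argocd.argoproj.io/sync-wave: "1"     # runs after the operator in wave 0
spec:
  backoffLimit: 15                        # 15 retries x 120s sleep ≈ 30 minutes
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: crd-checker     # needs RBAC to get CRDs
      containers:
      - name: check
        image: bitnami/kubectl
        command:
        - sh
        - -c
        - |
          # Exit 0 if the CRD exists; otherwise sleep and fail so the Job retries.
          if kubectl get crd examples.example.com >/dev/null 2>&1; then
            exit 0
          fi
          sleep 120
          exit 1
```

Resources in later waves (the custom resources themselves) then sync only once this Job succeeds.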
Thanks for another great video 🎉 honestly I was waiting for one on this subject, app dependencies can be tricky if not done the right way and at the beginning of my kubernetes journey I was struggling with them too! I took the path of remote development and luckily I solved many problems with it, but I still think the subject is worth a clarification
The hard part about eventual consistency is that, in a complex system, it can take a while for the system as a whole to reconcile, and while waiting it is hard to tell how far along the system is: is it stuck in a crash loop, and why is it stuck where it is stuck? So tools that support ordering and troubleshooting become more useful the larger the system gets. A constraint-based approach can also be very helpful.
It's true that troubleshooting a complex system is difficult, but so is troubleshooting imperative orchestration.
You don't need to resort to imperative orchestration to get the ordering required. A query, plus constraints that must be true before the next set of objects starts reconciling, can do the trick. So you don't write a sequence of step 1, step 2, step 3; rather, you say something like: a prerequisite for this object to start reconciling is that a query result set meets certain criteria. This allows the reconciler to figure out what can go in parallel and what waits until certain conditions are true, etc. Canary rollouts use this approach: they wait until a query against a metrics store meets some condition before proceeding to change the desired state. I really wish the k8s API server had a native feature that allowed objects to be accepted but not start reconciling until a query condition is true. It's just the flip side of being in a crash-loop backoff, but at least you would see something like "waiting for condition x to be true to start reconciling", which makes troubleshooting a lot easier.
@AdibSaikali That's similar to what I tried to explain, except that I used data as a dependency instead of a query. If the data required to create a resource is available, more often than not there is no impediment to creating it.
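The precondition-gate idea discussed above can be sketched in a few lines. Everything here is hypothetical (the object names and the `precondition` queries stand in for real cluster-state checks); it only shows the shape of a constraint-gated reconciler:

```python
# Hypothetical sketch of a constraint-gated reconciler: each object declares a
# precondition (a query over current state); each pass reconciles only objects
# whose precondition holds, and re-checks the rest on the next pass.

def reconcile_pass(objects, state):
    """Reconcile every object whose precondition is satisfied; return leftovers."""
    pending = []
    for obj in objects:
        if obj["precondition"](state):
            state[obj["name"]] = "ready"   # stand-in for real reconciliation work
        else:
            pending.append(obj)            # visible as "waiting for condition"
    return pending

# Example: the app waits until its CRD is registered; the CRD has no precondition.
objects = [
    {"name": "app", "precondition": lambda s: s.get("crd") == "ready"},
    {"name": "crd", "precondition": lambda s: True},
]

state = {}
pending = objects
while pending:
    pending = reconcile_pass(pending, state)

print(state)
```

The reconciler itself discovers the order: the CRD reconciles in the first pass, the app in the second, without anyone writing "step 1, step 2" anywhere.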
19:55 This sounds very similar to what kpt calls "configuration as data".
kpt runs on the client side, so the functions are executed only once, and that means you have to run them in a certain order.
@@DevOpsToolkit Seems I put in the wrong timestamp; I meant 19:30, where you mentioned it's all data. I was just pointing out that kpt calls it data too.
@@autohmae Oh yeah. From the data perspective, KPT got it right.
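For context on the ordering point above: kpt makes the order explicit in the Kptfile pipeline, where mutator functions run client-side, once, strictly in the order listed. A sketch (the package name is hypothetical; the function images are real kpt functions, but the versions are illustrative):

```yaml
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: example            # hypothetical package name
pipeline:
  mutators:                # executed client-side, once, in this exact order
  - image: gcr.io/kpt-fn/set-namespace:v0.4.1
    configMap:
      namespace: staging
  - image: gcr.io/kpt-fn/set-labels:v0.2.0
    configMap:
      app: example
```

Swapping the two entries changes the output, which is exactly the one-shot, ordered behavior the comment describes — unlike a controller that keeps reconciling until things converge.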
How can we define deployment order and dependencies in Argo CD? Any ideas?
Take a look at th-cam.com/video/LKuRtOTvlXk/w-d-xo.html. I'm working on a follow-up to both that one and the one from this comment. It'll be live in a few weeks.
Great video, man! I'm interested in the second part, about deletion, in case you're willing to record it :-)
Great. I'll start working on it soon.
@@DevOpsToolkit Also looking forward to the follow-up on destruction. The current video is my go-to pointer for getting colleagues up to speed on this eventual-consistency model of deploying.
Great talk again today. Again, though, the bubble sounds are distracting. Would love to have seen an animation and/or a code/config example of the process as you've articulated it.
I'll work on a video with a hands-on variation.
@@DevOpsToolkit For feedback, I quite like the "bubble" sounds; they help punctuate milestones in your communication.
Completely agreed on all points!
We have a deployment mission problem…let's use k8s…great…now we have two problems😂
According to this explanation, or rather the lack of one beyond an implication, there's no need to order code anymore: hey, just let the machine work it out, because it will EVENTUALLY do things in the correct order! SMH. Not everything is a Prolog program! Why would anyone deliberately and repeatedly waste so many cycles of system startup when all it ever needed was a small ordered list of, say, 15 items?! Tell me it will automatically learn the correct sequence and I'll sign up, but until then, no thanks.
That's how Kubernetes works.
Here's a question. Do you ensure that Secrets, ConfigMaps, and Volumes are created before creating Pods? Since Kubernetes is asynchronous, that would mean that you create a Volume first. Since creating something in Kubernetes only means that it's stored in etcd and not necessarily running right away, your next step is probably to execute `kubectl wait...` to ensure that the volume is indeed ready and running, and only then you apply a Pod. If you are, you are among only a few who do that.
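Spelled out, the imperative alternative described above would look something like this — a sketch with placeholder resource names, blocking on each dependency by hand before applying the next object:

```shell
# Imperative ordering by hand: create each dependency, block until it is
# actually ready, and only then apply the next object.
kubectl apply -f volume.yaml                  # PersistentVolumeClaim
kubectl wait --for=jsonpath='{.status.phase}'=Bound \
  pvc/my-claim --timeout=120s                 # block until the claim is bound
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
kubectl apply -f pod.yaml                     # safe: all dependencies exist
```

Almost nobody writes this for every Pod; in practice we apply everything at once and let the kubelet retry until the references resolve, which is the eventual-consistency model in action.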
@@DevOpsToolkit No, I'm not saying Kubernetes does not work that way. I'm saying we should not summarily accept this behavior simply because everyone has jumped on the k8s bandwagon (see Log4Shell for a perfect example of this pitfall).
I do believe that eventual consistency for managing resources (though not always for code) is the way to go, especially with larger systems. It's not always the best choice, though, particularly on smaller systems.