I decided a few months ago to go with a Helm umbrella chart + Argo CD, but this video was still awesome. You, sir, are a legend. Much appreciated.
I believe that it is important to know what is out there so that we can make informed decisions. However, it is also important to refrain from making changes for the sake of "having fun". Most of the time, there is no strong and compelling reason to switch from Helm to Kustomize or vice versa. There are usually better things to invest in than rewriting Helm charts as Kustomize manifests or the other way around.
Really amazing comparison. Though the video looks like 'fun', it covers serious, logical comparisons.
Indeed, these two tools can only be compared on the grounds of who the target users are, so thanks for not underestimating the power of Helm's simplicity next to the more intelligent Kustomize, and for the verdict that the winner depends on who's using it. Claps for this unbiased yet honest verdict.
I've been working with Helm charts for the last 3+ years and loved it, no complaints. I just started learning Kustomize last week and was instantly impressed by a single fact: it doesn't alter the original YAML and yet allows customization (the Open/Closed Principle from SOLID design). Yes, it's a little complex to understand at the start, but as a Kubernetes admin I really admire this concept over Helm.
Great video, and I 100% agree, having done exactly what you say with third-party Helm-based apps that needed to be kustomized.
the way you present your videos is very fun 👍
Excellent comparison.
We use Helm for its easy configuration without needing to know the internals. Teams follow a defined deployment approach, so there is very little chance of not meeting requirements.
You might want to try creating your own CRDs and controllers. That will allow you to have an even better grasp on requirements and, at the same time, make things even easier for others.
Great, great in-depth review. No subjective (purely preference-based) stuff; it's tied to real use cases.
We used, or rather were basically forced to use, Helm with a library chart and an umbrella chart for deployment. All functionality was pre-programmed into the library ahead of time, and it was inflexible. Now I work with Kustomize, and I find it closer to the k8s philosophy. It's nicely designed, IMO.
OK, I just finished watching the video. One area that I thought should have been addressed is: how do the tools compare when integrating with a CI/CD pipeline such as CircleCI, GitLab, Jenkins, etc.? Injecting environment variables and other secrets from the build process into the templates can be trickier than this video lets on.
Kustomize does NOT have environment variables. It uses overlays, which are pieces of YAML laid on top of base YAML.
Now, the only thing that should be changed at runtime is the image tag, since that is the only thing that constantly changes from one build to another. It would be better if that were stored in Git as well but, if you're working with pipelines, that is often not done. In any case, everything else should be stored in Git. In the case of Helm, those would be different values.yaml files, and in the case of Kustomize, those would be overlays that provide the differences between environments.
Secrets should definitely not be passed as `--set` values in `helm upgrade`. Use Vault, SealedSecrets, existing k8s secrets, or something similar.
As for environment variables... I'm not sure why they would not be part of definitions.
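To illustrate the overlay idea, here is a minimal sketch of an environment overlay that changes only the image tag; the base path and the image name (`my-app`) are hypothetical.

```yaml
# overlays/production/kustomization.yaml (sketch; paths and names are illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base # the shared, environment-agnostic manifests
images:
  - name: my-app # image name as it appears in the base Deployment
    newTag: "1.2.3" # typically the only value that changes from build to build
```

A pipeline can update that tag with `kustomize edit set image my-app=my-app:1.2.3` and commit the change, keeping Git as the source of truth.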
@@DevOpsToolkit Kustomize has a tool that will turn an env file of VAR=blah into a Kubernetes ConfigMap. I think that's important to this discussion.
True, but that will, as far as I understand, generate a file primarily meant to be stored in Git (even though it does not have to be).
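For reference, that generator is declared in the kustomization.yaml itself; a minimal sketch, with the ConfigMap name and env file name as assumptions:

```yaml
# kustomization.yaml (sketch; app-config and app.env are hypothetical names)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: app-config
    envs:
      - app.env # each VAR=value line becomes a key in the ConfigMap
```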
It's always fun when you use a 3rd party Helm chart, just to find out that the way you wanted to use it, that seems so natural, was never intended or needed by the creators ;-)
Thanks a lot - very well explained
Made it easy to decide which way to go
"The comparison is nice and clear! Thank you.
Great, useful content. Thanks Viktor.
A little late to the party, but `kubectl apply -k` now has Kustomize functionality built in. Not sure when they changed it, but it may change your viewpoint on Kustomize.
Oh yeah. Back when I created that video, Kustomize integrated into `kubectl` was very old. It changed since then and I've been using `kubectl apply --kustomize ...` exclusively.
It's a good practice to always bump the Helm chart version in Chart.yaml when changing any of the chart template files (not values, but the chart files themselves).
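Concretely, that is the `version` field (the chart's own version), as opposed to `appVersion` (the version of the packaged application); the names and numbers below are illustrative.

```yaml
# Chart.yaml (sketch)
apiVersion: v2
name: my-app
version: 0.2.1 # chart version: bump whenever any template file changes
appVersion: "1.4.0" # version of the application the chart deploys
```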
Kustomize can be a pain when updating if you REMOVE a resource; it doesn't clean up easily.
That's true, mostly because Kustomize does not update anything but only sends manifests to stdout so that they can be passed to `kubectl`. Still, that is a potential problem if using Kustomize directly.
In my case, I use it only to define k8s resources, not to apply them. That goes through Argo CD or Flux and, in those cases, deleting resources works by deleting manifests since both tools are tracking what's in a cluster (the actual state) and comparing it with what's in Git (the desired state).
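As a sketch of that setup, an Argo CD Application can be told to prune resources whose manifests disappear from Git; the repo URL, path, and names below are placeholders.

```yaml
# Argo CD Application (sketch; URLs, paths, and names are hypothetical)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-app
    path: k8s/overlays/production # Kustomize overlay rendered by Argo CD
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true # delete cluster resources whose manifests were removed from Git
```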
Thank you so much for the video! Super useful :) An issue we found running Helm with flux2 is that flux2 only monitors changes in state at the HelmRelease CustomResourceDefinition level. So, if something happens to, say, a Helm managed Deployment object on your cluster (created through a HelmRelease CRD) you're out of luck. Kustomize on the other hand speaks raw YAML, and all resources seem much more easily managed with Flux2. Do you think Kustomize is the winner when working with GitOps tools like Flux/ArgoCD?
That's one of the advantages of Kustomize that I did not (forgot to) mention in that video. Kustomize is pure YAML, and everything is stored in Git. Helm, on the other hand, needs to be converted into YAML and allows us to use variables at runtime. As such, Git is often not the source of truth and, even when it is, tools still have a hard time comparing the desired with the actual state.
In other words, you're right. Kustomize is a better choice when practicing GitOps.
@@DevOpsToolkit Thanks again Viktor! :)
Can you tell me how to calculate pod/node efficiency? For example, how do I calculate how much traffic one cluster of 3 nodes can handle? It's a DigitalOcean $20 droplet with 2 CPUs and 4 GB RAM each.
I don't think it's possible to calculate "how much traffic can one cluster of 3 nodes" handle. It is unlikely that you'll hit the networking limit of DO, so it greatly depends on your apps. As far as I know, the only way to find that out is to run your apps in the cluster and monitor them (e.g., Prometheus). When you see that latency starts increasing, you'll know that you need to scale your app. The same goes for memory and CPU utilisation.
Great comparison! Talking about Helm, I think it would be interesting to mention Helmfile. Maybe, in some of your future videos :)
Adding Helmfile to my TODO list...
Yes, Helmfile is great. It lets us manage Helm releases as a state file, like Terraform does.
I just finished setting up Helmfile in dev. So far, it's a great way to manage everything declaratively.
I wish there was an additional category for this competition - resource creation dynamism. Sometimes you need to create deployments dynamically per user defined criteria for each environment, e.g. if it's a job system or ML, or some workload that needs dedicated deployment, where the user provides a map, and a for each loop generates a deployment manifest for every object in the map. This is just impossible with Kustomize, and Helm is the only option with its templating functions. I know Kustomize can generate ConfigMaps and Secrets, but sometimes you need that for all other K8s api objects as well :)
The best option in that, and quite a few other scenarios, is to create your own CRDs and controllers that will perform those types of operations inside the cluster. Try the Operator Framework, KubeBuilder, or Metacontroller (th-cam.com/video/3xkLYOpXy2U/w-d-xo.html).
@@DevOpsToolkit That's very interesting, thanks for the tip! I am, however, a bit sceptical about writing my own mini-orchestrator (in the form of a controller and CRDs) on top of K8s; using Helm would still be easier :) But it actually looks very close to Argo CD's ApplicationSets, which dynamically create new resources, and it's GitOps-compatible.
Creating CRDs is easy (it's just a few lines of YAML). Controllers are more tricky. Nevertheless, it depends on the complexity you're trying to tackle. Helm might seem easier (and it often is) but can easily get out of hand. From what I understood based on your description, you will likely end up with a lot of `if/else` statements and loops in Helm to accomplish "anything goes".
In any case, Kustomize is not the solution for what you're trying to do.
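To back up the claim that a CRD is just a few lines of YAML, here is a minimal sketch; the group, kind, and fields are made-up examples.

```yaml
# A minimal CustomResourceDefinition (sketch; example.com and AppClaim are invented)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: appclaims.example.com # must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: AppClaim
    plural: appclaims
    singular: appclaim
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                image: # what the user provides; a controller does the rest
                  type: string
                replicas:
                  type: integer
```

The controller that reacts to `AppClaim` objects is where the real work (and complexity) lives.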
@@DevOpsToolkit Your comment made me look into Crossplane once again, and it might be just what I need ;)
What I am trying to solve is complexity. I have a Deployment manifest with a Service, Secret, etc., that I need to let my organization duplicate dynamically via self-service, with minimum complexity and knowledge of K8s. Let's say that whenever a new contract is signed, a new Deployment needs to be added to the existing application to handle that contract. With Kustomize, I would have to copy-paste a new Deployment, and it would end up too verbose and complex for another team to do. To simplify and abstract things away, I was thinking of a simple map in Helm's values.yaml that another team could add via a PR to the GitOps repo, and Argo CD would create a new Deployment with some defaults. That is easy enough for a non-developer to add, just a few text lines.
With Crossplane, I think I should be able to stick with the simplicity of Kustomize, and manage complexity via XRs that contain my Deployment, Secret, ConfigMap etc etc, something very very short with 1-2 parameters, and easy enough for others to add without knowing the underlying structure.
@@pavelpikat8950 That is exactly it. Moving the logic into a cluster (controller or operator) and exposing it through the API (CRD) might represent extra work at the start, but it almost certainly pays off later on. Crossplane is a great candidate for that, especially now that the community is about to release Composition Functions, which should provide more flexibility.
ytt seems to combine templating and overlays. Did you already try ytt?
I tried it a while ago, and it did not "feel" right. I should probably give it another go.
It's amazing! And it's gotten much better recently.
th-cam.com/video/DLnXkH2keNg/w-d-xo.html
If you want to override specific parts, couldn't you use `helm template` and then continue with Kustomize?
Yep. I do that often to quickly create the base for my Kustomize setup.
Useful and interesting. I would like to use both of them, I think.
There is nothing wrong with combining tools to get exactly what you need. As a matter of fact, combining tools is often the best choice.
We used Helm, but after a year I'm having second thoughts; we have 200+ services! Now if only we could get rid of Spinnaker in favor of Argo CD. We missed out by not going with Argo. Spinnaker is a pain with Bitbucket and does not support app bootstrapping. :/
Spinnaker was designed for something very different than what we have today, so it is to be expected that tools like Argo are a better fit.
First, I think your channel is wonderful. I am thinking of Helm for web application management, third-party application management (proxy and database), or Kubernetes configurations that can be repeated for each environment. With Helm, I generate templates based on the values files and dump the template output into .yaml files with `helm template`. That way, I can define the components of my architecture and apply them to any desired namespace using Kustomize. What do you think about this?
Do I guess correctly if I say that you would store those files in Git? If that's the case, then you would use `helm template` only the first time, add Kustomize to the output, and store it in Git. From there on, you would be managing any changes using Kustomize. Right?
Is there a way to easily extract the generated YAML files, with my preferred values, from Helm and then move them over to Kustomize?
I find myself starting out on helm charts but would like to migrate later on to plain yaml files.
There is :)
You can execute `helm template` to convert the templates into "pure" YAML files. From there on, moving them to Kustomize should be easy.
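As a sketch of that migration (the chart path, release name, values file, and output path are placeholders), render the chart once and use the output as a Kustomize base:

```yaml
# Render the chart once (placeholders throughout):
#   helm template my-release ./my-chart -f my-values.yaml > base/all.yaml
#
# base/kustomization.yaml -- from here on, Kustomize owns the manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - all.yaml
```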
@@DevOpsToolkit thanks, that's cool!
@@DevOpsToolkit it worked great. I combined it with fluxcd and am very happy with the result.
Btw, I noticed that Flux has its own Kustomize version/controller that offers neat features like automatic Helm package updates... and despite lots of Kustomize being present, Flux seems to be centered around Helm. This surprised me, but it makes sense, since Helm provides a standardized versioning and update scheme while plain URLs to YAML files don't.
Do you know whether it's possible to use Flux's Kustomize as a replacement for standard Kustomize? We operate a local dev env with k3d on every developer's laptop. This environment we obviously don't want to manage with Flux's controller, but we'd like to leverage a GitOps/Kustomize repo to install the infrastructure components, e.g. Traefik, a DB, etc. Flux's Kustomize Helm extensions would be neat to use here, but I haven't found a way yet. As a workaround, I turn Helm charts into Kustomize manifests, store them in the GitOps repo, and leverage them in dev, staging, and production... a bit more noisy and without automatic Helm updates, but it definitely feels very good.
@@JanChristophEbersbach Normally, I keep app manifests in the repo of the app itself. So, using local dev envs is not affected by Flux in any form or way, at least not for the app itself. When it comes to other "stuff" you need in those envs, a simple kustomization.yaml that references whichever repo those are stored in should do. Similarly, if you are using Helm, you can put them as dependencies. From there on, I have one (or more) repos (e.g., production) that reference those app repos through Flux-specific manifests. In other words, whatever is directly related to an app is in the repo of that app, and environment repos are just referencing app repos with the addition of env-specific vars or kustomizations.
Personally, I never had the need to convert Helm charts into Kustomize. Flux works fine with both or, at least, it worked for me.
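A minimal sketch of such a reference, pulling a base straight from another Git repo (the URL, path, and ref are placeholders):

```yaml
# kustomization.yaml in the local dev env (sketch; URL, path, and ref are hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # remote base fetched directly from Git, pinned to a ref
  - https://github.com/my-org/platform//traefik/overlays/dev?ref=v0.1.0
```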
Thanks for the overview. I think I'll go with Helm for now because it's very close to pure kubernetes yaml. Btw, is pronouncing kubectl as "kube C T L" common?
There are at least 5-6 common pronunciations of `kubectl`, and there doesn't seem to be a consensus.
As for Helm... It's great but, if you want to be "very close to pure kubernetes yaml", then Kustomize is the one. It is "pure Kubernetes YAML". That does not mean that it's better though.
After all this, you did not answer the question! However, and it is a big however, you helped me ask a better question. And so on, and so forth! ❤
Summary: Helm is unavoidable for third-party apps since it has by far the biggest library of charts. But for our own apps, Kustomize is, I believe, a better choice.
Thanks! What terminal is that?
While working, I prefer Oh My ZSH! (ohmyz.sh/) and the terminal from Visual Studio Code. However, for the demos, I prefer making it closer to what others might be using so the one in that video is iTerm with Bash on Mac.
Thanks a lot for this great video. I just wondered about the differences between Helm and Kustomize after your last video :-). Maybe you can show in some future videos how to deploy applications using Ketch.
Ketch is already in my TODO list. Not yet sure when it's coming, but it's coming.
I just finished recording and editing a video about Ketch. It will be published sometime next week.
@@DevOpsToolkit Great!
Hi Viktor!
Great video, thank you for that.
I have a couple of questions for you (not related to this video).
More about pipeline templating and developer services.
Can I reach you in any way?
If you joined the channel, the easiest and the best way could be to come to the monthly chat that is next Friday (December 3). It's an opportunity for us to talk, go through questions, help each other, etc. All "Big supporters" are invited (th-cam.com/channels/fz8x0lVzJpb_dgWm9kPVrw.htmljoin) and I typically create a post about it a few days earlier.
Please let me know if that sounds like a good idea. If it does, send me a private message on Twitter (@vfarcic) or LinkedIn (www.linkedin.com/in/viktorfarcic/) and I'll give you the invite even if you did not join the channel.
A good ConfigMap can make Kustomize as safe as Helm with third-party apps.
That's true, but that also means that the third-party app needs to be designed like that, or you'd need to create and maintain the manifests yourself.
Do you still use Helm to deploy services such as Redis, NATS, etc.? Or do you create custom Kustomize resources?
i tend to use Helm for third-party apps simply because most are providing Helm charts. Otherwise, I'd need to write and maintain manifests of other people's apps and that's rarely what I want to spend time on.
For my own apps (apps developed by the company I work for), I prefer Kustomize.
GREAT JOB
Thank you for the video....
Mmm... I was liking the looks of Helm until I saw functional scripting to declare an indentation level; now I don't know what to think. I don't have a problem with the if statements, although I could see those getting out of hand, but the templating engine seems to be lacking, IMO. I'm almost compelled to roll my own solution, but I know that's probably a path to despair and regret, and probably to the inevitable conclusion that the existing solutions are the way they are for good reason.
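For context, the "functional scripting to declare an indentation level" refers to Helm's Go-template helpers such as `nindent`; a sketch of a typical template fragment (values and names are illustrative):

```yaml
# templates/deployment.yaml (fragment, illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          resources:
            # nindent re-indents the rendered YAML block to depth 12
            {{- toYaml .Values.resources | nindent 12 }}
```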
Great job 👏
Thanks Teacher
Awesome video! Thank you!
Glad you liked it! Any suggestions for a subject for one of the next videos?
Been loving your Argo CD playlist. I've been implementing a lot of what I've learned along the way in my local kind cluster. I do have a question, though.
What do you think is the best way to implement a dev/staging/production workflow with Argo CD? The first thing I think of is having a branch for each? But I'm not quite sure about it all and wanted to know your thoughts on keeping it all in line and being able to work in a GitOps fashion. I did see your Argo CD video about canary releases and blue-green.
I think it would also be nice to see how you go about the full-cycle process as you normally would for a project. I know a lot of your videos cover specific topics, showing them and driving them home, but a full start-to-finish life cycle of how you use these tools would be an awesome series, if it's not too time-intensive on your end.
@@Shawn-Mosher I do not like the idea of branches. For me, a branch is something temporary that is meant to be merged into the mainline, and only when something comes into the mainline does it become more permanent. In that spirit, I prefer creating directories or repositories (either is fine) where I keep Argo CD apps for each of the environments and tie them all together with an app of apps. Something like:
[Argo CD app manifests]
  production.yaml (points to the production dir/repo)
  staging.yaml (points to the staging dir/repo)
  production/
    app1.yaml (points to Kustomize app1 > overlays > production)
    app2.yaml
    app3.yaml
  staging/
    app1.yaml (points to Kustomize app1 > overlays > staging)
    app2.yaml
    app3.yaml

[Kustomize manifests]
  app1/
    base/
    overlays/
      production/ (references the base directory plus whatever is specific to production)
      staging/ (references the base directory plus whatever is specific to staging)
  ...

Now, if you're using Helm instead of Kustomize, it would be something like this.

[Argo CD app manifests]
  production.yaml (points to the production dir/repo)
  staging.yaml (points to the staging dir/repo)
  production/
    app1.yaml (points to Helm app1 and contains production-specific variables)
    app2.yaml
    app3.yaml
  staging/
    app1.yaml (points to Helm app1 and contains staging-specific variables)
    app2.yaml
    app3.yaml

[Helm charts]
  app1/
  ...
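As a sketch of what one of those per-environment app manifests could look like in the Helm variant (the repo URL, chart path, and values are assumptions on my part):

```yaml
# staging/app1.yaml (sketch; URLs, paths, and values are hypothetical)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app1-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/app1 # the chart lives in the app's own repo
    path: helm # path to the Helm chart within that repo
    helm:
      values: | # staging-specific variables, kept in Git
        host: app1.staging.example.com
        replicaCount: 1
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
```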
I prefer keeping Helm charts and/or Kustomize manifests in the repositories of the applications, and having separate repos for Argo CD apps. That way, base manifests are close to the rest of the code of the apps, while I also have a good overview and easy management of environments through Argo CD app manifests stored somewhere else. In that sense, I see an environment/cluster as being similar to an application, and the applications (those you are developing) as being dependencies of the environments' desired state.
Does that make sense?
P.S. This wouldn't be the first time I end up confusing others and myself instead of making things clearer.
Haha! Thank you for the explanation. I think I kind of get what you're saying, but I'm a little confused about how you go about that. Hey, it sounds like a great idea for a video, if you're up for making a video explanation of it?
@@Shawn-Mosher You're right. I should make it clearer, more structured, etc. I'll probably write a blog post or create a video on that soon :)
awesome work, thx Viktor (y)
Glad you like it!
Helm + Helmfile, or Helm + Argo CD, is the way to go. Kustomize is a nightmare to manage!
That's debatable. In my org, we completely moved away from Helm to Kustomize, and it greatly improved our workflows.
With Kustomize, you have no releases (like with Helm) to maintain, which is a big plus in my opinion, as this change helped vastly improve our CI/CD setup and made it fully declarative.
We use Helm only for off-the-shelf app deployments (again, using Kustomize's helmCharts field; see the sketch below). All in-house apps are deployed via Kustomize using server-side apply with Ansible. The transition only benefited us.
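For reference, that helmCharts field inflates a chart during `kustomize build` (it needs the `--enable-helm` flag); the chart details below are placeholders:

```yaml
# kustomization.yaml (sketch; chart name, repo, version, and values are hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: redis
    repo: https://charts.example.com # placeholder chart repository
    version: 1.2.3
    releaseName: my-redis
    valuesInline: # inline overrides instead of a separate values file
      architecture: standalone
```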
1. Changes: with Helm, you edit values files; with Kustomize, the original files.
2. Creation: Helm is more complicated than Kustomize.
I like your point at the end. That's a really great response.
Thanks!
Thank you so much :-)
You're welcome!
Any suggestions for the next video?
@@DevOpsToolkit TektonCD will be absolutely awesome if you have the time.
@@MrKofiray71 Added it to the TODO list. I can't guarantee when, but only that it's coming sooner or later.
The conclusion is we should learn both
Exactly :)
Helm is fun and games until your template produces null values and unhelpful error messages.
👌
Victor @ 1.5x is my homie
I hate Helm! One of the worst tools ever made in the DevOps world! The syntax is a tragedy, and the usage in the production world, 9/10 cases, is: do a fork, add the missing stuff to the templates, and push to your own repo, which is against "packaging" principles. In the npm world I had to fork packages (for patching purposes) maybe twice in my career, and here I have to do it all the time! The quality of even popular charts is shitty, and the patching process is long and non-trivial for people not familiar with Helm. It's way easier to use a patch in Kustomize. Argo + Kustomize are native and simple; you give an example of the overlays to devs, and they can figure out how it works on their own. For bigger things, cdk8s or any other tool.
Um, most if not all Helm charts are managed by someone. You could also offer your patches as PRs to those repos. Of course, that only works when your uncovered use case is a fairly general one and not some crazy niche situation. But, fixes and improvements should always be welcome and accepted.
I think your examples are too simple to showcase Helm's power in deploying complex applications (i.e., applications made up of many components that need to be deployed in a certain order and under certain conditions) and Kustomize's drawback of requiring "K8s ninja"-level knowledge (you really have to know your JSON patches to do non-trivial customizing). For a non-trivial application made up of deployments, config maps, secrets, services, roles, role bindings, service accounts, ... I would think the "mesh" of Kustomize base and overlay directories and files will become a "mess", and it'll be very easy to make a mistake in an overlay and screw things up royally.
Also, for unplanned customizations in Helm like networking, what stops you from pulling the chart, unpacking it, and then adding template files for the networking components?
Oh, and another thing: I feel that Kustomize's ease in dealing with unplanned customizations encourages a careless attitude towards good upfront design of the composition and configurability of the application, because "you can always enhance it later". Helm forces you to carefully consider what is in your app and what is configurable in your app, and that is not a bad thing, it leads to well designed apps. Take your example for resources: a pod should almost never have no resource requests and limits defined, so an app packaged as a Helm chart will be forced to consider resources from the get go. (I know that you picked the resources example for ease of demonstration, but it also supports my counterargument 🙂)
You're right. My examples are simple, mostly because of the medium (video) and duration (~30 min). Now, whether the application is complicated or not does not much influence my decision on whether to use Kustomize or Helm. What does influence it much more is the number of permutations we might have for an application. For example, a third-party app tends to have a massive number of permutations. As an example, MongoDB needs to be defined in a way that serves the needs of thousands of people and hundreds of companies with often opposing needs. Because of that, Helm is a better choice than Kustomize. Its templating capabilities are a better enabler for creating all those permutations than overlays. On the other hand, internal apps tend to have a much smaller number of permutations. Hosts might differ from one environment to another, a service mesh might be enabled only in production, etc. When the number of permutations is relatively small, I tend to choose Kustomize. That's why, when not given additional info, I choose Kustomize for internal apps and Helm for third-party apps (besides the fact that Helm charts are the only option for many third-party apps).
However, my general division between third-party-many-permutations and internal-apps-few-permutations is not always true. Some internal apps have a lot of permutations and, in those cases, Helm is a better choice. I could argue that such a situation is sometimes an indication of a different type of a problem, that Helm fosters such situations, that other tools (e.g., cdk8s) are better suited when there is high complexity or number of permutations, and that we should not keep all that on the client side but create CRDs and operators instead. Nevertheless, if the only choices are Helm and Kustomize, I suggest using Helm or Kustomize depending on the number of permutations an app can have in different environments.
Also, I would not say that Helm "leads to well-designed apps". To begin with, Helm is in charge of the manifests of an app, not the design of the app itself (app code). Furthermore, the fact that Helm applies free-text templating to a structured data language (YAML) creates confusion that often results in bad design (not of the app, but of the manifests). ytt or Jsonnet are closer to the "YAML structure" (even though they are too complex for many). If one needs too many conditionals, loops, and other types of constructs, cdk8s sounds like a much better choice.
Finally, I do agree that Pods should always specify resources. There are many other things that I would consider a must, but I did not include them in my demo. Nevertheless, I don't see what forces one to include resources in a Helm chart but not in Kustomize. In both cases, you might include them or not; neither one forces the user to. Now, if that's the goal, then creating CRDs and operators is almost certainly a better choice. Creating your own schema and an operator allows you to enforce the rules you want to enforce. Neither Helm nor Kustomize is good at that. On top of that, the example you mentioned is a perfect example of why we need policies (e.g., Kyverno), especially given that it's not only about specifying resources (I'm following your example), but also about ensuring quite a few other resource-related rules (e.g., you cannot have less than 0.25 CPU).
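A minimal sketch of such a policy with Kyverno, requiring requests and limits on every container (the policy name and message are illustrative):

```yaml
# Kyverno ClusterPolicy (sketch; names and message are illustrative)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
spec:
  validationFailureAction: Enforce # reject non-compliant resources instead of only auditing
  rules:
    - name: validate-resources
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory requests and limits are required."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    cpu: "?*" # any non-empty value
                    memory: "?*"
                  limits:
                    cpu: "?*"
                    memory: "?*"
```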
P.S. I love the "fight". The diversity of opinions and experiences often results in the "don't use this, use that" type of conflicts, but also knowledge sharing. Thanks a ton for your comment. I might not (yet) agree with it fully, but I do think it's valuable and makes me consider other perspectives.
@@DevOpsToolkit Thanks for the clarifications, and the suggestions.
BTW, by "app" I didn't mean my Java Spring Boot jar running in a pod, I meant the collection of K8s workloads and supporting resources that constitute the solution I'm selling to my customer; sort of a meta-app.
@@fanemanelistu9235 If you're selling it to your customer or it's a "meta app", that is a third-party app (of sorts). At least from their perspective. That makes it fall under the "do it with Helm" category from my story/perspective.
I would still consider creating an operator (CRD, controller, etc.). I'm not sure what its scope and number of users/customers is so I might be completely wrong on that one.
@@DevOpsToolkit Can you suggest a good source of complex, real-life Kustomize files that I can use for inspiration? For Helm, I can download charts from public repos and unpack and inspect them. Is there something similar for Kustomize? Thanks.
@@fanemanelistu9235 I haven't had complex Kustomize manifests. I try to keep my app manifests simple to manage when they are for internal use/management, and I distribute those meant for others (e.g., OSS projects) as Helm charts. There are a few "complex" examples, but those were done for customers and I cannot share them (or even have access to them).
You can, for example, take a look at Argo CD manifests. They are done in Kustomize format and are available at github.com/argoproj/argo-cd/tree/master/manifests. One example could be github.com/argoproj/argo-cd/blob/master/manifests/ha/base/kustomization.yaml. It's not very complex though.
Bro same
That comparison should be done after the video on ephemeral environments.
Kustomize simply didn't carry it out. It wants everything to be in code, and thus everything is persistent.
Kustomize already won; it's built into kubectl. Helm is beyond horrible, while Kustomize brings happiness.
That is technically true. However, Kustomize in `kubectl` is v2.x while the current `kustomize` CLI is v3.x. There are problems with merging Kustomize code into `kubectl`, and hardly anyone is putting in the effort to fix it. Still, once Kustomize in kubectl is updated, that will definitely be a big plus for Kustomize.
I disagree :) Both are winners.
Neither! I’m waiting for the day something like cdk8s takes over. It’s called DevOps not YamlOps 😉
I think it is important to always output YAML because that is what other tools we might be using (e.g., Argo CD, Flux, etc.) are expecting. On the other hand, it is often easier to define things through code. cdk8s combines both. I like it.
The major drawback is the support. I know that I can find YAML or Helm definitions for almost anything. That means that I might need to use cdk8s for my own stuff, and something else for third-party apps. That, by itself, might not be an issue. There is nothing terribly wrong with using more than one tool for something.
I hope that github.com/awslabs/cdk8s/issues/141 gets done.