Do you test Kubernetes resources? If you do, what do you use?
I use bats-core (github.com/bats-core/bats-core) combined with kubectl and yq. Not saying it's the best, but it works. I'll check out your video and try kuttl.
I've had pretty good success with `tilt ci` plus whatever programming-language testing framework you like (e.g. pytest). You can use whatever loops/variables/asserts you want. You can also do e2e-style testing, like hitting a web service, or more "unit test"-like checks, such as retrieving a resource (either from a local file or a running cluster) and asserting on certain fields.
For CI, I've used ctlptl to create clusters with decent success. It's a pretty thin wrapper around other k8s stacks like kind, k3d, and docker-desktop.
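The "retrieve a resource and check certain fields" idea above can be sketched in pytest style. Here the manifest is inlined as a dict for a self-contained example, but in practice it could come from `kubectl get ... -o json` or a local file; all the names (`my-app`, `staging`) are illustrative, not from the original comment:

```python
def get_field(obj, path):
    """Walk a dotted path like 'spec.replicas' through nested dicts."""
    for key in path.split("."):
        obj = obj[key]
    return obj


# Stand-in for a resource retrieved from a cluster or parsed from a file.
deployment = {
    "kind": "Deployment",
    "metadata": {"name": "my-app", "namespace": "staging"},
    "spec": {"replicas": 3},
    "status": {"readyReplicas": 3},
}


def test_deployment_is_ready():
    # All desired replicas should be reporting ready.
    assert get_field(deployment, "spec.replicas") == get_field(
        deployment, "status.readyReplicas"
    )


def test_deployment_namespace():
    assert get_field(deployment, "metadata.namespace") == "staging"


if __name__ == "__main__":
    test_deployment_is_ready()
    test_deployment_namespace()
    print("all checks passed")
```

Because these are ordinary test functions, you get loops, fixtures, and parametrization for free, which is the point being made about using a real language.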
Currently we have some basic tests: we install Flux on a kind cluster for our test environment (which is identical to prod) and wait for Flux to reconcile and for everything to reach a running/ready state. We also use kubeconform to validate the YAML manifests.
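That flow can be sketched as CI steps. The CLIs (kubeconform, kind, flux, kubectl) are real, but the paths, cluster name, and timeouts here are illustrative choices, not taken from the comment:

```shell
# Static validation first: no cluster needed
kubeconform -strict -summary manifests/

# Throwaway cluster for the test environment
kind create cluster --name flux-test

# Install the Flux controllers and wait for them to come up
flux install
kubectl wait --for=condition=Available deployment --all \
  --namespace flux-system --timeout 5m

# Wait for Flux to reconcile everything it manages
kubectl wait --for=condition=Ready kustomization --all \
  --namespace flux-system --timeout 10m
```

Running kubeconform before creating any cluster keeps the cheap, fast failures at the front of the pipeline.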
Thanks for another great video! It's a pity that this project looks like it's not maintained, but I'll definitely give it a try. What about a video on DevPod? Thanks!
I have it on my to-do list but I'm not sure I'll get to it soon. It feels the same as many before it. I might be wrong though.
Thanks for this (as always) great video! I opened an issue to ask about the status of the project (still alive/maintained). Let's see if we get an answer...
It wasn't possible for me to set a specific namespace in kuttl-test.yaml; it was ignored.
Also, a huge limitation is not being able to run it in a Docker-in-Docker configuration, since kind doesn't select custom networks. That would be an expected feature in today's CIs running on K8s or on-premises container agents.
I never tried to set a namespace in kuttl since I always set the namespace in the commands that apply manifests (e.g. `kubectl --namespace ... apply ...`), so I wasn't aware of that problem.
I'm not sure what you meant with the second comment. How do you run Docker inside Kubernetes clusters? Are you using the Docker shim by Mirantis? Also, using kind is not required by kuttl. You're free to run it in any Kubernetes cluster. What they offer with kind is just a shortcut to create a cluster, which you do not have to use (I do not use it myself). What do you use instead of kind? vCluster?
@@DevOpsToolkit Hi again from my first account!
Regarding the namespace, I think it is important to have the same configuration for in-kind and in-cluster tests. Even adding a 00-install.yaml with an `apply: namespace.yaml`, it was failing. I didn't want to mix declarative and imperative approaches by redoing everything with kubectl, also because environment variables are not parsed by kuttl.
In the second point I meant that for some testing it is not possible to use a real cluster, e.g. for operators containing cluster-wide resources that would conflict during helm install. That's why I'm testing using kind on an Azure DevOps agent on premises. This agent spawns containers to run pipeline builds over a mounted, temporary workspace. This makes kuttl fail, since it can't connect to the virtual network spawned by kind from the running container (kind issue #273). The solution I've found is to install kind, Helm, and kubectl at the beginning of each pipeline run in the VM workspace (and not in the container, as it is by default). It's a complex environment; I hope it is more understandable now :)
The issue with namespaces is that kuttl creates namespaces so that it can isolate tests and the resources under test. I haven't tried to override that, so I'm not sure there is a workaround.
As for "real clusters" for testing, I never use them myself. I tend to create clusters with KinD or, when working remotely, with vCluster. The reason is the same as you mentioned (cluster-level resources).
Great content as always from a great teacher, thanks a lot❤🎉 Sir, please 🙏 create a detailed video 📹 on Akuity Kargo (Argocd)
Adding it to my to-do list... 🙂
@@DevOpsToolkit Thanks!
Here it goes: th-cam.com/video/RoY7Qu51zwU/w-d-xo.html
@@DevOpsToolkit Thank you 😊 🙏 💓
Have you used k3s/k3d before? Just curious about your experience with kind vs. those. I've had a lot of success with k3[s/d], but I might not know what I'm missing.
Normally, I prefer Rancher Desktop (which does use k3s). However, for testing purposes with kuttl, I prefer kind since it's the one that is fastest to create/destroy and that can be done through a CLI.
@@DevOpsToolkit Whoops, that's embarrassing. I've never used Rancher Desktop and didn't realize it was backed by k3s. You even mentioned it in the video. Thanks for the response!
@@DevOpsToolkit My comment got deleted, probably because of the k3d URL; you can google it :) Anyway, the gist of my comment was that you can run k3d from the CLI no problem, and it is a bit faster than kind: `kind create cluster -n mykindcluster` (1.30s user, 0.40s system, 8% cpu, 18.895 total) vs. `k3d cluster create myk3dcluster --port 8443:443 --port 8080:80 --k3s-arg "--disable=traefik@server:0"` (0.14s user, 0.07s system, 1% cpu, 18.711 total).
As dutch guy this is a funny name to say the least,
CUNTTL sounds like what I think of k8s :D
Can datree be used to test crossplane manifests ?
Datree (the company) is no more, so I would not rely on it for anything. On top of that, Datree is not about testing but about linting and policies, so it's a very different type of tool. That being said, you can, if I remember correctly, create your own policies for anything, including Crossplane manifests. Still, if you're looking into policies, I recommend Kyverno (in part because Datree is no more).
I was really looking for a test solution, but kuttl seems to rely on the ready state, which, while better than nothing, usually isn't really a good test.
What do you mean by ready state?
Yeah, it seems to be about deploying the manifests to the cluster, waiting for all the resources to be created, and then testing the cluster state. I guess for Crossplane it is actually going to be creating the AWS (for example) resources. So this is kind of like a Terratest for Crossplane when used like this.
At my company, I wrote a framework around Terratest with the Helm plugin. Using Go unit tests, we feed the Helm chart and an input values file into Helm, Helm renders the manifests, and then we read that YAML back into Go structs (provided by the Kubernetes API modules) and write normal assertions against them. A full test suite allows us to add new features to a Helm chart used by about 500 microservices without fear of regressions.
On a related note, I know some people like the "YAML everything" approach, but it just leads to a kludgy mess IMO, especially if you need logic. Just use a proper programming language and be done with it!
When I use it with Crossplane, I am not setting up the AWS config, so it is not creating resources in AWS (or any other provider). I am not trying to test whether AWS resources were created correctly but whether Kubernetes resources were (including, and even more importantly, child resources).
As for YAML... I do not write my manifests in YAML unless they are simple. So, sticking with Crossplane examples, my claims are written directly in YAML, but my compositions are mostly defined in CUE. Still, even though I am (often) not writing YAML, I need to check that whatever I do write gets converted into proper Kubernetes resources and that those spin up other resources.
Nice - wonder if this would work with Openshift 🤔
I don't see why it wouldn't. It does not care much which resource definitions you have.
Based on my understanding, this tool gets the manifests that were generated in the cluster and tests diffs against the partial state described in the asserts.
Therefore, it doesn't matter which Kubernetes flavor you use.
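For readers new to kuttl, here is a minimal sketch of what those asserts look like on disk. The file layout (a step file plus an assert file per test directory) is kuttl's documented convention, but the resource names, replica count, and image below are illustrative:

```yaml
# kuttl-test.yaml: the test suite definition
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e
startKIND: true
---
# tests/e2e/my-app/00-install.yaml: step 1 applies the resource under test
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
---
# tests/e2e/my-app/00-assert.yaml: kuttl diffs this partial state against
# the live object and keeps retrying until it matches or the step times out
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
status:
  readyReplicas: 2
```

Because the assert only has to be a subset of the live object, the same test works on any conformant cluster, which is why the Kubernetes flavor doesn't matter.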
Awesome thanks both. Need to check it out!