Are you signing and verifying your container images? If you are, which tools do you use? If you're not, why not?
Signing with Harbor, which used Notary, and verifying with a custom script, since admission webhooks were not a thing back then.
Great quote from Viktor: You don't have a single excuse not to use it!
We use this exact setup, signing with a KMS key, which is a nice way to outsource the key management aspect to the cloud provider.
Have you tried Notary v2 for signing and verifying? It uses the OCI 1.1 spec to store signatures as referenced artefacts, which is much cleaner than Cosign's tag-based approach. It also doesn't rely on Sigstore tools like Fulcio/Rekor; Notary works with an existing PKI you can trust. Notation also works with OPA Gatekeeper for admission control.
Yeah. Notary is great.
Notary v2 took too long.
Could you provide a guide like this using private repositories?
I don't think it should be any different for private registries.
Hi sir, thanks for the video.
How do I set up Sigstore locally to authenticate using OIDC?
I don't think there is any difference in setup in local clusters. It should work with KinD just as well as with a "real" cluster.
@@DevOpsToolkit thank you sir.
I'm also using a KinD cluster, running on Windows. Just wondering: do I really need to set up Sigstore locally to authenticate, or can we only sign images and push them to the repos?
@palanisamy-dl9qe Authentication is not mandatory.
@@DevOpsToolkit Thanks for your time, sir. I really enjoyed your video as part of my weekend learning.
A note about image verification: it is a beta feature. It is not ready for production usage, and there may be breaking changes.
You mean in Kyverno?
How can I apply the policy to verify images on existing deployments in production? Let's say I have added a step in my CI pipeline to sign the image after it is pushed to the registry, and I then run CD using helm upgrade; the policy stops me from performing a rolling update. How can I avoid that?
The policy would stop you during the attempt to do helm upgrade. It does not care whether it is Helm or anything else. It is activated when a request reaches the Kubernetes API.
Is that what you were looking for, or...?
I feel like, with all the tools that need some sort of private key or token, I would also need a GitLab "keystore". I love GitOps, but as a beginner it sometimes feels as if your Git instance becomes a single point of failure. How do you protect yourself from that?
In this specific case, there is no relation with Git except that you might want to keep the public (not the private) key in the repo with the code of the app.
Outside of Cosign, I don't think that Git is a single point of failure. If it goes down (temporarily), your system will continue working. You will not be able to deploy new releases through Git, but that's not because one is (or isn't) using GitOps; it's because there is no code to build the release from in the first place. Still, that is often not an issue in a disaster scenario. When bad things happen, the important thing is that the system is running. Not being able to deploy a new release is not a "disaster" as long as it's temporary.
@@DevOpsToolkit Oh, it was already late when I watched your video; I need to rewatch it 😊. I understood it as: if I want to sign the images in the pipeline, I would need to have the private key in a CD variable.
And yeah, that's a good point: deploying a new release is not the important thing in that case. I had more in mind the scenario where an attacker gets access to your Git instance.
Thank you, as always, for your great videos. I'm learning a lot!!!
What if I work in a private network? Does it still make sense to sign images?
I think it does. There's no operational or $$$ cost to it, so there's no good reason not to use it.
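For anyone wanting to try it, here is a minimal sketch of signing and verifying with Cosign using a locally generated key pair. The image reference `registry.example.com/silly-demo:1.0.0` is a placeholder; substitute your own registry and tag, and note that `cosign generate-key-pair` will prompt for a password to encrypt the private key.

```shell
# Generate a key pair; writes cosign.key (private) and cosign.pub (public)
cosign generate-key-pair

# Sign an image that has already been pushed to the registry
cosign sign --key cosign.key registry.example.com/silly-demo:1.0.0

# Verify the signature using only the public key
cosign verify --key cosign.pub registry.example.com/silly-demo:1.0.0
```

Only `cosign.pub` needs to be distributed (for example, committed to the app's repo); the private key stays in your CI system's secret store.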
@@DevOpsToolkit I agree. Many of the security threats that we face originate from *inside* the organisation, so any defence-in-depth strategy needs to include internal measures also.
Do we need a policy to prevent unsigned images, or could we configure the worker nodes to pull signed images only?
Or are you showing here that you can pull any image, but the silly-demo image needs to be signed?
I'm the same Reddit guy who asked for signing help.
Thanks for covering signing!
In the demo, I configured the policy to prevent unsigned `silly-demo` images. You can (and I think I said that in the video) set the policy to prevent any or all unsigned images from being used. If you do configure it to prevent any unsigned image, be sure that all images are signed by you, including third-party ones.
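A minimal sketch of such a policy using Kyverno's `verifyImages` rule might look like the following. The image reference and the public key are placeholders (the key would be the contents of your `cosign.pub`), and field names may vary slightly between Kyverno versions, so check the docs for the release you run.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-silly-demo
spec:
  validationFailureAction: Enforce   # reject non-compliant Pods instead of just auditing
  webhookTimeoutSeconds: 30
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/silly-demo*"   # placeholder; match your registry/repo
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

Widening `imageReferences` to `"*"` would enforce signatures on every image, which is exactly the case where all third-party images must be signed by you as well.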
@@DevOpsToolkit Got it, but I want to know whether this can be prevented at the container runtime of the worker node as well, without any policy?
It probably can, but I don't think that's what you want. The idea is to prevent it from being scheduled in the first place (from Kubernetes even attempting to create a Pod with a container based on that image).
@@DevOpsToolkit Thanks, now it's clear: we need to restrict it even at the k8s level as well.
@@75devendrasahu I would not say "even on the k8s level" but "only on the k8s API level". The API is the only point of entry. It does not matter whether you execute `kubectl apply`, `helm install`, or even create k8s resources from inside the k8s cluster. It always goes through the API. Since that is the only point of entry, admission controllers are a safe bet. They are executed before any request to the API is committed.
P.S. Admission controllers have other issues, but that would be a whole other subject to explore.
What about Google Cloud's Binary Authorization policy?
I haven't used it much so I can't comment on it in detail. From what I saw, if you are looking for a solution that works well with Google Cloud (and nowhere else), it looks like a good choice.
# TIL
I'm ashamed to admit that I didn't know about Kyverno. 🙂
I (tried to) use sse-secure-systems/connaisseur but stopped when I realized it's in its early stages and there are only a couple of devs supporting it, so it's risky for production.
Kyverno seems to have a little bit more support, but it's also in its early stages (beta?) and so it too is risky to use in production.
I'm thinking of using OPA Gatekeeper, which is (more) advanced and supported, but boy is it hard to configure. And it seems too powerful for the simple task of image signature verification.
In the "Kubernetes World" beta is OK to use. Alpha means that the API is likely going to change, while beta typically means that the API is stable and is not likely to change. Bear in mind that Kyverno itself is NOT beta, only the signature validation is. On top of that, Kyverno is in use by many companies in production. Finally, even if something goes wrong, that will most likely not affect your workloads, as long as you do NOT use mutating webhooks.
What I'm trying to say is NOT to wait with Kyverno. If you do prefer it over OPA, go for it right away.
One more thing... If all you need is to validate whether images are signed, Kyverno (or any other policy engine) might be overkill. Instead, I'd do it on the container image registry level (if it is supported). Harbor, for example, can be configured NOT to allow pulling of images that were not signed. Take a look at th-cam.com/video/f931M4-my1k/w-d-xo.html.
That being said, I still strongly recommend Kyverno, but only if you want to adopt policies beyond validations of image signatures.
Finally, you might want to look at th-cam.com/video/DREjzfTzNpA/w-d-xo.html for more info about Kyverno. There are other videos about Kyverno on this channel, but that one is the best one to start with.