🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
Hi Anton, thanks for making this video, but I observed at 9:34minutes, the accessModes shouldn't be "ReadWriteOncePod" as you talked here about the Pod accessibility. Please correct me if I misunderstood something here.
Thanks! Usually videos only show these deployment strategies conceptually, but you demonstrated how it's actually done! Big thanks!
Never seen such a clear explanation before. Hats off 👍👍
thanks a lot!
Straight into the classics!!!! 🔥🌟🔥🌟🔥🌟🔥 Heartfelt thanks, Anton!!! 🙏❤🙏❤🙏❤🙏
Thank you =)
you deserve lots of subscribers, thank you for sharing your knowledge.
Thanks :)
Wow! This is really helpful K8s deployment content for when we call a service API and it shows a "Service Upstream" problem. Sir, your content is unique among K8s tutorials. 💝
I love your explanations, very clear, awesome examples, and straight to the point. Thank you for your hard work!!
Thank you!
This is so well explained.
You also added examples that we can understand and apply in the real world.
Thanks a lot for sharing such knowledge. Subscribed.
thanks!
This is awesome! Please make detailed canary setup videos; they're really very helpful.
Thank you! Will do
Great Content, Thanks Sir, best IT teacher, learned a lot from You! ❤
Thank you❤
This is always an exciting topic, a fantastic video, thanks for sharing this quality of content!!!
thanks!
Thanks for sharing your knowledge, your explanation is up to the mark.
Thank you!
underrated channel
awesome! visualization is the key
thank you!
Anton my man! Quality content as usual
Thank you!
Very well explained, as always. Thanks a lot for all your videos.
🫡
You've explained it so well, sir 🙏
I'd like to share this on Twitter... May I?
sure :)
awesome structured video, thanks!
Thank you!
Thank you very much for all your content.
my pleasure
These tutorials are amazing
thank you!!
Hi, I'm from India, and your teaching style is very good. I'm waiting for more videos on Kubernetes and Terraform with Azure.
Thanks, Azure is coming soon =)
Masterclass. Thanks Teacher!
Thank you!
Great explanation! Thank you for doing this.
thanks!
Very nice explanation, and helpful. Thanks a lot.
thank you!
Thanks a lot Anton!
welcome!
this is amazing, well explained!
Thank you! Can we balance traffic between services in different namespaces using Istio with Flagger or something?
It's not common; what's your use case? I'll see if I can test Istio with a cross-namespace virtual service.
It would be great. There are three services in different namespaces: stage, prod, and green. I need to balance traffic between them. This can be done using ingress canary, but in that case, if the application crashes, it is not excluded from balancing, and the user will receive either a 200 or a 503.
Excellent, thank you!
You're welcome!
Great content, thanks Anton Putra.
How could you use a blue/green strategy in a cluster that has many deployments interconnected with each other? If you have to change all the services to point to the new deployment, will all your external clients use it too, or not?
I'm not sure if I understood the question, but you can use blue/green deployment. Before providing access to your clients, you can thoroughly test your new "green" deployment. If it looks okay, you can, let's say, change the DNS or Kubernetes (k8s) label.
You can use other strategies as well. For instance, you might need an additional HTTP header to hit the new version, etc. It's much more difficult for data transformation pipelines that many companies use, such as with Kafka.
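To illustrate the label-switch approach mentioned above, here is a minimal sketch (the app name, labels, and ports are hypothetical, not from the video):

```yaml
# Hypothetical Service currently routing to the "blue" Deployment.
# The blue and green Deployments run side by side with
# "version: blue" / "version: green" pod labels; switching all
# traffic to green is a one-line change to the selector.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue   # change to "green" once the green deployment passes testing
  ports:
    - port: 80
      targetPort: 8080
```

Because the selector flips atomically, all traffic moves at once, which is exactly why the green stack should be tested thoroughly before the switch.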
Thanks for your reply, Anton. Suppose you are running an application that has 10 or more microservices in your cluster. If you upgrade one of them, blue/green makes it easy to run unit tests, but if you have to run more complex tests, like integration or functional tests, it becomes very hard (from my point of view). I mean, you would have to duplicate all the other deployments and make them point to the upgraded deployment. Again, great content 🤜🏼🤛🏼
@@agonzalezo Agreed. Sometimes you have to test all different applications together, let's say, in a staging environment. Instead of using blue-green deployment, you just release them all at once. Since you tested that in staging, you have a good chance of a successful production push.
Love your content! What workstation do you have? An ARM MacBook laptop?
Thanks, yes Apple M1 Pro
Thanks for the video. Question: what does a deployment strategy look like when there are database migrations, and how do you plan a rollback in this type of situation?
It's case by case, but in general, try to make migrations backward compatible.
Yes, although as the company grows and technology teams are formed, it becomes necessary to implement policies to ensure that these methodologies are followed by everyone on the team. So, in the case of databases, what would the policies look like? One policy could be: modifying a field in the database involves the following steps: 1) create a new field with a different name, migrate, run a test; 2) ensure new information is recorded in the new table, keeping new records in both tables, run a test; 3) migrate data from one table to the other, run a test; 4) ensure new information only enters the row, run a test; 5) delete the old table; 6) done?
Thanks
Thanks!
Thank you for support!
Very good video brother and very nice explanation 🫰
thank you!
Please make a DETAILED video on Cortex.
Love your videos. Both Prometheus Operator videos helped me out a lot.
Sure will do!
Sir, do you have a video about Kubernetes pod termination and SIGTERM? How is a pod gracefully terminated, especially in prod?
I don't, but there is a hook that you can use to provide a custom command to execute before terminating the pod: kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
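As a reference, here is a rough sketch of that preStop hook (the pod name, image, and sleep duration are illustrative, not from the video). Kubernetes runs the hook first, then sends SIGTERM, and waits up to terminationGracePeriodSeconds before killing the container:

```yaml
# Sketch of a Pod with a preStop lifecycle hook for graceful shutdown.
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app
spec:
  terminationGracePeriodSeconds: 30   # total time allowed for shutdown
  containers:
    - name: app
      image: nginx:1.25
      lifecycle:
        preStop:
          exec:
            # A short sleep gives load balancers time to remove the pod
            # from their endpoints before the process receives SIGTERM.
            command: ["sh", "-c", "sleep 10"]
```

The application should also handle SIGTERM itself (finish in-flight requests, close connections) within the remaining grace period.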
So, in a canary deployment, we can forward 10% of traffic to the new version. Can we make sure that only our team's users can access this new version via the 10%, while end users and customers access the old version via the other 90%? Is that possible?
Sure, if you use native K8s objects, you would add an additional label to the deployment, for example, "deployment: canary". Then, you'd create another service that selects only canary pods, similar to the blue/green example. In Flagger, this is already implemented, and when you run a test, it will target only the canary.
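A minimal sketch of that canary Service (the app name, label values, and ports are hypothetical):

```yaml
# Hypothetical Service that selects only pods carrying the extra
# "deployment: canary" label, so internal test traffic can be
# pointed directly at the new version while regular users keep
# hitting the main Service.
apiVersion: v1
kind: Service
metadata:
  name: myapp-canary
spec:
  selector:
    app: myapp
    deployment: canary   # only the canary Deployment's pod template has this label
  ports:
    - port: 80
      targetPort: 8080
```

The main Service selects on `app: myapp` only, so it still balances across both stable and canary pods; the canary-only Service is what your team uses for testing.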
I have a question: why are there 2 pods being created at the 5:08 mark, when the specified maxSurge is 25% of 4 replicas, which should be 1?
Yes, 25% of 4 replicas is 1 surge pod (5 total), but K8s also terminated 1 old pod at the same time, so 2 new pods were being created at once.
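For reference, a sketch of the strategy settings behind that behavior (the Deployment name, labels, and image are examples, not from the video):

```yaml
# With 4 replicas, maxSurge 25% rounds to 1 extra pod (5 total during
# the rollout) and maxUnavailable 25% allows 1 old pod to be terminated
# early, so 2 new pods can be starting at the same time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # extra pods allowed above the desired count
      maxUnavailable: 25%  # old pods that may be down during the update
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:v2
```

Note that maxSurge rounds up and maxUnavailable rounds down, which is why both resolve to 1 pod here.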
What's the difference between rolling update and canary?
By "canary," we mean a controlled rollout of the new application version to a small subset of users, followed by testing that version. If the tests pass, we roll out this version to a wider audience. Kubernetes has rolling update built in, which gradually replaces the old version with the new one. You can potentially use health checks for testing, but it is not the same.
@@AntonPutra How do you configure the canary rollout to target a small subset of users? Just use a small percentage, or also use sticky sessions?
Hi, I successfully deployed some services but I'm hitting a "Nameserver limits were exceeded" issue. Could it be caused by having too many services?
It's very unlikely. What's your setup, and can you paste the exact error message?
@@AntonPutra I found a way to fix it; I ran `systemctl restart systemd-resolved.service` and the issue is gone somehow.
Another question: how do we deploy a pod if its tasks or actions are executed internally and not triggered by a request from a user?
Do you mean something like a data pipeline (Kafka consumer/producer)? You can automate it, but it's much harder, which is why most examples focus on request-based apps :)
I'm using Argo Rollouts, but I don't know the main difference between the native K8s deployment strategy and Argo Rollouts.
It uses the default rolling update unless you explicitly change it in the YAML.
Argo Rollouts can do canary and blue/green.
Could you go a bit slower?
Okay
Forget Istio, it's Chinese stuff. I wouldn't trust it; remember some of the Chinese software that had seriously amateurish vulnerabilities? It's super untrustworthy. Everything else in the video is great.
Fortnite!!
you should enable "join" so we can support you
Thanks will do =)