How do you scale your apps and #Kubernetes clusters?
I don't
Thank you for this awesome video 👍, we all would like to see a video of HPA combined with Prometheus.
Great video as usual! This channel is very underrated
I'm terrible at marketing :(
Very nice presentation as always. Looking forward to learning about HPA using custom metrics from Prometheus.
Thanks for your videos. Yes, I would like to know how to scale pods with HPA based on metrics in Prometheus. Thank you very much!
I'm planning to release a video that explores different types of scaling on July 8.
Many thanks! I had never heard of or used the VerticalPodAutoscaler! There are many ways to describe scaling for applications; I also like the Scale Cube, which looks at it more from the point of view of how microservices can be scaled.
Brilliant...That was a great explanation. Keep up the great work
Great video on teaching and as a refresher for me on HPA and VPA!
I would like to learn and understand how to utilize metrics from Prometheus as another means for the autoscaling use-case.
It's coming... :)
@@DevOpsToolkit can't wait! I need it for a project like right now
@@DrorNir If everything goes as planned, that one should go live the third Monday from now.
Is the video made/available yet - the one on using Prometheus for custom metric monitoring and using it for HPA?
Hey, I want to leave my feedback. Your videos are very useful and the explanations are very good. Keep going, man!
Thanks
Great video as always!
I think that it would also be useful to introduce the KEDA autoscaler along with Prometheus-based HPA.
I am using KEDA and it is working great (in my case with RabbitMQ) since I can scale from zero pods, which is a huge cost saving.
We do Keda + Karpenter .. Magic
Yeah! KEDA is awesome.
Can I ask if you're using KEDA with GKE? I've had issues with intermittent metrics server availability. I love KEDA and want to use it, but it's def a blocker.
@@johnw.8782 I haven't used it in GKE just yet. So far, most of my experience with KEDA is on other providers.
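For anyone wanting to try this, a minimal sketch of a KEDA ScaledObject for a RabbitMQ-driven workload might look like the following (the names, queue, and thresholds are hypothetical, and trigger fields vary between KEDA versions, so check the docs for yours):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer            # hypothetical name
spec:
  scaleTargetRef:
    name: queue-consumer          # the Deployment KEDA scales
  minReplicaCount: 0              # scale to zero when the queue is idle
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders         # hypothetical queue name
        mode: QueueLength         # scale on the number of messages waiting
        value: "20"               # target messages per replica
        hostFromEnv: RABBITMQ_URL # AMQP connection string read from an env var
```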
Thank you for the video!!! Question: how do we horizontally autoscale databases in Kubernetes? What are the challenges and what would be the proper way to overcome them? (Maybe an idea for a future video)
Adding it to the TODO list for a future video... :)
Until then... If designed well, a DB should come with an operator that takes care of common operations, including scaling, so all you really have to do is change the number of replicas (unless you enable autoscaling, which is still not a common option).
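To illustrate that operator pattern (the resource kind and fields below are hypothetical; every operator defines its own), scaling usually boils down to editing a single field in a custom resource:

```yaml
# Hypothetical custom resource managed by a database operator;
# real operators use their own kinds and field names.
apiVersion: example.com/v1
kind: DatabaseCluster
metadata:
  name: my-db
spec:
  replicas: 3   # change this and the operator safely adds or removes members
```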
I started using Jsonnet and it has been a pain to use, with a steep learning curve. A few months later we moved to ytt as it was easier to manage, but now we are going with Kustomize for all new projects.
Jsonnet is really powerful, but when you bring someone new to the team and show them Jsonnet, they can easily feel overwhelmed.
That's my main issue with Jsonnet. It's too easy to over-complicate it and confuse everyone.
Thank you.
Thanks a ton.
Hey Viktor, this video is very helpful. Please make a video on HPA with the Prometheus monitoring solution.
Already added to my TODO list :)
Master ❤️
Good stuff. I believe the units for describing CPU limits should be called millicores instead of milliseconds, however.
whatever you say, based on your avatar, you're right
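For reference, this is how millicores show up in a container's resource spec (the values here are just placeholders):

```yaml
# inside a container spec:
resources:
  requests:
    cpu: 100m      # 100 millicores = 0.1 of a CPU core
    memory: 256Mi
  limits:
    cpu: 500m      # 500 millicores = 0.5 of a CPU core
    memory: 512Mi
```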
The Gist is not properly linked in the description! Can you fix it, please?
Sorry for that, and thanks for letting me know. It should be fixed now.
@@DevOpsToolkit thanks for the quick response, ur the best!
I would like to see a future video talking about the metrics for auto-scaling that you mentioned in the video (Prometheus, Kibana).
It's coming... :)
Hi Victor thanks for a great video :) Just a question from my side - do you know how gitops (ie with ArgoCD) handles auto-scaling as I assume the replica count on the deployment yaml will no longer conform to the declared yaml in an autoscaling setup?
Yeah. You should remove hard-coded replicas or nodes when using scalers. That's not directly related to GitOps. Argo CD and similar tools only sync manifests into clusters. If you do specify both replicas and a scaler, the former will be overwritten by the latter.
@@DevOpsToolkit thanks so much Victor - ahh ok, gotcha, I didn't realise I could leave out the replica count in the deployment manifest - thanks :) I'm going to look into this more. Also going to check out your videos on Argo Events and Rollouts to see how to deal with progressing a release through different environments while still using GitOps.
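To make that concrete, a Deployment managed by an HPA can simply omit the replicas field, so a GitOps sync has nothing to fight over (a minimal sketch with hypothetical names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  # no `replicas` field here: the HPA owns the replica count,
  # so an Argo CD sync never overwrites what the autoscaler decided
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: ghcr.io/example/my-app:1.0.0   # hypothetical image
```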
Great video! Would it be possible to run the VPA in recommend mode while relying upon the HPA to ensure scaling of pods? Can that combination be used to fine-tune the autoscaling policies?
It could, but I would not rely on that. VPA recommendations might easily be incorrect due to HPA activities. I recommend using Prometheus instead.
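For completeness, "recommend mode" corresponds to setting the VPA's updateMode to Off, so it computes recommendations without ever evicting pods (a minimal sketch, hypothetical names):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"   # recommendations only; the VPA never resizes or evicts pods
```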
Great video - as a complete beginner to Kubernetes, it's helped me to understand what I want to do with a particular project that I'm working on. I currently have a long-term process that runs under Python but runs in a single thread. Up until now I've scaled vertically by moving to more powerful machines, but also horizontally by running additional copies of the process on different processor cores and then dividing the clients up geographically. If I've understood correctly, with Kubernetes it looks like I could run one copy but get it to spread across multiple cores or even multiple servers as required, whilst to my clients it just looks like one machine? Do I need to do anything to my process to ready it for deployment on Kubernetes, or is it just a case of setting the resource limits and scaling parameters?
Assuming that it is a stateless application, all you have to do is define an HPA that will scale it for you or, if scaling is not frequent, manually set the number of replicas in the deployment.
It's stateless (I think) as nothing is left once the application exits other than some log files. I'm definitely going to have to put together a cluster and have a go. Thanks again!
Hey, you're doing a great job - I'm always waiting for your videos and for the notification bell to buzz ❤️ Just a question: for HPA with respect to memory, is there any reference information available? That would be helpful. Also, can we use them both simultaneously in our HPA manifest?
Don't use VPA together with HPA. They are not aware of each other and might take conflicting actions.
If you're wondering how to deduce how much memory to assign to a deployment managed by HPA, explore Prometheus. It should give you the info about memory utilization or anything else.
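If the question was whether a single HPA manifest can target CPU and memory at the same time, it can: autoscaling/v2 accepts multiple metrics and scales to whichever demands the most replicas. A minimal sketch with hypothetical names and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```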
@@DevOpsToolkit sure, thanks for the information 💯 Can you please come up with a more detailed video on cluster autoscaling in a GKE cluster and how it works - things like PodDisruptionBudget and the safe-to-evict annotation on pods, and how to use them the correct way? It would be a great help 💯
@@sahilbhawke605 Adding it to my TODO list... :)
@@DevOpsToolkit Sure, I'll be eagerly waiting ;)... Thanks for being such a great sport by sharing your valuable 💯 knowledge with us through your videos. Always waiting for your new video #devops 💯
When scale-in/down happens, how does k8s make sure there is no traffic being served by those pods? Will there be a chance that users experience interruption due to scale-in of pods?
When Kubernetes decides to kill a Pod, among other things it does the following.
1. Stop all new incoming traffic from going to that Pod
2. Send SIGTERM signal to the process inside the containers in that Pod
3. Wait until the processes respond with OK to SIGTERM or it times out (timeout is configurable).
4. Destroy the Pod
Assuming that SIGTERM is implemented in the app, all existing requests will be processed before the Pod is shut down. SIGTERM itself is not specific to Kubernetes but a mechanism that is applied to any Linux process (it might work on Windows as well, but I'm not familiar with it enough to confirm that). That means that if an app is implementing "best practices" that are independent of Kubernetes, there should be no issues when shutting down Pods.
As a side note, the same process is used when upgrading the app (spin up new Pods and shut down the old ones) so you need to think about those things even if you never scale down.
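The timeout from step 3, as well as shutdown hooks, are configurable on the Pod itself; a minimal sketch (hypothetical names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  terminationGracePeriodSeconds: 30   # step 3's timeout: the SIGTERM-to-SIGKILL window
  containers:
    - name: my-app
      image: ghcr.io/example/my-app:1.0.0   # hypothetical image
      lifecycle:
        preStop:
          exec:
            # small delay so endpoint removal propagates before SIGTERM arrives
            command: ["sh", "-c", "sleep 5"]
```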
May I know why you have deployment.yaml and ingress.yaml in the overlay directory even though you don't have any changes/patches to them? You can keep them in the base directory itself, right?
Also, how is a ReplicaSet different from HPA?
You're right. I should have placed those inside the base directory. I copied those files from another demo and failed to adapt them for this one.
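For anyone following along, the layout being described would mean manifests without per-environment changes stay in the base, while each overlay only references the base and patches what differs (a sketch; the file paths are illustrative):

```yaml
# overlays/prod/kustomization.yaml
resources:
  - ../../base             # deployment.yaml and ingress.yaml live in the base
patches:
  - path: hpa-patch.yaml   # hypothetical patch applied only in this overlay
```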
Hi Victor, thanks for this! I'd also really appreciate a video on how to do HPA based on metrics from Prometheus.
Edit: I also have a question about Karpenter. Does it scale both horizontally and vertically?
Great! Adding it to the TODO list... :)
Karpenter scales horizontally, but it has the advantage that it will add a node that can handle all of your pods in a pending state, rather than just randomly adding a node in one of your autoscaling groups that can be too big for your current needs.
@@Levyy1988 hey, thanks
@@Levyy1988 Exactly. That's why I said in the video that vertical scaling of nodes is typically combined with horizontal (new node, new size).
Karpenter is a much better option than the "original" Cluster Autoscaler used in EKS. It provides functionality similar to GKE Autopilot.
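As a rough sketch of how Karpenter is configured (this uses the v1beta1 NodePool API; field names change between versions, so treat the details as assumptions and check the Karpenter docs for yours):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        name: default        # cloud-specific node settings live in the NodeClass
  limits:
    cpu: "1000"              # cap on total CPU across nodes this pool may create
  disruption:
    consolidationPolicy: WhenUnderutilized   # repack and remove underused nodes
```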
Does Kubernetes support scaling to zero?
It does but that is rarely what you want. There's almost always something you need to run.
@@DevOpsToolkit The question is about running an LLM app, which is costly to run 24/7.
If that is the only thing you're running in that cluster, the answer is yes. You can scale down worker nodes. However, control plane nodes will have to keep running.
Actually, now that I think of it, why don't you just create a cluster when you need it and destroy it when you don't?
First to comment...yooo
Demo it, don't just talk about it - everybody can google 100 answers about this topic. Show people what you did in an enterprise environment, what you did in the real world. Don't just read the white paper.
Have you seen any other video on this channel? Almost all of them are demos, with a small percentage being about how something works (like this one). If anything, I might need to do fewer demos.