Thanks for this. I needed a nice refresher on autoscaling.
Thank you so much Marcel, I've been looking everywhere for this explanation. Thank goodness I found you, and I didn't know you had explained it a year ago ❤️❤️
You are very welcome
A reliable and practical approach to the basics of scaling.
I'm honestly very excited by the way you explain things. Very, very professional. Thanks for the know-how transfer.
I even recommended your videos to my colleagues; they like you too :)
Thanks, thanks, thanks, you always make my day :)
Thank you so much, you are generous and kind!
I already said that your videos are awesome, and if you don't mind I'd like to make a suggestion: it would be great if you broke the video into chapters using the YouTube feature, so we can go straight to a certain point we want to review, for example. And again, awesome content, you rock.
Hi Marcel. Great video as always. Thanks
Wow! Thank you very much Marcel for teaching and sharing all your demo samples. This helps a lot.
Nice triceps, man! Looking jacked!! 💪💪
Man, you're the DevOps monster!
Unreal quality. Thank you!
Very helpful video as always! Thanks!
Needed this 🙌 🙏
This is really amazing.
Hi, thanks for the detailed steps. I have followed everything, but the VPA recommendations are not showing.
Fantastic tutorial - thanks!
Very good video and presentation.
Thanks a lot.
Yes! Another gem. You're on a roll bro!
Great Video! Thanks for the detailed walk through.
Just one question: why did we install the VPA from a container that can access our cluster, instead of installing it directly into our cluster like we did with the metrics server and application pods?
As far as I can remember, at the time of this video the VPA install scripts were Linux-compatible only and required a few dependencies. Containers make this portable.
Good job Marcel, awesome video 👏🏼 ... kubectl apply -f thankyou.yaml 😉
Many thanks, a just-in-time video.
Thanks for sharing Marcel! Awesome video man
I think for autoscaling of nodes we already have something set up in our cloud, like "target groups", and at the pod level we use replica sets.
Then what would be the purpose of this pod-level scaling again???
I love your opening soundtrack.
Excellent session on HPA... please upload a detailed Helm chart video. Thank you.
12:44 Why use Debian instead of Alpine?
Is the VPA able to be installed on Alpine?
And thanks for the video!
The VPA repo has a vpa-up script which is assumed to be executed on a person's PC to deploy the components. It seems to be written for a Debian/Ubuntu-based OS, so I did not want to deal with the dependencies to attempt an Alpine port. I'm sure you could get it to work on Alpine, but it may require changes.
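For what it's worth, here's a rough sketch of that container approach (the image tag, kubectl version and mount path are my own assumptions, and a cloud kubeconfig may need extra auth plugins installed inside the container):

```bash
# Rough sketch: run the VPA install from a throwaway Debian container so
# the script's dependencies never touch your own machine.
docker run -it --rm -v ~/.kube/config:/root/.kube/config debian:bullseye bash

# Inside the container: install the dependencies the script expects,
# then clone the autoscaler repo and run vpa-up.sh against the cluster.
apt-get update && apt-get install -y git curl openssl
curl -LO "https://dl.k8s.io/release/v1.27.0/bin/linux/amd64/kubectl"
install -m 0755 kubectl /usr/local/bin/kubectl
git clone https://github.com/kubernetes/autoscaler.git
cd autoscaler/vertical-pod-autoscaler
./hack/vpa-up.sh   # deploys the recommender, updater and admission controller
```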
@@MarcelDempers thanks
Very helpful video. A blessing to me, because I am currently working on my bachelor's thesis. But I have a question. So using the VPA, you can define the requested resources more efficiently. But using the VPA brings no performance improvement, right? Because no matter what the currently requested CPU in the YAML file is, the pod will take as many resources as needed and available, to my understanding (provided that there is no limit specified).
You're right. The requested value simply helps the scheduler predict where the best place to schedule the pod is.
There is no performance impact or gain. The pod will use as much CPU as it can see, which is the "limit" field in the YAML.
Sometimes it may see all cores in the node, but the kernel will restrict it to only the limit in the YAML.
The Linux kernel has built-in CPU throttling that K8s uses.
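To make that concrete, here's a minimal, hypothetical pod spec (names and numbers are placeholders): the request only guides scheduling, while the limit is what the kernel actually enforces.

```yaml
# Hypothetical example: requests guide the scheduler, limits are enforced
# at runtime by the kernel's CPU throttling and the OOM killer.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: example/app:latest
      resources:
        requests:
          cpu: 100m        # used only to decide where the pod can be scheduled
          memory: 128Mi
        limits:
          cpu: 500m        # usage above this is throttled by the kernel
          memory: 256Mi    # exceeding this gets the container OOM-killed
```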
@@MarcelDempers Thank you very much for your quick response and further explanation. Have a great one!
Damn! This is such good material! Thanks a lot.
Thanks a lot, it's really helpful.
How do we reserve system (allocatable) resources on the AMI for the kubelet?
Great work!
I like your video!
Awesome explanation!
What theme are you using in vscode? It looks nice
Amazing, thanks !
Amazing Stuff. Thanks.
thank you marcel !
A critical question I have not found an answer for yet: in AUTO mode, how often does it intervene to make changes?
Dude, it's really nicely explained. Subscribed...
As always, awesome video and explanation.
But I have one question: can we implement the VPA and HPA at the same time? If yes, please make a video on that or explain in a comment.
Let's suppose we implement the VPA and we have limited resources on the node; then what will happen to that pod?
If we want to enable the VPA and the Cluster Autoscaler, how can we do that?
You can use VPA/HPA/Cluster-autoscaler at the same time.
The VPA will simply recommend resource limits based on actual usage. I would personally not have the VPA run in auto update mode. You need to manually set limits/requests based on how you want to slice and dice your resources for pods.
A resource-hungry pod will use all resources on a node and starve it. You can add the cluster autoscaler to add more nodes when this happens.
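If it helps, a minimal recommendation-only VPA object could look like the sketch below (the target Deployment name is a placeholder); with updateMode "Off" it only publishes recommendations and never evicts pods.

```yaml
# Sketch of a recommendation-only VPA; the target name is hypothetical.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  updatePolicy:
    updateMode: "Off"   # "Auto" would let the VPA evict pods to apply new values
```

You can then read the recommendations with kubectl describe vpa example-app-vpa and copy them into your own requests/limits.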
I am getting “error running summary “.
Can you help me with the fix?
I cloned from git and did kubectl apply
pure gold
I've read that the VPA shouldn't be used in prod as it regularly restarts the pods... is this true?
If yes, and if I have to run it in a dev env, how am I going to do accurate load testing? Is the only way here to replicate the prod load and do manual load testing in dev?
Hi, any news regarding the pod restarts? Did you have a chance to test it?
Thank you!!!!!!!
Awesome 👍
can you make a video on Rancher 2 ?
You deserve a good haircut 😆
Your videos are nice. Your muscular arms are sometimes distracting 😀
same here😅
Oh, I am so afraid now.
Some great stuff in here, but editing out every breath into one long relentless barrage of words is tiring to listen to.