Kubernetes Node Selector vs Node Affinity vs Pod Affinity vs Taints & Tolerations
- Published Jun 5, 2024
- 🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
▬▬▬▬▬ Experience & Location 💼 ▬▬▬▬▬
► I’m a Senior Software Engineer at Juniper Networks (12+ years of experience)
► Located in San Francisco Bay Area, CA (US citizen)
▬▬▬▬▬▬ Connect with me 👋 ▬▬▬▬▬▬
► LinkedIn: / anton-putra
► Twitter/X: / antonvputra
► GitHub: github.com/antonputra
► Email: me@antonputra.com
▬▬▬▬▬▬ Related videos 👨🏫 ▬▬▬▬▬▬
👉 [Playlist] Kubernetes Tutorials: • Kubernetes Tutorials
👉 [Playlist] Terraform Tutorials: • Terraform Tutorials fo...
👉 [Playlist] Network Tutorials: • Network Tutorials
👉 [Playlist] Apache Kafka Tutorials: • Apache Kafka Tutorials
👉 [Playlist] Performance Benchmarks: • Performance Benchmarks
👉 [Playlist] Database Tutorials: • Database Tutorials
▬▬▬▬▬▬▬ Timestamps ⏰ ▬▬▬▬▬▬▬
0:00 Intro
2:16 Kubernetes Node Selector
3:27 Kubernetes Node Affinity
7:06 Kubernetes Pod Anti-Affinity
8:57 Kubernetes Pod Affinity
9:44 Kubernetes Taints and Tolerations
▬▬▬▬▬▬▬ Source Code 📚 ▬▬▬▬▬▬▬
► GitHub: github.com/antonputra/tutoria...
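The first two techniques from the timestamps can be sketched as minimal manifests. This is a hedged illustration, not code from the video's repository; the label `disktype: ssd` and the pod/image names are assumptions:

```yaml
# nodeSelector: the simplest constraint - the pod only schedules
# on nodes carrying this exact label (illustrative label).
apiVersion: v1
kind: Pod
metadata:
  name: with-node-selector
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: app
      image: nginx
---
# Node affinity: same idea, but with operators (In, NotIn, Exists...)
# and a "preferred" (soft) variant that nodeSelector cannot express.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
  containers:
    - name: app
      image: nginx
```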
#kubernetes #devops #cloud - Science & Technology
Your explanation is really PRO!
It would be great if you could create a series on app tracing and monitoring on K8s, like the ELK stack for APM.
Thanks, I'll see what I can do.
These k8s videos are awesome, keep them coming.
It's so easy to understand through animation.
So much effort goes into making these,
very grateful to you for making these lectures.
Thank you! Yes, it takes some time :)
Very good explanation
Thank you Anton
Thank you!
I have been working with Kubernetes for the last 4 years. That is a very clean explanation. Nice video!
I would request a specific video on request-based HPA.
Thank you! Will do
Extremely useful topic, thanks for great content again, Anton!
Thanks!
What would be a condition when 2 both with similar configuration should always schedule in 2 diff nodes ?
@@user-pc1pm1vb7p if you phrase your question properly, Anton, I, or GPT could answer you :)
I remember some hard days when I started in DevOps: tainting some nodes for ML, then getting a "not enough nodes" error. I would fix it, and then a lead dev would manually change it from the Kubernetes dashboard, until I used ArgoCD with selfHeal and then RBAC so he couldn't change it. After that, the cluster became stable.
It's sometimes hard in large teams to prevent manual changes, but as you said, everyone should be using GitOps as much as possible.
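For the ML-node scenario above, a hedged sketch of how a taint plus a matching toleration keeps general workloads off dedicated nodes; the taint key `workload=ml`, the node name, and the image are assumptions:

```yaml
# Taint the node first (run once per ML node):
#   kubectl taint nodes ml-node-01 workload=ml:NoSchedule
# Only pods that tolerate the taint can be scheduled there:
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  tolerations:
    - key: workload
      operator: Equal
      value: ml
      effect: NoSchedule
  containers:
    - name: trainer
      image: my-ml-image:latest   # placeholder image
```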
Great Videos
Thank you Sid!
Very good explanation.
Thank you Anton
Thanks!
Hi, Anton, again, really interesting and REALLY well presented. For someone like me, who's only dealt with Minikube locally (so that means only one Node), it is something new, but, nevertheless, great knowledge to have for the future when I will deploy in production. As always, thank you!
Thank you! Sometimes, it's helpful to run multiple nodes even with Minikube. This could be useful, for example, to test how an app behaves if a node goes down (for example if you want to run it on spot).
minikube start --nodes 2
@@AntonPutra Much thanks for the reply, Anton! Wasn't even aware of the --nodes flag for minikube, will be sure to try it out! And can only agree with you, really useful to test behavior of when a node goes down and even architect for when you will be deploying in production on multiple nodes.
Nicely explained 👏
Thank you!
Thanks @AntonPutra for the highly detailed video. We are currently facing some issues related to pod scheduling in EKS, and this video provided some insights. We have a monitoring DaemonSet that normally takes ~1 to 1.5 minutes to spin up and become ready. But some of my application pods reach the ready state on the same node before the DaemonSet pod is ready, because of their faster startup time. Our monitoring tool will not inject the agent if the pod is ready before the DaemonSet. Could you please suggest which is the better approach, a taint or pod affinity? If there are other helpful suggestions besides affinity and taints, please share those as well.
Thanks in advance...
Can you run your monitoring tool as a sidecar instead of a DaemonSet?
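A sketch of the sidecar idea suggested above: the agent runs in the same pod as the app, so it is always on the same node and the DaemonSet readiness-ordering problem disappears. The container names and images are placeholders, not from the thread:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-agent
spec:
  containers:
    - name: app
      image: my-app:latest          # placeholder application image
    - name: monitoring-agent        # runs alongside the app in the same pod
      image: my-agent:latest        # placeholder agent image
      # The sidecar shares the pod's network namespace with the app,
      # so it can reach the app on localhost without any node-level
      # coordination or injection ordering.
```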
Thanks 🙏 a lot, great job 👏
My pleasure!
bro, what would be a condition where two pods with similar configuration should always be scheduled on two different nodes?
Great video! Can I ask what software you use for the diagrams and animations?
Thanks, adobe stack
@@AntonPutra Ah brilliant - I already have the Adobe all app suite. I probably haven't used more than half the apps though! 😂 Is the animation done with Adobe Animate?
Very informative video.
Thank you
Hey Anton, are you working with Azure or planning to make any videos on it in the future? Thanks for what you're doing, as always.
Yes, soon
Hey Anton, can you make a video on this: when we deploy pods to a GKE cluster, where do the container logs (/var/logs/) get stored, and what happens in the backend if we don't mount them with a persistent volume?
In GKE, EKS, and even AKS, you just need to update your logger to write to stdout or stderr. In GCP you'll get your logs in Stackdriver; you don't need to mount anything.
@@AntonPutra so it will not write anything on my GKE disk?
@@nishitkumar7650 No, just stdout.
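To illustrate the thread above: a container that writes to stdout needs no volume at all; the kubelet keeps a rotating copy of the stream on the node, and GKE's logging agent forwards it to Cloud Logging (Stackdriver). A minimal sketch, with the pod name as an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stdout-logger
spec:
  containers:
    - name: app
      image: busybox
      # Write to stdout; no PersistentVolume needed. View the logs with:
      #   kubectl logs stdout-logger
      command: ["sh", "-c", "while true; do echo hello; sleep 5; done"]
```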
What would be a condition where two pods with similar configuration should always be scheduled on two different nodes?
use podAntiAffinity - kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#more-practical-use-cases
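Expanding the answer above, a hedged sketch of podAntiAffinity that forces two replicas with the same label onto different nodes; the label `app: web` and the Deployment name are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # "required" = hard rule: never co-locate two pods labeled
          # app=web on the same node (topology = one node per hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: nginx
```

With only one schedulable node, the second replica would stay Pending under the hard rule; `preferredDuringSchedulingIgnoredDuringExecution` is the soft variant that merely tries to spread.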
Thank you so much! Can you please share Kubernetes scenario-based questions?
Welcome, you mean interview questions?
@@AntonPutra yes Anton
Please cover the entire Pod lifecycle.
@@soumyamishra8734 Got it, will do
Which tools do you use for video editing for your channel
Adobe suite
First 😎
Why don't you make a course on deploying pods on GKE with best practices: container log management, pod monitoring on GKE, and advanced concepts like deploying microservices on GKE?
Thanks, I'll think about it.
The diagram at 0:50 is confusing: the pod requests 2 CPU and 4Gi memory, so why is node-01, which has 6 CPU and 16Gi (more than the pod's request), considered to not have enough memory? The same question for node-03, which has 8 CPUs: why is it considered to not have enough CPU?
Well, that's the whole point of Kubernetes: to abstract away the data center. In the cloud, we typically use large instance types to reduce wasted resources, since we also need to run monitoring and logging agents on each node. So we use large instances and schedule multiple pods on a single virtual machine, which means the capacity that matters to the scheduler is what remains after the requests of the pods already running on each node.
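One way to read such a diagram (the per-node numbers here are illustrative assumptions, not taken from the video): the totals are node capacity, but the scheduler compares the pod's request against what is still allocatable after existing pods' requests:

```yaml
# Illustrative arithmetic for a pod requesting 2 CPU / 4Gi:
#   node-01: 6 CPU / 16Gi total, existing pods request 3 CPU / 13Gi
#     -> 3 CPU / 3Gi free: enough CPU, NOT enough memory
#   node-03: 8 CPU / 32Gi total, existing pods request 7 CPU / 8Gi
#     -> 1 CPU / 24Gi free: NOT enough CPU
resources:
  requests:
    cpu: "2"
    memory: 4Gi
```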
I haven't watched the video yet, but kudos to anyone who dares to take this topic on.
Thanks:)
First minute: how is the score resolved?
You can read more about the scheduler here - kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/
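As that page describes, the scheduler first filters out infeasible nodes and then scores the survivors, picking the highest-scoring node; on large clusters the fraction of feasible nodes it bothers to score is tunable. A hedged sketch of that knob in a scheduler configuration:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
# On large clusters the scheduler stops after scoring this percentage
# of feasible nodes and places the pod on the best one found so far,
# trading scheduling accuracy for latency.
percentageOfNodesToScore: 50
```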