Thank you so much.
Your tutorials were helpful indeed.
Most importantly, your editing skills are top tier.
I'm building my own channel, and you are my inspiration for YouTube. Thank you for everything.
Pure gold!
Let's goo!! 💪💪💪💪
Beautiful explanation. The only thing I had questions about, and thought I would share, is the topology key. I didn't really understand it at first (I'm a beginner and was missing some context), so to elaborate:
I think a good way to explain it (or a better common name for it) would be the node group: the group of nodes the affinity rule will search for pods within. In the video example, grouping by hostname means searching for matching pods per node, because the hostname is unique for each node. You could also group by a different node label, like "region", so the nodes are grouped by their region and you can make sure your pods are placed on a per-region basis.
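To make that concrete, here is a rough sketch of a pod anti-affinity rule using the topology key (the app=web label and the nginx image are just placeholders I made up; kubernetes.io/hostname and topology.kubernetes.io/zone are the standard well-known node labels):

apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web            # the pods this rule searches for
        # topologyKey defines the "node group": with kubernetes.io/hostname each
        # node is its own group, so at most one matching pod lands per node.
        # Swapping in topology.kubernetes.io/zone would instead enforce at most
        # one matching pod per zone.
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx:1.25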
Thank you, clear and with an example. Keep it simple!!!
A genius!!! Greetings from Argentina
Love it!
Another excellent video.
Great explanation. Now I am encouraged to use it more.
Amazing! Thanks a ton.
Great explanation. I'm just wondering if kind has an option to name nodes? I can't find anything in the documentation.
Best explanation 👍🏻👌🏻👌🏻👌🏻
Very valuable explanation. Interesting that I have much more flexibility using pod affinity than node affinity. But instead of using pod anti-affinity, in some cases I would stick with using a DaemonSet.
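For the one-pod-per-node case, a minimal DaemonSet sketch like this (the name, label, and image are placeholders) does the same job without any anti-affinity rules:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.36
        command: ["sleep", "3600"]
        # A DaemonSet runs exactly one copy on every (matching) node,
        # so no pod anti-affinity on kubernetes.io/hostname is needed.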
Thank you for this great tutorial.
To extend this concept, how would you go about scheduling pods with a physical-rack-based anti-affinity rule, assuming the nodes had labels applied for the rack they are located in?
@emilne83 I would recommend topology spread constraints for most circumstances like this, assuming your use case is about spreading replicas across different failure domains (node, rack, aisle, zone, etc.).
I'd recommend avoiding pod affinity/anti-affinity entirely, as it has a pretty poor implementation that absolutely wrecks cluster autoscalers.
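As a rough sketch for the rack question, assuming the nodes carry a custom rack label such as topology.example.com/rack (that label name, the app label, and the image are all made up here), the spread constraint would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                              # replica counts per rack may differ by at most 1
        topologyKey: topology.example.com/rack  # hypothetical rack label applied to the nodes
        whenUnsatisfiable: DoNotSchedule        # hard constraint; ScheduleAnyway would make it a soft preference
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: nginx:1.25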
How do you schedule 2 pods on 2 separate nodes that have the same labels?
Many ways ...
You can use maxSkew.
Topology spread constraints. Although I recommend never using hard scheduling requirements unless you truly need them (distributed databases, etc.), and sticking to preferred scheduling policies in all other cases.
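For the two-pods-on-two-nodes question above, a minimal sketch (the app label and image are placeholders) combining maxSkew with a soft, preferred-style constraint might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                          # at most 1 replica difference between nodes,
        topologyKey: kubernetes.io/hostname # so the 2 replicas land on 2 separate nodes
        whenUnsatisfiable: ScheduleAnyway   # soft preference; DoNotSchedule would make it a hard requirement
        labelSelector:
          matchLabels:
            app: api
      containers:
      - name: api
        image: nginx:1.25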
I don't really understand this one, to be honest. Nodes for a given cluster will always be in the same region; Kubernetes does not work with control planes and worker nodes across geographically separate network boundaries.
You're probably going to confuse people with this example.