Node Selector, Node Affinity, Taints and Tolerations
Published on Feb 6, 2025
The Kubernetes Scheduler is responsible for assigning pods to nodes based on available resources and scheduling constraints. However, pods are sometimes not scheduled as expected for reasons such as:
- Node constraints (specific nodes required)
- Pod requirements (affinity, selectors, etc.)
- Cluster configurations (taints, tolerations, etc.)
To control pod scheduling behavior, Kubernetes provides Node Selector, Node Affinity, Taints, and Tolerations.
1️⃣ Node Selector - Basic Node Scheduling
🔹 What is Node Selector?
Node Selector is the simplest way to constrain pods to specific nodes. It works by matching node labels to a pod’s nodeSelector field.
📌 How It Works
Nodes have labels (key-value pairs).
Pods request nodes by specifying matching labels in their YAML definition.
📖 Example: Assigning a Pod to a Specific Node
Step 1: Label the Node
```sh
kubectl label node worker1 environment=production
```
Now, worker1 has the label: environment=production.
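Step 2: Reference the Label in the Pod Spec
A minimal pod spec for this step might look like the following sketch (the pod name nginx-prod and the nginx image are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-prod              # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx              # illustrative image
  nodeSelector:
    environment: production     # must exactly match the label applied in Step 1
```
Apply it with `kubectl apply -f pod.yaml` (the file name is arbitrary), then check where it landed.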
Step 3: Check Pod Placement
```sh
kubectl get pods -o wide
```
✅ The pod will only be scheduled on worker1 (because it has the matching label).
🚨 Limitation: nodeSelector only allows exact matches; it does not support complex rules.
2️⃣ Node Affinity - Advanced Node Scheduling
🔹 What is Node Affinity?
Node Affinity is a more expressive and flexible alternative to nodeSelector. It allows conditional scheduling rules for pods based on node labels.
🔍 Types of Node Affinity
1. Required Affinity (requiredDuringSchedulingIgnoredDuringExecution)
Hard rule: Pod must be scheduled on a matching node.
2. Preferred Affinity (preferredDuringSchedulingIgnoredDuringExecution)
Soft rule: Pod should be scheduled on a preferred node, but not mandatory.
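The required form is used in the example below; for the preferred (soft) form, the affinity stanza might look like this rough sketch (the weight of 1 and the environment=staging label are illustrative):
```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                    # 1-100; a higher weight means a stronger preference
        preference:
          matchExpressions:
            - key: environment
              operator: In
              values:
                - staging
```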
📖 Example: Using Node Affinity
Step 1: Label the Nodes
```sh
kubectl label node worker1 environment=production
kubectl label node worker2 environment=staging
```
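Step 2: Add the Node Affinity Rule to the Pod Spec
A minimal sketch of the required (hard) rule, assuming an illustrative pod named api-prod running the stock nginx image:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-prod                 # illustrative name
spec:
  containers:
    - name: app
      image: nginx               # illustrative image
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: environment
                operator: In
                values:
                  - production
```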
✅ The pod must be scheduled on nodes labeled environment=production.
💡 Operators Available in Node Affinity
In: Node label must match one of the specified values.
NotIn: Node label must not match any of the specified values.
Exists: Node must have the specified key (value doesn’t matter).
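As a rough fragment (it slots into the matchExpressions list shown in Step 2 above; the disktype key is a made-up label used only for illustration):
```yaml
matchExpressions:
  - key: environment
    operator: NotIn          # skip any node whose environment label is staging
    values:
      - staging
  - key: disktype            # made-up label key, included only to show Exists
    operator: Exists         # matches nodes that have the key, whatever its value
```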
🚨 Limitation: If no node matches the requiredDuringSchedulingIgnoredDuringExecution rule, the pod remains Pending.
3️⃣ Taints - Preventing Unwanted Pods
🔹 What are Taints?
Taints prevent certain pods from being scheduled on a node unless the pod has a matching toleration.
📌 How It Works
Nodes have taints applied (key-value pair with an effect).
Pods need a matching toleration to bypass the taint.
📖 Example: Adding a Taint to a Node
Step 1: Apply a Taint
```sh
kubectl taint nodes worker1 key=value:NoSchedule
```
Now, worker1 rejects all pods unless they have a matching toleration.
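To confirm the taint is in place, one quick check (a standard kubectl command; the grep filter is just a convenience) is:
```sh
kubectl describe node worker1 | grep -i taints
```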
Taint Effects
| Effect | Description |
|--------|-------------|
| NoSchedule | Prevents new pods from being scheduled unless they tolerate the taint. |
| PreferNoSchedule | Avoids scheduling pods, but not strictly enforced. |
| NoExecute | Evicts running pods unless they tolerate the taint. |
4️⃣ Tolerations - Allowing Pods on Tainted Nodes
🔹 What are Tolerations?
Tolerations allow pods to be scheduled on tainted nodes: a pod that declares a matching toleration is exempt from the taint's effect.
📌 How It Works
Nodes have taints.
Pods include a matching toleration to be scheduled.
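For example, a minimal pod sketch that tolerates the key=value:NoSchedule taint from the previous section (the pod name tolerant-pod, the busybox image, and the sleep command are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod             # illustrative name
spec:
  containers:
    - name: app
      image: busybox             # illustrative image
      command: ["sleep", "3600"] # keep the container running for the demo
  tolerations:
    - key: "key"                 # matches the taint key applied earlier
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
```
This makes worker1 eligible for the pod; as the note below points out, it does not pin the pod there.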
🚨 Important Note:
Tolerations do not force pods onto tainted nodes. They only allow scheduling if no other constraints block it.
Comparison: Node Selector vs Node Affinity vs Taints/Tolerations
| Feature | Node Selector | Node Affinity | Taints & Tolerations |
|---------|--------------|--------------|----------------------|
| Purpose | Assign pods to specific nodes | More complex scheduling rules | Prevent unwanted pods from running |
| Flexibility | Basic (exact match only) | Advanced (supports conditions) | Node-level restrictions |
| Used on | Pod spec (nodeSelector) | Pod spec (nodeAffinity) | Nodes (kubectl taint) |
| Key Use Case | Simple node placement | Optimized workload distribution | Node maintenance, isolation |
🎯 When to Use What?
| Scenario | Use Node Selector | Use Node Affinity | Use Taints & Tolerations |
|----------|----------------|----------------|---------------------|
| Assigning pods to specific nodes | ✅ | ✅ | ❌ |
| Enforcing strict node selection | ✅ | ✅ | ❌ |
| Using complex scheduling logic | ❌ | ✅ | ❌ |
| Avoiding scheduling on maintenance nodes | ❌ | ❌ | ✅ |
| Running only specific workloads on certain nodes | ❌ | ✅ | ✅ |
🚨 Pod Stuck in Pending?
✅ Possible Fixes:
If nodeSelector or nodeAffinity is too strict, modify them.
If a taint is blocking the pod, add a matching toleration.
🚨 Pod Not Respecting Taints?
🔍 Check the node's taints:
```sh
kubectl describe node node-name
```
✅ Ensure the pod has the correct toleration.
🔥 Final Thoughts
Node Selector = Simple but limited.
Node Affinity = More flexible, supports multiple conditions.
Taints & Tolerations = Used to repel or isolate workloads.