Thanks for this well-explained video on PV, PVC, and SC. 👍
To answer your question: how did we manage to schedule a pod on the control-plane node?
In K8s, control-plane (master) nodes are tainted to prevent pods from being scheduled on them. In this case we specified nodeName explicitly in the pod YAML, telling Kubernetes to place the pod on the master node. That bypasses the normal default scheduling process, so the pod lands on the control-plane node regardless of the taints in place.
But this is not ideal in a real production environment; there we use a combination of taints and tolerations, plus nodeSelectors or nodeAffinity, to allow a pod to be scheduled on a specific node.
1. Taints and tolerations: taints are applied at the node level, tolerations at the pod level. This gives the node the ability to control which pods can be scheduled on it (node-centric approach).
2. nodeName, nodeSelector and labels: these give the pod the ability to decide which node it should go to (pod-centric approach). See the two sketches right after this list.
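To make that concrete, here are two minimal pod sketches (the pod names and node01 are placeholders, not from the video). The first pins the pod with nodeName, which bypasses the scheduler entirely, so NoSchedule taints are ignored; the second keeps the scheduler in the loop and relies on a toleration plus a nodeSelector, assuming the standard kubeadm control-plane taint:

# 1) Pod-centric manual placement: nodeName bypasses the kube-scheduler
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-manual            # placeholder name
spec:
  nodeName: node01                 # pod is bound directly to node01
  containers:
  - name: nginx
    image: nginx
---
# 2) Scheduler-driven placement: toleration + nodeSelector
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-scheduled         # placeholder name
spec:
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"   # default kubeadm taint key (v1.24+)
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/hostname: node01 # well-known label present on every node
  containers:
  - name: nginx
    image: nginx

Both can be applied with kubectl apply -f, but only the second one still respects taints it does not tolerate.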
Very well explained, thank you so much!
Basically by manual scheduling, with which we can deploy a pod manually without the help of the kube-scheduler.
@@yashjha2193 That's correct, we don't normally do this in a production setup, but this was just to show that you can force a pod to be scheduled on a particular node in case you need to do some troubleshooting on that node.
@@TechTutorialswithPiyush yeah Understood bro
Completed the video...!!!!!!
First Comment, Thanks for the detailed explanation on K8s Volumes concepts. Thank you Piyush
Thanks for liking
Thanks for this wonderful detailed concept
Glad you found it helpful
Nicely explained...thanks for the tutorial. :)
You're welcome! Keep learning!
Thank you for the great explanations
glad you found it helpful
Always enjoy your explanations from a low-level perspective, good work!!
Keep the learning going 💪
Fantastic, nicely explained. My doubts got cleared after watching the PV and PVC part.
Glad to hear that
Hi Piyush, very well explained. It's because you removed the taint from the master node.
Correct
excellent, thanks.
You are welcome!
How can we schedule workloads on the control-plane node? As you mentioned in the series, we can use taints on nodes and tolerations on workloads to permit them to be scheduled on the control-plane or other specific nodes.
Remove taints from control plane
Correct, or add a toleration to the pod that matches the node's taint. It's not a production best practice, but it is possible.
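For reference, on a kubeadm cluster the control-plane taint can usually be checked and removed like this (controlplane is a placeholder node name; on clusters older than v1.24 the taint key is node-role.kubernetes.io/master instead):

kubectl describe node controlplane | grep -i taint
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-

The trailing minus removes the taint; the same command without it adds the taint back.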
Thank you Jalal
@@TechTutorialswithPiyush thanks piyush, learning a lot from you
The taint on the master node would restrict us from creating our custom workload there. It seems you removed the taint from the master node to create your pod.
Absolutely correct
Please explain more about PV and PVC. As per the comments below, a PV cannot be shared with another PVC, so how do we manage this in production? If multiple pods are present on a node, does each pod request storage from a different PV? Is the management of PV and PVC storage the responsibility of the DevOps team or the K8s admin? Because this may lead to a lot of unutilized storage. Also, please explain the Recycle reclaim policy once.
@TechTutorialswithPiyush
Hello Ashish, let me try to answer:
- Yes, there is a 1:1 relationship between a PV and a PVC. However, in production we can use dynamic volume provisioning, which automatically creates the volume for the PVC that you create:
kubernetes.io/docs/concepts/storage/dynamic-provisioning/
- It's not a good practice to share a volume across multiple pods.
- It depends on the organizational structure, but there could be a separate team or sub-team with storage admin privileges who manage the storage; in some orgs it could be a DevOps or K8s admin with the storage admin role.
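As a rough sketch of dynamic provisioning (the class name fast-ssd and the provisioner are only examples; you would use whatever CSI driver your cluster or cloud provides), you create the StorageClass and the PVC, and the PV is provisioned automatically:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                     # illustrative name
provisioner: ebs.csi.aws.com         # example: AWS EBS CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 5Gi

The PV only gets created once a pod that uses data-claim is scheduled, because of WaitForFirstConsumer.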
@@TechTutorialswithPiyush thanks.
Maybe you removed the NoSchedule taint from the control plane?
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-
Good catch!
Piyush, we specified the node name directly in the pod YAML. So can I assume that giving the node name directly in the pod definition overrides or ignores any taints specified on a node? That is, let's say a node named node01 has a taint app=blue:NoSchedule, but if I add the node name node01 directly in the pod YAML, will the pod get scheduled on node01 without the pod YAML having any toleration for it?
That's a very good question, and you are right. If we specifically add nodeName in the YAML, that means we are doing manual scheduling, and taints and tolerations are only considered by the scheduler while scheduling the pod. In this case we are not using the scheduler at all. If you want, you can test it yourself and share the results.
@@TechTutorialswithPiyush So due to manual scheduling here, the taint on the master node is ignored. Taints and tolerations are only considered when scheduling is done by the scheduler, not with manual scheduling. Is that correct, Piyush?
@rinkirathore6502 Correct
The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify .spec.nodeName for a Pod, that bypasses the scheduler; the Pod is bound onto the node you assigned it to, even if there are NoSchedule taints on that node. If that node also has a NoExecute taint, the kubelet will evict the Pod unless it has a matching toleration.
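As a rough sketch (the taint key dedicated=special is made up purely for illustration), a pod bound with nodeName would still need a toleration like this to survive a NoExecute taint on that node:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-noexecute         # placeholder name
spec:
  nodeName: node01                 # manual binding, scheduler is bypassed
  tolerations:
  - key: "dedicated"               # hypothetical taint key
    operator: "Equal"
    value: "special"
    effect: "NoExecute"            # without this, the kubelet evicts the pod
  containers:
  - name: nginx
    image: nginx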
@@TechTutorialswithPiyush got it, thanks Piyush
I have one query: you described that multiple PVCs can be attached to a single PV as long as storage is available?
My concern is: is it actually allowed to attach multiple PVCs to a single PV?
Yes, correct. The PV is the storage pool from which we can create multiple PVCs to claim a slice of that storage pool and attach it to a pod.
@@TechTutorialswithPiyush
Hey Piyush, I think this is not correct.
PV and PVC work as a one-to-one mapping only.
We can't allocate multiple PVCs to a single PV.
I have checked the Kubernetes documentation as well.
Please kindly have a look at it.
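For reference, a minimal statically provisioned pair (all names and the hostPath are made up for illustration); once demo-pvc binds demo-pv, no other PVC can claim that PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual        # keeps the default StorageClass out of the way
  hostPath:
    path: /mnt/demo-data          # single-node test setups only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi

A second PVC with storageClassName: manual would stay Pending until another matching PV is created.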
@@Sauline1231 Sorry, my bad, I got confused. You are right.
@@TechTutorialswithPiyush
It's okay. As humans we learn from mistakes more.
I even learned a lot from your video, and thanks for your quick response.
@@Sauline1231 Thank you
Please explain how a pod gets its IP address.
1. The pod is scheduled to a node > 2. The kubelet invokes the CNI plugin (Calico, Weave, Flannel, Cilium) > 3. The CNI plugin sets up the IP pool and network interfaces for the pod's network namespace > 4. The CNI plugin (by way of the kubelet) assigns an IP address to the pod.
The assigned IP then shows up in the pod's status.
You can view the pod CIDR / network configuration in the files under /etc/cni/net.d/, in the CNI plugin's ConfigMap, or in the kubeadm cluster configuration.
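A few commands to verify this on a kubeadm cluster (a rough sketch; anything in angle brackets is a placeholder, and the exact files under /etc/cni/net.d/ depend on the CNI plugin installed):

kubectl get pod <pod-name> -o wide                            # shows the IP assigned to the pod
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'    # pod CIDR allocated to that node
ls /etc/cni/net.d/ && sudo cat /etc/cni/net.d/*               # CNI configuration on the node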
Thank you so much for the detailed explanation! wonderful
🎉
Comment for target....!!!!