Day 29/40 Kubernetes Volume Simplified | Persistent Volume, Persistent Volume Claim & Storage Class

  • Published 19 Jan 2025

Comments • 51

  • @GopiVivekManne
    @GopiVivekManne 5 months ago +8

    Thanks for this well-explained video on PV, PVC, and SC. 👍
    To answer your question: how did we manage to schedule a pod on the control-plane node?
    In K8s, control-plane (master) nodes are tainted to prevent pods from being scheduled on them. In this case, we specified nodeName explicitly in the pod YAML, telling Kubernetes to place the pod on the master node and overriding the normal default scheduling process. This gives the pod the explicit ability to land on the control-plane node, regardless of the taints in place.
    But this is not ideal in a real production environment; there we use a combination of taints and tolerations with nodeSelectors or nodeAffinity to allow a pod to be scheduled on a specific node.

    1. Taints and tolerations: taints are applied at the node level, tolerations at the pod level. This gives the node the ability to decide which pods may be scheduled on it (node-centric approach).
    2. nodeName, nodeSelector, and labels: these give the pod the ability to decide which node it should go to (pod-centric approach).
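    As a minimal sketch of the pod-centric approach above (the names demo-pod and controlplane are illustrative assumptions, not from the video):

    ```yaml
    # Hypothetical pod spec: nodeName binds the pod directly to a node,
    # bypassing kube-scheduler (and therefore any NoSchedule taints).
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      nodeName: controlplane   # assumed control-plane node name
      containers:
        - name: web
          image: nginx
    ```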

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  5 months ago

      Very well explained, thank you so much!

    • @yashjha2193
      @yashjha2193 3 months ago

      Basically by manual scheduling, with which we can deploy a pod manually without the help of kube-scheduler.

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  3 months ago

      @@yashjha2193 That's correct. We don't normally do this in a production setup, but this was just to show that you can force a pod to be scheduled on a particular node in case you need to do some troubleshooting on that node.

    • @yashjha2193
      @yashjha2193 3 months ago

      @@TechTutorialswithPiyush Yeah, understood, bro.

  • @bhanubisht8
    @bhanubisht8 14 days ago +1

    Completed the video...!!!!!!

  • @sangativamsikrishna1691
    @sangativamsikrishna1691 5 months ago +1

    First comment! Thanks for the detailed explanation of K8s volume concepts. Thank you, Piyush.

  • @zebra-z1v
    @zebra-z1v 5 months ago +1

    Thanks for this wonderful detailed concept

  • @RajpalSingh-ui5zu
    @RajpalSingh-ui5zu 2 months ago +1

    Nicely explained...thanks for the tutorial. :)

  • @floehden
    @floehden 5 months ago +1

    Thank you for the great explanations

  • @abc-edm
    @abc-edm 2 months ago

    I always enjoy your explanations from a very low-level perspective. Good work!!

  • @prasantkumar1986
    @prasantkumar1986 1 month ago

    Fantastic, nicely explained. My doubts got cleared after watching the PV and PVC part.

  • @niyazahmad5058
    @niyazahmad5058 2 months ago

    Hi Piyush, very well explained. It worked because you removed the taint from the master node.

  • @AbdulMateen-bm3kv
    @AbdulMateen-bm3kv 5 months ago

    excellent, thanks.

  • @SinaTavakkol
    @SinaTavakkol 5 months ago +1

    How can we schedule workloads on the control-plane node? As you mentioned in the series, we can use taints on nodes and tolerations on workloads to permit them to be scheduled on the control plane or on specific nodes.

    • @Jalal921
      @Jalal921 5 months ago +2

      Remove taints from control plane

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  5 months ago +1

      Correct, or add tolerations to the pod that match the node's taint. It's not a production best practice, but it is possible.
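      A sketch of both options (the node name controlplane is an assumption; on kubeadm clusters the control-plane taint key is node-role.kubernetes.io/control-plane):

      ```shell
      # Option 1: remove the control-plane taint (not a production best practice)
      kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-

      # Option 2: keep the taint and give the pod a matching toleration instead:
      #   tolerations:
      #     - key: node-role.kubernetes.io/control-plane
      #       operator: Exists
      #       effect: NoSchedule
      ```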

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  5 months ago

      Thank you Jalal

    • @Jalal921
      @Jalal921 5 months ago

      @@TechTutorialswithPiyush Thanks, Piyush. Learning a lot from you.

  • @kaarthickpk
    @kaarthickpk 5 months ago +1

    The taint on the master node restricts us from creating custom workloads there. It seems you removed the taint on the master node to create your pod.

  • @ashishranjan4597
    @ashishranjan4597 3 months ago +1

    Please explain more about PV and PVC. As per the comments below, a PV cannot be shared with other PVCs, so how do we manage this in production? If multiple pods are present on a node, does each pod request storage from a different PV? Is the management of PV and PVC storage the responsibility of the DevOps team or the K8s admin? This may lead to a lot of unutilized storage. Also, please explain the Recycle reclaim policy.

    • @ashishranjan4597
      @ashishranjan4597 3 months ago +1

      @TechTutorialswithPiyush

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  2 months ago +1

      Hello Ashish, let me try to answer:
      - Yes, there is a 1-to-1 relationship between a PV and a PVC. However, in production we can use dynamic volume provisioning, which automatically creates the volume for the PVC that you create:
      kubernetes.io/docs/concepts/storage/dynamic-provisioning/
      - It's not a good practice to share the volume across multiple pods.
      - It depends on the organizational structure, but there could be a separate team or a sub-team with storage-admin privileges who manage the storage; in some orgs it could be a DevOps or K8s admin with the storage-admin role.
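      A hedged sketch of that dynamic-provisioning flow (the class and claim names, and the provisioner, are illustrative and depend on your cluster):

      ```yaml
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: fast-storage            # illustrative name
      provisioner: ebs.csi.aws.com    # example CSI provisioner; varies by environment
      reclaimPolicy: Delete
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: app-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-storage   # a PV is provisioned automatically for this claim
        resources:
          requests:
            storage: 5Gi
      ```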

    • @ashishranjan4597
      @ashishranjan4597 2 months ago +1

      @@TechTutorialswithPiyush thanks.

  • @pradipakshar
    @pradipakshar 2 months ago +1

    Maybe you removed the NoSchedule taint from the control plane?
    kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-

  • @rinkirathore6502
    @rinkirathore6502 4 months ago

    Piyush, we specified the node name directly in the pod YAML. So can I assume that giving the node name directly in the pod definition overrides or ignores any taints specified on a node? That is, let's say a node named node01 has a taint app=blue:NoSchedule; if I add the node name node01 directly in the pod YAML, will the pod get scheduled on node01 without the pod YAML having any toleration for it?

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  4 months ago +1

      That's a very good question, and you are right. If we specifically add nodeName in the YAML, that means we are doing manual scheduling, and tolerations are only considered by the scheduler while scheduling the pod. In this case we are not using the scheduler. If you want, you can test it yourself and share the results.

    • @rinkirathore6502
      @rinkirathore6502 4 months ago

      @@TechTutorialswithPiyush So due to manual scheduling here, the taint on the master node is ignored. Taints and tolerations are considered only when scheduling is done by the scheduler, not with manual scheduling. Is that correct, Piyush?

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  4 months ago

      @rinkirathore6502 Correct.
      The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler; the Pod is then bound onto the node where you assigned it, even if there are NoSchedule taints on that node. If this happens and the node also has a NoExecute taint set, the kubelet will evict the Pod unless there is an appropriate toleration set.

    • @rinkirathore6502
      @rinkirathore6502 4 months ago

      @@TechTutorialswithPiyush got it, thanks Piyush

  • @Sauline1231
    @Sauline1231 5 months ago

    I have one query: you described that multiple PVCs can be attached to a single PV as long as storage is available.
    My concern is: is it allowed to attach multiple PVCs to a single PV?

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  4 months ago

      Yes, correct. PV is the storage pool from where we can create multiple PVC to claim a slice of that storage pool and attach it to a pod.

    • @Sauline1231
      @Sauline1231 4 months ago

      @@TechTutorialswithPiyush
      Hey Piyush, I think this is not correct.
      PV and PVC work as a one-to-one mapping only.
      We can't allocate multiple PVCs to a single PV.
      I have checked the Kubernetes documentation as well.
      Please kindly have a look at it.

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  4 months ago

      @@Sauline1231 Sorry, my bad. I got confused; you are right.

    • @Sauline1231
      @Sauline1231 4 months ago

      @@TechTutorialswithPiyush
      It's okay. As humans, we learn more from mistakes.
      I even learned a lot from your video, and thanks for your quick response.

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  4 months ago

      @@Sauline1231 Thank you
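      To illustrate the one-to-one binding this thread settled on (all names are illustrative; hostPath is only suitable for single-node demo clusters):

      ```yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-demo
      spec:
        capacity:
          storage: 1Gi
        accessModes: ["ReadWriteOnce"]
        hostPath:
          path: /mnt/data
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-demo        # once bound, pv-demo cannot serve any other claim
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 500Mi    # the claim still consumes the whole 1Gi PV
      ```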

  • @maheshwarareddy8629
    @maheshwarareddy8629 5 months ago +1

    Please explain how a pod gets an IP.

    • @nope-ms4rx
      @nope-ms4rx 5 months ago +2

      1. The pod is scheduled/run > 2. kubelet invokes the CNI plugin (Calico, Weave, Flannel, Cilium) > 3. the CNI plugin sets up the IP pool and network interfaces for the pod's network namespace > 4. the CNI plugin (by way of kubelet) assigns an IP address to the pod(s).
      The assigned IP is also visible in the pod's YAML, in its status.
      You can view the pod CIDR network specification in the /etc/cni/net.d/ path, by viewing the CNI ConfigMap, by viewing the NetworkPolicy configuration, or by viewing the kubeadm YAML config file.
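      A few commands that surface the pieces mentioned above (assumes a kubeadm cluster with a CNI plugin installed; paths may differ by distribution):

      ```shell
      # CNI plugin configuration files on a node
      ls /etc/cni/net.d/

      # Pod CIDR assigned to each node
      kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

      # IP address assigned to each pod
      kubectl get pods -o wide
      ```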

    • @TechTutorialswithPiyush
      @TechTutorialswithPiyush  5 months ago

      Thank you so much for the detailed explanation! wonderful

  • @dr.hemantchauhan2613
    @dr.hemantchauhan2613 5 months ago +1

    🎉

  • @bhanubisht8
    @bhanubisht8 18 days ago

    Comment for target....!!!!