Thanks for this video, very helpful. I have written a Go application that uses MongoDB as its database. When the frontend application (an HTTP server) starts, it connects to the MongoDB server and then listens for CRUD requests from any HTTP client. I have created a docker image of the frontend application and pushed it to Docker Hub. I would like to ask whether I would be able to deploy my application on Kubernetes with this setup: NFS persistent volumes and a MongoDB replica set deployed on VMs running on my local machine. I have already set up my Kubernetes cluster and, following this video, created the persistent volumes. Is this possible?
Hi Afriyie, thanks for watching. So you have your MongoDB replica set running on virtual machines on your workstation, and you want to run your frontend in your k8s cluster and have it talk to that replica set. Yes, that's definitely possible. How is your k8s cluster provisioned? Can your k8s worker nodes ping the IP address of the virtual machine where MongoDB is running? If yes, then you can just point your frontend to the IP address of the MongoDB VMs. Otherwise you might have to do some port forwarding between your workstation and the MongoDB VM and use your workstation IP to access MongoDB. I might not have explained it very well, but it's doable. Cheers.
Thanks for the great video. I have the following questions: Should we have more than one replica of the nfs client provisioner? Let's say I have Prometheus and Grafana up and running and I need the data to be saved no matter what. We have access mode ReadWriteMany. How does this work with workloads that have more than one replica storing data on the PV?
Hi Sir. Thank you so much for this... everything was created fine (SA, role, rolebinding, clusterrole, clusterrolebinding, storage class, provisioner pod), but when I try to create a PVC it stays in Pending state even after waiting a long time. What could be the reason?
Events:
Type    Reason                 Age                  From                         Message
Normal  ExternalProvisioning   9s (x20 over 4m42s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Also, if worker1 crashes, will the provisioner pod move to another node?
Hi Nagendra, thanks for watching. Yes, the nfs-provisioner pod is just a normal k8s resource; if it crashes or the node where it runs crashes, it will get started on another node. Regarding your PVC in pending state, have you verified manually that you can mount the nfs share on your worker nodes? And did you specify the storage class name as mentioned in this video?
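For reference, a rough way to check both (the server IP, share path and PVC name below are placeholders; substitute your own):
$ sudo mount -t nfs <nfs-server-ip>:/srv/nfs/kubedata /mnt && sudo umount /mnt   # run on a worker node; should succeed
$ kubectl get storageclass                                                       # should list managed-nfs-storage
$ kubectl describe pvc <pvc-name>                                               # confirm the storageClassName and check the events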
@@justmeandopensource Yes sir, I had verified manually that the nfs share can be mounted, that's good. I will reconfigure everything and if there are any issues I'll come back to you on this. Thanks for the support.
Really good info. I wondered how a replica set or deployment with, say, 3 pods would make a PVC that is unique to each pod. Is there a hostname option that could be used in the yaml that creates the deployment?
Hi Jon, It all depends on how we design the architecture. We first have to understand the application we are deploying and then plan the resources. A StatefulSet with volumeClaimTemplates gives each replica its own PVC, and each pod re-binds to the same PV every time it is recreated.
What I didn't get from this video is whether one can get data persistence from these dynamically provisioned "persistent" volume claims. I noticed that after you deleted the claims, the volumes also disappeared - I guess I'm struggling to understand where the "persistence" comes from since the data created in the "/mydata" mount point is now gone. Did I miss something?
Hi Venkat, thanks for your video. I have a doubt about PVs: if I have created a PV via a storage class (PVC -> storage class -> PV), is it possible to increase the PV size after the PV has been created?
Hi Siva, Was it you or someone else who asked? I've had this same question. I haven't tested that. Actually the other question was whether we would be able to use more than the allocated PV. For example, if we define a PVC and get a PV of 100MB, can we use more than that? I am going to test these and will share the results. Thanks.
Hi Siva, I just had a try at this one. Interestingly, the nfs provisioner I used in this video (for bare metal) doesn't enforce a strict volume size. I tried by creating a 5MB PV and attaching it to a pod, and I was able to write more than 5MB; I created a file which was 100MB. So no strict size is enforced, which is a limitation. github.com/kubernetes-incubator/external-storage/issues/1142 If you use one of the cloud implementations like GCP Persistent Disk, AWS EBS or Azure Disk, then you will only get what you requested and won't be allowed to use more. Also, from kubernetes version 1.11 and above, you can resize a PV by updating your PVC. You don't have to delete and recreate the PV; it will be dynamically resized. However, the pod needs to be restarted to make use of the increased size. Shrinking volumes is not supported as yet. In the below link you will find some useful information. It also has the list of supported storage providers that have this resize feature. kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims Thanks.
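As a rough illustration of the resize-by-editing-the-PVC approach (the PVC name and size are placeholders, and it only works on a storage class that has allowVolumeExpansion: true):
$ kubectl patch pvc mypvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
$ kubectl get pvc mypvc        # capacity updates once the underlying volume has been resized
# then restart/recreate the pod so it sees the larger filesystem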
Hi, when trying to create the PVC (4-pvc-nfs.yaml), it's stuck at "Pending". When I describe the PVC, I see "Warning ProvisioningFailed 6s (x2 over 8s) persistentvolume-controller no provisionable volume plugin matched". I am running 1.18.5 on Ubuntu 18.04. Do you know of a way of fixing this?
If I want to have two NFS mounts, how should I proceed with your examples? I have RAID0 and RAID1 shares on my NFS server and I wanted to create a fast storage class and a normal storage class.
Hi Giovanni, thanks for watching this video. You can follow the same steps as shown in this video for each of your provisioners. You can't use one provisioner to provide both storage classes. So please follow the steps and set up one nfs provisioner first. Then use the same set of manifests (github.com/justmeandopensource/kubernetes/tree/master/yamls/nfs-provisioner) and change the names as follows.
In class.yaml, change line#4 and line#5 (change the provisioner name from example.com/nfs to something else like example.com/fastnfs).
In default-sc.yaml, remove the annotations (line#5 & line#6), then change the provisioner name on line#7 to example.com/fastnfs.
In deployment.yaml, update line#23 with the provisioner name and then update the nfs path accordingly.
Make sure to change the names of the resources and the app labels accordingly, as you are deploying another nfs-provisioner. Hope this makes sense. Thanks.
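To make that concrete, a minimal sketch of the second storage class and the matching env var (the class name, provisioner string and export path here are examples only):
# class-fast.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-fast
provisioner: example.com/fastnfs
# deployment-fast.yaml (env section of the second provisioner, excerpt)
        env:
          - name: PROVISIONER_NAME
            value: example.com/fastnfs
          - name: NFS_PATH
            value: /srv/nfs/fastdata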
Nice tutorial, but I'm facing a different error. When I create the PersistentVolumeClaim, it creates the PVC and the PV, but on my NFS server there's no additional folder! Can you please help? Looking at the logs of the nfs-provisioner pod, the creation is there, but when I tried to test with a busybox pod I got a mount failed error: stat volumes/long-hash does not exist. Which is right, because there's nothing in the NFS shared folder.
Hi Venkat, while creating a file inside the pod I'm getting permission denied. /mydata is owned by nobody; while creating the nfs export I ran chown nfsnobody since it's CentOS. Could you please suggest something? Thanks in advance.
Hello, this doesn't seem to work anymore. When I try to add a PVC I get:
Normal ExternalProvisioning 3s (x9 over 117s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
I'm running k8s 1.20 on a 3 node cluster. If I create a PV with:
nfs:
  server: SERVERIP
  path: NFS SHARE
and a PVC bound to that PV, it works. But the dynamic setup fails :/
When I get the logs of nfs-client-provisioner I get:
I1214 22:08:03.402156 1 controller.go:987] provision "default/pvc1" class "managed-nfs-storage": started
E1214 22:08:03.406446 1 controller.go:1004] provision "default/pvc1" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
I was just showing the possibility of using nfs server as a dynamic provisioning storage backend. You will have to analyze the issues/bottlenecks around the chosen solution.
Hi Venkat, I came across your channel while searching for PVCs using NFS. All your videos are awesome and very detailed. I was trying this out, but my nfs-client-provisioner pod creation failed due to: Error getting server version: Get 10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout. I followed your vagrant script to create the cluster. Any idea what could be the issue? Any pointer would be really helpful. Thanks in advance.
Hi Nevin, the nfs-client-provisioner pod usually fails if it has problems accessing the nfs server. Hope you have setup your nfs server. And did you verify that you can mount it from your worker nodes? Cheers.
@@justmeandopensource Venkat, thanks a lot for taking the time to read and reply promptly. Yes, my NFS server is up and running and I am able to mount the volumes. FYI, I was able to fix the issue. I used a Vagrantfile which is somewhat similar to your GitHub one. One mistake I had in the script was that my api server advertise address and pod network CIDR were in the same IP range (192.168.56.XXX and 192.168.0.0/24 respectively). I read in one of the Google results that the IP ranges should be different, else it may result in conflicts while using Calico. When I checked your script, I noticed your Vagrantfile has it right: --apiserver-advertise-address=172.42.42.100 --pod-network-cidr=192.168.0.0/16. Once that was fixed, my client provisioner pod started running and the PVC got bound. Thanks a lot once again. You publish lots of advanced topics which are not found anywhere else. Appreciate your effort. Keep up the great work.
Hi Venkat, as we know there are many volume plugins available. I want to ask: if we use gcePersistentDisk, we just need to create a storage class and PVCs are provisioned automatically when required, but with NFS we first need to create the NFS server, deploy the nfs provisioner pod, and then the storage class before PVCs can be satisfied. Please help me: am I right, and which one is better? 🙏
I followed this video and it didn't work for me. I have 4 nodes (1 master and 3 workers), my distro is CentOS 7, I am using the Calico network, my firewall is disabled and setenforce is 0. I got this message from the describe command: wrong fs type, bad option, bad superblock on 192.168....., missing codepage or helper program, or other error (e.g. nfs, cifs) you might need a /sbin/mount helper program.
Hi Venkat, firstly thanks for the amazing tutorial!! I have a problem and would like some insight. I have created a Windows share, and I can mount it on one of my cluster workers and write data (with sudo), so there is no connectivity issue. I want to use this NFS share, for which I have assigned all possible read+write access, including to Everyone, but every time I configure this the way you have done it I have issues creating the pvc and pv: I get an error stating could not create default-*****-*******-**** directory, permission denied. Do you have any ideas on this? Thanks! Cheers, Nithin
Hi Nithin, thanks for watching. I have only tried exporting NFS shares from a Linux machine, but that shouldn't stop you from using a Windows machine for NFS sharing. You might have to change the permissions of the directory you are sharing. I believe you have already done that, but just double check: give everyone read/write permissions on that shared directory. On Linux, I used chmod 777 on the exported directory.
Hi Shaik, As the persistent volumes are created automatically when you request them by creating a pvc, you will have to update the ReclaimPolicy once the pv is created.
$ kubectl get pvc
Look at your desired pvc and check the corresponding pv name. Then you can update the policy using the below command:
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
I just brought up the cluster and tested this, and it worked fine. Once you apply this patch to the pv, it won't get deleted when you delete the associated pvc. You will then have to delete this pv manually. Hope this makes sense. Thanks, Venkat
Hi Kasim, If you are running your K8s cluster in the cloud (GCE, AWS, Azure), there are built-in dynamic storage provisioners for each of them. I made this video because the series I am doing is on bare metal, where there is no built-in solution for dynamic provisioning. You can check the below link and scroll down to the section "Types of Persistent Volumes". kubernetes.io/docs/concepts/storage/persistent-volumes/ Hope it makes sense. Thanks, Venkat
Hi Ratnakar, thanks for watching this video. In this video I didn't talk about the default storage class, but later realized that I should have. I have since added another yaml file named default-sc.yaml in the same directory in the GitHub repo. It has the annotation to make it the default storage class. So please use default-sc.yaml instead of sc.yaml; then you don't have to mention the storageClassName in your pvc definition. Thanks.
How can you set up the NFS server from your kubernetes master, and where can I find the admin.config file that would set up NFS as master? Please guide me, thanks.
@@justmeandopensource In this video, if I create a persistent volume (say the persistent volume name is pv1) before creating any pvc, will the nfs-client-provisioner pod dynamically create a persistent volume claim bound to pv1?
@@nguyentruong-po4mx So your question is if you manually create a persistent volume named pv1 and then create a pvc, will the provisioner use pv1 or create a new persistent volume? Is this right? If you have a persistent volume and if that satisfies the persistent volume claim you created (like the storage size), then the existing pv will be used.
Hi, I've read on some sites and got the info below, FYI! I think resizing of the disk is possible (go here: kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses)
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "192.168.10.100:8080"
  restuser: ""
  secretNamespace: ""
  secretName: ""
allowVolumeExpansion: true
I followed your instructions and created the PVC successfully, but it doesn't get bound to a PV and I'm not sure what happened. I suspected my NFS server had gone wrong, but I can mount the NFS server directory from my client successfully. Hoping for some help from you. Very nice lecture.
Hello, hope you are doing well. I have a question regarding storage classes and PVCs. After watching this video I thought I would experiment on AWS with EBS as the volume, but I couldn't. I created the policy as given in the AWS documentation, then created the storage class and PVC, but it was not creating the PV on its own. I read somewhere (or got confused with something else) that the nodes the pods are running on must be in the same cloud to use EBS. Any suggestions? Thanks.
No harm in deploying it as a daemonset. The Helm chart is configured as a deployment with a configurable replica count. github.com/helm/charts/tree/master/stable/nfs-client-provisioner
Sir, when I create the nfs client provisioner pod it shows a CrashLoopBackOff error and the logs show: Error getting server version: Get 10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout. Please give me a suggestion.
There is a difference though. Are you using one of the cloud-managed Kubernetes services like GKE, EKS or AKS? Then there is no need for dynamic NFS, although you will still have to create a storage class. But if you are not using a managed service, and instead launch instances in the cloud and install kubernetes yourself, then you will need this setup. Thanks.
Hi brother... Your videos are so good and they clear up so many doubts. Could you please make some videos on common troubleshooting problems in Kubernetes? It would be so helpful for people like me trying to get a job in K8s.
Hi Prabhu, thanks for your interest in this channel. I compile a list of topics based on requests from viewers and this has been requested by a few others as well. It's in my list and I will look into making some videos, time permitting. Cheers.
Hi Nehar, I haven't used a Mac in years, but the process of exporting a directory as an nfs share should be simple. www.peachpit.com/articles/article.aspx?p=1412022&seqNum=11 Once you have the nfs share exported, you can proceed with dynamic nfs-client provisioning as shown in this video. Thanks
Hi Sir, I have one question regarding AWS EFS. I have a docker Magento image and it contains all the installation files and folders inside the /var/www/html directory, but when I mount the EFS PV claim at /var/www/html the data inside html is not showing; it becomes empty. I want the data which is already inside html in my docker image to remain after mounting EFS, otherwise I won't be able to do the installation.
Hi Sarfaraz, thanks for watching this video. So you have some data in /var/www/html in your docker image. Okay. The basic Unix/Linux behaviour is that whenever you mount something onto a directory, the existing data in that directory is hidden while the mount is in place. This would make sense. You can mount your AWS EFS pv at a different location inside the container. There is no way to retain access to the data after mounting over the same directory. Thanks.
@@justmeandopensource Can I mount EFS directly on the worker node via fstab and then mount /var/www/html into the container as a hostPath volume? Will that retain the data?
@@justmeandopensource I am working on a product. I want to have a base image ready for a Magento store. Whenever a user signs up with their name, a new Magento store is created at mytest.example.com. That's why I want the base image ready, so that we only have to make changes in the database. I am using RDS for the database and EFS for persistent storage.
@@sarfarazshaikh I understand. But when you mount the persistent volume on /var/www/html, the data already there will not be accessible. So you will have to mount EFS under a different directory like /var/www/data and change the logic of your web application to use this directory as the data directory, or something like that. Thanks.
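A minimal sketch of mounting the volume at a separate data directory instead (the image, claim name and paths here are placeholders, not your actual setup):
    containers:
      - name: magento
        image: my-magento-image          # placeholder image
        volumeMounts:
          - name: efs-data
            mountPath: /var/www/data     # deliberately not /var/www/html
    volumes:
      - name: efs-data
        persistentVolumeClaim:
          claimName: efs-claim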
Hi Venkat, I am trying this video with a Mac as the host, running a Vagrant k8s cluster.
Host machine: Mac. NFS server is running.
/srv/nfs/kubedata - permissions as below:
drwxr-xr-x 3 nobody admin 102 1 Sep 11:57 /srv/nfs/kubedata
On the worker:
[root@kworker2 ~]# showmount -e 192.168.68.XXX
Export list for 192.168.68.XXX:
/srv/nfs/kubedata (everyone)
[root@kworker2 ~]# mount -t nfs 192.168.68.XXX:/srv/nfs/kubedata /mnt
mount.nfs: access denied by server while mounting 192.168.68.XXX:/srv/nfs/kubedata
Any clue what could be the issue? Thanks in advance.
What options do you have in your nfs exports configuration? On my Linux server, I had to pass the "insecure" option as well. Could you try it with the insecure option? Thanks.
If I am following along at home, should I change the provisioner in class.yaml, to NFS maybe? Also, in deployment.yaml, should I use the path on the NFS server, or the path where the NFS share is mounted on the nodes? E.g. I have /var/nfsshare on my NFS server, and /mnt/nfs/var/nfsshare on my nodes. Which one should I use?
Hi Yuven, Firstly, thanks for watching this video.
Query 1: Should I change the provisioner in class.yaml?
In class.yaml, line 5, I have used "example.com/nfs" as the provisioner. In deployment.yaml, lines 22 and 23, I have specified the provisioner name environment variable. You have to make sure the provisioner name you give in deployment.yaml matches the one in class.yaml. It's just a name; you can use any name, but it needs to match in these two files.
Query 2: Which path should I use in deployment.yaml?
You should use whatever you exported in your nfs server's /etc/exports file. In your case, you should use /var/nfsshare.
Hope this makes sense. If not, let me know. Thanks, Venkat
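In other words, the only requirement is that these two strings match; roughly (using the names from this thread, with the server IP as a placeholder):
# class.yaml
provisioner: example.com/nfs
# deployment.yaml (env section of the nfs-client-provisioner container)
  - name: PROVISIONER_NAME
    value: example.com/nfs
  - name: NFS_SERVER
    value: <your-nfs-server-ip>
  - name: NFS_PATH
    value: /var/nfsshare        # whatever is exported in /etc/exports on the NFS server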
@@justmeandopensource Thank you very much! You have been a great help to me. Good to know that the provisioner name is only a name. I figured out the path a bit after asking the question... by rewatching parts of your video ;) Keep up the amazing work! The world needs more people like you :)
@@justmeandopensource Eeh, this is getting embarrassing :'D I now get an error:
MountVolume.SetUp failed for volume "pvc-427e53bf-70bb-11e9-8990-525400a513ae" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae --scope -- mount -t nfs 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae
Output: Running scope as unit: run-r68af7a0af3c3404eb50d1e9baf90632d.scope
mount.nfs: mounting 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae failed, reason given by server: No such file or directory
When I deploy busybox, I notice that the pvc gets created, but it does not show up in the shared folder, even though I have checked and the worker nodes have access to the share (I created a sample file, and it works just fine). Any idea what is wrong? I am closing in on my deadlines and I am quite stressed.
In deployment.yaml I use:
spec:
  containers:
    volumeMounts:
      mountPath: /persistentvolumes
    env:
      - name: PROVISIONER_NAME
        value: example.com/nfs
      - name: NFS_SERVER
        value: 11.0.0.75
      - name: NFS_PATH
        value: /var/nfsshare
  volumes:
    - name: nfs-client-root
      nfs:
        server: 11.0.0.75
        path: /var/nfsshare
I am guessing there is something wrong here? The path on my NFS server is /var/nfsshare and on my node it is /mnt/nfs/var/nfsshare. Should I make them the same?
When I try to change the value in the pvc from 500Mi to 1Gi, it shows this: persistentvolumeclaims "pvc1" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize. How could I increase the value?
Hi Mani, what I showed in this video is dynamic provisioning, not dynamic resizing. As the error states, it is forbidden because the storage class we are using here, which is NFS based, doesn't support dynamic resizing. In order to use the dynamic resizing feature, you will have to use one of the supported storage classes (e.g. AWS EBS, Google Persistent Disk, Azure Disk or other cloud offerings). Most of my videos are around bare metal and not cloud. Thanks.
@@manikandans8808 Check the below link. Might be useful. kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims Thanks
Hi Praveen, yes you can. All you need is a reachable elasticsearch endpoint from your k8s cluster. You can use fluentd or any log shipper to send logs to Amazon elasticsearch service. I haven't tried it. But when I try it, I will make a video.
Hi Praveen, thanks for watching. If you have a cluster in AWS (like their managed EKS), it will be easier to use EBS or EFS as persistent storage. If you want to use it for your locally running k8s cluster, it's still possible, but I haven't tried. When I get some time I will give it a try. Cheers.
Hi, I am facing an issue when creating the NFS dynamic provisioner. I am using all your files (rbac.yaml, class.yaml and deployment.yaml). Applying the rbac and class files works fine, and the deployment says created, but when I check with the "kubectl get all -o wide" command it shows the nfs container stuck in creating state and it never starts: pod/nfs-client-provisioner-7b94998b9-lpn6w 0/1 ContainerCreating 0 29s. Please help with this, I need to add it to my production setup.
Hi Atul, thanks for watching. Did you verify that your nfs server is running and that you can manually mount it on the worker nodes? If you can't mount it manually on the worker nodes, the nfs-provisioner pods will not be ready. First thing is to check as shown in this video that you can mount the nfs share from your worker nodes. Then make sure the deployment.yaml has the right ip address. Also what version of Kubernetes are you running? Thanks.
@@justmeandopensource I am using version 1.16, and we are able to mount the nfs share folder from all my workers, but even when trying with helm and following every other document I still get the same error: the container hangs in the same state. We have two masters and two workers and we are just testing now. Please help me if possible, if you have any proper document, or if you are willing to look at it remotely.
@@atulbarge7445 I don't think I can help you remotely, sorry about that. Look at the output of "kubectl describe deploy <deployment-name>" and check the events section at the bottom. It might give you a clue.
Thanks for this video. I followed the same steps but my pod keeps getting restarted: Back-off restarting failed container... Please help me resolve this.
Hi Siva, thanks for watching. I have been successfully using this process for a very long time on a daily basis. Can you first make sure that you can mount the nfs volume from the worker node?
Hi Siva, I don't think it's a problem with your dynamic PV provisioning. If it was a PV provisioning problem, then your pod would be in pending state and not in a failed backoff state. Look at the events immediately after you deploy the resource: $ kubectl get events
@@justmeandopensource Hi bro, thanks. I have recreated the NFS setup; there was some network issue. Now it's working fine. Thank you so much, your videos are helping me a lot.
Hi Sir, I am running the nfs server on an AWS EC2 machine and followed your steps. When I create the pvc its status stays pending. What should I do? What am I missing? Please suggest.
@@Mr.RTaTaM Thanks for watching. As shown in this video, did you check that you can manually mount the nfs share from your worker nodes? If not, please do that first. Also see if you have to update your security groups to allow this traffic.
My cluster is running on my local laptop and I created the nfs server on AWS. I'm able to mount it from my worker nodes, but when I create a pvc it stays in pending state, saying "waiting for a volume to be created, either by external provisioner example.com/nfs or manually created by system administrator". Anything I'm missing, sir?
@@Mr.RTaTaM So if you can mount it from your worker nodes, then I don't think there is a problem with that part of the setup. If you used my manifests, you would have got a storage class named managed-nfs-storage, and you will have to use that same storage class in your PVC. You can also check the events, for example with kubectl get events; this will show you why the pvc is pending.
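For reference, the claim should look roughly like this (the name and size are placeholders; the storage class name is the one from my manifests):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi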
Hi Venkat, I am trying to expand the pvc online, but it is not working... any idea? I was able to edit the PV for online expansion and it got expanded from 5GB to 50GB, but the PVC is not responding at all. Thank you!
root@ubuntu:/K8/nfs-storage-provision# k get sc
NAME                 PROVISIONER  RECLAIMPOLICY  VOLUMEBINDINGMODE  ALLOWVOLUMEEXPANSION  AGE
managed-nfs-storage  dynamic/nfs  Delete         Immediate          true                  7m46s
root@ubuntu:/K8/nfs-storage-provision# k get pv
NAME                                      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                STORAGECLASS         REASON  AGE
pvc-12a822b0-ce75-47fe-8255-ce24ff9b30b5  50Gi      RWX           Delete          Bound   default/pvc-nfs-pv2  managed-nfs-storage          4m43s
root@ubuntu:/K8/nfs-storage-provision# k get pvc
NAME         STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS         AGE
pvc-nfs-pv2  Bound   pvc-12a822b0-ce75-47fe-8255-ce24ff9b30b5  5Gi       RWX           managed-nfs-storage  5m1s
Hi Benharath, Thanks for watching this video. Which pod is stuck at that stage? Is it one of the pods during the NFS provisioner deployment or when you are testing a pod with persistent volume after you have created the Nfs provisioners? Thanks, Venkat
You could check the events for that deployment, which would show you what stage it is in and any errors. Run the below command and look towards the bottom to see if there are any clues:
$ kubectl describe deployment <deployment-name>
Thanks
@@justmeandopensource I got this:
Warning FailedCreatePodSandBox 25s kubelet, nfs-client Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "419e367daaae5f57f1744a0b86e09c28e94544275bcdaf64efe0b8d2af079f52" network for pod "nfs-client-provisioner-c84f69c7c-mvjpx": NetworkPlugin cni failed to set up pod "nfs-client-provisioner-c84f69c7c-mvjpx_default" network: unable to allocate IP address: Post 127.0.0.1:6784/ip/419e367daaae5f57f1744a0b86e09c28e94544275bcdaf64efe0b8d2af079f52: dial tcp 127.0.0.1:6784: connect: connection refused
Hi Benharath, Looking at the errors you poseted, it seems there is some network problem. Forget about this dynamic nfs provisioning setup. Were you able to set up the cluster successfully? Could you create a simple pod like below? $ kubectl run myshell -it --rm --image busybox -- sh It will download busybox container and start a pod and give you a prompt. Check if you can ping internet (eg: google.com) or $ kubectl run nginx --image nginx I am trying to find out whether you have a general cluster networking issue or something that is specific to dynamic nfs provisioning deployment. Thanks, Venkat
Hi Venkat, My "nfs-client-provisioner" is up and running and PVC is in "PENDING" state with the following message "waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator". Storage class is also visible "managed-nfs-storage (default)". Please advise. Thank You very much.
Hi Ishan, I can see your storage class managed-nfs-storage is the default storage class, which is fine. I believe there is some mismatch between what the storage class can offer and what you have requested in your claim. For example, you might have configured the storage class to offer only RWO access mode while asking for RWX (ReadWriteMany) in your claim, something like that. You can also check the logs of the nfs-client-provisioner pod, which will give you a more meaningful error if there was one.
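A couple of commands that usually surface the real error (the PVC name is a placeholder):
$ kubectl logs deploy/nfs-client-provisioner
$ kubectl describe pvc <pvc-name>        # the events at the bottom show why it is still pending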
Thank you! It's the first video where someone simply shows how to do dynamic NFS provisioning.
Thanks for watching.
Clean and clear.... I was so confused in this topic before i watched this video.
Thank you mate.
Hi, many thanks for watching and subscribing to my channel. Cheers.
Another great session. Thank you for sharing your knowledge and making it simple and easy for us all to learn. You're doing an excellent job! Much appreciated.
Hi Alexander, thanks for watching this video and taking time to comment/appreciate.
Best and most complete Kubernetes video series. Way to go!
Hi Lutt, thanks for watching.
This is wonderful content which needs to be appreciated and also be monetized for your effort. I am planning to support you once I get a job.
what a great session, you're perfect with mentioning all the details and use cases
Appreciated, thanks
Hi Mohammad, Thanks for watching.
I have been going through your channel in the past few days. Awesome material!
Hi Giovanni, thanks for watching this video and taking time to comment. Cheers.
This is too good, thanks for explaining the working of the dynamic provisioning of NFS with block diagrams.
Hi Rakesh, thanks for watching.
Great session. Completed hands-on using kubernetes-dind-cluster. Would be very helpful in creating various deployments using helm in my homelab k8s cluster without relying on cloud storage.
@@waterkingdom9839 Thanks for your comment. So far I have only been playing with k8s on bare-metal servers. I am yet to explore it on GCP and AWS. Soon you will see videos around these. Thanks.
I have a question regarding storage. If I'm going to use NFS to store persistent volumes, how much disk space do I need on the master and worker nodes? What data do the k8s nodes store? I would guess docker images and logs, but I bet there is more. Kudos for your efforts on sharing knowledge with the community!
Hi Giovanni,
It all depends on your needs. The NFS server can be external, running on a separate machine, and you manage disks with the desired capacity on that NFS server. Worker nodes and master nodes, as you guessed, only need sufficient storage space for storing docker images. If you are using it in production and have lots of pods running, then you might need more storage. You would normally have monitoring solutions like Nagios or Check_mk implemented and will be alerted when disk space runs low. More applications in your cluster means more containers, which means more space needed.
One other thing to bear in mind is if you use hostpath for binding a node volume inside the container, then you have to think about how much storage you need.
Thanks.
Brother, all your sessions are really good.. god bless you :)
Glad to hear that, and thanks for watching my videos.
Thanks for your hard work. I'd appreciate it if you could share the name of the terminal you use.
Thank you so much for your videos. I believe they are among the most valuable, even compared to content that is not available for free.
Hi Mazen, many thanks for watching. Cheers.
Hi Venkat, Firstly, thanks for taking the time to create videos for us. In simple words: awesome!! My cluster is on VMware PKS; do I need an nfs pod in the cluster for auto provisioning of PVC claims, or can I directly create a PVC? Also, could you tell me whether I need NFS in my cluster?
Hi Arun, thanks for watching this video. Given it's a cloud provider platform, you can make use of vSphere volumes. You (or your k8s cluster admin) will have to create a storage class; after that you can create a pvc specifying that storage class and a pv will get created for you.
th-cam.com/video/40fl9Hmi4BE/w-d-xo.html
You can also use NFS persistent volume provisioning if you want.
Thanks.
Very interesting... One comment/question regarding your NFS server, though. As a storage admin/engineer, I'd need a really good reason to export something to the world, especially with the no root squash option. Can you comment on why you've chosen to demo this in this manner as opposed to creating an export following the principle of least privilege?
Hi Jeff, thanks for watching. I am not an expert in storage administration, and I agree with you on the least-privilege concept. This video is just to demonstrate the idea of using NFS as persistent volumes in a kubernetes cluster; I didn't want to concentrate on NFS itself in this video. Cheers.
Wonderful Thanks again ...! BTW do you have any reverse proxy (nginx/traefik) video for docker ?
Hi, thanks for watching. I have done nginx and traefik ingress in kubernetes, but not as a reverse proxy in Docker.
th-cam.com/video/2VUQ4WjLxDg/w-d-xo.html
th-cam.com/video/A_PjjCM1eLA/w-d-xo.html
th-cam.com/video/KnOZwxvxfnA/w-d-xo.html
Hi Venkat, Thank you. Now I have a clear picture. I have created a K8s cluster on VMware infrastructure; how do I proceed with the creation of Persistent Volumes? The storage is available in the form of iSCSI datastores & NFS datastores! Thank you
No worries. I haven't used iSCSI datastores, but the documentation below looks like the one for you.
github.com/kubernetes-retired/external-storage/tree/master/iscsi/targetd
Cheers.
@@justmeandopensource thanks Venkat
@@vamseenath1 No worries. You are welcome.
Great video, thanks for sharing this demo. I just need to understand how the Storage Class identifies the provisioner. The provisioner name is an environment variable in the nfs provisioner pod (which is a namespaced resource); how is it accessible in the Storage Class (which is a cluster-wide resource)?
A worthy explanation, as in all your videos.
Hi Anand, thanks for watching.
Firstly, thank you very much for making such wonderful videos in simple, easy-to-understand language. I have a doubt about the RWX concept. When we say multiple reads and writes, are we referring to writing into the container when logged in from multiple nodes by multiple users?
Is that what it refers to? Please clarify. Thank you.
Hi Mohammad, thanks for watching. RWX (ReadWriteMany) mode means that the persistent volume can be mounted by more than one container, even across nodes, and all of those containers can write to that volume.
Great video, good to follow instructions. Thanks!
Thanks for watching.
Great video! I was wondering if the nfs server could be deployed in it's own container on my namespace?
Hi Ryan, thanks for watching. You can always do that, but the volumes (underlying storage) for that NFS server pod should still come from somewhere outside the cluster, through a central storage solution.
Hi, Venkat. I appreciate so much your effort and dedication in making these videos.
A little question: suppose I have an existing nfs export with some static files, pdf files for example, that I need to mount in every replica of my app. How does that work with this provisioner if every pod of the deployment claims its own volume? I need the same data in every pod, you see?
Sorry if the question is not so consistent.
Thank you from Argentina!
Could you please make a video on dynamically provisioning GlusterFS persistent volumes in Kubernetes?
Hi Ratnakar, Thanks for watching this video. I haven't used GlusterFS before, but just had a quick glance at their docs and looks interesting. I am always open to learn new stuff. I will read through the docs and once I gain some understanding I will definitely make a video of it. Thanks for suggesting the topic though. Cheers.
@Pushp Vashisht Hi, thanks for your interest. Yes, I originally did the GlusterFS series to give users some basic knowledge about GlusterFS before using it in a Kubernetes cluster. Since then I have been struggling to find time. It's on my list and I will certainly do some videos on it soon. Cheers.
For newer k8s versions, please use the kubernetes-sigs/nfs-subdir-external-provisioner repo. Do not edit kube-apiserver.yaml
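For reference, the Helm-based install from that repo looks roughly like this (the server IP and export path are placeholders for your own NFS setup):
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner \
    nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=<nfs-server-ip> \
    --set nfs.path=/srv/nfs/kubedata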
Hi Peter, thanks for your comment. I already posted a follow up video using nfs-subdir-external-provisioner.
th-cam.com/video/DF3v2P8ENEg/w-d-xo.html
Hello Venkat, I like your approach to teaching. These are all interesting and great videos for quick learning. May I request you to please share an example of how to set up dynamic provisioning on GCP using Persistent Disk. Awaiting your inputs... thanks
Hi, Many thanks for watching this video and taking time to give feedback. Much appreciated. I will add this request to my to do list. I have videos waiting to be released in the coming weeks. Thanks for requesting this new topic.
Cheers.
@@justmeandopensource Thanks for your prompt response. Much appreciated. There are two videos which depend on dynamic provisioning, but because I am not able to set up NFS based storage, I am not able to follow them. Could you just send me the files with instructions to follow, as you might have them handy? ...thanks
Hi Water Kingdom, unfortunately I don't have them as I am yet to try it on GKE. All my videos are based on bare metal; I have only done a couple of videos on Google Cloud.
Great tutorial! One question please: can I increase the replicas to ensure high availability?
Hi Manfred, thanks for watching. I haven't completely looked at the code to say if that is required/supported. The way the nfs client provisioner works is by mounting the nfs share on the kubernetes worker node where it is running and distributing it to the pods. I will have to test it before I can comment.
@@justmeandopensource Got it working with an Isilon NFS appliance. Really easy to install and use, and it runs well in our production environment... But I use a helm chart for installing it - it is much easier (helm install nfs-provisioner --set nfs.server=[NFS-SERVER] --set nfs.path=[EXPORT_FS] stable/nfs-client-provisioner --namespace=nfs-provisioner --create-namespace)
@ Perfect. Even though I personally prefer using Helm, whenever I do a video I prefer the manual way so that viewers get an understanding of what they are deploying. Thanks for sharing this detail. Cheers.
What's the bash prompt that you are using?
It looks really cool.
Hi Kushagra, thanks for watching. This is Manjaro Gnome edition. I have done a video on my terminal setup which you can watch in the below link. Cheers.
th-cam.com/video/soAwUq2cQHQ/w-d-xo.html
It seems SC and PV are not namespaced, what are the best practices to provision PVs for different namespaces? (production/staging/test or multi-tenancy)
Hi Zach, thanks for watching. Yes, storage classes and persistent volumes are cluster-scoped and not confined to any particular namespace. It is the persistent volume claim that is namespaced; the PV it binds to remains a cluster-wide resource, so the per-environment separation happens at the PVC level in each namespace.
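A quick way to confirm this from kubectl:
$ kubectl api-resources --namespaced=false | grep -E 'persistentvolumes|storageclasses'
$ kubectl api-resources --namespaced=true | grep persistentvolumeclaims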
Hi sir..
All your videos are simple and great.
I have a question here.
Why do we need RBAC for dynamic NFS provisioning here?
Hi Bala, thanks for watching. I don't understand you question. Could you put it in a different way please? Thanks
@@justmeandopensource
I had created the storage class, the pvc and the nfs-provisioner, but had not created rbac.yaml; hence the status of the pvc was pending. Once I applied rbac.yaml, the pvc became available.
My question is: why do we need the ClusterRole, Role and ServiceAccount for the nfs provisioner?
@@balaspidy If you look at the resources we deploy for this nfs-provisioner, we are using a service account that needs certain privileges to list/create resources across all namespaces, hence we need cluster-wide privileges.
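For context, the ClusterRole in that rbac.yaml grants roughly the following rules, which a namespaced Role alone cannot cover because PVs and storage classes are cluster-scoped and PVCs can live in any namespace:
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]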
@@justmeandopensource Thanks so much for your quick response... will there be a video on cluster roles and service accounts?
@@balaspidy I will try. There is also this RBAC related video I did a while ago. Might be useful as I covered some roles.
th-cam.com/video/U67OwM-e9rQ/w-d-xo.html
Hi Venkat, I followed the video and have set up EFS dynamic provisioning. I have created different pods/containers with the same image, but I want to handle a different volume path for each pod. Please suggest how I can proceed.
Hi Venkat, thanks for creating such an informative video. I have a question: how do I mount a host directory dynamically into my nfs volume?
This video will need to be revised for kubernetes versions 1.20+ because the volume mechanism has been reworked in the later releases. If you are using the LXD provisioning script Venkat was so kind to provide to set up your k8s, you need to change the script so 1.20.0-00 becomes 1.18.15-00. Venkat, if you disagree let me know.
Thanks, you are right. I will redo this video soon for the latest k8s version. It's hard to keep updating the videos as the ecosystem evolves at great speed. Cheers.
Venkat, I wonder if it is best practice to use a PV on NFS for the purpose of deploying an SQL database? Our Kubernetes will be on premises instead of in the cloud.
Hi, thanks for watching. You definitely need persistent volumes for your database, no doubt about that. The type of storage solution you could use on-prem depends on various factors; there are quite a lot you can use. I have explored just NFS, but in production you can use Ceph/Rook, GlusterFS or other clustered storage solutions.
Hello,
Thanks for such a helpful video.
I have a question around Helm charts and the RWX NFS provisioner PVC. The pvc is created by a subchart for the parent chart's deployment to use. But when performing "helm uninstall chart", the pod and the pvc get stuck in a Terminating state.
Is there any way to specify configuration so that the pod and the PVC delete smoothly?
Hi Venkat, I'm trying to run the nfs-client-provisioner in its own namespace. I got everything working in the default namespace after following your tutorial, then:
deleted the resources in rbac.yaml, class.yaml, deployment.yaml
created a new namespace called storage,
created a new context with cluster=kubernetes, user=kubernetes-admin, namespace=storage,
used the new context
created the resources in the yamls again
but now PVCs are pending forever.
Am I missing something that needs to be done to get this running in another namespace?
EDIT: Ah I figured it out, "namespace: default" is written in the clusterrolebinding and rolebinding resources. Just changed those and it worked :)
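For anyone else hitting this, the bit to change looks roughly like the below (assuming the new namespace is called storage), in both the ClusterRoleBinding and the RoleBinding in rbac.yaml:
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: storage   # was "default" in the original manifest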
Hi Mr. Venkat, can we configure the nfs client provisioner in a different namespace?
Yes, you can deploy it in any namespace.
Cool explanation. What happens when the nfs provisioner pod is destroyed (and recreated)? Is my data still there?
Hi Tumenzul, thanks for watching. The actual nfs server which holds your data is external to your k8s cluster. Even if you delete your nfs-provisioner pod and your k8s cluster, the data will still be there on your nfs server. But depending on how you created your persistent volume, the volume might be deleted when the pvc or the pod is terminated. This is the expected behaviour.
Nice video again Venkat. Thanks for that. I have a question. My use case is to have one shared NFS so every new pod will claim the same persistent volume. Can you advise how to achieve something like this? Your tutorial works great for the situation where a new pod comes up and its PVC creates a PV. But here I would need to always attach each pod to the same volume.
Hi Peter, There is no guarantee that the same persistent volume will get mounted by the pod each time you delete the pod and recreate it. Persistent volumes are released once you delete the associated pod (with its pvc), but are not available to the next pod. The persistent volume will have to be manually deleted. There are certain storage classes, if you are using a cloud provider, where the persistent volumes get deleted automatically if the reclaim policy is set accordingly.
To attach the pv to the same pod, I don't think there is any other option than to use a statefulset with one replica. So you will get one pod and one pv. Every time you delete the pod in this statefulset, it will attach to the same pv.
Thanks,
Venkat
@@justmeandopensource Hi Venkat, it seems I found a solution for my case. I basically created one PV and one PVC (outside the statefulset). Then in the statefulset I removed the part for dynamic provisioning (the volumeClaimTemplates section) and instead added "volumes" where I specified the PVC I created before. This allows me to create multiple replicas using the same PVC. I tested this solution and it gives me exactly what I need, so I'm having 2 replicas which are accessing the same NFS mount. Thanks Peter
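Roughly, what that ends up looking like is a standalone PVC plus a volumes entry in the workload that references it (names here are just placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
---
# inside the statefulset/deployment pod template
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-nfs-pvc   # every replica mounts the same claim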
@@p55pp55p Cool.
Hope you also set the AccessModes to RWX (ReadWriteMany) in the pv and in the pvc so that the same volume can be mounted on multiple worker nodes with read/write permissions. Depends on your use case, but something to bear in mind.
Thanks,
Venkat
@@justmeandopensource Sure, access mode is RWX. I think it wouldn't allow me to do this if I kept it as RWO. Peter
@@p55pp55p Yeah. It might allow you to mount but you won't be able to write to it. Not sure, just a guess.
Best Kubernetes video series!
Hi Shantanu, many thanks for your interest in this channel. Glad you like it. Cheers.
Thanks for this video, very helpful. I have written an application in Go that uses MongoDB as its database. When the frontend application (http server) starts, it connects to the MongoDB server and then listens for CRUD requests from any http client. I have created a docker image of the frontend application and pushed it to Docker Hub. I would like to ask if I would be able to deploy my application on kubernetes with this setup: NFS persistent volumes and a MongoDB replica set deployed on VMs running on my localhost machine. I have already set up my kubernetes cluster and, following this video, created the persistent volumes. Is this possible?
Hi Afriyie, thanks for watching. So you have your MongoDB replicaset running on virtual machines in your workstation. And you want to run your frontend in your k8s cluster and have it talk to mongodb replicaset. Yes, thats definitely possible. How is your k8s cluster provisioned? Can your k8s worker node ping the ip address of the virtual machine where mongodb is running? if yes, then you can just point your frontend to connect to the ip address of the mongodb vms. Or you might have to do some portforwarding between your workstation and the mongodb vm. Then use your workstation ip to access the mongodb. I might not have explained it very well, but its doable. Cheers.
Thanks for the great video. I have the following questions: Should we have more than one replica of the nfs client provisioner? Let's say I have Prometheus & Grafana up and running and I need the data to be saved no matter what. We have access mode ReadWriteMany. How does this work with pods with more than 1 replica which are storing data on the pv?
HI Sir.
Thankyou sooooo much for this...
All created well (SA, role, rolebinding, clusterrole, clusterrolebinding, storageclass, provisioner pod), but
when I try to create a PVC, it stays in pending state even after waiting a long time. What could be the reason?
===
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 9s (x20 over 4m42s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
====
If worker1 crashes, will the provisioner pod move to another node?
Hi Nagendra, thanks for watching. Yes, the nfs-provisioner pod is again a normal k8s resource and if it gets crashed or the node where it runs crashes, it will get started on another node.
Regarding your pvc in pending state, have you verified manually that you can mount the nfs share on your worker nodes? And did you specify the storage class name as mentioned in this video?
@@justmeandopensource Yes Sir, I have verified manually that the nfs share can be mounted. That's good.
I will reconfigure everything... if there are any issues I'll come back to you on this. Thanks for the support.
@@nagendrareddybandi1710 You are welcome.
Really good info. I wondered how a replica set or deployment with, say, 3 pods would make a pvc that is unique to each. Is there a hostname option that could be used in the yaml to create the deployment?
Hi Jon, It all depends on how we design the architecture. We first have to understand the application we are deploying and then plan the resources. It can be a statefulset where pvc gets bound to the same pv every time.
Try statefulsets instead of deployments. They would create different PVCs for every pod.
What I didn't get from this video is whether one can get data persistence from these dynamically provisioned "persistent" volume claims. I noticed that after you deleted the claims, the volumes also disappeared - I guess I'm struggling to understand where the "persistence" comes from since the data created in the "/mydata" mount point is now gone. Did I miss something?
Hi Venkat, thanks for your video. I have a doubt about PVs. If I have created a PV via a storage class (PVC -> storageclass -> PV), is it possible to increase the PV size after the creation of the PV?
Hi Siva, was it you or someone else who asked? I got this same question recently. I haven't tested that. Actually the other question was whether we would be able to use more than the allocated pv size. For example, if we defined a pvc and got a pv for 100MB, can we use more than that? I am going to test these and will share the results.
Thanks.
Hi Siva, I just had a try on this one. Interestingly, the nfs provisioner I used in this video (for bare metal) doesn't enforce a strict volume size. I tried creating a 5MB pv and attaching it to a pod, and I was able to write more than 5MB. I created a file which was 100MB. So no strict size is enforced, which is a limitation.
github.com/kubernetes-incubator/external-storage/issues/1142
If you use one of the cloud implementation like GCP Persistent Disk or AWS EBS or Azure Disk, then you will only get what you requested and won't be allowed to use more. Although from kubernetes version 1.11 and above, you can resize a pv by updating your pvc. You don't have to delete and recreate the pv. It will be dynamically resized. However, the pod needs to be restarted to make use of the increased size. Shrinking volumes is not supported as yet.
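As a quick example, on one of those storage classes that supports expansion, resizing is just a matter of editing the claim (the pvc name here is hypothetical):
$ kubectl patch pvc mypvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
$ kubectl get pvc mypvc   # capacity updates once the resize has completed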
In the below link you will find some useful information. It also has list of supported storage providers that has this resize feature.
kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
Thanks.
Hi, when trying to create the pvc (4-pvc-nfs.yaml), it's stuck at "Pending". When I describe the pvc, I see
"Warning ProvisioningFailed 6s (x2 over 8s) persistentvolume-controller no provisionable volume plugin matched". I am running 1.18.5 on ubuntu 18.04. Do you know of a way of fixing this?
If I want to have two NFS mounts, how should I proceed with your examples? I have RAID0 and RAID1 shares on my NFS server and I want to create a fast storageclass and a normal storageclass.
Hi Giovanni, thanks for watching this video.
You can follow the same steps as shown in this video for each of your provisioners. You can't use one provisioner to provide both storage classes.
So please follow the steps and set up one nfs provisioner first.
Then use the same set of manifests (github.com/justmeandopensource/kubernetes/tree/master/yamls/nfs-provisioner) and change the name as follows.
In class.yaml, change line#4 and line#5 (you should change the provisioner name from example.com/nfs to something else like example.com/fastnfs)
In default-sc.yaml, remove annotations (line#5 & line#6), then change provisioner name on line#7 to example.com/fastnfs
In deployment.yaml, update line#23 with provisioner name and then update the nfs path accordingly.
Make sure to change the name of the resources and app labels accordingly as you are deploying another nfs-provisioner.
Hope this makes sense.
Thanks.
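As a rough sketch, the class.yaml for the second (fast) provisioner would end up looking something like this (names are just examples):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-fast
provisioner: example.com/fastnfs   # must match PROVISIONER_NAME in the second deployment.yaml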
Nice tutorial, but I'm facing a different error. When I create the PersistentVolumeClaim, it creates the PVC and the PV, but on my NFS server there's no additional folder! Can you please help? Looking at the logs of the nfs-provisioner pod, I can see the volume being created, and when I test with a busybox pod, I get a mount failure: stat volumes/<long-hash> does not exist. Which makes sense, because the NFS shared folder is empty.
Hi Venkat, while creating a file inside the pod I'm getting permission denied. The /mydata directory is owned by nobody. While setting up NFS I ran chown nfsnobody since it's CentOS. Could you please suggest a fix? Thanks in advance.
Awesome tutorial,keep up the good work.
Hi himansh joshi, thanks for watching this video.
Hello,
This doesn't seem to work anymore. When I try to add a pvc I get: Normal ExternalProvisioning 3s (x9 over 117s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
I'm running k8s 1.20 on a 3 node cluster.
If I create a static pv with:
nfs:
server: SERVERIP
path: NFS SHARE
and a pvc bound to that pv, it works, but the dynamic setup fails :/
when i get the logs of nfs-client-provisioner i get:
I1214 22:08:03.402156 1 controller.go:987] provision "default/pvc1" class "managed-nfs-storage": started
E1214 22:08:03.406446 1 controller.go:1004] provision "default/pvc1" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
@ Hi Thanks for watching. I will re-test this video soon and let you know. Things might have changed.
@@justmeandopensource i tested, on 1.19.5 it works, on 1.20 it fails
@ Ah okay. Something must have been changed. I will work on it. Thanks for confirming.
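For anyone stuck on this in the meantime: the workaround commonly reported for 1.20 is to re-enable the deprecated selfLink field on the API server, or to switch to the newer nfs-subdir-external-provisioner image which doesn't depend on it. Roughly, on a kubeadm cluster:
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false   # add this flag; the static pod restarts automatically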
Hi! Isn't this slower? IOPS, network latency etc.?
I was just showing the possibility of using nfs server as a dynamic provisioning storage backend. You will have to analyze the issues/bottlenecks around the chosen solution.
Hi Venkat, Came across your channel while searching for PVC using NFS. All your videos are awesome and very detailed. I was trying out this, but my nfs-client-provisioner pod creation failed due to Error getting server version: Get 10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout. I followed your vagrant script to create the cluster. Any idea what could be the issue ? Any pointer would be really helpful.. Thanks in Advance.
Hi Nevin, the nfs-client-provisioner pod usually fails if it has problems accessing the nfs server. Hope you have setup your nfs server. And did you verify that you can mount it from your worker nodes? Cheers.
@@justmeandopensource Venkat, thanks a lot for taking the time to read and reply promptly. Yes, my NFS server is up and running and I am able to mount the volumes. FYI, I was able to fix the issue. I used a Vagrantfile which is somewhat similar to your GitHub one. One mistake I had in the script was that my api server advertise address and pod network cidr were in the same ip range (192.168.56.XXX and 192.168.0.0/24 respectively). I read in one of the Google results that the ip ranges should be different, else it may result in a conflict while using Calico. When I checked your script, I noticed your Vagrantfile has it set correctly.
--apiserver-advertise-address=172.42.42.100 --pod-network-cidr=192.168.0.0/16
Once that was fixed, my client provisioner pod started running and the PVC got bound.
Thanks a lot once again. You publish lots of advanced topics which are not found anywhere else. Appreciate your effort. Keep up the great work.
@@nevink3123 very glad that you got it resolved. Good job. Cheers.
Hi Venkat, please let me know where the ip (10.95.65.213) is coming from. Did you create any new network interface?
Hi Rajesh, thanks for watching this video. Where did you see this ip address? That looks like an internal cluster ip, which is all managed by Kubernetes.
Great video. Is there way to access the Azure Blob Storage via the Persistent Volume in AKS (Kubernetes)?
Hi John, thanks for watching.
I have no idea; I have never used Azure or AKS.
you are genuine man !
😄
Hi Venkat, as we know there are many volume plugins available. I want to ask: if we use gcePersistentDisk, we just need to create a storage class and PVs will be provisioned automatically when PVCs require them, but while using NFS we first need to create the NFS server, then the nfs provisioner pod using a deployment, then the storage class, and only then can the PVC be satisfied. Please help me: am I right, and which one is better 🙏
Do we need to deploy the nfs-client-provisioner in each namespace for every namespace to be able to use the nfs service?
Hi Ahmad, thanks for watching. It's enough to deploy it in one namespace and it can be used by pods cluster-wide. Cheers.
I followed this video and it didn't work for me. I have 4 nodes, 1 master and 3 workers, my distro is CentOS 7, I am using the Calico network, my firewall is disabled and setenforce is 0. I got this message from the describe command: wrong fs type, bad option, bad superblock on 192.168....., missing codepage or helper program, or other error (e.g. nfs, cifs) - you might need a /sbin/mount helper program
Hi Venkat,
Firstly thanks for the amazing tutorial!! I have a problem and would like some insight!
I have created a Windows share, and I can mount it on one of my cluster workers and I can write data (with sudo). So there is no connectivity issue. I want to use this NFS share, for which I have assigned all possible read+write access, including to Everyone, but every time I configure this the way you have done it I have issues creating the pvc and pv. I get an error stating: could not create default-*****-*******-**** directory: permission denied. Do you have any ideas on this?
Thanks!
Cheers,
Nithin
Hi Nithin, thanks for watching. I have only tried exporting NFS shares from a Linux machine. But that shouldn't stop you from using a Windows machine for NFS sharing. You might have to change the permissions of the directory you are sharing. I believe you have already done that, but just double check. Give everyone read/write permissions on that shared directory. In Linux, I used chmod 777 on the exported directory.
@@justmeandopensource Hi Venkat,
Thanks for the time 😄
@@nithinbhardwajsridhar4018 No worries. You are welcome.
Hi Venkat, how do I change the default persistentVolumeReclaimPolicy to Retain while using dynamic provisioning?
Hi Shaik,
As the persistent volumes are created automatically when you request them by creating a pvc, you will have to update the ReclaimPolicy once the pv is created.
$ kubectl get pvc
Look at your desired pvc and check the corresponding pv name. Then you can update the policy using the below command
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
I just brought up the cluster and tested this which worked fine. Once you apply this patch to the pv, it won't get deleted when you delete the associated pvc. You have to then manually delete this pv.
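Another option, if you want every dynamically provisioned volume to default to Retain, is to set it on the storage class itself rather than patching each pv afterwards. A sketch based on the class used in this video (worth verifying that the provisioner honours it):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs
reclaimPolicy: Retain   # PVs created from this class default to Retain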
Hope this makes sense.
Thanks,
Venkat
@@justmeandopensource Thanks, Venkat. It worked for me too. Is this the same process even for AWS EBS & Azure disk ?
HI Kasim,
If you are running your K8s cluster in the cloud (GCE, AWS, Azure), there are built-in dynamic storage provisioners for each of them. I made this video as the series I am doing is on bare metal and we don't have any solution for dynamic provisioning built-in for it.
You can check the below link. Scroll down to section "Types of Persistent Volumes".
kubernetes.io/docs/concepts/storage/persistent-volumes/
Hope it makes sense.
Thanks,
Venkat
@@justmeandopensource Thanks Venkat for the clarification. It makes sense.
Hello Venkat,
If we make this storageclass the default, is it still required to specify "storageClassName" during the PVC creation in the yaml file?
Hi Ratnakar, thanks for watching this video. In this video I didn't talk about the default storage class, but I later realized that I should have.
Later I added another yaml file named default-sc.yaml in the same directory in the github repo. It has an annotation to make it a default storage class. So please use default-sc.yaml instead of sc.yaml. Then you don't have to mention the storageclassname in your pvc definition.
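For reference, the only real difference in default-sc.yaml is the default-class annotation, roughly:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # makes this the default class
provisioner: example.com/nfs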
Thanks.
Thank You Venkat.
@@ratnakarreddy1627 You are welcome. Cheers.
How can you set up the NFS server as master from your kubernetes master, and where can I find the admin.config file which could set up NFS as master? Please guide me, thanks.
If you create persistent volumes before the pvc, can the dynamic NFS provisioner map the pvc to a pv which was created before?
Hi Nguyen, thanks for watching. I didn't get your question. Sorry.
@@justmeandopensource In this video, if I create a persistent volume (say its name is pv1) before creating any pvc, will the nfs-client-provisioner pod dynamically bind a persistent volume claim to pv1?
@@nguyentruong-po4mx So your question is if you manually create a persistent volume named pv1 and then create a pvc, will the provisioner use pv1 or create a new persistent volume? Is this right?
If you have a persistent volume and if that satisfies the persistent volume claim you created (like the storage size), then the existing pv will be used.
Can we edit the pvc and increase/decrease the requested volume size without restarting the pods?
Hi Rohit, thanks for watching. I don't think you can do that without restarting the pod. I am not 100% sure on that without testing.
Hi,
I've read on some sites and got the info below:
FYI! I think resize of the disk is possible ( go here kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses)
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "192.168.10.100:8080"
restuser: ""
secretNamespace: ""
secretName: ""
allowVolumeExpansion: true
@@nagendrareddybandi1710 Right, I also came across this flag to allow the expansion of a pv.
Hi, can you help me? I always get an error like this: Output: mount.nfs: access denied by server while mounting, when pulling the image from gitlab.
How did you export your nfs share? What's the content of your /etc/exports file?
I followed your instructions and created the PVC successfully, but it doesn't bind to a PV and I'm not sure what happened. I suspected my NFS server had gone wrong, but I can mount the NFS server directory from my client successfully. Hoping for some help from you. Very nice lecture.
Hi Yi, thanks for watching. I believe I replied to this comment on another video.
Hello, hope you are doing well. I have a question regarding storage classes and pvc. After watching this video I thought I would experiment on AWS cloud with EBS as a volume, but I couldn't. It seems nodes are restricted to be on the same cloud to use EBS. I created the policy as given in the AWS documentation, then created the storage class and pvc, but it was not creating the pv on its own. I read somewhere (or may have got confused with something else) that the nodes on which the pods are running have to be in the same cloud. Any suggestions? Thanks.
Wouldn't it be better if the nfs client provisioner was a daemonset?
Congrats on the tutorial. Pretty good!
No harm in deploying it as a daemonset. Helm charts are configured for deployments with configurable replica counts.
github.com/helm/charts/tree/master/stable/nfs-client-provisioner
Great channel, i learned alot from your videos. Can i request a tutorial on how to setup a dynamic and static glusterfs persistent volume? Thanks
Hi Rudi, thanks for watching this video. I have lot of topics to cover in Kubernetes. I will definitely add this one as well. Cheers.
@@justmeandopensource Great sir, thank you sooo much..
@@rudi.chan.78 You are welcome.
Sir, when I create the nfs client provisioner pod, it shows a CrashLoopBackOff error and the logs show this: Error getting server version: Get 10.96.0.1:443/version?timeout=32s: dial tcp 10.96.0.1:443: i/o timeout.
Please give me a suggestion.
Hi Nitin, thanks for watching.
I did an updated video recently on this topic which might be of some help.
th-cam.com/video/DF3v2P8ENEg/w-d-xo.html
Is it really necessary for dynamic provisioning of persistent volumes if my K8s cluster is hosted on a cloud provider?
There is a difference though. Are you using one of the cloud managed Kubernetes services like GKE, EKS or AKS? Then there is no need for dynamic nfs. You will have to create a storage class though.
But if you are not using managed service, instead launch instances in the cloud and install kubernetes yourself, then you will need this setup. Thanks.
Thank you for getting back to me. I am using DigitalOcean's managed Kubernetes cluster.
@@varun898 I am not sure, but they should have storage provisioning enabled. Cheers.
Thanks for the great content
Hi Barhoumi, thanks for watching.
The GitHub link is not working; kindly provide the latest link.
Hi brother... Your videos are so good and they are clearing so many doubts. Could you please make some videos on common troubleshooting problems in Kubernetes? It would be so helpful for people like me trying to get a job in K8s.
Hi Prabhu, thanks for your interest in this channel. I compile a list of topics based on requests from viewers and this has been requested by a few others as well. It's in my list and I will look into making some videos, time permitting. Cheers.
@@justmeandopensource Thank you 😊
@@InspiringOrigins You are welcome.
Hi Venkat, how do I do this on a Mac? Any idea?
Hi Nehar, I haven't used a Mac in years. But the process of exporting a directory as an nfs share should be simple.
www.peachpit.com/articles/article.aspx?p=1412022&seqNum=11
Once you have nfs shares exported, you can proceed with dynamic nfs-client-provisioning as shown in this video.
Thanks
Hi Sir, I have one question regarding AWS EFS. I have a docker magento image and it contains all the installation files and folders inside the /var/www/html directory, but when I mount the EFS pv claim to /var/www/html, the data inside html is not showing; it becomes empty. I want the data which is already inside html in my docker image to remain after mounting EFS. Otherwise I won't be able to do the installation.
Hi Sarfaraz, thanks for watching this video. So you have some data in /var/www/html in your docker image. Okay. The basic Unix/Linux behaviour is that whenever you mount something to a directory, the underlying data in the original directory won't be available. This would make sense. You can mount your AWS EFS pv in a different location inside the container. There is no way to retain the data after mounting to the same directory.
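A minimal sketch of what that could look like in the pod spec (image, paths and claim name are just placeholders):
spec:
  containers:
  - name: magento
    image: my-magento-image        # hypothetical image with data baked into /var/www/html
    volumeMounts:
    - name: efs-data
      mountPath: /var/www/data     # mount EFS somewhere other than /var/www/html
  volumes:
  - name: efs-data
    persistentVolumeClaim:
      claimName: efs-pvc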
Thanks.
@@justmeandopensource Can I mount EFS directly on the worker node via fstab and then mount the container volume /var/www/html as a hostPath? Will it retain the data then?
Why do you have data in the container image? Why don't you copy all the data to the EFS and just mount it as PV?
Thanks.
@@justmeandopensource I am working on a product. I want to have a base image ready for a magento store. Whenever a user signs up with their name, a new magento store is created at mytest.example.com. That's why I want the base image ready, so that we only have to make changes in the database. I am using RDS for the database and EFS for persistent storage.
@@sarfarazshaikh I understand. But when you mount the persistent volume on /var/www/html, the data already there will not be accessible. So you will have to mount EFS under different directory like /var/www/data and change the logic of your web application to use this directory as the data directory or something like that. thanks.
Your the bomb V! thanks man!
Hi Zach, thanks for watching this video. Cheers.
super session bro
Hi Prasad, thanks for watching.
Hi Venkat,
I am trying this video and the host is Mac. I am running Vagrant k8s cluster.
Host Machine:
Mac NFS Server is running.
/srv/nfs/kubedata - permission as below
drwxr-xr-x 3 nobody admin 102 1 Sep 11:57 /srv/nfs/kubedata
KWorker:
[root@kworker2 ~]# showmount -e 192.168.68.XXX
Export list for 192.168.68.XXX:
/srv/nfs/kubedata (everyone)
mount -t nfs 192.168.68.XXX:/srv/nfs/kubedata /mnt
mount.nfs: access denied by server while mounting 192.168.68.XXX:/srv/nfs/kubedata
Any clue what could be the issue?
Thanks in advance.
What options do you have in your nfs exports configuration? On my Linux server, I had to pass the "insecure" option as well. Could you try it with the insecure option? Thanks.
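On my Linux NFS server the export line looks something like the below; the option names on macOS differ, so treat this only as an illustration of the idea:
# /etc/exports
/srv/nfs/kubedata  *(rw,sync,no_subtree_check,no_root_squash,insecure)
# re-export after editing
$ exportfs -rav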
unbelievable!!!
Hi Dom, thanks for watching this video.
can we configure multiple NFS servers with one deployment?
I don't think you can.
@@justmeandopensource With two deployments of the provisioner pod, will that work? I will try that.
@@nachi160 Yes, I guess it will, exposed through 2 different storage classes.
If I am following along at home, should I change the provisioner in class.yaml?
To NFS maybe?
Also, in deployment.yaml:
Should I use the path on the NFS server, or the path where the NFS share is mounted on the nodes?
e.g. I have /var/nfsshare on my NFS server, and /mnt/nfs/var/nfsshare on my nodes.
Which one should I use?
Hi Yuven,
Firstly, thanks for watching this video.
Query 1: Should I change provisioner in class.yaml?
In class.yaml, line 5, I have used "example.com/nfs" as provisioner.
In deployment.yaml, line 22 and 23, I have specified the provisioner name environment variable
You have to make sure the provisioner name you give in deployment.yaml matches that in class.yaml.
Its just a name. You can have any name, but needs to match in these two files.
Query 2: Which path should I use in deployment.yaml?
You should use whatever you exported in your nfs server /etc/exports file.
In your case, you should use /var/nfsshare
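To make the matching concrete, these are the two places that have to agree (provisioner name as used in the video, with your /var/nfsshare path):
# class.yaml
provisioner: example.com/nfs
# deployment.yaml (container env)
env:
- name: PROVISIONER_NAME
  value: example.com/nfs       # must match the provisioner field in class.yaml
- name: NFS_PATH
  value: /var/nfsshare         # whatever you exported in /etc/exports on the NFS server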
Hope this makes sense. If not, let me know.
Thanks,
Venkat
@@justmeandopensource Thank you very much! You have been a great help to me. Good to know that the provisioner name is only a name.
I figured out the path a bit after asking the question... by rewatching parts of you video ;)
Keep up the amazing work! The world need more people like you :)
@@yuven437 You made my day. Cheers.
@@justmeandopensource eeh, this is getting embarassing :'D
I now get an error:
MountVolume.SetUp failed for volume "pvc-427e53bf-70bb-11e9-8990-525400a513ae" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae --scope -- mount -t nfs 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae /var/lib/kubelet/pods/9b02aec2-70be-11e9-8990-525400a513ae/volumes/kubernetes.io~nfs/pvc-427e53bf-70bb-11e9-8990-525400a513ae Output: Running scope as unit: run-r68af7a0af3c3404eb50d1e9baf90632d.scope mount.nfs: mounting 11.0.0.75:/var/nfsshare/default-pvc3-pvc-427e53bf-70bb-11e9-8990-525400a513ae failed, reason given by server: No such file or directory
When I deploy busy box.
I notice that the pvc gets created, but it does not show up in the shared folder. Even though I have checked, and the worker nodes have access to the share (I created a sample file, and it works just fine)
Any idea about what is wrong?
I am closing in on my deadlines and I am quite stressed.
In deployment.yaml I use:
spec:
  containers:
  - ...
    volumeMounts:
    - name: nfs-client-root
      mountPath: /persistentvolumes
    env:
    - name: PROVISIONER_NAME
      value: example.com/nfs
    - name: NFS_SERVER
      value: 11.0.0.75
    - name: NFS_PATH
      value: /var/nfsshare
  volumes:
  - name: nfs-client-root
    nfs:
      server: 11.0.0.75
      path: /var/nfsshare
I am guessing there is something wrong here?
the path on my NFS is /var/nfsshare
and on my Node: /mnt/nfs/var/nfsshare
should I make them the same?
When I try to change the value in the pvc from 500Mi to 1Gi, it shows this:
persistentvolumeclaims "pvc1" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
How can I increase the value?
Hi Mani, This illustration I showed in this video is for dynamic provisioning and not dynamic resizing. As the error states, it is forbidden because the storage class we are using here which is NFS based doesn't support dynamic resizing. In order to use dynamic resizing feature, you will have to use one of the supported storage class (eg. AWS EBS, Google PersistentDisk, Azure disk or other cloud offerings). Most of my videos are around bare metal and not cloud.
Thanks.
@@justmeandopensource Thank you Venkat. I am going to use that with EBS. It's very interesting.
@@manikandans8808 Check the below link. Might be useful.
kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
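For EBS, the expansion support is switched on with allowVolumeExpansion on the storage class, along these lines (a sketch for the in-tree aws-ebs provisioner):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-expandable
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true   # allows editing the PVC to a larger size later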
Thanks
that was really great
I am getting the error "waiting for a volume to be created" and the pv is not getting created. Can anyone help?
Hi, thanks for watching. Did you check if you can mount the nfs shares from the worker nodes first?
Can we ingest kubernetes logs in AWS Elasticsearch directly ?
Hi Praveen, yes you can. All you need is a reachable elasticsearch endpoint from your k8s cluster. You can use fluentd or any log shipper to send logs to Amazon elasticsearch service. I haven't tried it. But when I try it, I will make a video.
Thanks
Hi Murat, thanks for watching.
Instead of NFS, how do we use AWS EFS for PVCs?
Hi Praveen, thanks for watching. If you have a cluster in AWS (like their managed EKS), it will be easier to use EBS or EFS as persistent storage. If you want to use it for your locally running k8s cluster, its still possible, but I haven't tried. When I get some time I will give it a try. Cheers.
Hi, I am facing some issues when creating the NFS dynamic provisioner. I am using all your files, that is rbac.yaml, class.yaml and deployment.yaml. Applying the rbac and class files works fine, and the deployment also says created, but when I check with the "kubectl get all -o wide" command it shows the nfs container stuck in creating mode and it never becomes ready: [ pod/nfs-client-provisioner-7b94998b9-lpn6w 0/1 ContainerCreating 0 29s ]. Please help with this; I need to add it to my production setup.
Hi Atul, thanks for watching. Did you verify that your nfs server is running and that you can manually mount it on the worker nodes? If you can't mount it manually on the worker nodes, the nfs-provisioner pods will not be ready. First thing is to check as shown in this video that you can mount the nfs share from your worker nodes. Then make sure the deployment.yaml has the right ip address. Also what version of Kubernetes are you running?
Thanks.
@@justmeandopensource I am using minor version 1.16, and from all my workers we are able to mount the nfs share folder.
But even when I try using helm, and after following every other document, I still get the same error: the container hangs in that state. We have two masters and two workers and we are just testing now, so please help me if possible - if you have any proper document, or if you can go through it remotely.
@@atulbarge7445 I don't think I can help you remotely. Sorry about that. Look at the output of "kubectl describe deploy <deployment-name>" and check the events section at the bottom. It might give you a clue.
@@justmeandopensource ok thanks i will do that
@@atulbarge7445 cool.
Thanks for this video. I followed the same steps but my pod keeps restarting: Back-off restarting failed container.
Please help me resolve this.
Hi Siva, thanks for watching. I have been successfully using this process for a very long time on a daily basis. Can you first make sure that you can mount the nfs volume from the worker node?
@@justmeandopensource Hi bro, it's mounted but I am still getting the same issue.
Please help with this.
Hi Siva, I don't think it's a problem with your dynamic PV provisioning. If it was a PV provisioning problem, then your pod would be in a pending state and not in a back-off state.
Look at the events immediately after your deploy the resource.
$ kubectl get events
@@justmeandopensource Hi bro, thanks. I have recreated the NFS setup; there was some network issue.
Now it's working fine.
Thank you so much.
Your videos are helping me a lot.
@@SivaKumar-og9pb Perfect.
Hi Sir, I am running the nfs server on an AWS EC2 machine and followed your steps. When I create the pvc, its status shows as pending. What should I do? What am I missing? Please suggest.
Sir, please help me.
@@Mr.RTaTaM Thanks for watching. As shown in this video, did you check that you can manually mount the nfs share from your worker nodes? If not, please do that first. And also see if you have to update security groups to allow this traffic.
My cluster is running on my local laptop and I created the nfs server on AWS. I'm able to mount it from my worker nodes, but when I create the pvc it is in pending state, saying: waiting for a volume to be created, either by external provisioner example.com/nfs or manually created by system administrator. Am I missing anything, sir?
@@Mr.RTaTaM So if you can mount it from your worker nodes, then I don't think there is a problem with the setup. If you used my manifests, you would have got a storage class named managed-nfs-storage. And you will have to use the same storage class in your PVC. Also you can check the events. For example, kubectl get events. This will show you why the pvc is pending.
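A quick way to check both things (the storage class name below is the one from my manifests; the pvc name is just an example):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: managed-nfs-storage   # must match the class created by the provisioner
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 500Mi

$ kubectl describe pvc pvc1   # the Events section shows why it is pending
$ kubectl get events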
Thankyou bro
Hi Arjun, thanks for watching. Cheers.
Came across your channel when I was trying to understand mongodb replica sets. Really appreciate your work and learning a lot from your channel.
Hi Varun, thanks for watching my videos and taking time to comment. Cheers.
what about aws EFS ?!
Yes, we can use EFS for dynamic provisioning. I haven't done any video on that. Probably I will do it at some point. Cheers.
Hi Venkat, I am trying to expand the pvc online, but it is not working. Any idea?
I was able to edit the PV for online expansion; it got expanded from 5Gi to 50Gi.
But the PVC is not responding at all.
Thank you!
--------------------------------------------------------------------------------------------------------------------------------------------------
root@ubuntu:/K8/nfs-storage-provision# k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage dynamic/nfs Delete Immediate true 7m46s
--------------------------------------------------------------------------------------------------------------------------------------------------
root@ubuntu:/K8/nfs-storage-provision# k get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-12a822b0-ce75-47fe-8255-ce24ff9b30b5 50Gi RWX Delete Bound default/pvc-nfs-pv2 managed-nfs-storage 4m43s
--------------------------------------------------------------------------------------------------------------------------------------------------
root@ubuntu:/K8/nfs-storage-provision# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs-pv2 Bound pvc-12a822b0-ce75-47fe-8255-ce24ff9b30b5 5Gi RWX managed-nfs-storage 5m1s
root@ubuntu:/K8/nfs-storage-provision#
The pod is stuck in the ContainerCreating status. What's the problem?
Hi Benharath,
Thanks for watching this video.
Which pod is stuck at that stage? Is it one of the pods during the NFS provisioner deployment, or a pod you are testing with a persistent volume after you have created the NFS provisioner?
Thanks,
Venkat
@@justmeandopensource yes it's
one of the pods during the NFS provisioner deployment
You could check the events from that deployment, which would tell you what stage it is in and any possible errors.
Run the below command and, towards the bottom, see if there are any clues
$ kubectl describe deployment <deployment-name>
Thanks
@@justmeandopensource i got this :
Warning FailedCreatePodSandBox 25s kubelet, nfs-client Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "419e367daaae5f57f1744a0b86e09c28e94544275bcdaf64efe0b8d2af079f52" network for pod "nfs-client-provisioner-c84f69c7c-mvjpx": NetworkPlugin cni failed to set up pod "nfs-client-provisioner-c84f69c7c-mvjpx_default" network: unable to allocate IP address: Post 127.0.0.1:6784/ip/419e367daaae5f57f1744a0b86e09c28e94544275bcdaf64efe0b8d2af079f52: dial tcp 127.0.0.1:6784: connect: connection refused
Hi Benharath,
Looking at the errors you posted, it seems there is some network problem. Forget about this dynamic nfs provisioning setup for a moment. Were you able to set up the cluster successfully? Could you create a simple pod like below?
$ kubectl run myshell -it --rm --image busybox -- sh
It will download busybox container and start a pod and give you a prompt. Check if you can ping internet (eg: google.com)
or
$ kubectl run nginx --image nginx
I am trying to find out whether you have a general cluster networking issue or something that is specific to dynamic nfs provisioning deployment.
Thanks,
Venkat
Hi Venkat, Please let me know the root password.
kubeadmin
Hi Venkat, My "nfs-client-provisioner" is up and running and PVC is in "PENDING" state with the following message "waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator". Storage class is also visible "managed-nfs-storage (default)". Please advise. Thank You very much.
Hi Ishan, I can see your storage class managed-nfs-storage is the default storage class, which is fine. I believe there is some mismatch between what the storage class can offer and what you have requested in your pvc claim. Let me give you an example: you might have configured the storage class to offer only the RWO access mode and you may have asked for RWX (ReadWriteMany) in your claim. Something like that. You can also check the logs of the nfs-client-provisioner pod, which will give you a more meaningful error if there is one.
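For example, something like this (deployment name as used in this video; the pvc name is yours):
$ kubectl logs deploy/nfs-client-provisioner
$ kubectl describe pvc <pvc-name>   # the Events section usually shows why it is pending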
@@justmeandopensource thank you Venkat. It’s working now. Really appreciate your lessons. Keep up the good work.
@@IshanRakitha Glad to hear that.