This might be the most straightforward video on the net for k8s volumes, it's just awesome. Thank you sir for sharing it.
Hi Ivan, thanks for watching this video and taking time to comment. Much appreciated.
Excellent video. I also watched the dynamic NFS one, also excellent stuff. Thanks !!
You are a great instructor and you know what you are talking about.
Learned a lot today.
Regards, Hans
Many thanks for watching.
Hi Venkat, just started to learn about PV and PVC. The video is very good for a beginner. I am going to try it on my cluster. Thanks for the video.
You sir are a rock star. Thanks for the great video. You have saved me tons of time!
Hi Patrick, thanks for watching.
Literally awesome sir..lovely work and lovely explanation...
Hi Zaiba, thanks for watching.
Hey Venkat,
You are doing such a fabulous job man.
I learned a lot of Kubernetes from your presentations.
Your presentation was awesome.
What a dedication .... ahhh
The beauty here is how you find the time to reply to most of the audience.. [it's unbelievable]
It's a really good attitude..
Keep rocking..! "I have tried this exercise on CentOS 7." It works as expected.. Note: I just used the export * (rw,sync) and I didn't get any error.
Hi Kumar, many thanks for watching this video and taking time to give feedback. Glad that you liked my videos. Just trying to give back to the community something that I have been learning.
Thanks,
Venkat
Thanks Venkat. Tried this lab today. I did it without any hiccups :)
Perfect. Thanks for watching. Cheers.
Simply awesome, watched and practiced. Thank You.
Hi Pravesh, thanks for watching.
Superb explanation and demos man, I'm really enjoying your vids! Thank you for sharing your skills.
Hi Cedrick, thanks for watching.
Thanks for the video. I had to run the NFS client install commands on the worker nodes to get it up and running: (sudo apt install nfs-common nfs4-acl-tools) for Ubuntu, (dnf install nfs-utils nfs4-acl-tools) for CentOS.
Love you man, I do love documentation but sometimes it's better to watch a video :)
Hi, thanks for watching. Yeah, with documentation you can quickly get through what you need, but sometimes if you want to really learn, videos are great. Cheers.
Can you create a video on how to provision Ceph storage to a multi-master Kubernetes cluster using ceph-rbd and ceph-fs?
It would be very helpful for us.
All your videos have helped me to learn many things.
Hi Manali, thanks for watching. I will certainly do video on ceph as thats in my list. Cheers.
@@justmeandopensource I have successfully attached the ceph-rbd and ceph-fs to the multimaster setup. Now I am trying to provision nextcloud with this ceph storage and I am getting stuck in it. Can you help me with it and ingress for the same.
Very very good, your tutorial is very simple, thank you so much for sharing this.
Hi Abdul, thanks for watching. Cheers.
Instead of enabling and starting the NFS server in two steps, you can do 'systemctl enable --now' which does both in one step ;)
Hi Panos, thanks for watching this video and taking time to share your suggestion. Yes, I am aware of that but wanted to show each step for those who are not used to the "enable --now" option. Enabling and starting separately makes more sense for newer users. Thanks.
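For reference, the two approaches look like this (a sketch assuming the service unit is called nfs-server; on Ubuntu it may be nfs-kernel-server):

$ sudo systemctl enable nfs-server
$ sudo systemctl start nfs-server

# or, in a single step:
$ sudo systemctl enable --now nfs-server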
excellent presentation, really helped me, thank you, keep up the good work
Hi Atul, thanks for watching. Cheers.
Wonderfully explained!!
Thanks.
Hi Amit, thanks for watching.
Hi - very helpful! You did a really great job and helped me out on my journey!
Hi Michel, thanks for watching. Glad it helped. Cheers.
Hi Venkat,
Thanks for the great playlist on Kubernetes. How can we set up an HA (highly available) NFS server?
Excellent question! I've already tried HAProxy for service load balancing; it's a great program and I'm very happy with it. But that's for publishing a service to an external IP address, it has nothing to do with the NFS server. This is an important piece of high availability that I'm interested in too.
Thank you for the awesome explanation! Could you please let me know the name of the resource usage tool that you have on the right of the screen (showing the system's networking and resource usage)?
Hi Saeid, thanks for watching. The stuff you see on the right side of my screen is a conky script, which can be configured to show pretty much anything you want from the system. You can download a sample conky configuration from the internet, modify it, install conky and use it. Cheers.
This was so interesting to watch. It worked perfectly. What are the drawbacks of using the insecure option in exports? Can you tell me about the nobody:nogroup credentials? Thank you.
Hi Mani, the drawback of the insecure option is, as the name says, that it is insecure. Then again, NFS itself is insecure. I just wanted to get it working somehow, as my focus was not on NFS but on Kubernetes using NFS. If you read through Ajit Singh's comment, he has explained how he worked it out without the insecure option.
Basically we need to set the ownership of the NFS exported directory to a more generic owner:group. Leaving it as root:root will cause permission problems when you try to access it from the pod for reading/writing.
If you set the ownership to a generic account, all the data you write to that directory will inherit the ownership.
For certain distros, the generic user is nfsnobody and for certain others it is nobody/nogroup.
Thanks,
Venkat
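A minimal sketch of that ownership approach on the NFS server, assuming the export path from this video (use nfsnobody:nfsnobody on distros that have it; the export options are one possible combination):

$ sudo chown nobody:nogroup /srv/nfs/kubedata
$ cat /etc/exports
/srv/nfs/kubedata  *(rw,sync,no_root_squash,no_subtree_check)
$ sudo exportfs -rav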
@@justmeandopensource perfectly explained. Thanks for that.
@@manikandans8808 No problem.
I think around 8:54 you give a clue as to what the IP needs to be set to on the NFS server in /etc/exports. I used the subnet assigned to the LXC containers instead of *, so my /etc/exports looks like: /srv/nfs/kubedata 10.105.38.0/24(rw,no_root_squash). I didn't have to use chmod 777, which I don't like to use. Let me know if this is helpful. Thanks again for all your work!
Hi James, thanks for watching and sharing your thoughts. What you have done is exactly what is needed. I wanted to cover a broad audience, and a few people might have issues with their networking setup, so I used * for the IP range and the insecure option as well. Cheers.
Hi, Venkat. I appreciate so much your effort and dedication in making these videos.
A little question: suppose I have an existing NFS export with some static files, PDF files for example, that I need to mount in every replica of my app. How does that work with this provisioner if every pod of the deployment claims a single volume? I need the same data in every pod, you understand?
Sorry if the question is not very clear.
Thank you from Argentina!
Hi Eduardo, thanks for watching. First create a PVC, which will create a PV for you if you have dynamic volume provisioning set up. Then copy the files onto the directory, and each of your pods using that PVC will have that data.
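A rough sketch of that setup, assuming a ReadWriteMany-capable NFS storage class (the names managed-nfs-storage, shared-static-files and webapp are just placeholders for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-static-files
spec:
  accessModes:
    - ReadWriteMany                      # all replicas share the same volume
  storageClassName: managed-nfs-storage  # assumed dynamic NFS storage class
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx
          volumeMounts:
            - name: static-files
              mountPath: /usr/share/nginx/html   # every replica sees the same files
      volumes:
        - name: static-files
          persistentVolumeClaim:
            claimName: shared-static-files

Copy the PDFs into the NFS export once and every replica will see them.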
Amazing, you have saved me with your video. Can you just tell me how you made command completion display directly on the prompt? It's insane.
Hi Sammy, thanks for watching. I use oh-my-zsh, and the command completion in the background is the zsh-autosuggestions plugin, which suggests commands based on your history. I have done a few videos on my setup where I have explained my terminal setup.
th-cam.com/play/PL34sAs7_26wOgqJAHey16337dkqahonNX.html
Thanks
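For anyone curious, the relevant bits of ~/.zshrc look roughly like this (a sketch assuming oh-my-zsh is installed and zsh-autosuggestions has been cloned into $ZSH_CUSTOM/plugins):

# ~/.zshrc (sketch)
export ZSH="$HOME/.oh-my-zsh"
plugins=(git kubectl zsh-autosuggestions)
source "$ZSH/oh-my-zsh.sh"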
Nice video as usual. 172.42.42 will not work as you are probably connecting from 192.168.x.x which is eth0.
Yeah realized it later. Thanks for watching.
@@justmeandopensource
Want to try Rancher and need a storage class to make it work. Don't remember anything I did before :) Thanks for your videos, great stuff.
@@alexal4 No worries.
Hello Venkat,
Been following your tutorials and they are very informative. But a quick question,
How does one do the same thing (PV, PVC claim & deployment template) when there are directories and sub-directories involved? The single-file example is really easy, but in production an application is not bound to a single file but to directories, sub-directories and the files inside them.
How can one handle this situation??
Thanks Venkat. Another great video. We are all learning a lot from you.
The persistent volume worked perfectly. However, sometimes when I refresh the page the response I'm getting is the default Nginx html page and not my own page. What do you think could be happening? It seems like the NFS connection is having some drops, but I'm not sure what is going on.
Hi, thanks for watching. I am not entirely sure why that would be happening. You are creating a deployment with a certain number of replicas, and you are exposing it through a service. Your deployment manifest specifies that the pods need volumes, so I guess you are mounting the same volume on all your nginx pods as read-only on multiple nodes. The service will load balance across all the pods based on the labels, so you should be seeing the same page.
@@justmeandopensource, I figured out what the issue was, and I believe it was some type of bug, because to me it does not make sense anyway. From your previous video, I had an nginx pod called "nginx2" running as a NodePort service on port 31600 in the default namespace. However, following this video we create another nginx pod with the NFS volume, this one called "nginx-deploy", and we made a custom webpage for it; I also exposed this as a NodePort but on port 31700 in the default namespace. So, in short, when I requested the page from the browser, for example url = "kworker1:31700", sometimes the request was hitting the nginx2 service on port 31600 that did not have the custom webpage. This was very strange behavior in my humble opinion. Once I deleted the nginx2 service and deployment, I had no more problems.
@@ninja2807 Cool.
Is it necessary to mount the NFS export on the worker nodes? Or can we just use the NFS share in the pods using a PV and PVC without mounting the NFS share on the worker nodes?
Hi David, you don't have to manually mount the nfs share on the worker nodes. Creating the PV will do that for you. I was just testing whether the worker nodes can mount the shares from the nfs server just to get the basic networking right.
@Just me and Opensource thank you so much for the fast response 👍👍
@@david2358 No worries. You are welcome.
Hello Admin, can you do a tutorial about GlusterFS? Thank you. The tutorial is very useful.
th-cam.com/play/PL34sAs7_26wOwCvth-EjdgGsHpTwbulWq.html
@@justmeandopensource Hi, thank you for responding to me. Do you have a tutorial about Kubernetes and GlusterFS?
@@SonNguyen-pw8lm Not yet. Planning to do soon.
@@justmeandopensource thank you so much :D
@@SonNguyen-pw8lm You are welcome.
thanks
Hi Murat, thanks for watching. Cheers.
Hi Venkat,
Thank you for the video, it is awesome. It has given me more clarity regarding PVs in the context of k8s.
Can you guide us: if I expand the size of the NFS directory, does the PV automatically update the volume size or do we need to do it manually?
For example:
I have one NFS server with 50 GB. I have created a PV and PVC with a 50 GB size. Now my storage got full and I wanted to expand it, so I expanded the volume on my NFS server, but k8s isn't aware of that change because we configured it with 50 GB. So will it pick up that change automatically, or do we need to change the YAML and re-apply it?
Hi Devanshu, if you expand the file system backing the NFS exported directory, the extra space will be available to the k8s cluster. But the persistent volume will remain the same size as you requested in the PVC. Thanks.
If you want to resize your persistent volume you have to delete and recreate it. Thanks.
Hi Venkat,
Off topic - that is a nice conky display you have got on the right. Can you share its config file? It should be /home/USER/.conkyrc
Nice video BTW
Hi Makrand, sorry, unfortunately I deleted the Github repo that had these config files. I was using that desktop setup only for a while. Not using that anymore. Many viewers asked for that conky config. I chose one from internet and customized to my liking.
Great job, but I have a problem: my nginx pod is always in Pending state!?
Hi Venkat, first of all what a great video. I'm curious about the size of the NFS server. Is it possible to use an LVM volume for the NFS directory?
Hi Christian, thanks for watching this video. Yes, you can use an LVM volume for the NFS share. Cheers.
Thank you for your video. I am trying to add a persistent volume to a Jenkins container. I am using /var/jenkins_home as the mount path, but when I create this container it goes into CrashLoopBackOff state, and in the logs I get a permission denied error saying it cannot write to this path in the container. How do I resolve this error?
Hi Sufia, thanks for watching this video. I am not sure how you are deploying Jenkins and the persistent volumes. It's worth checking the logs of the Jenkins pod to see why it is failing.
I have done a video on running jenkins with persistence on Kubernetes cluster. If you are interested, you can take a look at it in the below link.
th-cam.com/video/ObGR0EfVPlg/w-d-xo.html
Also for dynamic NFS provisioning, you can check the below video.
th-cam.com/video/AavnQzWDTEk/w-d-xo.html
Thanks
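For what it's worth, a common cause of that permission error is that the official jenkins/jenkins image runs as UID 1000 while the mounted directory is owned by root. A hedged sketch of the usual fix (the IDs are assumptions based on that image, and the claim name is hypothetical; on an NFS-backed volume you may also need to chown the exported directory to 1000:1000 on the server, since fsGroup is not applied to every volume type):

spec:
  securityContext:
    runAsUser: 1000   # UID the Jenkins image runs as
    fsGroup: 1000     # group ownership applied to supported volume types
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts
      volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
  volumes:
    - name: jenkins-home
      persistentVolumeClaim:
        claimName: jenkins-pvc   # hypothetical claim name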
Thank you so much.. because of your dynamic NFS provisioning video I was able to do it 😊
@@sufiaalmas5354 You are welcome. Cheers.
Regards!!!!! .... & Congrats
Hi Anthony, thanks for watching this video.
@@justmeandopensource sure !!!! :)
Thank you
You are welcome
Do you have any examples of migrating PVs, PVCs and NFS shares from an old NFS server to a new NFS server without losing data.... the migration also includes the docker-registry and metrics.
Hi Raj, thanks for watching. I haven't done any such thing before. Sorry, no idea.
If we deploy our NFS server to a GCP instance, for example, should we expose any port in the firewall in order to access the server from anywhere, and must we define it in the yaml file for the persistent volume definition?
Hello Mr. Venkat, you are doing great videos, they're really useful for us. Anyway, I have a doubt about this session. You are using your Linux base machine, right? And you installed NFS there and exported it to all nodes. I am using a Windows base machine, so I installed the NFS server in my k8s cluster, which runs in VirtualBox. The cluster side works perfectly, but when I go to a worker node and try to mount NFS, it does not mount and I face the issue: incorrect mount type. Can I install NFS in the cluster, or should I not do that?
Hi Krishna veni, thanks for watching.
Yes I used my Linux host machine as NFS server. You could use kmaster as your NFS server.
Please let me know how you installed it and how you are trying to test mounting it from the other k8s nodes.
Hi Sir,
It's superb...................
The PV was created with 1Gi, the PVC requested 500Mi and it got bound to it.. all cool.
If we create one more PVC with 500Mi, will it allocate from the same PV? As I've seen, that one stays in a pending state..
In that case the remaining 500Mi of the first PV would be wasted, right?
Or, if we need to increase the first PVC, can we increase it? (as per LVM)
The pod was created on worker1 only, so when we expose it to the internet it should work only on worker1's IP, right? Why is it working with worker2's IP also?
Hi Nagendra, thanks for watching. In the case of a manually created persistent volume like the one shown in this video, the 1Gi PV won't be reused. If you requested just 500Mi from a PVC, then the remaining 500Mi on that PV is wasted. This is why there is dynamic volume provisioning: you don't have to create any PV in advance, and a PV gets created with exactly the size requested by the PVC.
You can edit the PVC and increase the size, but that will take effect only when you restart the pod.
Cheers.
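For comparison, with dynamic provisioning the claim alone is enough; a sketch (the class name managed-nfs-storage is an assumption, use whatever your provisioner registered):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage   # assumed class created by the NFS provisioner
  resources:
    requests:
      storage: 500Mi                       # a PV of exactly this size is created on demand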
Cool.. thank you so much Sir for the clarification.
@@nagendrareddybandi1710 You are welcome. Cheers.
thank you very much
Hi Aleksey, thanks for watching.
Hi Venkat, Thanks for K8S playlist. It is very helpful.
Mount is working fine. I created a persistent volume.
nfsiostat
> 10.128.0.8:/srv/nfs/kubedata mounted on /mnt:
When I create a PersistentVolumeClaim, it stays pending forever, since it throws the error "storageclass.storage.k8s.io "manual" not found"
kubectl get sc
> No resources found
Shouldn't the creation of PV create a storageClass?
What could be the issue here?
Please share the resources where I can read more about it.
Hi Anjan, thanks for watching.
Starting at 11:15 in this video, I showed 4-pv-nfs.yaml which creates the persistent volume. This manifest contains the storageclassname and you need to use the same storageclass in your pvc. Did you use the manifests in my github repo or you used your own? Just double check that you are using the same storage class name in both pv and pvc.
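In other words, the field that has to line up is storageClassName; a trimmed-down sketch (the server address, names and sizes here are only illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-manual
spec:
  storageClassName: manual      # must match the claim below
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.42.42.100       # assumed NFS server address
    path: /srv/nfs/kubedata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-manual
spec:
  storageClassName: manual      # same value here
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi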
@Just me and Opensource Thank you very much for replying.
Yes, I used the same yaml files. It didn't work for me. I could see the persistent volume "manual", but could not create a PVC using "manual" as the storageClassName.
I didn't modify anything in the manifest.
Thanks to your Dynamic provisioning video. It worked perfectly for me.
@@anjanpoonacha no worries.
hi is there a video about your terminal tweaks etc?
Hi M8, thanks for watching this video.
I recently started using the i3 tiling window manager. I've done some videos on my setup, if you are interested.
th-cam.com/video/XpNcxzzkkT0/w-d-xo.html
th-cam.com/video/SMfidTyrqDo/w-d-xo.html
th-cam.com/video/omhky9FgViU/w-d-xo.html
Or the old desktop environment and setup I used to use can be found in the below link
th-cam.com/video/soAwUq2cQHQ/w-d-xo.html
Thanks.
@@justmeandopensource Amazing, thanks! I'll look into them :)
@@m8_981 You are welcome. Cheers.
Do you think autofs may offer some advantages?
Hey Venkat, please, I want to ask if there is a possibility to install NFS with a Helm chart in Kubernetes without using this method.
Hi Fatima, thanks for watching. I did a video recently on that.
Here it is
th-cam.com/video/DF3v2P8ENEg/w-d-xo.html
And you can use Helm to install the NFS provisioner in the cluster.
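A hedged sketch of the Helm route, assuming the kubernetes-sigs nfs-subdir-external-provisioner chart (chart and value names may change, so double-check against the chart's documentation; the server address and path below are placeholders):

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=172.42.42.100 \
    --set nfs.path=/srv/nfs/kubedata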
Hi Venkat,
I'm quite confused with PV and PVC.
I'm going to deploy WordPress in my cluster. I've created 2 PVs, do I still need PVCs?
alex@bionic30:~/yamls/wordpress$ cat 01_nfs-pv-wordpress-web.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-wordpress-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /wordpress-web
    server: 192.168.0.20
  persistentVolumeReclaimPolicy: Retain
alex@bionic30:~/yamls/wordpress$
alex@bionic30:~/yamls/wordpress$ cat 02_nfs-pv-wordpress-mysql.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-wordpress-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /wordpress-mysql
    server: 192.168.0.20
  persistentVolumeReclaimPolicy: Retain
alex@bionic30:~/yamls/wordpress$
Hi Alex, thanks for watching. Yes, you will still need PVCs, which are how you request the storage. You will then reference the PVC in your pod definition.
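For example, a claim that could bind to the nfs-pv-wordpress-web volume above might look roughly like this (a sketch; an empty storageClassName matches a PV that has no class set):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-wordpress-web
spec:
  storageClassName: ""      # match a PV with no storage class
  accessModes:
    - ReadWriteMany         # must be compatible with the PV's access mode
  resources:
    requests:
      storage: 10Gi

You would then reference claimName: nfs-pvc-wordpress-web under volumes: in the WordPress deployment.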
Hey Venkat, thanks for your video. I appreciate your efforts in doing so. A quick doubt about the NFS volume type:
I have created an NFS server and the root disk is 10 GB (I created a GCE instance in GCP).
I have mounted that NFS server to my cluster (1 master and 2 worker nodes created using kubeadm in GCE).
I have created a PV for 1Gi, and a PVC for 500Mi.
I created a pod for nginx and mounted this PVC. It's successful.
But when I log in to the pod (using kubectl exec) and run the "df -h" command, I am seeing a 10 GB size for my /usr/share/nginx/html mount, even though I just provisioned 500Mi of my 1Gi PV. Why is it showing 10 GB, which is my root mount of NFS?
Thanks!
Hi, thanks for watching. What you have mentioned is kind of expected. The PV/PVC size is only a request, not a quota, and df inside the pod reports the size of the underlying NFS filesystem. If you want to enforce real limits, you have to put restrictions on the NFS server side, which gets complex.
You may want to configure an individual disk or partition on the NFS server side and then export it.
@@justmeandopensource Thanks Venkat, got it.
@@sivav3675 Cool.
My ubuntu host cannot locate nfs-utils pacman package..any ideas why? I have updated my system!
Very nice video.. I am trying to create a PV using AWS EFS, but when I create the PVC it is stuck in a pending state, and when I describe the PVC it says that the PV is not found, even though the PV has already been created. I want to deploy my InfluxDB inside k8s and mount it to EFS.
Hi Rahul, I haven't used EFS as persistent volume storage, but let's check it.
Please provide a bit more detail on your setup.
1) Is your cluster running locally on your laptop or in AWS
2) How did you provision your Kubernetes cluster
3) How did you setup dynamic volume provisioning? NFS-provisioning? Did you create a storage class?
4) How did the persistent volume get created? Did you create the PV manually?
Thanks.
@@justmeandopensource 1. Yes, it is running locally.
2. We provisioned it using a script.
3. I am using Windows and I have created an EFS in the AWS console. I am not sure whether I need to configure NFS locally. I read a document where they provide the IP of the NFS server in deployment.yaml, and I am providing the EFS server in my yaml. Is that correct?
4. I have created the PV and PVC manually, and the PVC is now bound to the PV I created.
But when I apply the yaml file the pod is not running; its state is ContainerCreating. Not sure why it is stuck in that state?
@@rahulmalgujar1110 Okay. Anyway, you have got the PV created manually and the PVC is now bound to that PV. Usually the pod will be in a pending state if it is waiting for a persistent volume, but in your case the PV is already there and is bound to a PVC. If the pod is in ContainerCreating state, it could be something else. Do the worker nodes have sufficient memory available to take this pod? Usually when you don't have enough memory on the worker node, the pods get stuck like this. You can check the output of the describe command:
$ kubectl describe pod <pod-name> or $ kubectl describe deploy <deploy-name>
It will show you why the pod is in that state.
Also you can check the output of "kubectl get events" immediately after deploying your pod.
@@justmeandopensource I am getting these two warnings when I describe the pod: Unable to mount volumes for pod "" and MountVolume.SetUp failed for volume "pv-efs" : mount failed: exit status 32
@@rahulmalgujar1110 Can you try mounting the EFS volume manually on the worker nodes?
Venkat, I want to do persistent volume encryption in Kubernetes. How can I do that? Can you please help me out with it?
Can I install nfs-server directly on the kmaster node and implement dynamic provisioning on Kubernetes?
Hi Vu,
For a development/test environment, and for learning purposes, you can very well install nfs-server on the master node. All you need is an NFS server with exported shares that the worker nodes can access. But in a production environment, this won't be practical. Either have a separate NFS server or go for a container-based storage solution like Portworx, OpenEBS or anything else.
Thanks,
Venkat
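For a quick dev setup, a rough sketch on an Ubuntu-based kmaster (package and unit names differ on CentOS, where it is nfs-utils / nfs-server; the export path is just an example):

$ sudo apt install -y nfs-kernel-server
$ sudo mkdir -p /srv/nfs/kubedata
$ sudo chown nobody:nogroup /srv/nfs/kubedata
$ echo '/srv/nfs/kubedata  *(rw,sync,no_root_squash,no_subtree_check)' | sudo tee -a /etc/exports
$ sudo exportfs -rav
$ sudo systemctl enable --now nfs-kernel-server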
How can I update a persistent volume or update the pvc configs
chown: changing ownership of '/var/lib/mysql/': Operation not permitted. I got this error when I tried it with a MySQL deployment.
Hi, can you make a video on GlusterFS on Kubernetes?
Hi Naga, thanks for watching. That's already on my list but I didn't get a chance to do it. I will see if I can do it. Cheers.
@@naganaga3731 I can play with it this weekend.
Tq so much
@@naganaga3731 You are welcome.
Can u make video on glusterfs
nice video, I wish I could subscribe twice.
BTW, when you run 'kubectl version --short', what's the difference between the client version and the server version? I suppose the client version is the version of kubectl on your local machine, and the server version is the version of kubectl on the cluster?
But can you help me explain the output below? I ran it on the master node of my k8s cluster. Why is 'Server Version' different from 'VERSION'?
/root [root@10.41.143.203] [20:59]
> kubectl version --short
Client Version: v1.11.10
Server Version: v1.11.3
/root [root@10.41.143.203] [20:59]
> kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
10.41.143.203   Ready    master   9d    v1.11.10
10.41.143.207   Ready    <none>   9d    v1.11.10
10.41.143.209   Ready    <none>   9d    v1.11.10
Hi Richard, many thanks for watching. The client version is the version of the kubectl binary you are using on your machine. The server version is the Kubernetes cluster version.
I have also noticed this difference sometimes: the server version you see in the kubectl version command and in the kubectl get nodes command can be slightly different. Most likely the VERSION column from kubectl get nodes shows the kubelet version on each node, while kubectl version reports the API server version, so the two can drift apart after partial upgrades.
Can we use this NFS persistent volume to run a database?
Hi Venkat,
I am getting the below error. Could you please fix the yaml file?
[root@master yamls]# kubectl version --short
Client Version: v1.16.0
Server Version: v1.16.1
[root@master yamls]# ls 4*
4-busybox-pv-hostpath.yaml 4-nfs-nginx.yaml 4-pvc-hostpath.yaml 4-pvc-nfs.yaml 4-pv-hostpath.yaml 4-pv-nfs.yaml
[root@master yamls]# kubectl create -f 4-nfs-nginx.yaml
error: unable to recognize "4-nfs-nginx.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
Hi Santosh,
Thanks for watching this video. In Kubernetes v1.16 the apiVersions for some resources have been deprecated and can no longer be used.
If you are using a DaemonSet, Deployment, StatefulSet or ReplicaSet, update your yaml file and change the apiVersion to apps/v1 instead of extensions/v1beta1.
I know a lot of people will have this issue. All the yamls I have in my Github repo have extensions/v1beta1 as the apiVersion. I don't want to change the files as that might break things for people using older versions of Kubernetes.
I have in fact recorded a video about these k8s v1.16 changes which will be released soon.
Thanks.
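For anyone hitting this, the change looks roughly like this; note that apps/v1 also makes spec.selector mandatory, and it must match the pod template labels (the name and image here are only illustrative):

apiVersion: apps/v1          # was: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 1
  selector:                  # required in apps/v1
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx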
Thank you for your videos! I am trying to follow what you do, but I keep getting [...] access denied by server while mounting 11.0.0.75:/mnt/nfs/var/nfsshare
Even though I have turned off the firewall and opened NFS to everyone.
Hi Yuven, thanks for watching this video. I initially had this issue.
When you are exporting the share from the NFS server, did you add the insecure option in /etc/exports file?
Thanks,
Venkat
@@justmeandopensource /var/nfsshare 11.0.0.0/8(rw,sync,no_root_squash,no_all_squash,insecure)
/home 11.0.0.0/8(rw,sync,no_root_squash,no_all_squash,insecure)
/var/nfsshare *(rw,sync,no_root_squash,no_all_squash,insecure)
This is my /etc/exports :)
I really did not expect an answer so fast :0
@@yuven437 So you have exported /var/nfsshare from the NFS server. From one of the worker nodes, try mounting it manually, maybe with the verbose option.
From one of your worker nodes,
$ showmount -e <nfs-server-ip>
$ mkdir /mnt/tmp
$ mount <nfs-server-ip>:/var/nfsshare /mnt/tmp
Also, I just noticed that you are trying to mount /mnt/nfs/var/nfsshare, but you have exported /var/nfsshare.
Thanks,
Venkat
@@justmeandopensource Thank you very much! I will look into this as soon as I can! You are the best :)
You have a patreon?
@@justmeandopensource It seems to me that the problem happens on the k8s side. I can mount and use the NFS storage from the node, but k8s still shows the same error :C
How can we set up an NFS file server on AWS so it can be mounted on all nodes?
Hi Guru, thanks for watching. You can use EFS (Elastic File System) in AWS as an NFS file server. More details in this link.
aws.amazon.com/premiumsupport/knowledge-center/eks-pods-efs/
good Video
Thanks for watching Bhaskar.
Thank you
Thanks for watching.
In /etc/exports, the deployment is working for me without the "insecure" option.
Hi Hitesh, thanks for watching. When I tried, it didn't work. Good to know that it worked for you.
@@justmeandopensource Thanks Venkat, your videos are really helpful
Most welcome.
What will happen if the pod excites the PVC claim storage? Does the pod stop working?
I didn't quite get you. What exactly do you mean by excites? Thanks
@@justmeandopensource If the pod's storage usage grows beyond the PVC request, what will happen? Since we requested only 500Mi, what will happen if the storage used increases beyond that?
@@manikandans8808 Good question which I never thought of. Theoretically, it shouldn't let you use more than what is assigned, but I have never tried that. To find the answer you can just try it. I am going to try it sometime this afternoon. Thanks
@@justmeandopensource Sure Venkat, I'll also try it out, and if you get the answer please comment. It will be very helpful.
@@manikandans8808 Sure will let you know. I am very interested in trying that. May be later tonight. Cheers.
Thank you for the amazing videos. I don't know if this is possible. I got stuck: I created an image that successfully populates my /nfs using the docker run command, but if I use a Kubernetes yml it does not populate /nfs any more. Is this possible? Actually, I'm losing my data inside the container..! Greetings from Birmingham, UK. Cheers
Hi Everton, thanks for watching. Using PV (persistent volumes) you can do that. If you could explain a bit more on your problem with some details like the yaml, it would be helpful. Cheers.
@@justmeandopensource Hi, thank you for replying to me.
So, I'm trying to populate my NFS with the contents that are inside my image. For example, if you run nginx without a volume, you can see an index.html at /usr/share/nginx/html. But if I mount the PV at that same /usr/share/nginx/html, the index.html vanishes; it is no longer in /usr/share/nginx/html. I think it is something about permissions. My NFS has 777 and I also tried securityContext in the yaml. See the yml below.
apiVersion: v1
kind: Pod
metadata:
  name: containers-privileged
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 0
  volumes:
    - name: shared-data
      nfs:
        server: 192.168.149.10
        path: /illumasnfs
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
      securityContext:
        privileged: true
Thanks
@@oraculotube Okay. So you have the index.html in your image in the /usr/share/nginx/html directory, and you are trying to mount the NFS volume into the container at /usr/share/nginx/html, right? In this case, you are overlaying the NFS volume on top of /usr/share/nginx/html in the container, so you won't be able to see the index.html that was originally in the container.
You will have to create index.html in the nfs volume.
@@justmeandopensource Thanks Venkat. I have this working using docker compose, but not in Kubernetes. Actually, my image has lots of files; I converted an application for the company that I'm working for, and now I'm trying to use the image in Kubernetes and populate my NFS, but I can't see any way to do it.
@@oraculotube How about mounting the NFS volume in a different place and copying the files?
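One way to sketch that, assuming the image keeps its content in /usr/share/nginx/html: mount the NFS volume at a different path in an init container, copy the baked-in files across, then mount it at the real path in the main container. The copy runs on every pod start, so treat this as a starting point rather than a polished solution.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-seeded
spec:
  volumes:
    - name: shared-data
      nfs:
        server: 192.168.149.10   # taken from the yaml above
        path: /illumasnfs
  initContainers:
    - name: seed-content
      image: nginx               # same image that carries the files
      command: ["sh", "-c", "cp -r /usr/share/nginx/html/. /seed/"]
      volumeMounts:
        - name: shared-data
          mountPath: /seed       # NFS mounted at a different path here
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html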
Hello Admin, can you do a tutorial about GlusterFS? Thank you. The tutorial is very useful.
Hi, thanks for watching. I have done a series on GlusterFS which you can watch in the below playlist.
th-cam.com/play/PL34sAs7_26wOwCvth-EjdgGsHpTwbulWq.html