Great overview of an initially confusing system. As a developer, it's hugely beneficial to have storage abstracted away like this
Thanks! Great feedback :)
I used to laugh at the pricing of separate storage by the cloud provider, thinking I could just get away with regular hard drive storage on the server. lol
Never before has a kubernetes tutorial made so much sense! 👌
Glad you think so! ❤️
Glad to see your videos every Tuesday ☺️
....hear
here
Your language is on point! It's great to see your language increase in quality with each video you make :)
CIVO looks very interesting, I'm gonna take a look into it
Thank you so much! Especially because you know exactly where I started :D
Excellent overview, great delivery, vielen Dank!
Vielen Dank! :)
Just love your content. And the presentation.
Not just here, all videos are remarkable. (My thoughts came about it at this point :))
They are understandable, simple explanations.
So keep doing this!
Thank you so much! Of course, I will :)
nfs-common needs to be installed on all nodes: sudo apt install -y nfs-common
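For context, nfs-common matters because the kubelet on each node mounts NFS volumes with the node's own NFS client. Here is a minimal sketch of an NFS-backed PersistentVolume, assuming a hypothetical export at 192.168.1.100:/export/k8s:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteMany              # NFS can be mounted by pods on multiple nodes
    persistentVolumeReclaimPolicy: Retain
    nfs:
      server: 192.168.1.100        # hypothetical NFS server address
      path: /export/k8s            # hypothetical exported directory

If the NFS client package is missing on a node, the mount fails and pods using the volume get stuck in ContainerCreating.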
Would you consider giving a talk like this for Cloud Native Data Management Days? The education around storage and data is daunting for a lot of people, and I think you could tell a great story here.
Thank you! Well I haven't considered giving any talks yet. I usually prefer making videos and livestreaming on YT.
@@christianlempa You should consider that Christian! You are ready!
Anyone else here wondering why the f*ck Kubernetes wouldn't allow an app's replicas to mount a Longhorn volume?
World-Class content man, appreciated!
Thank you 😁
Hi Christian,
Have you ever considered making a video on StatefulSets and when to use them instead of persistent volumes?
Cheers
Very informative.
Could you please create a similar video for iSCSI storage?
Just started watching your tutorials and subscribed... The content is fantastic and well put together. Just one question: what is the autocomplete tool you are using in the WSL2 terminal?
Thank you so much! I'm using ZSH + the zsh-autosuggestions plugin; here is a video about my entire terminal setup: th-cam.com/video/oF6gLyhQDdw/w-d-xo.html&
14:53 I think this only works when the remote NFS volume is mounted locally on the controller/worker nodes via /etc/fstab. What setup did you use, did you configure the NFS CSI driver, and how did you set up the pods?
If CIVO supports only RWO, you mentioned that only pods within a single node can access it. Could you explain more?
Great video. I'm a newbie and a little confused as to how the NFS storage class works. In some of the literature I read, they refer to an NFS provisioner, however you seem to have simply used the IP address. Setting up the provisioner was a challenge in itself, but you seem to have gotten around it! I am using TrueNAS. Does your approach use the SMB service, or does the NFS service have to be active? Apologies if my questions don't make sense due to the sketchy knowledge I have of how all of this hangs together! Your assistance is much appreciated.
You can deploy Rancher, then install Longhorn from its graphical UI. It will give you an HA storage system running as containers in your cluster, using the hosts' storage 😊
To be honest, I didn’t like it much. For me NFS works better.
@@christianlempa the only issue I had is I accidentally deleted its daemon, and it is not repairable 🙃
Sir, are NFS and NAS the same?
@@pjj7466 From my limited understanding, they are not the same. NFS is a network protocol, think SMB or DLNA. NAS is more hardware, and can be accessed via different network protocols, or directly over Thunderbolt or USB if supported.
Great, you are a lifesaver
This was really helpful!
Thanks! Glad it helped
Fantastic videos.. I'm subscribing!
Welcome aboard!
Is there an option to put the Kubernetes cluster's persistent volumes on Ceph?
3 Proxmox+Ceph physical nodes -> VMs -> kubernetes cluster connect to the host node's Ceph storage
(I found some info about Ceph on a Kubernetes cluster but not the other way)
I haven't looked at that yet. Currently, I'm working on a new tutorial on Longhorn, probably coming out sometime next year.
@@christianlempa Great news, thank you
Containers are ephemeral, but they're NOT immutable - you can change them.
Thanks, yep didn't notice that mistake :D
Can you create a video about setting up an NFS server in Ubuntu?
Hm, I will include NFS in my Kubernetes video about WordPress soon, watch out for that
Have you ever checked OpenEBS? What do you think about it?
Not tried it yet
@@christianlempa I’ll be waiting for your videos about it! Thanks!
@@christianlempa Do you use an NFS share for your production environment? What about backup and restore?
Thank you very much!!
You're welcome!
I have been using VirtualBox on Windows 10 with Minikube and Docker Desktop for local development. I am using hostPath (and have also tried persistent volumes) at the moment, and fetching data from APIs is taking a minimum of 5 seconds; one request takes 5 or more seconds. I have tried to figure out a solution for it but can't. Somewhere I found that it is an issue with DNS resolution. Do you know anything about it? The issue with the slowness?
Thanks
Thank you 🙏
Hi Sir,
I have a question about stateful applications.
Let's say I have a PostgreSQL sharded cluster in my Kubernetes cluster, with 3 replicas in a StatefulSet and a storage class.
Case 1: If the replica count increases from 3 to 4, one PV is attached to the 4th pod dynamically, some data is stored in the 4th member's PV, and everything is ok.
Case 2: When scaling down from 4 to 3, my 4th pod goes down but its PV remains, and that data becomes inaccessible. When the replica count goes up again, that PV becomes accessible again.
a. While that PV is inaccessible, can any data inconsistency happen?
b. If inconsistency happens, how can that data be redistributed from the 4th PV to the other PVs?
c. Or what actually happens to that orphan PV when scaling down a stateful application?
It's not that much different from running the DBs on separate VMs. But I doubt I have enough experience to tell you how you should build an HA DB cluster :/
Hello, thanks for all your videos.
Do you know Headscale? It's a Tailscale coordination server. I want to test it but I don't understand how I can install it. Can you make a video about it? Thanks
Hey thank you so much! I heard about it, but as I'm pretty happy with tailscale I haven't tried it out, yet.
@@christianlempa I have Tailscale too, but I'm interested in this project 😁
Great video
Thanks!
I have a question: what happens when two PVCs try to claim a PV at the same time?
Like, the PV is 10GB and both PVCs are 3GB each, and they both try to claim the PV at the same point in time?
That does not work, unfortunately. At first I thought it would work this way, but a PVC always claims the entire PV, so even when the PVC wants 3GB and there is only one PV with 10GB, the PVC claims the entire 10GB and is bound 1:1 to the PV.
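A small sketch to illustrate that 1:1 binding, using made-up names: a 10GB PV and a 3GB claim. Once the claim below is bound, the entire PV is reserved for it, and a second 3GB claim would stay Pending until another matching PV exists.

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: big-pv
  spec:
    capacity:
      storage: 10Gi                # total size of the volume
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: /mnt/data              # hypothetical local path, just for illustration
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: small-claim
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 3Gi               # asks for 3Gi, but binds (and blocks) the whole 10Gi PV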
Why not use an NFS storage class?
Hi, can we deploy a pod that will be an NFS server?
Yes, that works! Maybe I'll make a video about it in the far future.
Does each app get its own PVC, and/or does each app need its own PV? So if I have app1 and app2, can they share the same PV, and do they each use the same PVC?
That depends on the kind of app. Pods in Deployments share the same PVC; Pods in StatefulSets each have their own PVC.
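A rough sketch of that difference, with hypothetical names: the Deployment's replicas all mount one existing PVC, while the StatefulSet's volumeClaimTemplates create a separate PVC per pod (data-web-0, data-web-1, ...).

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: app1
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: app1
    template:
      metadata:
        labels:
          app: app1
      spec:
        containers:
          - name: app1
            image: nginx
            volumeMounts:
              - name: data
                mountPath: /data
        volumes:
          - name: data
            persistentVolumeClaim:
              claimName: shared-data      # every replica mounts this one PVC
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    serviceName: web
    replicas: 3
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: nginx
            volumeMounts:
              - name: data
                mountPath: /data
    volumeClaimTemplates:                 # one PVC per pod: data-web-0, data-web-1, data-web-2
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi

Note that the Deployment's shared PVC needs a ReadWriteMany-capable backend (like NFS) if the replicas can land on different nodes; with ReadWriteOnce, only pods on the same node can mount it.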
@@christianlempa Could you perhaps provide a drawing? Let's say I have a WordPress app and a Pi-hole app, both having persistent storage. In the Docker Compose world, each folder might be the root-level storage directory, so when the Docker volume mounts, it's using the folder's root level, i.e. /home/docker/pihole/myconfigs or /home/docker/Wordpress/myDbConfigs
bs, too much water
Nice!!!
Thanks ;)
How do I copy the existing default files like index.html in nginx to /usr/share/nginx/html instead of creating a new default.html?