How are you managing storage in kubernetes?
BTW if you're new here welcome! Be sure to subscribe for more content like this!
Hi Tim.
Thanks for the tutorial. I have created a custom cluster. When I try to install longhorn, the Longhorn-manager pods are in CrashLoopBackOff and the installation fails. Can you help?
Do you have an opinion on disk configuration?
For example, my server has five disks in a striped RAID, resulting in a 3TB ext4 filesystem.
Should I reconfigure as RAID 0 and let longhorn manage the redundancy?
Random Q: have you been able to add Longhorn to your Ansible script? Just watching the k3s + Ansible video at the moment.
I tried OpenEBS, NFS, StorageOS, LocalProvisioner and they were ALL a pain to deploy and finicky to use. But so far Longhorn has been simple.
The dedicated UI helps a ton whenever I allocate a PVC and the pod can't attach to the PV.
Your video also showed other features that I never explored, like backups and snapshots. Keep up the great content! You are now one of my favorite channels. Subbed with Notification Bell
Thank you so much! Looks like you have quite the experience in Rancher and storage. Keep it up! Thanks for the comment!
3 years later, still top, TechnoTim. Awesome, thanks for your content!
Dude, how can I give two thumbs up!? I was trying to solve these volume issues for a while, and then you came and did it for me, again.
Thank you so much!!!
Like & Subscribe works :) Thank you!
🎯 Key Takeaways for quick navigation:
00:00 Challenges *with Kubernetes storage.*
01:46 Longhorn: *Lightweight, reliable Kubernetes storage.*
04:49 Installing *Longhorn in Kubernetes.*
10:28 Using *tags instead of taints for storage nodes.*
11:23 Setting *up persistent volumes with Longhorn.*
16:58 Creating *backups and snapshots with Longhorn.*
Made with HARPA AI
Ahhhh yes! My favorite time of the week, when Techno Tim releases a new vid.
Woohoo! (Also Woohoo for Saturday!)
What a coincidence... I was just reading longhorn docs to use in our prod, and you just made it easier 😁 thanks
Nice!
@@TechnoTim I also wonder about the question Ariel said
Thanks! A storage node manager, redundancy, and backups were exactly what I was looking to find. It's a great plus that it has a nice UI, too.
I have longhorn installed on my 12 node pi4 cluster with a few of my nodes with extra SSD storage. Works great!
@@notquitecopacetic oh lordy that's a loaded question lol. Ok. So I'll try to keep it brief on how I have my cluster set up. The whole thing is built with Ansible and k3s. 3 master nodes, booting off SSD. Those 3 master nodes also run as Longhorn storage nodes. The remaining nodes are currently running off SD cards. Those nodes are worker nodes only. The SSD boot is FAST for sure, but I don't have any issues (yet) with the SD cards.
This is freakin amazing! Thank you SO much for this video!
Glad you liked it!
Thank you very much, Tim. You showed me a new approach to storage in k8s.
Thanks Tim. This was very timely. I was just starting to think about the issue.
Glad it was helpful!
Tim : "Thanks ahead of time for the likes"
Me : Instantly likes video.
Thanks after the time for the likes!
Man, happy I found your channel. Planning a homelab for my projects, and you made my life easier.
Glad I could help!
very high quality content together with the documentation site, amazing work!
Much appreciated!
fine, you convinced me to use rancher with k3s.
Same
Me too
Thanks for very useful content. Thanks a lot and Happy New Year!
Thank you! Happy new year!
dude you rock! just going straight to the point and showing all the good features
Thank you!
Just stumbled on this vid. Really nice explanation and pretty on time for me. Just about to start exploring Longhorn. Thank you!
Welcome aboard!
More Rancher how-tos please. Great job on this.
I'm always learning a lot from your videos. Thanks for sharing!
Chaps, don't symlink anything from /var/lib/longhorn as I did; all volumes will stay unbound. Learned that the hard way. And thanks for the video, Tim!
Good call! I think you need to set up fstab maybe? Not sure. Let me know!
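A hedged sketch of the fstab approach Tim hints at above: mount a dedicated disk at /var/lib/longhorn instead of symlinking it. The device name /dev/sdb1 is hypothetical, not from the thread:
sudo mkfs.ext4 /dev/sdb1                                            # format the dedicated disk
sudo mkdir -p /var/lib/longhorn                                     # Longhorn's default data path
echo '/dev/sdb1  /var/lib/longhorn  ext4  defaults  0 2' | sudo tee -a /etc/fstab
sudo mount -a                                                       # mount everything in fstab now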
Well... I'm certain it is pronounced Maria and not Maria
Finally! I now know how to pronounce Maria! It's as simple as saying "Maria"!
Thank you!!
I love k3s and looking forward to longhorn
Excellent video Tim! Demonstrating an automated disaster recovery would be an area to enhance this video further -- perhaps an idea for a follow-up video.
T-Tim is the man!
thanks so much, now I have a good option for my StorageClass
Great!
Thanks a lot for this video Tim! 🔥
Great job! And thanks for writing a good description. Appreciated.
Thank you!
Great video indeed. I deployed Longhorn before this video (but not by much) and I always used worker nodes for storage. Now I wanted to set up, as you did, nodes dedicated only to storage workloads. Regardless of how I use the taints, I cannot get it to work: if I set a taint, all nodes go red, regardless of what I set the taint to, and of the setting in the web console settings input. Any light on that?
You figure taints out? My experience is the same. I did the stuff Tim mentioned in his docs. Nodes say "Down" until I remove the taint mentioned in the docs. Longhorn docs say to remove all the disks and then edit the YAML. Haven't tried that.
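For reference, a minimal sketch of the taint-plus-toleration pattern the docs describe; the node and key names are examples, not from the thread. The "Down" state usually means longhorn-manager can't schedule on the tainted node until the matching toleration is set:
kubectl taint nodes storage-01 StorageOnly=true:NoSchedule    # keep normal workloads off the storage node
# Then, in the Longhorn UI under Setting > General > Kubernetes Taint Toleration, enter:
#   StorageOnly=true:NoSchedule
# so the Longhorn system pods tolerate the taint and can run on that node.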
What was the benefit of spinning up storage nodes versus attaching additional volumes to your existing agent nodes? That should keep the storage of Docker images and logs separate from Longhorn storage.
Good point. Dedicating these nodes to this role allows greater control and security over these nodes.
@@TechnoTim Have you looked at the fsGroup within the securityContext? I have not used one, but you should be able to create a custom group/GID on the host which owns the Longhorn data directory and then modify the Longhorn DaemonSet or Deployment to use that GID, allowing it access. Other pods without the correct securityContext should be denied access.
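A rough sketch of the fsGroup idea above; the GID 2000 and all names are hypothetical, and this is the generic Kubernetes pattern rather than anything Longhorn-specific:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  securityContext:
    fsGroup: 2000            # volumes mounted into this pod get group ownership 2000
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc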
It is so interesting. I never thought I'd be so interested in Docker, Kube, and Rancher until I started watching your videos. I've been 25 years in the IT business, but now it's time to learn new things and set up my home cluster services in a containerized Docker cluster (based on Proxmox), instead of installing a new Ubuntu server for every application. Thank you so much Tim 👍🏻. Btw, is there a video on setting up the cluster? In the setup video (Docker, Kube, Rancher) you didn't do it.
Thank you! Glad you like it! Here it is! th-cam.com/video/UoOcLXfa8EU/w-d-xo.html
Would you please make videos on MinIO whenever you figure out the cert piece and possibly a video on setting up a local Git instance?
I'd love to get MinIO working (and it does with TrueNAS) but I think the issue is with Longhorn. It doesn't like self signed certs.
@@TechnoTim what does Longhorn have to do with certificates?
Coincidentally, half an hour ago I was able to get MinIO deployed using the Longhorn storage class, and it works like a charm when port-forwarding the MinIO and console services to my local machine. However, the MinIO operator is quite disorganized in terms of documentation, so I wasn't able to expose my MinIO with an ingress resource.
I appreciate if you can help here. 🙏
back again,
I was able to securely expose my minio (which is using longhorn volumes) with nginx 😍😍😍 will try to share the manifests later
keep up the good work dude! Seriously, your channel targets so well what I'm working on, month after month! Is there a telepathy thing there? ;)
haha! Thank you! 🤯
I'm using Longhorn right now!
Thank you
Thx man. I've been quite curious about this.
Glad you enjoy it!
Love your videos mate, keep it up. You are amazing, clear, and understandable, and each topic is well explained. Can you please make a video just about Proxmox storage options, possibilities, and ways to configure them, mate?
Thank you! Possibly!
Great video! Short video but good explanation of everything important.
Thank you! I try not to make them too long!
Fantastic video, as always 👍 One thing I'm missing is a description of the lower layers. OK, we have these 4 worker nodes, which are probably running on top of VMs in Proxmox, but are they distributed across different physical servers? What are the underlying VM disk devices? Having 4 worker nodes and replicas=2, how do you prevent both the primary and replica data from landing on worker nodes running on the same physical server? What is the minimum number of servers to provide redundancy and avoid split-brain? Asking these questions because this seems important from a resilience point of view.
I've actually been using the nfs-subdir-external-provisioner storage class to automatically mount a subdirectory from an exported FreeNAS NFS share. It works, but longhorn seems a lot more robust!
Yeah, I too use nfs client provisioner but I don’t have HA NFS! This gives you HA block storage!
Maaria is the official Luigi-approved pronunciation of that database.
Excellent video Tim! How do you set up a StorageClass with failover capability using Longhorn?
Thanks, I like your way of explaining things; it's to the point.
Glad it was helpful!
Great video and I like that you went more in depth. I'm trying to figure this out for an enterprise solution. Side note: love seeing your face, but sometimes the face cam blocks areas of the screen where you are typing or clicking. I've noticed this on a few vids. Not a big deal, but it would be helpful to see everything you are seeing.
Thanks for the feedback! I am usually pretty good about hiding the camera but missed a few scenes on this one. Noted!
Great video. Unfortunately I did not solve my issue with navidrome and its SQLite database concurrency problem. Even with all the nodes being VMs inside the same machine
Would you be able to make a tutorial for containerizing a non-catalog app and running it on a persistent volume? It would be super cool to see something like linuxserver/unifi running on Kubernetes. I don't think the community has done something like that before.
Thank you! Any app on my self-hosted playlist will work in this way. Just choose Longhorn as your volume instead of bind mounting to a host path! th-cam.com/play/PL8cwSAAaP9W3bztxsC-1Huve-sXKUZmb5.html
@@TechnoTim to support him: can you make a video explaining how to use a piece of Longhorn with a generic app like Docker Hub nginx or Docker Hub ubuntu on Rancher 2.6? I tried, unsuccessfully.
please make a video about MinIO. Thanks
Hi Tim, I have a question regarding Longhorn, which is a good choice but still needs some tuning of CPU and RAM resources: without any specification, it consumes a lot of CPU and RAM while doing its job. So my question is whether you have an optimal configuration of CPU and RAM requests and limits. I have deployed it within k8s using Helm charts. Thank you and best regards!
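For reference, this is the generic Kubernetes requests/limits pattern the question is about; the numbers are hypothetical, and where these values plug into the Longhorn Helm chart depends on the chart version:
resources:
  requests:
    cpu: 100m        # CPU reserved for scheduling
    memory: 128Mi    # memory reserved for scheduling
  limits:
    cpu: "1"         # hard CPU ceiling
    memory: 512Mi    # container is OOM-killed above this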
Great intro video! Thanks!
For somebody as paranoid as me, is there a quick and easy way to verify the integrity of a backed up volume?
Like, mount it as a regular volume and check a file...?
Great video! Typo in the thumbnail though: «longhoNrn» ;)
Good eye! Thank you, fixed!
As always, great video! I've been trying to setup longhorn for a while now and was lowkey hoping you'd make a vid so I could see how you did it.
The setup is, as you said, incredibly simple. Which is awesome! The hardest part for me has been volumes failing to attach. They'll just get stuck in an attaching/detaching loop. I assume it's something to do with my networking config, and networking is the bane of my existence.
Be sure your nodes have all the dependencies installed. They are in my docs! Thank you!
@@TechnoTim So I never found out what the root cause was, but I did find out RancherOS is explicitly not supported by Longhorn. Which is the OS my nodes were running. Re-upped with a less niche OS and things are running great :) Your docs on taints and tolerations were a lifesaver! Would have taken me hours to figure out otherwise.
Thank you for all the work you share !
What kind of files will you find on the NFS server when doing a backup?
Also, what do you think about the k3os ISO? I tried to work with it but really didn't get anywhere with Proxmox Cloud-Init... maybe an idea for a next video :-)
Thanks! I've always opted out of distros dedicated to kubernetes/k3s/rancher. Although I do gain some hardening, I lose more control over the OS than I'd like. Also, I am familiar with care and feeding of Ubuntu and not so much with k3os/rancherOS/etc...
Thanks for sharing! Great Video and love your glasses! What make and model are they please?
Warby Parker!
At 6:53 you mention you can add a drive to any device on your network. If you have a NAS, I assume you link to its NFS path? Or how is this accomplished? Great video!
Yes! That's right. These storage nodes can mount an nfs path too!
Great stuff ! Keep it going
Please do a video on MinIO integration :)
Who the fuck thought it would be a good idea to call the things "taints" lol
Awesome video!!!
By the way, there's a typo in your thumbnail... ;-)
It might be cached! Try refreshing a few times!
Great video! But btw, Longhorn is most likely not your default storage class! That's because you deployed k3s with the local-path storage provisioner, and k3s always reapplies the deployments in /var/lib/rancher/k3s/server/manifests. So even if you do kubectl edit storageclass local-path and set it to not be the default, k3s will automatically reapply the storage class YAML and set it back to the default. So either you edit it in there (.../manifests) or you just delete the file in there and use kubectl edit.
Good call! I've noticed too that I can have more than one default 🤔 ???
@@TechnoTim nope 😜
@@TechnoTim Did you ever discover a solution to this?
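A sketch of the workaround described in this thread, assuming the default k3s paths (the manifest filename may differ by k3s version):
# Option A: edit the manifest that k3s keeps reapplying
sudo vi /var/lib/rancher/k3s/server/manifests/local-storage.yaml
#   and set its annotation: storageclass.kubernetes.io/is-default-class: "false"
# Then make Longhorn the default
kubectl patch storageclass longhorn -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'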
Great information on Longhorn. Can you point me to setup information on how we could use Kubernetes/Longhorn to create a development WordPress node that is disconnected from the production nodes, and how the changes, once implemented, can be deployed to the Kubernetes node setup? Thanks in advance.
What about setting up GitLab using Helm next? Love the channel btw ;D
Monty, the guy who wrote MySQL and did the MariaDB fork, says the name is "ma-ree-ah", not "ma-rai-a" :) Oh, and BTW, he also says it's "my s q l", not "my sequel" (Sequel is not the same as SQL). The two databases are named after his daughters, My (pronounced as the first part of myriad, but even monty says "mai s q l") and Maria.
I'm using the latest Rancher, version 2.6, and I don't see the WordPress app. Do I need to add a new repository?
This is great, man. Thank you. Can you make a comparison between it and ondat? and what is your opinion?
Thanks
Excellent video! loved it!
Thank you!
You forgot to mention that the iSCSI package installed on the k8s nodes is required to use Longhorn. Without it, Longhorn never comes up.
Thanks! This is in the docs but thanks for calling it out here!
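For Ubuntu/Debian nodes, the usual Longhorn prerequisites look like this (a sketch, not the exact steps from the docs; adjust package names for your distro):
sudo apt update
sudo apt install -y open-iscsi nfs-common    # iSCSI for Longhorn volumes, NFS for backup targets
sudo systemctl enable --now iscsid           # make sure the iSCSI daemon is running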
I'm using nfs-provisioner because I don't want to use space from my Proxmox cluster. One big problem I see with Longhorn is replication taking too much space if every volume is duplicated on each node; in your case that's 4 times the space allocated, and it can add up fast. I suggest a 10Gb network and NFS behind a RAID-Z of SSDs. In my case I created 2 storage classes: the default uses HDD, plus an nfs-ssd one.
Thank you! The only downside about the nfs-provisioner is that if my nfs goes down (reboot/upgrade/whatever) I lose the mounts for every pod in my cluster.
@@TechnoTim hmm have you looked at ceph?
Thank you for your video! It helped a lot!
I noticed, that Longhorn acts as a Block storage device -> it won't support
Sorry, what?
@@TechnoTim oops, sorry!
I meant to say:
to my knowledge, it's not possible to use a block storage device in a multi-pod read/write config.
For example: when scaling a Drupal / WordPress server, I would use a few webserver pods all accessing the same volume. This isn't possible with Longhorn. NFS acts at the file system level -> this would work.
I still have to find a solution similar to Longhorn but for multi-pod setups 😅
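For reference, the distinction above lives in a PVC's accessModes field; a hypothetical sketch (names and sizes are made up). Note that later Longhorn releases added optional RWX volumes backed by an internal NFS share-manager, so it's worth re-checking the docs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-content
spec:
  accessModes:
    - ReadWriteMany      # multiple pods read/write, classic NFS territory
    # - ReadWriteOnce    # single-node read/write, the typical block-storage mode
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi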
Great video! Thank you!
You are welcome!
If I have 5 servers, each with 1 TB disks, and I run Longhorn on each, how much usable space do I have access to?
It will use the remaining space of each drive; it depends on how you use it. Longhorn creates 3 replicas by default, so each volume consumes 3x its size across the cluster: roughly, usable space = (drives x space per drive) / replica count.
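A rough worked example with the numbers from the question above, assuming the default of 3 replicas:
# 5 nodes x 1 TB = 5 TB raw
# each volume is stored 3 times, so usable ~ 5 TB / 3 ~ 1.6 TB
# minus whatever reserve percentage Longhorn keeps per disk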
Thanks for the great video Tim ;)
How do you think Longhorn compares to OpenEBS Jiva?
I really love how easy it is to manage volumes and backups in longhorn, but in the past Longhorn has been a bit unreliable for me, with volumes being disconnected on extensive writes, whereas OpenEBS has been rock solid.
Have you encountered similar issues?
I haven't tried openEBS, how is it?
You don't have to format your disks to use Longhorn.
Great video, couple of questions:
* You show that you have 2 replicas per volume in the "table view" but once you go into the volume details one can see 3 replicas, is that normal?
* If we use 3 storage nodes, can we achieve HA by only having 2 replicas per volume, or does longhorn calculate quorum on replicas and not on nodes?
* Pods: I see you drained a node and a new PV was created in Longhorn. Why so? Shouldn't it be possible to reuse the same PV on a different node? How do we know that PV1 and PV2 in your example are copies of each other? Is there any hint from Longhorn? And what happens if node 1 goes completely down, will the same principle apply?
How are you getting your host disks mounted to your storage nodes? Mounting a host path from the hypervisor, or creating a VM disk? Also side question, if you didn't have any workloads that required VMs, would you roll with Kubernetes on baremetal? You could try out another rancher product called Harvester for VM management (its technically HCI though)
I can only speak for myself, but I would definitely be running bare metal + KubeVirt. (Thanks for mentioning Harvester, I hadn't heard about that before!)
Especially when considering that all of Wikipedia is running on bare metal Kubernetes clusters. Niantic, with Pokémon Go, is doing something similar: they're running LXC containers as worker nodes, because they would otherwise run into the 100-pods-per-node limit.
So for that reason, why not? If they're doing it, it can't be that bad.
Harvester looks interesting for sure! My node disks are mounted via virtual disk. Since I dedicated 4 nodes to storage, I am just using the storage on those nodes. My PVCs are pretty small, I just need them available!
You can passthrough the SSD/HDD to the VM, that's what I ended up doing after going crazy with ceph. My mental illness is in recovery right now, thanks to Tim
Would love to see this revisited in the context of Harvester. Attempting to set it up now, and Harvester says the default storage class is harvester-longhorn. My Rancher install is a VM on Harvester, with Harvester passed back through so Rancher can deploy to it. Rancher doesn't show Longhorn as installed (by default), but since it's running on Harvester, shouldn't it be Longhorn? IDK
Thanks Tim. Is your next step Harvester?
*3-node RKE cluster with Longhorn and Harvester installed* Who needs Proxmox?
Hi. Excellent work. Can you make a video on how to do backup and restore with Longhorn? I tried a few different ways and never succeeded. The Longhorn documentation is not very detailed or clear. With snapshots, I always get the data back successfully.
Is there a way to bring an iSCSI NAS into Longhorn? I have a Dell EqualLogic PS4000 (very old, I know) and I am having a hard time finding documentation on making that storage available for all my services. Thanks for all the great content!
There may be, but I know for sure you could use NFS
1:48 Longhorn is Windows Vista Alpha :p
Haha! You guessed it!
Great video. Thanks. Sorry if this is a dumb question, but how do you browse and edit files in a Longhorn volume from outside the pod? For example, for Home Assistant, I would like to be able to edit the HA config files from my PC and restart the pod for new automations to take effect. My Longhorn volume is mounted on a path in my home directory and the "replicas" directory is owned by root. If I browse into the directory as root, I find an "img" file which can't be browsed. Any ideas?
You can, just google kubectl exec and you can remote into the pod and make changes. Thank you!
Or use kubectl cp and copy a file to your pod. That might be better so you can use an editor.
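Hedged examples of the two pointers above; the pod and namespace names are hypothetical:
kubectl exec -it homeassistant-0 -n home -- /bin/sh                                 # open a shell inside the pod and edit in place
kubectl cp ./configuration.yaml home/homeassistant-0:/config/configuration.yaml    # copy an edited file into the pod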
@@TechnoTim Thanks for the pointers. I found a better option for Home Assistant by adding a VSCode (code-server) container as a sidecar to the HA workload with the port 8443 published as a L4 Load Balancer and the HA config path as the /config/workspace mount point. I'm also experimenting with adding an OpenSSH sidecar on a L4 Load Balancing port to workloads that require external access.
Having revisited and used the taints in the docs, the storage nodes now show as "Down" in the Longhorn dashboard, but the storage capacity seems right. Weird.
Yep felt that minio cert pain
I learned a lot by watching the video. ❤ 🌹
Glad it was helpful!
Hi Tim,
In your setup, is it also possible to scale out the pod (WordPress)?
TIA
I am not sure! Most of my frontends are client side apps so I can scale to 10000 if I want!
Is there a way to deploy Longhorn without Rancher? I can't get Rancher to import my cluster.
Probably because my cluster masters are running on ARM processors.
Again, great video. With the videos you posted, I was finally able to install Kubernetes (k3s), Rancher, and Longhorn. There are a couple of things I want to mention though. First, about Longhorn: I created 3 more nodes for storage purposes and attached 150 GB to each node, but in Longhorn I only see 128 GB available. I thought it would be 450 GB. What is the purpose of spinning up more nodes? The second thing I want to mention is that when the load balancer was set up in the k3s video, it was a Layer 4 LB. When launching WordPress, it gave me an error because the Rancher configuration page asked for a Layer 7 load balancer. I don't have that, so I disabled that option. What will happen if the node where WordPress is running becomes unavailable, since I connect to WordPress through the IP address of the worker node it is running on, with a port number? I thought the idea was to connect through the LB, and the LB brings you to the container you want to connect to, independently of the worker node it is running on. Sorry for the long comment.
Somehow I can't get the UI pod to start up: it logs an error from nginx containing 'host not found in upstream "longhorn-backend"'. Any idea?
I think it was somehow OS related; I was using Debian 11.
I switched to k3os (yes, a k3s OS) and now everything works smoothly.
It works very well. Of course it needs SSD disks, otherwise performance can drop a lot. It's not the fastest, and it doesn't yet support disk encryption. There are other solutions like Rook-Ceph or Trident by NetApp, but Longhorn, from my point of view, is the most reliable.
Good call!
Hello Tim, your videos are so great! Can Longhorn be used in docker swarm?
Thank you! No, it's for kubernetes!
Here is a challenging use case I'm working on solving: I have around 20TB of Longhorn storage in my cluster, spread across 5 worker nodes, with S3 backups enabled. I would like to somehow expose the Longhorn storage through Samba shares, NFS, or iSCSI to my VMware stack or desktops, for a more reliable DR storage option than I have now. Any ideas on how to accomplish this? I was thinking of a container using a Longhorn PV, running NFS of some type and exposing it to my main network.
How does Rancher Longhorn manage how much space is available across all of the nodes? I need more space. I added new hard drives that were twice the size of the previous ones.
OK, so ESXi was running 3 VMs. I had to go and expand the LVM space so that Rancher Longhorn could fully utilize the disk for /dev/sda3...
sudo fdisk -l                                            # confirm the disk and partition layout first
sudo growpart /dev/sda 3                                 # grows partition 3; note the space between device and partition number
sudo lvextend -l +100%FREE -r /dev/ubuntu-vg/ubuntu-lv   # extend the LV into the freed space; -r also resizes the filesystem
So I have been running Longhorn for some time now and backing up to S3. Somehow one of my PVs got corrupted and I accidentally deleted the PV. I can't figure out how to restore from backup, because when I click on backups it shows nothing.
Hmmm not sure. I've always been able to restore a backup from the gui and reconnect it to the container. Sometimes the service call fails and you have to click it multiple times. you can see the failures in the Chrome dev tools. It's kind of annoying because it fails silently.
Hi Tim! Is there any way to apply this "Longhorn way" to databases? I don't want all my pods to pull a DB container just because they need a database. How cool would it be to do the same thing with databases...
Would be awesome but not sure I would replicate a DB this way. The DB storage, yes, but not HA DBs. I would leave them up to a real HA DB service or k8s operator.
@@TechnoTim yes, you're right. The thing is, I have a lot of legacy, monolith sites on my home dev server. When I start them, each one starts its own single, non-HA DB.
Hi Tim,
I am using Longhorn for volume provisioning. When I deploy a StatefulSet with 5 PVCs, they take a long time to attach, and sometimes they become detached. Can you suggest a method to find the cause of why this happens?
You can check out the logs on each node or the logs in longhorn. Sometimes these are hard to track down.
In my test lab, I've set up an NFS share directly on the Proxmox physical host. Can I use this share as a storage repo for Longhorn? Maybe creating 2 folders, 1 for storage and 1 for backup... remember, it's just a test env, to learn k8s :)
Yes you can ;) I even went as far as to use the NFS share of my Proxmox host with NFS-client-provisioner ;)
I have some questions regarding Longhorn. 1. I am using Proxmox and I back up all my VMs with Proxmox Backup. Can you tell me the difference between backing up all my VMs with Proxmox and backing up my volumes with Longhorn? 2. Sometimes I need access to the data. I haven't figured out yet how to access the data stored in a Longhorn volume. Is there a way to achieve that?
Backing up your PVCs is more efficient than backing up all your VMS, especially for kubernetes. They are just cattle. RE how to access data: You can exec into the pod to see / edit the data if you need, or use something like filebrowser and connect it to that PVC so you can have a GUI to look at it.
Not sure if you are aware of this but there is a typo in the video thumbnail @Techno Tim
Thanks so much, fixed but unfortunately it's cached, might not be next time you check!
Thanks for all your hard work. Learned a lot by watching your videos. I need a little help with accessing and copying data/files to/from the PVs/replicas created by Longhorn.
use kubectl cp. If you need them there when the container starts, mount the file system to another generic container (like ubuntu or busybox) and then kubectl cp the files there.
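A hedged sketch of the "generic container" trick above; all names are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]     # keep the pod alive so you can copy files in and out
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-app-pvc        # the Longhorn-backed PVC you want to inspect
Then kubectl cp ./seed-files pvc-inspector:/data/ copies files onto the volume, and you delete the pod when done.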
What do you think of Dell's Powerflex vs Longhorn?
I don't know enough about Powerflex but I do know that Longhorn gives you shared kubernetes storage really easy!
Tried every which way to get it going and I always end up with
"CreateContainerError: failed to generate container "8d3d73cd684473b793b5aaddd432676f56220d044181674d056df4431be009a0" spec: failed to generate spec: path "/var/lib/longhorn/" is mounted on "/" but it is not a shared mount"
Am I missing something?
did you install the dependencies? Check out my docs, they are listed there!
@@TechnoTim I saw the NFS and iSCSI install steps in your documentation. I added those and retried. Still nothing. I am able to install it on my cluster rather than on 'local' like you demonstrated in your video. Also, if this helps, I am running Rancher v2.5.11.
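For anyone hitting the "not a shared mount" error above: it's usually a mount-propagation issue on the node. A hedged one-liner that commonly resolves it (verify against the Longhorn docs for your OS):
sudo mount --make-rshared /    # make the root mount shared so Longhorn can propagate its bind mounts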
Nice video, but how do you configure an AWS S3 bucket? Does someone have a video or something? I don't know how to configure an S3 bucket with keys for Longhorn.. :(
I did it with MinIO!
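A hedged sketch of Longhorn's S3 backup target setup; the bucket, region, and secret names are hypothetical:
# 1. Create a secret with the S3 credentials in the longhorn-system namespace
kubectl create secret generic aws-secret -n longhorn-system \
  --from-literal=AWS_ACCESS_KEY_ID=<your-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-key>
# (for MinIO, also add: --from-literal=AWS_ENDPOINTS=https://minio.example.com)
# 2. In the Longhorn UI under Setting > General, set:
#    Backup Target: s3://my-longhorn-backups@us-east-1/
#    Backup Target Credential Secret: aws-secret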