You are correct. The approach outlined in the video is great, and I would recommend it to pretty much anyone starting with Docker. As you continue your journey you will have to adapt to the requirements of what you're hosting. You are correct in saying that some applications don't play well with storage over the network. Plex is a big one. I tried hosting my Plex library (the metadata, not the media) on NFS, and the performance was atrocious. The application was unusable, and I had to switch to storing the data locally. I suspect it has to do with SQLite performing tons of IOPS which NFS couldn't handle. This was with a dedicated point-to-point 10GbE connection as well. I was using bind mounts instead of Docker volumes, but I don't think that created the issue (could be wrong). I have other applications that have experienced this as well. I've resorted to having all of my data local on the machine, and then just creating backups using autorestic.
You're not serving hundreds of connections at a time, so you really don't need as much performance as you think. I run a 1Gb connection to my homelab with a 4-disk RAID 10 array; I can't saturate the full bandwidth of the connection, but I have no issues with performance watching 1080p (since I don't have a single display that 4K makes sense on).
Btw, one point I need to add here: why share all of your files as root? You could just make a new group and user(s) specifically for accessing your files, and map your NFS shares to them. It is belt and suspenders, since you only expose NFS to a specific IP, but not using root whenever possible is the way forward. That's probably why it was removed as a default in the new release.
Great video! I have a question: if I have mounted the NFS share in /etc/fstab, do I still need to create the NFS volume? Couldn't I just point to the mounted NFS path on the host? What's the advantage of creating a new NFS volume in Portainer? Is it just to make it easier to migrate from one NFS server to another? Thx!
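For comparison, here is a minimal sketch of the host-mount alternative described in the question above. The server IP, export path, and image are made-up placeholders; the fstab line is written to a local example file rather than the real /etc/fstab.

```shell
# Sketch of the /etc/fstab approach (IP and paths are hypothetical).
# The host mounts the share once, and containers bind-mount into it.
cat > fstab-example.txt <<'EOF'
192.168.1.10:/mnt/tank/docker  /mnt/nfs  nfs4  defaults,_netdev,nofail  0  0
EOF

# A container can then simply bind-mount a subfolder of the host mount
# (run on the Docker host, requires the mount to exist):
#   docker run -d -v /mnt/nfs/appdata:/data nginx:alpine
cat fstab-example.txt
```

As far as I can tell, both approaches end up as a kernel NFS mount; the Portainer-managed NFS volume mainly keeps the NFS details inside the volume definition instead of the host config, which makes the stack more portable between hosts.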
I have read some people complaining of database corruption when using NFS as their cluster storage. I haven't tried it personally, and I am currently using CIFS mounts for my Docker swarm. I was wondering if you have tried GlusterFS, as it seems to be recommended for cluster volumes in general.
Love your videos. Question: how would I use Portainer to add a new volume to an existing container? I found how to add the volume, but after that I don't know if anything needs to be copied over.
Good question. The video stated that backing up the local volume directory was not ideal for databases, yet it was never explained whether doing snapshots on the NFS server overcomes those potential issues.
Great question! It depends on the storage server's file system and how you do the backup. If the Backup Server would just "copy" the files away, then the container should be stopped. If you're using ZFS with a snapshot, it shouldn't be a problem. I haven't had any scenario where this would result in an inconsistency issue with the db. However, if you do a rollback, you should of course stop the container, restore the snapshot and then start the container again.
In this case I assume NAS server must always be started before docker server and shutdown in reverse order. Otherwise I assume containers will just fail to start. How do you guys handle this?
Good idea to separate the storage server, but in practice not that useful for homelabs. The network speeds you have at home, 1Gbit or 2.5Gbit max, create a major bottleneck which you avoid with direct local access.
On my NAS, CIFS is enabled for the Windows computers connected to it. NFS is disabled. Is there any reason not to use CIFS instead of NFS for storing Docker volumes on my NAS?
NFS stands for "Not For Servers" ;-) Sysadmin for about 30 years now, and I always cringed when I got a request for NFS on any app. For home use I'd be OK with it though.
@@samuelbrohaugh9539 I'm asking you because I'm using NFS, not being sarcastic; I'm trying to learn the most secure method, if you know one. Thanks in advance! :)
@@dragonsage6909 Using a cluster filesystem is typically much better, but I understand clusters are typically not cheap either. I guess it depends on what you're talking about app-wise. Is it a high-availability app where downtime costs money, or something simple that can handle user downtime? NFS will fail at some point: the mount becomes stale, network issues lock up apps, etc. It's simple to implement and easy to use, but easy to fail. For home use it's rarely an issue; for real production use, make sure expectations are met and understood. Just experience talking.
Hi. I created an NFS volume on the server and can connect from the Synology, but in Portainer the NFS volume appears empty. The server volume is owned by nobody:nogroup.
Hi Christian, thanks for the informative video. I have two questions though: 1. What is the correct way of setting up a user with the same ID and group ID on the NFS server and the client? I have an RPi with user 1000:1000. Such a user doesn't exist on my Synology. Should I add a new user to the Synology? Or should I pick one of the Synology user IDs and create such a user on the Raspi? If so, how do you create a user with specific IDs? 2. What about file locking through NFS? I had issues with network-stored (Samba/CIFS) data containing a SQLite database, for example Home Assistant or Baikal. I couldn't network-store MariaDB or MySQL either, due to some "file locking issues".
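On the first question, a sketch of how ID matching could work (usernames and IDs below are hypothetical). NFS (without Kerberos) only compares numeric UID/GID, never usernames or passwords, so the trick is making the numbers line up on both sides; since Synology usually doesn't let you pick the UID, it's easier to look up the NAS user's IDs and create a matching user on the Pi:

```shell
# The NFS server only compares numeric IDs, not names or passwords.
# Check a user's numeric UID like this:
id -u root          # root is always UID 0

# Hypothetical workflow (commands commented out, need root):
#   ssh admin@nas id myuser           # e.g. uid=1026 gid=100
#   sudo groupadd -g 100 nasusers     # only if GID 100 doesn't exist yet
#   sudo useradd -u 1026 -g 100 -M -s /usr/sbin/nologin nasuser
```

On the second question: NFSv4 has better built-in locking than CIFS, but SQLite and other databases over network shares remain a common pain point either way, as several other comments here confirm.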
This is nice, but is there a way to use a local directory on the host instead? I have Docker installed on my Ubuntu 22.04 and it would be nice to use local directories.
@@HelloHelloXD It seems that the container just keeps running, though it may not be able to do anything. I just tested this with Sonarr, as I had the /config folder in the NFS volume, and it seemed to work as long as it didn't need anything from that folder. When I clicked on each series it just showed me a loading screen until I reconnected it. I suppose the answer is: it depends entirely on what folders you put in that volume and how gracefully the application handles losing access to those files.
First of all, I don't have much knowledge about infrastructure... T,T Can the NFS share of a TrueNAS VM be delivered to a container volume without a 1Gb network bottleneck? Both TrueNAS and the container host (Ubuntu VM) run on Proxmox.
@christianlempa Thanks Christian! Could I install Portainer on a Debian VM within TrueNAS Scale, and then communicate with that? Or are you using an entirely separate machine for your Portainer server?
Did you experience problems with containers using NFS mounts after a reboot? Until now I used NFS only by mounting it on the host and bind-mounting Docker volumes to it. Since I switched to the "direct mount" of NFS on the Docker host, specified in the stack code, all these containers fail after rebooting my CoreOS server. After restarting them they start fine. It seems like the NFS service is not available at boot time, so the containers try to start but the volumes can't be mounted yet.
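For the host-mount variant of this problem, there are fstab options that keep boot from racing the network. A sketch (server address and paths are made up; the line is written to a local example file, not the real /etc/fstab): `nofail` keeps a failed mount from blocking boot, `_netdev` waits for the network, and `x-systemd.automount` defers the mount until first access.

```shell
# Boot-tolerant NFS fstab entry (hypothetical server and paths).
cat > fstab-nfs-example <<'EOF'
192.168.1.10:/mnt/tank/docker  /mnt/nfs  nfs4  _netdev,nofail,x-systemd.automount,noatime  0  0
EOF
cat fstab-nfs-example
```

For volumes mounted by Docker itself, a `restart: unless-stopped` policy on the affected services should make Docker keep retrying the containers until the NFS server is reachable, which matches the "restart them and they start fine" behavior described above.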
Another AWESOME video!! But I saw in the video that you have a portainer_data volume on the NFS share; how was that done? I have been trying to get this to work, but I'm getting a Docker error while trying to mount the volume.
Even with the "wheel" user added to TrueNAS, NFS refused to work for Deluge/Sonarr/Radarr (CentOS using docker-compose). I ended up making an SMB share (yes, Microsoft, blasphemy!) and it works perfectly. So much less of a headache than NFS, plus it's actually secure (authenticated with a password and ACLs (Access Control)). So, yeah. Unexpected, but I would just recommend making a fricking SMB share.
I'm running my Docker inside an LXC on Proxmox, which has a mount point to the host, which has NFS mounts to the storage server. I'm using bind mounts inside of Portainer; is that wrong?
@@christianlempa is it really? Must be a windows 11 thing. I've never seen a split window like that other than when I've used screen on Linux. Very cool though. I like it.
One concern with using NFS in a home lab is that NFS needs the user to ensure the local network is safe, otherwise security is compromised, since the only auth is the IP address (of course you can use Kerberos, but it's hard to configure). Besides, a malicious Docker container could connect to the NFS share by using the host IP.
Can someone help me where I'm going wrong? I've created the volume, but when trying to save the volume in the container, I always get a "request failed with status code 500" error when clicking deploy.
this is my export: "/share/CACHEDEV1_DATA/Dockerdata" *(sec=sys,rw,async,wdelay,insecure,no_subtree_check,no_root_squash,fsid=9e50b469aef8f8a22013f16b7d3f69f9) "/share/NFSv=4" *(no_subtree_check,no_root_squash,insecure,fsid=0) "/share/NFSv=4/Dockerdata"
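The pasted export above looks cut off at the end (the last path has no options). For comparison, here is a minimal well-formed exports entry; the path and client subnet are hypothetical. Restricting access to a subnet instead of `*` and using `sync` are the safer defaults, and `no_root_squash` is what lets container root act as root on the share.

```shell
# Minimal, well-formed /etc/exports entry (hypothetical path and subnet),
# written to a local example file rather than the real config.
cat > exports-example <<'EOF'
/share/Dockerdata 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
EOF
# After editing the real /etc/exports on the NFS server, reload with:
#   exportfs -ra
cat exports-example
```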
Have you seen any issues with DBs, specifically SQLite? I tried to move my containers to an NFS share... some work just fine, but anything using SQL seems to just break.
I, personally, haven't. I heard it doesn't work great for databases; that's why I used NFSv4, as it was improved to work better with that. But if you still have problems, you might just switch to something else for your databases, I'd say.
@@christianlempa Yeah, I'm also using v4 and DBs just didn't work. Currently looking for a solution, as I don't like having all of my containers use the local storage of the VM.
Hey, thanks for the video. Unfortunately I get a "500 request failed" error when trying to deploy the container. I have no issues adding the NFS share on other machines, but with the container it doesn't work, unfortunately.
Some Docker containers don't like shares, those that use SQLite as a database for example. I had big performance issues with Lidarr caused by database lock issues, because SQLite does not work well on a share. I understood this has something to do with file locking. I had to fall back to local volumes because of this.
Couldn't we just have mounted an NFS share on the local Docker volumes directory? I suppose that because such a native Docker NFS mechanism exists, the answer would be no, but I'm curious why.
That's what I do in my (older) setup. For some reason I couldn't get ACLs to work if mounted through Docker (also tried docker-volume-netshare). I mount my NFS shares to a separate location on the host and symlink the volumes' _data directories to that, though.
I'm about to build an Unraid server for hosting my NAS and many Docker containers. I can't use NFS in Unraid on my NAS though, right? I'm watching the video now...
You don't need a "NAS operating system"; any operating system can act as a NAS as long as it supports some form of network file sharing (SSH, NFS, SMB, iSCSI, etc.).
I've been trying to do this for hours now and always run into permission issues. My user on the Docker host and on the NAS is exactly the same (same username, password, UID, GID), and I get permission denied when I just try to cd into the NAS folder from the Ubuntu test container. Anyone have an idea?
If I ever go crazy and have to set up a dirty Docker system, I'll try to remember this. It seems really helpful, and IMO any possible improvement is a godsend with Docker (I really hate it, ngl; Kubernetes too, not a big fan).
I'm far from an expert on Linux and there is something I'm missing about permissions: when you say that we need the same user with the same permissions on the NFS server and in the Docker image, how does that work? I thought that just having the same user ID or the same username isn't enough, no? I mean, they could have different passwords? Also, what about the performance implications? I'm thinking of moving my Plex server into a Docker container, with its storage on an NFS volume; could this be an issue?
There was a permission problem when I started the container. The user and group exist on both server and client, but when executing the chown command in the Dockerfile it shows a "no permissions" error; maybe I have to use the root user instead. Is there any other way to work around this without using the root user?
For databases, I feel you'd be better off just taking backups and keeping a read replica or two. You'll almost certainly get better performance plus you'll be able to recover faster with the replica. If your app isn't a database, it should probably not be saving important data directly to disk unless you're doing some ad hoc operation (like running tests) where a local volume is fine. The NAS is probably more convenient for transferring files, I'll give it that.
Hi Christian, great videos. I would love to see how to use this NFS (or maybe iSCSI) setup with a Kubernetes cluster. This is what I am trying to set up right now ;-)
Hmm, based on the comments it seems iSCSI might be the way to go, which is block storage vs. NFS, which is file storage. I don't know, however. I do know that when I've had two Linux systems sharing via NFS, the NFS connection has crapped out in the past, causing problems. I'm not sure this is a better option than keeping bind-mounted volumes and just having a backup solution that runs periodically to back up the volumes to a remote source. Lastly, I'm wondering if you run an LDAP server, since this would synchronize users across the VMs and the NAS. I'm curious whether you would still get NFS errors in that scenario.
Brilliant! I was just looking at something like this today, but instead I was trying mounting my synology as a cifs volume, nfs is so much easier and this video helped me set it up under 5mins. Thank you!
Thanks! :) You're welcome
is it work for you?
i just do as instructions. but cannot great subfolders as a volume's.
i got error 500. i try add map root to admin. but got same problem. error500. but if i make volume in the root of nfs folder. it work. but im not comfortabe with that.
may be any help. i got dsm7
@@therus000 Hey I have the same problem, did you manage to resolve this?
I followed this for my Windows NAS to share to Docker, and this is so much easier than doing host-level mounts. Thank you so much!
You’re welcome :)
@@christianlempa Hello again , I was wondering if by chance you knew how to specify this in a docker compose file im trying to make a template and this seems to be only my stopping point atm
Btw, one point I need to add here: doesn’t mean you don’t need to have backups. Having redundant storage over NFS is nice. But do ensure you still have restorable backups in addition to this.
There can be many things that can go wrong here, your FS might get corrupt in case there was a network problem while writing a bit or the raid fails etc etc.
Absolutely! Great point, and I might explain this in future videos :)
Good point, "RAID is not backup" is something that really needs to be hammered home!
So it's better to run containers locally anyways...
Like, with 2 separate machines you're essentially doubling the risk of something going bad or risk a database corruption because the NFS share decided to betray you for some reason. Why make things simple and reliable when you can make it complicated and error prone.
Some food for thought...
Most NAS systems have 2 network interface. You could attach the NAS directly to the server on a separate VLAN and optimize that network for I/O, enabling jumbo frames etc. That basically makes it a SAN without the redundancy.
Using iSCSI instead of NFS is also an option and might be preferred for database workloads I assume.
Exactly. We used NFS as storage for databases and it didn't handle it very well. Sometimes you get locks etc.
Thanks for your experience guys, I have just a little experience with it. FYI, I was playing around with Jumbo Frames on a direct connection between my PC and TrueNAS, worked pretty well. VLANs is a topic for a future video as well :D So stay tuned!
I also think that iSCSI might be the better choice compared to NFS or SMB, at least in a homelab environment.
@@christianlempa I've been through that endeavor just recently. When I searched for tagged VLANs on Linux, the documentation was hopelessly outdated, referring to deprecated tools.
My advice: disable anything that messes with net devices even remotely (i.e. NetworkManager) and go straight for systemd-networkd. I've built a VLAN-aware bridge which functions just like a managed switch. Virtual NICs from KVM attach to it automatically, as do Docker containers. This is also a pretty good way to use tagged VLANs inside VMs, which is otherwise hard to do on KVM.
If you'd like some help at the beginning, let me know. We will get you started. :)
last time I did this I had best results using ATAoE, it's not used very much nowadays but is very fast for local links (it's non-routable but ideal for single-rack use)
I wish I found this video last week. It would have saved me hours trying to mount a NFS share on my Ubuntu server. I ran into the user permission issue also and it took a lot of searching to find the answer.
Glad it was helpful bro 😎
Great walkthrough and howto, thanks for that. Nevertheless, a small criticism at this point: NFS volumes and snapshot backups on the target NAS IMHO do not replace an application-based backup. Of course (for example in the event of a power failure, but also due to other technical and organizational problems) the volumes on the NFS share can be destroyed and become inconsistent in the same way as those on the local machine. This is even more likely because the writing process over NFS relies on more technical components. That's why I also do, and highly recommend, application-based backups with at least the same frequency. If the application's backup algorithm is written sensibly, it will only complete the backup after a consistency check, and then it is clear that at least this backup is not corrupted and can be restored without data loss.
This was awesome. Perfect timing as exactly the next step I wanted to make with my volumes on Portainer.
Haha cool :D Glad it was helpful
You can also run Docker containers on TrueNAS itself and connect to the NFS share locally. If I'm right, the writes can then be handled synchronously, which guarantees data integrity.
Hey! Thank you very much for this.
Can you also give an example of how to use an NFS mount in docker-compose / Stacks in Portainer? I've spent the last couple of hours trying and googling, but wasn't able to find a real answer or consistent examples of how to do it.
I'm having the same issue lol, did you by chance find a way to do this?
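Since this question comes up repeatedly in the thread, here is a minimal sketch of declaring an NFS volume in a compose file via Docker's local driver. The server address, export path, and image are placeholders, so adapt them before use; the same YAML can be pasted into a Portainer Stack, and the volume gets created on deploy.

```shell
# Write a hypothetical compose file that declares an NFS-backed volume.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/usr/share/nginx/html
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,rw"
      device: ":/mnt/tank/docker/appdata"
EOF
# docker compose up -d   # run on the Docker host
cat docker-compose.yml
```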
I have been stuck for weeks with an NFS volume not mounting right inside my containers. First I couldn't edit the created files. Then I could, but couldn't create new ones. This fixed both my issues, thanks! Going to set up my shares like this moving forward.
How do you replicate this in docker-compose? I get these steps, and it's great for me to understand them manually, but I'm not convinced my compose file is working correctly for NFS. I'd love to see this explanation converted into a docker-compose example.
Exactly what I was looking for! Great video! Would love to see it done all via CLI also
Awesome! Thank you 😊
Glusterfs works really well for small files or small file transfers as well.
Edit: I've since been told databases don't scale well on glusterfs. IDK how much experience that person has with glusterfs but they have enough experience with k8s for me to accept it until I can test it. Works great for the really small stuff I do in my home lab though.
Oh yeah that's an interesting topic
True, but you have Galera Cluster for MySQL/Mariadb or just replication in PostgreSQL.
But I get that the real problem is the 'other' dozens of databases.
They can still be on a centralised storage, but as you say they don't scale well ...
This is great to know. I am currently looking into how to back up my more sensitive Docker containers, like Vaultwarden or Nextcloud. Great video!
I mainly use bind mounts for my persistent Docker storage (I had MAJOR issues with Docker databases over CIFS or NFS) and an awesome Docker image for volume backups: offen/docker-volume-backup.
It stops the desired containers before a backup, creates a tarball, sends it to an S3 bucket on my TrueNAS server, spins the stopped containers back up, and lastly a cloud sync task on my TrueNAS encrypts the data before pushing the backup to the cloud.
Glad it was helpful!
What were the issues with the DBs?
Wow dude, when you typed root in that box you solved all of my problems.
Crazy how you're the best source of information I have found on the entire internet.
Many of the big datacenters also use Fibre Channel based storage; network-attached storage can be slow and subject to TCP congestion and packet loss, whereas FC is guaranteed delivery.
I read somewhere that NFS is not secure when used with containers, as exposing the server file system to a Docker container also opens access for container processes to the main OS. What are your thoughts on this? Anyone?
When migrating, cp -ar may be better as it copies permissions too. Nice video! Need to figure out how to do this in docker-compose.
Oh great, thank you ;)
Would be appreciated if you figured out and share how to use docker-compose on Portainer. That would be really handy!
I am having issues installing a stack in a volume. The volume is already added in Portainer, I can see it and have tested it, but in my YML I can't figure out how to point Nextcloud at the volume I want.
Isn't it better to map the NFS volume to /mnt/NFS on the host running Docker, so you have one connection open instead of hundreds, with every container picking its own connection? Or is that not possible when you go Docker Swarm?
I have this exact question. This is the way unraid handles this. I think I will duplicate UnRaids approach.
What if you need to perform an update on TrueNas that needs to reboot the NFS service or the TrueNas system ?
Will the Docker container wait for the NFS service to come back without having trouble with data consistency ?
I have that setup myself, but when I want to restart my NAS it's a real pain to stop everything that depends on it...
I am triggered into high-gear learning mode by all of this. The aim is to set up a Home Assistant server. HA runs in Docker and stores its data in local volumes. I am no fan of having my data all over the place, so this video solves that problem. So the next step is to get my hands dirty and hope I don't get too many errors that exceed my domain of knowledge. Thanks!!
You're welcome! :)
LOL I spent a week on figuring this exact thing out - wanted to use Photoprism to use pictures from Backup server rather than Import files into Docker. Just got it working last night.😂
Haha great :D
@christianlempa Is the activation of NFS4 really that simple?? I've tried exactly what you did and the mount always fails, returning "permission denied". I tried to dig into the subject, and it looks like NFS4 requires a lot of effort to get working.
What happens when you need to reboot or shutoff the storage server. Can the docker containers stay running or do they need to be stopped first?
It's the same thing. A storage server can also crash, and you have a more complicated setup with NAS storage. Whichever method you select, a proper backup is best.
Thank you. A great tutorial for NFS.
Nice video, looking forward to true nas content!
Thanks! 😀
Christian, how would you do a disaster recovery of the entire system? Could you simulate an example?
7:48 After many hours I finally figured out that I needed NFS4 enabled on TrueNAS to get this to work on my setup. I kept getting Error 500 from Portainer when attempting this with the default NFS / NFS3. 😅
I’m glad the video helped you :)
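If anyone else hits this: before flipping switches in the UI, you can check from the Docker host which NFS versions the server actually offers (IP and path are examples):

```sh
# list the protocol versions the server advertises
rpcinfo -p 192.168.1.10 | grep nfs

# or do a test mount with an explicit version
sudo mount -t nfs -o nfsvers=4 192.168.1.10:/mnt/pool/dockerdata /mnt/test
```

If the test mount works with `nfsvers=4` but Portainer still throws Error 500, the problem is likely in the volume options rather than the server.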
thanks Christian, so interesting!
Thank you! :)
And how do I use NFS to mount the initial portainer data volume, before configuring portainer?
Thanks, that helped me 👍
Glad to hear! :)
Don't you need higher-tier networking to make sure you're not suffering any performance penalties? I can imagine that the additional latencies can make certain applications run slower when all file system access has to go through 2 network stacks and the network itself.
I have been running some basic self-hosted containers on my servers in a similar configuration as laid out in this video (Ubuntu server with Portainer and Docker, connected to TrueNAS for the storage). My TrueNAS is set up with mirrored pairs instead of raidz/raidz2, but it's still over a 1GbE LAN.
It’s been fine. Yes I’m sure 10GbE would improve it, but it’s plenty usable for most containers.
I originally set it up as a test environment before buying 10Gbe hardware and then it worked so well that I decided not to bother with 10Gbe (yet)
It’s not great with a VM that has a desktop environment- but it’s been fine with server VMs. Not fantastic, but fine.
Thanks for your experience! I'm running it with a 10Gbe connection, but I highly doubt this would make a huge difference in this case. As for VM-Disks this might be totally different, of course, but for Docker Volumes 1Gbe should be fine.
You are correct. The approach outlined in the video is great! And I would recommend this approach to pretty much anyone starting with Docker. As you continue your journey you will have to adapt to the requirements of what you're hosting. You are correct in saying that some applications don't play well with storage over the network. Plex is a big one. I tried hosting my Plex library (the metadata, not the media) on NFS, and the performance was atrocious. The application was unusable, and I had to switch to storing the data locally. I suspect it has to do with SQLite performing tons of IOPS which NFS couldn't handle. This was with a dedicated point-to-point 10GbE connection as well. I was using bind mounts instead of docker volumes, but I don't think that created the issue (could be wrong). I have other applications that have experienced this as well. I've resorted to having all of my data be local on the machine, and then just create backups using autorestic.
You're not serving hundreds of connections at a time, so you really don't need as much performance as you think. I run a 1Gb connection to my homelab with a 4-disk RAID 10 array; I can't saturate the connection, but I have no issues with performance watching 1080p (since I don't have a single display that 4K makes sense on).
@@ElliotWeishaar I ran into the same issues with Plex. I did end up storing the media remotely but could never get the library data to work reliably.
I usually do a cron job to copy volumes and database exports to an AWS S3, and then another cron job to delete files older than 1 month!
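A sketch of what such a setup could look like (bucket name and paths are made up; assumes the AWS CLI is already configured):

```sh
#!/bin/sh
# nightly-backup.sh - called from cron, e.g.
#   0 3 * * * root /usr/local/bin/nightly-backup.sh

# copy volume data and database dumps to S3
aws s3 sync /var/lib/docker/volumes/ s3://my-backup-bucket/volumes/
aws s3 sync /srv/db-exports/ s3://my-backup-bucket/db-exports/

# second job: delete local exports older than 30 days
find /srv/db-exports -type f -mtime +30 -delete
```

For the databases themselves, dump to /srv/db-exports first (e.g. with `mysqldump` or `pg_dump`) rather than syncing live database files.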
Btw, one point I need to add here: why share all of your files as root? You could just make a new group and user(s) specifically for accessing your files and map your NFS shares to them. It is belt and suspenders, since you only expose NFS to a specific IP, but not using root whenever possible is the way forward. Probably why it got removed as a default in the new release.
Yep that’s something I need to get fixed in the future
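For anyone wanting to try this, a sketch of a non-root export on a plain Linux NFS server (the UID/GID 3000 and the paths are examples; TrueNAS exposes the same idea as "Mapall User/Group" in the share settings):

```sh
# /etc/exports - map every client user to a dedicated 'dockerdata' account
/mnt/pool/docker  192.168.1.20(rw,sync,no_subtree_check,all_squash,anonuid=3000,anongid=3000)
```

After editing, run `exportfs -ra` to apply. With `all_squash`, files written by containers land as the mapped user, never as root.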
The folder/volume I created inside the container is owned by user 568, so I can't access the /nfs folder in my container. Why did it use that user ID instead of root?
Great video! I have a question: if I have mounted the volume to the NFS in /etc/fstab, do I still need to create the NFS volume? Couldn't I just point to the mounted NFS on the host? What's the advantage of creating a new NFS volume in Portainer? Is it just to make it easier to migrate from NFS to NFS? Thx!
It's just for easier management. If you already have NFS mounted on the host, that is totally fine
I have read some people complaining about database corruption when using NFS as their cluster storage. I haven't tried it personally, and I am currently using CIFS mounts for my Docker Swarm. I was wondering if you have tried GlusterFS, as it seems to be recommended for cluster volumes in general.
I hear that a couple of times, but never found any resources or details why this should be the case. Could you kindly share some insights? Thanks
Love your videos. Question: how would I use Portainer to add a new volume to an existing container? I found how to add the volume, but after that I don't know if anything needs to be copied over.
I have a docker user and group on both the Docker and NAS machines. I use the same UID and GID in the container via env variables.
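For reference, a sketch of that pattern. The PUID/PGID environment variables are a convention of some images (e.g. the linuxserver.io ones), not a Docker feature, so check your image's docs:

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=1000   # must match the UID that owns the NFS export
      - PGID=1000
```

For images without such variables, `user: "1000:1000"` in compose (or `docker run --user`) achieves a similar effect, as long as the image doesn't need to drop privileges itself.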
GlusterFS would be an even better option for managing data inside Docker Swarm.
I'm so interested in these filesystems; once I finish my projects I'll start looking at them.
Any plans to do a tutorial for Kubernetes Persistent Volume to TrueNAS NFS ?
Awesome!!
Is this a good solution for a docker swarm volume sharing with the different nodes?
Thx! I'm not sure about that, I think NFS is still the easiest for my setup.
Great explanation. Thanks!
Thank you 😊
Hello guys... I had a power failure and all my docker volumes are gone. Is this expected behavior? Are they still there on disk? Thanks
Do you stop your containers when you backup your storage server?
Good question. The video stated that backing up the local volume directory was not ideal for databases, yet it was never explained whether doing snapshots on the NFS server overcomes those potential issues.
@@jp_baril Exactly why I asked.
Great question! It depends on the storage server's file system and how you do the backup. If the Backup Server would just "copy" the files away, then the container should be stopped. If you're using ZFS with a snapshot, it shouldn't be a problem. I haven't had any scenario where this would result in an inconsistency issue with the db. However, if you do a rollback, you should of course stop the container, restore the snapshot and then start the container again.
@@christianlempa Now I think I should have my TrueNas as my main nas, instead of my Synology.
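For anyone curious, the ZFS side of that could look like this (the dataset name is an example):

```sh
# point-in-time snapshot of the dataset holding the docker volumes
zfs snapshot tank/docker@nightly-2024-01-01

# list snapshots, and roll back if needed (stop the containers first)
zfs list -t snapshot tank/docker
zfs rollback tank/docker@nightly-2024-01-01
```

Snapshots are atomic at the filesystem level, which is why they're much safer for live databases than file-by-file copies.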
In this case I assume the NAS server must always be started before the Docker server and shut down in reverse order. Otherwise I assume the containers will just fail to start. How do you guys handle this?
I also would like to know how to handle this.
Good idea to separate the storage server, but in practice not that useful for homelabs. The network speeds you have at home (1Gbit or 2.5Gbit max) create a major bottleneck that you avoid with direct local access.
What about storing Portainer's own volumes on an NFS share too?
Oh my gosh!!! You are a genius!!! Thank you very much, master
Thanks 😆
how do you draw this stuff like @2:30 ?
On my NAS, CIFS is enabled for the Windows computers connected to it. NFS is disabled.
Is there any reason not to use CIFS instead of NFS for storing Docker volumes on my NAS?
NFS stands for "Not For Servers" ;-) Sys admin for about 30 years now, always cringed when i got a request for NFS on any app. For home use i'd be ok with it tho
What would you suggest for a production system instead!?
Great question, please tell us what you recommend instead
@@samuelbrohaugh9539 I'm asking you because I'm using NFS.. not being sarcastic, trying to learn the most secure best method.. if you know one.. thanks in advance! :)
@@dragonsage6909 Using a cluster filesystem is typically much better, but I understand doing clusters is typically not cheap either. Guess it depends on what you're talking about app-wise. Is it a high-availability app where downtime costs money, or something simple that can handle user downtime? NFS will fail at some point: the mount becomes stale, network issues lock up apps, etc. It's simple to implement and easy to use, but easy to break. For home use it's rarely an issue; for real production use, make sure expectations are met and understood. Just experience talking.
@@black87c4 awesome answer, thank you. I'm looking at some other options now, think I've got it.. will update asap
Is it possible with samba/cifs as well?
Hi. Created an NFS volume on the server. Can connect with the Synology, but in Portainer the NFS volume appears empty. The server volume is owned by nobody:nogroup.
Hi Christian, thanks for the informative video. I have two questions though:
1. What is the correct way of setting up a user with the same ID and group ID on the NFS server and client? I have an RPi with user 1000:1000. Such a user doesn't exist on my Synology. Should I add a new user to the Synology? Or should I pick one of the Synology user IDs and create such a user on the RPi? If so, how do you create a user with specific IDs?
2. What about file locking through NFS? I had issues with network-stored (Samba/CIFS) data containing a SQLite database, for example Home Assistant and Baikal. I couldn't network-store MariaDB or MySQL either, due to some "file locking issues".
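On question 1, either direction works as long as the numeric IDs end up matching on both sides. A sketch for a plain Linux client (names and IDs are examples; on Synology DSM you'd assign the user through the GUI):

```sh
# create group and user with fixed numeric ids (values are examples)
sudo groupadd -g 1000 nfsuser
sudo useradd -u 1000 -g 1000 -M -s /usr/sbin/nologin nfsuser

# verify the ids match what the server side uses
id nfsuser
```

NFS compares only the numeric UID/GID, not user names or passwords, which is why matching the numbers is what matters.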
Will this work with a backup solution as PCloud?
This is nice, but is there a way to use a local directory on the host instead? I have Docker installed on my Ubuntu 22.04, and it would be nice to use local directories.
Great topic. One question. What is going to happen to the docker container if the connection between the NFS server and docker server is lost?
The container will fail to start
@@christianlempa what if the container was already running and the connection was lost?
@@HelloHelloXD It seems the container just keeps running, though it may not be able to do anything. I tested this with Sonarr, with the /config folder on the NFS volume, and it worked as long as it didn't need anything from that folder. When I clicked on a series it just showed a loading screen until I reconnected it. I suppose the answer is: it depends entirely on what folders you put in that volume and how gracefully the application handles losing access to those files.
@@vladduh3164 thank you.
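Worth adding that the NFS mount options influence this behavior. With a "soft" mount, I/O calls eventually return an error instead of blocking; the default "hard" mount blocks until the server comes back, which is safer for data but can hang the app. A sketch with example values:

```sh
# timeo is in tenths of a second; retrans is the retry count before giving up
sudo mount -t nfs4 -o soft,timeo=100,retrans=3 192.168.1.10:/mnt/pool/appdata /mnt/appdata
```

The same option string can go into the `o=` driver option of a Docker NFS volume.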
First of all, I don't have much knowledge about Infrastructure... T,T
Can the NFS of a TrueNAS VM be delivered to a container volume without a 1Gb network bottleneck?
Both TrueNAS and the container (Ubuntu VM) run on Proxmox.
@christianlempa Thanks Christian! Could I install Portainer on a Debian VM within TrueNAS SCALE and then communicate with that? Or are you using a separate machine entirely for your Portainer server?
I think that should also work
Did you experience problems with containers using NFS mounts after a reboot?
Until now I used nfs only via mounting it to the host and bind mounting docker volumes to the host
Since I now switched to the "direct mount" of nfs to docker host, specified in the stack code, after rebooting my CoreOS server, all these containers fail
After restarting them they start fine
Seems like the NFS service isn't available at boot time, so the containers try to start but can't mount the volumes yet
I mostly reboot both of my servers, so the NAS server and the Proxmox Server, then it works fine.
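If the share is mounted on the host via /etc/fstab, the boot ordering can be handled there. A sketch with example values (`_netdev` waits for the network, `x-systemd.automount` mounts lazily on first access):

```sh
# /etc/fstab on the docker host
192.168.1.10:/mnt/pool/docker  /mnt/nfs  nfs4  _netdev,x-systemd.automount,noauto  0  0
```

For Docker-managed NFS volumes there's no equivalent knob, but a `restart: unless-stopped` policy at least makes Docker keep retrying the containers until the mount succeeds.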
Another AWESOME video!! But I saw in the video that you have a portainer_data volume on the NFS share; how was that done? I have been trying to get this to work but keep getting a Docker error while trying to mount the volume.
Thanks! You need to do it outside of the GUI with Docker CLI commands, unfortunately.
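A sketch of those CLI commands (IP, export path, and ports are examples): create the NFS-backed volume first, then start Portainer pointing at it.

```sh
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,nfsvers=4,rw \
  --opt device=:/mnt/pool/portainer \
  portainer_data

docker run -d -p 9443:9443 --name portainer \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```

This works because the volume exists before Portainer does, which sidesteps the chicken-and-egg problem of configuring it through Portainer's own GUI.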
Could use iSCSI targets perhaps; that worked well with VMware ESX years ago when I didn't have a SAN.
Would you please do a video about using Ceph Docker RBD volume plugin?
Hmmm I need to look that up, sounds interesting
How about volumes where performance matters? Like tmp or cache folders or source files in local development?
You can still use local volumes in that case
When I try to deploy I keep getting a "Request failed with status code 500." error message.
I have 3 RPis in a Docker Swarm. One of them is my NFS server, doing exactly this. But I worry about my Docker drive dying, so any ideas on making backups?
Hm, I would try to back up the Raspberry Pis' file systems with rsync or similar backup software for Linux.
@@christianlempa thanks
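A minimal sketch of such an rsync job, with made-up host and paths (`-aAX` preserves permissions, ACLs, and extended attributes):

```sh
# pull the Pi's volume data onto a backup machine
rsync -aAX --delete pi@192.168.1.30:/var/lib/docker/volumes/ /backups/pi-volumes/
```

For databases, stop the containers first or rsync a snapshot/dump instead of the live files, to avoid copying them mid-write.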
The NFS share mounts fine to my test Ubuntu container, but I can't access it. Permissions issue.
Is the same thing possible using CIFS/SMB mounts originating in Windows, or does it have to be NFS?
You can use CIFS as well!
Even with the "wheel" user added to TrueNAS, NFS refused to work for Deluge/Sonarr/Radarr (CentOS using docker-compose). I ended up making an SMB share (yes, Microsoft, blasphemy!) and it works perfectly. So much less of a headache than NFS, PLUS it's actually secure: authenticated with a password and ACLs (Access Control Lists). SO, yeah. Unexpected, but I would just recommend making a fricking SMB share.
Could you use cifs/samba to do the same thing?
CIFS absolutely
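For completeness, a sketch of a CIFS-backed Docker volume (host, share, and credentials are placeholders; in practice a credentials file is safer than putting the password on the command line):

```sh
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt o=addr=192.168.1.10,username=dockeruser,password=secret,vers=3.0 \
  --opt device=//192.168.1.10/dockerdata \
  cifs-data
```

Note the `device` syntax differs from NFS: CIFS uses `//host/share` instead of `:/path`.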
Is it possible to use ZFS pools?
Awesome video, thank you for explaining this. I am doing the exact same with all my pods in k3s ;)
Oh, that is cool! I'm planning that as well in my k3s cluster I'm currently building ;)
@@christianlempa Feel free to ping me if you have any questions ;)
I'm running my Docker inside an LXC on Proxmox, which has a mount point (MP) to the host, which in turn has NFS mounts to the storage server. I'm using bind mounts inside of Portainer; is that wrong?
I'm not entirely sure because I haven't used LXC.
What's that split console you are using? Is it screen?
It's Windows Terminal
@@christianlempa is it really? Must be a windows 11 thing. I've never seen a split window like that other than when I've used screen on Linux.
Very cool though. I like it.
One concern about using NFS in a home lab is that NFS needs the user to ensure the local network is safe, otherwise security is compromised, since the only auth is the IP address (of course you can use Kerberos, but it's hard to configure). Besides, a malicious docker container could connect to the NFS share by using the host IP.
Fair point!
Can someone help me where I'm going wrong? I've created the volume, but when trying to save the volume in the container, I always get a "request failed with status code 500" error when clicking deploy.
Most likely there is a network connection error or permission error.
Getting the same, and I've gone over everything I can find. I can only imagine this is something that has been broken in Truenas Core 13
How do I do this with WSL2 and a Synology NAS?
I can't seem to create a volume from QNAP to Docker. Can you help? These are my exports:
this is my export: "/share/CACHEDEV1_DATA/Dockerdata" *(sec=sys,rw,async,wdelay,insecure,no_subtree_check,no_root_squash,fsid=9e50b469aef8f8a22013f16b7d3f69f9)
"/share/NFSv=4" *(no_subtree_check,no_root_squash,insecure,fsid=0)
"/share/NFSv=4/Dockerdata"
Have you seen any issues with DBs, specifically SQLite? I tried to move my containers to an NFS share… some work just fine, but anything using SQLite seems to just break.
I, personally, haven't. I heard it doesn't work great for databases; that's why I used NFSv4, as it was improved to work better with that. If you still have problems, you might just switch to something else for your databases, I'd say.
@@christianlempa yeah, I'm also using v4 and DB's just didn't work. Currently looking for a solution as I don't like having all of my containers using the local storage of the VM.
@@procheeseburger_2 Hey, did you find a solution? I am also finding that SQLITE wont play nice with network shares
@@chris.taylor I just use local storage.
Hey, thanks for the video, unfortunately i have an error "500 request failed" when trying to deploy the container.
I have no issues adding the NFS on other machines, but on container it doesn't work unfortunately.
Thats likely a problem with the NFS connection. Check IP, path, user settings and permissions
@@christianlempa Same issue ... already checked and entered all IP's possible. Also set "mapall" to root in TrueNAS ... no success. :(
Some containers don't like shares, for example those that use SQLite as a database. I had big performance issues with Lidarr, caused by database lock issues, because SQLite does not work well on a share. I understood this has something to do with file locking. I had to fall back to local volumes because of this.
I have to reboot the Docker host if the NFS server hangs. Maybe I need a more stable FreeNAS server.
Yeah that is true
Could we just have mounted an NFS share on the local Docker volumes directory?
I suppose, since such a Docker-native NFS mechanism exists, the answer would be no, but I'm curious why.
I guess that should also work, but in that case the Linux host would be responsible for the NFS connection management, not Docker.
That's what I do in my (older) setup. For some reason I couldn't get ACLs to work if mounted through docker (also tried docker-volume-netshare)
I mount my NFS shares to a separate location on the host and symlink the volumes' _data directories to that, though.
I'm about to do an Unraid server for hosting my NAS and so many docker containers, I can't use NFS in Unraid on my nas though right? I'm watching the video now ...
I'm not sure, I haven't used Unraid, but I'm pretty sure it does NFS.
You don't need a "NAS operating system"; any operating system can act as a NAS as long as it supports some form of network file sharing (SSH, NFS, SMB, iSCSI, etc.)
That's what I said in the video ;)
I'm trying to do this for hours now and always run into permission issues. My User on the docker host and the NAS are exactly the same (same username, pw, UID GID) and I get permission denied when I just try to cd into the NAS folder from the ubuntu test container. Anyone an idea?
Next level: Longhorn :)
😁
If I ever go crazy and have to set up a dirty Docker system, I'll try to remember this. It seems really helpful, and imo any possible improvement is a god's gift with Docker (I really hate it, ngl; not a big fan of Kubernetes either).
Hi!
What tool are you using for drawing and marking directly on the screen with the mouse?
Hi, I'm using EpicPen and my Galaxy Tab as a drawing screen.
I'm far from an expert on Linux, and there is something I'm missing about permissions: when you say we need the same user with the same permissions on the NFS server and in the Docker image, how does that work? I thought that just having the same user ID or the same user name isn't enough, no? I mean, they could have different passwords?
Also, what about the performance implications? I'm thinking of moving my Plex server into a Docker container, with its storage on an NFS volume; could this be an issue?
Once I installed nfs-common, things just worked!! nfs-common was the missing piece.
There was a permission problem when I started the container. The user and group exist on both server and client, but when executing the chown command in the Dockerfile, it shows a permission error. Maybe I have to use the root user instead. Is there any other way to work around it without using the root user?
You should try the Mapall function.
For databases, I feel you'd be better off just taking backups and keeping a read replica or two. You'll almost certainly get better performance plus you'll be able to recover faster with the replica.
If your app isn't a database, it should probably not be saving important data directly to disk unless you're doing some ad hoc operation (like running tests) where a local volume is fine.
The NAS is probably more convenient for transferring files, I'll give it that.
Hi Christian, great videos. I would love to see how to use this NFS (or maybe iSCSI) setup with a Kubernetes cluster. This is what I am trying to set up right now ;-)
Thanks mate! Great suggestion, maybe I'll do that in the future.
Thank you, it really helps
Glad to hear that!
How about HACS without OS?
Hmm, based on the comments it seems iSCSI might be the way to go, since it's block storage vs NFS, which is file storage. I don't know, however. I do know that when I've had two Linux systems sharing via NFS, the NFS connection has crapped out in the past, causing problems. I'm not sure this is a better option than keeping bind-mounted volumes and just having a backup solution that runs periodically to back up the volumes to a remote source. Lastly, I'm wondering if you run an LDAP server, since this would synchronize users on the VMs and the NAS. I'm curious whether you would still get NFS errors in that scenario.
Currently I don't have LDAP, but I'm planning to set up an AD at home.