Please let me clarify what "IT-Mode" is: "IT" in this case stands for "initiator target". In this mode evey disc is presented individually to the host. HBAs usually come in "IR" Mode, which is what ever Raid-Modes your HBA supports.
@@JorgeGarciaM IT mode basically just passes the disks along as opposed to IR mode where the controller software 'controls' the disks. You could technically use a HBA in raid mode and set it up as single drives so it's a bit more than just 'non raid' and for most HBA's also involves flashing the card to IT mode.
it looks like modern HBAs don't bother with IT mode anymore and it's IT mode by default. from what I've seen only old sas2 hba's have raid mode as default. I just picked up an lsi 9400-16i a couple of weeks ago, IT mode is default, there's no non-it firmware.
I've been using truenas as a vm for nearly 3yrs now. Its the only vm on my host that has "NEVER" given problems, it just simply works, rock solid software. It orginally was a test project but I started to store more important data on it, then I fully commited turned it into my main storage device. Did the same passthrough of my ssd's. Its an all SSD bulid with 2 nvme's for dedup & special vdev data.
Hi Jim, I am commenting just by watching the 3 first minutes of your video to tell you how brilliant you provide the answers to my concerns before you even start showing stuff....
Hey Jim, just a few notes. If you disable pre-enroll keys under the UEFI settings for the virtual machine, you don't need to go through the whole rigmarole with disabling secure boot in the bios. It's also generally not necessary to change the boot order since the virtual machine disc is blank at the time of first boot. You can also just remove the disk image from the DVD drive rather than deleting the entire device from the hardware configuration. This means you don't have to fully shut down the vm.
Such a quandary... Proxmox with a TrueNAS Core VM, or just run TrueNAS Scale, which has quite a bit of virtualization functionality and some nice docker applications made easy. I generally lean toward Proxmox but I would probably go with TrueNAS Scale in this particular situation. I also think TrueNAS Core is going to be sunsetted in the next year or two.
Please help me lol I was just about to try and run a NAS OS VM in proxmox, then run debian and CASAos. Have 12600k 32gb ram 2x 8tb drives. Would like to run: win11 VM Home Assistant Jellyfin / plex Camera software one day But I need a raid option with mirror. Have other storages like m.2 SSD for bootdrive etc. Should i not do this? 😢
@@NFTwizardz TrueNAS Core is probably your best bet. You can do a RAID ZMirror for your storage drives.... You'll still need a boot drive but any SATA or NVME drive(s) should be ok for that.
@@NFTwizardz No. Run TrueNAS Scale on the bare metal. It can do all the containers and virtual machines you need, plus it has an easy to use application container (docker) interface and repository.
Thank you for this! I was struggling with my truenas VM hanging at the BIOS in proxmox, and couldn't find any solution. This fixed everything & passthrough is working great!
Thanks for the video! I've been running TrueNas under Proxmox for ~2 years now, but have just made a lot of changes and needed a refresher on reinstalling, specifically with the bios options and the SSD emulation part... not sure I had that set before, but hoping things will be somewhat snappier on my relatively old Intel 4th gen server. now because of it.
Thanks, I've followed allot of generic guides on the virtulisation of TrueNas and couldn't get a stable build (passing through drives manually). I used this method for the scale version and it worked well - for some reason the throughput on the data between my storage devices saw a x4 boost (the VM settings/UEFI -seemed to make a big difference).
Can create a video using the 20.10.2 TrueNAS and Proxmox and most importantly demo how to mount SMB and NFS in proxmox that are served up by the virtualized TrueNAS.
Amazing video's Jim. I am learning a lot from them. I was thinking to utilize TrueNAS on Proxmox as well on a Zimacube Pro, because I really like Proxmox for the virtualization. Then I saw your video on HBA, so this probably saved me some disaster. Do you think the HBA solution will work on a Zimacube?
@@Jims-Garage Thanks, yes it does! PCIe x16 even I think. Challenge will be that all the disks are connected to a PCB behind the bay, PCB is connected to the motherboard with some unfamiliar connection. I think removal of the PCB is necessary, however powering the discs.. I am not sure.
have not tested this myself but you might not need to disable secure boot in the UEFI if you untick "pre-enroll keys" in the system tab when creating the VM I am in the process of deploying a simple SMB server and am following apalard's approach via LXC container
I have an old RAID card in my workstation that goes 90C. I fastened a small fan with fishing cord to the heatsink to get the temps under control. Great video anyways
Interesting setup. I've always run my file server on bare metal. Was tempted to try something like this when I built a new one this past year, but opted not to. But now I'm debating a separate Proxmox build for VMs 🤔
Thank you for your video. It might just come in handy. I did find a IBM rebranded LSI SAS3084E-R and I think an even better LSI SAS9217-8i in my junk box. I use Proxmox daily for various vm's. Also have a dedicated linux box that runs my email server. Time consolidation of email server and get truenas virtualised. Just happen to have an X99 board 128GB of RAM and an E5 2680 v4.
You don't need a hba if you still have sata connectors free on your motherboard. You can pass through the sata controller on the motherboard to the vm. I think it lmportant to mention this since this video is also helpful/targeted for users doing this the first time and why buy an extra Controller if you still have free connectors on their mobo. I got the Asus Pro WS-W680 ACE mATX mobo and it has 1 slim sas connector for 4 drives and another 4 normal sata connectors. That board also has ECC support and IPMI. Its just a bit pricy. I was researching how to best set up a NAS homelab system.
@Jims-Garage sure, but they might still want to know how to pass it through, no matter if it's one or multiple. At least for me, your video was helpful. I just was not sure I'd it would apply for built-in controllers. I have some general experience (software developer and running nas on mint linux the hard way), but promox and truenas is still new to me. I'm excited to get a proper nas running with the asus ws w680 ace se board and ecc ram. Used more regular hardware before.
@@DragoMorke yeah it's a good motherboard, workstations are the sweet spot IMO. I considered that exact model for a time. Process for onboard controller should be the same, select it from the drop-down.
I have a couple questions. Ive been watching your videos but havent seen em all yet 1. Can you passthrough individual drives to the TrueNAS VM rather than an HBA like you demonstrate? I’m starting with a single large SSD to pass through, no RAID till next month. 2. Do the solutions of setting up a TrueNAS VM or an unprivileged LXC NAS both work for using it as a central store for docker and kubernetes volumes? I’d like to have docker services on multiple smaller PCs but only use one PC for the main storage for docker and kubernetes. 3. Another thing I’d like to do with either a container or VM NAS is to be able to centralize backups and snapshots before doing a single point cloud backup (another reason i want my docker services to access the NAS). Have you managed to make a NAS work well with cloud backups such as Kopia or Backblaze? Has it worked with everything that needs backed up, such as snapshots or backups sent over from other hosts? Sorry if my questions are newbish. Ive done a lot of proxmox passthrough and vms but I’ve never used a NAS before and even after multiple videos I’m still trying to figure out what a solution like this one can and cannot do. I also have never messed around with docker enough to specify remote volumes or anything. If you read all this , thank you very much! My favorite homelab channel
@@Jims-Garage Hey there i just watched your video from a couple months ago on the budget NAS/server build. You mentioned putting proxmox on there and virtualizing TrueNAS, but how were you thinking of doing that without an HBA? Or were you thinking of adding an HBA? Im thinking of building a machine with 4 x slots for NVMe in ZFS RAID-Z , which is how you have your vm storage set up right? What size nvmes are you running? I found the part you use - PCIe x16 expansion for 4x NVMe drives. Does this work like an HBA for passing through to Proxmox? Also do you know what “gen” of nvme that your PCIe expansion is? Is the integrated graphics (in your budget NAS video) mostly to be able to plug a monitor into the server/NAS? Personally i have had bad luck trying to passthrough integrated AMD graphics in Proxmox, but I’m using a laptop size CPU. Also is your server the r730 or the r730xd?. It sounds like you are also considering upgrading your main server. What direction have you been looking for that? Sorry one more question. Do you have any guesstimate on how large of a VM is necessary to run the docker containers you’ve featured in this series? Mostly curious about cpu threads and RAM. Thanks Jim and sorry this got so long and demanding! I’ve learned so much from your series. Still trying to decide whether to host a lot of these services via docker or via kubernetes. Ive done some of these in docker last year, but some things i was too newb to figure out myself, and my current workstation (dual xeons, hp z620) is a little much to keep on all the time. I’m a lot more knowledgeable in IT generally now, and also ready to build a 24/7 server/cluster.
@@ultravioletiris6241 if you're doing a virtual TrueNAS you need a HBA, simple as that. I use the firecuda 530 1TB (X4 with an Asus card). They're PCIe 4 but my dell r730 is only pcie3. I have the dell r730, not 730xd. It's fine, but you might be better off these days with a modern Ryzen or intel, depends how many PCIe lanes you need. An iGPU is used for hardware acceleration (e.g. Jellyfin), it's not used for a monitor out. You will only connect via SSH/web UI. My dell r730 is running two Kubernetes clusters and hits about 35% CPU usage, it's overkill.
I would love to see a pfsense and Truenas Scale running under the same Proxmox... I know that hardware passthrough would be alot but seems like that would would for me under one machine. Looking at running pfsense, truenas and one ubuntu server machine (to run docker in one machine could help me downsize from 3 barebone machine to just one running a Ryzen 7900 cpu to rule them all). What are your thoughts? I only have 4 x 16TB hdd for a NAS that could be used on SATA connection but I would need deduplicated NIC (one SFP+ and one 10GBE nic for the pfsense).
That should be fine, I have videos on OPNSense and Sophos XG firewall virtual, those should be a good pointer. The machine you mention is more than enough to run it. You'll probably want a HBA for the drives and a couple of NICs.
I wouldn't personally recommend doing a virtualized storage server due to the high memory demands of zfs. My storage server is running baremetal truenas scale with 8x14tb drives in raidz2, it's got 128gb of ram, and the zfs ARC takes up about 90% of the memory and that allows drastic improvements in caching and overall read speed. I'm in the middle of building out a much larger pool (16x 18tb drives (2 8 disk raidz2 vdevs) and i'm sure that it'll take even more advantage of having that additional memory for caching.
Interesting, I have 8x8TB and 6x16TB drives with 32GB ram. Runs fine for what I need in a homelab but probably would benefit from more ram in a multi user setup
Passing through a disk isn't doing what you think it is, certainly for Proxmox. All it's doing is mapping the folder structure. For proper ZFS management it needs to be the entire device (AFAIK).
great video Jim. As Im currently looking into turning my exisiting PC (i9 13900k CPU / z690-a MB & 64gb ram into a Proxmox OS with Truenas ontop, do I need an HBA for my drives? As my mobo has 6 sata ports of which I have two nvme's on the board, sata: 2 x 4tb 3.5" and 2 x 1tb SSD. I may increase the 3.5" storage down the road.
@@Jims-Garage thanks Jim, Im just currently watching your budget NAS build, and you mentioned Truenas (to minimise risk) build out as a bare metal, rather than a VM. Im happy with a VM for Truenas as I dont mind the risk - I will ensure data is backed up (321). Currently I have a Intel NUC (Proxmox) with its dedicated drives running VMs for testing & home/work LAB stuff and running low on resources.... The new PC will be a proxmox added into a DataCentre cluster, which will als be used for VMs / LXC's etc and media stuff, hence the NAS requirement.
No, the nvme is for VMs only. 2x SSDs for Proxmox and ISOs. TrueNAS doesn't really have a cache drive like unraid etc does. Always go for ram over cache from what I've read.
@@Jims-Garage Thank you! That would explain why I'm not finding much around the traps that covers passing through NVMes to TrueNas for a ZFS Cache. The short version is that I just add more RAM haha. Thanks again for all of your tireless videos....I really find you so clear in the way you lay out your process and explain your thinking!
Back looking at this and considering replacing my QNAP TS-873, but my instinct goes against this even though saves power and I'd sell it off... Originally I had the Dell T340 with a HBA330 installed as a TrueNAS server, then changed it to Proxmox and have a seperate QNAP.. I think I've seen the option to also set Proxmox up as a SMB server too.. mmm WIsh there was a decemt IX Systems reseller in the UK as I'd prob gone a different route to the QNAP originally.
I was under the impression you could just run an instance of TrueNAS in proxmox and just give it pass through to the physical drives? It seems you are the first video I've seen to say an HBA is needed... I'm a noob and now Im confused....
@@the_mad_swimbaiter455 for zfs features to work it expects the disk passed through via HBA. If you do it within Proxmox you aren't passing the disk through, things like SMART won't work.
@@Jims-Garage so if proxmox can do the zfs and clusters across systems is TrueNAS redundant? I was going to base my zima blade server on proxmox and run a TrueNAS, PLEX, and Vault warden with an additional Windows VM as my daily driver? I'm thinking ahead to clusters for redundancy? Thanks for your video, I'm just a hobbyist and i literally had never heard of an HBA lol. 🤦🏿♂️
@@Jims-Garage cool, good to know lol. I'm over complicating it. I tore apart my desktop and made a 2x2tb TrueNas server. I'm hooked on this stuff now and just trying to figure out how to tie it in. This all started with a Rapsberry Pi5 4tb SSD NAS running OMV. Lol. Thanks for the engagement, I'll stop bothering you now, but great content! I'm just hopping around your videos getting ideas lol.
@@Jims-Garage i have a zimablade I've been playing with and it only had 1 PCIe slot thats used for M.2 NVMe storage /OS. I got TrueNAS scale running in a vm and SCSI?d my storage drives attached to the blade into TrueNAS. I'm just thankful it works, but i fumbled through it
How about passing through the raw disks? I've done it and it works well. Can mount the zpool either in TrueNAS or Proxmox if necessary (and the VM is turned off). Don't really see any downside.
Hello Jim, I have been learning a lot from your videos as I'm just beginning to home lab. I am having a hard time getting my HBA card to pass over to the VM. I have followed your instruction to the "T" and when I go back to the PVE Host my drives still show there and are not passed to the VM like in your Video. I went and tripled check the my IOMMU is enabled in my Bios, I've confirmed that my HBA is indeed in IT mode and my Bios also shows that its in IT mode, but it is still not sending the drives to the VM. Could you are anyone give me advice on what to do to get this to work right?
You shouldn't share the same dataset by NFS and SMB, that will lead to problems. Instead, it's best to stick to one protocol (they both work in Windows and Linux). I use SMB for this reason.
@@Jims-Garage likely not. I think it primarily changes some defaults in terms of default hardware choices such as the virtual nics. Thanks for video. I am in the process of virtualizing a TrueNAS core instance. Was on a rather right budget, so I'm using a generic Asmedia PCIe 4x SATA controller passed through to the TrueNAS VM.
FYI for anyone with a Lenovo P920, and possibly 720 and 520, the SATA controller for the eSATA port is separate from the backplane, along with the port next to it. You can safely pass that onboard controller to the VM. Attempting to pass the other controller crashes the host, obviously, so don’t do that.
Hi. A quick notice, At 19.27 you left the option rombar enabled. No reason for that. It isnt a gpu to load a rom file for example. No point leaving it checked.
Thanks, I did miss that explanation on reflection. ROM BAR isn't just for GPUs, it's for any PCIe device and allows it to map a portion of its memory to the host. This can be beneficial for devices, and I like to assign it with a HBA for some overhead.
@@Jims-Garage Nice but my experience with rom bar (specially with gpus) was that after cheking it kept asking for a rom file to load. Why you mentioned about overhead at the end?
Is there any downside to let Proxmox handle the ZFS and only present dumb virtual disks with Ext4 formatting to TrueNAS? Thinking then it's more easily included in the proxmox backup system, and I don't have to worry about backup inside truenas and multiple layers of file system managment.
That should work. It would be zfs on zfs. I don't know if there's much wasted overhead because of that though, possible double write as well but I'd have to check.
@@Jims-Garage I mean don't use ZFS in TrueNAS, just Ext4 formatting. Double ZFS is usually a bad idea I've read. - I'm in the process of setting this up, just dealing with some networking first. Hopefully my approach is viable.
Hey Jim, so Ive managed to source an HBA LSI 9207-8i (IT-Mode) - connected 2x new WD 4TiB each to the P1 & P2 sata cable from the module, btw, I have two SSD's connected to my motherb port 1 & 2 ports and two nvme (2TiB each). Proxmox found the drives without any issues, ran the IOMMU config settings as per your Double GPU Passthrough video, rebooted and no longer visible - which is correct. However, prior to all of this, I had ZFS pool setup which is now in a health state of 'suspended' I presume this was due to the config above, for the life of me, Im unable to destroy/remove the ZFS pool and start fresh in Truenas. Error: "command 'zpool list -vHPL zpool01' failed: not a valid block device" Is there a shell command I can destroy it or any other way? Cheers.
@@Jims-Garage thanks Jim, as I was unable to see them in Proxmox - which I think you mentioned in one of your vids, they will not show up under disks. I just removed the drives > format and placed them back into the server. I can see both of them, and the ZFSpool has been removed.
Nothing wrong with that! LXC has direct access to the kernel so literally no overhead. True as though has some features for preserving your data - snapshots and cloud backups built in - I use truenas and a combo of lxc containers and casaos 😊
Great video! Spent many hours in that virtualize TrueNAS "rabbit hole", came out on the LXC side of the fence too (there is not one size fits all answer here I believe), with Proxmox handling the ZFS and importantly the memory management. For me it boiled down to the fact I am comfortable with linux and the command line, plus, TBH the features in TrueNAS were well beyond what I required, just a handful of SMB shares didn't warrant 50% of my 16GB ram, I run my LXC with just 1GB for samba and filebrowser, a web gui is really useful on slow remote connections for uploading files. The rest of the ram, for me anyway allows me to run many services as LXC, Docker or VMs in Proxmox that I miss from my old Synology (Nextcloud, Portainer, Photo Prism, Gitlab and handful of Wordpress and NGINX websites and Traefik to name a few), I was really surprised how much I actually ran on the Synology with so little resources. Reckon there might be enough resource left to run through testing Kubernetes following your tutorials, high on my list for 2024 :-)
I'm using truenas as a VM in proxmox, using the PCI passthrough of the lsi 9211-8i card, and unfortunately I can't get rid of checksum errors when detecting scrub. they always appear. Do you have any idea what I can do?
This really confuses me, needing an HBA, because I was able to pass drives individually to my truenas scale VM, and it seemed to work fine(2x8TB HDD, in a mirrored zfs pool). Though I didn’t get into any of the fancy stuff like replication and snapshots. The serials were read without issue, and I was able to set up some containers in TrueNas scale, and put datasets in the pool. Is there something in particular I should be looking out for?
You're not passing the devices through, you're creating a virtual drive and giving it to the VM. I imagine you don't see smart data in TrueNAS? For that you need a HBA, otherwise Proxmox has control over the disks.
Have you ever had issues where you try to mount the nfs share but you end up getting an error that says, 'can't find in /etc/fstab.' or 'No such file or directory'? When I do showmount -e nasaddress it shows it is indeed available. Do I need to add some sort of special permissions somewhere?
Hey, was wondering if you'd have any advice, I'm looking to run a 24/7 effectively idling live stream, on the minimum possible hardware - if possible I want it to be able to queue up a playlist of videos, using something like VLC, and stream them to Twitch & TH-cam (and maybe Facebook) - I have a budget but not a lot of it, and have options of a local Raspberry Pi 4 2GB, a Raspberry Pi 4 4GB, or a couple of command line VPSs, running Ubuntu Server - I could potentially install a desktop environment, but if I could run it command line that'd be preferable. I could get a new VPS for the project, but again its supposed to be for cost reductions so if possible I'd like to use one of those rigs. Any advice?
Would this setup work with an external usb drive bay? When i pass USB devices i get a lot of disk errors on my dmesg output on the truenas side... I want to get a dedicated usb c pci card and pass that through instead of passing virtualized usb...
@@clusty1 because you need a hba. If you're selecting a drive through passthrough it's only mapping it, things like smart etc won't work and you'll have problems.
@@Jims-Garage Any luck passing the mobo sata controller ? Would be a waste to leave it unsused :P It's a PCI-E "Hewlett-Packard Company SAS2308 PCI-Express Fusion-MPT SAS-2" (Broadcom chipset)
There's a bit of misinformation at the start of the video about requiring a HBA card as you can pass the onboard sata controller. There are definitely cases where there might be other devices in the same IOMMU group and it's not as clean but definitely doable. Good video overall :)
In case anyone is interested: these hba consume a lot of power, mine used 10W in idle with no HDD connected and got very hot. Besides getting hot it has no temperature sensor so there was no way for me to know, if the zip-tied fan on it was still working. Because of all that I ended up with a second ZFS inside proxmox and created an SMB via a Copilot LXC.
Ah the experts teaching you wrong. WHY install VMs with Safe Boot? Maybe W11 if you don’t know how to bypass that check during install. Install in Legacy/BIOS mode, or do not select Safe Boot (Enroll Keys) to avoid all those unnecessary boot changes…
I've never stated I'm an expert. I will adopt this for future videos. I assume you mean secure boot, safe boot is entirely different. There are reasons for using secure boot but in a homelab probably not.
TrueNAS is not designed to be modified/customised by end user - whatever you want to tune, it will NOT survive next upgrade. I would really reconsider this decision. Different issue is when you have full bare metal spare - then, probably, TrueNAS is the system of choice. But not on PVE, where 95% of OS things are done already.
Not absolutely true. You can put your commands in the Truenas INIT/boot scripts and NO UPGRADE would delete your scripts, utilities or custom binaries correct!?
Please let me clarify what "IT mode" is:
"IT" in this case stands for "initiator target". In this mode, every disk is presented individually to the host.
HBAs usually come in "IR" mode, which provides whatever RAID modes your HBA supports.
Thanks for adding this, pinned!
@@Jims-Garage @ichnafi8512 basically IT-Mode is non-RAID mode?
@@JorgeGarciaM IT mode basically just passes the disks along, as opposed to IR mode where the controller firmware 'controls' the disks. You could technically use a HBA in RAID mode and set it up as single drives, so it's a bit more than just 'non-RAID', and for most HBAs it also involves flashing the card to IT mode.
It looks like modern HBAs don't bother with separate IT firmware anymore; IT mode is the default. From what I've seen, only old SAS2 HBAs ship with RAID mode as the default. I just picked up an LSI 9400-16i a couple of weeks ago; IT mode is the default and there's no non-IT firmware.
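A quick way to check which mode a card is actually running, from the host shell (a sketch; sas2flash is Broadcom's utility for SAS2-era cards and may need installing separately):

```
# IT-mode cards typically bind the plain mpt2sas/mpt3sas kernel driver;
# IR/MegaRAID firmware binds megaraid_sas instead:
lspci -nnk | grep -iA3 'SAS\|RAID'

# On SAS2 cards (e.g. a 9211-8i), the flashing utility reports the
# firmware type directly:
sas2flash -list    # "Firmware Product ID" ends in (IT) or (IR)
```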
I've been using TrueNAS as a VM for nearly 3 years now. It's the only VM on my host that has "NEVER" given problems; it just simply works, rock solid software. It originally was a test project, but I started to store more important data on it, then I fully committed and turned it into my main storage device. Did the same passthrough of my SSDs. It's an all-SSD build with 2 NVMes for dedup & special vdev data.
Awesome, that's great to hear
TrueNAS rabbit hole is now wide open :) Snapshots, replication, 3-2-1 backup :)
Bingo! Left this too long, now it's done 👍
Hi Jim, I am commenting after watching just the first 3 minutes of your video to tell you how brilliantly you answer my concerns before you even start showing stuff....
Glad it was helpful!
Protip: The LSI cards use i and e as the last letter to indicate internal vs. external ports.
Great, that's good to know, thanks!
Hey Jim, just a few notes.
If you disable pre-enroll keys under the UEFI settings for the virtual machine, you don't need to go through the whole rigmarole with disabling secure boot in the bios.
It's also generally not necessary to change the boot order since the virtual machine disk is blank at the time of first boot.
You can also just remove the disk image from the DVD drive rather than deleting the entire device from the hardware configuration. This means you don't have to fully shut down the vm.
Thanks, I'll give that a try and adopt in future videos
Agreed. And then you don't need to change the boot order of the ISO and remove it afterwards. Nice video, Jim!
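The ISO-eject tip above also works from the Proxmox CLI without touching the hardware tab (a sketch; assumes a hypothetical VMID of 100 and the installer ISO in ide2, the default slot):

```
# Eject the ISO from the virtual DVD drive while the VM keeps running,
# rather than deleting the whole device:
qm set 100 --ide2 none,media=cdrom
```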
Such a quandary... Proxmox with a TrueNAS Core VM, or just run TrueNAS Scale, which has quite a bit of virtualization functionality and some nice docker applications made easy. I generally lean toward Proxmox but I would probably go with TrueNAS Scale in this particular situation. I also think TrueNAS Core is going to be sunsetted in the next year or two.
You can migrate your zfs so choose whatever fits best for the time. I'd probably run docker in a separate VM but no reason why you can't use scale.
Please help me lol. I was just about to try and run a NAS OS VM in Proxmox, then run Debian and CasaOS. I have a 12600K, 32GB RAM, and 2x 8TB drives.
Would like to run:
win11 VM
Home Assistant
Jellyfin / plex
Camera software one day
But I need a RAID option with mirroring. I have other storage, like an M.2 SSD for the boot drive, etc.
Should i not do this? 😢
@@NFTwizardz TrueNAS Core is probably your best bet. You can do a ZFS mirror for your storage drives.... You'll still need a boot drive, but any SATA or NVMe drive(s) should be ok for that.
@NetBandit70 hey, thanks for replying. So run Proxmox, then VM or CT a TrueNAS Core? Then a Debian VM and install CasaOS?
@@NFTwizardz No. Run TrueNAS Scale on the bare metal. It can do all the containers and virtual machines you need, plus it has an easy to use application container (docker) interface and repository.
I feel peaceful when watching his videos
Thank you for this! I was struggling with my truenas VM hanging at the BIOS in proxmox, and couldn't find any solution. This fixed everything & passthrough is working great!
@@aractor that's great to hear, good job 👍
Amazing video! liked and subscribed.
Thanks for the sub!
Thanks for the video! I've been running TrueNAS under Proxmox for ~2 years now, but have just made a lot of changes and needed a refresher on reinstalling, specifically with the BIOS options and the SSD emulation part... not sure I had that set before, but hoping things will be somewhat snappier on my relatively old Intel 4th-gen server now because of it.
That's awesome, thanks for the comment 😊
Thanks for the details on configuring the UEFI in the VM. I was getting stuck on that.
If you untick "pre enrol keys" you can ignore all of it 😂 recently discovered that.
Great explanation and demonstration, as always👌
One question, is RAID done by software or hardware in this case?
Thanks Jim!
@@gustavopoa TrueNAS is software raid (raidz)
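TrueNAS builds pools through its UI wizard, but underneath it's standard OpenZFS, so the RAID is defined entirely in software. A rough sketch of the equivalent shell commands (hypothetical pool and disk names):

```
# A two-disk mirror (what most people mean by "RAID 1" here):
zpool create tank mirror /dev/sdb /dev/sdc

# Or a six-disk raidz2 vdev, which survives any two disk failures:
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```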
Thanks for the video and demo, have a great day
Thanks, you too!
thank you SOOOO much for this video. I followed it and have everything set up and working.
Great to hear!
Thanks. I've followed a lot of generic guides on the virtualisation of TrueNAS and couldn't get a stable build (passing through drives manually). I used this method for the Scale version and it worked well; for some reason the throughput on the data between my storage devices saw a 4x boost (the VM settings/UEFI seemed to make a big difference).
@@mintypockets8261 that's great to hear
Excellent guide, very in depth. Thank you so very much!
You're welcome
35:15 How then do you share the music catalog via NFS and SMB?
I show you how to mount it in Windows and Linux. Once it's mounted it's the same as accessing a normal folder.
@@Jims-Garage How do you mount one folder in Linux via NFS, and in Windows via SMB?
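For what it's worth, a minimal sketch of both mounts, assuming a hypothetical TrueNAS at 192.168.1.50 with a dataset shared at /mnt/tank/music (note Jim's caution further down about exporting the same dataset over both protocols):

```
# Linux, NFS (the mount point must exist first):
sudo mkdir -p /mnt/music
sudo mount -t nfs 192.168.1.50:/mnt/tank/music /mnt/music

# Windows, SMB (from cmd; assumes an SMB share named "music"):
net use M: \\192.168.1.50\music /persistent:yes
```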
Hey Jim, another fantastic video
Thanks, much appreciated
Can you create a video using the 20.10.2 TrueNAS and Proxmox, and most importantly demo how to mount SMB and NFS shares in Proxmox that are served up by the virtualized TrueNAS?
I've covered most of these topics already. Check my TrueNAS video for how to create SMB shares, then check my Proxmox Backup Server video for how to mount them in Proxmox.
Amazing videos, Jim. I am learning a lot from them. I was thinking of utilizing TrueNAS on Proxmox as well on a Zimacube Pro, because I really like Proxmox for the virtualization. Then I saw your video on HBAs, so this probably saved me from some disaster. Do you think the HBA solution will work on a Zimacube?
Does it have a PCIe slot? If so it should do. Just be aware that it doesn't support full ECC memory.
@@Jims-Garage Thanks, yes it does! PCIe x16 even, I think. The challenge will be that all the disks are connected to a PCB behind the bay, and the PCB is connected to the motherboard with some unfamiliar connection. I think removal of the PCB is necessary; how to power the disks then, I am not sure.
I'd "Like" this video about 10 times if I could...it came in very handy! Even managed to passthrough a NVMe drive as L2ARC for the pool too :)
That's great, good job 👏
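For anyone wanting to replicate the L2ARC trick above: TrueNAS adds it as a "Cache" vdev in the pool UI, which maps to plain ZFS underneath. A sketch of the equivalent commands (hypothetical pool and device names):

```
# Attach a passed-through NVMe as an L2ARC (read cache) device:
zpool add tank cache /dev/nvme0n1

# L2ARC can be removed again at any time without risking the pool:
zpool remove tank /dev/nvme0n1
```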
I have not tested this myself, but you might not need to disable secure boot in the UEFI if you untick "pre-enroll keys" in the System tab when creating the VM.
I am in the process of deploying a simple SMB server and am following apalard's approach via LXC container.
Thanks, I'll give that a try.
I have an old RAID card in my workstation that hits 90°C. I fastened a small fan to the heatsink with fishing cord to get the temps under control. Great video anyways
Great tip! Thanks.
This was great! Thanks so much!
Glad it was helpful!
Interesting setup. I've always run my file server on bare metal. Was tempted to try something like this when I built a new one this past year, but opted not to. But now I'm debating a separate Proxmox build for VMs 🤔
I run mine bare metal as well. It's a good option if you need to consolidate and keep power down.
Thank you for your video. It might just come in handy. I did find an IBM-rebranded LSI SAS3084E-R and, I think, an even better LSI SAS9217-8i in my junk box. I use Proxmox daily for various VMs. I also have a dedicated Linux box that runs my email server. Time to consolidate the email server and get TrueNAS virtualised. I just happen to have an X99 board, 128GB of RAM and an E5-2680 v4.
Nice, that should work well.
You don't need a HBA if you still have SATA connectors free on your motherboard. You can pass through the motherboard's SATA controller to the VM (see the sketch after this thread).
I think it's important to mention this since this video is also helpful for/targeted at users doing this for the first time, and why buy an extra controller if you still have free connectors on your mobo?
I got the Asus Pro WS-W680 ACE mATX mobo and it has 1 slim SAS connector for 4 drives and another 4 normal SATA connectors.
That board also has ECC support and IPMI. It's just a bit pricey.
I was researching how to best set up a NAS homelab system.
@@DragoMorke you are right, just that most consumer mobos won't have more than 1 controller.
@Jims-Garage sure, but they might still want to know how to pass it through, no matter if it's one or multiple. At least for me, your video was helpful. I just was not sure if it would apply to built-in controllers. I have some general experience (software developer, and running a NAS on Mint Linux the hard way), but Proxmox and TrueNAS are still new to me.
I'm excited to get a proper NAS running with the Asus WS W680 ACE SE board and ECC RAM. I used more regular hardware before.
@@DragoMorke yeah it's a good motherboard, workstations are the sweet spot IMO. I considered that exact model for a time. Process for onboard controller should be the same, select it from the drop-down.
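A minimal sketch of the onboard-controller passthrough discussed in this thread, from the Proxmox shell (hypothetical VMID and PCI address; the IOMMU group is the key caveat):

```
# Find the SATA controller's PCI address:
lspci -nn | grep -i sata

# Check nothing else shares its IOMMU group; everything in the group
# gets handed to the VM together:
find /sys/kernel/iommu_groups/ -type l | grep '00:17.0'

# Pass the whole controller to VM 100:
qm set 100 --hostpci0 0000:00:17.0
```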
amazing work on the video, ty
Much appreciated!
I have a couple of questions. I've been watching your videos but haven't seen them all yet.
1. Can you passthrough individual drives to the TrueNAS VM rather than an HBA like you demonstrate? I’m starting with a single large SSD to pass through, no RAID till next month.
2. Do the solutions of setting up a TrueNAS VM or an unprivileged LXC NAS both work for using it as a central store for docker and kubernetes volumes? I’d like to have docker services on multiple smaller PCs but only use one PC for the main storage for docker and kubernetes.
3. Another thing I’d like to do with either a container or VM NAS is to be able to centralize backups and snapshots before doing a single point cloud backup (another reason i want my docker services to access the NAS). Have you managed to make a NAS work well with cloud backups such as Kopia or Backblaze? Has it worked with everything that needs backed up, such as snapshots or backups sent over from other hosts?
Sorry if my questions are newbish. I've done a lot of Proxmox passthrough and VMs, but I've never used a NAS before, and even after multiple videos I'm still trying to figure out what a solution like this one can and cannot do. I also have never messed around with Docker enough to specify remote volumes or anything.
If you read all this, thank you very much! My favorite homelab channel
1) No, you need a HBA AFAIK
2) Yes, you can do that.
3) Yes, I backup my NAS to GDrive. Check my 3 part backup series.
@@Jims-Garage Hey there, I just watched your video from a couple of months ago on the budget NAS/server build. You mentioned putting Proxmox on there and virtualizing TrueNAS, but how were you thinking of doing that without an HBA? Or were you thinking of adding an HBA?
I'm thinking of building a machine with 4 x slots for NVMe in ZFS RAID-Z, which is how you have your VM storage set up, right? What size NVMes are you running? I found the part you use: PCIe x16 expansion for 4x NVMe drives. Does this work like an HBA for passing through to Proxmox? Also, do you know what "gen" of NVMe your PCIe expansion is?
Is the integrated graphics (in your budget NAS video) mostly to be able to plug a monitor into the server/NAS? Personally I have had bad luck trying to pass through integrated AMD graphics in Proxmox, but I'm using a laptop-size CPU.
Also, is your server the R730 or the R730xd? It sounds like you are also considering upgrading your main server. What direction have you been looking in for that?
Sorry, one more question. Do you have any guesstimate on how large a VM needs to be to run the Docker containers you've featured in this series? Mostly curious about CPU threads and RAM.
Thanks Jim, and sorry this got so long and demanding! I've learned so much from your series. Still trying to decide whether to host a lot of these services via Docker or via Kubernetes. I've done some of these in Docker last year, but some things I was too newb to figure out myself, and my current workstation (dual Xeons, HP Z620) is a little much to keep on all the time. I'm a lot more knowledgeable in IT generally now, and also ready to build a 24/7 server/cluster.
@@ultravioletiris6241 if you're doing a virtual TrueNAS you need a HBA, simple as that.
I use the FireCuda 530 1TB (x4 with an Asus card). They're PCIe 4 but my Dell R730 is only PCIe 3.
I have the Dell R730, not the 730xd. It's fine, but you might be better off these days with a modern Ryzen or Intel, depends how many PCIe lanes you need.
An iGPU is used for hardware acceleration (e.g. Jellyfin), it's not used for a monitor out. You will only connect via SSH/web UI.
My dell r730 is running two Kubernetes clusters and hits about 35% CPU usage, it's overkill.
Timely video, thanks very helpful
You're welcome 😁
I would love to see pfSense and TrueNAS Scale running under the same Proxmox... I know that hardware passthrough would be a lot, but it seems like that would work for me on one machine. Looking at running pfSense, TrueNAS and one Ubuntu server VM (running Docker in one machine could help me downsize from 3 bare-metal machines to just one running a Ryzen 7900 CPU to rule them all). What are your thoughts? I only have 4 x 16TB HDDs for a NAS that could be used on SATA connections, but I would need dedicated NICs (one SFP+ and one 10GbE NIC for the pfSense).
That should be fine, I have videos on OPNSense and Sophos XG firewall virtual, those should be a good pointer. The machine you mention is more than enough to run it. You'll probably want a HBA for the drives and a couple of NICs.
I wouldn't personally recommend doing a virtualized storage server due to the high memory demands of ZFS. My storage server is running bare-metal TrueNAS Scale with 8x 14TB drives in raidz2; it's got 128GB of RAM, and the ZFS ARC takes up about 90% of the memory, which allows drastic improvements in caching and overall read speed. I'm in the middle of building out a much larger pool (16x 18TB drives, as 2x 8-disk raidz2 vdevs) and I'm sure that it'll take even more advantage of having that additional memory for caching.
Interesting. I have 8x 8TB and 6x 16TB drives with 32GB of RAM. Runs fine for what I need in a homelab, but it would probably benefit from more RAM in a multi-user setup.
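If you're weighing RAM like the two comments above: the ARC is easy to inspect, and it can be capped when virtualizing so the host and other guests keep some headroom. A sketch (the 16 GiB figure is just an example):

```
# Show current ARC size, target and hit rates:
arc_summary | head -n 30

# Cap the ARC at 16 GiB (value in bytes) until the next reboot:
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

# To persist this on Proxmox, set it via modprobe config instead:
# /etc/modprobe.d/zfs.conf -> options zfs zfs_arc_max=17179869184
```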
Any disadvantage to passing through the disk instead of the controller? Besides losing SMART data (I think)
Yes, you lose SMART data... I don't think anything else.
Passing through a disk isn't doing what you think it is, certainly in Proxmox. The VM gets a virtual disk backed by the physical one, not the device itself, so things like SMART don't reach the guest. For proper ZFS management it needs to be the entire device (AFAIK).
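For context, Proxmox's per-disk "passthrough" is really just attaching the raw block device as a virtual SCSI disk, which is why the guest never sees the real hardware. A sketch (hypothetical VMID and disk ID):

```
# Attach a raw disk to VM 100 as a virtual SCSI device:
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_EXAMPLE

# Inside the guest this appears as a QEMU disk, so SMART and other
# device-level commands are answered by QEMU, not by the drive.
```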
Great video, Jim. As I'm currently looking into turning my existing PC (i9 13900K CPU / Z690-A MB & 64GB RAM) into a Proxmox OS with TrueNAS on top, do I need an HBA for my drives? My mobo has 6 SATA ports, and I have two NVMes on the board; on SATA: 2 x 4TB 3.5" and 2 x 1TB SSD. I may increase the 3.5" storage down the road.
Bare metal won't require a HBA (you can add one if you run out of SATA ports, though). A HBA is mainly for passthrough to a VM.
@@Jims-Garage thanks Jim. I'm just currently watching your budget NAS build, and you mentioned building TrueNAS out as bare metal rather than a VM (to minimise risk). I'm happy with a VM for TrueNAS as I don't mind the risk; I will ensure data is backed up (3-2-1). Currently I have an Intel NUC (Proxmox) with its dedicated drives running VMs for testing & home/work lab stuff, and it's running low on resources.... The new PC will be a Proxmox node added into a datacentre cluster, which will also be used for VMs / LXCs etc. and media stuff, hence the NAS requirement.
@@fearthesmeag OK, for a VM you will require a HBA.
Were the Mirrored NVMe drives just used as a Proxmox boot drive, and space for VMs? They weren't used at all as a ZFS Cache in the TrueNAS pool?
No, the NVMe is for VMs only; 2x SSDs for Proxmox and ISOs. TrueNAS doesn't really have a cache drive like Unraid etc. does. Always go for RAM over cache, from what I've read.
@@Jims-Garage Thank you! That would explain why I'm not finding much around the traps that covers passing through NVMes to TrueNas for a ZFS Cache. The short version is that I just add more RAM haha. Thanks again for all of your tireless videos....I really find you so clear in the way you lay out your process and explain your thinking!
Back looking at this and considering replacing my QNAP TS-873, but my instinct goes against this even though it saves power and I'd sell it off... Originally I had the Dell T340 with a HBA330 installed as a TrueNAS server, then changed it to Proxmox and have a separate QNAP.. I think I've seen the option to also set Proxmox up as an SMB server too.. mmm. Wish there was a decent iXsystems reseller in the UK, as I'd probably have gone a different route to the QNAP originally.
Should the steps be the same if setting up Scale vs Core?
Yes
Is there a Dell-to-LSI model number list?
You should be able to find the chip used on their website.
I was under the impression you could just run an instance of TrueNAS in Proxmox and just give it passthrough to the physical drives? It seems yours is the first video I've seen to say an HBA is needed... I'm a noob and now I'm confused....
@@the_mad_swimbaiter455 for ZFS features to work, it expects the disks passed through via a HBA. If you do it within Proxmox you aren't truly passing the disk through; things like SMART won't work.
@@Jims-Garage so if Proxmox can do the ZFS and clusters across systems, is TrueNAS redundant? I was going to base my ZimaBlade server on Proxmox and run TrueNAS, Plex, and Vaultwarden, with an additional Windows VM as my daily driver. I'm thinking ahead to clusters for redundancy. Thanks for your video, I'm just a hobbyist and I literally had never heard of an HBA lol. 🤦🏿♂️
@@the_mad_swimbaiter455 no, this is why I have TrueNAS on a dedicated machine.
@@Jims-Garage cool, good to know lol. I'm overcomplicating it. I tore apart my desktop and made a 2x 2TB TrueNAS server. I'm hooked on this stuff now and just trying to figure out how to tie it in. This all started with a Raspberry Pi 5 4TB SSD NAS running OMV. Lol. Thanks for the engagement, I'll stop bothering you now, but great content! I'm just hopping around your videos getting ideas lol.
@@Jims-Garage I have a ZimaBlade I've been playing with, and it only has 1 PCIe slot, which is used for M.2 NVMe storage/OS. I got TrueNAS Scale running in a VM and SCSI'd the storage drives attached to the blade into TrueNAS. I'm just thankful it works, but I fumbled through it.
How about passing through the raw disks?
I've done it and it works well. Can mount the zpool either in TrueNAS or Proxmox if necessary (and the VM is turned off). Don't really see any downside.
Passing through a raw disk doesn't give TrueNAS control. Things like SMART wouldn't work.
Hello Jim, I have been learning a lot from your videos as I'm just beginning to homelab. I am having a hard time getting my HBA card to pass over to the VM. I have followed your instructions to the "T", and when I go back to the PVE host my drives still show there and are not passed to the VM like in your video. I went and triple-checked that IOMMU is enabled in my BIOS, and I've confirmed that my HBA is indeed in IT mode (my BIOS also shows that it's in IT mode), but it is still not sending the drives to the VM. Could you or anyone give me advice on what to do to get this to work right?
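No fix appears in the thread, but a common culprit is the HBA staying bound to its normal host driver. A hedged diagnostic sketch (the PCI address and device ID are examples; substitute your own from lspci):

```
# Check which kernel driver owns the HBA ("Kernel driver in use"):
lspci -nnk -s 01:00.0

# It should show vfio-pci while the VM is running. If it still shows
# mpt2sas/mpt3sas, pin the card to vfio-pci at boot, e.g.:
# /etc/modprobe.d/vfio.conf -> options vfio-pci ids=1000:0072
# (1000:0072 is the vendor:device ID of a SAS2008 card; use yours)
update-initramfs -u && reboot
```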
I got lost at the end. You created a separate dataset for nfs. Does that mean I have to duplicate the data?
You shouldn't share the same dataset by NFS and SMB, that will lead to problems. Instead, it's best to stick to one protocol (they both work in Windows and Linux). I use SMB for this reason.
Curious... in the OS tab (Guest OS section), should the OS type be changed from Linux to perhaps "Other", as I believe TrueNAS Core is FreeBSD-based?
A good question. I believe it could be, but I don't think it makes much of a difference anyway.
@@Jims-Garage likely not. I think it primarily changes some defaults in terms of hardware choices, such as the virtual NICs.
Thanks for the video. I am in the process of virtualizing a TrueNAS Core instance. I was on a rather tight budget, so I'm using a generic ASMedia PCIe 4x SATA controller passed through to the TrueNAS VM.
TrueNAS SCALE is TrueNAS on Linux (Debian). 🥳
FYI for anyone with a Lenovo P920, and possibly 720 and 520, the SATA controller for the eSATA port is separate from the backplane, along with the port next to it. You can safely pass that onboard controller to the VM. Attempting to pass the other controller crashes the host, obviously, so don’t do that.
@@xgengamrgrl1591 good to know, thanks for sharing
Hi, a quick notice: at 19:27 you left the rombar option enabled. No reason for that; it isn't a GPU that needs to load a ROM file, for example. No point leaving it checked.
Thanks, I did miss that explanation on reflection. ROM BAR isn't just for GPUs; it controls whether a PCIe device's option ROM is exposed to the guest, which can be beneficial for some devices, and I like to leave it enabled on a HBA for some overhead.
@@Jims-Garage Nice, but my experience with ROM BAR (especially with GPUs) was that after checking it, the VM kept asking for a ROM file to load. Why did you mention overhead at the end?
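For reference, the checkbox discussed above maps to a flag on the hostpci line of the VM config; a sketch with a hypothetical VMID and PCI address:

```
# /etc/pve/qemu-server/100.conf
hostpci0: 0000:01:00.0,rombar=0   # hide the option ROM from the guest
# hostpci0: 0000:01:00.0          # omitted, rombar defaults to 1 (on)
```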
Great video
Thanks 👍
Is there any downside to letting Proxmox handle the ZFS and only presenting dumb virtual disks with Ext4 formatting to TrueNAS? Thinking then it's more easily included in the Proxmox backup system, and I don't have to worry about backups inside TrueNAS and multiple layers of file system management.
That should work. It would be ZFS on ZFS, though. I don't know if there's much wasted overhead because of that; possibly double writes as well, but I'd have to check.
@@Jims-Garage I mean don't use ZFS in TrueNAS, just Ext4 formatting. Double ZFS is usually a bad idea, I've read. I'm in the process of setting this up, just dealing with some networking first. Hopefully my approach is viable.
Is the TrueNAS VM stored on the Proxmox boot drive?
It can be, but I prefer to place my VMs on nvme
Hey Jim, so I've managed to source an HBA, an LSI 9207-8i (IT mode), and connected 2x new WD 4TiB drives to the P1 & P2 SATA cables from the card. Btw, I have two SSDs connected to motherboard ports 1 & 2 and two NVMes (2TiB each). Proxmox found the drives without any issues; I ran the IOMMU config settings as per your Double GPU Passthrough video, rebooted, and they're no longer visible, which is correct. However, prior to all of this I had a ZFS pool set up, which is now in a health state of 'suspended'. I presume this was due to the config above. For the life of me, I'm unable to destroy/remove the ZFS pool and start fresh in TrueNAS. Error:
"command 'zpool list -vHPL zpool01' failed: not a valid block device" Is there a shell command I can destroy it with, or any other way? Cheers.
Can't you format the drives?
@@Jims-Garage thanks Jim. As I was unable to see them in Proxmox (which I think you mentioned in one of your vids, they will not show up under Disks), I just removed the drives, formatted them, and placed them back into the server. I can see both of them, and the ZFS pool has been removed.
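If pulling and reformatting the drives isn't convenient, a stale pool can usually be cleared from the shell instead. A sketch using the pool and device names from this thread as examples; labelclear and wipefs are destructive:

```
# Export may fail on a suspended pool; that's expected, carry on:
zpool export -f zpool01

# Wipe the ZFS labels from each former member (DESTROYS the pool):
zpool labelclear -f /dev/sdb1

# Belt and braces: remove any remaining filesystem signatures:
wipefs -a /dev/sdb
```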
But this is for SAS or SATA?
I use SAS-to-SATA breakout cables; you can buy them as a single cable. Each port splits into 4 SATA, so each card can drive 8-16 disks depending on the model.
I use an LXC container with just CasaOS (a Docker management container) that has Samba shares. What do you think?
Nothing wrong with that! LXC has direct access to the kernel, so literally no overhead. TrueNAS though has some features for preserving your data, snapshots and cloud backups built in. I use TrueNAS and a combo of LXC containers and CasaOS 😊
Great video! Spent many hours in that virtualized TrueNAS "rabbit hole" and came out on the LXC side of the fence too (there is no one-size-fits-all answer here, I believe), with Proxmox handling the ZFS and, importantly, the memory management. For me it boiled down to the fact I am comfortable with Linux and the command line; plus, TBH, the features in TrueNAS were well beyond what I required. Just a handful of SMB shares didn't warrant 50% of my 16GB of RAM; I run my LXC with just 1GB for Samba and Filebrowser (a web GUI is really useful on slow remote connections for uploading files). The rest of the RAM allows me to run many services as LXC, Docker or VMs in Proxmox that I miss from my old Synology (Nextcloud, Portainer, PhotoPrism, GitLab, a handful of Wordpress and NGINX websites, and Traefik, to name a few); I was really surprised how much I actually ran on the Synology with so little resources. Reckon there might be enough resource left to run through testing Kubernetes following your tutorials, high on my list for 2024 :-)
I'm using TrueNAS as a VM in Proxmox, using PCI passthrough of the LSI 9211-8i card, and unfortunately I can't get rid of checksum errors detected during scrubs; they always appear. Do you have any idea what I can do?
It's possibly temperature related, add a fan to the HBA. Worth checking all cables as well.
@@Jims-Garage I have a server case with good airflow, and the card has an additional fan. I tried using a different cable.
@@gnajmacz hmm. Perhaps a damaged card, or the data itself was corrupted before copying? I'm no zfs expert though.
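For anyone debugging the same scrub errors, a small sketch of the usual triage loop (hypothetical pool name):

```
# See exactly which vdevs and files carry the errors:
zpool status -v tank

# After changing the suspected cause (cable, cooling, slot, HBA),
# reset the counters and scrub again to see if the errors return:
zpool clear tank
zpool scrub tank
```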
This really confuses me, needing an HBA, because I was able to pass drives individually to my TrueNAS Scale VM and it seemed to work fine (2x 8TB HDDs in a mirrored ZFS pool), though I didn't get into any of the fancy stuff like replication and snapshots. The serials were read without issue, and I was able to set up some containers in TrueNAS Scale and put datasets in the pool. Is there something in particular I should be looking out for?
You're not passing the devices through; you're creating a virtual drive and giving it to the VM. I imagine you don't see SMART data in TrueNAS? For that you need a HBA, otherwise Proxmox has control over the disks.
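An easy way to check which situation you're in, from the TrueNAS shell (a sketch; the device name is an example):

```
# With real HBA passthrough, smartctl reports the actual drive:
smartctl -a /dev/sda | grep -i 'model\|serial'

# A Proxmox virtual disk instead identifies itself as "QEMU HARDDISK"
# and returns no meaningful SMART data.
```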
Have you ever had issues where you try to mount the NFS share but end up getting an error that says "can't find in /etc/fstab" or "No such file or directory"? When I do showmount -e nasaddress it shows it is indeed available. Do I need to add some sort of special permissions somewhere?
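That first error usually just means mount was called with only a mount point, so it went looking for a matching fstab line. A sketch of a working setup (keeping the hypothetical "nasaddress" and an example share path):

```
# Either give mount both the source and the target explicitly:
sudo mkdir -p /mnt/share
sudo mount -t nfs nasaddress:/mnt/tank/share /mnt/share

# ...or add an /etc/fstab line so "mount /mnt/share" works on its own:
# nasaddress:/mnt/tank/share  /mnt/share  nfs  defaults,_netdev  0  0
```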
Secure boot is stopping me only if I put the EFI disk on different storage than the OS. Am I doing something wrong, or am I getting something wrong?
Hey, was wondering if you'd have any advice. I'm looking to run a 24/7, effectively idling live stream on the minimum possible hardware. If possible I want it to be able to queue up a playlist of videos, using something like VLC, and stream them to Twitch & YouTube (and maybe Facebook). I have a budget, but not a lot of it, and have the options of a local Raspberry Pi 4 2GB, a Raspberry Pi 4 4GB, or a couple of command-line VPSs running Ubuntu Server. I could potentially install a desktop environment, but if I could run it command-line that'd be preferable. I could get a new VPS for the project, but again it's supposed to be for cost reductions, so if possible I'd like to use one of those rigs. Any advice?
Would this setup work with an external USB drive bay? When I pass USB devices I get a lot of disk errors in the dmesg output on the TrueNAS side...
I want to get a dedicated USB-C PCIe card and pass that through instead of passing virtualized USB...
Why not just pass every drive to TrueNAS?
@@clusty1 because you need a HBA. If you're selecting a drive for passthrough it's only mapping it; things like SMART etc. won't work and you'll have problems.
@@Jims-Garage Any luck passing through the mobo SATA controller? Would be a waste to leave it unused :P
It's a PCIe "Hewlett-Packard Company SAS2308 PCI-Express Fusion-MPT SAS-2" (Broadcom chipset).
@VladAndreiLazar do you have more than 1? If so, sure you can do that. Otherwise it's needed by the host.
@@Jims-Garage Host runs on a bunch of NVMEs that are used only for proxmox and the VM virtual os disk.
There's a bit of misinformation at the start of the video about requiring a HBA card, as you can pass through the onboard SATA controller. There are definitely cases where other devices might be in the same IOMMU group and it's not as clean, but it's definitely doable. Good video overall :)
@@muneebabbas7141 true, I guess. I can't say I've seen many machines with multiple onboard SATA controllers.
In case anyone is interested: these HBAs consume a lot of power. Mine used 10W at idle with no HDDs connected and got very hot. Besides getting hot, it has no temperature sensor, so there was no way for me to know if the zip-tied fan on it was still working.
Because of all that, I ended up with a second ZFS inside Proxmox and created an SMB share via a Cockpit LXC.
I would attach the fan to a mobo header, then you can monitor if it has failed.
Ah the experts teaching you wrong.
WHY install VMs with Safe Boot?
Maybe W11 if you don’t know how to bypass that check during install.
Install in Legacy/BIOS mode, or do not select Safe Boot (Enroll Keys) to avoid all those unnecessary boot changes…
I've never stated I'm an expert. I will adopt this for future videos.
I assume you mean secure boot, safe boot is entirely different. There are reasons for using secure boot but in a homelab probably not.
For those who want to flash the firmware to IT mode: th-cam.com/video/v5v8TCcvA8s/w-d-xo.htmlsi=zQF6dwYKDLmBFF71
Thanks for this
TrueNAS is not designed to be modified/customised by the end user: whatever you want to tune will NOT survive the next upgrade.
I would really reconsider this decision.
It's a different matter when you have a full bare-metal machine spare; then, probably, TrueNAS is the system of choice.
But not on PVE, where 95% of the OS-level things are done already.
Not absolutely true. You can put your commands in the TrueNAS init/boot scripts, and no upgrade would delete your scripts, utilities or custom binaries, correct!?