Running a NAS on Proxmox, Different Methods and What to Know

  • Published on Jun 24, 2024
  • I often get asked what's the best way to run a NAS on top of Proxmox. In this video I cover different methods, along with other knowledge that might be nice to know when setting up a NAS on Proxmox.
    Let me know if this video was helpful and if there are other topics that I could cover.
    00:00 Intro
    00:40 Should Proxmox be used as a NAS?
    02:09 RAID
    04:48 NAS Protocols
    06:04 Method 1: Network share directly on Proxmox
    10:08 VM or Container?
    11:55 Method 2: Using a Container
    14:17 Method 3: VMs
    18:25 Conclusion
  • Science & Technology

Comments • 83

  • @haroldwren1660 · 24 days ago · +5

    This is not a tutorial. It is a masterclass. Just brilliant. Thank you.

  • @monish05m · a month ago · +6

    Fantastic video.
One note to viewers: if you're doing storage, and especially ZFS or btrfs, always make sure you purchase CMR drives, or else be prepared to never be able to recover your pool if one of the disks goes bad.

  • @MRPtech · a month ago · +5

    Clear explanation. Amazing!
I have it all with 2 boxes. VMs/LXCs should run fast, so they sit in an NVMe Ceph pool, while ISOs and backups are located on a Synology NAS.

  • @idle_user · a month ago · +4

Please make a video about SMB shares. I always have to look up a guide to make a basic share.
It'd be nice to know what other options I have.

  • @rteune2416 · a month ago · +6

I'm using the TurnKey file server you suggested in your previous video. It's the best method I have seen so far. I don't need all the other bloatware the others offer. TurnKey file server offers a lean, mean NAS machine lol.

    • @HerrFreese · a month ago

I also tried the TurnKey file server, but at some point I missed the filesystem management features of btrfs, on which I store the files. Also, by then I had read pretty deep into smb.conf and missed some options (which I then put into smb.conf manually). So I now run my NAS on a Debian VM without any GUI.

  • @jrherita · 29 days ago · +2

    This video was perfectly timed for me as I’m looking to finally migrate some NTFS Shares from a Windows 2016 VM under Proxmox to something more modern. Time to experiment - Thank you!!

  • @oliversmall · 29 days ago

This was exactly what I was looking for, thank you!

  • @twentyrothmans7308 · a month ago · +5

    Pertinent timing!
I was just trying to figure out a way for my Proxmox host to detect where it is sharing NFS to, so that it can gracefully close the shares down when I'm powering it off. I can parse journalctl, and I've looked at nfswatch (which is pretty old).
I wonder if there's a better way.
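    One rough way to check for active clients before shutting down is to look for established connections to the NFS port (2049), or, on NFSv4 with a recent kernel, nfsd's client list; a minimal sketch, with nothing Proxmox-specific assumed:
      # list established TCP connections to the NFS server port
      ss -tn state established '( sport = :2049 )'
      # on NFSv4 with a newer kernel, nfsd also exposes connected clients here
      ls /proc/fs/nfsd/clients/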

  • @1941replica · a month ago

    Very useful video, it made it easy for me to determine the best option for my NAS setup.

  • @HerrFreese · a month ago · +2

Another advantage of using VMs or containers for the NAS is, in my opinion, network isolation and the ease of putting the NAS on the networks I want.

  • @nirv · a month ago · +2

Thank you! I struggled with this back in August of 2023 setting up my first Proxmox PC ever, and it took a few days before I understood it more and figured it out. All I wanted was to slap some hard drives into the Proxmox PC and share them to whatever CT/VM I created. One requires a bind mount, the other requires NFS. I had no idea at the time because I'm still fairly new to Linux.
But I appreciate this video, because I still haven't set up an SMB share and I think I will now! I'm tired of having to use WinSCP to copy files from Linux to Windows when I can just do an SMB share and immediately have easy access.
Thank you for useful, insightful videos for Linux n00bs like me. :D
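    For reference, the bind mount side of that is a one-liner on the Proxmox host; a minimal sketch, where the container ID, host path, and in-container path are placeholders:
      # bind-mount a host directory into container 101, visible inside as /srv/share
      pct set 101 -mp0 /mnt/tank/media,mp=/srv/share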

    • @ewenchan1239 · a month ago

      If you need help with deploying Samba quickly, I can copy and paste my deployment notes from my OneNote at home.
I think that a lot of guides overcomplicate things, when really, there are fewer than 20 lines that you need to get a simple, basic SMB share up and running.

    • @nirv · a month ago

@@ewenchan1239 I think I'm going to use the TurnKey template for sharing built right into Proxmox. This guy also did a tutorial on it, as he mentioned in this video. But thanks!

    • @ewenchan1239 · a month ago · +3

      @@nirv
      No problem.
For the benefit of everybody else, I'll still post my deployment notes for deploying Samba on Debian 11 (Proxmox 7.4-3). (The instructions should still work for Debian 12 (Proxmox 8.x).)
      //begin
      Installing and configuring Samba on Debian 11:
      # apt install -y samba
      # systemctl status nmbd
      # cd /etc/samba
      # cp smb.conf smb.conf.bk
      # vi smb.conf
      add to the bottom:
      [export_myfs]
      comment = Samba on Debian
      path = /export/myfs
      read-only = no
      browsable = yes
      valid users = @joe
      writable = yes
      # smbpasswd -a joe
      # systemctl restart smbd; systemctl restart nmbd
      //end
      That's it.
With these deployment notes, you can now have access to an SMB share directly on your Proxmox host, without the need for a VM or a CT.
That way, the only way this SMB share will go down is if there's a problem with smbd/nmbd itself, and/or if your Proxmox host goes down; at which point, you have other issues to deal with.
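      For completeness, connecting to that share from a client is equally short; a sketch assuming the host's IP is 192.168.1.10 (placeholder) and the share name from the notes above:
      # Linux client (needs the cifs-utils package)
      sudo mount -t cifs //192.168.1.10/export_myfs /mnt/myfs -o username=joe
      # Windows client
      net use Z: \\192.168.1.10\export_myfs /user:joe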

  • @tomo8224 · a month ago · +1

    Love your videos, two thumbs up.
    I would also consider throwing Xpenology in a VM on Proxmox. Prox would help in testing updates.

  • @gunwao · 19 days ago

    So Good! Thank you!

  • @darkdnl · 25 days ago · +3

My favorite way to manage SMB/NFS shares is Cockpit in a privileged LXC container :)

    • @ElectronicsWizardry · 25 days ago · +2

That's a good idea. I'm gonna look up Cockpit some more, as it's been mentioned in the comments a few times.

  • @YannMetalhead · 25 days ago

    Good information.

  • @RomanShein1978 · a month ago · +4

4:16 "if I have a parity array, or RAID-Z array in ZFS terminology, I can't just add a 6th drive to a 5-drive array, for example"
- RAIDZ expansion has been implemented in OpenZFS, although I don't know its current status in Proxmox.

    • @ElectronicsWizardry · a month ago · +2

RAIDZ expansion seems to be in the OpenZFS 2.3 release, which isn't out yet. Once OpenZFS releases it, it will probably be in the next Proxmox version. I'm guessing it's probably less than a year out now, and I'll make a video when it's added.
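      For reference, once OpenZFS 2.3 lands in Proxmox, the expansion itself is expected to be a single attach against the raidz vdev; a sketch with placeholder pool, vdev, and disk names:
      # add a 6th disk to an existing raidz vdev (OpenZFS 2.3+ raidz expansion)
      zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK
      zpool status tank   # shows the expansion progress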

  • @SystemPromowania · a month ago · +3

    Cockpit + Filesharing plugin in LXC for me is the best option.
    Thanks

  • @DominikSchmid · a month ago

Following your review of Ugreen's NASync DXP480T, I backed the project. My plan is to install Proxmox on it and virtualise a NAS as well as a VM with an Arch Linux or NixOS desktop which I can access from anywhere. I hope that this desktop will have very little latency, in order to replace my current desktop installation on bare metal. It would be great if you could make a series of videos showing how to achieve this.

  • @nikolap2153 · a month ago · +1

Thank you for your amazing videos!
    My setup is:
    Supermicro X10SRH-CF
    2x120GB SSD boot drive of Proxmox
    For the fileserver part:
    2x400GB SSD (partitioned 300+64GB)
    6x4TB
    Connected via HBA
    VM/LXC stay on 300GB SSD (zfs mirror)
    "NAS" is located at 6x4TB raidz2 + 64GB (zfs mirror) special device
    All is managed by Cockpit plus 45drives plugins for ZFS and Fileshare
    It's for home use, not HA 😅

  • @frauseo · a month ago

Finally someone competent answering once and for all all the Reddit questions on how to create a NAS XD

  • @ivanmaglica264 · a month ago

I install Samba servers in LXC containers. It's the best mix of performance and flexibility. If it breaks, you can easily revert if you have a backup, and you don't have to worry about polluting the host OS.

  • @asbestinuS · a month ago

I also prefer to use a VM or a container for my file shares. The nice thing with containers and mount points is that if you increase the size, it automatically increases the filesystem in the container as well. I once saw apalrd's video on using a container and installing 45Drives' cockpit-file-sharing to have a nice GUI. If you create a user, it automatically creates an SMB password as well. It's so easy to use, and I still use this for some backups on my main computer. I never liked TrueNAS, so using the 45Drives tools was a fantastic idea.

  • @boneappletee6416 · a month ago

    Another fantastic video, thank you! :)
    I completely forgot that turnkey containers exist... 🤦🏻‍♂️ Definitely going to use that for a small file server I've been meaning to get going.
    If there's enough interest / script material for it, could you do a deeper dive into these turnkey VMs/containers? Haven't used them much, personally

    • @ElectronicsWizardry · a month ago

Glad I helped you with setting up your NAS on Proxmox.
A video on TurnKey containers is a good idea. I'll start playing around with them soon.

  • @RomanShein1978 · a month ago · +2

    A bit of a crazy suggestion for a future video: running TrueNAS Scale or UnRAID as.... a container. Theoretically, it should be possible.

    • @ewenchan1239 · a month ago

You have to create the LXC container yourself, which isn't a super trivial task, but it can be done.
(Actually, in theory, it would be easier to deploy TrueNAS Scale as a container than UnRAID, because TrueNAS Scale also runs on top of Debian. So really, all you would need to do is add the repos, find the difference in the list of packages that are installed, write that package delta to a text file, and then install that package diff in a Debian LXC container.)
That part isn't super difficult. It might take a little bit of testing to make sure everything works as it should, but there shouldn't really be any technical reason why this method *can't* work.

  • @franciscolastra · 27 days ago

    Great contribution. Many, many thanks!!!!
    Which NAS system would you recommend for the LXC case???

    • @ElectronicsWizardry · 26 days ago

What do you mean by NAS system? I'd look at TurnKey if you want an easy web interface, or a lightweight distro like Debian/Alpine if you want to edit the Samba config file manually.

  • @cberthe067 · a month ago · +6

A scenario I would like to see is creating a CephFS filesystem in a Proxmox cluster and exposing it as an SMB file server to client OSes...

    • @ewenchan1239 · a month ago · +3

      You can do that.
      That's actually pretty easy.
I don't know if I have my erasure-coded CephFS exposed as an SMB share right now, but you can absolutely do that.
You mount the CephFS to a mount point, and then in your smb.conf file you just point your share path at the same location, and now you've shared your CephFS as an SMB share.
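      As a minimal sketch of that, assuming the CephFS is already mounted at /mnt/pve/cephfs (the usual Proxmox mount point for a cephfs storage entry) and a placeholder share name and group:
      [cephfs_share]
      ; CephFS mount re-exported over SMB
      path = /mnt/pve/cephfs
      read only = no
      valid users = @smbusers
      ; then: systemctl restart smbd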

  • @TheTrulyInsane · a month ago · +2

    I use a vm with truenas myself, 12 drives, works great

  • @DSVWARE · a month ago · +2

One thing I noticed is that backing up LXC mount points is pretty slow vs. VM disks (qcow2? I can't remember). I imagine it's because the content of a mount point has to be crawled and Proxmox checks for the archive bit, versus using some sort of changed-block tracking for VM disks.
So any shares I would run on VMs; even if reads are quick, it's wasted processing time and wear on the disks.

    • @ElectronicsWizardry · a month ago · +3

I'll need to check the backup time comparison, but I think you're right about containers doing a file-level backup and being slower. Also, if using PBS you can back up only the changes to a running VM, making incremental backups much faster.

  • @peteradshead2383 · a month ago

I did more or less the same, but in an LXC with it mapped to the main Proxmox host and using Webmin.
I'm thinking of getting a Minisforum MS-01 and fitting an HBA card (external) in the 8-lane GPU slot and feeding it into a 4-8 drive bay enclosure, but I can't find any which have SATA plugs at the back with a built-in PSU; I can only find internal ones for servers.

  • @namregzepol · a month ago · +1

Good explanation of VMs vs. containers! Thanks. But I miss a bit the other possible solutions for the storage part. For example, I have been looking into btrfs recently. Also, the ideal solution, in a three-node example, would be to have some storage on each of the nodes. Is there any solution like GlusterFS or similar in Proxmox?

    • @namregzepol · a month ago

I forgot to mention Ceph, but I don't know if it is the best solution...

    • @RomanShein1978 · a month ago · +1

"But I miss a bit the other possible solutions for the storage part. For example, I have been looking into btrfs recently."
1) This is a bit off-topic. He rightfully mentions passthrough, because it involves the hypervisor. Otherwise, the underlying redundancy solution is a separate topic.
2) NAS usually implies RAID 5 or 6. BTRFS RAID is a chronically "experimental" feature. It will burn your data. Don't do it.

    • @ElectronicsWizardry · a month ago

I haven't touched BTRFS much, as it's still experimental in Proxmox and has issues with RAID 5/6, which is often used for home NAS units. I love many BTRFS features and hope it gets to a stable state soon.
Ceph can do a single-node, mixed-drive-size, easily expandable setup, but it's pretty complex and not really made for single-node setups.

    • @RomanShein1978 · a month ago

@@ElectronicsWizardry I've followed BTRFS RAID56 for a decade. It ain't moving anywhere, unfortunately.

  • @ChrisHolzer · a month ago

    I have Unraid running as VM on my Proxmox server and pass through an HBA to it.
    Works great. :)

    • @ElectronicsWizardry · a month ago

I'm curious. Did you do passthrough of the boot drive, or are you booting from a virtual disk?

  • @protacticus630 · a month ago · +1

Great, I'm just using Proxmox with TurnKey as a container.

  • @OsX86H3AvY · a month ago

The ONLY issue I've had with using Proxmox with Samba as a NAS is that it requires me to make my management NIC one of my 10G cards, because the mgmt NIC is also the Samba share NIC (as I understand it), meaning I can't, or rather don't want to, use just the 1G card for it. But if you plan out NICs and networking well it's no biggie. I have multiple PVE servers: one that is primarily my NAS but with 5 VMs running, one which is mostly for VMs but which also has surveillance disks for a Shinobi VM on it, and a couple more for 'play', and that's worked out pretty well. With Proxmox it's all about how you balance out those resources; that's the KEY thing with PVE - what's the balance you want and need, and do you have the gear to get it?

    • @ewenchan1239 · a month ago · +1

      "the ONLY issue ive had with using proxmox with samba as a nas is that it requires me to make my management NIC one of my 10G cards because the mgmt nic is also the samba share nic (sa i understand it) meaning i cant or rather dont want to use jus the 1G card for it."
      I'm not 100% sure what you mean by this.
      The protocol itself has no network interface requirements.
      You can share it on whatever interface you want, via the IP address that you want your clients to be able to connect to your SMB share with.
      So if you have a 1 GbE NIC and a 10 GbE NIC and your class C IPv4 address is something like 192.168.1.x (for the 1 GbE NIC) and 192.168.10.y (for the 10 GbE NIC), then you can have your clients connect to the 192.168.10.y subnet, if they're on the same subnet.
      The protocol has no network interface requirements.
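      If the goal is to have Samba answer only on the 10 GbE NIC rather than the management NIC, the relevant smb.conf global options look like this (the interface name is a placeholder for your 10 GbE port):
      [global]
      interfaces = lo enp5s0
      bind interfaces only = yes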

  • @gptech2444 · a month ago

    Would there be any problems running mergerfs and snapraid on the proxmox node?

  • @shephusted2714 · a month ago

I'd really like you to not just break it all down but build it all up, in the form of a workstation-to-dual-NAS solution for the prosumer, homelabber, and SMB sector - everybody wants and needs dual-NAS redundancy coupled with fast networking, like 40G. It is very possible and pretty cheap to get going, and it would make for great content - do it with COTS refurb boxes and a few NVMe arrays. #jumboframes #mtu

  • @insu_na · a month ago

    Mighty ElectronicsWizard, do you also have information on how to achieve something similar with Ceph and CephFS?
i.e. a Proxmox cluster of 3 machines with Ceph, and VMs on those 3 cluster nodes having to access a shared drive that's in Ceph?

    • @ewenchan1239 · a month ago

Ceph/CephFS is a distributed filesystem and has no relation to network sharing protocols like SMB/CIFS or NFS.
To that end, though, if you create a CephFS and it is mounted by your Proxmox nodes in your Proxmox cluster, you can absolutely share that same CephFS mount point either with SMB/CIFS and/or with NFS.
For an NFS export, you would point the export path in /etc/exports at that same path; for SMB, you would edit your smb.conf file and point your share at it.
The two work independently of each other.
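      A minimal sketch of the NFS side, assuming the CephFS mount point is /mnt/pve/cephfs and the clients sit on 192.168.10.0/24 (both placeholders):
      # /etc/exports
      /mnt/pve/cephfs  192.168.10.0/24(rw,sync,no_subtree_check)
      # apply without restarting the NFS server:
      # exportfs -ra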

    • @insu_na · a month ago

      @@ewenchan1239 You can absolutely consume CephFS block devices over the network. It just also needs libcephfs on the client. The problem is just that it's tricky to set up

    • @ElectronicsWizardry · a month ago · +2

The other hack I've seen, if you want to use Ceph or other clustered filesystems on SMB clients, is to set up a VM/container to mount the CephFS, then have that share it over SMB. Then any system can mount the CephFS as a normal SMB share. The Samba-sharing VM does become a single point of failure, but this is likely the best way of mounting on a device that doesn't have an easy way to get the CephFS client installed.

    • @ewenchan1239 · a month ago

      @@insu_na
      I could be wrong, but it is my understanding that libcephfs isn't available in Windows.
Therefore, this wouldn't work.
Conversely, if you set up CephFS and then mount it on the host (e.g. mount it to /mnt/pve/cephfs), then in your /etc/samba/smb.conf you can point your SMB share at that mount point.
That way, a) your client doesn't need libcephfs (is there REALLY a reason why a client wants native CephFS access (i.e. CephFS, not Ceph RBD)? I can understand that if you want Ceph (RBD) access you would want and/or need it on the client, which again, I'm not certain is available on Windows clients, maybe as an alternative to iSCSI; but if you don't need Ceph RBD and only want/need CephFS, this method should work for you), and b) you don't need a VM to mount the CephFS only to then share it out over SMB.
      You can have the Proxmox host do that natively, on said Proxmox host itself.

    • @ewenchan1239 · a month ago

      @@ElectronicsWizardry
      You CAN do that, but you don't NEED to do that.
If you're using Proxmox as a NAS, then you can just mount the CephFS pool directly in Proxmox, then edit /etc/samba/smb.conf and point your share at that mount point (e.g. /mnt/pve/cephfs).
      You don't need to route/pass it through a VM.
      Conversely, however, if you DO route it through a VM or a CT, then what you can do is store the VM/CT disk/volume on shared storage, and then if you have a Proxmox cluster (which you'll need for Ceph anyways), you can configure HA for that VM/CT, such that if one of the nodes has an issue, you can have the VM/CT live migrate over to another node within the Proxmox cluster, and that way, you won't lose connectivity to the CephFS SMB share.
      That would be ONE option as that would be easier to present to your network than trying to configure it for the three native Proxmox nodes.

  • @DavidAlsh · 6 days ago

    I have a bunch of odd sized hard drives that I threw into my Proxmox, what software RAID type should I use?

    • @ElectronicsWizardry · 6 days ago · +1

Proxmox doesn't have a great native way to mix drive sizes, as the main included RAID solution, ZFS, doesn't support mixed drives well. BTRFS is an option, but the unstable RAID 5/6 would worry me if it were my main copy of data. You can use SnapRAID + mergerfs on Proxmox if it's for media files, but this wouldn't be suited to something like VMs.
I also made a video earlier about RAID-like solutions for mixed drive sizes that might help show the pros and cons of the different options: th-cam.com/video/NQJkTiLXfgs/w-d-xo.html
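      For anyone going that route, a minimal sketch of what the SnapRAID + mergerfs pairing typically looks like (disk and parity paths are placeholders):
      # /etc/fstab - pool three data disks into /mnt/pool with mergerfs
      /mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  cache.files=off,dropcacheonclose=true,category.create=mfs  0 0
      # /etc/snapraid.conf - one parity disk plus the data disks
      parity /mnt/parity1/snapraid.parity
      content /var/snapraid.content
      data d1 /mnt/disk1
      data d2 /mnt/disk2
      data d3 /mnt/disk3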

  • @andreasrichman5628 · 27 days ago

13:20 With a blank mount point, does it mean mounting the whole device (physical disk)? And what size (GiB) do we have to input?

    • @ElectronicsWizardry · 27 days ago · +1

Setting up a new empty mount point for the container will be limited to the size that you select in the add mount point prompt. You can increase the size of the mount point later on if needed.
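      Growing it later is a one-liner on the host; a sketch with the container ID and mount-point ID as placeholders:
      # grow mount point mp0 of container 101 by 32 GiB (the filesystem inside is resized automatically)
      pct resize 101 mp0 +32G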

    • @andreasrichman5628 · 26 days ago

@@ElectronicsWizardry Thanks for your reply. Btw, with the container's bind mount (TurnKey), can I move the physical disk to another computer (bare-metal Ubuntu) without the need to install Proxmox or anything else? I just want it to be easy to move the disk around in case something happens to the host. Do you have any advice? I'm new to NAS/file servers.

    • @ElectronicsWizardry · 26 days ago · +1

@@andreasrichman5628 If you do the bind mount, you should be able to move the drive to a new system and access all the data. Just move the drive to the Ubuntu system, and it should be able to mount the drive (you may need to install a package on Ubuntu to use the filesystem).

    • @andreasrichman5628 · 24 days ago

@@ElectronicsWizardry Just to be sure, with a VM (disk passthrough) I can also move the physical disk to another machine (bare-metal Ubuntu) and access all the data, right?

  • @ewenchan1239 · a month ago

    re: ZFS on root
If you install Proxmox on a mirrored ZFS root and you then want to do things like GPU passthrough, the guides that you will likely find online for how to do this won't always necessarily tell/teach you how to update the kernel/boot parameters for ZFS on root.
As a result, I stayed away from it and used my Broadcom/Avago/LSI MegaRAID 12 Gbps SAS RAID HBA to create a RAID6 array for my Proxmox OS boot drive; that way the Proxmox installer would install onto a "single drive" when really it was 4x 3 TB HGST HDDs in a RAID6 array.
That way, if one of my OS disks goes down, my RAID HBA can handle the rebuild.

    • @ElectronicsWizardry · a month ago

I am pretty sure you can do PCIe passthrough with ZFS as the boot drive. I think booting from ZFS uses the Proxmox boot tool instead of GRUB, and different config files have to be edited to enable IOMMU.

    • @ewenchan1239 · a month ago

      @@ElectronicsWizardry
      "I am pretty sure you can do PCIe passthrough with ZFS as the boot drive. I think ZFS as boot uses Proxmox boot manager instead of grub, and different config files have to be edited to enable iommu."
      You can, but the process for getting that up and running isn't nearly as well documented in the Proxmox forums vs. if you're using a non-ZFS root, where you can just update /etc/grub/default, and then run update-initramfs -u; update-grub; reboot to update the system vs. if you're using ZFS root, to update the kernel boot params, you need to do something else entirely.
      When I first deployed my consolidated server in January 2023, I originally set it up with a ZFS root, and ran into this issue very quickly, and that's how and why I ended up setting up my 4x 3 TB HGST HDDs in a RAID6 array rather than using raidz2 because with my RAID6 OS array, Proxmox would see it as like a "normal" drive, and so, I was then able to follow the documented steps for GPU passthrough.
      If it works, why break it?

    • @ElectronicsWizardry · a month ago · +1

@@ewenchan1239 I think Proxmox's reason for using the Proxmox boot tool instead of standard GRUB for ZFS boot is so that they can have redundant boot loaders. I don't think GRUB is made to be on multiple drives, whereas the Proxmox boot tool is made to be on all drives in the ZFS pool and have them all updated when a new kernel/kernel option is installed.
I agree it would be nice if they just used GRUB, but I think editing kernel options with the Proxmox boot tool should just be editing /etc/kernel/cmdline and then running proxmox-boot-tool refresh.
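      For reference, the two paths look roughly like this; a sketch assuming an Intel system (on AMD, the amd_iommu option is generally on by default):
      # GRUB (non-ZFS root): edit /etc/default/grub
      #   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
      update-grub
      # proxmox-boot-tool (ZFS root / systemd-boot): append the same options to /etc/kernel/cmdline
      #   root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
      proxmox-boot-tool refresh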

    • @ewenchan1239 · a month ago · +1

      @@ElectronicsWizardry
      To be honest, since I got my "do-it-all" Proxmox server up and running, I didn't really spend much more time, trying to get ZFS on root to work with PCIe/GPU passthrough.
      As a result, I don't have deployment notes in my OneNote that I would then be able to share with others here, with step-by-step instructions so that they can deploy it themselves.
      I may revisit that in the future, but currently, I don't have any plans to do so.

  • @typingcat · 20 days ago

    But what about the disk IO performance in a VM?

    • @ElectronicsWizardry · 19 days ago

This depends on how the VM is set up and whether you're using passthrough or virtual disks, but generally VMs have good disk performance, and likely more than what would be needed.

  • @shootinputin6332 · 19 days ago

    Would it be crazy to run UnRaid as a VM on Proxmox?

    • @ElectronicsWizardry · 19 days ago

Unraid can make a lot of sense in a VM. Their parity setup is one of the best if you want flexible multi-drive setups. I have found it works well to put a USB stick in the server and pass the USB device through, so Unraid can use the GUID correctly for licensing.

  • @ewenchan1239 · a month ago · +1

    "Battery backed caching for high speed I/O."
Sorry, but that's actually NOT what the battery backup unit (BBU) is for with regard to RAID HBAs.
Battery backup units (BBUs) are used on RAID HBAs to protect against the write-hole issue that may present itself in the event of a power failure.
The idea is that if you are writing data and then lose power, the system won't know which data was still in flight, in the process of being committed to stable storage (disk(s)).
    A BBU basically keeps the RAID card alive long enough to flush the DRAM that's on said RAID HBA to disk, so that any data that's in volatile memory (DRAM cache of the RAID HBA) won't be lost.
    It has nothing to do with I/O performance.

    • @ElectronicsWizardry · a month ago · +1

I want to say the DRAM on a RAID card is used for caching disk I/O in addition to storing in-flight data to prevent the write-hole issue. RAID cards let the onboard DRAM be used as a write-back cache safely, as it won't be lost in a power outage. Also, I have seen much faster short-term write speeds when using RAID cards, making me think the cache is used in this way. This does depend on the RAID card, and there are likely some that only use the cache to prevent write-hole issues.

    • @ewenchan1239 · a month ago · +1

      @@ElectronicsWizardry
      Um....it depends.
      If you're using async writes, what happens is that for POSIX compliance, writing to the DRAM on a RAID HBA will be considered as a write acknowledgement that's sent back to the application that's making said write request.
      So, in effect, your system is "lying" to you by saying that data has been written (committed) to disk when really, it hasn't. It's only been written to the DRAM cache on the RAID HBA and then the RAID HBA sets the policy/rule/frequency for how often it will commit the writes that have been cached in DRAM and flush that to disk.
      Per the Oracle ZFS Administration guide, the ZFS intent log is, by design, intended to do the same thing.
      Async writes are written to the ZIL (and/or if the ZIL is on a special, secondary, or dedicated ZIL device, known as a SLOG device), and then ZFS manages the flushes from ZIL to commit to disk either when the buffer is full or in 5 second intervals, whichever comes first.
If you're using synchronous writes, whereby a positive commitment to disk is required before the ACK is sent back, then you generally won't see much in the way of a write speed improvement, unless you're using tiered storage.
Async writes CAN be dangerous for a variety of reasons, and some applications (e.g. databases) sometimes (often) require sync writes to make sure that the database table itself doesn't get corrupted as a result of the write hole due to a power outage.
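      On the ZFS side, that sync vs. async behavior is controlled per dataset by the sync property; a quick sketch with a placeholder pool/dataset name:
      zfs get sync tank/share            # standard | always | disabled
      zfs set sync=always tank/share     # treat every write as synchronous (committed via the ZIL/SLOG before ACK)
      zfs set sync=standard tank/share   # default: only writes the application requests as sync go through the ZIL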

  • @LubomirGeorgiev · 19 days ago

You got a Patreon?