Running a NAS on Proxmox, Different Methods and What to Know

  • Published Dec 20, 2024

Comments • 130

  • @haroldwren1660 · 6 months ago +67

    This is not a tutorial. It is a masterclass. Just brilliant. Thank you.

    • @chimpo131 · 3 months ago +2

      lol made by golem 😂😂

    • @mrq332 · 25 days ago

      @@chimpo131 it's Gollum, "My precious" 🤣🤣🤣

  • @handle_your_set · 2 days ago +1

    I don't know how you don't have a shit ton more subscribers. Your videos are perfect content. All information, presented beautifully. Thank you!

  • @Gosuminer · 3 days ago

    There are lots of videos out there about Proxmox, TrueNAS etc., but ElectronicsWizardry goes the extra mile of explaining all of it with just the right amount of detail required to not mindlessly follow instructions but to understand what you are doing. Thank you, your videos are very much appreciated.

  • @monish05m · 7 months ago +14

    Fantastic video.
    One note to viewers: if you're doing storage, and especially ZFS or btrfs, always make sure you purchase CMR drives; otherwise be prepared to never be able to recover your pool if one of the disks goes bad.
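    A rough check from a Linux shell (a sketch, assuming the drive shows up as /dev/sda): host-aware and host-managed SMR drives report their zoned model in sysfs, though drive-managed SMR still reports "none", so the model's datasheet is the only sure way to confirm CMR.
    # cat /sys/block/sda/queue/zoned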

    • @Tr4shSpirits · 3 months ago

      So if I just throw in some random (but same-size) drives, i.e. 2x 5TB SATA drives in a mirrored ZFS pool, one fails, and I replace it with a new one, it will not work / be a waste?

    • @monish05m · 3 months ago

      @@Tr4shSpirits A mirror should work, but RAIDZ will not rebuild. Mirrors are very space-inefficient.

    • @Tr4shSpirits · 3 months ago

      @@monish05m Got it. I know, but for personal media storage I am fine with it. HDDs are very cheap now, so I just run mirroring for this purpose. My compute server has a proper RAID card and parity, though.

  • @Bildaling92 · 3 months ago +3

    You're actually a legend, clear and easy to understand. Keep up the good work!

  • @williamtopping · 4 months ago +7

    By far the most comprehensive guide going. Not only did you demonstrate all the various methods, you explained how to do them, AND what to be aware of if you do.
    This is easily the BEST guide on TH-cam for this particular scenario. If I come across any forum post asking about this, I will be referencing this particular video.
    Outstanding.

  • @RomanShein1978 · 7 months ago +4

    4:16 "if I have a parity array or RAID Z array in ZFS terminology I can't just add a 6th drive to a 5 drive array for example"
    - RAIDZ expansion has been implemented in OpenZFS, although I don't know its current status in Proxmox.

    • @ElectronicsWizardry · 7 months ago +2

      RAIDZ expansion seems to be in the OpenZFS 2.3 release, which isn't out yet. Once OpenZFS releases it, it will probably be in the next Proxmox version. I'm guessing it's probably less than a year out now, and I'll make a video when it's added.

  • @rteune2416 · 7 months ago +12

    Using the Turnkey file server you suggested in your previous video. It's the best method I have seen so far. I don't need all the bloatware the others offer. Turnkey file server offers a lean, mean NAS machine lol.

    • @HerrFreese · 7 months ago +2

      I also tried the Turnkey file server, but at some point I missed the filesystem management features of btrfs, on which I store the files. Also, by then I had read pretty deep into smb.conf and missed some options (which I then put into smb.conf manually). So I now run my NAS on a Debian VM without any GUI.

  • @MRPtech · 7 months ago +12

    Clear explanation. Amazing!
    I have it all with 2 boxes. VMs/LXCs should run fast, so they sit inside an NVMe Ceph pool, while ISOs and backups are located on a Synology NAS.

  • @darkdnl · 6 months ago +10

    My favorite way to manage SMB/NFS shares is Cockpit in a privileged LXC container :)

    • @ElectronicsWizardry · 6 months ago +7

      That's a good idea. I'm gonna look up Cockpit some more as it's been mentioned in the comments a few times.

    • @williamtopping · 4 months ago +1

      I have another video stored in my playlist that explains exactly how to do this. You're not the only person to recommend this solution.
      I appreciate this video as it outlines all the other options on a high level whilst still explaining how to go about it.

  • @idle_user · 7 months ago +8

    Please make a video about SMB shares. I always have to look up a guide to make a basic share.
    It'll be nice to know what other options I have

  • @HerrFreese · 7 months ago +4

    Another advantage of using VMs or containers for the NAS is, in my opinion, network isolation and the ease of putting the NAS on the networks I want.

  • @OvergrownBear · 12 days ago +1

    Greetings from Germany! You really helped me choose the right solution. Thank you! And have a nice day.

  • @AnkushNarula · a month ago

    Keep up the good work!

  • @nirv · 7 months ago +4

    Thank you! I struggled with this back in August of 2023 setting up my first Proxmox PC ever, and it took a few days before I understood it more and figured it out. All I wanted was to slap some hard drives into the Proxmox PC and share them with whatever CT/VM I created. One requires a bind mount, the other requires NFS. I had no idea at the time because I'm still fairly new to Linux.
    But I appreciate this video because I still haven't set up an SMB share, and I think I will now! I'm tired of having to use WinSCP to copy files from Linux to Windows when I can just do an SMB share and immediately have easy access.
    Thank you for useful, insightful videos for Linux n00bs like me. :D

    • @ewenchan1239 · 7 months ago

      If you need help with deploying Samba quickly, I can copy and paste my deployment notes from my OneNote at home.
      I think a lot of guides overcomplicate things, when really there are fewer than 20 lines that you need to get a simple, basic SMB share up and running.

    • @nirv · 7 months ago

      @@ewenchan1239 I think I'm going to use the TurnKey template for sharing built right into Proxmox. This guy also did a tutorial on it, as he mentioned in this video. But thanks!

    • @ewenchan1239 · 7 months ago +7

      @@nirv
      No problem.
      For the benefit of everybody else, I'll still post my deployment notes for deploying Samba on Debian 11 (Proxmox 7.4-3). (The instructions should still work for Debian 12 (Proxmox 8.x).)
      //begin
      Installing and configuring Samba on Debian 11:
      # apt install -y samba
      # systemctl status smbd nmbd
      # cd /etc/samba
      # cp smb.conf smb.conf.bk
      # vi smb.conf
      add to the bottom:
      [export_myfs]
      comment = Samba on Debian
      path = /export/myfs
      browseable = yes
      read only = no
      valid users = @joe
      # smbpasswd -a joe
      # systemctl restart smbd nmbd
      //end
      That's it.
      With these deployment notes, you now have access to an SMB share directly on your Proxmox host, without the need for a VM or a CT.
      That way, the only way this SMB share will go down is if there's a problem with smbd/nmbd itself, and/or if your Proxmox host goes down; at which point, you have other issues to deal with.
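      To sanity-check the share afterwards (a sketch, assuming the smbclient package is installed; the share and user names match the notes above):
      # apt install -y smbclient
      # smbclient -L localhost -U joe
      # smbclient //localhost/export_myfs -U joe -c 'ls'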

  • @KrisFromFuture · 7 months ago +6

    Cockpit + Filesharing plugin in an LXC is the best option for me.
    Thanks

  • @bretlinden8248 · 4 months ago +1

    Very educational. Explained very well. Thank you. As a new Proxmox user I learned a lot from this.

  • @jrherita · 6 months ago +2

    This video was perfectly timed for me as I’m looking to finally migrate some NTFS Shares from a Windows 2016 VM under Proxmox to something more modern. Time to experiment - Thank you!!

  • @twentyrothmans7308 · 7 months ago +5

    Pertinent timing!
    I was just trying to figure out a way for my Proxmox to detect where it is sharing NFS, so that it can gracefully close the shares down if I'm powering it off. I can parse journalctl, and I've looked at nfswatch (which is pretty old).
    I wonder if there's a better way.
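    (One sketch of an approach, assuming NFSv4 and a kernel new enough to expose /proc/fs/nfsd/clients: check for live client connections and registered v4 clients before shutting down.)
    # ss -tn state established '( sport = :nfs )'
    # ls /proc/fs/nfsd/clients/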

  • @tomo8224 · 7 months ago +2

    Love your videos, two thumbs up.
    I would also consider throwing Xpenology in a VM on Proxmox. Prox would help in testing updates.

  • @DSVWARE · 7 months ago +2

    One thing I noticed is that backing up LXC mount points is pretty slow vs VM disks (qcow? I can't remember). I imagine it's because the content of a mount point might have to be crawled, with Proxmox checking for the archive bit, versus using some sort of changed-block tracking for VM disks.
    So any shares I would run on VMs; even if reads are quick, the crawl is wasted processing time and wear on the disks.

    • @ElectronicsWizardry · 7 months ago +3

      I'll need to check the backup time comparison, but I think you're right about containers doing a file-level backup and being slower. Also, if using PBS, you can back up only the changes to a running VM, making incremental backups much faster.

  • @andreasrichman5628 · 6 months ago

    13:20 With a blank mount point, does it mean mounting the whole device (physical disk)? And what size (GiB) do we have to input?

    • @ElectronicsWizardry · 6 months ago +1

      Setting up a new empty mount point for the container will be limited to the size that you select in the add mount point prompt. You can increase the size of the mount point later on if needed.
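      For example, from the host shell (a sketch; 101 stands in for your container ID and mp0 for the mount point you added), this grows both the volume and the filesystem inside it:
      # pct resize 101 mp0 +8G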

    • @andreasrichman5628 · 6 months ago

      @@ElectronicsWizardry Thanks for your reply. Btw, with the container's bind mount (TurnKey), can I move the physical disk to another computer (bare-metal Ubuntu) without the need to install Proxmox or anything else? I just want it to be easy to move the disk around in case something happens to the host. Do you have any advice? I'm new to NAS/file servers.

    • @ElectronicsWizardry · 6 months ago +1

      @@andreasrichman5628 If you do the bind mount, you should be able to move the drive to a new system and access all the data. Just move the drive to the Ubuntu system, and it should be able to mount the drive (you may need to install a package on Ubuntu to use the filesystem).

    • @andreasrichman5628 · 6 months ago

      @@ElectronicsWizardry Just to be sure: with a VM (disk passthrough) I can also move the physical disk to another machine (bare-metal Ubuntu) and access all the data, right?

  • @ivanmaglica264 · 7 months ago

    I install Samba servers in LXC containers. Best mix of performance and flexibility. If it breaks, you can easily revert if you have a backup, and you don't have to worry about polluting the host OS.

  • @namregzepol · 7 months ago +1

    Good explanation of VMs vs containers! Thanks. But I miss a bit other possible solutions for the storage part. For example, I have been looking into btrfs recently. Also, the ideal solution, in a three-node example, would be to have some storage on each of the nodes. Is there any solution like GlusterFS or similar in Proxmox?

    • @namregzepol · 7 months ago

      I forgot to mention Ceph, but I don't know if it is the best solution...

    • @RomanShein1978 · 7 months ago +1

      "But I miss a bit other possible solutions for the storage part. For example, I have been looking into btrfs recently."
      1) This is a bit off-topic. He rightfully mentions passthrough, because it involves the hypervisor. Otherwise, the underlying redundancy solution is a separate topic.
      2) NAS usually implies RAID5 or 6. BTRFS RAID is a chronically "experimental" feature. It will burn your data. Don't do it.

    • @ElectronicsWizardry · 7 months ago

      I haven't touched BTRFS much as it's still experimental in Proxmox and has issues with RAID 5/6, which is often used for home NAS units. I love many BTRFS features and hope it gets to a stable state soon.
      Ceph can do a single-node, mixed-drive-size, easily expandable setup, but it's pretty complex and not really made for single-node use.

    • @RomanShein1978 · 7 months ago

      @@ElectronicsWizardry I've been following BTRFS RAID56 for a decade. It ain't moving anywhere, unfortunately.

  • @frauseo · 7 months ago +2

    Finally someone competent answering, once and for all, all the Reddit questions on how to create a NAS XD

  • @nikolap2153 · 7 months ago +1

    Thank you for your amazing videos!
    My setup is:
    Supermicro X10SRH-CF
    2x120GB SSD boot drive of Proxmox
    For the fileserver part:
    2x400GB SSD (partitioned 300+64GB)
    6x4TB
    Connected via HBA
    VM/LXC stay on 300GB SSD (zfs mirror)
    "NAS" is located at 6x4TB raidz2 + 64GB (zfs mirror) special device
    All is managed by Cockpit plus 45drives plugins for ZFS and Fileshare
    It's for home use, not HA 😅

  • @TheTrulyInsane · 7 months ago +2

    I use a VM with TrueNAS myself, 12 drives, works great.

  • @crackshot7579 · 22 days ago

    Awesome video. Thank you!!

  • @billytran910 · 2 months ago

    Love your in-depth videos! An SMB video would be awesome!!

    • @ElectronicsWizardry · 2 months ago

      Let me add that to my list. Want to make sure I do the video right so it might take some time.

  • @franciscolastra · 6 months ago

    Great contribution. Many, many thanks!!!!
    Which NAS system would you recommend for the LXC case???

    • @ElectronicsWizardry · 6 months ago

      What do you mean by NAS system? I'd look at TurnKey if you want an easy web interface, or a lightweight distro like Debian/Alpine if you want to edit the Samba config manually.

  • @DominikSchmid · 7 months ago

    Following your review of Ugreen's NASync DXP480T, I backed the project. My plan is to install Proxmox on it and virtualise a NAS as well as a VM with an Arch Linux or NixOS desktop which I can access from anywhere. I hope that this desktop will have very little latency, in order to replace my current desktop installation on bare metal. It would be great if you could make a series of videos showing how to achieve this.

  • @1941replica · 7 months ago

    Very useful video, it made it easy for me to determine the best option for my NAS setup.

  • @asbestinuS · 6 months ago

    I also prefer to use a VM or a container for my file shares. The nice thing with containers and mount points is that if you increase the size, it automatically increases the filesystem in the container as well. I once saw apalrd's video on using a container and installing 45Drives' cockpit-file-sharing to have a nice GUI. If you create a user, it automatically creates an SMB password as well. It's so easy to use, and I still use this for some backups on my main computer. I never liked TrueNAS, so using the 45Drives tools was a fantastic idea.

  • @User-ec2bh · a month ago

    A dedicated video on Samba sounds great. I'm still struggling to get the rights correct for both Windows and Linux at the same time.
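    One pattern that seems to help (a sketch, not the only way; the share name and the "nas" group are made up) is forcing everything written through the share to one group and predictable modes, so Linux users in that group and Windows clients see consistent permissions:
    [shared]
    path = /srv/shared
    read only = no
    force group = nas
    create mask = 0664
    directory mask = 2775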

  • @cberthe067 · 7 months ago +7

    A scenario I would like to see is creating a CephFS filesystem in a Proxmox cluster and exposing it as an SMB file server to client OSes...

    • @ewenchan1239 · 7 months ago +6

      You can do that.
      That's actually pretty easy.
      I don't know if I have my erasure-coded CephFS exposed as an SMB share, but you can absolutely do that.
      You mount the CephFS to a mount point, and then in your smb.conf file you just point your share to the same location, and now you've shared your CephFS as an SMB share.
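      As a sketch (assuming the kernel CephFS client, a cephx keyring for a user named samba, and a monitor at 10.0.0.1; the share name is made up):
      # mount -t ceph 10.0.0.1:/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret
      then in smb.conf:
      [cephfs]
      path = /mnt/cephfs
      read only = no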

  • @DavidAlsh · 6 months ago +1

    I have a bunch of odd-sized hard drives that I threw into my Proxmox box; what software RAID type should I use?

    • @ElectronicsWizardry · 6 months ago +1

      Proxmox doesn't have a great native way to mix drive sizes, as the main included RAID solution, ZFS, doesn't support mixed drives well. BTRFS is an option, but the unstable RAID 5/6 would worry me if it were my main copy of data. You can use SnapRAID + mergerfs on Proxmox if it's for media files, but this wouldn't be suited to something like VMs.
      I also made a video earlier about RAID-like solutions for mixed drive sizes that might help show the pros and cons of the different options: th-cam.com/video/NQJkTiLXfgs/w-d-xo.html
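      As a rough sketch of that combo (all paths and drive names are placeholders): SnapRAID adds parity you sync on a schedule, while mergerfs pools the data drives into one mount.
      /etc/snapraid.conf:
      parity /mnt/parity1/snapraid.parity
      content /mnt/disk1/snapraid.content
      data d1 /mnt/disk1
      data d2 /mnt/disk2
      /etc/fstab entry for the pool:
      /mnt/disk1:/mnt/disk2 /mnt/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0
      then on a schedule:
      # snapraid sync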

  • @ChrisHolzer · 6 months ago

    I have Unraid running as VM on my Proxmox server and pass through an HBA to it.
    Works great. :)

    • @ElectronicsWizardry · 6 months ago

      I'm curious: did you do passthrough of the boot drive, or are you booting from a virtual disk?

  • @oliversmall · 6 months ago

    This was exactly what I was looking for, thank you!

  • @davidlakes5087 · 4 months ago

    Thank you so much!! Is it feasible to do SMB in an unprivileged container?

    • @ElectronicsWizardry · 4 months ago

      I think it works fine in unprivileged containers, but I haven't tested it myself.

  • @AlexBenfica · a month ago

    Excellent explanation. I tried TurnKey and it did not allow my NVMe to work at full speed. I tried a Windows 10 VM with Samba and it did. Still don't know why.

    • @ElectronicsWizardry · 28 days ago

      What speeds were you seeing? I have seen that tuning can be needed at times to get the most out of SMB on >1GbE networks.

  • @ooisee · a month ago

    Wow! It is an amazingly condensed 18 minutes ^_^ Thanks a million ^_^

  • @GeekendZone · 2 months ago +1

    Good explanation!

  • @insu_na · 7 months ago

    Mighty ElectronicsWizardry, do you also have information on how to achieve something similar with Ceph and CephFS?
    i.e. a Proxmox cluster of 3 machines with Ceph, and VMs on those 3 cluster nodes having to access a shared drive that's in Ceph?

    • @ewenchan1239 · 7 months ago

      Ceph/CephFS is a distributed filesystem and has no relation to network sharing protocols like SMB/CIFS or NFS.
      To that end though, if you create a CephFS and it is then mounted by your Proxmox nodes in your Proxmox cluster, you can absolutely share that same CephFS mount point, either with SMB/CIFS and/or with NFS.
      As an NFS export, you would point the export path in /etc/exports to that same path; and/or for SMB, you would edit your smb.conf file and point your share to it.
      The two work independently of each other.
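      For example, the /etc/exports line might look like this (a sketch; the mount point and subnet are placeholders), followed by re-exporting:
      /mnt/pve/cephfs 192.168.1.0/24(rw,sync,no_subtree_check)
      # exportfs -ra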

    • @insu_na · 7 months ago

      @@ewenchan1239 You can absolutely consume CephFS over the network. It just also needs libcephfs on the client. The problem is just that it's tricky to set up.

    • @ElectronicsWizardry · 7 months ago +2

      The other hack I've seen, if you want to use Ceph or other clustered filesystems on SMB clients, is to set up a VM/container that mounts CephFS, then have it share that over SMB. Then any system can mount the CephFS as a normal SMB share. The Samba-sharing VM does become a single point of failure, but this is likely the best way of mounting on a device that doesn't have an easy way to get the CephFS client installed.

    • @ewenchan1239 · 7 months ago

      @@insu_na
      I could be wrong, but it is my understanding that libcephfs isn't available on Windows.
      Therefore, this wouldn't work.
      Conversely, if you set up CephFS and then mount it on the host (e.g. mount it to /mnt/pve/cephfs), then in your /etc/samba/smb.conf you can point your SMB share to that mount point.
      That way, a) your client doesn't need libcephfs (is there REALLY a reason why a client wants native CephFS access (i.e. CephFS, not Ceph RBD)? I can understand that if you want Ceph (RBD) access you would want and/or need libcephfs on the client, which again I'm not certain is available on Windows clients, maybe as an alternative to iSCSI; but if you don't need Ceph RBD and only want/need CephFS, then this method should work for you), and b) you don't need a VM to mount the CephFS only to then share it out over SMB.
      You can have the Proxmox host do that natively, on said Proxmox host itself.

    • @ewenchan1239 · 7 months ago

      @@ElectronicsWizardry
      You CAN do that, but you don't NEED to.
      If you're using Proxmox as a NAS, then you can just mount the CephFS pool directly in Proxmox, then edit /etc/samba/smb.conf and point your share to that mount point (e.g. /mnt/pve/cephfs).
      You don't need to route/pass it through a VM.
      Conversely, however, if you DO route it through a VM or a CT, then what you can do is store the VM/CT disk/volume on shared storage, and then, if you have a Proxmox cluster (which you'll need for Ceph anyway), you can configure HA for that VM/CT, such that if one of the nodes has an issue, the VM/CT can live-migrate over to another node within the Proxmox cluster; that way, you won't lose connectivity to the CephFS SMB share.
      That would be ONE option, as it would be easier to present to your network than trying to configure it across the three native Proxmox nodes.

  • @LinusTorvi · 3 months ago

    Man, I wish I knew as much on this topic! Could you help me out with my setup? I run Proxmox on a 128GB SSD. On top of this I have a 2TB NVMe and a 2.5-inch mechanical drive, also 2TB. I am planning to create a ZFS pool on the NVMe to use for VMs, and another ZFS pool on the mechanical drive to use for backups. Would this be fine? I also have an old NAS on the network, so I could use some network mount points if I, let's say, set up a Nextcloud instance. What do you think? Thanks a lot.

    • @ElectronicsWizardry · 3 months ago +1

      Sure, that seems like what I'd do here. Having a 'speed' pool for VMs that need speed and a 'space' pool for stuff that doesn't makes a lot of sense.

    • @LinusTorvi · 3 months ago

      Thanks a lot! I made it today :) so far so good. ​@ElectronicsWizardry

  • @lynor95 · 3 months ago

    Is OMV 7 a good option if, for some personal preferences, I prefer using it instead of Unraid or TrueNAS? Will it still support SMB, NFS, Time Machine backup, and most of the features Unraid & TrueNAS have?

    • @ElectronicsWizardry · 3 months ago

      Sure, OMV 7 works fine. It should support all the standard features (and it uses the same Linux utilities under the hood for sharing, so performance and compatibility should be similar).

  • @peteradshead2383 · 7 months ago

    I did more or less the same, but in an LXC with it mapped to the main Proxmox host, and using Webmin.
    I'm thinking of getting a Minisforum MS-01 and fitting an external HBA card in the 8-lane GPU slot, feeding into a 4-8 bay drive enclosure, but I can't find any that have SATA plugs at the back with a built-in PSU; I can only find internal ones for servers.

  • @protacticus630 · 7 months ago +2

    Great, just using Proxmox with TurnKey as a container.

  • @MrTR909 · 2 months ago

    Did I get that right: when I pass through a storage drive (SSD, NVMe, etc.) to a VM, I can't use PBS to back up those passed-through storage devices, even though I can check the backup box?

    • @ElectronicsWizardry · 2 months ago

      Yup, a disk passed through like /dev/sda can't be backed up. Your best way to back it up is to switch to virtual disks or back up from within the VM.
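      If you go the in-guest route with PBS, the standalone client can push the VM's filesystem to a datastore (a sketch; the repository string and datastore name are placeholders):
      # proxmox-backup-client backup root.pxar:/ --repository backup@pbs@192.168.1.10:datastore1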

    • @MrTR909 · 2 months ago

      @@ElectronicsWizardry Thank you for responding. In that case I need to do it within the VM, which makes sense.

  • @OsX86H3AvY · 7 months ago

    The ONLY issue I've had with using Proxmox with Samba as a NAS is that it requires me to make my management NIC one of my 10G cards, because the mgmt NIC is also the Samba share NIC (as I understand it), meaning I can't, or rather don't want to, use just the 1G card for it. But if you plan out NICs and networking well, it's no biggie. I have multiple PVE servers: one that is primarily my NAS but with 5 VMs running, one which is mostly for VMs but which also has surveillance disks for a Shinobi VM on it, and a couple more for 'play', and that's worked out pretty well. With Proxmox it's all about how you balance out those resources; that's the KEY thing with PVE: what's the balance you want and need, and do you have the gear to get it.

    • @ewenchan1239 · 7 months ago +1

      "the ONLY issue ive had with using proxmox with samba as a nas is that it requires me to make my management NIC one of my 10G cards because the mgmt nic is also the samba share nic (sa i understand it) meaning i cant or rather dont want to use jus the 1G card for it."
      I'm not 100% sure what you mean by this.
      The protocol itself has no network interface requirements.
      You can share it on whatever interface you want, via the IP address that you want your clients to be able to connect to your SMB share with.
      So if you have a 1 GbE NIC and a 10 GbE NIC and your class C IPv4 address is something like 192.168.1.x (for the 1 GbE NIC) and 192.168.10.y (for the 10 GbE NIC), then you can have your clients connect to the 192.168.10.y subnet, if they're on the same subnet.
      The protocol has no network interface requirements.

  • @sale666 · 2 days ago

    One thing I'm trying to do now is add a disk to TrueNAS... I went to Disks and set it as LVM-thin (don't know if it makes a difference), then I go to TrueNAS and no disk is listed there... What am I doing wrong?

    • @ElectronicsWizardry · 2 days ago +1

      TrueNAS should just see the disk added to the VM. Maybe check if TrueNAS can see the disk in the terminal. Also try using different types of disks, like SATA instead of Virtio.
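      For reference, attaching a whole physical disk to the VM is done from the host shell rather than the storage GUI (a sketch; 100 is a placeholder VM ID, and using /dev/disk/by-id keeps the mapping stable across reboots):
      # qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL
      Inside TrueNAS, lsblk should then list the new disk.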

    • @sale666 · a day ago

      @@ElectronicsWizardry Yep, done... Seems I didn't do the formatting as I thought I did... Now TrueNAS sees the disk! Thanks!

  • @boneappletee6416 · 7 months ago

    Another fantastic video, thank you! :)
    I completely forgot that turnkey containers exist... 🤦🏻‍♂️ Definitely going to use that for a small file server I've been meaning to get going.
    If there's enough interest / script material for it, could you do a deeper dive into these TurnKey VMs/containers? I haven't used them much, personally.

    • @ElectronicsWizardry · 7 months ago +1

      Glad I helped you with setting up your NAS on Proxmox.
      A video on Turnkey containers is a good idea. I'll start playing around with them soon.

  • @gptech2444 · 7 months ago

    Would there be any problems running mergerfs and SnapRAID on the Proxmox node?

  • @RomanShein1978 · 7 months ago +3

    A bit of a crazy suggestion for a future video: running TrueNAS Scale or UnRAID as.... a container. Theoretically, it should be possible.

    • @ewenchan1239 · 7 months ago

      You have to create the LXC container yourself, which isn't a super trivial task, but it can be done.
      (Actually, in theory, it would be easier to deploy TrueNAS Scale as a container than UnRAID, because TrueNAS Scale also runs on top of Debian; so really, all you would need to do is add the repos, find the difference in the list of packages that are installed, write that package delta to a text file, and then install that package diff in a Debian LXC container.)
      That part isn't super difficult. It might take a little bit of testing to make sure everything works as it should, but there shouldn't really be any technical reason why this method *can't* work.

  • @gerry2345 · 5 months ago

    I like this vid. Good insight and good tips.

  • @typingcat · 6 months ago

    But what about the disk IO performance in a VM?

    • @ElectronicsWizardry · 6 months ago

      This depends on how the VM is set up and whether you're using passthrough or virtual disks, but generally VMs have good disk performance, likely more than what a NAS would need.
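      If you want to put a number on it, fio inside the guest gives a quick benchmark (a sketch; adjust the test file path and size to your setup):
      # fio --name=randrw --filename=/tmp/fio.test --size=1G --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based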

  • @ewenchan1239 · 7 months ago

    re: ZFS on root
    If you install Proxmox on a mirrored ZFS root, and you then want to do things like GPU passthrough, the guides that you will likely find online won't always tell/teach you how to update the kernel/boot parameters for ZFS on root.
    As a result, I stayed away from it, and instead used my Broadcom/Avago/LSI MegaRAID 12 Gbps SAS RAID HBA to create a RAID6 array for my Proxmox OS boot drive; that way the Proxmox installer would install onto a "single drive" when really it was 4x 3 TB HGST HDDs in a RAID6 array.
    That way, if one of my OS disks goes down, my RAID HBA can handle the rebuild.

    • @ElectronicsWizardry · 7 months ago

      I am pretty sure you can do PCIe passthrough with ZFS as the boot drive. I think ZFS as boot uses the Proxmox boot tool instead of GRUB, and different config files have to be edited to enable IOMMU.

    • @ewenchan1239 · 7 months ago

      @@ElectronicsWizardry
      "I am pretty sure you can do PCIe passthrough with ZFS as the boot drive. I think ZFS as boot uses Proxmox boot manager instead of grub, and different config files have to be edited to enable iommu."
      You can, but the process for getting that up and running isn't nearly as well documented in the Proxmox forums vs. if you're using a non-ZFS root, where you can just update /etc/grub/default, and then run update-initramfs -u; update-grub; reboot to update the system vs. if you're using ZFS root, to update the kernel boot params, you need to do something else entirely.
      When I first deployed my consolidated server in January 2023, I originally set it up with a ZFS root, and ran into this issue very quickly, and that's how and why I ended up setting up my 4x 3 TB HGST HDDs in a RAID6 array rather than using raidz2 because with my RAID6 OS array, Proxmox would see it as like a "normal" drive, and so, I was then able to follow the documented steps for GPU passthrough.
      If it works, why break it?

    • @ElectronicsWizardry · 7 months ago +1

      @@ewenchan1239 I think Proxmox's reason for using the Proxmox boot tool instead of standard GRUB for ZFS boot is so that they can have redundant boot loaders. I don't think GRUB is made to be on multiple drives, whereas the Proxmox boot tool is made to be on all drives in the ZFS pool, and to have all of them updated when a new kernel/kernel option is installed.
      I agree it would be nice if they just used GRUB, but editing kernel options with the Proxmox boot tool should just be a matter of editing /etc/kernel/cmdline and then running proxmox-boot-tool refresh.
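      For example (a sketch for an Intel system with ZFS root; keep whatever root= parameters your installer wrote in /etc/kernel/cmdline and only append the IOMMU flags to that single line):
      # vi /etc/kernel/cmdline
      (append: intel_iommu=on iommu=pt)
      # proxmox-boot-tool refresh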

    • @ewenchan1239 · 7 months ago +1

      @@ElectronicsWizardry
      To be honest, since I got my "do-it-all" Proxmox server up and running, I didn't really spend much more time trying to get ZFS on root to work with PCIe/GPU passthrough.
      As a result, I don't have deployment notes in my OneNote that I would then be able to share with others here, with step-by-step instructions so that they can deploy it themselves.
      I may revisit that in the future, but currently, I don't have any plans to do so.

  • @C0LPAN1C · a month ago

    I've struggled passing hardware resources into LXCs. Passing a controller or ZFS drives into a Proxmox VE-hosted TrueNAS VM is eons easier.

  • @shephusted2714 · 7 months ago

    I'd really like you to not just break it all down but also build it all up, in the form of a workstation-to-dual-NAS solution for prosumers, homelabbers, and the SMB sector. Everybody wants and needs dual-NAS redundancy coupled with fast networking, like 40G. It is very possible and pretty cheap to get going, and it would make for great content. Do it with COTS refurb boxes and a few NVMe arrays. #jumbo frames #mtu

  • @gunwao · 6 months ago

    So Good! Thank you!

  • @shootinputin6332 · 6 months ago

    Would it be crazy to run UnRaid as a VM on Proxmox?

    • @ElectronicsWizardry · 6 months ago

      Unraid can make a lot of sense in a VM. Their parity setup is one of the best if you want flexible multi-drive setups. I have found it works well to put a USB stick in the server and pass the USB device through, so Unraid can use the GUID correctly for licensing.
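      That USB passthrough maps to something like this on the host (a sketch; the VM ID and the vendor:product ID from lsusb are placeholders):
      # lsusb
      # qm set 100 -usb0 host=0781:5571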

  • @dexzoyp · 3 months ago

    I still don't understand how to manage a simple lab (Proxmox on SSD, 2x 2TB HDD). How can I handle one HDD failing, or the SSD failing, and recover the whole system? Can someone help me figure it out? I am lost...

    • @ElectronicsWizardry · 3 months ago

      If you want RAID for redundancy in Proxmox, your best option in software is probably ZFS. Set up a ZFS pool of the drives with a mirror or another layout that has redundancy. Otherwise you can use hardware RAID if your system supports it.

    • @dexzoyp · 3 months ago

      @@ElectronicsWizardry You mean to create a ZFS pool in Proxmox storage?

    • @ElectronicsWizardry · 3 months ago +1

      @dexzoyp Yup, you can make a ZFS pool under Host, then Disks, then ZFS. Then make a new mirrored pool and select "Add as storage". Then you can add virtual disks to the ZFS pool, and they will be mirrored on the selected drives.
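      The CLI equivalent of those GUI steps is roughly the following (a sketch; the pool name and by-id paths are placeholders):
      # zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
      # pvesm add zfspool tank --pool tank --content images,rootdir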

    • @dexzoyp · 3 months ago

      @@ElectronicsWizardry
      - Proxmox ( running on SSD )
      - TrueNAS ( running on HDD )
      - NextCloud
      - FileShare
      - Gitlab Server ( running on HDD )
      Does the architecture make sense? Is it safe in your opinion? Can you guide me based on your experience?

  • @ewenchan1239 · 7 months ago +1

    "Battery backed caching for high speed I/O."
    Sorry, but that's actually NOT what the battery backup unit (BBU) is for, in regards to RAID HBAs.
    Battery backup units (BBUs) are used on RAID HBAs to protect against the write-hole issue that may present itself in the event of a power failure.
    The idea is that if you are writing data and then lose power, the system won't know which data was still in flight, in the process of being committed to stable storage (disk(s)).
    A BBU basically keeps the RAID card alive long enough to flush the DRAM that's on said RAID HBA to disk, so that any data that's in volatile memory (DRAM cache of the RAID HBA) won't be lost.
    It has nothing to do with I/O performance.

    • @ElectronicsWizardry · 7 months ago +1

      I want to say the DRAM on a RAID card is used for caching disk IO in addition to storing in-flight data to prevent the write-hole issue. RAID cards let the onboard DRAM be used as a write-back cache safely, as it won't be lost in a power outage. Also, I have seen much faster short-term write speeds when using RAID cards, making me think the cache is used in this way. This does depend on the RAID card, and there are likely some that only use the cache to prevent write-hole issues.

    • @ewenchan1239 · 7 months ago +1

      @@ElectronicsWizardry
      Um....it depends.
      If you're using async writes, what happens is that, for POSIX compliance, writing to the DRAM on a RAID HBA is treated as a write acknowledgement that's sent back to the application making said write request.
      So, in effect, your system is "lying" to you by saying that data has been written (committed) to disk when really it hasn't. It's only been written to the DRAM cache on the RAID HBA, and the RAID HBA sets the policy/rule/frequency for how often it will commit the writes cached in DRAM and flush them to disk.
      Per the Oracle ZFS Administration guide, the ZFS intent log is, by design, intended to do the same thing.
      Async writes are written to the ZIL (and/or, if the ZIL is on a special, secondary, or dedicated ZIL device, known as a SLOG device), and then ZFS manages the flushes from the ZIL, committing to disk either when the buffer is full or at 5-second intervals, whichever comes first.
      If you're using synchronous writes, whereby a positive commitment to disk is required before the ACK is sent back, then you generally won't see much in the way of a write-speed improvement, unless you're using tiered storage.
      Async writes CAN be dangerous for a variety of reasons, and some applications (e.g. databases) sometimes (often) require sync writes to make sure that the database table itself doesn't get corrupted as a result of the write hole due to a power outage.

  • @YannMetalhead · 6 months ago

    Good information.

  • @xxbluetomatoxx · 5 months ago

    Thank you ❤

  • @LubomirGeorgiev · 6 months ago

    you got a patreon?

  • @Korppi08 · 2 months ago +1

    Hey mate, I just want to say thanks to you. Your first video I saw when I tried to configure Proxmox and get SFTPGo working; thanks to you I got it. After 2 years, I bought a real HP server. Just wanted to tell you that. Thanks a million for your videos!

  • @rcdenis1 · 3 months ago

    My PVE box has one 256GB NVMe where Proxmox lives, one 500GB where VMs live, and 4 HDDs for storage.

  • @MysticMachina · 4 months ago

    That hair tho😂