You can find part 1 (hardware setup) here: th-cam.com/video/Hix0l8cFaMw/w-d-xo.html (hardware is linked in the description)
Subscribe to Wendell & Level1Techs! th-cam.com/channels/4w1YQAJMWOz4qtxinq55LQ.html
Wendell's tutorial is here: forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764
Off topic, but thought you might find this interesting... I just looked on Newegg.com and found the Intel i9-9900KS going for $1,199.00. Couldn't that be considered price gouging?
Where's the cores for sale montage video
I have to admit, this is the first vid on this NAS stuff that has me thinking I could actually set this up and use it, instead of glamour-shot vids that only show a pretty box with blinking lights that "does stuff" and leave me with no clue how to actually use the box sitting in the corner, lol.
Now for the hassle of getting it to show up in "my computer" like any other hard drive in this PC, so I don't need to recompile third-party apps I don't have access to the source code for just to convince them to work with some obscure network storage thing on the LAN, lol.
Great vid Wendell, Steve, Andrew, and crew. B)
I'm trying to figure out Wendell's accent. It's not very strong, making it a little difficult to discern. Alabama maybe?
GMT is the only real time zone. All other time zones are fake.
It's not often that Steve appears out of his depth, but I love the willingness and enthusiasm to learn, and just confirms that he is, in fact, human, as much as we all refer to him as Tech Jesus.
Even Tech Jesus is not all-knowing but he is evolving all the time until he reaches his final form!
It's impossible to always be in your depth, if that's a phrase, in this industry. Those who do well are the ones who know when it's time to ask for help. Wendell is a great teacher!
You could say he’s both fully human and fully Tech Jesus at the same time. Some cardinals agreed on this many decades ago.
@Iama Dinosaur so like the holy trinity? Bless your hardware in the name of The Wendell, The Steve and the Holy Buildzoid?
@@wax88 Ha!
Steve is very diligent about following good tips. I see that he used the Swiss Army knife for building the storage server
There's one way...
That's it xD
Which hopefully has a Phillips screwdriver on it..
What I learnt from this video:
If you need a proper storage solution, you should get an expert like Wendell.
I was tired but Wendell and Steve in one video is a must watch for me
Love GN and Level1Techs, haven't seen this before.
Server hardware is a different ball game. Stuff like IPMI seemed stupid to me before I built a server, but being able to tuck it away in the closet, with redundant NICs, a remote management interface, and a hot spare drive in case one dies while I'm away from home, makes it easy to manage. Which is good; I spend enough time at work fixing other people's shit, I want mine to just work for my modest needs.
I'd love to see this sort of thing look more at the decision-making process and enabling that: "what do you do, why do you do it, here's the solution that maps to that and why". These two parts felt like random bits of the process that folks would have a hard time determining whether it applies to their (likely lesser) needs/desires.
As an unraid user myself, I love watching Steve learn stuff I already know for a change. Absolutely love the questions and stuff though. It's rare that you can see people actually learn along with you.
Is Wendell saying "spinning rust" when he refers to the HDDs???
yep
Older mechanical hard drives were coated in iron oxide, essentially rust.
@@pronstorestiffi ahh, thought they were always made of a higher-quality metal
@@jakobfindlay4136 They have used platters coated with a high coercivity alloy for decades but iron oxide is traditionally one of the materials used in magnetic recording media.
@@jakobfindlay4136 they are using the best possible material for the task..... I'm sure it's an unbelievably pure iron oxide formed from a specific iron alloy under very specific conditions. It just happens to have an almost identical chemical makeup to generic rust. Hence "spinning rust" 😁
24:34
There is a plugin to Schedule Tasks, there is also a plugin to have a Recyclebin inside the share. :)
I want to see more about the PXE boot setup.
Wendell reminds me of a guy that used to work for my dad. I like seeing Wendell in this type of mood and environment; it creates very sentimental feelings.
Look, I had to pause just to write this. Amazing, 11:03. How you acknowledged that you didn't know what LVM is. I'm so subscribed to you because of this. To acknowledge something that people would assume you know, but you clearly don't, is a humble, amazing thing. Kudos man... really, kudos
This collab stuff is really awesome, Wendell and Steve are a good match for making these in-depth vids.
I was hoping to learn something about building NAS servers but this series didn't really touch on any basics. Could you make a video on NAS servers sometime please?
What platform should I choose? Do I need ECC memory? How many cores do I need? What budget/advanced options would you recommend? Should I go with existing solution?
What should I look for when selecting my hard drives?
Which RAID X should I choose? Should I rely on my motherboard? Should I use software manager? Should I go for some sort of extension card? When?
Which OS should I go for? Why?
What if I want to extend my storage capability in the future? Should I just go for more capacity immediately?
What if one of the drives fails? How do I know? What do I do then?
Are there any other considerations? Is UPS a good investment?
I'm by no means an expert, but a UPS is a must. You don't want to corrupt your data mid-processing. I run FreeNAS (it's free) with a Nextcloud plugin on top.
I use a PC tower with ECC, although from what I read nowadays the ECC requirement is a myth. It uses a Core i3-6100, 16GB RAM, and WD Red drives. That cost me about £300 quite a few years back (drives not included).
I need to update my system (I rarely use it, so the configuration is out of date with my needs. I hate using power, so I rarely have it on. It draws about 60W without HDDs).
I now mainly use it as a Windows PC, but it's all still there when I need it. I just need VMware more for my uni projects (it's my only desktop tower; for some reason I'm not using my Dell blade server (R710, got that for virtualisation), which draws 180W idle (probably the reason).) I need solar panels. Why do I love tech and the environment?
@@Jimmy_Jones ZFS "can't" corrupt data. It can only either write it or not; it's a copy-on-write system. Also, the data is checksummed, which makes it extremely unlikely to corrupt anything and have it remain corrupted.
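For anyone curious what that checksumming looks like in practice, here's a minimal sketch (the pool name "tank" is just a placeholder): a scrub re-reads every block and verifies it against its stored checksum, and anything it finds or repairs shows up in the status output.

```sh
# Re-read every block in the pool and verify it against its stored checksum
zpool scrub tank

# Show scrub progress and the per-device READ/WRITE/CKSUM error counters
zpool status -v tank
```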
@@cr0ft-2k That explains why having a lot of RAM is important.
I feel the same way. I've watched all the big tech channels on NAS (this one, Linus Tech Tips, and so many more) and they don't really cover any useful, step-by-step information. It's all generic, or meant to "impress" us with some crazy big storage system. Add to your question list: should I use software or hardware RAID for a NAS?
@@cr0ft-2k well, if you have a power cut mid-write or, say, during a rebuild of the array, bad things may happen without a battery backup UPS
I get why you like that Silverstone case, but I feel like for the features it has, the roadblocks you encountered, and the fact that you've got an external drive array, you would've been better off going with a standard rack-mountable chassis.
Agree. But this is "kind of" a review of the Silverstone case as well, since Silverstone had kindly sent Steve that case. I do believe Steve will write up a report on how the case should be modified for the next iteration and send feedback to Silverstone.
So, I'll be looking forward to a Tech Jesus Edition (i.e., v2) of the case. I think I can see some use cases for that case (forgive the pun) in my home ;-)
Love getting the ad for the Jellyfish server when trying to watch this video
Just use the crap out of Unraid plugins, and you can pretty much have all the functionality you want, like job scheduling from the web interface.
Haha, Wendell fried Steve's brain before the shoot started :D EPIC
too many bong hits.
There's something very satisfying/comforting about all this.
PXE boot is extremely helpful. We used to image using PXE, but since switching to Windows 10 we have been having to use a bootable flash drive to pull images, or just doing full images off flash drives. I wish we still had PXE boot.
virt-manager as of 2.2.0 has an xml modification switch which has been awesome. good on unraid for adding this.
Hi Steve loved the video, always enjoy videos of collaborations featuring Wendell.
Oh man... I wish I could just download all Wendel's knowledge into my brain. Servers and networking are definitely one of my bigger blindspots.
Thank god you can't and thus have still a chance to learn from actual experts.
FYI: they just un-unraided UnRaid, throwing away exactly what the point of UnRaid really is.
Wow, this was some serious information. I will need to watch this a few hundred times more, even then my brain will melt..again! Great stuff GN.
Is it fair to call Wendell "Tech Buddha"?
It's already Gordon from PCWorld
Wendell was born from human parents, but thanks to his desire to learn and dedication he obtained the ultimate knowledge and ascended to god-level understanding of tech.
After seeing him introduced a few days ago I think tech Lucifer might be more fitting
@@teddygoboom1 and master of the dark arts of the penguin and FOSS
Current god list:
Tech Jesus / Weird Al: Steve
Tech Dad: Jay
Tech Satan (plus others): Linus
Tech Moses: AkBkukU
Tech Buddha: either Wendell or Gordon
Any others?
What a coincidence: in the past couple of weeks at work I've created a FOG Raspberry Pi system that we give to customers to deploy clean images to the computers they manage.
Nice. You're the first one I've come across who doesn't go all in on top-end hardware, and instead searched for a configuration that matches your needs, with some headroom.
The cool thing is that they very likely benefited from the web cache when testing RDR2, it's really cool seeing past content that has already paid off
This is just great. The only reason I stuck with FreeNAS was ZFS, unraid here I come.
Great job once again, both of you. Wendell, you are the person to go to for the nuts and bolts, but darn, you can spit it out fast.
Wendell, I can’t wait for your next video about comparing Server solutions like Unraid and FreeNAS.
34:09 Thanks for speaking on BTRFS for the snippet.
Interesting video all around. Thanks!
I love when my favorite tech youtubers mention my other favorite youtubers :D
Ah, good ol' Ghost32 and Ghost64, definitely good times to be had there! These videos are really making it a challenge not to build a server of my own. Awesome content as always, and thumbs up for part 2!
Wendell, the original Tech Jesus; long live Tek Syndicate
"And it started deleting 2018." GN, you are f*cking amazing! :D
do you think he could delete 2020?
@@CptKenny I wouldn't mind if he tried to be honest. :|
My goodness. I really need to start upping my knowledge, so much of this went straight over my head!
Love seeing more enterprise and SOHO content! Reminds me of work, and some of the details Level1 mentioned are good to know. Keep it up!
Would Tdarr meet the scripted transcode needs? Install the docker container, point it at the library, and set up your plugins for transcoding. It's been working wonders on my media folders.
Wendell + Steve = Awesome!
ZFS on UnRaid? You created UnUnraid.
UnUnRaidRaid
So just RAID again.
Unraid is a decent hypervisor and docker host, but for the money I'd have gone with a straight Linux install for a ZFS setup like this.
The value of Unraid is the ease of use and mixed-drive compatibility, which they are simply wasting in this setup.
@@garytill GUI is slick, though.
I love how all the brown nosing bootlickers commenting on this video don't even realize this most glaring fault of all of its flaws. Thank you for an actual expert comment on a perceived experts show.
Great stuff, very informative. Always love a good techtuber crossover
Thanks guys for these videos. It's great you can do ZFS on Unraid, FreeNAS and Proxmox. Lot's of options.
Wish I knew Wendell and Steve were so into trek.
Wendell should have shown up stating
“Please State The Nature Of The Technical Emergency”
I love Wendell, but when it comes to Unraid I think we need to get SpaceInvaderOne in here. Get Community Applications installed on that Unraid box! Plenty of apps for scheduling and such. (Also, scheduling for moving the cache to spinning rust is built in.)
Overall I'm somewhat confused about the point here. I can't find a point in running ZFS on Unraid, at least for any practical setup. The only advantage I can come up with is maybe increased read speeds. For that you give up all the reasons one would choose Unraid in the first place. You lose the cost savings of only needing 2 parity drives for up to 28 data drives; with ZFS you need at least 1 for every what, 4 drives? You lose the ability to add single drives whenever you need more storage. You lose the ability to add drives of different sizes. You lose the fact that in Unraid, if more than 2 drives fail, you only lose the data on those drives and everything on the other drives is still safe.
Overall you are left paying for an Unraid key while neutering all of the reasons to choose it over free options like FreeNAS... or simply running the server as a Linux machine and running KVM VMs as needed. It's neat that it can be done, but I'm unsure why it would be done. Am I just missing something here? From what I can see, it looks like the ZFS pools are set up as unassigned devices, so it might be a 'hack' to get around the 28-drive limit while retaining parity on those additional drives?
KVM VMs
ZFS is supposedly more battle-tested. And you can add drives of different sizes, it's just that the smallest one determines the vdev size. That does not prevent you from having different sized vdevs though.
To be honest I find myself more wondering why they're using Unraid if you're using ZFS and all these manual scripts anyway. Having all the storage marked as "unassigned" and then not having an interface for ZFS and Cron is so cringe. Why not just use something like Proxmox that's actually made to work with ZFS.
@Muf he said, it offers better compatibility with Ryzen CPUs and hardware pass-through.
@@LA-MJ I'm of the opinion that being able to waste drive space unused does not count as being able to use different-sized drives. If you have the money to waste on not actually using the full capacity of a drive, then you have the cash to buy 5 or 6 identical drives at once in the first place.
With Unraid running its native filesystem, you can mix and match drives of any size while retaining the full storage capacity of every drive. The only limiting factor is that the parity drives must be at least equal to, or larger than, all other drives.
As far as ZFS being more tested, or better in some way: in general it is more robust than Unraid in some ways and theoretically has faster data transfer, though using the SSD cache in Unraid tends to negate the speed issue in most use cases. As for redundancy being better or worse, when it comes down to it, if that is your primary concern (making sure you don't lose any data) I still think running Unraid's native storage makes more sense. First, in the event of a multi-drive failure you only lose the data on the failed drives, whereas in ZFS you lose all data across all drives in that array. Secondly, no matter which filesystem is used, your box is still in the one location. If protecting the data is the goal, then it seems smarter to me to save some cash on drives in the first place and buy more drives to run a mirrored backup server off-site.
Hi, I think there's a polarisation of responsibilities when it comes to the success of UNRAID, namely in the more complex core features of Unraid vs what can be achieved with the app store/plugins. I've found some fantastic utilities that enhance the Unraid experience, namely libvirt hotplug and auto user scripts. Cheers
Huge fan of the Star Trek terms. I need to name my things with Star Trek terms
Eventually it will lead to those names, figure star trek is a few hundred years into the future
@@Tallnerdyguy the Star Trek names for these machines are like transparent aluminium! 😁
I was really hoping they named the storage tank "DumpsterFire", but apparently Wendell and Steve are not as original when it comes to naming things. ;)
This sort of nomenclature is common in IT. I've seen a large PLMN name all their firewalls after Klingon mythology, for example.
Star Trek HAS inspired MUCH of the technology we have today. So it's not really a stretch
Windows eating paste confirmed
What does he refer to? Some another win10 bug?
@@maxmustermann194 I think it's his version of "the kid eating glue".
I like it.
@@maxmustermann194 Windows sometimes does strange things without user input; it has corrupted my GRUB for Linux dual boot twice already with just a Windows update
@@or2kr They have a reason for that. Lots of Windows "cracks" (SLIC, BIOS hardware spoofers, etc.) and viruses are installed in the boot sector. Besides, why bother with dual boot? Just make a virtual machine. Having to restart and select which one to use is just a pain in the rear
@@Chriva The issue is, I really like playing games on my machine and I need bare-metal performance for that. I'm trying to get my GPU to pass through, and only my latest attempt today worked, due to a newer BIOS that I was already on hindering the process. The laptop with the issue is still not fixed since the last "breakage", but I only really use it for things only a native Windows machine can do, plus things that aren't really dependent on a good OS, so I have sadly retreated from that front. My gaming rig will always have Linux as the root system though
Great video, love DIY server stuff! Could you show a little bit about how the disk shelf is connected to the unraid server both from a hardware standpoint and the unraid OS?
That's the easy bit - via an external SAS connection. More likely two for more bandwidth and redundancy.
Finally!!!
I've been waiting for this video! :D sooooo excited! Liked & Subbed :D
Watched the video when it came out.
Managed to grab 72TB for $600. Enough to store my "personal 4K BR collection", to game on, and to act as a free server to collect data from LoRa sensors.
Now I'm turning my 1950X based gaming/video editing computer to a server...
Wanted to get information on LSI card configuration, so I came back to watch the video again :p.
This was an awesome series... I love learning from people and this is refreshing to see
unRAID out of the box is a bare system with a minimum of features. You need to add Community Applications, then add the system plugins and Docker applications you need. This includes the cron jobs and user scripts plugins. There are a number of things specific to unRAID you need to learn by using it and customizing it to your needs.
This
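For context, the user scripts / cron-style plugins mentioned above basically wrap ordinary cron scheduling in the web UI. A rough sketch of the kind of schedule they manage (the pool name and binary path below are made up for illustration, not taken from the video or guide):

```sh
# Standard cron fields: minute hour day-of-month month day-of-week
# e.g. scrub the ZFS pool every Sunday at 03:00
0 3 * * 0  /usr/sbin/zpool scrub tank
```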
Great video, gave me some good ideas for when I build my own NAS
This is probably the most interesting video I’ve ever fallen asleep to.
"just the 14 TB Hard Disks"
certified r/DataHoarder member
@Marcin Berman Google it next time
*I have no use for this and don't understand any of it, but when Steve & Wendell team up, it's a must watch!*
You should be using Proxmox if you want ZFS, hardware pass-through, clustering, virtual machines and containers. Oh, and its free.
Doesn't support Windows VMs or passthrough, last I checked
@@blackfireburn Proxmox has supported Windows and passthrough the whole time I've been using it, which has been almost 3 years. You used to have to do passthrough on the command line, but it's in the GUI now
I also thought it was weird Wendell chose Unraid. Proxmox seems like a perfect fit. It would be nice to hear his reasoning for it
@@blackfireburn I setup a Windows 2016 VM with an LSI HBA passthrough on Proxmox yesterday. Works fine.
@@bourbonwarrior1618 user friendliness for steve
I was about to sleep but who needs that now I guess. 🤷🏾♂️🤷🏾♂️🤷🏾♂️
Sleep? What is this sleep madness you speak of? Here I am at around 2am (aest) watching this stuff. Oh well, time for more coffee!
Love the video, this is exactly what I’m looking at the moment, ZFS, NAS etc.
I am all for DIY and really enjoyed this video deeply (Steve and Wendell are the best and most knowledgeable tech tubers around currently), but for my storage I personally just trust Synology to do the job for me. Maybe I am getting old and lazy :).
7:15, you can't forget the ET routine; Windows phones home so often they've stopped picking up.
Best video to watch at 5:57 am :D
Good way to start your day
@@hammerheadcorvette4 Was actually going to bed ;) Just woke up :P
there is a plugin for custom scripts.... it's called "User Scripts"
Wait! Did Wendell just recommend Unraid over FreeNAS?
6:59 earned the thumbs-up
Loving the Swiss Army knife.
Gotta Love How Wendel has that Shit-Eating Grin in the beginning , ahaha Lmao Love You Guys !! Awesome Video!
2 people nerding it out to 11... why yes I would be smiling too
Question: What is your data backup solution for all of this new storage?
Tape drive
Thankfully I was able to follow along for much of this, but I'll definitely need to look up the tutorial on his website for the how-to. I'm not really sure my home network would need anything like this, and if it did, I'm not sure I'd be able to set this up successfully. Lots going on with that setup.
unRAID has a pretty robust community; you'll find there's a plug-in for cron :) Their container interface is pretty outstanding too, until you want to do things outside of their ecosystem, and then it's a little harder, but again the community kicks butt building containers. Also, be careful editing at the XML level; that's under the covers and could get reset when you tweak something in the GUI.
Amazing Collaboration !
Just a reminder that there is a docker container for handbrake which might make it more efficient, no need to allocate RAM specifically for it.
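For anyone who hasn't run it before, the usual pattern looks roughly like the sketch below; the image name, web UI port, and folder paths are assumptions from memory, so check the container's own documentation before copying anything.

```sh
# Run a HandBrake container with a watch folder and an output folder mapped in.
# jlesage/handbrake is a community image and 5800 is assumed to be its web UI
# port; both are assumptions here, verify against the image's documentation.
docker run -d \
  --name handbrake \
  -p 5800:5800 \
  -v /mnt/media/watch:/watch \
  -v /mnt/media/output:/output \
  jlesage/handbrake
```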
I've used unraid for years successfully with multi GPU/device passthrough. I tried doing things with FreeNAS... FreeNAS is a huge, HUGE pain in the ass. It would break and require so much time trying to figure out WTF is going on... unraid had some of that, but the last year or so has been fantastic for me
I've never tried UnRAID, but I did give FreeNAS a shot. Not only was it a bit of a learning curve, but it was also bizarre and unpredictable. When I signed up for the forums to ask about some truly weird behavior that could not possibly have been by design, I got a couple of replies that were slightly helpful, and then some .ickhead, silver-back wannabe stepped in to tell me how I'm doing everything wrong and how I should just get off the forums if I don't want to do it exactly the way he says.
Half of the posts on the forums were not much more than Intel worshiping kumquats who thought that it was stupid to want to run FreeNAS under bhyve on FreeBSD. Especially on AMD HW. Yes, of course, who would want to run FreeNAS in a VM? How stupid! Yeah, that was a few years ago but the whole thing left a bad taste for FreeNAS. I doubt I'll ever use it.
A big part of why I have no plans at all to use FreeNAS is ZFS.
As someone else mentioned, ZFS looks absolutely awesome. On paper. In reality most of what I saw of ZFS a few years ago on the FreeNAS forums was, "Oh, GOD! HELP ME! All of my data is GONE! It won't recognize my VDEVS any more! How do I get it to recognize the storage again? It just... quit. For no reason!"
I really don't know how much of that was stupid user tricks and how much was just basic flakiness on the part of ZFS but I sure as hell wouldn't even think about using ZFS without serious enterprise HW. My home server is a dual socket Opteron board with 128GB of ECC RAM from about 7 years ago. It's pretty good HW. I still wouldn't use ZFS. There might not be enough redundancy in my system. Seriously, I wouldn't use ZFS in any situation where you can't run two identical systems with complete duplication of the data. And maybe a third identical system for testing changes. Not a VM. Identical triple-redundant HW. Maybe you want to call me paranoid. That's fine. But I have data that's older than some of the people reading this...
I'm really happy to hear about your positive experiences with unraid. Ironically, my experience was flipped and I migrated to FreeNAS haha. Different use cases I suppose. Cheers!
linuxinstalled yeah, funny how that goes huh :) Everybody has different needs, though now that FreeNAS is a bit more mature I would probably like it. Though unraid is pretty fantastic. I'm still running my unraid and it's been flawless the last year or so
I see you have your tech grade Swiss Army knife, Vergil...
Do the viewers who watch this care much about editing it down from an hour to 35 minutes or whatever it is? Maybe when the conversation dries up and you only copy-paste commands, but it is Wendell, and he isn't boring. ;) You have to be quite a nerd to watch this at all.
I like the Star Trek names. I might have thrown in one Star Wars name, just to troll. I like both.
I finally have discovered a flaw of Wendell: he is one of those people who touch other people's monitors. I always hated it when people do that; I don't want to clean my monitor more often than necessary and I don't want to see grease stains. Also, outside the house I can't really clean your monitor; at home I just use some water with a bit of vinegar, which seems to work well.
He also touches CPU pins and PCIe card edge contacts, which makes me cringe.
20:42 Holodeck is the name of the Star Trek Online Live server.
27:28 WSUS exists
RaidZ1 with 14 TB drives? Is that even legal?
It just shows the difference between an actual expert and a _perceived_ expert ...
His ex-Google server is the host of his 172TB.
th-cam.com/video/MyK7ZF-svMk/w-d-xo.html
@@sbdnsngdsnsns31312 it's not about file system, raw hdd speed isn't high enough.
@@guy_autordie Yeah, for reference I was administering TrueNAS systems with more than 300TB circa 2014 at a company with a much lower market valuation than Google/Alphabet, though older. Though, that was a CARP HA setup, so the raw storage was > 600TB.
Despite what Wendell mentioned in the video, Btrfs is *not* trustworthy and never has been.
Personally, I wouldn't trust anything less than an OpenZFS RAIDz2 as I was experiencing RAID-DP (NetApp's version of RAID6) with double drive failures way back in 2006. OpenZFS offers RAIDz3, use it.
Given the MTBF and BER (bit error rate) of drives in excess of 1TB, RAIDz1 has been known to be abysmal enough that RAID5 was considered insufficient for anything intended for prod at least as far back as 2005.
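For reference, the redundancy level is chosen per vdev at pool-creation time; a minimal sketch with placeholder device names (real setups should use /dev/disk/by-id paths):

```sh
# raidz2 survives two simultaneous drive failures per vdev; swap in raidz3
# for three. Device names below are placeholders for illustration only.
zpool create tank raidz2 sda sdb sdc sdd sde sdf
```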
I run my daily work on desktop and laptop in VMs. I run the VMs and the desktop Host from a 512GB nvme-SSD. The host is a minimal install of Ubuntu 19.10 with the experimental ZFS booting. I have a recovery install of Ubuntu using an ext4 partition on one of my two HDDs. In September 2019 I did try out this dual boot configuration in Virtualbox.
I use ZFS to incremental copy my VMs from desktop to laptop, so on the road I have exactly the same distros with the same settings. Desktop and laptop also use the same Virtualbox shared folders (datasets) with all my info using the same mount points.
My backup server is a 32-bit Pentium 4 HT (3.0 GHz) with 1.2TB across 4 HDDs (2 IDE and 2 SATA-1). It runs FreeBSD 12.1, and weekly I run my incremental backups to that system from 2003. This combination was also tested first in VirtualBox. The disadvantage is that the Pentium limits the 1 Gbps link to 200 Mbps due to a 95% CPU load from the network process. I still have 4GB DDR2 and a 3-core Phenom, so I'm considering looking for a cheap AM2 motherboard to get ~800Mbps.
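The desktop-to-laptop sync described above is typically done with incremental send/receive; here's a minimal sketch, with the pool, dataset, snapshot, and host names all made up for illustration:

```sh
# Take a fresh snapshot of the VM dataset
zfs snapshot tank/vms@2020-01-15

# Send only the delta since the previous snapshot to the laptop over SSH
zfs send -i tank/vms@2020-01-08 tank/vms@2020-01-15 | \
  ssh laptop zfs receive tank/vms
```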
Steve, love your content!
Do you think you could do a review for gaming monitors sometime (such as the Gigabyte Aorus I guess you're using in this video)?
Why ZFS and UnRaid?? I wish this video had started with a goal sheet and an explanation.
Ohh Wendell, you magnificent bastard! Those two are too much nerdiness for one video. One slip into an in-depth talk about IOMMU, chipset drivers, or filesystem performance and this whole channel would collapse and Skynet would awake
Wendell is my hero
Wendell is a cool guy!
Don't forget to add in recycle bin plugin for a place to store deleted files before cleaning them out
What’s better for a simple home theatre/home file nas, unraid or freenas?
Can freenas allocate drives in a pattern like unraid?
I heard Linus talk about how unraid formats the drives vs traditional raid, and unraid seems to be a better solution.
Is that so? And can freenas do that?
Wendell & Steve - A lot of the frustration mentioned at 24:33 about missing things like Scheduled Tasks has been solved for a while by unRAID's extensive plugin and container/app support through the Community Applications plugin... install that one plugin and you'll have access to tons of plugins and containers that answer basically all your needs. In this case, the User Scripts plugin, which adds scheduled tasks and custom scripts in the GUI with an interface for cron. I'm surprised you didn't add the Community Applications plugin, since practically every unRAID user knows this is one of the first things you install on any unRAID build. I'm surprised that this is something even Linus seems to know that Wendell doesn't. There are even dedicated sections of the official unRAID forum for all the apps, plugins, and containers installed through the Community Applications plugin. Using unRAID without it is like driving a car without tires.
the setup guide literally shows "user scripts" to set up cron.... just as you describe.. we did add the community applications... it's there... see the guide... what's surprising is that cron, at the cli level, is a bit broken without a "community plugin" ...
@@Level1Techs Ahhh sorry, didn't look at the setup guide. Just going by the video you have and your complaints in it. You don't have Community Apps installed on the unRAID server you set up with Steve, at least as shown in the video above, and the User Scripts plugin is also not installed/visible in the video. Both CA and User Scripts were not mentioned in the actual video, and the impression given is that they weren't there, despite what the actual written guide says. You also manually installed docker containers in the video above rather than using the usual method of Community Applications.
@@jed1mstr the video was recorded as we set up Steve's NAS, and we said see the guide. Also, the guide explains why the unraid GUI is actually bugged for the particular docker setup we did, so it is not possible to use the usual docker UI to have two containers share an IP. Which is absolutely a necessity to get the http+https SNI proxy working properly.
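For anyone wondering what "two containers sharing an IP" means at the docker CLI level, it's the container network mode; a rough sketch with hypothetical image and container names (not the actual containers from the guide):

```sh
# The first container owns the network namespace
docker run -d --name sni-proxy my-proxy-image

# The second container joins that namespace, so both answer on the same IP
docker run -d --name web --network container:sni-proxy my-web-image
```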
@@Level1Techs Agree - the base OS seems a bit TOO spartan these days. And NGINX has finally seen some development love recently.
Love watching these, but I think I will stick with a pre-built NAS from QNAP with ZFS.
So, three years late... I saw this one three years ago, but now I noticed the Verge's Swiss Army knife, which hopefully has a Phillips-head screwdriver.....
I am going to pull the trigger and build my Unraid server with this motherboard.
I have been researching FreeNAS and UnRaid and have yet to decide my path. However, I am confused by one statement made by Wendell. Specifically, around 6:48 he states "FreeNAS doesn't do hardware passthrough". Over in the FreeNAS forums I have been asking about the possibility of running FreeNAS in a VM and, although there is lots of debate about that, there seem to be folks there who are specifically using FreeNAS in a VM with passthrough. Who is correct? PS: Not interested the the FreeNAS vs. Unraid debate here - just want to know if the statement about FreeNAS not supporting passthrough is accurate.
I think he should have said "it does it poorly" -- Linux KVM virtualization is outpacing FreeNAS (which is BSD). User base on BSD is way smaller, and the GIANT server farm crowd is pushing the KVM tech forward fast.
I see all the youtubers using that monitor. Is it really that good? Should i get that or the PX7 or the LG?
It would be for gaming and media consumption
I think you guys may want to try FreeNAS, since I believe 11.3 (stable AF beta at the moment) has updated SMB shares and you don't need to set up cron jobs manually
What RAM did you use? Sorry if I missed the part where you mentioned it, but I am planning on building a Proxmox server with a 3600 on ZFS, and I am currently planning on going with ECC RAM because I love my data and one VM will be a Nextcloud host.
Spinning rust. I like that.
I don't understand why you would use unraid. Just use Proxmox as an enterprise hypervisor; it's Debian-based and also capable of PCI passthrough. Also, since Proxmox v6 you can use ZFS as the boot medium as well. And Proxmox is free: you don't have to pay up front and don't have to pay more if you reach a certain number of disks. Just use OpenMediaVault as the NAS VM; it's also Debian-based. Proxmox itself is very resource-friendly as well. In Proxmox you have the ability to use Ceph storage for your VMs. It has integrated backup for the VMs as well.
At work we have only been installing Proxmox lately, and as you can tell I've grown very fond of it and can really recommend it.
Thanks for reading, have a nice day!
Call me crazy, but I think all the things y'all set up should have been in their own VMs, each service/function as a VM, because you invested so much effort into customizing the host/physical OS that disaster recovery would need Wendell back (or even remotely) for help.
A friend of mine is running BeeGFS pools, because it is free and it can be made to go fast.
While watching this I felt like I wanted to see block diagrams of the layout Wendell was describing.
watch SpaceInvader's channel for UnRaid tutorials - it does just that!
@@joespurlock4628 Thanks!