Yes! I am very interested in how you set this up. Especially the Yubikey and certificate signing, but also the multicast DNS
This!
Following!
+1 for miniseries. The way you explain things is exceptional. Thank you.
Please make a mini series, would love to see it.
Do not use rubber bands to mount the heatsinks. The softeners in the rubber evaporate over time, the bands eventually break, and the loose heatsinks can flap around and potentially short something out.
I'd love to see the setup tutorial videos!
I'd like to see the setup tutorial videos, thank you for the offer.
Would love to see the mini-series on the Multicast DNS setup. This is one of the main pain points I have dealt with in my homelab; my current solution feels really kludgy and I want to clean it up.
*talks about how other solutions need so much maintenance and how low-maintenance ones are better
*proceeds to over-maintain low-maintenance solutions and make them overkill
love that!
Realizing all videos I watch are about someone trying to sell me something
ikr, this dude has such an annoying vibe too
@@chimpo131 Then don't watch. 🤷‍♂️
Welcome to Capitalism where we all are the Marks.
So real. There are only a few YouTubers left who don't do this.
So true, and they're getting this stuff for free most of the time.
That DNS setup is _chef's kiss_ lmao.
Thank you for normalising running your own OS on any hardware you get.
This honestly looks perfect for me. I have a small apartment, and my wife is not keen on my idea of starting a homelab because of the space it might take and the noise.
I would definitely love to see your setup with this!
Definitely interested in the mini-series; as I start my homelab journey, this could be super insightful!
0:59 missed opportunity to say "cosplay as a sysadmin"
3:10 - "about 2 to 1 per TB of storage" - Yeah, sure, sure... Yeah... No. I was looking into an all SSD NAS. I settled for hard drives, as even the cheapest 4TB SSDs could have given me 12TB hard drives.
My thoughts as well. Also, the board on this unit doesn't have enough PCIe lanes to use the NVMe drives at full speed. I still prefer my Fractal Design Meshify XL; it gives me plenty of options to add things and will last years.
@@jamiei543 The argument about PCIe lanes is honestly a terrible one... You would need a 100Gb/s network to even use them at full speed, and even at 9 lanes that's still above 7000MB/s, which I guarantee only 0.01% of all personal computers really need, let alone storage servers of this scale.
Don't forget a lot of the bigger drives are SMR which will definitely impact life and also speed...
@@ElvenSpellmaker And the worst part? You may never know a drive is SMR until it's too late, as there's usually no information about it anywhere, and the distinction between CMR and SMR is often a single character in the model number...
@@shapelessed Sure, it's hard to find out (when I bought my IronWolf drives I checked that they were CMR, and basically only the 2TB ones were), but with SSDs you don't have that problem, so they should last way longer and thus bring the price per GB down quite substantially.
I would love to see a step-by-step tutorial for the Docker stuff; it sounds incredibly useful.
Re: detailed setup video - yes pls!!
Would also love a blog post with commands or links to a guide
Whatever happens, loving this journey you're on 👍
I have a day job and a private datacenter in my house. My servers run our system engineered runtime platform and usually run for 20 years non-stop. I just put in cooling this Summer and there is a solar power plant being installed this January. Day before yesterday I acquired two Oracle X7-2 servers, to renew and bolster the server park further. Set up NTP V4 and ISC DHCP with DDNS talking to PowerDNS, both of which I ported to Solaris 10 and packaged them and their configurations cleanly. Multiple NTP, DHCP, and PowerDNS servers for high availability. Works like clockwork, and Solaris 10 is so mind blowingly fast with that "FireEngine" TCP stack and the high performance kernel it sports.
Hi Tomaž, be careful with those rubber rings that hold the aluminum heatsinks on the NVMes, they tend to dry out and leave the aluminum heatsink loose, which can cause a short circuit inside the device. d---(^_^)z
Thank you Tomaž, from Slovenia. I am looking forward to:
A. How you will overcome Plex hardware transcoding in a container on this thing :-)
and B. Which filesystem / RAID configuration will be retained... btrfs RAID 5 or 6 is OK but will drain the drives' lifespan as well as raise the overall power consumption. I have a similar config and found SK hynix Gold SSDs to be the best fit, precisely because of their longer-than-average lifespan plus overall low power consumption... a mirror should fix that but will raise the price of the space even more... just curious about your choices and the reasons behind them...
I can probably answer A: almost any modern chip can transcode* a couple of streams without issue. For a household that's usually enough, since it's rare that 3+ people will want to watch separate things in separate rooms at exactly the same time.
On top of that, there are client devices that support hardware decoding of even 4K HDR, so video transcoding does not need to be done for those at all.
*Big caveat that no one seems to realize: audio transcoding. Transcoding 7-channel DTS down to 2.0 can bring a low-power CPU to its knees in a way that makes H.265 jealous.
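For anyone curious what that audio-only path looks like in practice, a minimal ffmpeg sketch (hypothetical filenames, and assuming an AAC stereo target) that copies the video stream untouched so only the audio costs CPU:

    # copy video as-is, downmix the DTS track to stereo AAC
    ffmpeg -i movie.mkv -map 0:v -map 0:a -c:v copy -c:a aac -ac 2 movie-stereo.mkv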
Interesting video as always, would love to see the setup mini series. Especially more about avahi and the CA. I personally use pihole with unbound in one docker container which handles cname records for each service. But always looking to experiment with other solutions. Also a tip (you might already be aware) but using .local can cause some conflicts.
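For reference, Pi-hole's local CNAME records are plain dnsmasq directives under the hood, so the per-service setup described above can be a one-liner per service. A minimal sketch (hypothetical names; the drop-in path follows Pi-hole's usual dnsmasq config directory):

    # /etc/dnsmasq.d/05-pihole-custom-cname.conf
    # alias each service to the host that actually answers;
    # the target must itself resolve locally (hosts file, DHCP, etc.)
    cname=jellyfin.home.lan,nas.home.lan
    cname=grafana.home.lan,nas.home.lan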
I'm currently in the same boat with a failing drive. I went with (at the time large) 4x 2TB drives in a ZFS pool. One drive has been reporting errors, and I have two new drives on order (one spare this time!). The server is a now slightly long-in-the-tooth HP MicroServer that has been super reliable over the years; it just keeps going.
What's the benefit of your avahi way vs. doing DHCP integrated with a local DNS? I think dnsmasq should provide an easy way of doing this for example (exposing the assigned IPs via DNS that is). Since everything uses DNS, but mDNS can be flaky depending on device implementation, I haven't seen mDNS being used for a centralised setup like this - in my mind, it is more a tool for when you need something but cannot easily switch the DNS or DHCP servers in a network, such as announcing a printer in a network managed by a locked-down ISP router...
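For what it's worth, the dnsmasq route suggested here can be just a few lines, since dnsmasq automatically answers DNS queries for the hostnames its DHCP clients register. A minimal sketch, assuming a made-up domain and address range:

    # /etc/dnsmasq.conf
    domain=home.lan        # appended to hostnames learned via DHCP
    expand-hosts           # also append it to plain /etc/hosts names
    local=/home.lan/       # never forward these lookups upstream
    dhcp-range=192.168.1.100,192.168.1.200,24h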
If you don't use the service advertisement feature, then there is none. Put the Pi-hole container on a macvlan and you're already done.
I would put OpenMediaVault on the box, install omv-extras, and use the compose plugin for Docker containers: Pi-hole as a container for the DNS/DHCP stuff, Jellyfin for media, Nextcloud for cloud stuff. OMV itself handles the SMB/NFS mounts.
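A minimal sketch of that macvlan idea as a docker-compose file (the interface name, subnet, and the container's IP are placeholders for your LAN; note the host itself cannot reach a macvlan container directly without extra routing):

    services:
      pihole:
        image: pihole/pihole:latest
        networks:
          lan:
            ipv4_address: 192.168.1.53   # Pi-hole's own LAN address
    networks:
      lan:
        driver: macvlan
        driver_opts:
          parent: eth0                   # host NIC attached to the LAN
        ipam:
          config:
            - subnet: 192.168.1.0/24
              gateway: 192.168.1.1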
The price gap between HDD and SSD per gigabyte is still huge if you look at HDDs around 16TB, a size that isn't even available for SSDs. Anything above 4TB in the SSD segment gets insanely expensive again. I just bought a 4TB NVMe at 2/3 the cost of my 18TB HDD, so the price per GB ratio in that case is about 3.2:1 in favor of the HDD. In addition, the HDD does not wear down like an SSD; an SSD that is written to heavily for certain tasks can wear down rather quickly. I still have 4TB HDDs that are perfectly usable and already around 10 years old; no SSD can last that long if used in the manner I need them for. My current system SSD, about 2 years old, is already down to 80%.
7:09 Fusion designers rise up!!
Great video, and yes, I would like to see more details - but only if they are also for people who are not network and Linux experts :-) Let's see -
Please explain the maintenance they need to do on their racks. I don't have the amount of storage they have, but I have over 300TB total and there is no maintenance needed.
I guess he was referring to the number of components that you might have to maintain and replace in a rack vs a single small machine.
I have a 24-bay NAS that is always on and has been for several years. The only replacement work has been upgrading hard drives.
I'm not sure where you're shopping for hard drives, but even the cheapest SSDs I can find still go for about 50 euro per terabyte around here, and the cheapest hard disks are about 15 euro per terabyte. That's still a massive difference if you ask me :)
But SSDs are nice if you don't need too much storage and want high performance, I suppose. And in the case of ZFS you could easily go for spinning-disk storage with an SSD cache ;)
I run everything off a proxmox on an i7-7700T / 32GB system / 12TB storage - NAS, the usual homelab stack of apps and even a VM with an nvidia card passed through that runs ubuntu desktop connected to the telly for the wife to play Sims 3. Then there's the n100 router box (proxmox: opnsense, wireguard, pihole, network monitoring, a kube instance exposed to the interwebs on its own vlan) and a 4GB pi 4 running grafana, prometheus, another pihole and proxmox backup server.
People really overestimate the hardware they need to run stuff at home.
I also went all-flash recently and built a 6L NAS, but based on a 14900K (I limited it to 125W).
+1 for mini series of step by step manuals
I want one, but the Aussie price is jacked up to $1200… far more than the exchange rate difference.
My main concern with the SSDs you picked is how long they will last. I tend to go with enterprise SSDs for long-lasting drives. Most consumer/pro drives are only rated for about 600-1200 TBW, which for a NAS that should last 10 years or so isn't much.
Server-grade hardware can get picky over small issues, so it's not that surprising nothing bad has happened yet. Generally, when small errors pop up, a run of badblocks or SpinRite can correct them. Running a smartctl check can also reveal where the error lies. SAS drives in particular report recoverable (ECC) and uncorrectable errors separately, as well as defective-sector remapping. This can be invaluable when determining the real severity of an issue. This is why consumer/prosumer NAS devices cannot give great early warnings of drive failures: SATA just doesn't report detailed metrics.
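As a reference for the smartctl checks mentioned above, a few standard invocations (device names are examples; adjust to your drives):

    smartctl -H -A /dev/sda        # overall health plus the SMART attribute table
    smartctl -x /dev/sdb           # extended report; on SAS drives this includes
                                   # corrected vs uncorrected error counters and
                                   # the grown defect list
    smartctl -t long /dev/sda      # start a long self-test...
    smartctl -l selftest /dev/sda  # ...and read its result once finished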
Making VT-d work would be great: you could run Proxmox and make that thing your entire homelab.
I moved from Argentina to Spain. I left my homelab in Argentina at my mother's house with a UPS, so here in Europe I need to build a new homelab. Tailscale is so awesome because of the CG-NAT on mom's connection. I want to replicate my homelab here, but I'm thinking of switching from HDD to SSD; I'm just not sure the endurance of cheap SSDs really measures up to NAS-grade HDDs.
2:58 Let it be all M.2... Please let it be all M.2... OMG IT'S 8x M.2 SSDs!! Absolutely love the form factor! Yes please, tutorial!
I was looking into an all-SSD NAS myself and found a couple of PCIe cards with M.2 slots, but then I could not find a motherboard with an SBC form factor and a PCIe x16 slot, and things started to get complicated (and the cost got very "complicated").
There is also the Asustor Flashstor with 12x M.2 slots and 2x 10Gb Ethernet, but its price is also beefy (before the SSDs).
Thank you for recommending this NAS, I might treat myself this Christmas too :)
I'm gonna build the all-SSD NAS when the price per TB becomes the same as HDD.
The TerraMaster F8 SSD looks nice. As for a dashboard, I use Homarr as my dashboard/hub for the local network.
Yeah, well, €840 for an i3 or €630 for an N95... but hey, free is nice. My plan is adding SATA storage to an MS-01 using a 9400-8e HBA that I bought on eBay. Not cheap either, but I already had the MS-01.
Yeah having a small series of tutorials would be awesome
Just wait. If you keep YouTubing you'll be on a 45Drives TrueNAS setup before you know it.
@tomazzaman Could you tell me where you found your New York wallpaper? I remember we had a very similar one in our living room when I was a kid - it is a blast from the past and I would love to look into buying one. Thanks!
Yes, please, to the miniseries on setting up the homelab.
Definitely make the miniseries, it'll be in my bookmarks instantly! Haha
See, my problem is I have more than 30TB of data, and that is increasing every day. I don't have a rack, though. What I have are two large towers, one with 8 disks (4x 6TB in raidz1 plus 4x 10TB in raidz1) and the other with 6x 12TB disks in raidz2. One serves as the primary NAS and the other is my backup. There's absolutely no way to downsize now 😅
I would like to see setup videos, if your time permits this.
The 16GB eight core i3 version is only $100 more right now. The iGPU is also twice as fast.
I'm in a similar situation and I'm very tempted to go with a base Mac mini M4; I don't know which enclosure yet.
Dumb q: if the Pi-hole is your DNS, why can't you use it for the hostnames too?
It's not dumb... I think it makes much more sense to use a proper DNS server, like maybe Unbound or whatever Pi-hole comes with.
It's a great solution if your local DNS and your DHCP server can talk to each other. If I'm spitballing, maybe he wants devices to be served new IPs without the delay of a DNS cache. (I run a DNS+DHCP setup like you mention, and sometimes it's annoying that everything caches DNS lookups, especially when I power down a machine and the failure is cached.)
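If the setup in question is dnsmasq-based, one knob worth knowing for that stale-cache annoyance: dnsmasq lets you set the TTL it advertises for names it learned locally, so clients re-ask quickly. A minimal sketch (values are examples):

    # /etc/dnsmasq.conf
    local-ttl=10   # let clients cache local answers for only 10s
                   # (default is 0, i.e. "do not cache at all")
    dhcp-ttl=10    # same, but specifically for names learned via DHCP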
An avahi cname setup would be a great video series!
Insert spongebob "I don't need it... I NEED IT!" meme here.
That's just an external USB enclosure with Ethernet; that's not a NAS. And good luck getting 40-50TB of NVMe storage without taking out a loan.
Considering you can buy a small used desktop for 150 euro, fit 4 disks in it, max out at around 40TB easily, and install an OS like Unraid that needs zero maintenance, I don't see the need to get expensive stuff just for the brand and be limited by OS and I/O. We are talking 800 euro for this box.
$600 seems a bit steep knowing a regular N95 mini PC is like $100...
I'm confused about the CA. Intuition says that a cert wants stable hostnames provided by DNS but your hostnames are Avahi/mDNS which is an Apple tech and not an official DNS spec.
Since this might be a more modern way to run HTTPS in-LAN, can you do a video just about that?
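Not the author's exact method, but for context: a private CA doesn't care how a name is resolved; it signs whatever subjectAltName you put in the certificate, and clients only check that the name they dialed matches it. A minimal openssl sketch, assuming a bash shell, an existing ca.crt/ca.key pair, and hypothetical names:

    # key + CSR for the box, using the mDNS name
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout nas.key -out nas.csr -subj "/CN=nas.local"
    # sign with your CA, putting the name into subjectAltName
    # (modern clients ignore the CN and only trust the SAN)
    openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 825 -extfile <(printf "subjectAltName=DNS:nas.local") -out nas.crt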
+ 1 for miniseries 😊
+1 for miniseries ❤
I would like to see the tutorial using Raspberry Pis. Love the vids.
I'd also like to have a closer look at how to set it up.
While it seems to be a wonderful NAS, sadly, when you look at its specs along with the CPU, there are not enough PCIe lanes to run all the PCIe Gen 3 SSDs at full speed. It has about 9 PCIe lanes, and 1 is for the LAN, leaving 8 for the 8 drives. PCIe Gen 3 is roughly 1GB/s per lane, so whether each drive gets a single lane or 4 drives share 4 lanes, the drives end up running at about 25% of their rated speed.
BUT if it's storage space you want and not speed, this is fine; if it's speed, you might have an issue, especially when fully populated with 8 drives.
I do like the concept of this NAS, though.
Oooo yeah, and actually I had a problem in Proxmox with a dual 2.5GbE NIC in an x1 slot because it used an ASMedia PCIe mux/demux chip. It messed up PCIe IOMMU grouping and effectively turned it off, and that's all related to VT-d. I think it happens when you've overprovisioned the PCIe lanes, more or less, and I am betting that's why you had trouble with yours running Debian with VT-d on and with drives installed.
Stock Debian is still on 6.1.x. I haven't checked, but there's a good chance there are fixes in newer kernels, as the ASMedia switch chip is showing up a lot in newer hardware.
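For anyone debugging the same thing, a quick way to check whether devices behind such a switch all land in one IOMMU group (a shell sketch; assumes the IOMMU is enabled at boot):

    # print every PCI device together with its IOMMU group
    for d in /sys/kernel/iommu_groups/*/devices/*; do
      g=${d%/devices/*}
      echo "group ${g##*/}: $(lspci -nns ${d##*/})"
    done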
If you value your data, why would you trust your data to a bunch of capacitors that slowly lose their charge? Magnets lose their magnetism at a much slower rate. The only thing you needed to do is stick in a new drive and tell the system to use the new drive.
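On ZFS, that "tell the system" step is a single command (pool and device names are hypothetical):

    zpool replace tank /dev/sdc /dev/sdf   # swap the failed sdc for the new sdf
    zpool status tank                      # watch the resilver progress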
Where are you getting NAND flash for so cheap? My spinning rust is 18TB disks at ~$430/ea right now (~$23.89/TB) [WUH721818AL5204] {up from ~$21.83/TB}. Though I have 19 of them plus 24x 8TB disks [HUH728080AL5200] all in one system, so my needs are a bit different. That, and SAS SSDs are pricey AF, but not half-duplex SATA garbage. Too bad Broadcom bought the PLX guys out, or we might have had better options for large-count NVMe NAS/JBOFs.
please put together that tutorial series
Homepage is awesome
I don't understand how people can function with only 6TB of storage. Your storage pricing is weird. New hard drives here can be routinely found for $15/TB, and flash storage is $50/TB on a deal. Not saying you're wrong for where you live, but are hard drives that expensive there?
A lot of people live within the 5-15GB of free space that cloud companies give away.
Anyway, I was thinking the same; I don't know how he got to a 2:1 ratio from HDD to flash storage. I got 12TB HDDs for 110€ and 18TB enterprise-grade HDDs for 190€, so around 10€/TB. Meanwhile, my most cost-effective SATA SSD is probably a no-name 1TB SSD that I got for 43€, or an 8TB Samsung QVO for 345€: so 43€/TB for used drives. If I had to go new and NVMe, the lowest is around 70-75€/TB for a 4TB NVMe. The 54TB that I got for 540€ would cost me 3780€!
That's more like 4:1 for used SATA and 7:1 for new NVMe.
Cheapest €/TB is about 10 to 16€ for spinning rust.
The cheapest SSD I can find goes for 49€/TB.
I would for sure watch a step by step
Please make a series out of it.
You should switch from Pi-hole to AdGuard Home.
Those are rookie numbers. I sat on my error count climbing to 677 over the course of a month before finally replacing the drive (didn't lose data, raidz2 array).
Okay, you're even more daring haha :)
Finally just had one drive fail in my 8-drive RAID-Z2. It's 6 years old. I yawned. I'm still yawning. I'll wait until next year to buy a replacement. Maybe get a 20TB± helium and start slowly replacing every drive.
Why would either of you two specialists lose any data in a mirror setup with only one failing disk to begin with? You are both running a one-drive setup at this point.
@@DJDocsVideos Might I suggest searching for RAID-Z? Because I can still lose another drive without losing data. The number is the number of parity drives. (not really, but close enough)
@@klfjoat Yeah, I did that back in the day when we helped port it from Solaris to FreeBSD.
You should read up on SnapRAID if you couldn't extrapolate that the number of parity drives is variable.
Of course I could run 2 data and 2 parity drives, which is your 4-drive RAID-Z2 config, but I decided to run 4 data drives and one parity drive, as I see no point in keeping a damaged disk running, and short of events that trashed every single disk in a machine, I haven't seen more than one drive fail at the same time in 30+ years.
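For readers following the thread: in ZFS the parity count is baked into the vdev type when the pool is created, e.g. (hypothetical pool and disk names):

    # one drive's worth of parity: survives any single failure
    zpool create tank raidz1 sda sdb sdc sdd sde
    # or two drives' worth: survives any two simultaneous failures
    zpool create tank raidz2 sda sdb sdc sdd sde sdf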
I doubt multicast DNS is the best fit... Won't that constantly broadcast across the whole network, like you mentioned, shouting all these IPs? Why not set up an Unbound server instead and use a proper DNS server to handle this?
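For comparison, the Unbound route suggested here is also just a couple of lines of static config per host. A minimal sketch with a hypothetical zone and addresses:

    # unbound.conf
    server:
      local-zone: "home.lan." static
      local-data: "nas.home.lan.      IN A 192.168.1.10"
      local-data: "jellyfin.home.lan. IN A 192.168.1.11"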
8:18 and that's how world war 3 started kids.
If you understand hardware design, all hardware is open source with a little effort 😂
Why is he yelling?
He has a video about it 🤭😁🤣
HE IS ALWAYS YELLING constantly. Very annoying actually.
Methinks TerraMaster is Doing A Marketing. 😂
A review of the TerraMaster 12-bay HDD model on one advanced YouTuber's channel released the same day as a review of the TerraMaster 8-bay M.2 model on a different, moderately advanced YouTuber's channel.
why is he talking so aggressive lol
get back to working on your router
How about creating a mini-rack system for your router and other future network devices, so we can enjoy sliding the devices into a rack like a real one, stack everything nicely, and save some space?
So a half-width rack is apparently a thing for some small stuff. I found an open telco (2-post, not 4) one in tilted and straight versions when I looked a few months ago. I was thinking of putting all of my small 4-6 port Ubiquiti gear on shelves on it. Would have fit beautifully. But I have a UPS for the equipment that would have had to hang out on the ground, so I didn't go that route.
@klfjoat But no one really makes half-width devices, right? I got a full-width rack but it takes up so much space :(
That sparks some hope for me that new HDDs will come down in price. They have been stuck at 15-ish €/TB for over 2 years now, while SSDs dropped below 40. With the space, energy, and noise disadvantages HDDs have in comparison, they have to compete more on price. Hopefully the Seagate MACH3 results in drastic price drops for HDDs. I'm a huge fan of HDDs. Storage needs increase day by day; if you are fine building a storage-dense HDD NAS, the next few years are hopefully very promising.
For smaller and less maintenance-intense applications like the SSD NAS you show, it does make sense to go without spinning rust.
As long as the top end for consumer SSDs is still 8TB... not soon.
There are only 3 players left in the HDD "market", so why wouldn't they keep prices as high as possible? It has worked for OPEC for decades.
You can all stop wasting space with funny ZFS RAID constructs that are utterly pointless for you. Look into SnapRAID and mergerfs. Oh, and the other benefit: since most of you store media files that you write once and read a lot, you can easily go with SMR HDDs that way.
My test NAS uses 4 cheap-ass 8TB Seagate SMR HDDs for data and one 8TB Toshiba CMR HDD for parity. That allows one data drive or the parity drive to fail. Good enough for my BR backups.
It has been running stable for 5 years and 5 months as of today.
I use a separate SSD for temp media files like Jellyfin transcodes.
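A minimal sketch of that SnapRAID + mergerfs layout (mount points and disk counts are examples, not the commenter's exact config):

    # /etc/snapraid.conf - four data disks, one parity disk
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    data d3 /mnt/disk3/
    data d4 /mnt/disk4/

    # /etc/fstab - pool the data disks into one mount with mergerfs
    /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

Parity is then computed on a schedule (snapraid sync, plus an occasional snapraid scrub) rather than in real time, which is why write-once media workloads, and even SMR data disks, suit it well.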
@DJDocsVideos Yeah, definitely; I fear we won't see more than 8TB for consumer SSDs in the foreseeable future. We can be glad there's a WD_Black SN850X 8TB model now. Most consumers rely on cloud services for "comfort" (lol) and don't practice local copies and proper storage. The HDD business is in fact an oligopoly; however, three providers can still push prices down when one of them has a massive advantage. That can currently be seen with Intel and their 2nd-gen Arc GPUs. I hope the same thing happens with HAMR: WD and Toshiba cannot compete with its storage density or efficiency per TB right now, and a 1/3 uplift in storage density matters for enterprise storage, where those densities usually live. Because the big companies buy drives in the thousands, Toshiba and WD would have to compete on pricing. Hopefully. We'll see what happens :)
And yes, SMR is definitely an option for archival-tier storage. Even if you replace local storage with a 10G-capable NAS with, say, 6x 20TB HDDs and a 2TB SSD cache, you'll have to hit pretty heavy write workloads to overwhelm the HDDs. That's likely why Seagate and WD are ramping up SMR models again after stopping them for a while. Personally, I try to avoid SMR when possible, because you never know. I had my fair share of performance problems with SMR on the external Seagate Expansion Portables, even when reading; I didn't do that many write-intensive tasks, yet they still struggled, while 2.5" 5400rpm counterparts with CMR worked flawlessly. For my further DataHoarder arc, I specifically bought CMR Exos and Toshiba MG enterprise drives and never looked back. Perhaps in the future, when the price of the density advantage is really worth it again.
@tobs2470 I personally like the Toshiba MG series a lot. We have a good number of those running at our customers', and I upgraded my personal storage server with a bunch of 18TB models 😃