Wendell, last year while used SSD prices were under $25/TB, I colored outside the lines and built a ZFS pool of 16x Micron 1100 SATA SSDs using two RAIDZ1 vdevs. The platform is AMD Epyc Milan on the ROMED8-2T board you've demo'd quite a bit. Your channel was a big motivation and inspiration for doing so. Would you like to see a post-mortem on the process? I've thought about doing a big write-up/blog on it, but it's quite a bit of work and I'm not sure it will have value to others.
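For anyone curious, the shape of it is just two 8-wide RAIDZ1 vdevs in a single pool. A minimal sketch (pool name and device names are placeholders; a real build should use /dev/disk/by-id paths):

  zpool create flash \
    raidz1 sda sdb sdc sdd sde sdf sdg sdh \
    raidz1 sdi sdj sdk sdl sdm sdn sdo sdp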
I’m keen!
heck yeah, post on the forum or I can link to it :D
Yeah show us
@@Level1Techs Sounds good - I'll put it on my TODO List. Currently putting the finishing touches on my brother's Unraid machine to take back home with him after our annual family vacation.
I might just document both machines for compare/contrast if I can get to it before the trip.
The price is double for now
That feel when you sit down with food and your favourite techtuber dropped a video 👌
Every time I see a PCB analysis I start craving Chipotle burritos, because I'd sit down at work with one and GN would drop a video like that on the weekend (I worked nights back then).
I didn’t upload anything tho
Whoa, you read my mind; I just ordered food at work and wanted to watch this video while eating.
Just a little note on the NetApp shelves: the one you are presenting is a DS4243 with IOM3 modules. You can replace the IOM modules with IOM6 or even IOM12 modules (IOM12 will give you 12G SAS out the back, but still only 6G to the disks). Because I work with this stuff, I use the DS212C shelves (12 disks in a 2U shelf with 12G SAS all the way to the disks). There is also the DS460, which is 60 slots in a 4U shelf, all 12G SAS. In general, the NetApp shelves work very well and are very stable.
SAS2 is fabulous from a drive cost perspective. I have a pile of 4TB SAS2 SSDs that I got back when I was still running a 4U server. I'd LOVE to have a shelf full of SSDs on the cheap.
Very glad to see this coverage of the MS-01; I picked one up as soon as they were announced and have been using it for this exact scenario. I have a 9300-8e card going to two MD1200 shelves with 24x 4TB SAS drives in a 4x6 RAIDZ2 config, which gives me about 58TiB of usable space, plus a couple of NVMe SSDs in the MS-01 itself for cache/log. TrueNAS has been a great platform for this config, and I was able to get rid of a bunch of old servers, which saved me about 700-800W of power draw.
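In case anyone wants to replicate it, the pool is essentially this shape (sketched with placeholder device names; use /dev/disk/by-id for real disks):

  zpool create tank \
    raidz2 d1 d2 d3 d4 d5 d6 \
    raidz2 d7 d8 d9 d10 d11 d12 \
    raidz2 d13 d14 d15 d16 d17 d18 \
    raidz2 d19 d20 d21 d22 d23 d24 \
    log nvme0n1 cache nvme1n1

The capacity math: 4 vdevs x (6 drives - 2 parity) x 4TB = 64TB raw usable, which lands right around the 58TiB figure after TB-to-TiB conversion.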
Is your minisforum still going strong?
5:49 The SAS connector on the disk shelf is very much standard: SFF-8088 is the older standard for external SAS (6Gbit) cables; with SAS 12Gbit and higher they went to the square SFF-8644 connector, because it's smaller and you can fit four of them on a low-profile card (which is not possible with the older one).
No, the NetApp machines use a QSFP connection, which is neither of those. You definitely need a cable that goes from QSFP to SFF-8088.
11:15 Fun fact about interposers: many of those I tried (cheap ones easily available on eBay) will cap your speed to somewhere between 250 and 500 MB/s. This is a non-issue for mechanical drives, but it will bottleneck SSDs. They also might not like some SSD models/brands at all and refuse to even show up on the bus.
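A quick way to spot a capped interposer is a raw sequential read off the device; it's read-only, so it's safe (here /dev/sdX stands in for whatever the shelf exposes):

  dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct

If an SSD that should manage ~550 MB/s reports 250-300 MB/s here, the interposer (or cabling) is the bottleneck.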
They'll bottleneck throughput, but some applications mainly need low latency rather than high throughput, so SSDs can still make sense behind one.
Wendell bullying a LLM into being The Computer from Star Trek is something I've been waiting for a good minute. 😂😂
" Linus called me for tech support". lol everybody knows Wendell is the man.
there's a reason I unsubbed from LTT around when I subbed to L1T. 🤷🏻
LTT not only acts like a fool
@@0xKruzr what's the reason?
@0xKruzr I laugh when people call Steve tech Jesus. Dude is well researched but doesn't come close to Wendell in actual tech knowledge and application. Steve is like the college kid who reads up on something and can regurgitate facts but is horrible in lab or application.
He's referring to Linus Torvalds
Important to note that you need dedicated airflow over those SAS HBAs in pretty much any desktop case.
What do you mean? The system fans aren't enough? I'm switching to an HBA this weekend - I think it's the LSI 9211-8i. I see that it has a heatsink, but I don't see a fan on the device.
@@iamamish They run HOT. The tiny heatsink is enough for being shoved into a 2U server with a full row of screaming 80mm fans. I had an 80mm fan strapped directly over it and it would still instantly burn my fingers if I accidentally touched it. As in blisters.
@@indignasmr7379 wow. So an extra fan is required?
@@iamamish I've had many different LSI models and they all needed active cooling. Inside my server, they're okay, but I was building a big NAS out of commodity hardware. A 140mm fan has lots of airflow and fits nicely over a few HBA and NIC cards. You don't need pressure, just airflow, so it can still be quiet.
@@iamamish Kinda. I have some 120mm fans dedicated to blowing air on my PCIe cards (HBA, NVMe riser cards, etc.), but before that I just mounted a Noctua 40mm fan with zip ties loosely over the HBA's heatsink, and it stayed cool for years.
You just need a little bit of airflow, because usually that part of an ATX chassis has quite stagnant air.
Jonsbo announced the N5 recently. 12x3.5" drives in a desktop case. Could be a nice option for people who don't want rack stuff. Hopefully they're using 3xSAS connectors instead of 12x SATA cables, but it's still early.
I've been waiting to see if anyone else would do this, and of course it was you! I'm so glad people are enjoying the full potential of the MS-01.
Try and get the newer LSI 9305.
It draws only half the amount of juice the super old 9300 does (~12W vs. 25W).
The additional 20 or 30 bucks are worth it.
The LSI 9400 is worth considering as well, especially if you have some spare U.2 drives lying around, because the 9400 can do NVMe passthrough.
Just got a 9400-16i for 160€ for my new NAS build.
Did you grab the actual LSI version or the Lenovo one for the 9400?
@@-Good4Y0u For that price, probably the eBay China edition. Retail price for the Broadcom 9400-16i is around 450€.
I am using the MS-01 with the QNAP TL-R400S and it works a treat. If I ever need more LFF drives, it should be easy to upgrade to a SAS controller as per Wendell!
An LLM to control my house immediately piqued my interest. Can't wait to see that.
Yeah, a self-hosted voice assistant is something I've wanted for ages, and a huge part of the reason is smart home control without having to wait a full minute for a cloud assistant to process the command, send it upstream, process it again, forward it to another cloud service, and then finally send it straight back to my network.
@@bosstowndynamics5488 There are some simple, limited ways to go about that sort of thing if you offload to a paid service, meaning the footprint is small. A local LLM does kinda need a pretty phat GPU to be more than the basic stuff; that is the limiting factor for most. You can get reworked 2080 Tis with 22GB VRAM from China for about $500, which should allow a decent response time as well as model size, but beyond that the price tag climbs very steeply. Sure, bigger model and faster chip, but you'll need to be a dedicated enthusiast to drop $10k or more on a GPU for LLM use.
@@noth606 That's mostly for training and high end models though, a pre-trained, narrow scope, optimised model for (relatively) simple tasks like processing natural language voice commands can run on a Raspberry Pi (Facebook's LLM has a cut down version that runs quite happily on the RPi and in terms of raw capabilities should be pretty close to sorted for this task). The big problem is actually plugging everything together.
Can I just say, I'm glad we got a YouTuber who teaches how to turn e-waste into gold. Most of 'em just want us to go buy the new hotness… not Wendell, he's a man of the people! Rejoice in his glory… Wendell 2024, a disk shelf in every pot.
That LLM smart home project sounds like part of what I'm working on as a startup; it would be amazing to have a chat about it. Looking forward to the video as well.
I have a chassis with 20 bays with a SAS backplane, which I connected to my 9400-16i (and also one to my motherboard's SAS connector).
Later on I bought a 9300-8i to connect it to a self-made SAS JBOD (with an expander card using a SAS35X36 chip).
So far so good; haven't had any problems.
Digging all the recent Level1Techs content!
I use the MS-01 like this for one of our storage servers.
Use the 45 drives case for your fast disks (SSDs and dual actuator drives) and the netapp shelf for slower bulk storage
Thank you for this. I'm always nervous about cramming a bunch of hard drives into a case.
And here's me waiting for Milan CPUs to drop a bit more to maybe upgrade my Rome + H12SSL-i Proxmox setup, never mind Genoa. I think I'm going to grab myself an HL15 even though it costs a ton; like you, I plan to build in it now and leave it for a very long time, and for some reason the Rosewill RSV-L4500U seems to be unobtainium at the moment.
I have a QNAP TL-D1600S connected to a Minisforum MS-01, and together they make for a great home media NAS. I don't think the SFF-8088 mini-SAS cables are a bottleneck 🚀 But if it is within your price range, the one-device solution, the QNAP TVS-h1688X, might give better performance.
Probably the best and simplest breakdown of SATA vs. SAS! Thanks!
My disk shelves are Promise VTRAK units, which work pretty much the same. Cheap (free to me) spinning rust isn't a bad option for slow storage in large quantities.
Targeted the MS-01 just for this cause. Retiring an HP 360P Gen9 as the head unit on top of the disk shelf. Working flawlessly since implementation.
Those Seagate Exos drives are some of the most top notch HDDs I've ever used.
11:57 look at those prices, how how?
I (sometimes) hate this side of the pond :(
3:38 that's a pretty short-looking 1 meter. Is Wendell a giant?
no, it's just an imperial meter.
@@gustavo_vanni dammit, imperial Sata cables are shorter
@@marcogenovesi8570 Yeah... it was probably measured using someone else's cable.
The 45Drives case is kinda expensive… the Fractal Design Define PC case can fit almost as many drives (although without a hot plug backplane)
Does this setup support ECC RAM?
Wish Minisforum would come out with an updated workstation; wondering when that will be.
Won't those LSI cards literally fry or shut down the machine inside the MS-01? The airflow in it is quite subpar, and those tend to get really toasty.
If you put that LSI card in the MS-01, you need to add a fan. That card overheats in that computer without airflow.
There are quite a few reports of JBODs (many QNAP, I think) on LSI HBAs reporting bad-sector check failures on new drives, failures that disappeared with non-MS-01 hosts. Not sure if there's a fix for that now, whether it just wasn't tested extensively in this video, or if it's PEBKAC.
I'm also concerned about this. I got an MS-01 and wanted to connect a disk shelf to it, but those posts made me hesitate. Some people in this comment section seem to have done it successfully, though.
Just built a gaming rig with a 5600X, 32GB of B-die, a 7800 GRE video card, a 2TB 980 Pro drive, and PrimoCache.
The whole family loves it; got it all used off eBay for dirt cheap.
I am running 8 drives in an Antec P101 case - if I wanted more, I'd probably go with something like the 45HomeLab case. Honestly, though, for the vast majority of home use cases, 8 drives is already ridiculous overkill.
I'm migrating my drives to an HBA this coming weekend. I've already bought and flashed it (updated firmware, IT mode, etc) and I'm making a few other changes to my setup too.
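For anyone flashing theirs, the usual LSI crossflash sequence looks roughly like this with the sas2flash utility (firmware filenames vary by card and release; note the SAS address from the card's sticker before erasing):

  sas2flash -list                # confirm the controller, note its SAS address
  sas2flash -o -e 6              # erase the existing flash
  sas2flash -o -f 2118it.bin     # write the IT-mode firmware for your card
  sas2flash -o -sasadd 500605bXXXXXXXXX    # restore the SAS address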
This is something I was looking at doing, purely because of all the I/O in the box. But then I also wanted a full GPU, which is the main annoyance.
I'd have to use a PCIe NVMe-to-x16 adapter for the controller and have the GPU off a riser as well.
Alternatively, building something with the chip in it.
The main thing I've been wondering about disk shelf controllers is their overhead.
For example, you can get an HBA SAS card with 16 ports, which technically would allow you to connect 16 drives with breakout cables, adding zero power consumption overhead.
These disk shelves let you share SAS lanes with a lot more drives, but what is the trade-off in terms of power consumption?
EDIT: 7:00 250W just idling? With the drives or without them?
With drives. The fans and expander modules are not low power, but it's around 50W empty.
@@marcogenovesi8570 ok thanks
Will definitely be going down this path in the future. Waiting for next gen threadripper CPU & nvme pcie cards.
Could someone please direct me to the video Wendel referred to in this video? The one with the thumbnail titled "most ridiculous home server chassis"? It seems like required reading and I've not been able to find it. Help please?
th-cam.com/video/Ipkg0qj3qq0/w-d-xo.html
hey, it's this one, the 45drives homelab chassis: th-cam.com/video/Ipkg0qj3qq0/w-d-xo.html
I tried to see where the MS-01 is connected to 24 HDDs, but I couldn't find it. Works as a nice theory class, though.
Wendell is the premium tech support for other tech youtubers :)
So, here is why I chose the 45HomeLab HL15. I could not find one of the mini machines that has ECC RAM. I run TrueNAS on mine and thus ZFS. I also didn't want the headaches that come with building all this stuff and finding all the roadblocks on my own.
10:02 -- "Make sure that you've got a Plan B."
That's exactly my Golden Rule #1 for Hardware Architecture design and planning.
For us puny peon non-hyperscalers, having double or triple sets of matched hardware and firmware at the ready seems essential.
Kindest regards, friends and neighbours.
11:30 -- errors "as a result of cabling or signal issues."
3:33 -- "SATA wasn't designed to push cables this long."
Thank you 👍
Kindest regards, friends and neighbours.
Is there a mini PC with lots of HDD space for around 200? I just need something simple for a one-user local Jellyfin server with 4 HDD slots.
If you can find a SilverStone SST-CS351B on sale, it could fit the bill.
@@ro55mo22 I'll check it out. Thanks mate.
Would love to get a deep dive into ASPM territory and which components let you always hit C8.
I love the video; I just don't know how useful it would be for most people. I know you called it out, but something like the HL15 or a cheaper chassis from Rosewill will probably be better for most. It is a single chassis, which saves space, power, and noise. HDDs are just so big now that unless someone actually needs to store a ton of data, the single system is going to be better. I mean, if you got 15x 24TB drives, that would be 360TB of raw capacity. Unless you are planning on using a bunch of old drives, I think the single system is better.
Wendell's content is generally technical enough that it's assumed the audience will know if they would find any given solution useful. As a hobbyist in the field myself, it was pretty clear to me that this setup is more than I need, and I would hope most/all of Level1's viewers are in a similar enough boat to at least figure out whether they need 15 or 24 drives.
@@bosstowndynamics5488 Yeah, I guess I just mean that disk shelves were great when drive capacities were 8TB and less, with less being the more common. I think increasingly exotic configurations like this will become less useful in favor of more exotic software solutions. Instead of having a bunch of 4-core servers with 16TB of total storage and having to figure out how to combine them to get useful work done, we are going to have 32-192 core servers with hundreds of TB of insanely fast flash storage that we will need to figure out how to split up to get useful work done.
Will this SSD fit in the U.2 slot of the Minisforum MS-01 and function without any issues? It's a Western Digital Ultrastar DC SN650: U.3, 15mm, 15,360GB, 2.5-inch, PCIe.
Feels like SSDs might still make sense if you need low latency? Plus you can use fairly cheap SSDs.
From a pure cost perspective, everyone interested in setups like these should take a mental step back and ask if more than 6 drives are *really* necessary.
No, really. First impulse is "this looks nice, finally unlimited space for drives, hot-swapping and industry grade hardware!".
And we all see the appeal. But *for the money* most of us are better off with a (potentially multiple) smaller machine(s).
For my last NAS I went from 4x 3TB in RAID 5 to a single 18TB disk for my second machine. Second, because the old one still lives at a remote location and is a mirror of the new one. There will be a point when the old one is full, but I can get cheap 8TB drives by then. Not the point.
The point is that one 18TB drive (with a flash cache disk in front) satisfies my performance and capacity needs. It could fail and be offline until the new part arrives and has re-synced or restored the data onto it. But for a home server I can live with that. Backup in general is cheaper than actual redundancy, and you need backups anyway.
Anyway, with energy cost and living-room-compatibility (a.k.a. WAF) in mind, smaller systems make a lot of sense.
Might it be possible to attach two nodes to that shelf and have some sort of failover, like the NetApp ONTAP software does?
Heard about the oxidation flaw in Intel's new chips fitted in these PCs?
When talking about out-of-band management, don't forget the ASRock PAUL, which is still in stock :D
How does the MS01 handle the heat generated by those cards? That’s my main concern.
Yay. I was researching something like this.
Thanks for creating a video.
Could you daisy-chain JBODs and have more than 24 disks?
Where do you get the interposers to connect SATA drives to a dual-port SAS storage shelf? I have such a shelf that does not recognize SATA drives, but SAS drives come right up. Guessing I need those interposers, but all the ones I can find say they're for connecting SAS drives to SATA motherboard ports, not SATA devices to SAS controllers.
They are on eBay, often listed as "interposers". They are not standardized, so you probably need to look for ones from the same brand as your shelf.
Also, you can't connect SAS drives to SATA ports; there is no device that does that conversion, because why would anybody want that?
@@marcogenovesi8570 I found hundreds of SAS-to-SATA "interposers" on Amazon, Newegg, eBay etc. It sounds like they're saying that some boards support SAS on their SATA ports, and that you can use their gadget to hook SAS drives up that way. I'm not familiar with that and have never wanted to do it, but it must be useful for somebody. 🤷♂
I did see some sleds with interposers on eBay that look compatible, though they're like $30/each which would make a "free" 24-bay project *substantially* more expensive... 😞 I ordered one to try it, and if they work I guess I'll start watching for a deal.
@@jeffw991 Hm, very important: the interposers you want MUST have electronic components and chips on their board. Those are what actually convert SATA to SAS. There are some interposers (often the cheaper ones) that are just a physical port adapter with no chips, only the ports and the bare green board. Those do nothing.
Afaik interposers are not a "hot" item, so you can get away with asking for aggressive discounts. I never paid more than 10-15 euros/bucks apiece.
@@jeffw991 For example, the "NETAPP 110-00208+A1" is one of those interposers that has no chips and does no SATA-to-SAS conversion.
I love my HL15 but I really wish they would make one that supports 30 hard drives
Is there any case from AliExpress that takes 24x U.2 drives?
13:14 Please use Majel Barrett's voice for this! 🖖
What if you still want dual-pathway redundancy for power and data? ATX or 45Drives not gonna cut it? Updated recommendation for disk shelves?
WD just announced 32 and 64TB NVMe SSDs for Q3 '24, with 128TB next year and 256TB the year after. We might get a capacity-vs-price run-up that could benefit users. Can't wait to consolidate all my archives, DVDs, and external drives onto a mini PC with two U.2/U.3 slots.
Yeah, but that's WD; I wouldn't put a floppy's worth of data I actually cared about on a WD.
@@JF32304 what's your gripe with WD?
I actually thought about getting an MS-01 with QNAP's TL-800S (or two of those with an appropriate controller) or a TL-1600S as a simple NAS. Or an equivalent one day in the future :)
What is the model of motherboard being used in the test bench?
This is exactly what I'm doing, using a Node 804 with disks on both sides of the chassis. It's cheap because I already had it.
Where is the A1 version? There is next to no information on when that will be out
Even four SATA drives (3x 2TB SSD, 1x 8TB HDD) on Arch Linux (kernel 6.12.1) with a B650 motherboard get errors...
I feel so lucky to have an old Lenovo SA120 for this exact use.
Just a tip for building a chassis for a home server:
Use an old (like 10+ years) ATX big tower case which has a lot of 5.25" slots in the front, then get disk cages which convert those to 3.5" bays, and voilà, you get something very close to the HL15 for way less.
Sure, you can't mount it in a rack, but as far as I can tell, not many people use racks if they have a highly converged home server which runs all of their stuff.
I use an old Lian Li big tower that I got used for about $40 which has 12 5.25" slots. It houses 12 drives + room for more, and has proper front-to-back cooling like a regular disk shelf/server chassis.
One big thing to keep in mind is that only about 3% of the world's population lives in the US, so using off-the-shelf ATX cases is usually way easier and cheaper than something like an HL15 chassis.
There is one tower from about 12 years ago that would be very suitable: all aluminium, the front is all 5.25" bays, mesh side 'windows', etc. It was on the pricey side at $300-400 back then, but good quality relative to most consumer-class towers. Lots of mags and sites built Skulltrail rigs and the like in them, but the name escaped me for a moment, which is dumb since I own one.
- Found it: the CM Stacker 830. It might be about as good as you can get for this sort of thing, although I will warn that E-ATX is iffy; I have a big dual-socket Xeon Supermicro board in mine and it needed a hacksaw massage to the case to fit. The downside is that the aluminium is a bit on the soft side, so care should be taken moving the thing when loaded. The upside is it's easy to modify and looks decent IMO.
That's a real nice Minisforum computer. Perfect!
Is this still a recommended platform with the Intel CPU issues currently ongoing? I wanted to go with the Ryzen version, but it lacks the PCIe slot and the SFP+ ports I really liked on this one for a server solution.
What’s the best interface card for storing farts as raw data?
Thank you!
6:20 - Why would you call it SAS6 if it's SAS2 at 6Gb/s? SAS3 is the latest generation, supporting 12Gb/s... I understand why it might be more convenient, but it's terrible to mix naming conventions like that.
Waiting for someone crazy enough to build a 3-node Ceph cluster of MS-01s using the TB ports, then connect all three of them to another three disk shelves.
Jim's Garage....
@@DavidAshwell But he didn't connect an external disk shelf to expand the Ceph pool.
@@xgod978 still think Jim would have been better off using the 10G networking for his cluster network and using the Thunderbolt for drive expansion. I really think it was all to eliminate the need for a 10G switch and they're really just not that expensive anymore!
PLEASE do a video soon on how to use an LLM for home automation. I have Ollama running with a bunch of different open LLMs that I downloaded to show RAG to my boss (it's SOOOOO slow on E5 v4s without GPUs!), so I can run simple LLMs, but then I have to use Google Gemini or whatever for the Home Assistant integration when I want to run it locally!!! HAOS is SOOOOO much better now that it's integrated, though; no more messy aliases everywhere, I can just speak and it just does it. Soooo nice, but I want it in-house!!!
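In the meantime, the local side is easy to poke at, since Ollama answers plain HTTP (assuming a default install on port 11434 and a model you've already pulled, llama3 here as an example):

  curl http://localhost:11434/api/generate -d '{
    "model": "llama3",
    "prompt": "Should the hallway lights be on at 11pm?",
    "stream": false
  }'

Recent Home Assistant releases ship an Ollama conversation integration that you can point at that same endpoint, which is what makes the fully local setup viable.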
Also, I'm poor; please give me a NetApp shelf :( I'll never get one working for a school.
I would love to see an "M1 Pocket Mini Soft Router" with an M.2 (2242) to Mini SAS HD adapter running this drive bay monster 😎 I guess you could network boot 🤓
I'd like an HL15... but man, I just can't justify 900 bucks on a case. I have a used Supermicro 4U with dual 900W gold Supermicro PSUs that I got for free. It plugged right into a Z690 with a 12900K. It's not the nicest-looking case, but it gets the job done!
Do you communicate with Dave's Garage? You two could make an interesting video.
Hi, nice and useful video, this is worth exploring further.
Hey guys, great video!
I'm already waiting on the Home Assistant LLM project video, if that will go out at any point in time 😂
Overkill much? What most people "need" is 2 to 4 drives, and what works best (as in, is cheapest and gets the job done) is an external disk enclosure with JBOD mode over USB (USB 3.1 Gen 1 is enough) or eSATA: something like a FANTEC QB-35US3-6G or an IB-3640SU3 (both run fine even on a Raspberry Pi 4 with OpenMediaVault), or an Orico DS500U3 if you need 5 drives. You can even go as far as running an Icy Box IB-3810U3 for 10 drives, but at that point I would consider your overkill the better alternative.
I am using the front off an HP DL380P LFF as a disk shelf/jbod and an LSI 9208 (maybe) to hook it to my server. Write up on the L1 forums. 😂
The old Antec Twelve Hundred gaming case has 12 5.25" open drive bays in the front. Each bay can hold 8 hot-swap NVMe drives. 96 drives, anyone?
I'm not sure why you recommended the MS-01. The forums are full of reports of this thing crashing randomly and people returning their MS-01s.
And you can get 10 meter SAS cables so you can run your fast quiet computer locally, but not have to listen to the pile of hard drives.
250W idle is insanely high; it's why I don't use my disk shelves.
Waiting for Jonsbo n5 to be available so I can build my media server.
Here I was expecting to see how to connect like 6 drives to the minisforum.
But I should've known better - BAM GOLIATH DISK SHELF'D
Unskippable 2-minute ad :) you go, YouTube.
The prom bots are out in force today :/
Just call the MS-01 a SAS/10GbE dongle with some extra ICs.
just in general the MS-01 is an absolute beast with I/O. I'm still mad it hadn't come out yet when I built my Proxmox cluster early this year.
I’m early!
Thank you!
Wow you've lost a lot of weight! I remember you from Tek Syndicate.
Looking healthy!
He was forced to, due to being sick from a tick bite that had bad side effects affecting his ability to tolerate certain foods, mainly meats and dairy.
Not to sell him short. I know a fat vegan.
I don't think this problem will last his entire life.
The fat vegan thing is puzzling, unless it's a lot of beer, every day.
go go gadget Ozempic maybe. (this is not hate, I'm on it myself and it changed my life in a billion ways for the better)
@@Terran.Marine.2 it's my uncle. His wife is a vegan chef and he just eats a ton.
The MS-01 doesn't support ECC. I don't think I'd ever want to build a serious storage array with the lack of ECC support undercutting my trust in my data store.
Wow, so the MS-01 is way out of budget. Of course, if you're spending that kind of money, you have PCIe ports.
I would love to see someone attempting this on a NUC or similar
I miss the old intro music
500+ MB/s from a mechanical drive is mind-blowing.