I admit, I did skip to the end, but was so impressed, I went back and rewatched the whole video, then subscribed. Great information, it would have saved me a lot of heartache a year ago, hopefully it will save me a lot in the future!
Support for the Chinese mobos is basically non-existent. They have really bad quality control judging from the reviews online, and BIOS updates seem to be shared around in random MEGA links. You're basically playing a lottery with these boards, hoping nothing goes wrong.
An alternative to those, which is a bit of a bodge, are the Erying motherboards with Intel laptop CPUs. These CPUs have a bit higher consumption, but you get 24/28 lanes, three NVMe slots and so on. NAS Compares did a build video using one of these some time ago.
The boards you showed off are perfect and exactly what I was looking for. Right now I am running a 13 year old QNAP rack mount NAS for storage and a separate Haswell-based OptiPlex for Jellyfin with an Intel Arc A310 for transcoding. This board seems like one that would let me combine those two into a single box.
Very nice video! I bought this exact same Q670 motherboard even before watching your video... 3 NVMe, 8 SATA, IPMI, a real socket for a modern Intel CPU: too good to pass up. CWWK is starting to build really great motherboards for NAS. The only thing that could be even better would be ECC support with a W680 chipset.
Careful, Wolfgang's channel has a video labeled "Don't Make This Mistake Building a Home Server!" about getting an ASUS mobo in hopes of getting it all, and it turns out that with IPMI the iGPU of the Intel CPU can't be used for transcoding.
@@EmperorTerran go one step further and add a discrete GPU and do both transcode and inference. Oh, and @PeterBrockie this review earned my subscription. Keep up the good work, sir.
Great video. I watched the whole thing (although I did swipe the play bar to see if it reveals viewing behavior!). Thanks for making it, and for sharing your BIOS woes!
Myself, I'm still rocking the ASRock C2750D4I for my home NAS. This board supports 12 SATA drives out of the box. I have yet to use the PCIe slot for any HBA.
Did not skip, very interesting, as I was looking for a similar solution. Enjoyed the video, thanks! Will continue my Lenovo M720q route with a PCIe SATA card first.
Hi Henry, I have 4 3.5" drives and will power them separately with a 200W pico PSU (Inter-Tech 88882190) in combination with a 150W 12V power supply. I reckon that should be plenty of power. I am about to build the system this week and can then determine the exact power draw. For the enclosure I'm using a 3D-printed 4-bay ITX NAS found on Printables. I designed a bracket for the mini PC and it fits well. Later I will remix the back panel to fit my needs.
Loved the video. Yes, I've tried to flash the wrong BIOS to a board, but the installer caught it. The only place I have to accept soldered-in CPUs is in laptops, and I don't like them there either... And the audio jacks... I don't get that either. Great video. Love the fact you explained why. Thanks!!
Since I've yet to see it mentioned recently: Supermicro M11SDV
- Up to AMD EPYC 3251 CPU (8C/16T)
- 1 x4 M.2 slot
- 1 x16 slot that can be bifurcated
- 4x 1GbE (that's the con; the CPU has 10G capability)
- 12V/ATX PSU options
- Up to 512GB of DDR4 ECC RAM
- 4x SATA ports
Thank you for making this video! I'm running the CWWK N100 NAS MoBo and all the limitations you mentioned are felt. And, I wish they would quit stuffing 4xNICs into a NAS MoBo. Subscribed, in case you buy more and talk about them! :)
@@graysonpeddie Wouldn't it also be useful for Proxmox? I'm only looking at hardware for my first home server and thought I'd run Xpenology under Proxmox for the NAS part and then spare the other ports for something else. Tempting to buy one of those N100 CWWK boards with the 4-port Intel i226-V, but still hesitant.
With 16 HDDs I think a mini-ITX motherboard is no longer an option, because you need to provide good power and good ventilation, so the case will be big enough for an mATX/ATX motherboard with a few extra PCIe slots for SATA adapters, fast Ethernet, and proper bifurcation for M.2 drives. Like, you can use full ATX motherboards for E5 Xeon v3/v4, or Threadripper 1950X, etc. PCIe gen 3.0 is fine there.
Buy one of the quality B550 boards and a Ryzen 7 5700G. They have 2 PCIe M.2 NVMe slots on board, and a 4x4x4x4 PCIe configuration enabling another 4 M.2 NVMe drives without an expensive card. Quite a few have 2.5G LAN. My server with this idles at ~25W while having all the server services running, eating some 28GB out of 64GB RAM, ready for workloads. This SATA obsession reminds me of: "Roads?! Where we're going we don't need roads!"
@@PeterBrockie Of course you are right, but the idea is nice. You need a large loan anyway; these days HDDs are not that cheap. My 5 2TB WD drives (TrueNAS Scale) are still error-free after 14 years.
Spot on. I wish there were more options for mATX NAS cases other than from AliExpress. mATX boards also tend to be a good bit cheaper than an equivalent ITX board.
I never felt limited by the generic ITX boards I come across. A single M.2 slot gets you all the SATA you'd need for an ITX NAS case, Thunderbolt gets you 10GbE, onboard WiFi gets you WiFi 7 or 2 extra SATA ports, and you still have a second M.2 and the x16 slot left. This board is only good for a NAS and little else because of the CPU limitations; I'd rather have 16+ cores for everything else I'm hosting on that machine.
I used an ASRock H370M-ITXac motherboard (6 x SATA ports) in a Jonsbo N2 case with a Core i3-8100 CPU (eBay $40). That motherboard supports 8th or 9th generation Intel CPUs. In the one PCIe slot, I put in a 10G Ethernet card.
The board you talk about looks great but seems to be something of a unicorn. Found one used on eBay, but it's not available through the usual retailers or online.
I kinda hoped for ECC. Cuz all I see as something extra are the SATA ports, which, while nice, is kinda solved by an ASM1166-based M.2-to-SATA adapter (which reportedly supports the lower power states) and any mobo that has two M.2 slots. Gotta check that Ryzen board for ECC. Oh yeah, IPMI sounds good; would be nice to see in the video what that Intel version of IPMI looks like.
Basically any 12-14th gen CPU should have no problem transcoding anything with QuickSync. As for the older embedded boards, I doubt they will handle a 4k transcode well, but 1080p stuff should be fine.
Thanks. Your video is very informative and fits perfectly with what I was struggling to find. I think I'll follow your recommendation about the processor too. I wish this board had at least 3 or 4 NICs, but I think I can survive. Thank you again!
Great video, SUBSCRIBED! Great information and explanation! (Those boards are pretty interesting but i have no reason to upgrade from my z590/i5-11600k unraid)
I'm mid-build. ASRock X570D4I-2T: supports DDR4 260-pin ECC*/non-ECC SO-DIMM up to 32GB/DIMM, single PCIe 3.0 x16 slot, 1 M.2 2280 PCIe 4.0 x4, 2 OCuLink (PCIe 4.0* x4 or 4 SATA 6Gb/s), 2 RJ45 (10GbE) via Intel X550-AT2, and a remote management port. I think this is pretty much the ultimate one IMHO. Drop an AM4 Pro CPU and 64GB RAM in this, an M.2 boot/OS drive, and 8 SATA drives hanging off the OCuLink, and you have two 10GbE ports built in. The only irritation so far is the Intel-cooler-based AM4 mounting.
For months I was trying to decide what to upgrade my home TrueNAS SCALE box with. I have lots of experience over 20+ years with Supermicro, so I stuck to what I know. Ended up going with an X12STL-IF with a Xeon E-2336 and 64GB of ECC UDIMMs. The Xeon is way overkill, but it will take lower end CPUs and standard DDR4 3200 DIMMs. The 4.0 x16 slot will do 8x/8x via bifurcation jumper. Might do a GPU for transcoding or a dual SFP+ card later. Mini SAS HD w/sideband. Two DOM power ready SATA3 ports which I use for mirrored boot disks. 3.0 x4 M.2 slot for $takeYourPick. All around I'm satisfied with it. Yes it does idle around 28W for me, but I wanted the horsepower if I need it and the cost wasn't that much more for the Xeon E + ECC DIMMs.
Jesus christ, such a nice mobo and available... if only it had 8 SATA instead of 6. Since it has a single M.2, you would need PCIe to expand SATA, and that would mean no 10gbit.
Very useful information, thank you for that. I do have a question: did you test the bifurcation switch to x8/x8, and if so, did it work? Also, what does the BIOS look like, and does it have many customizable settings? Is the BIOS generic-looking, or does it actually look like one of the main brands'?
Damn.. I have already committed to the CWWK/Topton 7940HS 9 SATA port board for my home server, but that last board is definitely impressive, and outperforms it in a lot of areas.
The N5105 and J6413 both have QuickSync, albeit older encoders. That being said, most of these boards are available with an N100. Nothing wrong with needing audio, but for NAS boards I'd rather see something else use the board space audio takes up. You can always grab a cheap USB audio dongle with the same bottom-end Realtek audio chip they're sticking on these boards.
The problem is that you can essentially saturate a 10Gbps interface with just 5 or 6 SATA drives. For me, 4 SATA ports, or 5-6 in RAID, is the limit of what I would put on a 10Gbps system. Nothing wrong with wanting more storage, but not being able to utilize the disk bandwidth well is not good for me.
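Putting rough numbers on that claim (the ~250 MB/s per-drive figure is an assumed sequential speed for a modern 7200 RPM drive, not from the video):

```python
# Rough check: how many SATA HDDs does it take to saturate a 10GbE link?
DRIVE_MBPS = 250           # assumed sequential MB/s per 7200 RPM drive
LINK_MBPS = 10_000 / 8     # 10 Gbps = 1250 MB/s, ignoring protocol overhead

drives_needed = LINK_MBPS / DRIVE_MBPS
print(f"~{drives_needed:.0f} drives saturate 10GbE")  # -> ~5 drives saturate 10GbE
```

Which is why 5-6 spinning drives in a striped pool line up so neatly with a single 10Gbps NIC.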
@@PeterBrockie I was all ready to order one of these as soon as I read about it. Now, I'll wait for your review. I could see one of these taking over most of the duties of my homelab
The problem is that IO is very power hungry. The more lanes a CPU has, the higher the idle power consumption. My server uses about 15-18W idle at the wall with drives spun down (Ryzen 3600, 1x M.2, no GPU, 2 RAM sticks (RAM needs ~2.5W per stick), 4x 8TB drives), but even with optimizing, if you need the PCIe lanes you basically can't get any lower. Maybe you can achieve 10-13W at the wall, but if you need a $400 PSU to do it, meh...
I feel the cheaper ones you showed are perfect for what they can handle. You wouldn't have the raw processing power to saturate a 10G connection anyway. And the internal USB header is meant to house the OS, such as TrueNAS, while the audio header is included because these are used as NVRs a lot. I wouldn't even run more than 5 SATA drives on something underpowered like that. An HBA would be a waste even if it had the lanes for it.
I'd love to see a modern denverton type setup that was relatively inexpensive, low power but also had a bunch of PCIE lanes so you could do 2x M.2, 10gb on board and 8-10 SATA ports that was also fairly easy for consumers to get their hands on. That CWWK board isn't bad but no 10gb means I'd have to add a nic so it wouldn't have enough sata ports for my use case sadly.
@@PeterBrockie I've pretty much settled on something that is higher power and larger but comes with everything I want. A case that's going to fit 10 drives isn't going to be small regardless so I kind of gave up on something lower than 45w tdp.
The problem with this idea is you would need PCIe switching. The embedded CPUs do have pretty fine-grained bifurcation available at the motherboard manufacturer level, but that isn't always exposed to the users. But they are only going to have 8 lanes to work with, and you get what the manufacturer thinks you are going to use, with little to no configuration. The next step up laptop/desktop CPUs have way more lanes, but the configuration is pretty rigid. 20 lanes is great, but if the SoC only internally supports 16/4, 8/8/4, or 8/4/4/4, then there isn't anything a manufacturer can do. This is a problem I'm personally hitting. I have several of the cheap Tiger Lake ES mATX boards that I'd LOVE to reconfigure the hardware on. The SoC is for laptops, and technically supports bifurcating the x16 link into 8/8 or 8/4/4 links, but the motherboard manufacturer made it as a "gaming" board, so there isn't a way to configure it. I think it would require a BIOS hack AND a hardware hack, because I think the PCIe link bring-up for these SoCs checks high/low status on certain pins. It makes sense, since you would never actually be changing a configuration like this on a laptop motherboard; it would have the lane assignments baked into the design. But boy would it be nice to give 8 lanes of PCIe 4.0 to a 100G card, have 3x NVMe drives, and then use the 4x 3.0 lanes off the chipset for a SATA/SAS controller...
@@Prophes0r That's why I specifically mentioned Denverton. Something like that but more modern would have enough PCIe lanes for most low power NAS use cases (not counting a ton of NVMe drives) with 16 lanes instead of the 8-9 we get nowadays, especially with a modern iGPU.
I'd love one where they use the extra space for more m.2. Keep it one slot, but have like 6 NVMe slots for people who also want a fast storage pool with their drives.
Hi @PeterBrockie, thanks for creating the video, it was quite informative. I am currently in the middle of building a NAS and am considering the H670 + 13500T + SSD configuration. Could you create a setup video for this motherboard: how to flash the BIOS and configure the board? Thanks.
No consumer boards support it aside from AMD (and even then it can be sketchy depending on the motherboard). Personally I have changed my opinion on it over the years and don't consider it a big deal for home use anymore.
I have the ASRock X570 ITX motherboard in my N2, and my solution is going to be a dual M.2 PCIe expansion card. Luckily I don't need a GPU on this board even though I'm using a 3700X. Right now I'm having a problem getting bifurcation working, and I'm taking my time to diagnose it because I have to take the entire PC apart to plug in a GPU just to get into the BIOS.
Hi Peter, I am using the similar ASRock board, the Z690M-ITX (only 2 M.2 slots though), with a 12600K and a Hyper 212 EVO cooler. Did a Cinebench R23 run; it only got into the mid-70s max temps. Now I need to add some drives. Not sure what OS I will run, probably TrueNAS. Subscribed.
The CWWK Q670 board is having a memory problem. The board shares its resources (lanes?) between its PCIe 5.0 x16 slot and its first memory bank. When the PCIe slot is occupied with a x16 adapter (i.e. a Mellanox ConnectX-4 single QSFP28 port), memory in the first bank is ignored and it only recognizes 48GB.
I'm sorry, I can't reproduce this error, although I don't have 48GB DIMMs, just dual 16s. I have the same ConnectX-4 card and all my memory shows with no problem. The only memory-related issue I had was when installing a generic Realtek 2.5G card, which would cause the system not to boot with a memory error (3 beeps), but I saw other reviews for that card on totally different systems that had the exact same issue. PCIe slots and memory don't share lanes - memory is its own controller directly connected to the DIMMs.
Connect the Thunderbolt directly to a PC to get 20Gb/s. Something else I always wanted to try is an InfiniBand adapter connected over Thunderbolt, but I'm currently waiting for the M.2 -> PCIe adapter. I also just bought a Supermicro X10SLH-N6 for just 15€; it has 6x 10Gbit on board, but it's µATX. I guess it's also pretty nice for a NAS depending on your setup. If the motherboard area of your NAS is built like a 1U server, I guess you're in luck with that board, because you only need 1 PCIe card anyway.
I'm still waiting for a manufacturer to make an all-flash U.2 NVMe NAS enclosure like the QNAP TS-h1290FX but more affordable. I have a bunch of 8TB U.2 HGST drives but nowhere to use them other than an old Dell server.
I don't think an affordable one will come anytime soon unless it's using older hardware. Simply because of a lack of PCIe lanes on anything other than high end server stuff. That being said, if you're willing to drop down to something like 1st gen EPYC, you can probably get a board and a bunch of adapters to break out all the PCIe slots into NVMe.
Mmmmm... 3x NVMe slots: add 3x NVMe PCI-E Card Riser IIIs for 18 drives, plus the 8 onboard SATAs, then add a 16-port 6Gb SATA 3.0 PCIe card: 18 + 8 + 16 = 42 SATA 3 drives. But wait, there is more: 4 external USB ports. That is a whopping 46 drives. If you want more drives though, you can change the USB drives so each USB port connects to a dual HDD dock (dual 10TB drives), which brings it to 42 internal plus 8 external drives... 50 drives. Imagine the cost of 50x 10TB drives.
There are 1 to 5 SATA port multipliers. Needs SATA controller support (most or all modern ones should do), and all 5 disks will obviously share 1xSATA600 worth of bandwidth. In many cases that's not an issue.
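The sharing works out like this (the ~600 MB/s usable figure for one SATA 6Gb/s link is an assumption; real-world throughput lands a bit lower):

```python
# 1-to-5 SATA port multiplier: every drive shares one upstream SATA III link.
PORT_MBPS = 600  # assumed usable MB/s of a single SATA 6Gb/s port

for active in (1, 2, 5):
    # With FIS-based switching, bandwidth divides across active drives.
    print(f"{active} active drives -> ~{PORT_MBPS // active} MB/s each")
# 1 active drives -> ~600 MB/s each
# 2 active drives -> ~300 MB/s each
# 5 active drives -> ~120 MB/s each
```

Even the worst case (~120 MB/s per drive with all five busy) is close to a single HDD's sequential speed, which is why the sharing often doesn't matter in practice.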
The thing with port multipliers is that they are often unreliable and require older controller chips not found on newer motherboards. Many even require Windows specific drivers so they won't even work on something like TrueNAS.
@PeterBrockie the PMP functionality is a part of the SATA standard, so there cannot be any Windows-only PMPs. In Windows you may just need to install drivers for the SATA controller from the vendor (AMD, Intel etc) as opposed to Microsoft ones coming with the OS. As for reliability, indeed, certain kinds of hiccups on one drive may affect others. And hot-plugging one port may result in re-negotiation on all ports, which could cause issues.
It's an optional part of the spec as far as I know, so it can be a bit hit or miss. I've seen controllers which say they only support all their ports under Windows, maybe it's just a translation error. Haha
What would power consumption look like for a build with this and a 13500T? For example: the mentioned MB, a 13500T, 32GB RAM, 2x 12TB Seagate IronWolf, and low-to-moderate workloads in Proxmox VMs.
I took the Chineseium route. Originally, I started out using a loader that will not be named and it served its purpose as a file server. Now I’m getting into encoding my ISOs and home lab applications so I need the horsepower.
The motherboard seems to no longer be available, and the Amazon reviews are wild. Why someone would want to build a tiny NAS, I do not know. I'll stick to a Meshify 2 XL case with a zillion 3.5 inch hard drives. 😂
I guess if you really, really want a "brand name" board, but you're looking at about twice the cost and ECC SO-DIMMs (if you want ECC), which aren't the most common things around. Also only one M.2 unless you want to start trading Oculink ports for NVMe. The 10gig is always welcomed on ITX though. :D
I watched the whole thing and understood everything except: why is it recommended to use xx500 CPU and up and not 14100 for instance? What am I missing? thanks
You need a xx500 (or higher) CPU to use the remote control vPro features. If you don't care about 'em, use anything. Also, the 13500T is specifically mentioned by the board maker as the ideal CPU (supports vPro, low TDP, decent core count).
I think you're going to be stuck going to HEDT (like an older X299 board) or a server/workstation chipset. Intel and AMD don't usually have that level of bifurcation support on consumer stuff.
I think the issue is just the cost of SAS ports. Broadcom is basically the only one making anything SAS. I'm sure the cost to get their controllers for integration onto a motherboard is absurd. There might be a creative way to reuse the super common 9xxx-8i controllers, like putting a second horizontal x8 PCIe slot in addition to the normal x16 slot to fit a second card on an ITX board. Plus SAS isn't super popular in home labbing. I suspect most used SAS drives from enterprise clients are destroyed after use rather than sold used.
@@PeterBrockie SAS ports are cheap especially considering intel chipsets support SAS natively. The connectors are no longer expensive either with Nvme drives sharing the same ports as SAS drive breakout cables. The only drawback is the motherboard vendor has to enable SAS in the chipset via the bios, like thunderbolt.
@@PeterBrockie There are some X99 boards, and I could have sworn I saw an X299 board, but none of those are ITX. Do you count Intel Atom ITX with SAS ports as server/workstation? I don't, and you can just search "Atom ITX SAS", but some of those have extra chips. There are MANY Atom ~31 watt NAS-style boxes that handle 4-8 drives that I would not consider workstation class; they are IoT at best. Intel called those chips the C3000 SoCs. The C chipsets like C602 are identical to the desktop chipsets in every way, but they have unbuffered ECC enabled; they even support Celeron, i3, i5, etc. desktop chips and unbuffered memory. I know of ATX boards with SAS from Intel chipsets, but none for ITX. The closest modern board I found used the Intel H770. Look up the Maxsun Terminator H770 YTX D5 Wi-Fi; it looks to only support SATA.
Not really. You're so limited on space with ITX it's basically impossible to get anything other than an iGPU or really, really old GPU on an ITX board. Best bet is the fastest AMD G series APU you can find.
Is a mobo design possible that just exposes all the PCIe lanes? Surely that's the holy grail. I'm rocking an ASRock Z87 Extreme4, 32GB RAM with an i7 4770K or i3 4130, still playing around. 4 HDDs and 2 SSDs hitting 40 watts with TrueNAS. Got 3 PCIe 3.0 x16, 2 PCIe 2.0 x1, and 2 SATA ports spare. My view is old hardware is king for NAS setups, but I'm still a complete newbie.
Yes and no. There are limits on how a CPU can split its lanes out. For example, you generally can't take all x20 lanes of a CPU and split them into 20 x1 slots. However, you can take a x16 slot and add a PCIe splitter to break it out, but they are only made by a few companies and are generally really expensive (especially PCIe 5.0). There are high end desktop boards out there with seemingly more PCIe slots than you should be able to get away with on the desktop and they will usually take the chipset PCIe lanes and run them through a PCIe splitter. So you're already limited to the max link between the CPU and chipset (usually x4 PCIe).
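The chipset bottleneck mentioned above is easy to quantify (the per-lane figures are approximate post-encoding numbers, and the x4 gen 4 uplink is an assumption typical of recent Intel chipsets, not something stated in the thread):

```python
# Approximate usable bandwidth per PCIe lane in GB/s, after encoding overhead.
LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen: int, lanes: int) -> float:
    """Aggregate one-direction bandwidth of a PCIe link."""
    return LANE_GBPS[gen] * lanes

# Everything hanging off the chipset (extra M.2 slots, SATA, NICs, splitter
# outputs) funnels through the CPU-to-chipset uplink:
uplink = link_bw(4, 4)                       # assumed x4 PCIe 4.0 uplink
downstream = link_bw(4, 4) + link_bw(3, 4)   # e.g. a gen4 NVMe + a x4 HBA
print(f"uplink {uplink:.1f} GB/s vs downstream peak {downstream:.1f} GB/s")
# -> uplink 7.9 GB/s vs downstream peak 11.8 GB/s
```

So even a board that fans the chipset lanes out through a splitter can oversubscribe its own uplink by 50% with just two busy devices.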
Why can't you replace the WiFi card with a SATA adapter, or plug the SATA adapter into the M.2 slot? Of course you can; to get to the WiFi slot you just need to unscrew the heatsink. I have a server at home in a Jonsbo N3: an Asus ROG B550-I and Ryzen 7 5700G with an M.2-to-5-SATA adapter, which gave me a total of 9 SATA ports, and instead of a WiFi card I installed an adapter for a second 2.5Gb LAN port. I solved the problem of only one PCIe x16 port via bifurcation and used an adapter from x16 to x8/x4/x4, which gave me two M.2 NVMe x4 slots and one PCIe x16 slot at x8 speed, to which I added a low-profile Intel Arc A380 GPU. So in total, not counting the adapters, the Jonsbo N3 has room for 12 drives: 8x 3.5 inch, one 2.5 inch, and three M.2 NVMe.
Seems to me PCIe configurations have gone totally sideways ever since gen 5... Most mobo OEMs are SO worried about having it that we get crap slot support (even ATX boards are usually just two x16 "size" slots that drop to x8 speed the moment you populate both, plus maybe two x1 that aren't much use for most people anyway), despite the fact that there is essentially nothing in the consumer market that can utilize it (no gen 5 GPUs, just M.2 NVMe drives where consumers will NEVER need/notice any improvement). I wish ALL mobo OEMs would just park the bulls**t gen 5 hype, stop the STUPID "armor"/putting all the extra M.2s on the back, and go back to using topside real estate for EXPANSION SLOTS like we're not all morons.
I think they just hit space limitations for the VRM. I'm sure there is a way to make more room like stacking the M.2, dropping audio, etc. But they probably figured it was just a NAS. Haha
Oof, that X570 ITX/TB3 has a very garbage chipset cooler and a terrible placement of the M.2 slot (on the backside). The BIOS is garbage too... how the heck did they get Intel to "certify" a board where Thunderbolt devices don't work at all during boot, and where hotplugging a Thunderbolt device fails to allocate any resources for the device?
It seems like none of these ITX boards support ECC. Looks like I have to move up to micro-ATX for that kinda support. I'm guessing you don't care too much about ECC in your builds?
There are a couple of options for ECC in ITX form with lots of SATA ports, but they are either expensive or use sketchy JMicron controllers, etc. You can track down some of the Supermicro boards with Xeons - many have lots of SATA and ECC support. However, you're going to pay a lot and have older hardware.

Most (but not all) consumer AM4 boards support ECC. But pretty much none have 8x SATA ports, so you're going to use your one PCIe slot for a SAS controller, or M.2 to SATA controllers.

Some Intel boards support in-band ECC, meaning they eat some RAM for error correction and turn normal RAM into ECC. Sadly, few boards support this feature and it's limited to select 11th gen or higher CPUs. The N100 supports it, but BIOS support is required.

Personally I am not concerned about ECC. It's nice to have, but I haven't lost a single thing to bad RAM as far as I know in my 30-ish years of computer use, and that's with storage in the hundreds of TB at home. Generally if you have bad RAM you're going to have obvious signs before anything serious happens - random crashes, etc. As long as you're backing stuff up and keeping on top of stuff like data scrubbing, it's a non-issue for me.
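If you do end up on a board that claims ECC (including in-band ECC), it's worth verifying the OS actually sees it. A minimal sketch using the standard Linux EDAC sysfs interface; on systems without active ECC it simply reports that nothing is registered:

```python
from pathlib import Path

# If ECC is actually active, the kernel registers an EDAC memory controller
# under /sys/devices/system/edac/mc (mc0, mc1, ...) with error counters.
edac = Path("/sys/devices/system/edac/mc")
controllers = sorted(edac.glob("mc[0-9]*")) if edac.exists() else []

if not controllers:
    print("no EDAC memory controller registered (ECC likely inactive)")
for mc in controllers:
    # ce_count tracks corrected errors; it should exist and normally stay at 0
    ce = (mc / "ce_count").read_text().strip()
    print(f"{mc.name}: corrected errors so far = {ce}")
```

A slowly climbing `ce_count` is exactly the early warning ECC buys you before the "random crashes" stage.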
I have been eyeballing this board for a while, though I have been holding off since Intel's booboo with the crashing 13th and 14th series CPUs, and they are not exactly forthcoming regarding which CPUs are affected. I'm actually hoping they would do this board with AMD processors. And to be honest, I'd rather have an AMD 7600 than an Intel i5 13500. Power consumption actually matters where I live, NOT CHEAP.
@@PeterBrockie TY for the answer. 20W at idle is more than fine actually; that's how it's going to live most of the time. It's gonna be a NAS/Plex server for the most part anyway. Though I do plan on running an ARK server; if it wasn't for that, the N305-type boards would be more than fine I think.
@sprocket5526 I don't mind the N series, but I think the N305 costs way too much for 8 E cores. A NAS board with the N305 is usually around $300. For a little more you can get this board with WAY more expansion, and a 13500(T or normal) and have 20 threads for actual VM work.
The last one of those which sold on eBay went for $1500. For a motherboard with a 7 year old CPU. :D It's not a bad board or anything, you're just paying a ton for something which has the same PCIe limitations as a lot of these boards. Only a single x4 PCIe 3.0 slot, single M.2 at x2 PCIe 3.0. The 10gig is nice though, I wish more boards had it these days.
Shift the conversation to mATX and all your concerns go away. I will say that if I look at this gem you found here I might get what I needed and save $3-500. With mATX you can get a ton more stuff though. Big NIC, more drives, pcie 4x4 nvme in numbers, etc.
It's kinda funny how few mATX options there are for the newer platforms (AM4, AM5, LGA1700) with 8 SATA ports. Obviously you're getting more slots so you can just toss in a card, but for native motherboard SATA it's just a handful of AM4/AM5 boards which are currently available with that many ports. It's a case of either going with older gen stuff or these specialized NAS motherboards coming out of China.
@@PeterBrockie absolutely. That's where I'm at now unless I go up to an ATX. Is there any reason an LSI 8 drive pcie card is less useful than ports on the motherboard? I love that cooler you've got dude.
@@ckckck12 Not unless you're trying to keep costs or heat/power down. The LSI cards do pull decent wattage when you're talking about a board which pulls under 20w. Plus they indirectly add noise since you -need- to cool them with either an attached 40mm fan or high case airflow. That being said, they work well and are cheap now. Plus with a cheap SAS expander you can add dozens of drives for under $100.
@@PeterBrockie ugggh. I am trying to get it low. Electricity is cheap here but I'm just trying to build a NAS to be relevant for 8 years or so.... Aiming at 35w tdp stuff. I have a Synology but from their show in Singapore this week I can tell they're walking away from small time users. I don't want a 2019 processor in 2030 and I doubt they're putting out a hot modern DS1825+ this fall. I swear you can't win these optimizations. Thanks for being helpful.
Curious if anyone has actually gotten this board properly configured with a 13500T and 5600 MT/s DDR5 RAM... the BIOS is probably the most convoluted I've ever configured, so I totally could be getting settings wrong. But right now my CPU is artificially being power throttled even though CPU temps are only 38 degrees C. Can't figure out how to increase the power limits. RAM is stuck at stock 4800 MT/s even though I have applied XMP for 5600. The CWWK web site has some support resources, but it's all in Chinese and poorly organized. Really impossible to find drivers... :/
The BIOS just has nothing hidden, so it has a billion confusing options. This is pretty common on boards like this. It's one of those things which is both good and bad: lots of control, but you gotta Google half the items in the BIOS. :D

The board by default has some pretty strict power limits (I think they actually obey Intel, ironically). For example, my system will pull 90W during an all-core load (13500T), but drop to 50W shortly after. It looks like there are options to keep it at full turbo, but I didn't look into it since the VRM is pretty wimpy and I'm using it as a low power VM system.

The memory settings are a bit weird, at least with the sticks I have. The XMP profile doesn't seem to actually have the right speed. I just selected it anyway and picked the correct speed under "Maximum Memory Frequency" or whatever it was called.

Update: The memory speed control I found is located at: Chipset -> System Agent -> Memory Configuration -> "Maximum Memory Frequency".

As for turbo: on my CPU, PL1 is 35W; you can find PL1 and PL2 under Advanced -> Power & Performance -> CPU -> View Configure Turbo Options. Before you go into the turbo options, under CPU make sure Boot Performance Mode is set to Turbo Performance, and also check your OS to make sure it isn't running a power saving energy profile.
@@PeterBrockie So a few things I've been able to figure out that greatly helped my sanity.
1. The 13500T CPU doesn't support RAM faster than 4800MT/s, so that explains why I couldn't set the XMP profile.
2. The CPU is limited to a 35W TDP, but I was expecting that it would boost to higher TDPs as needed temporarily under load. Was expecting my CPU fan to ramp up, but that never happened. But I confirmed the system was not being limited by measuring power draw from the wall.
3. Finally found a setting in the BIOS that allows the processor to run at expected clock speeds under multi-core workloads.
It was literally hours of trial and error and testing, but good to know it's working now. Figuring this out was the saving grace between keeping and returning the board. I guess with great power comes great... patience. I do think hardware-wise, it's a pretty versatile NAS board.
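One way to check the PL1/PL2 values the firmware actually programmed, without rebooting into the BIOS, is the Linux powercap (RAPL) sysfs interface. A minimal sketch (the sysfs path layout is the standard one, but whether it's exposed depends on the kernel and platform; the script just reports absence in that case):

```python
from pathlib import Path

# RAPL exposes the package power limits the firmware set, in microwatts.
# constraint_0 is the long-term limit (PL1), constraint_1 short-term (PL2).
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def power_limit_w(constraint: int):
    """Return the power limit in watts, or None if RAPL isn't exposed."""
    f = RAPL / f"constraint_{constraint}_power_uw"
    return int(f.read_text()) / 1e6 if f.exists() else None

pl1, pl2 = power_limit_w(0), power_limit_w(1)
if pl1 is None:
    print("no RAPL interface on this system")
else:
    print(f"PL1 = {pl1} W, PL2 = {pl2} W")
```

Comparing these numbers against wall-meter draw under load is a quick sanity check that the BIOS turbo settings took effect.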
The only bummer is I'm not a big fan of these random brand boards. They offer cool and weird features but you just got to wonder what's hidden in them. Granted windows is basically malware now so I don't know why I'm worried. 💩
Where would it be hidden? I mean, the BIOS flash chips can be read out, and a hidden device will show up to the OS as something (storage, keyboard, or whatever). In fairness, CWWK is a brand I would consider a name brand in this space, along with Topton, etc. They've been around for a while.
@@PeterBrockie beats me. Ironically I think Asus got in trouble for having malicious stuff on one of their boards. With these more sophisticated UEFI BIOS boards I think there is more they can do. Especially when most good boards can phone home for BIOS updates.
Yeah, it is ASUS. They store their driver installer in the BIOS (or maybe it's a system to just pass a download request to Windows). I'm just sayin', something like that would be found pretty fast, I suspect.
@@PeterBrockie probably. All my infosec friends tell me all I need to do is watch network traffic and write a firewall policy. I'm lucky I'm able to use Wireshark to figure out unknown static IPs of devices though. 💩
Great video! I've been researching NAS ITX motherboards myself recently, and I just stumbled across this video: th-cam.com/video/R41_ZW4nMR8/w-d-xo.html Looks like a new revision of the Q670. Looks interesting, but I haven't been able to find any more information about it anywhere. Opinions? Please let me know if you find a decent place to buy one.
I've seen that video and motherboard, but despite it saying it's available... it doesn't seem to be. I'll pick one up for review if it isn't crazy expensive. Looks like an interesting board. M.2 SATA falls into the same recommendation as PCIe SATA in that they tend to be unreliable as they use cheap controllers instead of chipset SATA. That being said, they do work for lots of people and are cheap, so if you're willing to take the risk and maybe pick up a spare if one dies, they can be a really good option for boards with limited ports.
I admit, I did skip to the end, but was so impressed, I went back and rewatched the whole video, then subscribed. Great information, it would have saved me a lot of heartache a year ago, hopefully it will save me a lot in the future!
Support for the Chinese mobos is basically nonexistent. They have really bad quality control, going by the reviews online. BIOS updates seem to be shared around in random MEGA links. You're basically playing a lottery with these boards, hoping nothing goes wrong.
An alternative to those, which is a bit of a bodge, are the Erying motherboards with Intel laptop CPUs. These CPUs have a bit higher consumption, but you get 24/28 lanes, three NVMe slots, and so on. NAS Compares did a build video using one of these some time ago.
The boards you showed off are perfect and exactly what I was looking for. Right now I am running a 13-year-old QNAP rackmount NAS for storage and a separate Haswell-based OptiPlex for Jellyfin with an Intel Arc A310 for transcoding. This board seems like one that could combine those two into a single box.
Very nice video! I bought this exact same Q670 motherboard even before watching your video... 3 NVMe, 8 SATA, IPMI, a real socket for a modern Intel CPU: too good to pass up. CWWK is starting to build really great motherboards for NAS. The only thing that could be even better would be ECC support with a W680 chipset.
A version with W680 would probably be $250 more going by other W680 stuff. Thanks, Intel. :P
What's the power consumption of the board + Intel 12/13th gen CPU? I heard they can be quite power hungry.
@@olokelo yeah, this is the aspect I want to know about the most
Careful - Wolfgang's channel has a video labeled "Don't Make This Mistake Building a Home Server!" about getting an ASUS mobo in the hope of getting it all, and it turns out that with IPMI the iGPU of the Intel CPU can't be used for transcoding.
@@EmperorTerran go one step further and add a discrete GPU and do both transcode and inference. Oh, and @PeterBrockie this review earned my subscription. Keep up the good work, sir.
Great video. I watched the whole thing (although I did swipe the play bar to see if it reveal viewing behavior!) Thanks for making, and for sharing your BIOS woes!
Myself, I'm still rocking the ASRock C2750D4I for my home NAS. This board supports 12 SATA drives out of the box. I have yet to use the PCIe slot for any HBA.
Did not skip - very interesting, as I was looking for a similar solution. Enjoyed the video, thanks!
Will continue my Lenovo m720q route with a pcie sata card 1st.
How are you planning to power the drives / what are you using for an enclosure? I have a m920q and am exploring diy "nas" options
Hi Henry,
I have 4x 3.5" drives and will power them separately with a 200W pico PSU (Inter-Tech 88882190) in combination with a 150W 12V power supply. I reckon that should be plenty of power. I am about to build the system this week and can then determine the exact power draw.
For the enclosure i use a 3d printed 4 bay itx nas found on printables. I designed a bracket for the mini pc and it fits well. Later i will remix the back panel to fit my needs.
@@EvertZwevert what case did you go with? I am also interested in the challenges you had with power!
@@ShivanandChanderbally I'm using this one on Printables: Network Storage - 4 Bay / 3.5" / ITX NAS
Loved the video. Yes, I've tried to flash the wrong BIOS to a board, but the installer caught it. The only place I accept soldered-in CPUs is laptops, and I don't like them there either... And the sound jacks... I don't get those either... Great video. Love the fact you explained why. Thanks!!!
Great video. Subbed. Only thing missing is SAS NAS support.
Since I've yet to see it mentioned recently: Supermicro M11SDV
Up to an AMD EPYC 3251 CPU (8c/16t)
1x M.2 slot (PCIe x4)
1x x16 slot that can be bifurcated
4x 1GbE (that's the con - the CPU has 10G capability)
12V/ATX PSU options
Up to 512GB of DDR4 ECC RAM
4x SATA ports
Thank you for making this video! I'm running the CWWK N100 NAS MoBo and all the limitations you mentioned are felt. And, I wish they would quit stuffing 4xNICs into a NAS MoBo. Subscribed, in case you buy more and talk about them! :)
The only reason why I would go with 4 NIC ports is if I want a router+NAS all in one. (sigh) Man this is crazy...
@@graysonpeddie wouldn't it be also useful for proxmox? I'm only looking out on hw for my first home server and thought I'd have xpenology under proxmox for the nas part and then spare other ports to something else? Tempting to buy one of those n100 cwwk with Intel i226v 4port, but still hesitant too)
With 16 HDDs I think a mini-ITX motherboard is no longer an option, because you need to provide good power and good ventilation, so the case will be big enough for an mATX/ATX motherboard with a few extra PCIe slots for SATA adapters, fast Ethernet, and proper bifurcation for M.2 drives. For example, you can use full ATX motherboards for E5 Xeon v3/v4, Threadripper 1950X, etc. - PCIe gen 3.0 is fine there.
Thank you, I'm not the only one. CPU makers love limiting PCIe lanes.
Buy one of the quality B550 boards and a Ryzen 7 5700G. They have 2 PCIe M.2 NVMe slots on board, and a 4x4x4x4 PCIe configuration enables another 4 M.2 NVMe drives without an expensive card.
Quite a few have 2.5G LAN. My server with this setup idles at ~25W with all the server services running... eating some 28GB out of 64GB RAM, ready for workloads.
This SATA obsession reminds me of: "Roads?! Where we're going We don't need Roads!"
Haha great response, exactly my thought, it's not that difficult...
And what are your options if you need more than 8TB/drive without taking out a small loan? Sadly we're still tied to spinning rust if you want any decent-sized storage.
@@PeterBrockie Of course you're right, but the idea is nice. You need a large loan anyway - these days HDDs aren't that cheap either. My 5x 2TB WD drives (TrueNAS Scale) are still error-free after 14 years.
Spot on. I wish there were more options for mATX NAS cases other than from AliExpress. mATX boards also tend to be a good bit cheaper than an equivalent ITX board.
I never felt limited by the generic ITX boards I come across. A single M.2 slot gets you all the SATA you'd need for an ITX NAS case, Thunderbolt gets you 10GbE, onboard WiFi gets you WiFi 7 or 2 extra SATA ports, and you still have a second M.2 and the x16 slot left. This board is only good for a NAS and little else because of the CPU limitations; I'd rather have 16+ cores for everything else I'm hosting on that machine.
I used an ASRock H370M-ITXac motherboard (6 x SATA ports) in a Jonsbo N2 case with a Core i3-8100 CPU (eBay $40). That motherboard supports 8th or 9th generation Intel CPUs. In the one PCIe slot, I put in a 10G Ethernet card.
The board you talk about looks great but seems to be something of a unicorn. Found one used on ebay but not available through usual retailers or online
@TheDesertsweeper It's on both Amazon (US) and Aliexpress. I'm pretty sure Ali ships to most places. CWWK also sells direct on their site, I think.
@@PeterBrockie I was responding to the Asrock this person talks about, not your board. Yes yours is everywhere!
Dude that is a perfect board
I kinda hoped for ECC. Cuz all I see as something extra are SATA ports, which, while nice, is kinda solved by an ASM1166-based M.2-to-SATA adapter (reported to support lower power states) and any mobo that has two M.2 slots. Gotta check that Ryzen board for ECC.
Oh yeah, IPMI sounds good - it would have been nice to see in the video what that Intel version of IPMI looks like.
Most people purchasing low end combo "nas" boards don't realize the issue with lanes being very limited.
Hi there, great video... One question: did you try any transcoding via Plex or Jellyfin with that CPU?
Basically any 12-14th gen CPU should have no problem transcoding anything with QuickSync. As for the older embedded boards, I doubt they will handle a 4k transcode well, but 1080p stuff should be fine.
Wow, great find - exactly what I'm looking for! Any idea how much power it will draw from the wall at idle?
Nice. I have a h670 I ordered maybe a week ago. Waiting for my 12600T to arrive.
Thanks. You video is very informative and fits perfectly with what I was struggling to find. I think I’ll follow your recommendation about the processor too.
I wish this board had at least 3 or 4 NICs, but I think I can survive. Thank you again!
You also need the V-type Ethernet controller if you need VLAN settings in Windows. The non-V doesn't support it.
Great video thank you!
Great video! Subscribed!
Great video, SUBSCRIBED! Great information and explanation! (Those boards are pretty interesting but i have no reason to upgrade from my z590/i5-11600k unraid)
Additionally, PCIe on mITX uses CPU lanes, which will prevent low C-states.
Great Video, I've been looking for a NAS MoBo for a while. Thanks!
I'm mid-build. ASRock X570D4I-2T: supports DDR4 260-pin ECC*/non-ECC SO-DIMMs up to 32GB/DIMM, a single PCIe 3.0 x16 slot, 1x M.2 2280 PCIe 4.0 x4, 2 OCuLink (PCIe 4.0* x4 or 4x SATA 6Gb/s each), 2 RJ45 (10GbE) via Intel X550-AT2, and a remote management port.
I think this is pretty much the ultimate one IMHO. Drop an AM4 Pro CPU and 64GB RAM in it, an M.2 boot/OS drive, and 8 SATA drives hanging off the OCuLink, and you have 2x 10GbE ports built in. The only irritation so far is the Intel-cooler-based AM4 mounting.
love this board, just wish it had sas instead of oculink
For months I was trying to decide what to upgrade my home TrueNAS SCALE box with. I have lots of experience over 20+ years with Supermicro, so I stuck to what I know. Ended up going with an X12STL-IF with a Xeon E-2336 and 64GB of ECC UDIMMs. The Xeon is way overkill, but it will take lower end CPUs and standard DDR4 3200 DIMMs. The 4.0 x16 slot will do 8x/8x via bifurcation jumper. Might do a GPU for transcoding or a dual SFP+ card later. Mini SAS HD w/sideband. Two DOM power ready SATA3 ports which I use for mirrored boot disks. 3.0 x4 M.2 slot for $takeYourPick. All around I'm satisfied with it. Yes it does idle around 28W for me, but I wanted the horsepower if I need it and the cost wasn't that much more for the Xeon E + ECC DIMMs.
Jesus Christ, such a nice mobo, and available... if only it had 8 SATA instead of 6. Since it has a single M.2, you'd need PCIe to expand SATA, and that would mean no 10Gbit.
excellent video - I wish there were more like it - subscriber count now +1
Very useful information, thank you. I do have a question: did you test the bifurcation switch to x8/x8, and if you did, did it work? Also, what does the BIOS look like, and does it have many customized settings? Is the BIOS generic-looking, or does it actually look like one of the main brands'?
Damn.. I have already committed to the CWWK/Topton 7940HS 9 SATA port board for my home server, but that last board is definitely impressive, and outperforms it in a lot of areas.
I started this video worried that I may have to return the parts I just bought on impulse, turns out I did good buying a 13500T ($169) and a Q670!
Depends, but SM have some Mini-ITX boards which has 8 ports and IPMI.
Some of us want audio jacks 😮! Lack of QuickSync video hardware encoding is a far bigger concern. N100 CPU is a must.
The N5105 and J6413 both have QuickSync. Albeit older encoders.
That being said, most of these boards are available with an N100.
Nothing wrong with needing audio, but for NAS boards I'd rather see something else use the board space audio takes up. You can always grab a cheap USB audio dongle with the same bottom-end Realtek audio chip they're sticking on these boards.
The problem is that you can essentially saturate a 10Gbps interface with just 5 or 6 SATA drives. For me, 4 SATA ports, or 5-6 in RAID, is the limit of what I would put on a 10Gbps system. Nothing wrong with wanting more storage, but not being able to utilize the disk bandwidth well isn't good for me.
Personally I'm way more likely to need more drives for capacity rather than adding them for bandwidth.
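For anyone curious about the saturation claim above, the back-of-the-envelope math looks like this (the per-drive throughput and link-overhead figures are assumed ballparks, not measurements):

```python
import math

# Rough check: how many SATA HDDs can saturate a 10GbE link?
# Assumed figures: ~250 MB/s sequential per modern 3.5" HDD, and ~93%
# of the raw 10 Gb/s usable after TCP/IP and framing overhead.
LINK_GBPS = 10
USABLE_FRACTION = 0.93
HDD_MBPS = 250

usable_mb_s = LINK_GBPS * 1000 / 8 * USABLE_FRACTION  # bits -> MB/s
drives_to_saturate = math.ceil(usable_mb_s / HDD_MBPS)

print(int(usable_mb_s))    # ~1162 MB/s usable
print(drives_to_saturate)  # 5 drives reading in parallel
```

With slower or fragmented drives the count goes up, which is why capacity, not bandwidth, is usually the reason to add more.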
The Jonsbo N5 has gone up to 12 and atx board support which is interesting but definitely not compact.
I saw that. I'll try to order one for review once Jonsbo has it for sale.
@@PeterBrockie I was all ready to order one of these as soon as I read about it. Now, I'll wait for your review. I could see one of these taking over most of the duties of my homelab
The problem is that IO is very power hungry.
The more lanes a CPU has, the higher the idle power consumption.
My server uses about 15-18W idle at the wall with drives spun down (Ryzen 3600, 1x M.2, no GPU, 2 RAM sticks (RAM needs ~2.5W per stick), 4x 8TB drives).
But even with optimizing, if you need the PCIe lanes you basically can't get any lower. Maybe you can achieve 10-13W at the wall, but if you need a $400 PSU to do it, meh...
can you maybe look into m-atx nas type mainboards pls? ty
Dang if that thing had dual 10gb Ethernet and 4x4x4x4 bifurcation this thing would be perfect for what I need.
I feel the cheaper ones you showed are perfect for what they can handle. You wouldn't have the raw processing power to saturate a 10G connection anyway. And the internal USB header is meant to house the OS, such as TrueNAS, while the audio header is included because these get used as NVRs a lot. I wouldn't even run more than 5 SATA drives on something underpowered like that. An HBA would be a waste even if it had the lanes for it.
I'd love to see a modern denverton type setup that was relatively inexpensive, low power but also had a bunch of PCIE lanes so you could do 2x M.2, 10gb on board and 8-10 SATA ports that was also fairly easy for consumers to get their hands on. That CWWK board isn't bad but no 10gb means I'd have to add a nic so it wouldn't have enough sata ports for my use case sadly.
You can always trade an M.2 slot for a 10 gig Nic if you're desperate.
@@PeterBrockie I've pretty much settled on something that is higher power and larger but comes with everything I want. A case that's going to fit 10 drives isn't going to be small regardless so I kind of gave up on something lower than 45w tdp.
The problem with this idea is you would need PCIe switching.
The embedded CPUs do have pretty fine-grained bifurcation available at the motherboard-manufacturer level, but that isn't always exposed to users. And they only have 8 lanes to work with, so you get whatever the manufacturer thinks you are going to use, with little to no configuration.
The next step up, laptop/desktop CPUs, have way more lanes, but the configuration is pretty rigid. 20 lanes is great, but if the SoC only internally supports 16/4, 8/8/4, or 8/4/4/4, then there isn't anything a manufacturer can do.
This is a problem I'm personally hitting.
I have several of the cheap Tiger Lake ES M-ATX boards that I'd LOVE to reconfigure hardware on.
The SoC is for laptops, and technically supports bifurcating the 16 link into 8/8 or 8/4/4 links, but the motherboard manufacturer made it as a "gaming" board, so there isn't a way to configure it. I think it would require a BIOS hack AND a hardware hack, because I think the PCIe link bring-up for these SoCs checks high/low status on certain pins. It makes sense, since you would never actually be changing a configuration like this on a laptop motherboard. It would have the lane assignments baked into the design.
But boy would it be nice to give 8 lanes of PCIe 4.0 to a 100G card, have 3x NVMe drives, and then use the 4x 3.0 lanes off the chipset for a SATA/SAS controller...
@@Prophes0r That's why I specifically mentioned Denverton. Something like that but more modern would have enough PCIe lanes for most low-power NAS use cases (not counting a ton of NVMe drives), with 16 lanes instead of the 8-9 we get nowadays, especially with a modern iGPU.
A board like that but mATX, with another PCIe slot and the 2nd and 3rd NVMe slots on the top, would be fantastic.
I'd love one where they use the extra space for more M.2. Keep it one slot, but have like 6 NVMe slots for people who also want a fast storage pool alongside their drives.
@@PeterBrockie that's a nice idea. That board I linked to in my other comment has PCIe bifurcation so you can have two NVMe drives per slot.
@@PeterBrockie Not enough lanes for that sadly. 3 is already very nice. Usually on mITX boards you have 1 or 2 maximum.
Hi @PeterBrockie, thanks for creating the video - it was quite informative. I'm currently in the middle of building a NAS and am considering the H670 + 13500T + SSD configuration. Could you create a setup video for this motherboard - how to flash the BIOS and configure the board? Thanks.
I like my ASRock H370M-ITX: six SATA ports, dual gigabit NICs, i3-8100 through i7-9700 support, QuickSync UHD 630, and an M.2 WiFi slot.
Nice board! The issue is lack of ECC RAM support
No consumer boards support it aside from AMD (and even then it can be sketchy depending on the motherboard).
Personally I have changed my opinion on it over the years and don't consider it a big deal for home use anymore.
@@PeterBrockie yes... I saw some boards with Xeons on Ali, but I'm not sure how well they really support ECC.
Search for:
X99 Motherboard Combo LGA2011 C612
@@HaimPeretz The W680 chipset or whatever it is called supports ECC on consumer 12th+ Gen chips, but they are rare and overpriced.
@@PeterBrockie
Please search for the:
X99 Motherboard Combo LGA2011 C612 in Ali
I have the ASRock X570 ITX motherboard in my N2, and my solution is going to be a dual M.2 PCIe expansion card. Luckily I don't need a GPU on this board even though I'm using a 3700X. Right now I'm having a problem getting bifurcation working, and I'm taking my time diagnosing it, since I have to take the entire PC apart - I need to plug in a GPU to get into the BIOS.
Nice video exactly what I needed
Hi Peter, I'm using the similar ASRock board, the Z690M-ITX (only 2 M.2 slots though), with a 12600K and a Hyper 212 EVO cooler. I did a Cinebench R23 run and it only got into the mid-70s max temps. Now I need to add some drives. Not sure what OS I will run - prob TrueNAS. Subscribed.
The CWWK Q670 board is having a memory problem.
The board shares its resources (lanes?) between its PCIe 5.0 x16 slot and its first memory bank. When the PCIe slot is occupied with a x16 adapter (i.e. a Mellanox ConnectX-4 single QSFP28 port), the memory in the first bank is ignored and it only recognizes 48GB.
I'm sorry, I can't reproduce this error, although I don't have 48GB DIMMs, just dual 16s. I have the same ConnectX-4 card and all my memory shows up no problem. The only memory-related issue I had was when installing a generic Realtek 2.5G card, which would cause the system not to boot with a memory error (3 beeps), but I saw other reviews for that card on totally different systems with the exact same issue.
PCIe slots and memory don't share lanes - the memory controller is separate and connects directly to the DIMMs.
Could you please let us know which Wake-on-LAN modes are supported by the NICs? Thanks so much!
Connect the Thunderbolt directly to a PC to get 20Gb/s. Something else I've always wanted to try is an InfiniBand adapter connected over Thunderbolt, but I'm currently waiting for the M.2 -> PCIe adapter.
I also just bought a Supermicro X10SLH-N6 for just 15€. It has 6x 10Gbit on board, but it's µATX. I guess it's also pretty nice for a NAS depending on your setup - if the motherboard area of your NAS is built like a 1U server, I guess you're in luck with that board because you only need 1 PCIe card anyway.
I'm still waiting for a manufacturer to make an all-flash U.2 NVMe NAS enclosure like the QNAP TS-h1290FX but more affordable. I have a bunch of 8TB U.2 HGST drives but nowhere to use them other than an old Dell server.
I don't think an affordable one will come anytime soon unless it's using older hardware. Simply because of a lack of PCIe lanes on anything other than high end server stuff. That being said, if you're willing to drop down to something like 1st gen EPYC, you can probably get a board and a bunch of adapters to break out all the PCIe slots into NVMe.
Mmmmm....3 x NVME slots: add 3 x NVME PCI-E Card Riser III...18 drives plus the 8 onboard SATAs then add PCIE SATA Card 16 Ports 6Gb SATA 3.0: 18 + 8 + 16: 42 SATA 3 drives. But wait, there is more: 4 external USB ports. That is a whopping 46 drives. IF you want more drives though, you can change the USB drives so each USB port connects to a dual HDD dock (dual 10 TB drives), which will then change it to 42 internal plus 8 external drives.....50 drives. Imagine the cost of 50 x 10TB drives
I use M.2 SATA controllers in this case, which comes with another problem: where is the M.2 port, and can I route a bulky SFF cable there?
There are 1-to-5 SATA port multipliers. They need SATA controller support (most or all modern ones should do), and all 5 disks will obviously share one SATA 600 link's worth of bandwidth. In many cases that's not an issue.
The thing with port multipliers is that they're often unreliable and require older controller chips not found on newer motherboards. Many even require Windows-specific drivers, so they won't work on something like TrueNAS.
@PeterBrockie the PMP functionality is a part of the SATA standard, so there cannot be any Windows-only PMPs. In Windows you may just need to install drivers for the SATA controller from the vendor (AMD, Intel etc) as opposed to Microsoft ones coming with the OS. As for reliability, indeed, certain kinds of hiccups on one drive may affect others. And hot-plugging one port may result in re-negotiation on all ports, which could cause issues.
It's an optional part of the spec as far as I know, so it can be a bit hit or miss. I've seen controllers that say they only support all their ports under Windows - maybe it's just a translation error. Haha
How would power consumption look for a build with this board and a 13500T?
For example:
the mentioned MB, 13500T, 32GB RAM, 2x 12TB Seagate IronWolf, and low-to-moderate workloads in Proxmox VMs.
Most modern Intel systems idle around 20W, and then you add for each hard drive - usually 7W or so.
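Those ballpark figures turn into a quick estimate for the build asked about. This is a rough rule of thumb only (the 20W and 7W numbers are the thread's estimates, not measurements) - measure at the wall for real numbers:

```python
# Idle-power sketch: ~20 W platform idle (modern Intel board + T-series
# CPU) plus ~7 W per spinning 3.5" drive. Both figures are assumptions.
PLATFORM_IDLE_W = 20
PER_HDD_W = 7

def estimate_idle_watts(num_hdds: int) -> int:
    """Very rough idle estimate; VM load and SSDs add on top of this."""
    return PLATFORM_IDLE_W + PER_HDD_W * num_hdds

print(estimate_idle_watts(2))  # 2x 12TB IronWolf -> ~34 W at idle
```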
I took the Chineseium route. Originally, I started out using a loader that will not be named and it served its purpose as a file server. Now I’m getting into encoding my ISOs and home lab applications so I need the horsepower.
The motherboard seems to no longer be available, and the Amazon reviews are wild. Why someone would want to build a tiny NAS, I do not know. I'll stick to a Meshify 2 XL case with a zillion 3.5-inch hard drives. 😂
this actually looks like the perfect board but you can get the same features just by going with an mATX board for sooo much cheaper.
CWWK has an AMD 8845HS embedded version of this. Although they're sold out most of the time...
This has driven me insane while looking for a motherboard for this.
ASRock Rack X570D4I-2T has 2 OCuLink ports that can be 8 SATA, plus 10GbE, 1x M.2, and 1x PCIe.
I guess, if you really, really want a "brand name" board, but you're looking at about twice the cost and ECC SO-DIMMs (if you want ECC), which aren't the most common things around. Also only one M.2 unless you want to start trading OCuLink ports for NVMe. The 10gig is always welcomed on ITX though. :D
@@PeterBrockie These boards also both have IPMI out-of-band management interfaces.
good overview
Do those boards come with an IO shield? if not, where can I get one?
@@saqueo1966 Mine did.
I watched the whole thing and understood everything except: why is it recommended to use an xx500 CPU and up, and not a 14100 for instance? What am I missing? Thanks
You need an xx500 (or higher) CPU to use the remote-control vPro features. If you don't care about 'em, use anything. Also, the 13500T is specifically mentioned by the board maker as the ideal CPU (supports vPro, low TDP, decent core count).
These Intel Celeron chips only have single-channel memory regardless of the two DIMMs. They just run in series, so the bandwidth is the same as with one DIMM.
According to Intel's own site the N5095, etc. are dual channel.
@@PeterBrockie Yeah, you're right. Just checked it. I confused the N5095 with the newer N95. My fault.
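The practical stakes of the channel-count question are easy to see from theoretical peak bandwidth: transfer rate times an 8-byte (64-bit) bus per channel, so two DIMMs sharing one channel gain nothing while a second channel doubles throughput. A quick sketch (theoretical peak only; DDR4-2933 as an example speed, and real-world numbers are lower):

```python
def peak_bandwidth_gb_s(mt_per_s: int, channels: int) -> float:
    """Theoretical peak: MT/s x 8-byte (64-bit) bus per channel -> GB/s."""
    return mt_per_s * 8 * channels / 1000

print(peak_bandwidth_gb_s(2933, 1))  # 23.464 GB/s single channel
print(peak_bandwidth_gb_s(2933, 2))  # 46.928 GB/s dual channel
```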
What intel board would you recommend with a 16x pcie slot w/bifurcation (4x4x4x4) and 2 8x pcie slots?
I think you're going to be stuck going to HEDT (like an older X299 board) or a server/workstation chipset. Intel and AMD don't usually have that level of bifurcation support on consumer stuff.
Is the H670 also limited to the xx500 chips? Or will a 12400 work with the H670?
Any CPU will work in either. You just need an xx500 or higher if you want to use vPro on the Q670.
Thunderbolt accessories are still absurdly expensive. What a failure of a protocol.
Actually, 4 SATA + 2 M.2 + 1 PCIe is enough; the thing is, you can't even find these ITX mobos.
The true problem is NO SAS ports.
I think the issue is just cost for SAS ports. Broadcom is basically the only one making anything SAS. I'm sure the cost to get their controllers for integration to a motherboard is absurd.
There might be a creative way to reuse the super common 9xxx-8i controllers, like putting a second horizontal x8 PCIe slot in addition to the normal x16 slot to fit a second card on an ITX board.
Plus SAS isn't super popular in home labbing. I suspect most used SAS drives from enterprise clients are destroyed after use rather than sold used.
@@PeterBrockie SAS ports are cheap, especially considering Intel chipsets support SAS natively. The connectors are no longer expensive either, with NVMe drives sharing the same ports as SAS drive breakout cables. The only drawback is the motherboard vendor has to enable SAS in the chipset via the BIOS, like Thunderbolt.
@@michaelcarson8375 Which chipsets support it other than their server/workstation ones? As far as I know no consumer chipsets support SAS.
@@PeterBrockie There are some X99 boards and I could have sworn I saw an X299 board, but none of those are ITX.
Do you count Intel Atom ITX boards with SAS ports as server/workstation? I don't, and you can just search "atom ITX SAS", though some of those have extra chips. There are MANY Atom ~31W NAS-style boxes that handle 4-8 drives that I would not consider workstation class; they're IoT at best. Intel called those chips the C3000 SoC. The C chipsets like C602 are identical to the desktop chipsets in every way except they have unbuffered ECC enabled - they even support Celeron, i3, i5, etc. desktop chips and unbuffered memory. I know of ATX boards with SAS from Intel chipsets, but none for ITX. The closest modern board I found used the Intel H770: look up the Maxsun Terminator H770 YTX D5 Wi-Fi, though it looks to only support SATA.
ECC is most important if running a nas. I'd rather have SATA drives on my home/home office NAS instead of SAS if I had to choose.
Is there a way to have discrete graphics and a 10Gb PCIe LAN card on an ITX motherboard?
Not really. You're so limited on space with ITX that it's basically impossible to fit anything other than an iGPU or a really, really old GPU. Best bet is the fastest AMD G-series APU you can find.
yup. pretty much.
I got an ITX board with 3 M.2s. I think that's your best bet if you want to use a beefier CPU.
Use an M.2-to-SATA adapter.
Thanks. Nice video.
Is a mobo design possible that just exposes all the PCIe lanes? Surely that's the holy grail. I'm rocking an ASRock Z87 Extreme4, 32GB RAM with an i7-4770K or i3-4130, still playing around. 4 HDDs and 2 SSDs hitting 40 watts with TrueNAS. Got 3 PCIe 3.0 x16, 2 PCIe 2.0 x1, and 2 SATA ports spare. My view is old HW is king for NAS setups. But I'm still a complete newbie.
Yes and no. There are limits on how a CPU can split its lanes out. For example, you generally can't take all 20 lanes of a CPU and split them into 20 x1 slots. However, you can take an x16 slot and add a PCIe splitter to break it out, but those are only made by a few companies and are generally really expensive (especially PCIe 5.0).
There are high-end desktop boards out there with seemingly more PCIe slots than you should be able to get away with on desktop, and they usually take the chipset PCIe lanes and run them through a PCIe splitter. So you're still limited by the link between the CPU and chipset (usually x4 PCIe).
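The rigid-splits point can be illustrated with a tiny sketch. The pattern list below is typical of consumer x16 root ports, not taken from any specific datasheet:

```python
# Consumer CPUs allow only a few fixed bifurcation patterns on the x16
# root port; anything finer needs a (pricey) PCIe switch downstream.
VALID_X16_SPLITS = {
    (16,),
    (8, 8),
    (8, 4, 4),
    (4, 4, 4, 4),
}

def supported(requested: tuple) -> bool:
    """True if the slot layout is a permitted bifurcation (order-free)."""
    return tuple(sorted(requested, reverse=True)) in VALID_X16_SPLITS

print(supported((8, 8)))     # True
print(supported((4, 8, 4)))  # True - same split as (8, 4, 4)
print(supported((1,) * 16))  # False - no x1 fan-out without a switch
```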
why can't you replace the wifi card with a SATA adapter or plug the SATA adapter into the m.2 slot?
Of course you can; to get to the WiFi slot you need to unscrew the heatsink.
I have a server at home in a Jonsbo N3: an Asus ROG B550-I and Ryzen 7 5700G with an M.2-to-5-SATA adapter, which gives me a total of 9 SATA ports, and instead of a WiFi card I installed an adapter for a second 2.5Gb LAN port.
I solved the problem of only one PCIe x16 port with bifurcation and used an adapter from x16 to x8/x4/x4, which gave me two M.2 NVMe x4 slots and one PCIe x16 slot at x8 speed, to which I added a low-profile Intel Arc A380 GPU.
So in total, not counting the adapters, the Jonsbo N3 has room for 12 drives: 8x 3.5-inch, one 2.5-inch, and three M.2 NVMe.
Seems to me PCIe configurations have gone totally sideways ever since Gen 5... Most mobo OEMs are SO worried about having it that we get crap slot support (even ATX boards are usually just two x16 "size" slots that drop to x8 speed the moment you populate both, plus maybe two x1 that aren't much use to most people anyway), despite the fact that there is essentially nothing in the consumer market that can utilize it (no Gen 5 GPUs, just Gen 5 M.2 NVMe drives whose improvement consumers will NEVER need/notice).
I wish ALL mobo OEMs would just park the BS Gen 5 hype, stop the STUPID "armor", quit putting all the extra M.2s on the back, and go back to using topside real estate for EXPANSION SLOTS like we're not all morons.
Does the h670 support ECC by chance?
Only the W680 chipset supports ECC on LGA1700.
Does it support ecc memory?
Goods stuff, subbed
They could have boosted the power delivery and made a proper pro board.
I think they just hit space limitations for the VRM. I'm sure there's a way to make more room, like stacking the M.2, dropping audio, etc. But they probably figured it was just a NAS. Haha
Oof, that X570 ITX/TB3 has a very garbage chipset cooler and a terrible placement of the M.2 slot (on the backside). The BIOS is garbage too... how the heck did they get Intel to "certify" a board where Thunderbolt devices don't work at all during boot, and where hotplugging a Thunderbolt device fails to allocate any resources for the device?
It seems like none of these ITX boards support ECC. Looks like I have to move up to micro-ATX for that kinda support. I'm guessing you don't care too much about ECC in your builds?
There are a couple of options for ECC in ITX form with lots of SATA ports, but they are either expensive or use sketchy Jmicron controllers, etc.
You can track down some of the Supermicro boards with Xeons - many have lots of SATA and ECC support. However you're going to pay a lot and have older hardware.
Most (but not all) consumer AM4 boards support ECC. But pretty much none have 8x SATA ports. So you're going to use your one PCIe card for a SAS controller or M.2 to SATA controllers.
Some Intel boards support in-band ECC. Meaning they eat some RAM for error correction and turn normal RAM into ECC. Sadly, few boards support this feature and it's limited to select 11th gen or higher CPUs. The N100 supports it, but BIOS support is required.
Personally I am not concerned about ECC. It's nice to have, but as far as I know I haven't lost a single thing to bad RAM in my 30-ish years of using computers, and that's with storage in the hundreds of TB at home. Generally if you have bad RAM you're going to see obvious signs before anything serious happens - random crashes, etc. As long as you're backing stuff up and keeping on top of things like data scrubbing, it's a non-issue for me.
I have been eyeballing this board for a while, though I have been holding off since Intel's booboo with the crashing 13th and 14th gen CPUs, and they are not exactly forthcoming regarding which CPUs are affected. I'm actually hoping they would do this board with AMD processors instead. And to be honest, I'd rather have an AMD 7600 than an Intel i5-13500. Power consumption actually matters where I live, it's NOT CHEAP.
My 13500T idles at 20W with multiple SSDs and fans, etc. It turbos to 90W all-core, then drops to 50W.
So it isn't too bad.
@@PeterBrockie TY for the answer. 20W at idle is more than fine actually. That's how it's going to live most of the time. It's gonna be a NAS/Plex server for the most part anyway. Though I do plan on running an ARK server; if it wasn't for that, the N305-type boards would be more than fine I think.
@sprocket5526 I don't mind the N series, but I think the N305 costs way too much for 8 E cores. A NAS board with the N305 is usually around $300. For a little more you can get this board with WAY more expansion, and a 13500(T or normal) and have 20 threads for actual VM work.
There are M.2-to-6-port SATA cards for $39.
Can it use the i5-12500T?
Yep. Although if buying for the build and budget isn't a problem, I recommend getting the 13500t as it has way more cores.
Supermicro A2SDi-H-TP4F
The last one of those which sold on eBay went for $1500. For a motherboard with a 7 year old CPU. :D
It's not a bad board or anything, you're just paying a ton for something which has the same PCIe limitations as a lot of these boards. Only a single x4 PCIe 3.0 slot, single M.2 at x2 PCIe 3.0. The 10gig is nice though, I wish more boards had it these days.
@@PeterBrockie Wow, maybe I should sell mine.
There's also:
A3SPI-8C-LN6PF or A3SPI-4C-LN6PF
I sold off most of my Supermicro stuff simply because in general I could get a "better" consumer board with the money and have money left over. :D
Okay, that's a cool fricken board. Hope it lasts you. Only problem you have now is that TrueNAS doesn't support modern hardware lmao
Works on everything I have. TrueNAS Scale has better hardware support, which will only get better over time now that they've stopped Core feature development.
Too bad those boards are impossible to find in Europe.
AliExpress
Shift the conversation to mATX and all your concerns go away.
I will say that if I look at this gem you found here, I might get what I need and save $300-500.
With mATX you can get a ton more stuff though: a big NIC, more drives, multiple PCIe 4.0 x4 NVMe slots, etc.
It's kinda funny how few mATX options there are for the newer platforms (AM4, AM5, LGA1700) with 8 SATA ports. Obviously you're getting more slots so you can just toss in a card, but for native motherboard SATA it's just a handful of AM4/AM5 boards which are currently available with that many ports. It's a case of either going with older gen stuff or these specialized NAS motherboards coming out of China.
@@PeterBrockie absolutely. That's where I'm at now unless I go up to an ATX.
Is there any reason an LSI 8 drive pcie card is less useful than ports on the motherboard? I love that cooler you've got dude.
@@ckckck12 Not unless you're trying to keep costs or heat/power down. The LSI cards do pull decent wattage when you're talking about a board which pulls under 20w. Plus they indirectly add noise since you -need- to cool them with either an attached 40mm fan or high case airflow.
That being said, they work well and are cheap now. Plus with a cheap SAS expander you can add dozens of drives for under $100.
@@PeterBrockie Ugggh. I am trying to keep it low. Electricity is cheap here, but I'm just trying to build a NAS that will stay relevant for 8 years or so... aiming at 35W TDP stuff. I have a Synology, but from their show in Singapore this week I can tell they're walking away from small-time users. I don't want a 2019 processor in 2030, and I doubt they're putting out a hot modern DS1825+ this fall. I swear you can't win these optimizations. Thanks for being helpful.
@ckckck12 Agreed. Given that ITX's primary market isn't NAS, board space for SATA is a lower priority. mATX isn't much of a compromise.
Curious if anyone has actually gotten this board properly configured with a 13500T and 5600 MT/s DDR5 RAM… the BIOS is probably the most convoluted I've ever configured, so I totally could be getting settings wrong. But right now my CPU is artificially being power throttled even though CPU temps are only 38 degrees C. Can't figure out how to increase the power limits. RAM is stuck at the stock 4800 MT/s even though I've applied XMP for 5600.
The CWWK web site has some support resources but it’s all in Chinese and poorly organized. Really impossible to find drivers… :/
The BIOS just has nothing hidden so it has a billion confusing options. This is pretty common on boards like this. It's one of those things which is both good and bad. Lots of control, but you gotta google half the items in the BIOS. :D
The board by default has some pretty strict power limits (I think they actually obey Intel ironically). For example my system will pull 90W during an all core load (13500T), but drop to 50W shortly after. It looks like there are options to enable keeping it at full turbo, but I didn't look into it since the VRM is pretty wimpy and I'm using it as low power VM system.
The memory settings are a bit weird, at least with the sticks I have. The XMP profile doesn't seem to actually have the right speed. I just selected it anyway and picked the correct speed under "Maximum Memory Frequency" or whatever it was called.
Update:
The memory speed control I found is located at: Chipset -> System Agent -> Memory Configuration -> "Maximum Memory Frequency"
As for Turbo: on my CPU, PL1 is 35W, and you can find PL1/PL2 under Advanced -> Power & Performance -> CPU -> View Configure Turbo Options. Before you go into the Turbo options, under CPU make sure Boot Performance Mode is set to Turbo Performance - and also check your OS to make sure it isn't running a power-saving energy profile.
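If you're on Linux, you can also double-check what PL1/PL2 the firmware actually programmed without rebooting into the BIOS, via the intel_rapl powercap interface. A rough sketch (assumes the intel_rapl driver is loaded; the sysfs paths are the kernel's, the function names are mine):

```python
from pathlib import Path

# Package-level RAPL domain; assumes the intel_rapl powercap driver is loaded.
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def uw_to_w(microwatts: int) -> float:
    """The kernel exposes limits in microwatts; convert to watts."""
    return microwatts / 1_000_000

def read_power_limits(domain: Path = RAPL) -> dict:
    """Read the long/short term limits (constraint_0 ~ PL1, constraint_1 ~ PL2)."""
    limits = {}
    for constraint in ("constraint_0", "constraint_1"):
        f = domain / f"{constraint}_power_limit_uw"
        if f.exists():
            limits[constraint] = uw_to_w(int(f.read_text()))
    return limits

if __name__ == "__main__":
    print(read_power_limits() or "intel_rapl not available on this system")
```

On a 35W PL1 part you'd expect constraint_0 to read back as 35.0 if the BIOS setting took.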
@@PeterBrockie So a few things I've been able to figure out that greatly helped my sanity.
1. The 13500T doesn't support RAM faster than 4800 MT/s, so that explains why I couldn't set the XMP profile.
2. The CPU is limited to a 35W TDP, but I was expecting it to boost to a higher power draw temporarily under load. I was expecting my CPU fan to ramp up, but that never happened. I did confirm the system was not being limited by measuring power draw at the wall.
3. Finally found a setting in the BIOS that allows the processor to run at expected clock speeds under multi-core workloads. It was literally hours of trial and error and testing, but good to know it's working now. Figuring this out was the saving grace between keeping and returning the board.
I guess with great power comes great.... patience. I do think hardware-wise, it's a pretty versatile NAS board.
The only bummer is I'm not a big fan of these random-brand boards. They offer cool and weird features, but you've just got to wonder what's hidden in them. Granted, Windows is basically malware now, so I don't know why I'm worried. 💩
There's not much more than on other boards, considering the chipsets are Intel/AMD.
Where would it be hidden? I mean, the bios flash chips can be read out and a hidden device will show up to the os as something (storage, keyboard, or whatever).
In fairness, CWWK is a brand I would consider a name brand in this space, along with Topton, etc. They've been around for a while.
@@PeterBrockie beats me. Ironically I think Asus got in trouble for having malicious stuff on one of their boards. With these more sophisticated UEFI BIOS boards I think there is more they can do. Especially when most good boards can phone home for BIOS updates.
Yeah, it is ASUS. They store their driver installer in the BIOS (or maybe it's a system that just passes a download request to Windows). I'm just sayin', something like that would be found pretty fast, I suspect.
@@PeterBrockie probably. All my infosec friends tell me all I need to do is watch network traffic and write a firewall policy. I'm lucky I'm able to use Wireshark to figure out unknown static IPs of devices though. 💩
Great video! I've been researching NAS ITX motherboards myself recently, and I just stumbled across this video: th-cam.com/video/R41_ZW4nMR8/w-d-xo.html
Looks like a new revision of the Q670. Looks interesting, but I haven't been able to find any more information about it anywhere. Opinions? Please let me know if you find a decent place to buy one.
Also, what's your opinion about those sata controllers you mount in an m.2 slot?
I've seen that video and motherboard, but despite it saying it's available... it doesn't seem to be. I'll pick one up for review if it isn't crazy expensive. Looks like an interesting board.
M.2 SATA falls into the same recommendation as PCIe SATA in that they tend to be unreliable as they use cheap controllers instead of chipset SATA.
That being said, they do work for lots of people and are cheap, so if you're willing to take the risk and maybe pick up a spare if one dies, they can be a really good option for boards with limited ports.