Get started with Notion, sign up for free or unlock AI for $10 per month: ntn.so/WolfgangsChannel
*Corrections:*
- At 14:40, the N100's iGPU is not just 'playing back' the AV1 content, it's transcoding it to H.264
- Intel i3-6100 is a dual core processor, not quad core (15:25)
*Links:*
Asrock N100DC-ITX geni.us/ECsSoNr (Amazon)
Jonsbo N2 geni.us/8rpN (Amazon)
ASMedia ASM1166 M.2 SATA Controller geni.us/FL52EAe (Amazon)
Trendnet 10Gbit SFP+ Adapter geni.us/wASn1 (Amazon)
DC Jack to block terminals adapter geni.us/CFMe (Amazon)
4-pin ATX extension cable geni.us/Q26G (Amazon)
PicoPSU geni.us/nX6uO (Amazon)
Sharkoon SilentStorm SFX Bronze 450W geni.us/wKg3i3 (Amazon)
As an Amazon Affiliate, I earn from qualifying purchases.
Hey, did you get the N100M, and which case did you use for it? I'm looking for a small case, ideally some kind of mini-ITX, that has space for the N100M when used with a PicoPSU
Can you please do a video on how to make existing hardware more efficient? Your videos are nice, but keeping your old stuff is cheaper, and some people can't afford to buy a new motherboard every time you do a video. Very informative video
To improve power efficiency you need to reduce the distance between transistors on a chip, so there is less electrical resistance between them. Typically this is done by shrinking the manufacturing node from, say, 14nm to 7nm, so you'd need to buy a new chip to take advantage of that. But what big tech doesn't want you to know is that you can shrink the distance between transistors yourself by just pushing really hard on the sides of the chip. My trick is to take my CPU, put it in a vice and then squeeze as hard as I can. Using this method I've turned a 28nm Xeon v2 into a 7nm Epyc 7002 (although the pressure did make some transistors pop out of the side).
Looks awesome. Explaining Computers did a really nice mini PC build with an N100 board, though I think it was different to this one. Also, Supermicro has an awesome, relatively low power (for the number of cores etc.) Epyc ITX board. That one is great if you want more lanes, and it should last forever. Obviously, it is more expensive......
I built a desktop with LMDE running on an ASRock N100M with an NVMe SSD and 16GB of Kingston memory, almost like Explaining Computers did, except I use HDMI output to an LG TV. Everything would be fine, but to my huge disappointment the system tends to freeze or reboot spontaneously. I have tried Debian as well as almost all available Debian-based distros with different desktop environments, and the Fedora desktop, with no effect. Finally I ended up with LMDE, which seems to be more stable on my hardware. As for working as a remotely controlled NAS or home server, the N100M mobo worked with no issues at all. It seems the problem is somewhere in the video output/drivers.
Great stuff. I was asking in the comments about the Mellanox ConnectX-3, which is terrible. My server needed more horsepower, so I went with an i3-14100. Bought an Intel X710 (dual port) for 100EUR with 2 transceivers; it goes down to C8 and 10G speed is always there. The server is just one thing, though. I've got a rack with 2 switches (8x 10G SFP+ and 8x 1G, 4x PoE+), a shut-down server with IPMI and 18 cores, 1x router, 1x WiFi 6 AP, 1x camera, the i3 server above, and a UPS. Previously, with the server based on that 18-core Xeon, total idle consumption was 136W; it went down to 69W with the i3, which is still a lot. So there are also unoptimized things like the camera (15W!), the UPS (not measured) and IPMI (5W), and the networking could probably also do better.
I migrated my home server from an old Atom N280 to the N100DC a few months ago, and the performance gain is impressive to say the least. Since I don't hoard that much data, two USB3-attached SATA HDDs for the NAS part were sufficient. The box is hosting my web and mail server, a dockerized Home Assistant and a small Minecraft world, all at low double-digit % load and under 10 watts of total power. I don't really get the point of putting a SATA controller into the M.2: four USB3 10Gbit ports provide plenty of connectivity for spinning rust, and I'd much rather put a big NVMe drive in there for the stuff that needs to be really fast.
@@-.eQuiNoX.- I thought 3.1 was 5Gb and 3.2 was 10Gb.. and the Gen 2 variants were double that each.. [insert lengthy rant about stupid USB naming conventions], but still enough for the ~150 MB/s that the best spinning rust can provide.
@@WolfgangsChannel hmmm. i've been running my NAS off USB drives for almost two decades now without issues, but I guess that mileage will vary by manufacturers and usage.
From a tinkerer's point of view this might be a nice solution, but I would still consider this M.2-to-SATA card a workaround. However, the Intel N100 seems to be the perfect fit for an energy-efficient NAS. The first turn-key solutions are already around the corner, and I will wait for one of those, since I prefer to tinker with software over hardware. 😁
It doesn't support AV1 encoding on the integrated GPU, but you can still encode AV1 with SVT-AV1 on the CPU using ffmpeg. If I remember correctly, encoding on the CPU gets better quality, despite taking longer to encode.
For a NAS like this, AV1 decoding is probably more important than encoding. If you have AV1 files, you want to be able to transcode them on the fly to e.g. H.264 or HEVC to stream to a TV or media player box.
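For anyone who wants to try it, here is a rough ffmpeg sketch of both directions, assuming your build has libsvtav1 and VAAPI support and the iGPU shows up as /dev/dri/renderD128 (file names and quality settings are just placeholders):
# CPU-only AV1 encode with SVT-AV1 (slow, but runs on any box)
ffmpeg -i input.mkv -c:v libsvtav1 -preset 8 -crf 32 -c:a copy output_av1.mkv
# on-the-fly transcode of an AV1 file to H.264 on the iGPU via VAAPI
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input_av1.mkv -c:v h264_vaapi -b:v 8M -c:a copy output_h264.mkv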
@12:41 Wait a minute... So these power consumption numbers are with the Bronze-rated SFX power supply because you hooked up HDDs? Or did you use the Pico power supply? It's not very clear to me. And can someone tell me if I can truly pass through the 6 SATA ports on the NVMe board to a VM in Proxmox? In other words: does the NVMe slot share its IOMMU group with other devices? If so, which devices?
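If it helps, this is the usual way to check the grouping yourself on the Proxmox host (just a sketch; it assumes IOMMU is already enabled on the kernel command line):
# print every IOMMU group and the PCI devices that share it
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "    $(lspci -nns "${d##*/}")"
  done
done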
I'm running one for my OPNsense FW with a dual 2.5G NIC. For some reason it kept crashing; the culprit was the DRAM voltage being too low on a 3200 16GB stick. The XMP support in the UEFI is kinda weird. Anyway, for 24/7 operation in a case I slapped on a little fan for airflow. Now it runs with no problems.
Maybe OK with the ITX form factor. But if you want/need something smaller and more energy efficient, go with the ODROID-M1. It has PCIe 3.0 with 2 lanes and can handle the same JMB585 etc. adapters in its NVMe slot without issues.
Nice video! It would have been interesting for you to test real-world network file transfer speeds with TrueNAS, etc., as that's what really matters to most of us.
Thanks for the interesting video! I was wondering at the beginning why you didn't use the "ASUS Prime N100I-D D4", but if you need the higher PCIe speed for a 10Gb network card, it's clear again. Presumably the external power supply of the ASRock board also needs less power than the ATX/SFX power supply of the Asus board, right? I have seen a very similar build proposal at "Elefacts" and am still thinking about it.
Very interesting build but I would warn that running power wires through grounded holes in thin metal is a fire hazard. The drives cause substantial vibration, and over time this will degrade the insulation. Rubber grommets are advised.
I'm really paranoid about ECC and use it whenever possible. I have 5 servers here, with a few hundred terabytes of storage between them, and I upgrade machines fairly regularly. I always check the logs, and have never seen a bit flip on those machines. Over the last 13-14 years of running multiple servers, I've only ever had one machine that would register ECC errors in the logs. I *ALWAYS* run ECC on anything mission critical, but I think really it probably doesn't matter. Unless you're living in an area with really high radiation (like a basement with radon) or a noisy electrical environment I don't think it's necessary at all. It's more of an indicator that you have failing hardware that needs to be replaced, rather than something that serves a purpose in day-to-day operation. If you use a checksumming filesystem it can probably catch most of the data problems without the additional layer of safety from ECC.
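For anyone who wants to check their own machine: on Linux the EDAC subsystem exposes the corrected/uncorrected error counters in sysfs, assuming the platform actually reports ECC events to the kernel and the edac driver is loaded:
# corrected (ce) and uncorrected (ue) ECC error counts per memory controller
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count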
I have a similar setup. I am running 6 HDDs and 2 SATA SSDs off a Lenovo P330 Tiny, using an HBA card. The Lenovo mini PC uses a 20V power supply, and I have a 650W ATX Gold PSU powering the drives. What I did was cut up an old USB cable, connect the power leads to a 5V relay module and connect the N.O. contacts of the relay to the PSU's jumper pins. This lets me power everything on and off together. 😁
I recently started experimenting with a bare M920x board (they could be configured with a dedicated graphics card using the custom PCIe slot) and soldered to exposed PCIE_12V and 5V pads. The PCIE_12V line should be able to source at least 30W - should be enough for 2 to 4 drives (maybe more) depending on their startup characteristics.
@@antoine6521 I am using an M.2 WiFi to NVMe adapter board (one with a long FPC cable between them) and an ASM1166-based M.2 to SATA card. When I write a short report on it I will try to remember to link it here :)
5:23 Yes - 2A when spinning up. But the rating of that JST plug is for continuous current, not peak, so it would probably be fine. Still a janky solution, though.
If ONLY it had 2 M.2 slots... we could have both the additional SATA ports AND an NVMe cache drive for max throughput on the NAS front when using 10GbE SFP+. Also, man, you just dropped that new AQC100 chipset on us as if it was "no biggie" when you know so well how we've been waiting for this for ages haha. Love it though, thanks a bunch. Visiting the EU soon so I'll use your links
I bought an AIMB-275 motherboard with an i5-6600 for my NAS build. I'm Polish, so I bought them on Allegro, and they were really cheap. With 1 HDD and some sh*tty eMMC drive I had 4W idle power consumption. I think you're able to shop on Allegro since you live in Germany, so you should definitely try this.
@@PLTorontoAle Why would I check the power draw at full load when the server sits unused 90% of the time? The more important number then is the aforementioned 4W at full idle.
9:29 I copied the code, but running lspci still shows aspm=disabled and it still can't go below C3. Different ASRock mobo, but the same issue, and the Ethernet controller looks the same. I ls'd the driver's folder and it also has l1_aspm. Is it supposed to keep showing aspm=disabled even after I run the sudo tee? Appreciate the help!
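Not sure about your exact board, but two things might be worth checking, assuming your NIC sits at 0000:01:00.0 (substitute the address from lspci on your system):
# global ASPM policy; if this is stuck on [default] or [performance], per-device L1 may stay off
cat /sys/module/pcie_aspm/parameters/policy
echo powersupersave | sudo tee /sys/module/pcie_aspm/parameters/policy
# per-device toggle (the l1_aspm file you found), then verify the LnkCtl line
echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/link/l1_aspm
sudo lspci -s 01:00.0 -vv | grep -i aspm
# if it still shows "ASPM Disabled", the BIOS may not be handing ASPM control to the OS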
Yeah, I think you've nailed the weakness of this board: I wish so much that it had either 6x SATA or 2x M.2. On my NAS I love to have 2x SSDs as a cache and half a dozen SATA disks, but there's no sensible way to do that with this board. I guess I could use the M.2 to SATA board you use and a PCIe x4 to 2x NVMe card? But that seems like I'd be relying on adapters more than I'm comfortable with.
Yep. And since this board doesn’t support PCIe bifurcation, you’ll have to go for an NVMe adapter with a PCIe switch. Which would cost more than the board itself. By the way, CWWK finally brought out an N100-based NAS board with the ASM1166 controller (as opposed to JMB585): www.aliexpress.com/item/1005007001584335.html
I think if you're pursuing a cheap build, you can buy an Intel 82599EN-based network adapter; it's usually branded as the X520-DA1, and this motherboard should fit an x8 PCIe card without any problems.
Passive cooling is a relative concept: a passively cooled computer in a room with AC isn't really passively cooled, the AC is picking up the slack. And if you live in a tropical country near the equator, your passively cooled computer from Europe will get hot and require a fan, even with AC in the room, because our ACs don't have cool outside air to work with and can't do much.
Hey, thanks for the nice video. One question about the PCIe power management (at 4:18): what is the problem if the motherboard/SATA controller does not support it?
@@joels7605 This is still only for people who know what C-states mean; many don't know what this concept even is. And I don't blame them, C-states aren't really a focus in most hardware channels, reviews, etc.
Thanks for the great video. The board is really sweet, and the idle power consumption is amazing. It's a shame that there are not more PCIe lanes to go around: there are simply not enough lanes to get SATA 6Gbps speeds while also having 10Gbit networking. That might be fine for spinning disks, but I am still looking for the 'perfect' 6-8 disk, all-flash NAS. It's a real shame that all those efficient Fujitsu boards are incredibly hard to come by.
Basically a ZimaCube, but DIY. For clarity - I'm not saying that this build is trash, I mean that the ZimaCube IS trash because of the marketing that's trying to sell us the Pro version with a much beefier CPU. This build is actually THE good one.
Great video 👍 9:26 -- ASPM (active state power management) setting, thank you. Kindest regards, friends and neighbours. P.S. Please do *_not_* do a UGreen NAS video.
I had a problem with my Asus N100 and HEVC playback
ASUS H170i Pro mini !!!! Please! As a home server and as a router
No.
If I manage to get a 24-pin ATX to 4-pin ATX adapter, or 24-pin to 8-pin and then 8-pin to 4-pin, will that work? Would it be safer than just hacking the 24-pin to jump the power?
Why not full SSD build? It seems way simpler to power, lower consumption, quiet, smaller & possibly faster. Is it just HDD/SSD capacity vs. price consideration, or am I missing something?
6:52 The electrically and mechanically safer option is to crimp the two pairs of cables together with a two-wire ferrule each, before screwing them into the barrel jack (or rather its terminal block). But that's just a nit; I think your block is rated for multi-strand wire because there's a tiny metal plate between the screw and the cable, so it won't damage the strands. Still better crimped.
Major hiccup for me (network engineer and electrician) too haha, glad I'm not the only one
Not a problem while they're nicely aligned in parallel, so that the clamp inside presses both down equally.
@@MartinZeitler Nah, it's fine wires, not one big one. You always have to put a crimp on them, otherwise it's a fire hazard, especially if it's higher current!
How do I know what is positive and what is negative? Can I tell somehow from the plug? I'm afraid of breaking something :(
@@einfachmanu90 Just image-search for something like "4-pin ATX cable pinout"; it will tell you which wire is connected to which pin on the connector. If the wires are colored: black is ground, yellow is 12V, red is 5V, usually. You will want to double-check and measure with a multimeter if you can. A standard barrel-jack connector is center-positive, but the brick should tell you that, too.
I use the N100M as my main working PC. With 32 GB RAM and Debian 12 it is more than enough for programming with Node.js. With the low power consumption I can power it entirely from solar (island system). And best of all... silence... I use two case fans, but they only run at full CPU load. This is the first low-power mainboard where I can configure the case fans for 0% at low load. I love working on a completely silent PC :)
I've learned a great deal from your videos, and I've been a UNIX/Linux systems administrator for 25 years (back to the days when I had to download a new kernel image over a 14.4Kbps modem and compile it - hours! - just to get a 3Com Ethernet adapter working). Keep up the good work.
Those screw terminal barrel jack connectors are usually only rated for around 600mA, so I'd recommend to use a soldered connector instead.
Shouldn't that be okay since the board claims to only use something like 6W?
@@desklamp4792 If the 12VHPWR connector disaster taught us anything, it's that you shouldn't drive power connectors up to their specified limits in regular production.
iperf3 before version 3.16 is single-threaded and can't saturate 10G by default. In order to saturate 10G you have to run multiple iperf3 server processes: iperf3 -s -p 5101 & iperf3 -s -p 5102 & iperf3 -s -p 5103 & and run multiple clients: iperf3 -c hostname -T s1 -p 5101 & iperf3 -c hostname -T s2 -p 5102 & iperf3 -c hostname -T s3 -p 5103 &
Yes, 100%. A jumbo MTU might help here too, but the issue is more likely that his PCIe 10G SFP+ card can only do PCIe 2.0, which limits it to roughly 2x 4Gbit/s of usable bandwidth on two lanes.
@@erikmagkekse Video shows it hitting over 8Gb/s at 11:17. Also he claimed it has a new chipset.
@@5467nick But even the latest version (TEG-10GECSFP v3.0R) is only PCIe 2.0, which would be a bottleneck if you only have two lanes. However, the RJ45 version (TEG-10GECTX v3.0R) has a PCIe 3.0 link, so two lanes would be sufficient to saturate the 10GbE network. This info is directly from the Trendnet homepage.
You might also set the CPU affinity of the iperf3 processes, so that each one is pinned to a different core (if there are as many cores as processes).
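Putting both tips together, a rough sketch of what the parallel test could look like (nas.local and the port range are placeholders; iperf3's -A flag pins each process to a core):
# on the server: one listener per core
for i in 0 1 2 3; do iperf3 -s -p 520$i -A $i & done
# on the client: one stream per listener, pinned to the matching core
for i in 0 1 2 3; do iperf3 -c nas.local -p 520$i -A $i -t 30 & done
wait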
I got the N100M exactly because I would've needed to hack a standard PSU to power the DC version, and it has significantly less room to add cards. I think it's a pretty nice platform to work with, really power efficient and flexible. About the SSDs: I would advise against splicing in more connectors from the onboard JST connector because, even if they use less power, they pull from 5V instead of 12V, so you're using a rail with less current available than the 12V one. It's also weird seeing it not idling down to C10; my N100M with TrueNAS Scale goes to C10 and the NIC is not giving me that issue. Last but not least, if cooling is sufficient, you can bring the short and long TDP to 30W and get better transcoding performance, since the long TDP on the board is just 10W.
I have a few N100 boards and all show over 96% in C10 state - the worst being the Proxmox server.
Edit: I think I might know why his CPU didn't go to C10 state. I just read the 12th Gen processor datasheet and it says that the dependency for that is "Display in PSR or powered off" - none of my servers have a display attached and powertop even says GPU "Powered On 0.0%". It might just be that in his case, he had the iGPU busy, which doesn't allow it to go from C8, to C10.
Well no consumer SSDs use the 12V rail on SATA molex connectors... So your reasoning there for not using the connector is not really valid.
He said "the only case in which I would recommend this solution is if you're building an SSD only NAS, since SSDs need less power than hard drives". Power is a product of voltage and current. If the connector can only output 2A per pin and there's only one 5V pin the connector supports a maximum of 10W. Which is not a lot of power if you wanna use more than two consumer SSDs. Usually SSDs are rated to use 1A, so even when using SSDs you're limited to two drives unless you want to risk it.
And, if you just didn't know, that connector carries 12 and 5V but only 5V is used for SSDs.
got myself the same one and I'm pleased with it, too
How low in power consumption can the N100M get in the C10 state?
re: low networking throughput from the NIC
the AQN-100 supports up to 8 parallel queues ( AQ_CFG_VECS_DEF ) to help balance interrupt handling / work over multiple CPU cores [even while the workload is only a single TCP connection]; it might be a little hobbled by only being on 4 kinda slow cores since it can't fully take advantage of the existing hardware parallelism
a similar TDP processor with 8 real cores to service hardware interrupts in parallel might achieve better throughput, even if the individual cores were somewhat slower
also double check what congestion control algo is being used, on a direct connection over 20gbit fiber, it can change my results with iperf3 from 3gbit to 19gbit, I mostly use bbr
Do you know how many parallel queues a X520-DA2 has?
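You can check that on the machine itself with ethtool, and the congestion control setting with sysctl; a rough sketch, with enp1s0 as a placeholder interface name:
# how many RX/TX queues the driver exposes and currently uses
ethtool -l enp1s0
# raise the combined queue count, if the driver allows it
sudo ethtool -L enp1s0 combined 4
# try BBR as the TCP congestion control for comparison
sudo modprobe tcp_bbr
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl net.ipv4.tcp_available_congestion_control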
Awesome video dude. Thanks to your tip my N100 NixOS home server is finally running at C8 and not C3 like before.
Hey, the barrel jack adapter does not have spring-loaded contacts, so you want to crimp your wires before screwing them in. If you don't, they can come loose during operation, and that is a fire hazard.
How do I know what is positive and what is negative? Can I tell somehow from the plug? I'm afraid of breaking something :(
@@einfachmanu90 It should be documented in the manual. Usually outside is Negative (GND), but i have seen the other configuration as well...
I migrated my old NAS to an N2 with my old N6005 board. All I can say is that I love it (now). It's purely a NAS running nothing other than TrueNAS Core. I've not even considered an upgrade to N100/N300 as I do not believe it will offer any additional benefits. Great video, otherwise. Learn something new every day, and I have met that small challenge. Thanks for the effort you obviously put into this video.
They make a micro atx version of this board called the N100-M, which has a full size pci-e slot and uses a standard ATX power supply.
Just read the 12th gen processor datasheet, and one of the requirements to go from C8 to C10 is: "Display in PSR or powered off". You could potentially get to C10 if you make sure the iGPU is not busy with work. All my N100 servers (which don't have displays attached, btw) are in C10 state. Just a wild guess, but it might work.
With VMs, it's almost impossible to avoid the iGPU, I guess? That's why I prefer using a headless OS with all containers in Docker.
What is your idle power draw in C10? Is it worth the effort to enable it?
@@Reza1984_ IDLE(Proxmox, no USB or HDMI connected) = ~6.4W. IDLE + USB(keyboard) + HDMI = 7W-10W(mostly ~7.5W).
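If anyone wants to check the residency on their own box, powertop can dump it to a report, and its auto-tune mode applies the usual power tunables; a quick sketch:
# one-shot report of C-state residency and tunables, measured over 60 seconds
sudo powertop --time=60 --html=powertop.html
# or apply all of powertop's suggested tunables in one go
sudo powertop --auto-tune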
6W quad core, swappable single channel DIMM in server style that allows direct airflow, NVMe, dual sata, an OPEN PCI-E x2 slot....Yeah this is pretty much what I need. My home server is a 2009 era eMachines with a single core 1.6GHz 2650e, 2GB, dual sata 3gbps, PCI-E g2x1 (empty) and g2x16 (10GbE SFP). This looks like a more than suitable replacement in speed alone.
Great channel, Wolfgang, really great content! I really do appreciate the drive to find power-efficient build parts!
I think for me the most valuable data was right at the end. I never know where exactly to place these CPUs like the n100 in relation to their bigger brothers in the core i-something series. So the performance and performance per watt graphic really helps.
Keep in mind that's a 6th gen i3. So we don't know how it compares to a current gen cpu
@@arouraios4942 Yes, but it still is a reference to something I'm more familiar with. Take Intel's Atom CPUs: they never came off that favorably compared to their bigger brothers, not even with a generation offset.
Very good work. Thank you for sharing. Loved it from start to end. Only potential omission (from my point of view) is a short mention of relevant BIOS settings (in addition to the Realtek NIC) to minimize power consumption.
I've seen that situation with iperf3 before. I was just recently looking at some MoCA videos and decided to go with that to improve my mesh system, and in one video it was pointed out that iperf3 wasn't using the full gigabit connection, but with the parameter for parallel streams, it did reach gigabit speed. I had the same experience once I set up the MoCA boxes for myself. You're definitely not alone there.
This was an interesting build as I've been thinking about looking at what N100 boards would be like, especially for power consumption, for something like this.
If you must poke wires into a power connector, at least use a resistor to limit the current in case you make a mistake. Deburring and grommets are good practice if you route wires through sheet metal.
A very interesting video for all those who want to buy efficient hardware.
Now, for those of us who already have hardware, and at the moment cannot change it, the software part would be very interesting.
Do you have a video where you explain that setup with Unraid and different containers?
The tricks and ways to configure the Docker containers for delugevpn, prowlarr, booksonic, cloudflared, invoiceninja, nextcloud, paperless, photoprism, radarr, recyclarr, sonarr and vaultwarden would be very appreciated.
Thanks for everything Wolfgang.
Tbf I wouldn't be too comfortable running a "hacked" PSU instead of the provided one; especially if you have to buy one just for that purpose, it kinda defeats the premise of the motherboard in the first place.
I think something like the N100I-D D4 would make more sense, even if you have to ditch the 10Gb card.
Small point, this board doesn't come with a power supply.
I've been running my Nas on an asrock q1900dc-itx since 2014. I started with FreeNAS, and it's now running TrueNAS Core. I've got 2x 2 TB Seagate Nas drives, and I added 2x 4 TB WD Red plus drives. The PSU is a 60w ThinkPad brick from ~2010. I don't run any services besides NAS on the system, because I don't want to add any complications to its management. I ran FreeNAS 9.2 from 2014 until TrueNAS core was a year old.
Maybe I'll migrate to truenas scale, if core gets discontinued. But more likely I'd only do that by building a new NAS on something like the n100dc-itx.
I love having a 10-year-old NAS with industry-standard parts that I can maintain without depending on any particular vendor staying in business.
1:15 Of course there is the J5040-ITX: it can handle 4 drives and also has a PCIe connector and an M.2 WiFi port. It supports 2x 4K streams, is passively cooled, and has dual-channel RAM with a max of 8GB per channel. Paired with a 12V adapter and a Pico PSU you can go as low as 5W at idle.
The PCIe slot on it is 2.0 x1 - which is the main reason why I prefer the N100DC-ITX board, as mentioned multiple times in the video. Besides, it costs almost the same as new N100 boards and comes with worse performance and an older iGPU.
I wouldn't be surprised if the NIC just overheats, since this is the exact same issue I had until I strapped a little Noctua fan onto the heat sink of my 10G NIC. Might be worth a shot
Have a passively cooled Asus 10G in my PC still, which doesn't overheat at all... It's due to iperf not taking advantage of multiple cores, I think. Had the same issue on a low-power Intel PC, but between my Zen 3 and Zen 4 40W+ Ryzens it's fine.
I just built this. The problem is not only that the DC pins can handle only two amps, but that the on-board DC-DC converter is only 90W (basically an on-board PicoPSU). I don't have that much data, so I am fine for now, but I was thinking down the road I could get one of those external HDD power supplies (they give you a Molex with 12 and 5 volts) OR, since I have some soldering skills, make a little DC-DC PCB from 19 to 12 and 5 volts and power everything I need from one beefy 19V laptop adapter.
But then again, by the time I fill up my 14 TB, some new board like an N500 will come out that has a more powerful DC-DC converter on board, more PCIe lanes and more SATA ports.
Sorry for spam, was trying to comment multiple times, but yt doesn't like me mentioning some chinese ali-shop
To be honest, the N100 can be configured with a higher TDP. A certain N100 box configured with a 25W TDP and 16GB of DDR5-4800 RAM can reach a Geekbench 6 score of around 3300, which is quite close to a quad-core Skylake i5 :D
But it performed a lot worse when it was configured with a 6-watt TDP, though. :)
I was using it with Windows 10 back then on my N100 box, and changing the TDP between 6W and 25W in the BIOS really made a huge difference :)
Thanks for your great videos, they help me save a lot on my power bills :D
I built a system based on this video and it's awesome!
N100DC-ITX, 32GB RAM, be quiet! SFX Power 3 450W, 2x 18TB SATA drives (more on the way), one USB NVMe and a boot SSD. No PCIe card.
I had to manually set the DDR4-3200 CL16 stick to DDR4-3000 CL16, since I don't think the board has XMP and the DRAM voltage can only be set to 1.26V. My stick is running fine (no memtest yet).
With Proxmox installed and several containers (VPN, Jellyfin, ...) idling, powertop shows 73% at C10 and 7% at C8, which sounds great (I don't know much about Linux power management). A wall-plug power meter shows 28 watts at idle, up to 80 watts under full load. I think the be quiet! PSU was a bad choice: it alone draws 10 watts when the board is powered down, but I found out about Wolfgang's PSU chart too late. All in all I'm very happy, though.
Great video on a great board, thanks for the content Wolfgang!
Have been tracking your channel for a while. Great stuff. Looking forward to your build video.
Immediately bought three of these for my new low-power, quiet Ceph cluster.
I really appreciate your video and all of the comments. They will help my current NAS build, and I learned that the wires need to be crimped.
I thought for a long time about going for an ASRock or Asus N100 board, but in the end I bought a tiny PC: an HP ProDesk Mini, i5-8500T, 8GB DDR4, 256GB NVMe SSD, for 95€. I just added 2x16GB DDR4 and a 1TB SATA HDD + 1TB USB HDD. With all this added, idle power consumption is 5W on Debian 12, C10 pkg (no monitor, no mouse, no keyboard)
Of course I don't have parity, but that's not crucial in my case.
Yep, there is no replacement for this board if we're talking about support for a 10Gbps NIC, but to be honest the Asus Prime N100I-D D4 is also not that bad. At least it has a second M.2 E-key port with PCIe (the ASRock's M.2 E-key port doesn't have PCIe), so it can be used to install an AI accelerator or a "slightly" slower 2.5-5Gbps NIC, and it's a bit cheaper.
Who needs 10Gbps on a home server anyway? That's kind of a meme, tbh. Even the usefulness of 2.5Gbps could be questioned for a computer that can only transfer to and from SATA ports.
@@billmurray7676 SATA can handle up to 6Gbps, and a single HDD is capable of saturating 2.5Gbps... IMO a 1Gbps NIC is just slow; I can read data faster than that even from a regular SD card.
It doesn't matter what SATA can support, it matters what you put on it: you put HDDs on it, so that's about 200 MB/s, which is 1.6 Gbps. So no, you won't saturate 2.5Gbps with your HDD.
That means 10Gbps is clearly useless on a home server. And 2.5Gbps, well, like I said, it's a questionable investment or reason to buy hardware in practice.
@@billmurray7676, the fastest available hard drives are reaching the SATA limit, RAID in a NAS is a common thing, it's possible to reach that limit even with standard hard drives, and we haven't even started talking about modern SSDs used as a buffer or main storage. It's 2024; a 1Gbps NIC should be considered obsolete for anything above a low-end NAS.
You said HDD, so obviously SSD are out.
Also, that would be pretty stupid to build a RAID for performance in a NAS since, by essence, you're supposed to build for data safety. You can't have both in a RAID.
1Gbps, although not ideal, is clearly not obsolete, 2.5/5 are options, but 10Gbps is definitely useless, which means that PCIe x1 is fine, and you shouldn't sacrifice other benefits for PCIe x2 or x4.
As long as you were doing a bit of custom cabling, you could also use your 19V power supply and branch it: a straight-through 19V to your motherboard, plus 2 cheap-ish buck regulators (12V at 5A and 5V at 6A, both available on Amazon or eBay for under 20 USD), and add SATA power cables. A little more involved, but maybe less intimidating (and perhaps cheaper) than working with a 120/240V power supply.
I don't think the buck converters + 19V power brick would do much for efficiency. Given that the bulk of the "active" or spun-up power draw is 12V & 5V for the HDDs, you'd be losing a lot via those buck converters.
@@evanvandenberg2260 it wasn't for efficiency, the idea was not to use a 240V supply.
What you could do instead for the power, IMO, is put a female barrel jack in those two WiFi antenna openings and use a small jumper cable to connect that to the board power. The whole "pull 4 cables from inside the case to plug into something outside" approach looks a bit botched IMO.
Ok. I was expecting to see more NAS-related stuff: ZFS, TrueNAS, Unraid configurations and performance on those SATA ports. Looking forward to that.
Open source version of Synology OS
Now CWWK has an N100/N305 board with only two 2.5G Ethernet ports, but it has a PCIe slot with 4 lanes to take a higher-bandwidth network card. It also has 6 SATA ports. This meets the requirements in a more straightforward way than was possible before.
The issue is 20W of power draw at idle
I have this board! It sits under my TV.
Also it's nice for someone like yourself to review this board and show off the capabilities.
If I get some drives, I'll probably upgrade mine to a very similar setup to yours, though likely 2.5GbE.
Do you by any chance use the onboard audio with the 3.5mm ports? I am looking for a cheap board that I can plug my (cheap old) 5.1 system into directly. If you tried it, how was the quality? Not looking for high-end, but it also shouldn't be terrible.
@@MrMoralHazard sorry! I'm using only the HDMI for audio. I would test, but I literally lack the "ear" to tell you if the 3.5mm is worth a damn or not.
There was some misleading information about the N100 vs i3-6100 performance, because you calculated with a TDP of 6W vs a TDP of 51W. I'm pretty sure the N100 goes above 20W when running Cinebench R23, and I don't think it will be 8.5 times more efficient at the same performance. Can you please measure? Thanks. Love your videos!
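For anyone who wants to measure it themselves, the package power can be read from the RAPL counter in sysfs while an all-core load is running; a rough sketch, assuming intel-rapl:0 is the package domain on your board (reading it usually needs root):
# average package power over 10 seconds, from the RAPL energy counter (microjoules)
e1=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj)
sleep 10
e2=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj)
echo "average package power: $(( (e2 - e1) / 10 / 1000000 )) W"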
Thank you for your channel! It is a really nice resource when shopping for hardware.
I was contemplating a N100DC, but chose a N100M with separate Pico PSU because of availability.
I would like more cheap alternatives with ECC, but that is just my preference.
The power draw is amazing though. I tried a FriendlyElec NAS board with an RK3588 and it topped out at 13W if I remember correctly. They say that x86 consumes more, but well, it all comes down to the CPU manufacturing node (7nm in this case), the devices connected to the motherboard, etc... Can't wait to have the equivalent of the N100 but on 1nm in 2027!
Very nice NAS project! About the bandwidth issue: I once had a switch (some TP-Link, if I remember correctly) that claimed in the specs that it would auto-adjust the frame size, but this actually never worked. After updating the switch, I could set this manually and actually enable jumbo frames.
Managed switch from tp link? Nice 😊
@@mmuller2402 I think the model name was something "JetStream". It actually was not mine, I just got it running for the customer...
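For reference, once the switch side supports it, the Linux side is just an MTU change; enp1s0 is a placeholder, and every device in the path has to accept the same MTU:
# set a 9000-byte MTU on the NIC and verify it stuck
sudo ip link set dev enp1s0 mtu 9000
ip link show enp1s0 | grep mtu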
I went in a different direction. Bought the Aoostar R1, added 2 20tb drives for storage, bam low power media server running at 10 watts from the wall with the hard drives spun down, plus you can set the bios to run the N100 at 15w tdp for more performance at a max temp of 70c.
In some ways I kinda wish the manufacturers of the N-series motherboards were crazy enough to cram the basic I/O (SATA/LAN/USB3) onto a single PCIe lane and expose the remaining 8 as PCIe x4 slots
Thanks for this guide! I am currently working on my own version of this. After building it with just the basics, without the HDDs yet (only a 500 GB SSD with Proxmox), I measured 22 watts at idle, way off from your measurements... I watched your video again to find some hints and found the problem: I bought a Flex PSU on AliExpress. It turns out that with only the power supply turned on, with nothing else connected, it already uses 10 watts!! Omg. (At 32 cents/kWh, running 24/7 that costs 28 euro/year.) It's going back, and I have ordered a 300 W Pico PSU + a 12 V 10 A DC power adapter (so 120 W effectively). Let's see how that goes.
9:12
May I know which 12 V power supply you used for the PicoPSU?
Another great video. Thanks mate. I saw a glimpse of Traefik in this video. Would love to see a setup of it like you did with Nginx Proxy Manager.
I would love to see you build the ultimate Unraid/Plex home server with 5-6 SATA HDDs using the new Minisforum MS-01: take the motherboard out, put it in a small case, and add a cheap graphics card that can easily handle at least 4x 4K streams with transcoding!
Perfect timing. I am considering a n100 board for a NAS too.
Curious whether you would recommend this board over an N100M with more expansion, or over the CWWK / Topton N100 boards. My goal is low power, not necessarily high network speed
I'm also interested in how the ASRock N100DC-ITX compares to the CWWK and Topton N100 boards.
I second this
The problem is that the CWWK / Topton N100 boards have the JMB585 chip, which prevents the system from reaching deep C-states and results in higher consumption.
@@Eujanous That needs to be compared to see actual results. If the JMB585 is more efficient, then it may not make much difference.
Topton now has an 8505 board that works great as a NAS (and/or router) because it has more PCIe lanes, and in addition to Quick Sync it has QuickAssist. For around 200 USD you get an M.2 4.0 x4 slot, 6 SATA ports, 1x PCIe x4, 4x Intel i226-V 2.5 Gb Ethernet and dual-channel DDR5... Oh yeah, it can also take a 4x 3.0 x1 NVMe adapter board.
Managed to get my HP ProDesk 400 to idle at 5-ish watts (jumps between 4 and occasional peaks of 6) with 2 HDDs spun down, while running Samba shares with mergerfs on the Proxmox host, a HAOS VM and a Jellyfin LXC. i3-8100, 16 GB RAM. With 1 disk spinning it's 12 W, with 2 it's about 18 W, and under load it's about 60 W.
The Avoton/Denverton based boards are really, really good. I think they check all the boxes for what you're looking for in a small low power motherboard. Those chips were designed from the ground up by Intel to perform exactly the function you're trying to achieve. Plus they support ECC memory, which is awesome.
I used to have an AsRock C2750D4I. Mini ITX. 8 core Silvermont/Bay Trail generation low power Atom processor, with four DDR3 slots supporting ECC. 12 SATA ports. PCIe-8x slot. IPMI onboard with a dedicated ethernet port, plus two additional onboard gigabit ports. And they're old enough now that you can probably get them for cheap.
The newer Denverton based boards are probably significantly better, but I haven't had direct experience with those. The only reason I got rid of the C2750D4I was because I moved to 10 gigabit fibre and the poor little Atom cores couldn't push more than about 190 megabytes/second over rsync when I was doing backups between servers.
You are off the mark. Those Atoms can't transcode 4K/HDR in any serious way
This pulled me over the line. I've been looking at this board for a while now, but was doubting the performance. Think I'm going to get an N100M for a new NAS
Hey, did you get the N100M, and which case did you use for it? I'm looking for a small case, ideally some kind of mini-ITX, that has space for the N100M when used with a PicoPSU
Can you please do a video on how to make existing hardware more efficient? Your videos are nice, but keeping your old stuff is cheaper, and some people can't afford to buy a new motherboard every time you do a video. Very informative video
To improve power efficiency you need to reduce the distance between transistors on a chip, so there is less electrical resistance between them. Typically this is done by shrinking the manufacturing node, say from 14 nm to 7 nm. So you'd need to buy a new chip to take advantage of that.
But what big tech doesn't want you to know is that you can shrink the distance between transistors yourself by just pushing really hard on the sides of the chip. My trick is to take my CPU and put it in a vice and then squeeze as hard as I can. Using this method I've turned a 28nm Xeon v2 into a 7nm Epyc 7002 (although the pressure did make some transistors pop out of the side).
Can't wait for the build and setup video. The power consumption on this beast is brilliant! Please also mention the cost of the build.
Nice update to your last Serv-Nas-Homelab build !
~Evilcorp crippleware hack, brings tech to the masses !
Soldering would improve the DC connector situation, since you'd be soldering directly to the board.
Re: the network speed.
We ran into a similar issue at work with one of our Windows servers. The fix was to enable SMB Multichannel.
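That was a Windows server, so take this as a sketch only, but on a Samba-based NAS like the one in the video the rough equivalent would be this setting in /etc/samba/smb.conf (recent Samba versions may already default to it, and the Windows client negotiates multichannel automatically):
[global]
    # allow SMB3 multichannel so a transfer can use several connections/NICs at once
    server multi channel support = yes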
Here's the command (9:26) to test your PCIe devices for ASPM:
sudo lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM )'
Looks awesome. Explaining Computers did a really nice mini PC build with an N100 board; I think it was different from this one. Also, Supermicro has an awesome, relatively low-power (for the number of cores etc.) Epyc ITX board. That one is great if you want more lanes. Also, it should last forever. Obviously, it is more expensive......
I didn't check, but bifurcation on the PCIe slot is a really good option for additional drives. Not all boards support it, though.
I built a desktop with LMDE running on an ASRock N100M with an NVMe SSD and 16 GB of Kingston memory, almost like Explaining Computers did, except I use HDMI output to an LG TV. Everything would be fine, but to my huge disappointment the system tends to freeze or reboot spontaneously. I have tried Debian as well as almost all available Debian-based distros with different desktop environments, and Fedora desktop, with no effect. Finally I ended up with LMDE, which seems to be more stable on my hardware. As for working as a remotely controlled NAS or home server, the N100M board worked with no issues at all. It seems the problem is somewhere in the video output/drivers.
Great stuff. I was asking in the comments about the Mellanox ConnectX-3, which is terrible. My server needed more horsepower, so I went with an i3-14100. Bought an Intel X710 (dual port) for 100 EUR with 2 transceivers. It goes down to C8 and the 10G speed is always there. The server is just one part, though: I've got a rack with 2 switches (8x 10G SFP+ and 8x 1G, 4 of them PoE+), a shut-down server with IPMI and 18 cores, 1x router, 1x Wi-Fi 6 AP, 1x camera, the i3 server mentioned above, and a UPS. Previously, with the server based on that 18-core Xeon, total idle consumption was 136 W; it went down to 69 W with the i3, which is still a lot. So there are also unoptimized things like the camera (15 W!), the UPS (not measured) and IPMI (5 W); the networking could probably also do better.
Despite what the manual says, I have an N100 running with 32 GB; it runs like a charm.
I migrated my home server from an old Atom N280 to the N100DC a few months ago, and the performance gain is impressive, to say the least. Since I don't hoard that much data, two USB 3-attached SATA HDDs were sufficient for the NAS part. The box is hosting my web and mail server, a dockerized Home Assistant and a small Minecraft world, all in the low double-digit % load range and under 10 watts of total power.
I don't really get the point of putting a SATA controller into the M.2 slot: four USB 3 10 Gbit ports provide plenty of connectivity for spinning rust, and I'd much rather put a big NVMe drive in there for the stuff that needs to be really fast.
USB HDDs have proved to be unreliable for permanently attached storage in my experience
The 4x USB 3.2 ports on this board are Gen 1, which only support 5 Gbps, not 10 Gbps. But I see your point, there's nothing wrong with using them
@@-.eQuiNoX.- I thought 3.1 was 5 Gbps and 3.2 was 10 Gbps, and the Gen 2 versions were double that each.. [insert lengthy rant about stupid USB naming conventions]. Still enough for the ~150 MB/s that the best spinning rust can provide.
@@WolfgangsChannel hmmm. I've been running my NAS off USB drives for almost two decades now without issues, but I guess mileage will vary by manufacturer and usage.
Just a general FYI: despite Intel's N100 spec sheet, I've been running a 32 GB 3200 MHz DIMM on this board without issue for 7+ months. (TrueNAS Scale)
From a tinkerer's point of view this might be a nice solution, but I would still consider this M.2-to-SATA card a workaround. However, the Intel N100 seems to be the perfect fit for an energy-efficient NAS. The first turnkey solutions are already around the corner; I will wait for one of those, as I prefer to tinker with software over hardware. 😁
It doesn't support AV1 encoding on the integrated GPU, but you can still encode AV1 with SVT-AV1 on the CPU using ffmpeg. If I remember correctly, encoding on the CPU gives better quality, despite the longer encoding time.
I don't think it's the smartest idea to do it on an N100 however.
Even with SVT-AV1 getting better and better, it would probably run at single-digit fps, if not less, at a decent preset (5/6).
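For reference, a minimal sketch of what a CPU-only SVT-AV1 encode looks like with a recent ffmpeg (the filenames, preset and CRF are just placeholder values, and on an N100 expect this to crawl):
# software AV1 encode with SVT-AV1; lower presets are slower but higher quality
ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 32 -c:a copy output.mkv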
For a NAS like this, AV1 decoding is probably more important than encoding. If you have AV1 files, you want to be able to transcode them on the fly to e.g. H.264 or HEVC to stream to a TV or media player box.
@@kepstin Yes, and I don't know whether you can use the Intel iGPU, or even the Intel Arc AV1 encoder, in ffmpeg for example.
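A rough sketch of that kind of on-the-fly transcode via QSV, assuming a recent ffmpeg built with QSV support plus the Intel media driver, and using placeholder filenames (the N100 iGPU can decode AV1 in hardware, it just can't encode it):
# hardware AV1 decode and H.264 encode on the iGPU
ffmpeg -hwaccel qsv -hwaccel_output_format qsv -c:v av1_qsv -i input.mkv -c:v h264_qsv -global_quality 25 -c:a copy output.mkv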
@12:41 Wait a minute... So are these power consumption numbers with the Bronze-rated SFX power supply, because you hooked up the HDDs? Or did you use the Pico power supply? It's not very clear to me..
And can someone tell me if I can truly pass through the 6 SATA ports on the NVMe board to a VM in Proxmox?
In other words: does the NVMe slot share its IOMMU group with other devices? If so, which devices?
Watch the beginning of the “Power consumption” section
@@WolfgangsChannel Alright, so it's using the Pico PSU, with all HDDs getting power from the single (Pico) power line, I presume
Hi Wolfgang Wolfgang! - lol, that made me chuckle….
Why not just get the N100M instead of this one? Only the price?
I'm running one for my OPNsense firewall with a dual 2.5G NIC. For some reason it kept crashing. The culprit was the DRAM voltage being too low on a 3200 MHz 16 GB stick. The XMP support in the UEFI is kinda weird. Anyway, for 24/7 operation in a case I slapped on a little fan for airflow. Now it runs with no problems.
7:14 you don't need a thick wire, you're just connecting the signal pin to ground; a random paper clip will do the job just fine.
The thick wire is there to fit snugly into the female ATX pins
@@WolfgangsChannel ok, fair point.
Maybe OK with the ITX form factor. But if you want/need something smaller and more energy efficient, go with the ODROID-M1. It has PCIe 3.0 x2 and can handle the same JMB585 etc. NVMe adapters without issues.
Nice video! It would have been interesting for you to test real-world network file transfer speeds with TrueNAS etc., as that's what really matters to most of us.
0:24 if it's a bare-die CPU, maybe handle the heatsink more carefully than that. It instantly reminded me of chipped Athlon/Duron cores.
Than what? I didn’t do anything with the heatsink in the video
@@WolfgangsChannel it looked like you held down the board by pushing on the heatsink, to connect the power cable.
Thanks for the interesting video! At the beginning I was wondering why you didn't use the ASUS Prime N100I-D D4, but if you need the higher PCIe speed for a 10G network card, it makes sense again. Presumably the external power supply of the ASRock board also needs less power than the ATX/SFX power supply of the ASUS board, right? I have seen a very similar build proposal at Elefacts and am still thinking about it.
Very interesting build but I would warn that running power wires through grounded holes in thin metal is a fire hazard. The drives cause substantial vibration, and over time this will degrade the insulation. Rubber grommets are advised.
Thank you for the video. Could you please talk more about remote control? Or, if there was an earlier video about it, could you please share the link?
How much do you care about ECC for home server?
I'm really paranoid about ECC and use it whenever possible. I have 5 servers here, with a few hundred terabytes of storage between them, and I upgrade machines fairly regularly. I always check the logs, and have never seen a bit flip on those machines. Over the last 13-14 years of running multiple servers, I've only ever had one machine that would register ECC errors in the logs. I *ALWAYS* run ECC on anything mission critical, but I think really it probably doesn't matter. Unless you're living in an area with really high radiation (like a basement with radon) or a noisy electrical environment I don't think it's necessary at all. It's more of an indicator that you have failing hardware that needs to be replaced, rather than something that serves a purpose in day-to-day operation. If you use a checksumming filesystem it can probably catch most of the data problems without the additional layer of safety from ECC.
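For anyone who wants to do the same kind of log checking on Linux, a quick sketch (this assumes your platform actually loads an EDAC driver; the counters simply stay at zero if no errors were ever corrected):
# corrected / uncorrected ECC error counters per memory controller
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count
# or look for ECC/EDAC messages in the kernel log
sudo dmesg | grep -i -E 'edac|ecc'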
@@joels7605 thank you very much for your reply
I'm convinced by this video, and I almost forget that I don't want to build a NAS.
Love your videos! I bought a used HP SFF business PC for £50 to use as a Minecraft server; I feel like it's unbeatable for home servers
I have a similar setup. I am running 6 HDDs and 2 SATA SSDs off a Lenovo P330 Tiny. Using an HBA card. The Lenovo mini PC uses a 20V power supply. I have a 650W ATX gold PSU powering the drives.
What I did was cut off an old USB cable, connect its power leads to a 5 V relay module, and connect the relay's N.O. contact to the PSU's jumper pins.
This let me power on and off everything together. 😁
I recently started experimenting with a bare M920x board (they could be configured with a dedicated graphics card using the custom PCIe slot) and soldered to exposed PCIE_12V and 5V pads. The PCIE_12V line should be able to source at least 30W - should be enough for 2 to 4 drives (maybe more) depending on their startup characteristics.
@bbrice100 which drive case are you using? Which hba card also?
@@antoine6521 I am using an M.2 Wi-Fi to NVMe adapter board (one with a long FPC cable between them) and an ASM1166-based M.2 to SATA card. When I write a short report on it I will try to remember to link it here :)
@@rayncadiana Looking forward to it
5:23 yes, 2 A when spinning up, but the rating of that JST plug is for continuous current, not peak. So it would probably be fine, but it's still a janky solution
If ONLY it had 2 M.2 slots... we could have both the additional SATA ports AND an NVMe cache drive for max throughput on the NAS front when using 10GbE SFP+. Also, man, you just dropped that new AQC100 chipset on us as if it was no biggie, when you know so well how we've been waiting for this for ages haha. Love it though, thanks a bunch. Visiting the EU soon, so I'll use your links
There is a slot for a Wi-Fi card; aren't there SSDs that can fit into that?
Nope, it doesn’t carry PCIe signal on this board
@@Beatleman91 it's probably USB 2.0, so A+E keyed I think... so a Coral M.2 A+E module would fit in there (if you can buy one, that is)
I bought an AIMB-275 motherboard with an i5-6600 for my NAS build. I'm Polish, so I bought them on Allegro, and they were really cheap. With 1 HDD and some sh*tty eMMC drive I had 4 W idle power consumption. I think you're able to shop on Allegro since you live in Germany, so you should definitely try this.
Maybe it will go that low, but I doubt it. The HDD alone is 5 W, and once you put even a light load on it, that CPU with that TDP will jump to 40-50 W. I know because I have an i5-7500.
@@PLToronto With the HDD spun up I measured 7 W. Besides, you can limit the TDP.
@@BlueCombPL But not down to 7 W. Share how much you draw at 100% load; a synthetic benchmark gives some basis for comparison. In real use it then reaches around 30-70% of that.
@@PLToronto But why would I check the power draw at full load when the server sits unused 90% of the time? The more important number then is the aforementioned 4 W at full idle.
Give the Topton NAS Motherboard N6005/N5105 Mini ITX a look. You will find it to be a super DIY home NAS MB!
9:29 I copied the command, but running lspci still shows ASPM disabled and I still can't go below C3
It's a different ASRock mobo, but the same issue; the Ethernet controller looks the same. I ls'd the driver's folder and it also has l1_aspm
Is it supposed to still show ASPM disabled even after I run the sudo tee command?
Appreciate the help!
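Not a definitive answer, but two things worth checking; the PCI address below is only an example (grab yours from lspci), and if the BIOS has ASPM disabled the kernel usually can't enable it on its own:
# current kernel-wide ASPM policy (default / performance / powersave / powersupersave)
cat /sys/module/pcie_aspm/parameters/policy
# per-device L1 toggle, available since kernel 5.5
echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/link/l1_aspm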
Apart from the form factor, is there any other reason not to choose the ASRock N100M? With that one you can directly use a normal 12 V PC power supply
Nope
Shout out to Wolfgang's parents who apparently have a 10 gig network at home.
But a German Internet connection 🤡🤡🤡
50M DSL max kekw
Yeah, I think you've nailed the weakness of this board: I wish so much that it had either 6x SATA or 2x M.2
On my NAS I love having 2 SSDs as a cache and half a dozen SATA disks, but there's no sensible way to do that with this board
I guess I could use the M.2-to-SATA board you use plus a PCIe x4 to 2x NVMe card? But that seems like relying on adapters more than I'm comfortable with
Yep. And since this board doesn’t support PCIe bifurcation, you’ll have to go for an NVMe adapter with a PCIe switch. Which would cost more than the board itself.
By the way, CWWK finally brought out an N100-based NAS board with the ASM1166 controller (as opposed to JMB585): www.aliexpress.com/item/1005007001584335.html
I think if you're pursuing a cheap build, you can buy an Intel 82599EN-based network adapter, usually branded as X520-DA1, and this motherboard should handle the x8 PCIe card without any problems.
Passive cooling is a relative concept: a passively cooled computer in a room with AC isn't really passively cooled, the AC is picking up the slack. And if you live in a tropical country near the equator, your passively cooled computer from Europe will get hot and require a fan, even with AC in the room, because our ACs don't have cool outside air to work with and can't do as much.
ACs don't use outside air; they only recirculate the room air, and the compressor chills it.
Well, using your own words from one of the previous comments - looks like your life is a mess and poorly organized. Just get an AC installed :)
Dude, having Notion as sponsor is quite cool.
Not the usual boring squarespace bs or whatever.
I really appreciate that you don't sell your soul 😄
Hey, thanks for the nice video. One question about the PCIe power management (at 4:18): what is the problem if the motherboard/SATA controller does not support it?
The power consumption increases
Hello from Münster.
Can you please provide a bit more information about C-states?
How do you check, debug and configure them?
Hi, he has a video about this called "Building a power efficient home server", released a year ago.
@@subrezon Thanks a lot.
Somehow I missed this video.
You can usually just run powertop if you're running Linux. It's a pretty good little utility.
@@joels7605 this is still only for people who know what C-states are; many don't know what the concept even is. And I don't blame them, C-states aren't really covered by most hardware channels, reviews, etc.
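For anyone curious, a quick sketch of how to peek at them on Linux without much setup (powertop ships with most distros; the sysfs paths below show the per-core idle states rather than the package C-states):
# interactive overview of idle states, package C-state residency and power tunables
sudo powertop
# or read the per-core idle state names and residency times (in microseconds) from sysfs
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name /sys/devices/system/cpu/cpu0/cpuidle/state*/time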
Thanks for the great video. The board is really sweet, and the idle power consumption is amazing. It's a shame that there are not more PCIe lanes to go around: there are simply not enough lanes to get SATA 6 Gb/s speeds while also having 10 Gbit networking. That might be fine for spinning disks, but I am still looking for the 'perfect' 6-8 disk all-flash NAS. It's a real shame that all those efficient Fujitsu boards are incredibly hard to come by.
Good video man !
Basically ZimaCube but DIY.
To be clear, I'm not saying that this build is trash; I mean that the ZimaCube IS trash because of the marketing trying to upsell us the Pro version with a much beefier CPU. This build is actually the good one.
AFAIK, ZimaCube (at least in the version that was shipped to creators) has major problems with power efficiency, due to the PCIe layout
In the UK, the cheapest I could find one of those 10G NICs was for just over £80 from Misco.
Great video 👍 9:26 -- ASPM (active state power management) setting, thank you. Kindest regards, friends and neighbours. P.S. Please do *_not_* do a UGreen NAS video.
No ECC memory though, right? For a NAS I'm looking at ECC.
I did just the same, but with the Gigabyte N5105I H. It comes with a previous-generation CPU but can easily handle TrueNAS and transfers of over a gigabyte
With four cores, anything over 25% load could mean a single core is saturated. Check your interrupts, and see if the driver has any features to reduce them
With Linux reporting tools, CPU utilization is usually relative to a single core: 100% means one core is saturated and 400% means all four are fully loaded.
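A rough sketch of how you could check that (the interface name is a placeholder, and whether coalescing is tunable depends on the NIC driver):
# see which cores service the NIC's interrupts
grep -E 'CPU|eth0' /proc/interrupts
# per-core utilisation, refreshed every second (sysstat package)
mpstat -P ALL 1
# if the driver supports it, raise interrupt coalescing to cut the interrupt rate
sudo ethtool -C eth0 rx-usecs 100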
The ASUS Pro H610T D4-CSM is my choice, because I need more CPU power. It has 12 V power for storage built in, but it's also limited to only 2 SATA ports.