Your description of DDR5 ECC seems to be backward. The standard on-die ECC common to all DDR5 protects the data while it is "at rest" inside the DRAM (so far as dynamic memory can be said to be "at rest") with a 128+8 ECC. DDR5 also supports a CRC on data bursts, which would protect data while it is being transmitted between the CPU and DRAM packages; the CRC isn't mandatory, so it's safest to assume it isn't in use. Together this means that errors in the DRAM caused by cosmic-ray bit flips or overly cramped timings can be caught and corrected before being sent back to the CPU. But if the data is corrupted on the bus (e.g. by noise on the lines, or defective buffers or transceivers), there is nothing the standard on-die ECC can do about it. With server-type ECC, the redundant parity bits (4 or 8 per 32-bit channel) are calculated in the CPU before being placed on the data bus. Thus the data and the parity bits are both protected by the (possible) CRC and the on-die ECC. When data is retrieved, the CPU memory controller recalculates the parity bits, corrects and logs recoverable errors, and halts the system in the event of an unrecoverable error, then forwards the data to the caches or registers (which universally have their own ECC). In summary, standard DDR5 on-die ECC protects the data only while it is resident in DRAM. In addition, end-to-end ECC with x72 or x80 DIMMs (both registered and unbuffered; unbuffered in the case of desktop Ryzen) protects data the entire time it is outside the CPU.
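To make the "corrected before being sent back" part concrete, here's a toy-scale Python sketch of a Hamming(7,4) single-error-correcting code. This is the textbook layout, not anything DDR5-specific (real on-die ECC uses a much wider 128+8 code), but the principle of locating and flipping back a single corrupted bit via a syndrome is the same:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword: [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Any single bit flip is corrected transparently
data = [1, 0, 1, 1]
cw = hamming74_encode(data)
for i in range(7):
    corrupted = cw.copy()
    corrupted[i] ^= 1
    assert hamming74_decode(corrupted) == data
```

Note that a pure SEC code like this will miscorrect a double bit flip, which is why server-grade implementations use SECDED (an extra parity bit to at least detect two-bit errors).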
For just a VM host with some local storage, the P330/M920X and newer are good all-arounders: two M.2 slots, one PCIe 3.0 x8, and an optional SATA connection. But a storage server with enough M.2 and PCIe options, plus hot-swap bays, is really hard to find.
Wendell, Wendell, on one Level, How does your keyboard go With 1.75u, 1.25u, and split space on the bottom row? Seriously, what model has that bizarre layout??
Had you considered using a smaller PSU, e.g. an HDPlex or similar super-small power supply? That should give you more space for SATA SSDs, or cables, or M.2 hacks, or whatever.
Great video! :D I did a similar build, but I'm struggling a little with the M.2-to-SATA card; I can't boot from it. Were you able to boot from yours, and what exact card are you using?
The instant a cat sees that case upright, it will move in for the full-body bump-rub. You know you still have some shred of sanity if you haven't begun to Dremel, band saw, etc... cases... sigh... been gone for a while now.
For me, the biggest problem is the ethernet cables throughout the whole house. I really don't know how you guys do this lol. This home server looks amazing!
How long before some motherboard manufacturer decides to mount M.2 sockets on their side and facing the "back" to effectively create small form factor M.2 (PCIe) slots to more directly support M.2 cheat codes and save space compared to full size PCIe slots, which are being dropped from some boards anyway?
Is there any video on this E20M10-T1 card? I was under the assumption it would only work with Synology devices and only function as a cache for the NVMe drives. I can't seem to find info on the use case described in this video.
Wendell: a heatsink for the CPU. I think I found a better solution that will fit perfectly. I have this case and a Ryzen 9 7900 in mine. Try the Noctua NH-L9a-AM5.
At around 1m50s, is that a coal fired furnace in the center of the frame, and then a gas furnace off to the left? What is that thing to the right? #LevelOneMechs
I just put together an AM5 build (with the ASRock B650E PG-ITX) and it POSTs with Supermicro Micron ECC UDIMMs and the latest UEFI. However, I can't load the amd64_edac module to get error reporting in Linux. It looks like nobody ever updated the module for Ryzen 7000 (family 19h model 61h). See drivers/edac/amd64_edac.c around line 3887. Also, mine (with a 7700X) idles at around 45W with 2 3.5" HDDs and an RX 5500 XT. I'm curious why your build seems to have a higher idle draw.
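For whenever the EDAC driver does load (e.g. on parts it already supports), the corrected/uncorrected counters show up under the standard EDAC sysfs tree. A small sketch for reading them; the path below is the usual layout, but verify it against your kernel:

```python
from pathlib import Path

def edac_error_counts(base="/sys/devices/system/edac/mc"):
    """Read corrected/uncorrected error counters for each memory controller.

    Each mc0, mc1, ... directory exposes ce_count (corrected) and
    ue_count (uncorrected) files maintained by the EDAC subsystem.
    """
    counts = {}
    for mc in sorted(Path(base).glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())
        ue = int((mc / "ue_count").read_text())
        counts[mc.name] = {"corrected": ce, "uncorrected": ue}
    return counts
```

Running this periodically (or just `grep . /sys/devices/system/edac/mc/mc*/*_count`) is a cheap way to confirm ECC reporting actually works once the module situation is sorted out.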
Been waiting for this for a few weeks, don't you go cautionary tale on me Wendell. I am about to dedicate a serious chunk to getting a 110 TB server running with the N1 or.. N2? :O
I went the opposite way, in almost every way, with my home file server. I made a Core X9 into an end table. It blends in with the regular furniture as an end table, with 8X 4TB HDDs and 8X 2TB SSDs with a Ryzen 1700X, running Windows Server 2019 and Windows Storage Spaces.
What is the model of the M.2 SATA controller you are using? I tried to find it in the cheat codes video but, unless I totally missed it, I couldn't find it in that video either.
I really would like to see a full video on the software stack. Like one comprehensive guide outlining everything you need to think about: security, access from the internet, the whole deal really. The hardware is easy for me to figure out, but the software, and keeping everything secure and private while having access from the internet, is a lot more confusing to me.
same
Realistically, practically, and high-level for newer people learning (in no specific order; you can look up setup guides for all of these):
1. WireGuard for accessing your home network remotely (NO PASSWORDS EVER, ONLY KEYS)
2. I personally expose SSH on a random port of my choosing (AGAIN, NO PASSWORDS EVER, ONLY KEYS, AND NO ROOT ACCESS)
3. If self-hosting applications exposed to the internet, use a reverse proxy with SSL (Cloudflare is very powerful, and the free tier is honestly pretty amazing for a homelab)
4. Set up your server's firewall to default deny all, and open specific ports as needed. Then follow up with port forwarding on your WAN router.
5. If self-hosting a password manager such as Bitwarden, get a physical hardware key. YubiKeys are very nice imo.
6. BACKUPS BACKUPS BACKUPS. Use the 3-2-1 backup mentality.
7. Good luck, and when self-hosting stuff, stay on top of software updates and overall stay up to date on security happenings regarding the hardware and software you are running.
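As a concrete sketch of points 2 and 4, the key-only/no-root part lives in a handful of sshd_config directives. The port number below is purely an example (pick your own), and you should check the directives against your distro's sshd_config man page before locking yourself out:

```
# /etc/ssh/sshd_config -- key-only auth, no root login
Port 48222
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
```

For the default-deny firewall, if you use ufw the equivalent is `ufw default deny incoming` followed by explicit `ufw allow 48222/tcp` style rules for each service. Keep an existing SSH session open while testing so a mistake doesn't lock you out.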
@@ibrudiiv I mean this sorta thing is what I was looking for! Thanks
I don't have personal experience with it, but I've seen Tailscale recommended as a secure and easy way of remotely connecting devices together.
Wendell doesn't mention it in this video (and in fact he points out this Synology card as specifically not requiring it), but the B650I Aorus Ultra actually does support x8/x8, x8/x4/x4, and x4/x4/x4/x4 bifurcation in the BIOS. That's the kind of thing I really wish they'd just put in the spec sheet; it's pretty important to some of us. Three built-in M.2 slots give you good storage and the opportunity to use M.2 NICs or M.2 SATA HBAs (up to six SATA ports per M.2 slot) to expand its capabilities. In addition, you can fit a bifurcation card to the x16 slot for even more room to expand. Coupled with AM5 finally bringing iGPUs to every processor in the lineup, it really is the ideal mITX home server motherboard. The only things it's really missing are IPMI and onboard 10GbE, but at least one of those you can add yourself reasonably easily.
I guess I could have done with more fan headers, you only get two, but whatever, that's still enough for most use cases. And you can always use splitters.
Honestly cutting down PCIe cards for home servers is an underrated art, I've got 2 cards cut down to PCIe x1 and it's super easy to do reliably. Contrast with me nearly breaking my motherboard with the "recommended" approach of cutting out the back of the x1 slot. Once it's done and tested it's going to be as reliable as an unmodified card in a full slot, and you avoid having to use dodgy cheap x16 to x1 risers and awkward mounting setups too.
Just be very careful to never breathe the sawdust... Breathing in fiberglass particles like this is basically equivalent to breathing in asbestos.
@@insu_na Agreed
@@insu_na High humidity, a respirator, and a water spray bottle to kill the dust after?
@@stephen1r2 Take basically the same precautions you would as if you were working with asbestos and you should be fine
I went with 5.25" bays in a >10-year-old case, since I had extra CPUs and motherboards from my desktop system upgrades. Use what you have, especially since you can no longer buy cases with enough 5.25" bays. Thanks, Steve!
Use what you have - definitely agree if you have old stuff laying around. I run my plex server on a skylake i5 inside an Antec full tower case from 2002. I do want to upgrade someday and downsize to an ITX server, but not until I replace my current gaming PC so... probably not for another couple years.
I kinda needed a case with tons of drive bays, so Phanteks Enthoo Pro, insert old X99 desktop, replace CPU with a Haswell-E Xeon. Add a 3x5.25"->4x3.5" cage, print more 2.5" drive trays -> 10x 3.5" + 4x 2.5". nVidia Quadro K2200 for transcoding, which currently takes up 16 lanes but there's a slot with 4 lanes free, so I could have 2 x16 slots free.
Okay, I did have to also buy a new PSU, because the old one didn't have enough SATA connectors without a bunch of adapters. In any case, I don't see myself replacing the system any time soon.
"Hot air goes wherever the fan blows it" - GN Steve
They recently came out with the jonsbo n2, and apparently its better, smaller, better airflow, etc...
Hey L1Techs can you make a show about network security for folks? Just the basic stuff like the kinds of things one would run into doing basic dev ops, exposing ports, running containers, connecting via ssh, these types of things. I feel like chatgpt has increased the number of folks that are able to get themselves in trouble :]
Instill into us a spidey-sense for personal home network security but we will call it the Wendel-sense.
I can recommend the Lawrence Systems YT channel. It's very educational but also practical.
I'm in the same boat. I have my Linux server set up and feel pretty good about SSH-ing into the machine from a laptop, getting the SSH key sent over, etc. Docker, Samba, and NFS are set up; still confused by SELinux, but it works well for the most part. I still haven't opened it to the outside world, as I need to brush up on my networking and firewalls and figure out nginx, etc.
@@rudysal1429 Yup! Even just hearing brief reference to the 'worst case scenario' with some of these technologies and software stacks will probably put most of us into a defensive space where we know what to google. L1Tech doesn't need to be prolific or even in-depth.
nginx is a good example! It touches a lot of software stacks. A lot of new folks don't 'know' TCP/IP or what YOU DEFINITELY DON'T WANT TO DO, but they do want to dockerize a Python web app anyway.
Lol same. I'm using one of my M.2 slots with a 10Gb SFP NIC with a PCIe riser cable, and the other with an LSI HBA to connect moar SATA drives, also with a PCIe riser cable. In the Jonsbo N1 with a Z690 ITX motherboard.
I am using the Jonsbo N1 with five (5) 4TB spinning-rust drives, one (1) 2.5" SSD, and one (1) Optane NVMe. I am using a Ryzen 5 5600G and 32GB of RAM, running TrueNAS Scale. It works pretty well.
What motherboard are you using?
MSI B550I (MPG). I am using an M.2-to-SATA adapter for a couple of the extra drives.
Oh, man! I think the Jonsbo N2 you mentioned at the end is exactly the case I've been waiting for!
I want to see a Jonsbo N2 build! Been considering this option to replace my R720XD filled with 4TB's with new crazy capacities.
I want to see this too! I like the form factor way more!
Same. I've heard more than a few times that the N1 can overheat and the N2 looks like the perfect solution to that problem.
very glad that when we talking about a server purpose built (especially a storage oriented one like this that uses TrueNAS) you're not going to skimp on the fundamental importance of using ECC RAM. Well done. Lots of info also for using your build as a reference for going with other components.
My experience has been that lack of redundancy in other systems is more likely to cause downtime or data loss. Putting everything in a single server with one power supply, one disk controller, one drive array, and one network interface connected to one switch is a ticking time bomb. A flaky ECC implementation further pares any protective pretense.
@@shanent5793 It'd be interesting to see how feasible achieving redundancy could be by adding clustered TrueNAS servers through the TrueCommand UI. Haven't dived into it yet.
@@tyyuuuihycyctct Cloud services can be okay, I just hate being in the situation where a connectivity failure requires reconfiguration, but the configuration requires connectivity 🤦
@@shanent5793 Backups are important.
Can't wait to see a video on Unraids ZFS implementation that should be releasing soon.
Level1Techs is on another level! Insane knowledge on all the nitty gritty too 😊
IIRC filebot has a sketchy license history in that the dev basically tried to delete an open source tool off the internet and then charge for it without actually making improvements. Enough people use it that you can find a docker container for the FOSS version anyway but that also means it's basically abandonware and not as stable or useful as it could be.
Had my eye on a jonsbo case for my PC a while back, awesome to see them featured here
It’s great you can create a small home lab server that is this compact and powerful today 🤯✊💪🥰🥳! Definitely something I would be interested in building myself.
It's going to get even more interesting when Zen 4 APUs drop! Especially if the rumored V-cache variants are real and offer the iGPU performance benefit that I think a lot of us would be hoping for.
@RyTrapp0 I am hoping they can catch up with transcoding so they can be used in a mini jellyfin home server. I have a 4650g but can't figure out how to test the apu as I also have a p2000 configured for jellyfin on docker and I think the apu is being disabled by the kernel as it shows unclaimed with lshw. I love and hate computers lol.
By the way, the N2 came out like two months ago; you are late to the party, but I'll still watch it. I bought the N1 a week before the N2 release. Silly me, they don't accept returns.
Built similar in Jonsbo N2. 3 mechanical, 4 sata ssd's, 3 nvme's. I liked the layout a little better on the N2.
I use a Fractal Design Define Nano S. With a few of the 120mm HDD brackets from my Define 7 XL, I was able to mount 4 x 3.5" HDDs inside this case, and 3 x 2.5" SSDs, plus it gets plenty of airflow. It does become a bit noisy though if standing on a hard floor, because the case feet are just too stiff. But that can be fixed by placing some thick rubber between the floor and the feet.
The Jonsbo N1 looks good on paper. Once you take a look at the insides or have had a chance to work with it however....it has a lot of potential issues.
at 22:46 "they can't all die at the same time, right?" I LOLed, seems like you interacted with Erlang in your time
Rocking a full size Thor V2 case, i9 10900 (65w tdp) / 64gb 3600mhz / Intel Arc A380 server here, kinda makes me wish I had a smaller case for this task but with 2x 230mm side and front intake fans, 1x top exhaust 230mm and a rear 140mm exhaust it's cool and quiet so as a server it works well.
Loved it. Thank you so much. Although I do have a lot of questions that I want to ask. But I'll research a bit more in the forum, If I can't find I'll shoot a comment.
You're not kidding about how one doesn't need this much HP for a file/media/transcode box. I've got an old C2Q 6600 with 8GB of DDR800 and a few old 2TB drives, Slackware 15 and UMS, and it works mint. Bonus nachos with this approach: all of the bits came out of the scrap pile for free lolol
I have built a NAS in this case. I have two comments: first, you can buy right-angled SATA adapters, and second you need to remove the front plate to enable proper airflow. If you do a follow-up, I'd like to see you show us how to mod that front plate.
With regards to the N2 case, there doesn't look to be much room for the CPU cooler.
Go ahead and log another vote for the N2 over the N1 (if you're into Jonsbo). Personally I just bought a U-NAS NSC-810A.
Wish you'd take notes of part names in the video description. The scrubbing might drive up retention time numbers, but I guarantee it's driving down your ad clickthrough rates and actual viewer attention.
I love home server builds! I got a 1st gen Threadripper setup at home. Considering a new build for off-site backup at my parents house. This is nice, but overkill for that job 😂
That would be interesting. I wonder if it would be possible to set something up that you can wake via Wake-on-LAN, back up with an rsync script, and then put to sleep when it's done.
@@rudysal1429 Yeah, you could do that with a Meraki router or switch. It can send a WOL packet and wake up connected devices. It’s cloud connected network equipment, so you can automate it through their API
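For the curious, the WOL part doesn't strictly need managed gear: a magic packet is just six 0xFF bytes followed by the target MAC sixteen times, sent as a UDP broadcast. A minimal Python sketch (the MAC below is a placeholder, and port 9 is simply the conventional choice):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC, not a real device
```

From there the rest of the idea is a cron job or script that calls `wake()`, waits for the box to answer, runs rsync, and suspends it over SSH; those details depend entirely on your setup.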
I love my N1, it's the perfect size for that enthusiast home server setup. The N2 looks fantastic but I might build a second N1 and stick it in a corner at my parent's place to give myself some purpose-built geo-redundancy.
I have an old Lian Li PC-Q08 with 5 HDDs in it and a 3200G, and it works pretty well. I use it as a backup server for my main server, and I run Rocky Linux on it vs Ubuntu on the other one; trying to expose myself to as much as I can. Very quiet; put a Noctua fan on the bottom, and I'm adding a 2.5GbE M.2 NIC but need an extender. Have an LSI card in the PCIe slot.
The Engenius On-prem Controller for the Fit series is AWESOME, the Hosted version STAY AWAY !!
Got me at bandsaw.
If that cooler has a hard time, you could use liquid metal on that copper without touching the aluminum. I did, and it works; it keeps an i3 at full load around 80-86C all day with the small cooler. Could help your 7900 breathe.
The compactness and functionality is insane
I had an N1 but sold it due to the dodgy backplane. It didn't feel robust enough, and I didn't like the SATA placement; it was also powered by a Molex connector. Molex should be banned! I hear the N2 has similar issues; let's hope the N3 fixes them. Big thumbs up for Filebot, it's awesome!!
Hard to believe that 5 drives can fit in that. It looks fancy; not made for the floor, made small enough for the desk. Nice case.
I love Filebot! I've been using it for years.
That was quite the band saw sound effect, bravo
I want a bandsaw more than I want any mini server.
These vids really made me love that Jonsbo case even more but cant find any in stock
The "Mad Scientist in His Laboratory" saga continues! I'm well pleased...🇺🇸 😎👍☕
I would have totally bought that Synology PCIE card except for the fact the I only have one x1 slot open, my x4 slot is being occupied by a LSI 9211-8i P20 HBA. (I got a bunch of 12TB enterprise drives for cheap: ~$120 per drive.)
What's the manufacturer of the drives & where'd you get them from if you please?
All hail the Nerd King.
This is most definitely a cautionary tale 👁👃👁... I thought it was a joke, he really used a band saw 😂
i love the cluster outros like that
You had me at 'bandsaw'.
I am planning on making an offsite backup with the new N3 version it looks so slick.
12:40 I have Optane both as boot and as special metadata drives, though finding enough PCIe lanes on a consumer board for proper redundancy can be a challenge. I wish it were easier to bifurcate down to x1, because I do not need the speed.
Wendell, can you take a look at the Jonsbo N3? Will it fit a deep mini-ITX board from ASRock? Thinking about building an SFF virtualization server with TrueNAS storage. Best of both worlds.
I saw "bandsaw", I came for blood...
I have looked at the Jonsbo N1 case, but could never find one for a reasonable price in Australia.
So I built in a secondhand tower and love it with TrueNAS.
Dang, looking healthier these days! 💪
Love the channel! Keep up the good work!
1:49 OMG! All those high-tech apparatuses in your dungeon! Now I want one. It's alive, ALIVE! Muhahahahahha! 😶🌫
THE TRANSITION INTO BANDSAWING LMAOOOO my ears
(also 0 comments glitch on this video 🦇👻)
About needing to run "enterprise-grade" SSDs... probably not. It used to be true that consumer-grade SSDs could wear out (before wear-levelling), and it might still be true of some old or very cheap SSDs, but more recent SSDs are over-provisioned enough. Several years ago I saw internal numbers from a vendor building high-end flash storage arrays. They found that the failure rates under heavy use for consumer-grade SSDs and enterprise SSDs were not significantly different. After that, they used consumer-grade SSDs in their builds.
If you are going to use an SSD as a cache on a heavily used server, then use a name brand, not the cheapest Chinese no-name on Amazon.
Also, you can divide the failure rate by the size of the SSD. Given how cheap SSDs have become, adding a larger SSD is easy, and the larger cache also improves performance.
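The back-of-the-envelope endurance math behind "probably not" is easy to sketch. The numbers below are purely illustrative assumptions (a 600 TBW rating and 100 GB/day of writes), not figures from the video:

```python
def ssd_wear_years(tbw_rating_tb: float, daily_write_gb: float) -> float:
    """Years until the drive's rated write endurance is consumed,
    assuming a constant daily write rate."""
    total_write_gb = tbw_rating_tb * 1000  # TBW rating expressed in GB
    return total_write_gb / daily_write_gb / 365

# A typical 1TB consumer drive rated for 600 TBW, hammered with 100 GB/day:
print(round(ssd_wear_years(600, 100), 1))  # ~16.4 years
```

Even at a fairly punishing home-server write load, the rated endurance outlasts the useful life of the hardware — which also shows why a bigger (higher-TBW) drive stretches this further.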
Your description of DDR5 ECC seems to be backward. The standard on-die ECC common to all DDR5 protects the data while it is "at rest" inside the DRAM (so far as dynamic memory can be said to be "at rest") with a 128+8 ECC. DDR5 also supports a CRC on data bursts, which would protect data while it is being transmitted between the CPU and DRAM packages. The CRC isn't mandatory, so it's safe to assume it's not going to be used.
Together this means that errors in the DRAM caused by cosmic ray bit flips or overly cramped timings can be caught and corrected before being sent back to the CPU. But if the data is corrupted on the bus (eg. by noise on the lines, defective buffers or transceivers), there is nothing the standard on-die ECC can do about it.
With server type ECC, the redundant parity bits (4 or 8 per 32-bit channel) are calculated in the CPU before being placed on the data bus. Thus the data and the parity bits are both protected by the (possible) CRC and the on-die ECC. When data is retrieved, the CPU memory controller recalculates the parity bits, corrects and logs recoverable errors, and halts the system in the event of an unrecoverable error, then forwards the data to the caches or registers (which universally have their own ECC).
In summary, standard DDR5 on-chip ECC protects the data while it is resident in DRAM. In addition, end-to-end ECC with x72 or x80 DIMMS (both registered and unbuffered, unbuffered in the case of Ryzen desktop) protects data the entire time it is outside the CPU.
Best explanation I've read, thank you.
If it's just for a VM host with some local storage, the P330/M920x and newer are good all-arounders: 2 M.2 slots, 1 PCIe 3.0 x8, optional SATA connection. But a storage server with enough M.2 and PCIe options, plus hot-swap bays, is really hard to find.
Can we get a build on the Jonsbo N3? A top-spec build with enough cooling.
The perfect home server is rack mountable.
Wendell, Wendell, on one Level,
How does your keyboard go
With 1.75u, 1.25u, and split space on the bottom row?
Seriously, what model has that bizarre layout??
22:40 maybe I'll see one in the homelab someday
Wendell, what about the Antec P101 Silent? Like 10 drive bays?!
This or the SAMA IM02? That's a good question. Or the NZXT H1?
Is that a 3x drive mount from an Antec 900 case?!
Amazon link is broken btw
Yeah, I'll still use my Define R4 with a full-size Z590/10850K server, with 8× 3TB Reds in RAID 5. This is my media/file server and my Plex server.
Have you considered using a smaller PSU — e.g. something like an HDPlex or a similar super-small power supply? That should give you more space for SATA SSDs, or cables, or M.2 hacks, or whatever.
Great Video! :D
I did a similar build, but I'm struggling a little with the M.2-to-SATA card: I can't boot from it. Could you boot from yours, and what exact card are you using?
Loved the intro! 😂
The instant a cat sees that case upright, it will move in for the full-body bump-rub. You know you still have some shred of sanity if you haven't begun to Dremel, band-saw, etc. your cases... sigh... been gone for a while now.
For me, the biggest problem is the Ethernet cables throughout the whole house. I really don't know how you guys do this lol. This home server looks amazing!
You run the cables through the walls. If you don't know how to do it, you can hire somebody to do so.
@@Scarsuna yeah, I know of this option, it's just that it costs a lot of money
How long before some motherboard manufacturer decides to mount M.2 sockets on their side and facing the "back" to effectively create small form factor M.2 (PCIe) slots to more directly support M.2 cheat codes and save space compared to full size PCIe slots, which are being dropped from some boards anyway?
I'm assuming the new BIOS F5b that came out on 2023/04/26 didn't fix the ECC problem?
The Jonsbo N2 definitely saves the day — so much better than the N1.
Is there any video on this E20M10-T1 card? I was under the assumption it would only work with Synology devices and only function as a cache for the NVMe drives. I can't seem to find info on the use case described in this video.
Hmm, what is that 6-SATA-Port M.2 adapter? It's shown and mentioned, but zero info.
What about a ZIL/SLOG device? Would sending it to the striped mirror be a viable option?
So no news of the ECC support of the Gigabyte B650-I?
Wendell: heatsink for the CPU. I think I found a better solution that will fit perfectly. I have this case and a Ryzen 9 7900 in mine. Try the Noctua NH-L9a-AM5.
Q: should a server built for he-men have an optical drive and a beer stein holder?
At around 1m50s, is that a coal fired furnace in the center of the frame, and then a gas furnace off to the left? What is that thing to the right?
#LevelOneMechs
The iron fireman automatic coal loader
I'd love to see your unholy hacks on the Jonsbo N2
Given that hardware, what shallow depth rack mount chassis would you recommend?
What SATA controller is used in this video? I see a 1× M.2 to 6× SATA controller, but you didn't give us any link.
Sounds like a table saw.
I just put together an AM5 build (with the ASRock B650E PG-ITX) and it POSTs with Supermicro Micron ECC UDIMMs and the latest UEFI. However, I can't load the amd64_edac module to get error reporting in Linux. It looks like nobody ever updated the module for Ryzen 7000 (family 19h model 61h). See drivers/edac/amd64_edac.c around line 3887.
Also, mine (with a 7700X) idles at around 45W with 2 3.5" HDDs and an RX 5500 XT. I'm curious why your build seems to have a higher idle draw.
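Until amd64_edac gains the Ryzen 7000 IDs, the symptom above is visible from userspace: the EDAC sysfs tree exists but registers no memory controller. A small sketch that polls it (the paths are the standard Linux EDAC sysfs layout; an empty result is exactly the "module won't load" situation described):

```python
from pathlib import Path

def edac_error_counts(root: str = "/sys/devices/system/edac/mc") -> dict:
    """Return {controller: (corrected, uncorrected)} from the EDAC sysfs tree.

    An empty dict means no EDAC driver registered a memory controller --
    which is what currently happens on Ryzen 7000 even with working ECC
    DIMMs, because amd64_edac lacks the family 19h model 61h IDs."""
    counts = {}
    base = Path(root)
    if not base.is_dir():
        return counts
    for mc in sorted(base.glob("mc*")):
        try:
            ce = int((mc / "ce_count").read_text())
            ue = int((mc / "ue_count").read_text())
        except (OSError, ValueError):
            continue  # controller directory without readable counters
        counts[mc.name] = (ce, ue)
    return counts

print(edac_error_counts())
```

On a platform with working EDAC you'd see something like `{'mc0': (0, 0)}`; on an affected AM5 board the dict comes back empty even though the DIMMs themselves are doing ECC.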
Been waiting for this for a few weeks, don't you go cautionary tale on me Wendell. I am about to dedicate a serious chunk to get a 110 tb server running with the N1 or.. N2? :O
Hot air rises... yes. But ANY amount of airflow from fans completely overwhelms convection.
The 5.25" bay is now a 4× 2.5" bay, arranged 2x2.
Woot!
Wendell, did you use the mobo's SATA ports for ZFS volumes? Or rather, would you use the M.2-to-SATA expansion for a ZFS volume?
Used both. More bandwidth via onboard SATA.
I went the opposite way, in almost every way, with my home file server. I made a Core X9 into an end table. It blends in with the regular furniture as an end table, with 8X 4TB HDDs and 8X 2TB SSDs with a Ryzen 1700X, running Windows Server 2019 and Windows Storage Spaces.
What is the model of the M.2 SATA controller you are using? I tried to find it in the cheat codes video, but unless I totally missed it, I couldn't find it there either.
Could you run the SATA and 2.5GbE M.2 cards off a PCIe-to-multi-M.2 card?
The N3 might be a better bet?
3:29 As opposed to the Millennial or Gen X/Y Connector 🤣
I run this with a low-profile RTX A2000, a 4650G with 2×16GB ECC, and a fan swap for a Noctua 140mm.
What board are you using, and what model of ECC, if I may ask?
@@hardcorehardware361 b550i aorus pro ax. Samsung M391A2G43BB2-CWE
@@vincentvega3093 Thank you I appreciate the reply.
18:35 I was looking at this and thought an NVIDIA T400 would fit just fine — am I missing something?