Sata isn't really dying... There aren't affordable NVMe drives above 4TB, but you can get affordable 20TB SATA hard drives. Maybe SATA SSDs are dying, but for "Bulk slow storage" it's still pretty alive.
Yep. I recently was browsing the internet comparing prices and wondering what drives I wanna use for a small home server. Something small around 20TB...
Even going with the cheapest SSDs, by the time I reached 20TB of SSD storage within a reasonable number of drives I could have gotten 60-70TB of spinning rust..
Uhm, no, thank you...
@@shapelessed Yep. I got a pair of refurbished 12TB HGST enterprise drives for 120 last year for my home NAS. Plenty fast enough for storing videos and documents.
There's little/no discount for SATA SSDs over NVMe SSDs on Newegg these days. SATA's gonna be HDD-only again, eventually.
NVMe is limited by PCIE lanes.
Each type of storage has pros and cons.
@@joshxwho Overcoming that limit is what the chipset is for. Albeit at reduced speed, but probably still beating SATA.
Jeff, I just have to say, when a new video of yours pops up in my subscription feed, I get an immediate sense of ease. I think, “oh yes, just good vibes and positivity, and I’m going to learn something interesting”. Thank you for all of your hard work. I wish you and your family a wonderful holiday season and wish you all the best for the new year. Keep it up!
This would be much more interesting as a board with 3.5" spacing, a PWM fan header, and a gen3 switch. That said, you are quickly getting to the point where an N100 NAS board is cheaper.
SATA is not dying. SATA is not dying. SATA is not dying. SATA is not dying. SATA is not dying. SATA is not dying. SATA is not dying. SATA is not dying. SATA is not dying.....
All of 3.5 hard working spinnies shall never retire
@@HksF16 SAS
@@Adam130694 Long live SAS.
If SAS took over from SATA at SATA price levels…
@@williamp6800 This, and we all know it's not happening, so long live SATA
I still stand with SATA.
Hard drives will give it a long life yet!
Yeah, for bulk storage you still can't beat the price of HDDs yet. But M.2 and SATA-based SSDs are close enough in price now that you can lump the two together. How long until you get an M.2-based one of these?
@@LockonKubi Once the Pi actually gets some PCIe lanes. That's the main limiting factor for more NVMe. That, and the CPU grunt to shift all that data.
@@Cynyr The CM3588 NAS kit devices break down to four M.2 sockets at PCIe 3.0 x1, not exactly wonderful, but if cheap enough...
@@LockonKubi - It depends on capacity? Anything under 1 terabyte, SSDs are actually cheaper because of base component costs, but above 2 terabytes, HDDs are often less than half the cost per terabyte, sometimes as low as 1/3rd or even 1/4th the cost.
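Rough per-TB math, in Python, with placeholder prices (the dollar figures below are just assumed examples, not quotes):

# Hypothetical street prices in USD -- placeholders, adjust to what you actually see.
drives = {
    "1TB SATA SSD": (1, 60),
    "4TB SATA SSD": (4, 200),
    "4TB HDD": (4, 80),
    "20TB HDD": (20, 280),
}
for name, (tb, price) in drives.items():
    print(f"{name}: ${price / tb:.0f}/TB")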
We need Level 3 Jeff to show us photoshopping the thumbnail
Jeffception
I just crop and resize in Paint.
Love to see it. I just put together a NAS with a Pi 5 and a Radxa Penta SATA hat thanks to your videos and guides. I already had the psu and some drives lying around so it was the perfect inexpensive solution. Great content!
I use a flatbed scanner to get a clear image of a circuit board. You get a high-resolution image that's properly focused, if the components aren't too tall. Then you can reduce the image size & resolution to the desired level. I scanned a lot of components for my old website. I still have the backup files, but my host went away.
That is insanely clever. lol I might have to start doing this
@@ChaoticAssembly I started doing it back when a new scanner was $30, and digital cameras were over $1000 and low resolution. I've even scanned clear plastic knobs with numbering. I lay a clean white tee shirt over the item, then add backlighting with an old fluorescent ring light. The scanner came with PaperPort, which allows you to straighten an image. If you do this before resizing, there is no sign that it wasn't straight. I will turn an item about 56 degrees to minimize any distortion from rotating the image.
I find SATA still to be super handy for hobby projects and people with a smaller budget, since at least on the consumer side of things, M.2 slots are rare on most pieces of hardware, and where they exist, prices go up quickly.
Actually, now I remember how I wanted to make my offsite backup station that's sitting safe at the company; eventually I went with an old Pi 3 and two spare SATA SSDs I still had, which I have connected via USB. Works for me, but this here is definitely cooler.
I'd say SATA's just getting started, you can get excellent 6-port controller cards that go in an M.2 slot. You can get hilarious amounts of SATA like that nowadays, with very cheap motherboards.
True, but for these high-capacity uses SAS is probably the better option. Depending on your exact needs and motherboard I can see how it'd be possible for this to win out over SAS, but generally speaking a single SAS card can give you 12-24 SAS lanes, each capable of 12 Gbps. Of course you're not going to saturate that without using splitters, but SAS is also full duplex, meaning you can be reading and writing at the same time.
There just seems like way too small a niche between the high capacity of SAS cards and the high speed of using an NVMe, tbh.
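Quick aggregate-bandwidth sketch in Python (the 24-lane 12 Gbps HBA and the 6 SATA ports are assumed examples; raw line rates only, ignoring encoding overhead and PCIe limits):

sas_lanes, sas_gbps = 24, 12     # e.g. a 24-lane SAS-3 HBA (assumed)
sata_ports, sata_gbps = 6, 6     # typical motherboard SATA III count (assumed)
print("SAS aggregate, one direction:", sas_lanes * sas_gbps, "Gb/s")
print("SATA aggregate:", sata_ports * sata_gbps, "Gb/s")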
That little thumbnail making tip at the end was handy for all the YouTube creator newbies out there!
Gotta get things in focus!
A lot of recent 3.5" HDDs are thinner than spec, I think those should work.
Also, SATA isn't dead as long as HDDs aren't. HDDs are still a better $/TB ratio than SSDs, so they'll be around for at least a bit longer.
People forget the 2.5" "laptop" HDDs.
Not as fast, but still available in large sizes, and the 5400 RPM ones are good enough for storage.
WD had a magic Blue drive that was used in POS systems and somehow almost never failed. Back when 160GB and 320GB was acceptable.
(where it should have been a Red drive for 24x7)
I like SATA because power comes from the PSU, so if the mobo fails it's less likely to affect your storage. Also, M-Disc.
I love how Level 2 Jeff allows himself to be 'messy' and 'janky'; now I dont feel so bad when I do my experiments, thanks mate
Not spacing them for 3.5 is such an own goal
I'm still holding out hope someone designs a drop-in board with 3.5" spacing and either adapters to mount it in old NAS enclosures, or uses some standard that many of the NAS vendors use for the spacing and mounting...
Would be cool to buy an old NAS for like $30 on eBay, or get one free from e-waste, and pop in a Pi and Taco, and have the full enclosure working out of the box!
@@Level2Jeff can you check if it holds slim 3.5" drives? You may not get a lot of capacity, but it would be a cool project if you already have them!
@@Level2Jeff That's why I'm still sitting on 2 ancient but beautiful LaCie 5big Network 2 boxes .... so I can MAYBE fit a RasPi in there one day...
Agreed, if it was spaced for 3.5in I would have a couple. I need one for home, and one for my mom's
With how large 3.5” drives are, I honestly have a hard time visualising how you’d even safely slot them into the board. I guess some sort of horizontal mounting with drive cages holding up the drives. It feels a bit safer just requiring some number of cables between the 3.5” drives and the board, like it essentially needs right now
Wow, nice shot with the photoshopping of alternate focus on the board and your face. Really works out nice 🙂
even with headphones the heating system is barely noticeable. no need to apologize for it.
3:10 totally could use SATA extension cables (yes they have data+power versions)
Without a case this setup is too delicate for me. I think I saw a video where someone 3D printed an aluminum case for this, but it didn't really clamp down on the drives at all.
Are you sure they meant compatibility with a Raspberry Pi CM5, not a Radxa CM5?
I'm not, because from a short search I could not find compatibility info on either of those...
Hardware wise I really want this, it seems so much better than the Penta SATA HAT, but the hardware compatibility questions... not even getting to software support... oof.
Yeah, their own shop page mentions Radxa CM3 and RPI CM4.
They have been predicting the death of SATA for almost 20 years now. The last time I spoke with HDD vendors, they suggested that SATA will continue to be around likely for at least the next decade. Yes, for home computers M.2 or similar interconnects will rule the roost, but for enterprise or datacenters SATA still holds a lot of control as it is both cheap and can be plugged into an external backplane. And unless HDD vendors want to do NVMe for rotating media, I just can't see them going away. Not to mention that NVMe adds cost to the drives, which is the primary reason that SCSI/SAS never booted ATA/SATA out of the market.
Since when is SATA big in the datacenter? I mostly see SAS there, and for large spinning drives I don't see much of a cost premium, at least on the drives I buy, which are mostly SAS with a few of the same models in SATA for a few oddball setups that didn't have SAS support.
@@glennmcgurrin8397 Since the biggest consumers of drives in the industry are the big 3 - MSFT, Google and Amazon - and the vast majority of their spinning data is actually on SATA, has been, and will likely continue to be. When you are buying petabytes, a penny or so per GB matters. Also, they tend to buy near the highest capacities available for whatever time frame, as they are looking for density. This has been true for as long as I can remember, and again, the last time I saw a roadmap from both Seagate and WD this was stated as being true for the next decade. While I personally prefer SAS simply because of the increased error correction and error reporting that SAS has, if you were to do some serious research about PB sold, I think you will find that SATA is still king.
Uh, who exactly said SATA is dying? Unless motherboards started including onboard SAS controllers and I just wasn't informed, SATA ain't going nowhere.
Ooh now that's an idea! Get SAS everywhere!
@@Level2Jeff I'd love SAS connectors on the board with SATA breakout cables instead of running a bunch of individual SATA cables to them, you know, without the need for a card.
I think the point isn't "sata is going away move to sas" it's "sata is going away to focus on NVMe"
@@charlesturner897 that's really not sustainable though. Putting aside overall increased costs, that essentially means dedicating entire PCIe lanes to single storage devices, which is a super wasteful use of resources. For main system drives or game drives or something it *_can_* be worth it, but the only way all drives on the system will be NVMe (& not be massively wasteful) is if we start getting a *_lot_* more PCIe lanes on consumer hardware.
Granted, PCIe switches can seemingly mitigate this problem, but then you're invisibly linking your storage's performance with your GPU's performance and any other PCIe devices that are on the same root lanes. (Which, unfortunately, since most motherboard manufacturers don't even give you block diagrams to track the lanes, you can't really even plan for)
If the idea that we're moving past SATA speeds is the issue, then SAS is really the only way to go. You'll still be using a PCIe lane, sure, but that ONE lane (or set of lanes) will be in charge of storage entirely, meaning you'll have a nice, clean, theoretical maximum I/O rather than having inconsistent I/O performance based on how loaded your GPU is or vice versa.
Not to mention that SAS allows for higher capacity in general and supports HDDs as well.
I definitely do think we'll be seeing a push to reduce SATA because "well just use NVMe", but I have a really hard time ever seeing SATA actually 'die' until built-in SAS controllers are a thing. It's just not a storage solution that really scales all that well. (And given how big games, movies, shows, etc. are getting, with increasing resolutions becoming increasingly common, "scaling" is a bar the average consumer can hit very easily now.)
TBH it's always been weird to me that we haven't already dropped SATA for SAS. I guess it's a market inertia thing, but it really seems like more companies would want to advertise they're "THE ONLY GAMING MOTHERBOARD WITH FULL HIGH SPEED SAS BUILTIN" and then other companies follow suit.
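Rough lane-budget sketch in Python for a current consumer desktop (the 24-lane split below is my assumption; check your own board's block diagram if the vendor publishes one):

# Assumed AM5-style budget: ~24 usable CPU lanes, everything else via the chipset.
cpu_lanes = 24
gpu = 16            # x16 graphics slot
primary_m2 = 4      # CPU-attached M.2
extra = 4           # often reserved for USB4 or a second M.2
print("CPU lanes left over for more NVMe:", cpu_lanes - gpu - primary_m2 - extra)
# Anything beyond that has to share the single chipset uplink.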
My mobo might have 2 NVMe slots, but it has an additional 4 SATA ports. Why would I not want SATA drives available in this reality?
Glad to see this product. :)
I think refurb enterprise SATA SSDs are still an excellent deal and better than new consumer equivalents for people who don't need the performance of NVMe or the bulk storage of HDDs. I'm using them in my Proxmox Backup Server; I have a pair of 1.92 TB enterprise SSDs in a ZFS mirror that I paid $40 total for ($20 each), and each of them has a total write endurance somewhere in the 1+ PB range. :)
They're not the fastest thing in the world, but the PBS server only has 2x2.5 GbE, so they're more than enough.
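For anyone wondering about wearing those out, a quick endurance sketch in Python (the 1 PB rating and the daily ingest figure are assumptions for illustration):

endurance_tb = 1000        # ~1 PB rated writes per drive (assumed)
daily_backup_gb = 100      # hypothetical daily PBS ingest per drive
days = endurance_tb * 1000 / daily_backup_gb
print(f"~{days / 365:.0f} years of writes at {daily_backup_gb} GB/day")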
Where the heck did you buy those SSDs from for 20 bucks each??
About five years back I did something similar for booting my TrueNAS server. I boot off of a mirror pair of 40GB Intel enterprise drives. (I’ve got 14 SATA ports so I can spare a couple.)
At the time I think I got ten for ~$50 USD.
It’s getting about time to go to enterprise SATA for my data disks.
thanks for the cool video jeff. i always pronounced "sata" like "sat-uh." then i found a great 17+ year old post on tomshardware (from its good old days) that converted me to pronounce it like you ("sayta"). it referenced this famous ST:TNG quote:
Dr. Katherine Pulaski: "Dayta, Dahta, what's the difference?"
Data: "One is my name, the other is not."
Yeah I agree with most comments. Bulk storage is alive and well.
This interface is hot.
"I do my best!"
SATA still retains a nice advantage in that it is hot-swap compatible using affordable parts. I agree it is a very niche application but I find that very useful.
@Level2Jeff: I am curious - were the issues with the Taco and the CM5 ever sorted out? Is this a software issue, or does RADXA need to create a new revision of the board?
I really want to see an open source NAS for 3.5" drives that's designed with upgradability and longevity in mind. I bought an older generation Synology years ago that ended up becoming e-waste less than a couple years later because it stopped getting updates and had some security problems. Especially for background backup and light-duty use cases the CM4 has more than enough juice to do the job.
Aw, maaaan, I was wondering whether the Taco was plug and play with the CM5... I've been using my Taco-NAS for a year now and it runs pretty well. I would've appreciated a little speed bump though. Thanks for testing, Jeff!
I was hoping you were going to test out the Radxa CM5 when you mentioned it earlier in the video.
After getting my Latte Panda Delta and a couple Orange Pi 5's (and OPi5+'s), I find they have significantly less compatibility which really sucks b/c they're great platforms in terms of hardware
I think cameras have an "infinity focus" setting, where everything is sharp regardless of the distance. Figuring that out should help with the thumbnails a lot.
Aww, sad to see them go! I recently purchased another MX500 (I got 4 of them), this time a whopping 4TB one. I love these drives very much for computers unable to use M.2 SSDs
BTW, PWM does not require 3 pins. The power pin is pulsed to control the rpm. The 3rd pin is the signal from the fan back to the controller to report speed.
True; though most of the better fans (especially quieter ones) seem to like 4-pin PWM instead of just chopping the voltage.
"PWM fans" need 4 pin. It means the fan itself has the control logic to change its own speed depending on a PWM signal, while the power pin is non-pwm 12v.
"DC fans" use 3 pin. The host must use pwm or voltage on the power pin to control their speed, but it is more finicky and might wear out the fan or go too low for spin up at low speeds. This is why the PWM fans exist, their onboard logic handles the fan-specific settings.
It would be great if we had something like that but wider and with a higher-power rail - a quick 4/5-bay NAS 😂
SATA M.2 is lower power than NVMe M.2. For use cases where watts matter more than megabytes per second (small RPi home servers?) SATA is still king
I have yet to see anything but specialized drives that have anything but SAS or SATA for spinning rust. So long as SSDs are as pricey per gigabyte as they are compared with spinning rust, SATA isn't going anywhere.
I can remember back when you first showed the TACO; I even messaged Radxa and asked if the board is compatible with 3.5" drives. Sadly it wasn't, because if it were, I'd have bought it immediately and used it as a cheap and power-efficient backup NAS solution...
There's a lot of Netgear ReadyNAS enclosures out there that are now aging doorstops. Netgear have ended the product line entirely, and these devices no longer get any updates. Most of them were badly underpowered anyway, which is a damn shame. Would totally love to see a CM-based board replacement for the internals that re-uses the case and, if possible, some of the backplanes too.
My last SATA drive kept giving errors, so I ended up replacing it with a second NVMe. Then I took a screwdriver and broke open the casing to have a look inside, just out of curiosity. Very interesting! And when I went back to close up my PC again I saw that the SATA cable wasn't properly pushed down :) So... don't be me. RIP Samsung EVO 2 TB, you did not deserve your demise.
I remember jumping from 500 MB a second all the way to 5000. Going from SATA to Nvme is crazy.
I don't foresee a death for SATA any time soon, honestly. This seems like a pretty hot take.
I meant more for SATA SSDs. HDDs will persist for a while. the SSDs will probably dwindle to only a few cheaper models as NVMe and a variety of formats (including U.2 which has come down in price, quite a bit) make their way into everything.
Random question, if you're up for answering it (perhaps with a future video): How many wired ethernet ports can a CM4 (or CM5) support, either in theory, or in practice with existing boards to plug the module into? I'd like to have a multi-lan router, and thinking building something from something Pi-based might make sense... but I don't really want to use USB adapters, which seems to be the current option that I know of to get beyond 2 NICs. Any tips/thoughts/relevant knowledge you could share?
I still use SATA SSDs because it is still vastly easier to find motherboards with > 4 SATA ports than it is to try and find a motherboard that has >4 NVMe slots (as the form factor, when placed horizontally, takes up a huge motherboard footprint that can be used for other stuff instead).
I think that Asus might be one of the few motherboard manufacturers that included the Asus Hyper M.2 carrier board with certain models of their motherboards, but then that can take up a PCIe slot, which, a) if your CPU doesn't have enough PCIe lanes as it is, it can become a problem (I think that the 9th and 10th generation Intel CPUs were the last generation to have > 40 PCIe lanes from the CPU, WITHOUT it needing to be a HEDT CPU), and b) if you're running a multi-GPU system (e.g. for AI workloads), those GPUs often will block some of the PCIe slots, which means that you WON'T be able to install said NVMe carrier boards anyways.
Compare and contrast that with SATA headers -- you can find 4, 6, or even 8 SATA headers, with very little issue.
One of my Proxmox test systems, which is running dual 3090s, is also running four Samsung 850 EVO 1 TB SATA SSDs in raidz via a TrueNAS Scale VM. (Testing how much space ZFS snapshots take.)
It's a LOT harder to do the same with NVMe SSDs. (I'd have to move up to a Threadripper system to be able to have enough PCIe lanes.)
I would LOVE a Pi NAS like that!
I'd have to research the state of ZFS on Linux on ARM though.
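Quick raidz1 capacity sketch in Python for that 4-drive pool (textbook parity overhead only, ignoring ZFS metadata and padding):

# 4 x 1 TB SATA SSDs in raidz1: roughly one disk's worth of parity.
disks, size_tb = 4, 1.0
usable_tb = (disks - 1) * size_tb
print(f"~{usable_tb:.1f} TB usable before ZFS overhead and snapshots")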
Can confirm that your process at the end made an a-okay thumbnail! 👍
Just don't look too closely at the arm haha
Still love SATA, starting an Ultimate PiNAS build.
Can always get/make some short SATA cables and still use this with real drives (real, in the sense that you get 30TB instead of 5TB). 5 slots would be enough for a pretty nice NAS capacity.
Just a confirmation, does this thing support the 5TB 2.5" drives too? They are 15mm "thick" so they are a non-standard size
It seems those would fit
I would love to take an old Buffalo TerraStation I have and re-fit it w/ a Pi CM5.
Been thinking about just mounting a full sized Pi inside it b/c of how much room is available.
If the Taco can provide enough power to run 3.5" hard drives, then perhaps SATA power and data combo extension cables would allow all 5 SATA ports to be used for 3.5" hard drives.
@Level2Jeff -- Will we see some SAS based fun with one of these and a SFF disk drawer at some point?
I have a few disk drawers I use with an LSI SAS controller (HBA), reducing power overhead and allowing me to shut down arrays when needed. I know you have touched on hardware RAID on the CM4 in 2021. But questions arise with the CM5 and something like a disk drawer to reduce some of the headaches.
I would love to see this explored at some point. (CM5, HBA, SATA/SAS HDD+SSD, NVME ARC drive).
It's mostly silly, but as a concept it would show the possibilities.
I sincerely don't know why we don't have a SATA 4 or something similar for HDDs. These new 20TB drives, even with all of those platters, still can't saturate SATA III?
That's precisely why. Even the best HDDs are struggling to get close to 300MB/s, and once you move to SSDs, NVMe is just so much more efficient for the high end. The SATA bottleneck actually helps SSD consistency in my experience; I actually moved back to a SATA SSD for consistent 500MB/s performance, where cheap NVMe drives immediately saturate the SLC cache, performing more like a cheap SD card. So the OS is more responsive from SATA.
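A toy model in Python of why that happens on big writes (the cache size and the post-cache speed are made-up example numbers):

slc_cache_gb = 50        # assumed dynamic SLC cache size
in_cache_mb_s = 3000     # burst speed while the cache has room (assumed)
post_cache_mb_s = 150    # direct-to-QLC speed once the cache is full (assumed)
sata_mb_s = 500          # steady SATA SSD

transfer_gb = 200
nvme_s = (slc_cache_gb * 1000 / in_cache_mb_s
          + (transfer_gb - slc_cache_gb) * 1000 / post_cache_mb_s)
sata_s = transfer_gb * 1000 / sata_mb_s
print(f"{transfer_gb} GB write: NVMe ~{nvme_s:.0f} s, SATA ~{sata_s:.0f} s")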
I hope SATA is around for a while I need it for archives
This is a very small suggestion, but on the side close-up cam the aperture seems to be a little too low, so the board is out of focus while your hand is sharp.
Yeah, still working on the ideal fix for that; even at f/8 or f/11 the focus can be a problem. I wish Sony had "PCB tracking" - it seems to prefer faces, hands, etc. instead of computers!
@@Level2Jeff that was with such a low aperture?? You must be blinded by your studio lights :D
Idk if this would work but maybe one can abuse the face registration feature (at least the sony a7iii has it not sure about others) to make it focus on a pcb 🤔
A sad death for the MX500. It got enshittified with Crucial swapping controllers, flash and dram cache.
Damn.
Since when
I'd love to run TrueNAS on something like this for offsite backups and it would be epic if there was a CM board with an IT mode HBA that could take 3, 6 or 12 3.5" drives! I guess OMV works until TrueNAS decides to figure out ARM!
I hope someone forks TrueNAS and creates an ARM build.
Or maybe OMV implements ZFS in the GUI.
@@fujinshu I'm not sure what dependencies are currently incompatible, but it would be interesting to try building it for ARM. The underlying system, which is Debian, has full support, so the issue lies somewhere in the middleware I think; porting it might be pretty easy for a software wizard with just a few patches!
Would be great to see a SAS board. Used drives go for reasonable prices.
Some of Seagate's newer hard drive revisions seem to be slightly thinner, I wonder if they'll fit.
I should try to pick up a few slim 3.5" drives!
wait, MX500 is being RETIRED?! damnit. I'm gonna have to find a new SATA SSD model to put in my infinite build ideas lol.
(only half joking on the last part)
Yeah, make it wide enough for mechanicals for sure. Thanks
SATA ForEver!! Sata will never die.
CM5 ITX board please? With GPU support?
it occurs to me that a 12" x 12" mirror "tile" from Menards or Lowes could give a view of the reverse side of a two-sided board like this one.
Like was used in TV repair since the '50s?
1:52 I *hate* those connectors.
I also hate the overuse of blue LEDs. :P
Can you use extension cables for the 3.5“ drive?😀
Some people have tested them, I'm not sure how many you can power safely off here, but at least a few.
Strange that they have an ASMedia switch but use JMicron for SATA. Not dying when 30 and 32TB drives just released. I would have liked to see a new SATA standard, though, one that included higher bandwidth and different cable ends. The current ones are so archaic compared to something like USB-C.
There is no need for a new SATA standard when the main users of the SATA standard are mechanical drives. SSDs have moved on to NVMe
Some HDs are 20mm rather than the 26.2mm of most
Might fit!
I don't think I would try to stand a 3.5" HDD vertically on its connector. The weight of most HDDs is something like 1.5lbs if I recall correctly, and the likelihood of breaking the connector on the drive or the board is very high!!!!
Would those ~300MBps speeds be across all drives, or is it for a single drive only? Would a ZFS disk group be faster, like traditional RAID?
Do you use any ham repeaters on the Illinois side? Either way, what is the RX frequency of your favorite repeater.
For the stuff I do on my server, SATA SSDs and HDDs are plenty fast for Plex. I'd rather have 16 4TB SATA drives than 4 4TB NVMe on one HBA card. I need capacity, not blazing fast speed.
wait what?, SATA is a dying tech?, do we have 16tb ssd/nvme's already? and can any of you get me a 8tb ssd for less than 200 bucks? lol, i know, i know, SATA is dead as a "main stream" technology, great video as always!!!
These NVMe drives nowadays have a terrible PCIe-lanes-to-stored-GB ratio. We oftentimes don't need the speed NVMe provides, just a big amount of fast-enough SATA SSD storage.
I think it's great that it can use the 4 lanes when it needs to, but you don't have to use those lanes.
Just look at the read/write speeds and see if cutting the lanes down makes the interface speed equal to the actual drive speed.
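That check is easy to eyeball in Python (per-lane figures are the usual published ones; the drive speed is whatever your benchmark reports, assumed here):

# Approx usable bandwidth per PCIe lane in MB/s (after encoding overhead).
PER_LANE = {2: 500, 3: 985, 4: 1970}

drive_mb_s = 550   # whatever your benchmark reports for one SATA SSD (assumed)
for lanes in (4, 2, 1):
    link = PER_LANE[3] * lanes
    verdict = "drive-limited" if link >= drive_mb_s else "link-limited"
    print(f"Gen3 x{lanes}: link {link} MB/s -> {verdict}")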
hey! Radxa CM5 mentioned ;)
I just ordered one for my first big project. What do you think of it? :)
It's fast, but also can be frustrating, getting it to boot the way you want (or in some cases, with the OS you want).
@@Level2Jeff Thanks! I hope I learn a lot.
Good thumbnail
Ha!
If M.2 SSDs run faster, you're defeating the purpose of a RAID controller. It would have been nice to be able to hook up 4 20TB hard drives to have affordable storage for a gaming server.
I wonder if it can boot from the sata drive
Until NVMe has affordable muxing and cabling, SATA will be the go-to. Even at 1 lane per drive, an NVMe NAS is just too expensive to build on anything non-x86.
Anyone know what the exact PCIe specs of the M.2 M-key is? There are not much details in the specs section on the Radxa site.
SATA is far superior to NVMe right now. How many SATA drives can you use on a current motherboard? How many NVMe? Until CPUs have more lanes, which doesn't appear to be anytime soon, SATA is KING!
Sata will stick around for use in NAS
Not my beloved MX500 😭
Of all the Level 2 Jeff, this has been the most Level 2 lol
2.5" drives?
If they just made an x86 system with 4-6 SATA ports and a nifty little case = instant winner... An N100 or similar chip would be nice
Only when mechanical tech has reached its limits will there be a true push for bulk NAND storage. This is not going to happen for a decade at least. SATA is perfectly fine for bulk storage, even more so given that it runs cooler. I wish there were affordable 8TB+ NAND drives in SATA.
confirmed. nice thumbnail
Yeah! 😮😮
Using the Raspberry Pi 5 as a NAS seems a great idea but one question I have is would hardware RAID offer better performance than software RAID? I’d have thought so…
It may, in some cases, but the difference is very small these days, with the CPU in a Pi 5. Hardware RAID can have a few small benefits in some niche use cases, but check out Level1Techs' video on hardware RAID being dead for some interesting insights!
hardware raid stopped providing "better performance than software RAID" a long time ago. Mainly because "hardware raid" is just "software raid running on a small dedicated CPU on a card". Nowadays it's mostly for better convenience or reliability than Windows own software raid, or to boot Windows from RAID.
In my very simple understanding of RAID, I thought that the PCIe on the Pi 5 would be a bottleneck ultimately as it does its magic writing data across the drives. I assumed that slower writing of this data across drives (amongst other checks) would also increase load on the CPU whereas with hardware RAID you just send the data to it and as far as the Pi 5 was concerned, job done.
Obviously I’m wrong :D
Thanks for the replies, appreciate the time taken to answer my question :)
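For a rough sense of where a single PCIe lane starts to matter on a Pi 5 style setup, a sketch in Python (per-lane figures are the standard ones; the drive count and speeds are assumed):

# One PCIe lane feeds the whole SATA controller on this kind of setup.
lane_mb_s = {"Gen2 x1": 500, "Gen3 x1": 985}
drives, per_drive_mb_s = 5, 200   # e.g. five HDDs at ~200 MB/s each (assumed)

for name, link in lane_mb_s.items():
    want = drives * per_drive_mb_s
    verdict = "bottlenecked" if want > link else "fine"
    print(f"{name}: drives can supply ~{want} MB/s, link carries ~{link} MB/s -> {verdict}")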
Waiting for a video on how to make i-ram from old hard drives
Waiting on your thoughts about that new Nvidia release.
I swear I got a dodgy Pi 4 yesterday.
I've never had to edit the config. Some BS with no signal.
I'll consider it if I can get a compute module with ECC RAM.
_Technically_ the CM5 has ECC, it's just LPDDR4x on-chip ECC, so not the same as a server ECC DRAM stick.
@@Level2Jeff Thanks, I know that. I've been commenting the same thing on your videos for the last 5 years lol. I mean the real ECC that also protects the link between memory and CPU and reports detected errors. Real ECC is not server only. Someone could build an CM5 compatible module with ECC memory. I am waiting/looking for that. Especially with anything that looks like a NAS.
@@kwinzman Not possible and has never been possible and is very likely not going to be possible in the future. To use the real ECC ram in ECC mode you need ECC ram support in the memory controller and AFAIK the raspberry SoC never had support for that and they never mention adding it.
Afaik it's not a feature you find on "mediacenter/mobile" kind of SoCs like Raspberry and similar but more on the parts for NAS and network appliances, so in general the Marvell Armada line and NXP's network appliance SoC lines do support ECC ram but I've rarely if ever seen a device that actually bothers to implement it.
For these embedded devices the best you can hope for is on-die ECC, either added as a special treat for DDR4 or because the spec requires it with DDR5
@@marcogenovesi8570 Are you aware that there are multiple pin-compatible boards that are compatible with the CM4/CM5 but use a completely different SoC? I would like to post the links but I doubt the YouTube spam filter would like that. It's 100% possible to include real ECC if somebody bothers to build it. With those NXP, Marvell SoCs or others.
@@kwinzman Yes I am. The issue is price, because these SoCs are not made for mediacenter/mobile consumer market.
For example a somewhat old Armada A388 SoM from SolidRun (a SoM is System on Module, an industry term for what the CM4/5 modules are) has optional 2GB DDR3 ECC but the base model with 2GB is already 100 bucks and it's just a dualcore Armv7 1.6Ghz.
With the prices I've seen quoted for stuff that is actually recent and comparable to a raspberry 5, you are better off with a mini itx AMD motherboard and ECC ram
For anyone else in the midwest, "taco" rhymes with "clock," not "hat" heh
At work I probably purchased 50 MX500s.
The 3.5" has too much weight/leverage.. (laptop/2.5" 4500rpm hdds are also a good option instead)
For the ssds it can still be fine but i would prefer to strut them together and add some screws..
But the price of cobbling together these things ends up close to or above standard micro itx / older components "good enough". If theres a complete cheaper set, OK..
What is tako?
Oh yeah killing SATA is going to be great when consumer platforms only have enough PCIE to run 2-3 drives if you have a graphics card. Can't wait for that. I'm already using every lane my 7800x3d/B650 has to offer.
SATA dying? What's interfacing with HDDs?