IcyDock has all kinds of great stuff but I really wish they would do lower end things. The data hoarder community would love some enclosures and adapters that are on the cheap. Plastic, mostly passive. Maybe a high density M.2 enclosure with a PCIe bridge / multiplexer built in. Basically something like what crypto miners did but for storage.
They do, with the "flexiDOCK" product line. M.2 plus a PCIe bridge/switch is expensive, so forget that; crypto mining doesn't need PCIe speed, so they're fine with cheap 1.0 bridges in many cases.
@@marcogenovesi8570 flexiDOCK doesn't fit the purpose described. They still have active backplanes (which in my experience break too often or add additional instability) and don't come in the forms I think people would want. Older PCIe bridges/switches are not expensive. I bought the Ceacent ANU28PE16 NVMe SSD riser for $90. An IcyDock ToughArmor SATA enclosure can cost $20-30 more than that, easy.
And data hoarders don't need speed either. The bridge chip used on my ANU28PE16 can push more than 2GB/s on some Intel P4510s I have. That is WAY more than I need. The ports / PCIe lanes could be split further to give the ability to plug in more devices. We want density and ease of setup. I don't need activity lights. I don't need sleds. I don't need metal. I just need easy connection, power, and density.
Given the decreasing prices in enterprise NVMe and SATA drives, I would be more than happy to have an external enclosure that took U.2 drives or SATA 2.5" drives and connected back to the main machine via a janky USB3 cable, a few SFF-8088s, or OCuLink. Whatever is easiest and cost effective.
I've talked with IcyDock's design team and a 3rd party engineer about this. It probably wouldn't be difficult to throw together. Maybe even a community project. Though I understand that the bridge chips do need setup/programming which could complicate that.
Old Xeons give you 40 lanes of PCIe gen3, bifurcatable any way you want. Active switches/bridges have a lot of functionality, but bifurcating (via switch) when the BIOS doesn't support it natively is one of the LEAST interesting (but most used) features of switches. They can also have way more bandwidth between the downstream devices than those devices have, combined, back to the CPU, and so they form their own compute hub, since the devices can DMA into each other much faster than the CPU can DMA into them.
The tangent from 8:00 to 9:21 speaks to me. I'm a bit of a data hoarder, closer to a digital historian(?) and I just like to keep track of certain things of interest and their progression over time. A weird niche, I guess? Either way, your idea of a digital knowledge base is what I've been wanting to do for a while. Slowly working towards that.
I have a bunch of Icy Dock equipment, even added a secondary "CD Copier" style enclosure for 3 additional 5.25 bays hooked up to an external SAS card just to get more drive bays. Love their products.
The issue is price though. Who is their target customer?
The entities that can afford to buy this stuff would NEVER choose it over a marginally more expensive black-box with a 5-year support contract.
The kind of people that WANT this stuff, are also the ones that will spend $300-$400 on a 4U chassis for a HomeLab.
Those same people don't spend another $450 to get a few bays of front-access NVMe.
If they did, they could have bought a barebones chassis from any of the DOZENS of suppliers that already make this stuff for ~$1000.
IcyDock stuff makes no sense. And it never really has.
Look back at their 5-in-3 SATA/SAS stuff. Those are like...$150-$250...each.
That is insane. You can buy a whole disk shelf for the cost of 2-3 of those.
IcyDock's prices need to be like 1/5 of what they are currently to be "worth it", and that just isn't even remotely feasible.
A stamped Aluminum sheet with a $3 BOM PCB cannot cost 2/3 of the price of the drive you intend to put in it.
People are not putting $5k drives into these.
They are putting $50-$150 drives into them.
So I ask again. Who are these FOR?
@Prophes0r that's what I'm struggling with currently. I want them. But the enclosures cost more than the drives I want to put into them... Makes no sense.
Ahh.. good ole Antec 300s. Always knew Wendell was a well-traveled and cultured man, lol.
There was a time you could get those for like 30 bucks new... great little economy workhorses...
Had several myself!
Icy Dock has the coolest niche pc stuff. And it's good quality stuff too. Love them.
The editing is really a huge step up and I'm loving it!
Every time you release a video about interesting Icy Dock things I start to plan new possibilities for my rack. My brain says yes while my wallet is concerned.
Yeah, my wallet too.
If you want PCIe/NVMe (also M.2 via adapter) hot-plugging, you can use HBAs with an active PCIe switch chipset in any motherboard. In Windows, for example, the NVMe SSDs then show up like external hard drives and can be ejected via an icon in the taskbar.
Out of curiosity, does Windows have any performance hits when seeing the drive as "external"?
@@NPzed Nope, the only bottleneck depends on the number of PCIe lanes the PCIe switch itself gets from the host system and the number of NVMe SSDs being handled on the other end.
@@abavariannormiepleb9470 Thanks for the info!
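As a rough illustration of the bandwidth-sharing point above, here is a minimal back-of-the-envelope sketch; the Gen3 x8 uplink and x4 drives are assumed values for illustration, not the specs of any particular HBA.

```python
# Rough estimate of per-drive bandwidth behind a PCIe switch HBA.
# Assumptions (illustrative only): Gen3 x8 uplink to the host, Gen3 x4 NVMe
# drives behind the switch, ~0.985 GB/s usable per Gen3 lane after encoding.
GEN3_GBPS_PER_LANE = 0.985

def per_drive_bandwidth(uplink_lanes: int, drives: int, drive_lanes: int = 4) -> float:
    """Worst-case throughput per drive when every drive is busy at once."""
    uplink = uplink_lanes * GEN3_GBPS_PER_LANE
    drive_link = drive_lanes * GEN3_GBPS_PER_LANE
    return min(drive_link, uplink / drives)

for n in (1, 2, 4, 8):
    print(f"{n} drive(s) on a Gen3 x8 uplink: ~{per_drive_bandwidth(8, n):.2f} GB/s each")
# One or two drives still get their full x4 link; eight drives hammered
# simultaneously share ~1 GB/s each. Idle drives cost nothing.
```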
I don't know how they do it on the technical level, but my Dell has 2 hot-pluggable NVMe bays in its so-called FlexBay in the front. PCIe 4.0, runs the factory Kioxia XG8 at full speed without any problem, and on Linux and Windows they show up as perfectly normal NVMe devices.
@@thisiswaytoocomplicated I would surmise there is a carrier board that the M.2 cards get docked in? It's the M.2 connector itself that is not hot-pluggable, not necessarily the drives themselves. U.2 for example is hot-pluggable (assuming MB/CPU support).
IIRC, this is usually tied to the length of the power and ground pins relative to the data pins/fingers. When all the connections are the same length, there is no way to safely sequence ground > power > data as the drive is inserted. So if a carrier board took care of that process, then in theory an M.2 could be hot-swappable.
I'd love to see a deep dive on the "almost ewaste" computers. Playing with hand me down enterprise gear always fascinated me
I would store a copy of Dr. Stone, to have a (fictional) guide to how to speed-run human civilization development. Also a copy of the Primitive Technology YouTube channel.
You know they have books on that stuff right?
Do you know how many books on survival/building/chemistry/teaching/medicine you can fit on an old E-book reader?
A well-compressed textbook with diagrams can be 1-5MB.
My 'borderline free' 2016 kindle has 2GB of storage.
@@Prophes0r Books, so boring. ADD hewman need moving pictures, otherwise attention span drops to negative infinity.
@@manitoba-op4jx I agree, it ends up being more like a list of things you should know about than something that actually teaches them if you needed to learn them.
Despite the prettiness of all the new cases coming out, I keep going back to my Antec 900 cases. Nine 5.25" bays! Plenty of fans! I really wish they kept making them. We went so hard to get away from 5.25" but now we're going right back to it.
Antec 300 cases were great high-airflow cases thanks to the front fans without crazy filter restrictions. The 5 1/4" bays are a nice bonus.
I love these cases, we have 30 of them in the office. I just serviced them all over the last few weeks, fucking phenomenal cases.
I wish that motherboards had more physical PCIe slots and SATA ports to accommodate these large storage arrays and systems. At least, motherboards with these features at a reasonable price, under $200 or so. It seems like there is a lot of missed potential in the extra PCIe lanes available on consumer AM5 and Intel platforms for multiple high-bandwidth PCIe slots that isn't being effectively utilized, since they instead feature PCIe 5.0 or only one x16 slot, which limits the type of immense connectivity seen in this video.
Those "missing" lanes aren't there though.
Consumer CPUs usually only have ~24 lanes.
That's 1x16 link + 1x4 link + whatever your chipset/SATA/Network/USB use. Usually referred to as 20+4, since only the x16 & x4 are exposed.
Motherboards that present more than that are almost always using lower speed links that come off the chipset. Which is STILL super useful, but not the same.
PCIe Gen5 actually makes a bunch of different consumer-grade divisions viable though.
A single x8 link is actually a reasonable option for a GPU.
x1 links are ACTUALLY useful for NVMe.
There are a LOT of ways to usefully subdivide 20 lanes of Gen5.
yeah if only AMD didn't basically drop the ball on Threadripper (and Intel likewise abandoned that market) eh? Those had hundreds of pcie lanes, hundreds
@@Prophes0r That doesn't really change anything you said, but for completeness' sake: Zen 4 actually upped the lane count by 4, to 24+4. On most AM5 MBs they are exposed as the x16 GPU slot and two M.2 slots directly from the CPU.
@Prophes0r There is plenty of bandwidth in those gen5 lanes, and as you said there are a lot of ways to subdivide those 24 lanes. But sadly there is no motherboard that really tries what OP asked for; they all seem to be focused on cramming as many M.2 slots in as possible. Which makes sense for the most common desktop config, but can be annoying for building a budget workstation/server with it. If you want to use the lanes for anything but NVMe storage, you gotta frankenstein something together with M.2 to PCIe riser adapters. That works, but is more cumbersome than going the other way and sticking a PCIe to NVMe adapter into a PCIe slot if you need more storage. And you can't usually bifurcate those into multiple x1 links, which would indeed be very useful.
What I'd like to see is at least one unique motherboard model among the gazillion basically identical ones that comes with plenty of PCIe slots from the CPU and chipset. Maybe even with a PLX chip or custom chipset that exposes all that gen5 bandwidth as lots of PCIe 3/4 slots alongside a smaller set of gen5 ones. Lots of PCIe devices that aren't GPUs or NVMe drives just want gen3 lanes. I know PLX chips are expensive, but then again, those MB manufacturers are trying to charge like $200 extra for stuff like an onboard LCD display; might as well make one unique expensive model with actually useful extra features.
@@Hugh_I Small LCD displays cost like $5 in bulk, while PCIe 4.0 PLX chips that cost less than $200 don't really exist.
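Since most of this thread is lane arithmetic, a small sketch of the numbers being discussed; the 24-lane AM5-style budget and the per-lane throughput figures are approximations, not vendor specs.

```python
# Sanity check on the lane-budget discussion above.
# Approximate usable GB/s per lane, per PCIe generation (after encoding overhead):
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bw(gen: int, lanes: int) -> float:
    """Rough usable bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

# Assumed AM5-style CPU budget: x16 GPU slot + two x4 M.2 slots = 24 usable lanes.
print(f"Gen5 x16 GPU slot : ~{link_bw(5, 16):.0f} GB/s")
print(f"Gen5 x8 GPU slot  : ~{link_bw(5, 8):.0f} GB/s (same as a full Gen4 x16)")
print(f"Gen5 x1 NVMe link : ~{link_bw(5, 1):.1f} GB/s (about a Gen3 x4 drive's whole link)")
```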
Icy Dock makes some useful stuff. I have 2 of the 6xSATA SSD to 5.25" bay adapters that I bought for my all solid state NAS.
8:22 I’ve had this same idea but deployed in poor parts of the world that don’t have access to the internet, along with some inexpensive computers and access points.
We stopped purchasing IcyDock 4-5 drive USB cages because all five of our first purchases had dead bays (dead connectors). We settled on Oricos, which have been perfect for our customers' home drive servers. We use Antec Two-Hundreds, which are superior to your Antec 300s for one reason: the A200s have a pop-off front fascia with fans, and this allows all bays to be emptied and filled from the front. No 'dragging drives over motherboards.' They are SOOO wonderful. But far more rare than those 300s, which beg for a Dremel to slice out the front sheet metal and a way to put the 120 mm fans onto the front fascia 'forever'.
Antec 300 was my first "expensive" case. I still have it in storage; too bad I was angry back then and punched it good a few times, which bent the top panel. I will refurbish it some day. Still searching for another Antec 300 though :)
Looks like a great set of options to combine with the ASRock Rack thin mITX Epyc board... the first one or two ports split to 16 SATA drives stored in those SATA banks in 5.25" bays for a relatively capacity-oriented cold tier, and the rest, at least 4 PCIe4 x8 ports, split amongst NVMe storage bays for 8 hot-tier speedy drives... with maybe a now somewhat cheaper Rome or some promo Milan with 24 cores or more, and all 4 DIMM slots populated with now relatively cheaper DDR4 memory for at least 128GB total. The dual 10Gbps links on board make it somewhat useful without extra NICs, and thus the sole PCIe4 x16 slot could be used for something else... maybe GPGPU, or a transcoding ASIC, or... those 21x M.2 storage adapters for the ultimate small-ish all-flash setup? Wrapped in a dedicated chassis that's as small as possible yet accommodates the FH/FL AIC and those 5.25" bay adapters, you could get a remarkable microserver with storage in mind...
Though if thinking big and flash-y, I'd say try to use such a 7x PCIe4 x16 board and populate each slot with those 21x M.2 adapters... 147 M.2s in a single chassis, even if of the slower variety per drive, would constitute some extreme storage as a whole... leaving any other PCIe/OCuLink or SATA ports for further storage uses to add more to the extremity of such an "appliance", as it would easily saturate the dual 10Gbps ports all the time with what's coming from the drives, even on lower-performance consumer M.2s ;)
Wendell! I have 2 Antec 300s. and I LOVE THEM.
I just love the way he bashes those old cases around haha! And the reason he likes those older cases is that he can beat the shit out of em!
I had an Antec Twelve Hundred back in the day with some icy dock thing. You're making my brain feel fuzzy.
Antec 300s were the poor man's server cases back in the day. I have 6! Too bad their cable management provisions are...dated!
Can you do a deep dive on options for PCIe switch hardware? I've heard there are options floating around that aren't absurdly expensive, and I'm sure I'm not the only home labber who could do with more lanes, even at the expense of a bit of latency and speed per lane.
Agreed, some of us are rocking gen3 with limited PCIe slots in our servers. The price of NVMe flash is very attractive even if I couldn't get all the speed out of the drives (now). The only PLX cards I've seen are about the price of a whole new server.
@@Cynyr Yeah, it's a bit ridiculous that the cheapest way to buy flash these days is NVMe, which is somehow cheaper than even used SATA SSDs...
PCIe 4.0 switch chips look obscenely expensive, although they are also out of stock everywhere, so they might still be at shortage pricing. A 28-lane PCIe 4.0 switch from Microchip is around 155 USD right now if you order more than 100 of them. And I bet that Broadcom would fleece you even more for their PLX chips.
Broadcom P411W-32P is around $750, connects PCIe 4.0 x16 host to four x8 SFF-8654, which you can fan out to x4, x2, or x1 at the drives. I'd also like to know if there's anything else out there
@@shanent5793 I think the Microchip PFX series is cheaper.
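To put that card in per-port terms, a quick sketch; the $750 price and the 32 downstream lanes come from the comment above, the rest is just division over hypothetical fan-out widths.

```python
# Cost per attached drive for a ~$750 switch card fanning a Gen4 x16 uplink
# out to 32 downstream lanes (fan-out widths per the comment above).
card_cost, downstream_lanes = 750, 32

for lanes_per_drive in (4, 2, 1):
    drives = downstream_lanes // lanes_per_drive
    print(f"x{lanes_per_drive} per drive: {drives:2d} drives, ~${card_cost / drives:.0f} per drive port")
# More lanes per drive means fewer, faster ports; x1 fan-out gets the
# per-port cost down to roughly $23, but every drive shares the x16 uplink.
```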
I personally love Icy Dock. They make 5.25" adapters to hold, like, four 2.5" SSDs, or two 2.5" and one 3.5". That was my introduction; then I saw they have drive sleds for things like old Dell cases and whatnot. Great stuff!
Wendell, you look better in every new video you make, keep it up!
Antec Three Hundred Two ftw. Bought mine in November 2011 for $45, still going strong.
@3:15 You talk about an adapter that supports 4 M.2 SSDs, but the drive you pull out looks a lot more like a U.2 SSD.
Recommendation for a current more compact case, suitable for rack-mounted installation, 4 x 5.25” bays, up to E-ATX motherboard size and space for numerous 120 mm fans: SilverStone Grandia GD07
I hope we get 16tb+ consumer SSDs soon.
And it will only need to be QiLC...
(Q = Quad- = four. Qi = Quint- = five)
I CAN'T WAIT to hear some marketer/prick try to convince us that 10 drive-writes (total) is perfectly acceptable endurance...
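For scale, a back-of-the-envelope comparison of what "ten drive writes total" would mean on a hypothetical 16 TB drive versus a typical consumer TLC-style rating; the 0.3 DWPD over five years figure is an assumption for illustration, not any product's spec.

```python
# Endurance back-of-the-envelope for a hypothetical 16 TB drive.
capacity_tb = 16

# Assumed TLC-style rating: ~0.3 drive writes per day over a 5-year warranty.
tlc_tbw = 0.3 * 365 * 5 * capacity_tb
# The joke scenario above: 10 full drive writes over the entire lifetime.
ten_writes_tbw = 10 * capacity_tb

print(f"TLC-style rating   : ~{tlc_tbw:,.0f} TBW")
print(f"'10 writes, total' : ~{ten_writes_tbw:,.0f} TBW")
print(f"That's roughly {tlc_tbw / ten_writes_tbw:.0f}x less endurance")
```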
You forgot to ask IF we should restart civilization
The Icy Dock adapters are great, however the price is not so great. Hoping for lower prices in the future or cheaper Chinese clones, so the adapters come down to a price range that is feasible for most people with a DIY homelab.
This is something I CONSTANTLY re-search for. I have been looking for YEARS.
I realize the use-case is semi-niche so it's hard to get economy-of-scale working for us, but I just can't justify spending $150 on a 5.25" enclosure.
I'm fine with injection molded plastic rather than Steel/Aluminum.
But the price needs to be SO MUCH lower. Which probably isn't possible right now.
It needs to cost less than a bare 4U chassis, to fill up that many 5.25" bays (9-10). That's like $30 PER BAY at most.
That is the problem with IcyDock stuff.
I'm not going to spend $75 for an enclosure to hold a $80 drive.
They have the same issue with their 5-in-3 HDD enclosures.
I can buy a used disk-shelf, gut it, and build an entire system in it for the cost of a new case + IcyDock bays.
Is that going to fly in an office? No, but their stuff has never been at that level.
It has ALWAYS had that HomeLab/HomeNAS, hacked-together-but-nicely thing going.
There is clearly SOME market for this stuff, but I'd honestly rather build 2-3 systems using sheet-metal bay converters than a single "nice" one using tool-less IcyDock stuff.
I agree with you, but the demand for consumer storage mobile racks is really going to be small. If you want to consider it, pay attention to ELS storage. There is only one KINGWIN brand NVME mobile rack in the North American market, but in Russia (Eastern Europe?), ICY DOCK level products are sold under the name of PRO CASE.
The only thing stopping me from swimming in the 5.25" drive bays is the price, which I unfortunately think is usually about 3 times too high. I don't know what the economy of scale would be for these niche units, but I have bought a few at the three-times-too-high price because I really needed what they did, so I guess there's that too.
Same. I'd love to buy IcyDock stuff, but they charge way too damned much. Just like that 2-bay NVMe thingie Wendell's showing (at first) is $150+. Should be 50 bucks or less.
I still remember my 5.25" to 2.5" SAS 6 Gbps Icy Dock adapter.
That was pretty awesome for my systems that only had SATA connections natively, when I had SAS drives and a SAS controller and needed a way to hook those up together.
I would still recommend getting Supermicro SC745 chassis if you can find them under $200; they are an incredibly versatile homelab all-rounder.
oh man, loved that Antec case back in the day.
The biggest frustration I have with IcyDock is stuff's never in stock.
6:40 I have bought two M.2 to 2.5" U.2 adapters (MB705M2P). When I plug them in while the server is powered off, the server (Supermicro 4U 4124GS) will not get past boot (the white IPMI screen). Plugged into a Threadripper (MSI TRX40) workstation with a PCIe to U.2 cable, they work fine. Not sure how to diagnose.
Be careful when getting Samsung Enterprise SSDs: while the price per TB and the performance are great, it's a pain to get firmware updates, and Samsung doesn't offer end-customer warranty service for the "OEM" variants. I recommend the Micron 7450 line for that reason, 5-year warranty and updates publicly available.
I actually have a Micron 7450 Pro in my personal system, an E1.S version at that. It was like $150 for a 3.84TB drive, so it was worth putting up with a PCIe adapter. Got it off eBay.
Awesome stuff, been looking into these recently. One thing I noted: if you have an older motherboard, check whether it supports the card.
It's kinda wild that the most compact storage is still a shoebox full of MicroSD cards.
And the highest bitrate "network" is still a shoebox full of MicroSD cards on a plane.
gigabit Ethernet is faster than microSD cards
@@shanent5793 Uhh...no?
I can get on a plane with 1PiB of MicroSD cards in my pocket and fly anywhere in the world with it in a day or two.
How long would it take you to "move" 1PiB of data anywhere?
MicroSD isn't the best for a multi-user database, but it IS still the highest density storage.
But what if you care about accessing the data quickly too?
That same shoebox will hold how many 40GiB enterprise NVMe drives? 50? 100?
How long does it take to priority-ship that anywhere in the world?
How long does it take an actual courier to move it?
It makes networking look like a joke.
ANY networking.
400Gbit?
800Gbit?
Sneakernet still wins.
And it does so by a Loooooooong margin.
Even Amazon and Google offer sneakernet services.
When you are talking about LARGE data, it is way faster and way more cost effective to freight-ship a box of drives somewhere, copy it locally, then ship it back to a datacenter.
Obligatory XKCD reference.
what-if.xkcd.com/31/
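Rough numbers behind the sneakernet argument; the shoebox capacity (about 20,000 1 TB cards, allowing for packaging) and the 48-hour door-to-door transit are assumptions picked for illustration.

```python
# Effective bandwidth of the "shoebox on a plane" vs. a fast network link.
# Assumptions (illustrative): ~20,000 x 1 TB MicroSD cards per shoebox,
# ~48 hours door to door.
cards, tb_per_card = 20_000, 1
payload_bits = cards * tb_per_card * 1e12 * 8   # decimal terabytes -> bits
transit_s = 48 * 3600

print(f"Sneakernet : ~{payload_bits / transit_s / 1e9:,.0f} Gbit/s effective")
for link_gbps in (400, 800):
    days = payload_bits / (link_gbps * 1e9) / 86400
    print(f"{link_gbps} Gbit/s link: ~{days:.1f} days to move the same {cards * tb_per_card // 1000} PB")
```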
I've been waiting for an NVMe dock - been using an external USB 3.0 enclosure that I pretty much made tool-less...
We use the IcyDock 3.5" U.2 front dock with their separate M.2 to U.2 adapter, so we can check/erase/flash firmware on U.2 or NVMe drives at work in one bay. Sucks that you have to reboot for them to be seen.
This feels like the REAL use-case for these things.
They are for bench-top or shop-level maintenance.
They are 5x-10x too expensive to be usable in an actual production sense.
Who is going to put a bunch of $120 drives into $70 enclosures?
Magic to see all the bits and pieces you can use, and it also lets you think of all the nerdy ways to use them lol
Portable Library of Alexandria... Do.It.Please... for the love of humanity we gon need it
I love the products Icy Dock offers. I do NOT love the prices they offer them at. Oh well. xD
Completely agree, their prices are high!
That project to store information for the future, in case anything goes wrong, is a worthy goal. We don't want to get to the point where we have a load of technology we don't know how to build and only know how to keep running; that would be a bad time.
That's often portrayed in science fiction, and since we're almost at the Gattaca dystopia I'm pretty sure this will happen too.
I have some Icy Dock 5.25" bay SATA cages. They work great. I have several different OS's and I choose which drive to boot to during power on of the PC. No boot loaders for me.
However, I've been looking at the U.2 cages and drives and they look great but I can't find much about connecting them via cables and a PCIe add-in card to a regular non-server motherboard.
Any suggestions for use in regular PC's? At modest cost of course! Thanks.
0:49 I felt that case landing on the heatsink
The Dell T5820/7920 have delivered this kind of NVMe-to-PCIe adapter via FlexBay for years. No surprise.
That storage box is what we need to bring with as we colonize planets!
SSDs aside, will Icy Dock manufacture water chips in time for the GECK?
"More is always better" - Laird Hamilton
I was looking at the U.2 x4 dock for a desktop-case server after noticing that >2TB enterprise NVMe drives have become more affordable. I'm wondering what sort of cards/HBAs people are using. I have seen some simple miniSAS HD electrical bifurcation cards, and my motherboard can do bifurcation, but something feels off about using a card like that.
I just want the tools to build my own JBOF enclosure... But companies like Supermicro are still charging insane premiums for their JBOF backplanes... Even the pcie gen 3 ones. 😔
PCIe switching is expensive.
You are GENUINELY better off getting a used EPYC Rome + Supermicro board and loading it with $10-$15 4x4x4x4 bifurcation cards.
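A quick tally of why the bifurcation route scales, assuming a hypothetical Rome board that exposes six x16 slots (actual boards vary) and the roughly $15 passive risers mentioned above.

```python
# How far plain bifurcation gets you on a used EPYC Rome platform, no switch chip needed.
# Assumptions: six x16 slots exposed (boards vary), $15 passive 4x4x4x4 risers.
cpu_lanes, x16_slots = 128, 6
drives_per_slot, riser_cost = 4, 15

total_drives = x16_slots * drives_per_slot
print(f"{total_drives} NVMe drives, each on a full x4 link, "
      f"for ~${x16_slots * riser_cost} in risers ({x16_slots * 16}/{cpu_lanes} CPU lanes used)")
```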
Recently spent a fortune tracking down a second old Zalman MS800 Plus for my home server for the front bays. Starting to wonder if I should set up a niche business making custom home server cases with nothing but 5.25" bays. :)
As for the mobos... I really just wish mobo manufacturers would pack in as many bifurcated PCI-E slots as they could manage and differentiate their products by putting most of the built-in stuff (like 2.5G/10G eth interfaces) on optional PCI-E cards. Why spend lanes on multiple mobo M.2 connections when you could just make your slots support 4x4x4x4 bifurcation and include cheap passive risers? Or waste lanes on the on-board 2.5G interfaces when you only need the 10G? Or be stuck with SATA ports you don't need consuming lanes? Or be stuck with whatever previous version of connectors the board came with when what you actually need are different ones (i.e. M.2/SFF-8639/OCuLink/U.3/etc.)?
I appreciate that's only starting to become viable with newer CPUs with enough lanes, but it has to be the end goal, surely?
Surely it'd be cheaper for the manufacturers to have simpler boards and fewer variants?
Yes, Antec Three Hundred, I own one too. And its successor the Three Hundred Two, which is my Unraid server at the moment.
This IcyDock system would be great for backup on my system. Where I could do a quick data backup and swap out the drive. But at the price they want, I'll continue to use USB cable cases. Much slower, but also much cheaper.
I'd like to see a review of the ToughArmor MB872MP-B even though I can't afford one but it seems really cool.
Antec 300 was my first new desktop pc case, my amd 6870 just barely fit lol
Are there any available options for cases with all 5.25" bays in the front? Antec doesn't make these old cases anymore :(
I’m feeling you on being able to restart civilization with these devices and configs.
I'm not rich enough to build those NVMe arrays - but it is getting closer - with all the drops in prices on them...
I want to grab a couple of their new 5.25" 8-bay M.2 enclosures, but I'm not sure whether two SlimSAS connectors are enough to split into 16 (2x 8) OCuLink connectors.
Otherwise I'll just go with the 16-bay SAS/SATA one that has 4 mini SAS connectors on the back, which I know for sure is enough.
I am looking for a case that has the back in the front along with bays for removable drives. I've been using the single m.2 icydocks for awhile.
I hate that my 16TB 7200 RPM Seagate drives max out at 100MB/s. I really want SSD storage but it's so expensive...
The first adapter won't work on any LGA 1700 systems, regardless of motherboard. Alder and Raptor Lake CPUs only support x8/x8 bifurcation. Only one of the slots would be functional.
PCIe lanes are always capacitively coupled; they're never wired directly into the CPU.
"Wired Directly" means "Not run through another logic chip".
Also, that capacitive coupling isn't meant to be any REAL protection against ESD or overloads.
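A tiny sketch of why the offered bifurcation modes matter for passive multi-M.2 risers; the 4-slot card and its lane-to-slot wiring here are generic assumptions, not a specific product.

```python
# How many slots of a passive 4-way M.2 riser enumerate, given the bifurcation
# modes the CPU/BIOS offers for the x16 slot. Each M.2 is hard-wired to one x4
# lane group; a wider segment only brings up the drive at its first lane group.
def visible_m2_slots(bifurcation: str) -> int:
    widths = [int(w) for w in bifurcation.lower().split("x") if w]
    return sum(1 for w in widths if w >= 4)

for mode in ("x16", "x8x8", "x8x4x4", "x4x4x4x4"):
    print(f"{mode:9s} -> {visible_m2_slots(mode)} of 4 M.2 drives detected")
```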
I think restarting civilisation might need videos on the basics, as simple as making fire! Of course, electricity to watch it might be the first priority....back to cave painting it is!
Great work Wendell, nice to see an old case like mine crop up 👍
Step 1: Relocate the adjacent water wheel and generator to a waterfall or river with at least 2000W of available water power through to the generator. Also acceptable is a wind turbine or solar panels.
Step 2: Set stuff up and plug the green grass-colored plug A into the grass-colored receptacle A. Plug the red (blood-colored) monitor cable B into the red port B on the back of the computer. Plug the brown tree-colored keyboard and mouse plugs into the brown USB receptacles.
Step 3: Hit the power button shaped like this (C-)
Step 4: Find the folder on screen labelled "Restarting humanity", which will direct you to the most relevant Wikipedia pages.
Step 5: Learn.
Step 6: Implement what you have learned.
Step 7: Repeat and expand.
Power banks, a converter, a solar panel, a good quality DIY kb and mouse with plenty of replacement parts and a bunch of reliable monitors (because I have no idea how to get around panels wearing out).
You could even get around the converter with some engineering and make everything pure DC, I think. There could be some unforeseen issue I'm not thinking about somewhere. You could probably build up a "Library of Alexandria" that could survive anything short of getting directly nuked for less than $5k and the whole rig would fit in a couple of backpacks.
Water > Food > Fire > SOAP > Steel
infection is a bitch...
@@Prophes0r The Wikipedia page has great resources on how to make safe drinking water, different ways to make fire with what is around, what is safe to eat and how to prepare food, how to make soap and how to make steel. As well as many other things like how to treat wounds, make traps, build easy and effective means of protection, etc.
@@mikes2381 Sure.
But there are also a LOT of great books on this stuff.
A wiki is an AWFUL way to organize information that is intended to be taught.
It is also a pretty bad way to organize quick-reference data for comparisons.
We have had hundreds of years of innovation when it comes to textbooks and field guides. It is MUCH more complicated to teach someone something than to just barf information onto a page.
Wikis have their uses, but their design goals are completely different from REAL educational or reference material.
I'd bet you can fit WAY more useful information in the form of e-books with monochrome images onto a single 1TB MicroSD card than the entirety of a single-language Wikipedia dump.
And video is even WORSE.
Video-education is a luxury at-best, or worthless. Usually worthless.
And for those times where an animation would be genuinely helpful? A carefully made, low frame rate monochrome "gif", or something more extravagant like an interactive 2d-3d model that builds real intuition about a mechanical relationship, is going to be better.
Even something GREAT, and purposefully built for education and building intuition like a 3Blue1Brown video is going to be incredibly space-inefficient.
And space IS still a problem.
How long does NAND flash reliably function? Key word "Reliably".
How much duplication do you need?
How much energy is required to handle the parity checks?
If you take a few hours trying to figure out an ACTUAL way to store information to rebuild civilization, you will VERY quickly find yourself returning to ridiculous forms of storage like 2d data-matrix stored on stable microfilm.
Or like...just books. Preferably made of Teflon/Kapton and Gold/Platinum.
And THAT is just for the short-term stuff.
50-100ish years.
You want 500? 1000?
Now we are talking about gold/platinum tablets kept in a vault. And it needs to be somewhere geologically stable too.
Storing stuff long-term is HARD.
I have a question: recently I bought an M.2 to PCI-E adapter hoping to add NVMe capability to an old system that doesn't have an M.2 slot. I cloned the SATA SSD entirely to the NVMe stick with dd, but now it's refusing to boot from the NVMe. The original SATA SSD has a DOS (MBR) partition table. I tried using the MBR2GPT tool on Windows without success because it throws error 0x00000000. Any advice would be greatly appreciated.
When my old laptop died, I used a Icy Dock product to mount the ultra slim Blu-ray drive (and another hard drive) in my desktop PC.
In a post-apocalyptic time, printers would suddenly become useful (mostly laser printers), but they are still going to break in less than a year.
Why exactly aren't m.2 considered hot-swappable? For example, my bios has an option for pcie hot swap. Is that just a bad idea in general or is there something special about the m.2 form factor that makes it bad?
The connector isn't designed for it. Grounds float on the signal lines during insertion, which is bad.
As a musician, "the lick" theme song of this channel is like a constant teabag that the teabagger himself is not aware of.
QNAP has many M.2 adapters with great heat sinks, but they are a bit more expensive.
Not sure if I missed what brand of case you're building in. Definitely not against the idea of just using a regular desktop case in a rack.
Do they make an affordable USB4 NVMe enclosure? The current ones on the market are way over $100; nothing is under $50.
I'm running Icy Dock SATA adapter cages on the front of my home server and they've worked amazingly. I'm just about to swap out my NVMe drives on the motherboard and looking at other options for making it a bit easier - are there gen4 M.2-to-U.2 adapters that you've reviewed that would then be attachable via cable to one of the U.2 cages? Sure would be nice to not have to disassemble the entire case if I want to swap them out again...
I love Icy Dock products. I got one of their HDD clone docks.
I have two of their unique-in-the-market 10x HDD enclosures and several adapters from them. They're not perfect products (I changed the fan and added RPM tuning to my enclosures), but Icy Dock fills users' needs and they know it... priced accordingly :/
If some company made a 3 x 5.25" external case with glass and a front air intake, I'd so buy it.
Ok this might be a silly question but I've seen some products in videos that seem to break the norms with bifurcation. Are there products that handle the bifurcation on the PCIE card so that you can use multiple m.2 drives in a single PCIE slot? I have some older motherboards that I'd like to use as a NAS with m.2 drives. These machines currently have spinning disks and no option for bifurcation in the bios. Is there a product that does what I'd need?
PLX bridges or PLX-based adapters are what you would be looking for.
@@Level1Techs Thanks for the reply! Must be a super niche product; not many options, and most are direct from China with bad reviews. I see a few server-grade ones going for $600 plus. Far cheaper just to buy a board and CPU with bifurcation support, it seems.
Is it just me, or does the enclosure that holds "naked" drives look like a modern floppy drive?
Can the PCIe 2-slot NVMe card be used in a regular desktop PC while still using the GPU for gaming? I don't need high FPS. I have an AMD 5950X CPU and the motherboard has the bifurcation settings. Someone told me it will slow down my GPU?
Haven't watched this channel in a while, but Wendell has lost loads of weight. Congrats Wendell!
Uhh...about that...
en.wikipedia.org/wiki/Alpha-gal_syndrome
icydock prices are insane
The mighty Antec 300!
Imagine, if you will, a single-slot bifurcation card that somehow mounts 4x hot-swap NVMe drives, in a chassis with 4 of these cards.
I still have my Antec 300!
What if I have x8x4x4 bifurcation and no x4x4x4x4 option, would it still work?
Never had much luck with those add-on cards. Too slow usually, even on PCIe 4.0 cards...
IF you can find available inventory.
Theoretically Intel VROC with VMD support is hot-swappable, but I'm not going to test it out.
Cool tech, but as a gamer, do I need this? 🤔
Wendell "This is almost e-waste. With a brand new motherboard...
What is the power consumption of a Genoa CPU?
how could you use that card for a mirrored boot drive?
Linux md works fine, plus shell scripts to mirror /boot and the EFI partitions when you update your initrd.
@@Level1Techs thank you, big help
I nominate myself to remote test for you 😂
My antec 300 took this personally
Oh shit, PC Power & Cooling.
Icydock stuff is a bit too expensive in my opinion. I suppose the folks that absolutely need it can afford it though.