> Typically the drive is faster in the first half than the second half... does this mean one set of heads are slower than the other set? Looking at the promotional stuff where they show the internals, I'd say no. The first "half" has its own platters, the second "half" has its own platters. These are basically two separate hard drives that share the same spindle motor and electronic control board. Also I kinda dispute the "faster in the first half", it's technically true but not enough to make a difference overall with modern drives.
@@marcogenovesi8570 The "faster in the first half" is talking about regular hard drives. With a constant RPM the linear speed is higher on the outside of the platters. That is the "beginning" of the drive; because the linear speed is lower toward the "end", sequential performance decreases. To the OP, this drive is logically one regular drive appended to another regular drive. The performance will start high and degrade as you reach the middle LBA. Then at the next LBA it will jump back to the fastest speed and degrade again until reaching the end.
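A quick way to see that sawtooth profile yourself is to read a little at a few points along the LBA range. This is just a sketch with fio (read-only, so non-destructive); /dev/sdX is a placeholder for the actual dual-actuator device, and it assumes your fio build accepts percentage offsets (recent versions do):

```bash
# Read 1 GiB at the start and end of each actuator's region. Throughput should fall
# toward the 50% mark, jump back up just past it, then fall again toward 100%.
for off in 0% 45% 50% 95%; do
  fio --name="seq-$off" --filename=/dev/sdX --readonly --direct=1 \
      --rw=read --bs=1M --offset="$off" --size=1G
done
```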
Why isn't the "RAID0" just handled internally? Shouldn't have to set that up in the OS. It should just be a faster plug-and-play replacement for any other SATA drive, and shouldn't appear any different to the host device. This is unnecessarily obtuse.
No, please. I would rather it be part of OS installation (if Microsoft can actually make their installer a bit better). Creating a RAID0/1 volume should be easy to do with a GUI. I would much rather have this be part of the base OS than be part of yet another buggy UEFI feature that I would have to squint at in the spec sheet to ensure they support it.
If they managed the "RAID0" internally these drives would be way more appealing to the "average" consumer as well. The best option could be if you could switch between "internal" and "external" partitioning with a switch on the drive or in software.
Yeah, vast majority of users will only see it as a standard performance drive. Probably a lazy way to prevent excess vibrations by not informing the user they need to do some fiddly hoop jumping to get it to work.
because it would add complexity and cost and they are more expensive than similarly sized drives already. These drives are 99.99% used with RAID controllers (or software RAID) anyway so why bother
I thought platters used something like a RAID0 internally already; this crushed my beliefs. So anyway, why don't drives do this already with all their heads, even with a single actuator? Are the tracks not aligned properly?
This would've been a nice innovation when the WD Velociraptors were all the rage, but now that SSDs have taken over, it'll get lost in the shuffle unless you're specifically looking for mass storage.
The implementation is kinda disappointing. It would be cool if both actuators had access to the whole storage medium and balanced the IO load according to which head can get to the target sector first. Now it's just two dumb hard drives bolted together.
Doing that is ridiculously complicated, the control board would need a powerful processor like a RAID card and a complicated firmware. Hard drives aren't in a place where they can afford that
@@marcogenovesi8570 I don't think it's _that_ complicated. The electronics should only be twice what a normal drive is. Splitting the workload among two actuators isn't very difficult.
@@eDoc2020 "splitting the workload" is different from what OP said. He said "balanced the IO load according to which head can get to the target sector first" That implies the controller is running complex logic to split the workload depending on where the data is and where the heads are.
@@marcogenovesi8570 Consumer drives have been reordering requests (with command queuing) to increase performance for almost two decades. I don't believe the logic is much more complicated by adding another head. I mean it is decently more complicated but compute performance has increased faster. I'd figure the logic for dealing with SMR media is more complicated.
Did I just hear "ten gigabit is too slow"
What a time to be alive
HDDs aren't going anywhere, but trying to find good ATX cases for holding HDDs is getting increasingly tough.
AND ones that are not $100+ as well. Just seems like cases have gotten very expensive lately.
Totally agree with you. I had to get a NAS so I could use the case I have.
Also, on cheaper AMD boards you might run out of SATA ports. I had to disconnect my CD drive (I'm one of those K-Pop stans that actually buys CDs and rips them to FLAC), because after adding a LOUD Toshiba (I can hear it through the dampening foam, CM Silencio?), my B450 was out of (chipset, not physical, PCIe?) ports.
Define 7 and Define 7 XL
More than you need if you use modern capacity drives.
@@alext3811 if you run out of SATA ports it's time to start looking into SAS cards (which can run SATA drives too, and most home NAS users do that)
Ooh that's neat. I love that they still did the "end user choice" way and didn't just implement a RAID0 in "hardware" inside the controller. More room for experimentation
RAID 0 would be kind of scary for a drive like this. I'd rather not double chances of drive failure. This also gives us the chance to do RAID 5 with just 2 drives. Which is kind of weird.
@@administrator4728 RAID5 on 2 disks isn’t viable on these, a loss of either drive would result in losing 2 “logical” drives. I personally think RAID0 makes a lot of sense with something else on top, striping across 1 drive with either mirroring or parity across multiple striped disks for redundancy.
@@ProcrastinatorAlex I never said it would be an optimal configuration, just weird. Depending on the expected drive failure, RAID 5 could be viable (e.g. failed actuator motor or head).
Would require a decade of datacenter operation before we will know for sure for something like that.
@@administrator4728 depends on how you define "drive failure". The two actuators can't read the same platter areas so if one fails the drive is "failed" as you lose access to half the capacity (which is what, 9 TB?). Also both actuators share the same spindle motor and the controller board (and power/sata cable) so a failure in those parts would take offline both actuators and lose the whole 18TB anyway.
Imho it would have not been THAT much of a stretch to just assume "nah people are never gonna care and Windows is never going to handle this, let's just RAID0 in hardware and show pretty numbers in the performance sheet".
Doing a RAID5 with 2 drives is indeed weird and possibly dumb. That's why this is cool, as it allows the rest of the stack to actually make choices and decide how they want to deal with this strange abomination.
Because even if you decide to not do a RAID0 and just "join" the drives in a single large partition (as they are by default) the OS can still handle writing to them in a different way so both sides are active and contributing to performance
In 2001, I was working for a large media company and we needed to build a very large storage system to store hundreds of thousands of media files. We ended up building a bleeding-edge system with three dual-channel Fibre Channel SANs, each with 220 36 GB drives. It came up to 21 terabytes of usable storage.
It cost over 2 million dollars.
Things sure have changed in 22 years.
My first drive is full height, about 2x the thickness of a drive today. It's capable of storing 40 megabytes! It wasn't cheap even used and was made by Micropolis - I still have it. Things have changed indeed over the years :)
I bought a 24TB Exos this week. Full format took 29h.
I'm going to need several of these for my media server. Thank you
Wow, I haven't watched this channel in so long, Wendell looks great. He looks so happy and healthy! And I remember back in the Tek Syndicate days when he didn't want to be on camera, lol. I'm glad he's had such a glow-up!
The Wilson period of peering over monitors had mystique and intrigue
Time to spend 5 minutes wiping down that monitor, Wendell...
I would think there would be an advantage to having the dual actuators each access the entire drive. The current configuration is (mostly) equivalent to having two drives, but in one form factor. With both actuators accessing the entire drive, and with smart enough firmware, an application performing a high amount of reads and writes (to the same logical drive/partition) would avoid excessive seeking. From the video, it appears that the two actuators rotate about the same axle. This would have to change. Thumbs up to Seagate.
That doesn't fit when you look at the sweep of the arms holding the heads. They have to shrink the disks, making them lower capacity, or change the form factor.
2:19 RAID0 on the same device. What a crazy piece of engineering. Thanks for this insight!
I want some double clicky drives!
Hmm, would be interesting to see how they perform on ZFS
I am curious about that too. I'm guessing ZFS is a no go, since this is one drive with two partitions. And bolting ZFS on top of LVM sounds scary AF.
Isn't ZFS "Thee" LVM? @@LtdJorge
Wendell this looks like the perfect ace up your sleeve to beat 45Drives in their 45HomeLab speed race. Would make a great video to see what you can do with 15 of those things together!
What would be really cool would be extending the length of the drives to the point where there is another set of two actuators on the other end of the drive.
The second pair of actuators would cover the same disk's partitions as the first making simultaneous read & write operations possible on the same partitions.
Since the SAS bus is already saturated, a smarter controller that could do file-system-level read/write updates (moving data from one location on the drive to another without passing it through the computer) would massively increase the scalability of the drives.
As an alternative to the smarter on-drive controller, you could add a controller module that sits between the computer and the drive, running a minimal Linux on ARM or RISC-V to do file-system-level or database-level operations.
Also, if you're upgrading the class of interface to the drive, you could keep the interface from the Linux module to the computer as SAS/SATA.
You could even add some form of interconnect between multiple drive support modules to scale up RAID even further.
@@DavidMohring that's just plain dumb
that was already tried in the past, the issue is that you have to shrink the platters to fit the actuators on both sides so you are losing a lot of storage capacity, which is THE selling point of HDDs and therefore that's bad.
Yeah adding another PC in the PC to do some PC things leaves the other PC free to do different PC things. That's kind of obvious but even with block-level RAID the host system is vastly faster than a card so this "new and improved smart controller" is going to be as expensive as a small server.
So you might as well just get a small server and connect it with fabric RDMA/iWarp/whatever networking at 100-400Gbit whatevers and have it be your "storage controller" while the servers running the applications are accessing it over the fabric.
@@marcogenovesi8570 The point was to increase the length of drives to avoid shrinking platters. Do you even read what you reply to?
Also SSDs already have controllers performing in a very similar role. There's no need to run any minimal Linux or any other general purpose OS for that matter though.
@@MikeKrasnenkov I missed that (or maybe it was added in a later edit, note how his post was edited). Increasing the length is incredibly bad, as you now lose compatibility with 99.999% of storage systems that expect standard 3.5" drives, so wtf are you selling those to?
You might as well have 6 actuators in an hexagonal-shaped drive (hexagons are bestagons btw) because it's a proprietary form factor anyway, why limit ourselves to 2 when we can have 6.
SSD controllers are not in any way, shape or form doing anything at the filesystem level as in his proposal, and they are vastly inferior at their job compared to letting the OS and main CPU handle spreading writes, caching and garbage collection. The only thing they are there for is compatibility with legacy stuff or Windows (which is kind of legacy at this point too).
That's why higher-end storage systems like Pure Storage use so-called "open-channel SSDs" and have no "SSD controller".
There is obviously some form of controller but it's just there to pass data along to the NAND chips without touching it, all NAND flash management logic is run by the main CPU.
A device that is comparable to his idea are the DPUs, aka the 100gb+ network cards that actually run an OS to do their job. For example NVIDIA BlueField.
But even DPUs would be hard-pressed to come anywhere near what a dedicated storage appliance using server-grade processors like Pure Storage can do.
supposedly we get 40TB HAMR drives next year.
That title truly embiggened my heart.
Any idea why the Mach.2 models aren’t really available on European retailers?
(Only see a 16 TB variant for over EUR 500 which is absurd, you could almost get 2 regular Exos drives for that)
Very cool, hope to see proper certification from Synology someday
I love this stuff. Thanks, Wendell.
I'm confused, you bounce around saying you need to play with the LUNs for these drives. I thought you didn't need to mess with that at all with the SATA versions of these and that that "limitation" was on the SAS version. I have an Unraid server and I'm just looking to migrate to a ZFS pool now. Just in the planning stages, but I've been keeping my eyes on these for awhile and now I'm not sure if that's a good idea.
These drives show as a "single big drive" because that's the only thing they can do on SATA. But the drive controller is not doing a RAID0 on its own. The first 9TB are served by the first actuator, and the other 9TB are served by the second actuator.
So if you want to actually load both and get the performance of using both actuators at the same time you need to make two 9TB partitions and then do a RAID0 between these two partitions. Then you can take that volume and use that in a RAID that actually has redundancy.
If you just make a single large partition it's left to the OS to decide how to write, and it may or may not understand that these are dual-actuator drives. In most cases it will treat them as a normal drive, so it will load only one actuator at a time. You still get the same capacity but the performance is not as high.
ZFS is fine on these drives but is unlikely to treat them in any special way so you need to divide them in partitions and then do what was shown with the SAS version.
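On Linux, the split-and-stripe step might look roughly like this. It's just a sketch: /dev/sdX stands in for the actual 2X18, and it assumes the actuator boundary sits exactly at the 50% LBA mark (check Seagate's documentation for your model before trusting that):

```bash
# Carve the drive into its two actuator regions (assumes the split is at 50% of the LBAs)
parted -s /dev/sdX mklabel gpt
parted -s -a optimal /dev/sdX mkpart actuator0 0% 50%
parted -s -a optimal /dev/sdX mkpart actuator1 50% 100%

# Stripe the two halves so sequential work loads both actuators at once
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX1 /dev/sdX2
```

The resulting /dev/md0 behaves like one fast drive and can then be mirrored or put into a parity array with other drives for redundancy.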
Mechanical hard drives have come a long way.
Sounds great for homelab and media server.
Largely a waste of money for a media server: that use case has very low random IOPS (assuming directories are cached in memory after the first read after boot). But if you're running VMs, databases, or processing big data, it's very useful.
How does the dual actuator drive perform in OpenZFS?
Great video! Can you post a video or instructions on how to set these drives up correctly in RAID0 on Windows 11 ?
LONG overdue. Why it's not two per platter (one each side) is beyond me. We've had all the needed tech ingredients for a long time now.
I would be interested in case studies of these being used with Unraid and TrueNAS.
Agree, as a longtime Unraid user, I would certainly like to see how well it performs there, and how to implement it.
Exciting stuff, but I'm still keen to know what the power payoff of having these is: is it more efficient than having two drives of half the size running?
Yes it is more power efficient because there is only one drive spindle instead of two.
Where do you get the recertified drives?
Is there any problem or special considerations for mixing these in with standard drives in a NAS? I have 3 standard x18s I've been sitting on for a while waiting to build a nas with (need to head over to your forums, noob here) and wouldn't want to just get rid of them, but these are REALLY cool.
HDDs are back. In POG form!
I'm surprised they haven't brought back the Quantum Bigfoot 5.25" drive format to make larger drives.
Would love to see a piece showing what would happen if you drop these into an older storage setup, the bottlenecks that creates and what some good solutions for those might be.
I've got an aging 8 drive array of 4TB drives that I've been wanting to upgrade, but with these dual actuator drives, the choice between going solid state and spinning rust gets a bit harder.
I just like to call it Winchester instead of spinning rust
If they are able to put 2 x 8 TB drives in one 3.5 inch drive, maybe they should come out with some nice 4, 6 or 8 TB 2.5 inch CMR drives too. High capacity, affordable 2.5 inch CMR drives are non-existent today. I'm not sure whether there is a market for it though...
You could never afford that unless you have 10k+ to spend on the drive. Tape drives are insanely niche and expensive. You can't make them affordable because the scale is not there. They weren't cheap when that's all we had either.
@@jmwintenn What does it have to do with tape drives? I'm talking about HDDs.
If you want small capacity, just buy SSDs. It's 2023, not 2013. Small hard drives aren't competitive in any way now.
This "dual actuator" config doesn't improve storage density, the current max is 20 (or maybe 22) TB in a single 3.5 inch drive. A laptop-sized 2.5" drive is only 1/6 the volume so 4 or 6 TB is probably the absolute max. The problem is once again market. Somebody needing tons of storage probably has the space for a 3.5" drive. The only real losers are laptop users.
@@MarkRose1337 4-8TB is far from what I would consider small capacity. The cheapest 4TB SSD is around $200. That's also about what a 12TB drive costs.
Why is no one talking about reliability? Surely this would tank reliability given the heads are almost always the reason for mech drives shitting the bed.
not a whole lot of data about that, besides educated guesses
@1:06 do you keep that display in a leaky barn or what?
I still use a Seagate 2TB mech drive to store videos and family photos on, and then NVMe for Windows and SSDs for games and heavy programs.
That's what I've been asking: why hasn't SATA gotten to SATA 4 or 5, or whatever is next?
Because SATA is mostly used for consumer HDDs. For pro-level HDDs it's SAS, and for SSDs it's PCIe in M.2 or U.2 form factor. There's no real motivation for a faster SATA spec. Though if these catch on in the consumer space (or their successor tech), that may change, eventually. We still need to get past the current ATX layout. Right now we're making do with riser cables and/or just not having available slots to deal with GPUs that have long since grown too big for current ATX (especially in vertical layouts).
They ditched SATA and are just going pure NVMe. Look up E3 drives, they're going to replace 2.5" SSDs and M.2 in the next couple of years. Dumping everything on the PCIe bus/CPU is pretty dumb imo. People don't understand how much a dedicated storage controller does.
@@jmwintenn From what I understand about the E3 specs it's quite possible to have a dedicated storage controller. It looks to mainly be a PCIe slot replacement technology with additional capabilities for server-grade needs. NVMe drives do have a storage controller built in; it manages the NAND flash, wear leveling and a lot more.
It makes sense to go straight PCIe and possibly use an NVMe form factor for the connector.
I have a Ryzen CPU without port multipliers. Are port multipliers necessary to see both sides of the 2x14 drive?
Does that also mean that this Seagate drive also breaks at twice the speed of other manufacturers' drives? So 4 times as often? ;)
So the actuators start from the inside and the outside? I figured they just split up the arms holding the heads for different platters, making basically two drives powered by a single motor rather than one drive with a short and a long stroke section.
they split a normal hard drive actuator "pile" in two so they have all arms on the same pivot point and the separation between the two "drives" is vertical. The first X platters are served by the first arm, the second X platters are served by the second arm
1:00 woof, that's one grubby screen
I sooo want some of those haha, think I'm gonna buy some for sure for my Unraid
love the videos
The dual actuator drives in desktop form factors were developed by Quantum drives before Seagate bought them IIRC?
You remember that time Santa's Little Helper ate my goldfish? And then you said I didn't have any goldfish?
I still need to work out why my 100g network taps out at 24g :(
Yeah those dual actuator winyós are epic as f
How's the noise level of these drives?
tbh in most games even today there is very little impact on actually playing a game from an HDD. Slightly longer loading times, sure, but once you're in the game you would be hard-pressed to actually find anything wrong. I have an NVMe drive and an HDD and, to be frank, I only keep a few games on the NVMe, as with most of my games I have never noticed any difference
Not quite the case for current-gen console games (the ones not back-compat with last gen); the console-specific storage & memory architecture is a step up from PCs, which is why we're seeing these PC ports with brutal performance (and lots and lots of high-res textures too). One of these drives would make it "suck less", but it sucks pretty bad as is lol.
That said, there's still certainly plenty of great games that will run just fine on an HDD.
most cute hard drive review ever
About time 🥳🥳
Wonder what the real-world AFR is on those. From sometechguys channel (iirc) video on Backblaze statistics, the usual Seagates like X18/X16 have a much higher AFR than, say, Ultrastars. And since with large drive volumes you generally want 2-disk redundancy, even "slower" disks will still saturate 10GbE since there are enough of them in the NAS
@Level1Techs I would like to see you try using 4 of those drives in RAID 1+0 (2 for redundancy and 2 for speed). Then use VeloSSD or AMD's caching software to put a PCIe 4.0 or 5.0 drive in as a cache. Benchmark and test gaming performance and normal workloads such as downloads, uploads, maybe even file transfers.
Are EXOS drives actually louder? They are higher reliability but I've heard they're noisy.
Not really. But they make some funny noises when awaking from sleep mode. At least all my normal Exos X18 do. But in general they are pretty quiet. I have two X18 in the tower on the desk in front of me - like at an arm's length away. No problem.
Great, 10 gigabit is too slow now and my network is all still on plain ol' single gigabit. Luckily only the initial backups hurt, the incrementals after that aren't as bad...
What would you recommend for a music collection of 75 TB (consisting of 500k+ MP4 music videos alone), which take up the majority of the space? The rest is pretty much all MP3s.
I was thinking about getting the Synology NAS (the one that ends with 21 & can be expanded on), but I'm not sure about which HDDs. I've narrowed it down to the 20 or 22 TB drives, but I'm not sure which series I should go with
You're probably limited by network bandwidth, so I'd stick with single actuator drives, running at 5400 RPM to save power. Your use case is streaming single files at a time with very little random IO.
@@MarkRose1337 how would I find out how many people I can have streaming all this simultaneously? Is there some type of a rule to go by?
(( The average MP3 is 10 MB, MP4 100 MB, if that matters )) - also I'll be using a 10 Gb port hardwired & streaming the content using data or WiFi
@@carlos_mann I'd go by max sequential rate, divide by two because some of the drive is slower, and divide by two again because it won't be _purely_ sequential. A modern drive is rated over 200 megabytes a second, so you realistically will have no problem reading 50 megs a second per drive even under poor conditions. If you have six drives in RAID-5 you multiply by 5 to get 250 megabytes a second, so the drives will _easily_ serve two complete music videos a second. Multiply that by the average length of a video and that's how many people can stream at once. If each video is 3 minutes long that means you could theoretically have 360 people streaming at once.
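That back-of-the-envelope estimate could be written out like this (a sketch only; the 50 MB/s per drive and 6-drive RAID-5 figures are the assumptions from the comment above, not measurements):

```bash
# Rough concurrent-stream estimate, using the assumptions from the comment above
per_drive=50        # usable MB/s per drive under poor conditions
data_drives=5       # a 6-drive RAID-5 gives roughly 5 drives' worth of reads
video_mb=100        # average MP4 size from the question, in MB
video_len_s=180     # average video length, in seconds

pool_rate=$((per_drive * data_drives))          # ~250 MB/s from the array
videos_per_s=$((pool_rate / video_mb))          # ~2 complete videos per second
echo "roughly $((videos_per_s * video_len_s)) concurrent streams"   # ~360
```

In practice the network link and the mix of wireless clients will matter at least as much as the disks.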
@@eDoc2020 you're a genius 👏
Thank you so much!
Will these work in QNAP NAS?
any idea why the SAS video is unlisted?
because we forgot to make it live! oops!
-grant
@@Level1Techs was that a doh! i just heard all the way over here :D
link to the sas video? thanks
@@qdeqdeqdeqde it's in the description but th-cam.com/video/_jYvtv-ILd4/w-d-xo.html
Does this mean double the write/read speed of the drives? Or twice the normal performance per "side"? If it was a flat doubling of performance then it's a very good feature for big drives, especially if you need to rebuild your array.
Each side is basically its own hard drive. If your data is only on 1 side you still get normal HDD speeds. If you request data on both sides then you get double the performance.
You should get double the throughput if you stripe a RAID0 across both "sides" of the drive, at least up to the limit of the SATA interface. IOPS may also increase, but double only under ideal circumstances, i.e. when your IOPS happen to be evenly distributed across both sides. Overall just think of it as two drives. It's pretty much what it is: two drives in one package.
What about using these drives in a TrueNAS server (FreeBSD with ZFS)?
I'd love to see if games would work if you did some sort of RAID with the SAS drives. I'm planning on buying these to test that with a mirrored NVMe used as cache on ZFS. If nothing else, I was gonna buy Exos drives anyway
Games would work, but spinning rust still has poor latency. Dual actuator drives have little effect on latency, just throughput.
Next models will probably use M.2 connectors :P
Wendell, what happens if you stripe them?
It would be nice to see how they work in a surveillance system
They could increase the number of parallel streams being recorded, but that would be about it. Probably very little benefit in most situations
Why is the sequential performance higher than the single actuator drives? Maximum sequential performance should depend only on the number of heads, areal density and RPM. The single-actuator X24 has both higher areal density and more platters than the 2X18 yet the X24 has only half the sequential performance
I don't believe normal drives stripe the heads during sequential reads. If you look at a datasheet smaller capacity drives tend to have the same speeds as higher capacity drives with more platters.
@@eDoc2020 yeah that's probably it, the tracks have gotten so small they wouldn't line up in a cylinder if temperature is uneven or in the presence of vibration. Maybe it'll be like razor blades, soon we'll have 3X, 4X, 5X... !
@@shanent5793 The thing is that doesn't explain it. At the end of the arm there are micro-actuators for each head (and AFAIK even smaller actuators after that) to handle the relative inaccuracy of the main arm.
@@eDoc2020 there could be a bottleneck in the controller, it might not have enough horsepower to track and command all those serial actuators, sample the analog signals, equalize, decode, CRC, ECC, etc. Each bit probably looks different depending on the surrounding bits so it also has to pre-emphasize and equalize and the data would have some sophisticated coding and forward error correction
@@shanent5793 Perhaps. I would suspect they could just add more of those controllers on a single actuator and still double sequential performance.
Are these still rated 550TB/y transfers?
A bit off topic, but #heyWendell, are we ever going to see an Intel ARC on Linux video ?
I would like to see how the SATA version acts in Unraid
Could you raid0 a drive like this and get more read performance than you would a regular drive?
So basically it's the alternative solution to trying to build a physically different drive (which would be a terrible idea). What's the limit, I wonder?
what if you have a hardware raid card?
Good question
@@alyssalovethedj Hardware RAID does not know about the unique geometry of the drive. With the SAS version, there is a chance you might be able to coax some very simplistic topologies out of it, but you'd be best served by skipping hardware RAID altogether.
These things are absolutely ideal for creating high-performance ZFS arrays, just so long as you keep the two actuators in separate VDEVs (dRAID or RAIDZ) or aggregate the two actuators as a single striped VDEV.
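As a concrete sketch of the "separate VDEVs" layout: with four of the SATA drives, each already split into two half-capacity partitions (device names here are placeholders, not a tested recipe), the pool can be built so that no vdev contains both halves of the same physical drive:

```bash
# sda..sdd are dual-actuator drives; partition 1 = first actuator's LBAs,
# partition 2 = second actuator's. Each raidz1 vdev gets only one partition
# per physical drive, so a whole-drive failure costs each vdev one member.
zpool create -o ashift=12 tank \
  raidz1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
  raidz1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```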
sell it
@@kyubre With the SAS version, AFAIK they appear as two drives (two different WWNs), so the RAID controller will work fine, but you have to know which is which to avoid making incredibly dumb mistakes when assigning the drives to volumes
Would ZFS on TrueNAS SCALE benefit from these drives?
Heck yes
I'm in the single digit TB range and I feel kind of left out :/
What's the model number of these drives?
Ok, this one in particular is the Exos 2X18 ST18000NM0092 and it's currently $249.99 new or $239.99 recertified on serverpartdeals (new are out of stock and only 45 refurbished drives currently available). That's compared with $289.99 new or $199.99 recertified for the Exos X18 ST18000NM000J. That seems very reasonable assuming you can get your hands on them.
Quad actuator drives when?
Is that some Petticoat Junction?
Would something like Ceph be able to understand and make use of the dual actuators?
You probably just have to create two partitions on each of the drive's halves, then create two OSDs out of them. That should work just like having two OSDs on two drives.
@@Hugh_I That isn't really compatible with BlueStore. BlueStore wants the block device itself and doesn't really accept partitions and such. It's probably possible to split it in two, but not as simple as partitions. But I was thinking more of something slightly more intelligent than that, such as what he mentioned about the kernel support with a reserve queue. Considering how Ceph is highly parallel and designed for a huge number of readers and writers at a time, I think that would be a good approach, but I wonder if Ceph is compatible with that mode.
@@danieljonsson8095 hum, I'm not an expert on Ceph, but AFAIK you can put bluestore volumes on any proper block device you want, including partitions or even logical volumes. I'm pretty sure I've done that - though it is generally recommended to use entire drives (but for reasons that don't apply here, like not sharing IO for multiple OSDs on the same device).
I don't exactly know how ceph/bluestore hooks into the kernel I/O system, but my guess here would be that the low level communication with the drives is certainly done by the kernel, while ceph just hands it blocks to read/write. If that is so, I would think that all the benefits the reordering of the command queue happening in the kernel should also benefit Ceph in cases when you access both halves of the drive (both OSDs) at the same time.
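If you want to try the two-OSDs-per-drive idea, a minimal sketch could look like this (assuming a ceph-volume based deployment, that your ceph-volume version accepts a partition as the data device, and that the drive has already been split into its two actuator halves; /dev/sdX is a placeholder):

```bash
# One OSD per actuator half, so each half gets its own OSD daemon and queue
ceph-volume lvm create --data /dev/sdX1
ceph-volume lvm create --data /dev/sdX2
```

Whether Ceph then keeps both halves busy depends on placement, but with enough PGs spread across the OSDs both actuators should see work.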
We need ZFS support for these.
Add the partitions to separate vdevs manually?
it's easy enough to just partition them and use the partitions instead of the whole drive
It shows up as 2 drives. To me that says that it looks like 2 9TB disks. Can you mirror them in the OS, or stripe them in a RAID 0 configuration? If you mirror them and the drive fails, can you recover your data from the other half, or did that fail as well? Seems like a mirror would be a stupid waste of space on a single drive.
Also, buying a refurb Seagate drive is BEGGING for trouble.
The SAS versions show as two 9TB drives. For these SATA ones you need to split them manually and then recombine the halves if you want proper performance. I would only consider striping the halves. If one half fails the other _might_ work but I wouldn't count on it.
Maybe there is someone who knows what I am doing wrong or can tell me if this is even possible in Windows:
I have the SATA version of this drive and tried to make two primary partitions at 0%-50% and 50%-100% using parted in an Ubuntu live USB. Then I made an mdadm RAID0 and formatted it as NTFS. The RAID was working in Ubuntu, getting 493MB/s. Now in Windows I installed the WinMD driver and rebooted my PC, but the drive is not showing up in Explorer. In Disk Management the disk is visible with two healthy primary partitions, but I cannot change anything about them. How can I benefit from the dual actuators in Windows? I know striping on the same drive is not possible in Windows, hence doing it in Ubuntu.
One hard drive can saturate 10Gb too, but only the hybrids. Not bad that 3 of them can saturate a 10Gb link. Soon people will use Cat8+ at home with hard drives😁
Why not 2 actuators on the same disk? Maybe higher read/write rates with controller trickery.
the read heads are possibly the most expensive and complicated part, not the platters
the space is the issue. To have two actuators on the same platter you need to shrink the platters (as was done in some older dual actuator designs), and that means you are sacrificing storage capacity. That's what kills the idea, HDDs exist solely because of high capacity so any idea that reduces that is bad
Could they use triple or quadruple actuators?
yeah, they can theoretically go up to one actuator per platter, and there are more than 4 platters in those drives. They are basically splitting the read/write arms that would be controlled by a single actuator on a normal drive and controlling them independently. In this design half the arms are controlled by one actuator and half by another.
How easily they can fit the actuators to move the arms is another question.
If they can manage to squeeze the actuators in, sure. But they'd have to switch to using U.2 or something as an interface, as these dual ones already saturate the max SATA transfer speeds.
Are they loud?
Does anyone have insights as to why Seagate doesn't increase their market share in the NAND space either through acquisition or business shift?
Seagate is very dependent on HDD sales to hyperscalers.
But the pace of NAND price declines has been on a consistent trajectory.
I know NAND is a competitive landscape dominated by Samsung/SK Hynix/Micron, but at least Western Digital acquired SanDisk (which didn't work out that well) and tried to merge with Kioxia (although that fell apart at the last moment).
But WD at least demonstrated a plan to pivot away from HDD (their core business).
As long as HDD is cheaper per TB, I don't see it going anywhere.
I say we go back to one read head per track like good ole drum memory. Zero seek time rules!
That said, what about durability in a RAID? The problem with large drives in arrays is always rebuild time. However, we are now doubling the mechanical components in the drive and adding complexity to the electronics... could that increase the failure rate?
Or better yet, could we RAID across half-drives such that a RAID 5 with two drives is feasible (or a 3-drive RAID 6)? This would protect against failures in the heads or electronics specific to one set of heads, but not other failures that impact the entire drive. Would that protection be enough (assuming good backups)?
Finally... what about performance? Typically the drive is faster in the first half than the second half... does this mean one set of heads are slower than the other set?
> Typically the drive is faster in the first half than the second half... does this mean one set of heads are slower than the other set?
Looking at the promotional stuff where they show the internals, I'd say no. The first "half" has its own platters, the second "half" has its own platters.
These are basically two separate hard drives that share the same spindle motor and electronic control board.
Also I kinda dispute the "faster in the first half", it's technically true but not enough to make a difference overall with modern drives.
@@marcogenovesi8570 The "faster in the first half" is talking about regular hard drives. With a constant RPM the linear speed is higher on the outside of the platters. This is the "beginning" of the drive, as the speed is lower at the "end" sequential performance decreases.
To the OP, this drive is logically one regular drive appended to another regular drive. The performance will start high and degrade as you reach the middle LBA. Then at the next LBA it will jump back to the fastest speed and degrade until reaching the end.
@@eDoc2020 that's what I said already
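If you want to see that LBA pattern for yourself, something like this reads a couple of gigabytes at the start, just before the middle, just after it, and near the end (device name hypothetical; fio's offset option accepts percentages):
# Read-only sequential test at four offsets; expect fast/slow/fast/slow
for off in 0% 49% 50% 99%; do
  fio --name=seq --readonly --filename=/dev/sdX --direct=1 --rw=read \
      --bs=1M --size=2G --offset=$off
done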
Why isn't the "RAID0" just handled internally? Shouldn't have to set that up in the OS. It should just be a faster plug-and-play replacement for any other SATA drive, and shouldn't appear any different to the host device. This is unnecessarily obtuse.
No, please. I would rather it be part of OS installation (If Microsoft can actually make their installer a bit better)
Creating a RAID0/1 volume should be easy to do with a GUI.
I would much rather have this be part of base OS than be part of yet another buggy UEFI feature that I will have to squint at in their spec sheet to ensure they support it.
If they managed the "RAID0" internally these drives would be way more appealing to the "average" consumer as well. The best option could be if you could switch between "internal" and "external" partitioning with a switch on the drive or in software.
not really necessary in my opinion?
Yeah, the vast majority of users will only see it as a standard performance drive. Probably a lazy way to prevent excess vibrations by not informing the user they need to do some fiddly hoop jumping to get it to work.
because it would add complexity and cost and they are more expensive than similarly sized drives already. These drives are 99.99% used with RAID controllers (or software RAID) anyway so why bother
45homelab?
Triple actuator
Wonder how Unraid would see these drives
18TB for 200USD, where?
Don't you hate it when your camera focusses on the background screens and not your face... 🤣
Distributed computing.
I thought that platters used something like a RAID0 internally already, so this crushed my beliefs. Anyway, why don't drives do this already with all their heads, even with a single actuator? Are the tracks not aligned properly?
So double the failure points.
double the performance? That's nice but I'd rather double the capacity. That would even be worth halving the performance.
There are 32 TB HAMR drives out now. They have 50 TB HAMR drives in the lab.
@@MarkRose1337 Those are pretty cool indeed
PLEASE FIX YOUR CAMERA FOCUS. I can read everything in the background monitors' tabs, but your face is blurry, and i can't see any text on the disk.
Steam Cache servers are the way
This would've been a nice innovation when the WD Velociraptors were all the rage, but now that SSDs have taken over, it'll get lost in the shuffle unless you're specifically looking for mass storage.
I know that it is anecdotal, but literally every Seagate drive I have ever had has failed, so... no thanks
I still have plenty of seagate drives
The implementation is kinda disappointing. It would be cool if both actuators had access to the whole storage medium and balanced the IO load according to which head can get to the target sector first. Now it's just two dumb hard drives bolted together.
Doing that is ridiculously complicated; the control board would need a powerful processor like a RAID card and complicated firmware. Hard drives aren't in a place where they can afford that.
@@marcogenovesi8570 I don't think it's _that_ complicated. The electronics should only be twice what a normal drive is. Splitting the workload among two actuators isn't very difficult.
@@eDoc2020 "splitting the workload" is different from what OP said. He said "balanced the IO load according to which head can get to the target sector first"
That implies the controller is running complex logic to split the workload depending on where the data is and where the heads are.
@@marcogenovesi8570 Consumer drives have been reordering requests (with command queuing) to increase performance for almost two decades. I don't believe the logic is much more complicated by adding another head. I mean it is decently more complicated but compute performance has increased faster. I'd figure the logic for dealing with SMR media is more complicated.