I believe disk logic should create internal raid 0 and present it to OS as single drive taking the performance gain. Then you can add it to the pool as a normal hdd w/o concerns about how to treat your halves.
Agreed. Useless for an unraid array for example.
I wonder if unraid would count it as 2 drives towards the limit
That's exactly how the SATA version of this drive works
@@Gastell0 Exactly. The SATA version works almost like this; however, with the SATA version, you can get double the performance if you partition it by size and software RAID the two halves. Wendell did a great video about this limitation in SATA and how to work around it.
Yeah, I think that might be the direction this technology will evolve into. I believe there's a SATA version of this that behaves like that, but I don't know how it handles the data balancing. Then, maybe after that, they will split the heads even further into quad actuators and achieve 1GB/s throughput!
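For context, the partition-by-size trick mentioned a couple of comments up boils down to this: as I understand it, each actuator of the SATA version serves one half of the LBA range, so you split the drive at the 50% mark and stripe the two halves. A rough sketch only, assuming the drive shows up as /dev/sda (hypothetical device name; this wipes the drive):

    # two GPT partitions, one per actuator half
    parted -s /dev/sda mklabel gpt mkpart half1 0% 50% mkpart half2 50% 100%

    # stripe the two halves so both actuators can work in parallel
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sda2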
I appreciate you putting this video together. I have a 20 pack of these drives and this info will help me better plan how to use them.
cool! glad it will help you out! :-) I had a few people ask me about these a while back and I wasn't sure how they worked. decided to just buy one and check it out. hope this will be useful to many!
@@ArtofServer - I will likely use this with my TrueNas system but I will have to manually ensure that each half of the drive is in a different vdev.
ixSystems could really help here by adding logic to provide warnings to any unsuspecting sysadmin who’s not acquainted with the drive architecture.
That said I haven’t personally seen how the drive presents in TrueNas - it does after all have two distinct LUNs, and that should be a clue to most people to use caution and plan a VDEV layout accordingly.
@@chrismoore9997 There is a Level1Techs forum post where a guy made a script that does this for you. I can't find the link but maybe you can. Seems like A LOT of ground prep/work if there is a script to do it for you.
I'm not a system administrator, but this video was really very interesting
Thanks for watching and commenting!
If they weren't from Seagate, I would buy these in a heartbeat.
HGST also released similar technology, but I haven't seen it in the used market.
There's nothing wrong with Seagate today.
Yeah, if this was a decade ago. But honestly, you'll get random failures from warehouse handling and delivery, well outside anything to do with your purchase, rather than from anything wrong with the drives themselves. If failures were that common, people wouldn't buy spinning rust anymore.
It seems so simple but awesome! This will probably be a pretty huge advantage for HDD evolution in the future. Though I was disappointed you didn't take the drive apart to show the guts. I wanted to see how everything works together.
😂 I do still want to use this HDD for other things. I wouldn't want to risk contaminating it by opening it. Seagate has a video of this drive with a clear cover so you can see it in action.
It's been a very long time since I was impressed by an HDD, but this was actually rather cool. It would've been interesting to see tests of other ways of raiding them, like using btrfs or zfs. I think that could plausibly make a significant difference. For all I know, there are mechanical reasons why you don't get double performance when using both at once, and perhaps they're more designed for sequential throughput? That is still a great thing, even if you don't get the same performance boost on random I/O.
I definitely think it is interesting.
I really really love this technology. Hard drives are getting bigger and bigger, with 22TB being the max currently. However, if you put a 22TB drive into a RAID and start rebuilding, you'll die of old age before it's finished rebuilding (and one other drive probably will too). Having double the speed, which is very realistic when rebuilding, could cut your wait time in half. Especially if the HAMR technology allows for 30+TB drives, you need a way to make reading and writing faster. Maybe one day we'll get 3 or 4 arms?
The only sad thing about this is that it's detected as 2 separate drives... not practical in my case. Seagate has to do something so that the device is detected as a single drive, either by some software driver (idk if driver would be the correct word here) or maybe some chip on the HDD itself that fuses the "two HDDs" into a RAID0 without the OS really knowing it. Best case scenario would be if they even implemented a way where you can (at least globally) toggle how the OS should see those multi-actuator drives: either as a combined drive or as individuals.
Oh and as always thank you for doing such a great job with these explanation videos. Not too long, all the important details explained and in a very understandable way.
yeah, I think if the 2 halves were virtualized into a single LUN, that would be easier to use. Welcome to the channel and hope you find more useful content! thanks for watching! :-)
In your random read test, you used a block size of 4k, but Linux SW RAID uses a chunk size of 512k, so that is probably hurting your IOPS on that test. I'd be interested to see the randread test at 512k on sdb and sdc versus 512k randread on md0.
The point of the random I/O test is to hurt it in IOPS, testing a worst-case scenario to see how it fares in such situations. random I/O is the weakness for any spinning platter technology due to latency and having to wait for the sector to fly under the heads before you can perform I/O. I was not aiming for a comprehensive test, but just to probe the low and top ends of the performance spectrum.
nonetheless, I think if you make the random I/O more suitable to the data layout as you suggest, you would see higher throughput (each I/O has less waste), but probably similar levels of IOPS as that is bound by other factors. it could be an interesting test to do, but wasn't part of my thought process for this demonstration.
@@ArtofServer Oh, that's a fair point. I was more thinking if I used ZFS on top of this which will read the full ZFS block size even if the OS requests a 4k read to make sure the ZFS block checksum is valid. I think your test results still show very interesting behavior, and I might pick up some of these to try. Obviously I'd need to be careful (like you point out in this video) not to include both sub-drives in the same RAIDZ set.
Thanks for the thoughtful reply to my comment!
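For anyone who wants to reproduce that 512k comparison, a rough sketch of the fio runs (read-only tests; sdb, sdc, and md0 are the device names from the video, so adjust for your own system):

    # 512k random reads against each actuator LUN on its own
    fio --name=lun_a --filename=/dev/sdb --rw=randread --bs=512k \
        --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based
    fio --name=lun_b --filename=/dev/sdc --rw=randread --bs=512k \
        --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based

    # the same 512k random reads against the md RAID 0 built from both LUNs
    fio --name=md_raid0 --filename=/dev/md0 --rw=randread --bs=512k \
        --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based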
If you have been in the game long enough, you know *ALL* hard drive manufacturers produce bad drive models from time to time. So saying you don't recommend manufacturer X because of problem Y is frankly as dumb as hell. Besides, at this point there are only Seagate, Western Digital and Toshiba left standing. At work I currently have hundreds of Seagate drives in use and the failure rates are low, astoundingly low given that a couple hundred of the drives are over 10 years old.
You're entitled to your own opinions. But if manufacturer X keeps producing problems Y1, Y2, Y3, etc., at some point you realize maybe stay away from manufacturer X. I think that's a reasonable response when you've been around long enough to notice a pattern. Still thinking X makes great products after seeing that pattern would be frankly, dumb as hell. I have no brand loyalties, and if manufacturer X changes their patterns in a positive direction, I have no issues with their products and would use them. Even my favorite HDD maker HGST (now WD) was born out of IBM storage, which had produced their infamous "DeathStar" drives, which I avoided back in the day.
@@ArtofServer The point is manufacturers X, Y, and Z have all produced, and continue to produce from time to time, dud models. As such, avoiding manufacturer X because of some historical bad model is nonsensical. Seagate is not particularly worse than other manufacturers, as the data from Backblaze shows. What is more important is to avoid bad models rather than bad manufacturers, as this makes a much bigger difference by quite a considerable margin.
That took long enough. Conner actually patented a hard drive design that utilized dual actuators.
Conner was acquired by Seagate in 1996. So they revived some old IP for Mach.2?
I remember that one ... never caught on
hxxps://en.m.wikipedia.org/wiki/File:Conner_Peripherals_%22Chinook%22_dual-actuator_drive.jpg
Also, either Conner or Seagate at one point made an OEM drive pair that took 2 IDE drives and presented them as a single drive with the combined size of both. I remember seeing it for the first time on an AST system. When either drive was unplugged, it would not detect the remaining drive.
As I understood the documentation, using a dual channel cable you can independently control each half. The drive electronics just compensate with a single channel cable.
It might be interesting to try this drive in a dual port setup to see what the secondary SAS port will see. I suspect it will present a different SCSI target ID with 2 LUNs as well. Have to see if I can find a cable to do this, as I don't have any dual SAS expander backplanes for 3.5" format.
Really helpful information. Thanks for calling out the redundancy bit at the end.
Thanks for watching!
I was hoping it was doing RAID 0 internally so it was transparent to the system. But it's just two drives in one housing that can be accessed via one connection and I assume a port multiplier built into the drive. So what's the benefit if RAIDing separate drives can achieve the same throughput? It's late so I'm going to bed after watching the first 7 minutes.
You need to get a couple of drives and set up a system with a SAS 2 and 3 backplane and install the drives there in the backplane. Do the tests both with a single connection to the backplane and with a redundant SAS connection, which would use both SAS channels on the drives. Then redo the tests.
I would also set up a basic install of TrueNAS Scale and do the tests with the drives in a mirror and a RAIDZ1, checking if TrueNAS can actually handle these drives properly.
There was a big argument on the old TrueNAS forums last year, with a lot of half info tossed out, where someone bought a bunch of these drives and the pool they created only showed half of the capacity, and testing only showed half the capacity was recognized in their configuration. They wanted to know what happened to the other half of the drive capacity. The argument was never really solved, and I think the OP sent the drives back and conventional drives were installed. (Maybe you got one of them.) I believe there have been random reports of new drives in certain systems not reporting the correct capacity or acting weird, dropping known good and new drives, etc.
I need to find some dual SAS expander backplanes for 3.5" format then... my only dual SAS expander backplane is for 2.5" format servers. And I guess I would need more of these drives.... though, I don't have a need for them so not sure I want to invest in an entire set of these drives to test an array setup.
Interesting stuff! Will be curious to see how much they sell for. Any idea what capacities they go up to?
Obviously there is risk in using these with, say, ZFS, but I think with the right setup it could work out. Spread across lots of mirrors perhaps? Or maybe it'd be too complicated for its own good, and if you really want better performance you should just look to flash. Either way really neat, thanks for sharing!
@@_-Karl-_ I think for zfs mirror you would want something like ' mirror sg1a sg2a mirror sg1b sg2b ' where sg1a is drive1 1st platter, sg2a = drive2 1st platter, and document it somewhere
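A minimal sketch of that layout, assuming drive 1's two LUNs show up as /dev/sdb and /dev/sdc and drive 2's as /dev/sdd and /dev/sde (device names and the pool name are just examples; /dev/disk/by-id paths are easier to document):

    # pair LUN "a" of drive 1 with LUN "a" of drive 2, and likewise for the "b" LUNs,
    # so no mirror vdev contains two LUNs from the same physical drive
    zpool create tank \
        mirror /dev/sdb /dev/sdd \
        mirror /dev/sdc /dev/sde

Losing one physical drive still degrades both mirrors at once, but neither mirror loses both of its sides.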
But can it go at 15000 RPM? I once saw a hard drive going that fast.
And can they make an SSHD (hybrid drive) version of it?
If they were to combine this with their tech for 240 Terabyte hard drives, high capacity flash memory technology (for the hybrid drive stuff), and ZFS…
I think it would be a game changer for large datacenters.
The good old days of 15K RPM drives are over my friend... they were great room heaters and made some awesome ambient sounds.
Another very insightful presentation. Thank you
Glad it was helpful!
Interesting, I'd probably mirror each drive within itself, then some form of raid array. This should bring the iops up since the mirror can then access parts of the files on each actuator stack. It's a theory and I may need to look into this for my next nas hardware refresh. Also need to contrast cost and performance against SSD.
I'm NOT a fan either!! My home is in Santa Cruz, CA, and Scotts Valley, CA (where Seagate is located) is just 7 miles up the road (off of HWY 17). They have ALWAYS had bearing problems! SUGGEST: Use *CTRL-L* to clear the screen before each command you use. It's sometimes hard to see the bottom of the screen!
Oh, Seagate has had more problems than just bearings. I can't count the number of times I've tried to help someone with their HBA controller only to discover some weird issue with a Seagate drive. So many bugs in their firmwares...
thanks for the suggestion! appreciate it! :-)
@@ArtofServer - True. To get some or (maybe) all data back, I used SPINRITE from Gibson Research! Steve does some incredible code, all in ASM! I had a client at a banking institution, and they had a small 40GB drive that wouldn't boot. So I booted it with SPINRITE and let it run overnight, came in the morning, removed the CD, and it came right up, with all data recovered!
I'm using smartctl on both Proxmox nodes and TrueNAS Scale, and neither shows messages like the ones you have. Did you configure anything to get these (very helpful) messages?
No. But keep in mind that SMART output for SAS drives vs SATA drives are very different. In this case, it was a SAS drive. If you're looking at SATA drives, the output will be different.
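For reference, the same command covers both cases (assuming the drive is /dev/sdb; the device name is just an example):

    # full SMART/health dump; the output format depends on the transport
    smartctl -x /dev/sdb

On a SAS drive, smartctl prints SCSI-style log pages (error counter logs, grown defect list), while a SATA drive prints the ATA SMART attribute table, so the same command produces very different-looking output.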
How would the SATA version of this drive work in a ZFS vdev, with, say 8 of them in a RAIDZ2 config?
I believe it presents itself as a single drive, which means there wouldn't be any worry regarding a one disk failure compromising the redundancy like the SAS version (in this case, essentially turning it into a 7-disk RAIDZ config).
Currently I'm able to get these disks at a lower price compared to the normal X18 18TB Exos, and the faster throughput also seems nice on paper, but worst case scenario, if they somehow end up with speeds akin to the single actuator drives, would they introduce more points of failure, for example?
On a sidenote: the SAS version seems perfect to build a two-vdev RAIDZ2 config with each LUN on a separate vdev, which would essentially give it even more redundancy in case only half of the disk fails, the way I see it.
I haven't gotten my hands on the SATA version so it's hard to say. If the SATA version basically behaves like a single drive, backed by the dual actuators, then I think you just treat it as a single drive and enjoy the benefits. I think Seagate should have just added the logic in the firmware to treat these drives as a single drive - that seems to make the most sense to me.
Soo, all this made me wonder. If they show one drive with two parts, and the drive fails, both parts will be gone, because they share the same electronics and so on. So if you make a ZFS pool with Z1 and two parts are gone, will this lead to a broken pool? Would it be safer with these drives to make a Z2 pool to counter the split personality of these drives...
Even with a raidz2/3 pool, I would not put more than one LUN from the same drive into the same vdev.
I wish they would continue to grow this technology to separate head movement per platter and bring it to regular SATA drives. Maybe it would already be joined together as RAID0 inside the drive's FW.
Interesting gimmick, I’d love to see how it fares with btrfs, but I’ll admit that I’ve been out of the loop when it comes to servers for a few good years. Most stuff I touch these days are SSDs. Kinda thinking it would have more than one failure point with extra actuators. Reminds me of those WD Raptor drives from back in the day.
Mirrored stripes may work well with this setup. But one issue for SOME people is that if their OS has a limit on how many physical disks they can have (Unraid), then this would count as 2 disks towards their license. And I definitely wouldn't use this for parity.
Use SATA version of the drive instead of SAS, it's effectively a transparent raid0
Good point on the drive count license issue! Thanks!
the mach 2 drives are great for unraid systems, makes a world of difference
are you currently using Mach.2 drives? if so, how many?
It's not that new an idea. I recall someone experimenting with an actuator on each side of the drive so you had two heads for each platter. It was passed over because the platters had to be smaller to allow the second actuator on the other side to keep the drive at a standard size. They should have each platter's head move independently. You could (internal to the drive) treat each platter as a single 'drive' in a RAID array.
That's interesting. If you happen to find an article with more details about that, please link it here. I'd love to learn more. Thanks for sharing! :-)
Those were Conner Peripherals *Chinook* drives, named after the Boeing CH-47 Chinook tandem rotor helicopter, the platters were normal sized but the frame was 5.25"
In 1996 Seagate acquired Conner with all of its patents.
Pretty sure this is almost 3 decades old technology now... Seagate just happened to be the current holder of the patent and revive it!
2:10 - what you're talking about is basically "raid 0": since the two units can be seen separately by the system, you can configure them as such... overall, this disk looks to me like a compromise between disk capacity and speed. If you don't mind the risk of losing all your data in case of failure, of course.
Could it be set up as "raid1"?
@@neins If the two units are visible separately, it should be possible. You'll gain redundancy, but you'll miss the benefit of the speed gain from parallelizing writes and reads.
The mdraid array was created with a 512k chunk size (the amount written to each drive before moving on to the next), so maybe it's worth setting a much lower chunk size to test 4k block size performance.
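For anyone who wants to try that, a rough sketch (assuming the two LUNs are /dev/sdb and /dev/sdc as in the video; recreating the array destroys any data on it):

    # stop the existing array and recreate the stripe with a 64k chunk
    # instead of mdadm's 512k default
    mdadm --stop /dev/md0
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64K /dev/sdb /dev/sdc

Then re-run the same 4k randread fio test against /dev/md0 to see whether the smaller chunk moves the IOPS numbers.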
Yeah the only concern I would have is motor failure, which takes out "2" drives. I think laying them out in RAID50 might work?
I don't think I would do anything with a single parity setup though... i would choose double parity (RAID6) at the minimum.
This is super interesting technology.... Not really interesting for a homelab but certainly for the enterprise space.
Also, funny how the world works: you say you don't like Seagate, yet for me, I've had every drive under the sun fail on me except Seagate.
Specifically IronWolf and Exos; I've had only a single failed 2.5 inch Barracuda (and that one wasn't even mine, I just had to fix it for data recovery, which was about 80% successful).
On the other hand I have:
2 failed WD Elements drives, both just outside of warranty - one had complete data loss, fortunately not critical data, that's why there was no backup
7 failed WD Reds from my uncle's NAS, all of them after about 2-3 years (uncle always pairs up WD and Seagate in the same array - yes, not good, I keep telling him - never had a failed Seagate yet; those drives are like 10 years old now)
Then there is my NAS, with an assortment of Seagate drives:
3 IronWolf drives, 7 years and going,
9 Exos drives - 4 years and going
I am not one for brand loyalty, nor am I saying that your experience is invalid. I just found it interesting how different my experience is. To be fair, I probably have way less experience in total, as I don't have any actual servers; I just build from off-the-shelf hardware for whatever I need at home.
Very interesting.
I think so too!
I asked iXsystems for these drives when my friend's company was buying storage from them, but this is not what they put into enterprise storage (yet); I was told they are not well tested enough to put into production.
It may be true that ixsystems haven't done a lot of testing of these types of drives. But these drives came out of a data center from a few years back, so there are enterprises that apparently have been using these drives for some time. as mentioned in the vid, care needs to be taken when planning the geometry of your ZFS vdevs when using these drives.
@@ArtofServer Yeah, but in the end it is interesting, as 4 disks can saturate 20G easily :D By the way, thx for your videos, I am learning a lot.
As others have said, I really don't like this design from a fault tolerance standpoint. You really need to have greater than RAID 6 or RAIDZ2, as a single drive could bring your system precariously close to a failed array. Unless you stagger them, so each disk is on a different zpool. Even then it increases the odds of degrading multiple pools at the same time. I'd rather see this technology integrated into a single drive and increase the overall throughput through SAS. I mean, you mentioned, or actually demoed, how you can do a RAID zero through the operating system. And while that's nice, it'd be nicer to see that simply done at the disk level, bypassing the need to mess around at the operating system level. Because I'm thinking that this might be a nightmare on something like TrueNAS Scale.
You have to be very careful how you assign the LUNs to the vdevs in ZFS / TrueNAS scale / etc. If this technology was more widely adopted, you could implement a check to make sure all drives assigned to any particular vdev are not from the same SCSI target and issue a warning when trying to assemble such a pool. And as you said, if one of the 2 LUNs has problems, when you pull the drive to replace, both LUNs will go offline, affecting 2 vdevs even if you carefully assign the LUNs to different vdevs. And yeah, if I were to try this, nothing less than raidz2.
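Until a check like that exists, a quick manual sanity check is possible (a sketch; it assumes lsscsi is installed and that, as shown in the video, each dual-actuator drive appears as one SCSI target with two LUNs):

    # print any host:channel:target that exposes more than one LUN -
    # those are the dual-actuator drives whose LUNs must not share a vdev
    lsscsi | awk -F'[][]' '{ split($2, id, ":"); print id[1]":"id[2]":"id[3] }' | sort | uniq -d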
The video is dark. Would you adjust it to be brighter for the next video? Thanks
Which part? The overhead camera or the screen recording? Thanks for letting me know.
@@ArtofServer Nope, I have a higher-end content creation monitor set up almost perfectly for everything, and the video was not dark; it looked great.
@@ArtofServer the terminal, dark theme with green is a bit hard to read for me.
Hey, thank you very much for the video. Is it possible for you to post the command lines used in the video? That would be awesome 👌 Superb. Thanks.
Thanks! What do you mean by the command lines? Are you not able to see them in the video? Or do you want something you can copy & paste?
@ArtofServer
Correct, I am able to see them in the video, but I was wondering if I could just copy and paste them and save them someplace to try later.
Of course, if possible. Thanks.
Totally agree with you. Seagate drive quality for the last 5 years or more has been very poor. I use generic Seagate 2TB drives in a NAS at my company and their lifetime is a horror.
Yeah, sorry to hear it! I know the pain because I talk to thousands of people every year about their storage server builds, and so many people run into issues with Seagate more than any other brand.
@@ArtofServer And yes, many years ago the Cheetah drives were THE reference.
How is the performance on that new fangled cpu you got there lol
It's fantastic! Thanks! :-)
I love SAS
me too!!!
Isn’t the idea of this that the data is spread across both areas by the drive itself?
That would probably be the next evolution of this technology. But as it is, no. Thanks for watching!
@@ArtofServer It was interesting to see it show as two separate drives.
Btw, your green font in the terminal is way too dark for YT. :)
😊👍 THX
You're welcome 😊
old multi-head tech, you could move each read head independently.
Oh well, I thought you would open it....
two drives in one like a RAID 0? how much faster can one get if you RAID 0 a RAID 0? 😱
They could've just used two heads on different sides of the drive. A bit more expensive but a huge missed opportunity.
Interesting, but I still don't trust Seagate with my data. As you touched on, care would be needed for ZFS use as the potential for failure is higher if you don't aggregate different VDEVs across the physical device.
I would agree. Although I think Seagate was the first to implement this, I think HGST also have dual actuator products. I just can't get a hold of them yet...
Make a video on how to turn an HP Z840 into an AI server running Ollama.
what does that have to do with this video?
@@ArtofServer we need the same application done in th-cam.com/video/Wjrdr0NU4Sk/w-d-xo.htmlsi=A5xnU4gXFjufkMHZ
Can I split these two parts into 2 different vdevs, combine them with other similar dual actuator disks to form RAIDZ in one pool, and still get the parity I need? Another question: since it still occupies a single SATA/SAS port, will the HBA card channels be reduced to 1/2 when attaching this kind of disk?
@@_-Karl-_ The answer to 1 is in my estimated result; I am happy with that risk/capacity & bandwidth efficiency. IOPS for SATA 3Gbps is not my concern since I only care about big files on this kind of drive. The only problem is my HBA has 4 sets of SFF-8747 split into 16 SATA/SAS drives but seems to only occupy 8 channels of PCIe links; I'm wondering how many of those channels these dual-actuator drives actually occupy.
I see this as incredibly dangerous... Imagine you are running your ZFS pool as RAIDZ1 and the controller or motor dies on a Mach.2 unit; this means that 2 "drives" drop out of the zpool and data loss occurs. Whilst it's true you could build your pool around this feature, it's a mistake just waiting to happen.
I do think it requires careful consideration when used in a parity RAID scheme.
@@ArtofServer OMG! Great catch. That is a nightmare that WILL, not might, happen.
RAID 6.
You can just set the drive up as an LVM raid0 and use that as its own drive in ZFS. Works great; I've been doing it and get great performance out of it.
The drive will fail as a unit, as a single drive :)
@@johnpyp Ya, I would not do that... it's probably better to create RAID X of striped pairs. Remember, in ZFS mirrors/stripes/zraid are also vdevs themselves and you can use them to construct other mirrors/stripes/zraid. Anyway, my point is, it's just an accident waiting to happen where some sysadmin is going to forget that /dev/sda and /dev/sdb are effectively the same disk...
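For reference, the LVM striping approach mentioned above could look roughly like this (a sketch only; the device names and the vg_mach2/mach2 volume names are hypothetical, and this wipes both LUNs):

    # turn both LUNs into LVM physical volumes and group them
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_mach2 /dev/sdb /dev/sdc

    # one striped logical volume across both actuators
    lvcreate --type striped --stripes 2 --stripesize 512k -l 100%FREE -n mach2 vg_mach2

    # then hand /dev/vg_mach2/mach2 to ZFS as if it were a single disk

Whether layering ZFS on top of LVM is a good idea is debatable, as the reply above points out, but it does make the whole drive fail as one unit.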
If we want to see real performance in mechanical disks, we must use the two-head technology for a disk group patented by Seagate. Fitting two disks into one box just saves space.
If we can use two or more read-write heads for a disk group, then we can start talking about real performance in mechanical disks.
This is just a small vaccine for the survival of mechanical disks; we need real solutions.
Seagate has such a patent? Have they ever prototyped a HDD based on that IP?
@@ArtofServer If YouTube deleted my previous message, you can search on Google; Tom's Hardware already made news about that: "seagate dual head hdd patent"
The question is, does it make sense to theoretically have twice the IOPS? If it brings higher holding and maintenance costs, it will be more difficult to compete with ultra-high IOPS NVMe SSDs. I hope for ultra-high capacity and ultra-low power consumption (motor rotation speed). I wish HDD manufacturers would put Intel 3D XPoint NVRAM on the hard drive's controller board, whether as a buffer, as cache, or for storing metadata (the data used to describe data on the disc).
I like the concept but it would be price mostly that would make me get these. It's still a single point of failure.
Price per TB? On the used market, it is pretty competitively priced.
You are doubling the failure rate?
Sounds more like half than twice. Sorry Seagate. Come back when you have two independent sets of heads and that will really impress me. How nobody has done that yet is beyond me. Heads on both sides of the drive. You could put two independent controller boards on it too. For use with cluster file systems from two different hosts. Or use the multiple paths for more throughput. I bet you could physically put three sets of heads on the same platter, 120 degrees apart, though at that point you have no hope for it to resemble the standard form factor. I wonder if that would increase heat a lot.
Your suggestion gave me flashbacks from FCAL. Except FCAL was less complicated. Lol
Do you mean something like Conner Peripherals "Chinook" HDD?
@@WolfGamerBohumin Yeah, exactly that, but with modern RPM and caching...... oh wait, that would be the problem. Separate caches would corrupt each other's disk contents. So I guess you couldn't have completely isolated PCBs in any practical way. But MAYBE you could still have two head sets, and independently articulating arms on each set to improve throughput.
Perhaps even put multiple heads on each arm, so that one can read the inner 55% of the platter and the other the outer 55%, and each arm now only needs to move half as much
But I bet that would increase heat a lot from more friction and more coils moving parts.
SSD it is, I guess.... it just still feels a little hard to swallow what SSD costs in the dozens-of-TB range today compared to HDD.
So HDD manufacturers are deliberately crippling their products - for double the speed you do not need a separate actuator, you only need an ASIC which allows connecting more read/write heads! The surface bit rate of this Mach.2 drive is exactly the same as on a conventional drive (or even less, if this needs 2 servo sides compared to one). So why does no vendor have a higher performing chipset? All they do is mux all the heads (like 10+ now) into like 2-3 channels for the ASIC.
The latest thing is the Mozaic (HAMR) drives, which heat the platter to allow for more data density. That is more dangerous because you can heat a magnet to the point that it permanently loses its magnetic properties.
has there ever been a PoC on such technology as you described?
double the failure chances from 2 heads too.
not really. the same number of heads as traditional drives. they just move independently in 2 sets. the issue is that if one fails and you have to replace the drive, both LUNs, including the other one that may not be defective, are removed from the system.
Oh hell no