The power supply pulls 14.9 watts with no drives or anything else plugged in, and as noted in the video the Zima board is powered by its own power supply.
I didn't see that in the video, unless I missed something? If so, that's quite a good result. There is some video corruption around 1:56; maybe the note didn't make it into the export.
@@anthonydiiorio The PSU hides the power savings from the 4 disks then, because of its own idle power draw. And I'll argue that's a flaw with this test, as it's not a direct test of the drives but of the PSU as well. The 4 disks on their own should draw 16 watts less when fully spun down; we only see about a 9 watt reduction (26.6 to 17.7), and the rest is hidden by the inefficient PSU. Also, it would be nice if the test were transparent about which firmware idle state the disks are in. In a real system the savings are bigger than this test demonstrates.
@@tx2gnd The point of the video is to show what is saved at the wall by stopping the drive spindles. Power supplies have a draw even when not under load.
In my experience, we have been asked to spin drives down to save power during off-peak times versus leaving them running all the time at a constant temperature. Anecdotally, I've noticed more drive failures in the spin-up/down group of NAS units I have installed and managed over the years, so I leave mine spinning all the time. Even my off-site one, which only gets backed up to once a day by 3 of our NAS units in the family, is going on 8 years with 0 drive failures. The NAS units in the family that get used all the time for docs/VMs/photo backup and storage etc. are seeing 8 years or more before drive failures. Occasionally we get one that fails at year 5, but that's really rare. This is my testing over the last 20 years and 10 or so NAS units in total.
For the longest time I didn't spin down my drives, but I've recently reorganized my storage so very few things actually need spinning rust, and most is in flash somewhere. Another problem was the dreaded issue that SMART reads for some reason keep them awake, but I digress. The 6-drive home server normally uses 95-ish watts with them spinning, and this drops to around 60 now. Electricity is relatively expensive here at over 0.40€/kWh, so this saves over 100€ per year. Should've done this a long time ago.
More than watts, it's heat and noise when you start to have a fair amount of drives. I moved from TrueNAS with an always-on RAID array to Unraid with selective spin up/down. The difference was massive on the heat side (about 10°C in my ventilated server closet).
We want to save energy, so the term watt-hours should have been used at some point. Instead we only talked about watts, which is instantaneous power. I would have expected a result something like: "With the drives spun down we are saving X watt-hours per day, but a drive powering up requires more power for X number of seconds, therefore to break even the drive must idle for X seconds/hours/days on average before each wake, and anything longer than that is savings." So I don't think we have learned anything useful here as of yet.
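The break-even calculation this comment asks for is easy to sketch. Note the figures below are assumptions for illustration only, not measurements from the video:

```python
# Break-even idle time for a spin-down, using assumed figures.
spinup_power_w = 24.0   # assumed extra draw during the spin-up surge
spinup_seconds = 10.0   # assumed duration of the surge
idle_w = 5.3            # assumed draw while idle, platters spinning
standby_w = 1.1         # assumed draw while spun down

# Extra energy one wake-up costs, in watt-hours
extra_wh_per_wake = (spinup_power_w - idle_w) * spinup_seconds / 3600
# Power saved while spun down, in watts
saved_w = idle_w - standby_w
# Idle time needed for the savings to pay for one spin-up
breakeven_s = extra_wh_per_wake / saved_w * 3600
print(f"break-even after {breakeven_s:.0f} s spun down")
```

With assumptions in this ballpark the break-even is well under a minute, which suggests spin-up energy rarely dominates; the wear-and-failure-rate question discussed elsewhere in the thread is the stronger objection.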
I've been using sleep with my drives in my various systems for decades now, including my current TrueNAS Scale systems. I've just set the spindown timer to 2 hours so as to prevent constant spindown/spinup cycles -- if 2 hours pass without activity, it typically means no one's at home or it's nighttime, so it's OK for the drives to go to sleep. With a 2-hour timer, there's enough activity during the day that the drives keep running and won't spin down needlessly.
This is wrong. Your hdparm didn't spin down the disk, just disabled electronics (the Idle_A state). The Exos defaults to Idle_A; you need to use the Seagate SeaChest utilities to enable Idle_B, Idle_C or Standby. Then you'll get as low as 1.06 watts idle for each drive. See user manual section 2.5.1, Table 6, page 12 (manual rev. F). Also see section 2.5.4 on what each idle state represents and how to set it.
@@LAWRENCESYSTEMS They would only do so if you've set the firmware to another state than Idle_A first. And the 17.7 watts indicates that the drives are still in Idle_A and not Idle_B, Idle_C or Standby (1.06 watts per drive).
@@LAWRENCESYSTEMS I rewatched the video. I can't find where the 14.9 watts is mentioned? Anyway, it doesn't make sense. I do understand that the drives have their own PSU. The only numbers you gave us are that the idle of 4x Exos is 26.6 watts, and after your hdparm -Y it's 17.7 watts. My point is, 4 Exos X16 at full idle (motor off, as per firmware idle state "Standby") should according to the manual only draw 1.06 watts each. That should be a reduction from 26.6 watts to about 10 watts for the 4 drives. So something isn't right with this test. You say the drives spun down, so not Idle_A or Idle_B. But Idle_C is only reduced RPM, not fully motor off, and still uses 2.47 watts per drive. And yes, it takes a long time to recover from Idle_C. Please just use the SeaChest utilities from Seagate (a Linux CLI tool) and verify the enabled idle state in firmware. I gave you where to look for the info in the user manual. You can use the tool to verify that the drives are in the right idle state too. Point is, you should see a reduction from 5.07 watts per drive to 1.06 watts per drive, and the numbers in the video don't show that. I'll leave it at this as there's no point arguing over it. Just redo the test with the right tools and all should be clear.
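The arithmetic in this exchange can be checked directly. A small sketch; the per-drive wattages are the figures quoted from the manual in the comment above, not independently verified:

```python
# Per-drive figures quoted from the Exos X16 manual in the comment above
idle_a_w = 5.07    # Idle_A draw, watts
standby_w = 1.06   # Standby (motor off) draw, watts
drives = 4

measured_idle_w = 26.6    # at the wall, all drives idling
measured_after_w = 17.7   # at the wall, after hdparm -Y

expected_after_w = measured_idle_w - drives * (idle_a_w - standby_w)
print(round(expected_after_w, 2))                      # ~10.6 W if Standby were reached
print(round(measured_after_w - expected_after_w, 2))   # the gap the comment is pointing at
```

If the manual's numbers are right, roughly 7 watts of claimed savings are unaccounted for in the video's measurement, which is the commenter's point.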
I think the wear and tear depends on the use case. I've set mine up to sleep after 3 hours of no activity, which saves it from frequent spin-down and spin-up cycles while also saving on wear during longer inactive periods.
I have an Unraid server with 20 drives, and I have the drives set to spin down after 30 minutes of no activity. The server pulls about 150W when the drives are all spun down and over 270W when they are all spun up but idle. It's mostly used for Plex, and with the way Unraid works it only needs to spin up the drive it's reading from, so I ensure that each show's episodes are on the same drive together so it's not jumping from drive to drive. I have had no problems with excessive drive failures since enabling spin down.
We had mailing servers and machine computers running SCSI 24/7. They only shut down for total power loss, and lasted 10 years. We used unix5 on the machines and Windows Server on the Dell server. Staying powered on was more critical than saving power, or you wouldn't get your credit card bill, gas bill, electric bill, or in some cases your paycheck or your tax return.
I run the latest TrueNAS with 20 HDDs on a 12-core 7900 CPU w/128GB DDR5. I spin down the drives after 2 hrs of inactivity, which seems a reasonable compromise between power savings and wear and tear. Within +/- 10%, my idle power usage is 100W and under reads/writes is 200W (convenient that the numbers work out so well). I keep my NVMe drives always on (of course), so no problems running Docker apps, which only write to these. It is a good compromise. No bad HDDs after 3+ years.
Interesting topic. This is something I dealt with extensively, not only because it saves power but also because I have my home lab at my desk and I need it to be quiet. My home lab, one ATX Fujitsu Siemens with an i3 6100T and an Optiplex MFF with an i3 7100T, will idle at 37 watts. It only spins up my disks when doing a backup once a day. I have 4 2.5" HDDs and 3 3.5" HDDs; when they all run simultaneously they consume double or triple the wattage. With my spin-down measures my home lab costs me about 120€ a year, so I would say it's worth it. Especially when you use (old) consumer-grade disks.
I've been running 3 8TB (7200rpm) IronWolf Pros for over a year now with "hdparm -S 2" (spin down after 10 seconds of no activity). This takes my overall server power draw from around 38W at the wall to 16W, as measured by a smart plug in Home Assistant, saving me 22 watts, or about 7.3W per drive, or ~192 kWh each year in total. The spun-down drives are my media library, so there is a noticeable delay of about 1-3 seconds when I need to access this media, but that is something I've decided to live with.
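The annual figure here is easy to sanity-check, assuming the 22 W saving holds around the clock (the commenter's own numbers):

```python
saved_w = 38 - 16                          # wall-draw reduction with drives spun down
kwh_per_year = saved_w * 24 * 365 / 1000   # watts -> kWh over a year
print(kwh_per_year)                        # 192.72, matching the ~192 kWh claimed
```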
Best is to use mergerfs with SnapRAID for media only. That is the best of both worlds, to be honest. Only a disk that has data on it is woken up, and only when someone is watching or importing data. Everything else on all my other servers is on NVMe drives.
Two identical backup systems would be a good test 🤔 One that always spins and one that puts the drives to sleep. Identical backup operations for both systems, and a whole lot of time…
For user-facing systems, we don't spin down the drives. After some experimenting, we got lots of user complaints that the server was "always" super slow. Totally different story for backup and non-user systems.
I do it a different way:
5:00 - wakeonlan the NAS
5:15 - perform all backups etc. from Proxmox (fully on SSD) to the NAS
6:00 - perform all backups from all laptops (Time Machine)
7:00 - replicate the NAS to a remote location
9:00 - shut down the NAS
This NAS draws around 50W while working, 0W while shut down. So 6h a day at 50W and 18h a day at 0W, which is around 900Wh saved per day. Plus, Proxmox takes much less power when no backup is happening.
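The saving in this schedule works out in watt-hours per day rather than watts. A quick sketch using the schedule's own numbers:

```python
draw_w = 50                  # NAS draw while awake, watts
on_hours = 6                 # awake per day
off_hours = 24 - on_hours    # shut down, drawing ~0 W

saved_wh_per_day = off_hours * draw_w               # vs leaving it on 24/7
saved_kwh_per_year = saved_wh_per_day * 365 / 1000
print(saved_wh_per_day, saved_kwh_per_year)         # 900 Wh/day, 328.5 kWh/year
```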
I did experiments like this when I worked at NetApp. It was a long time ago... back when 4TB drives were the largest drives ever made. I didn't see any difference in longevity. It was the quality control of the manufacturer that determined longevity more than anything... that and temperature. We experimented with sleeping drives because the cost of running our lab each day with HVAC was astronomical, thousands of dollars a day.
I have an 8-HDD, 5-SSD setup in a 6-core 11500 NAS. One drive on its own might get accessed over the span of the day. The entire server idles at ~62W, GPU and all. If I ask the drives to stay spun up, I'm looking at about 110W. My other server used to run off a 65W chip and had a system idle of 60W with 14 disks; with the disks spun up it would idle at 144W. What's more important, though, is that it was not striped storage (other than the SSDs). The spike in power doesn't mean anything for the power bill; it's transient, and I feel like some people are going to take this as an 'um-actually' later. (It's good that you mentioned it, but I was hoping you would mention it in regards to wear.) If you're running striped storage, it's a totally different situation to a JBOD. When you ask that striped pool to sleep and then wake for writes, you have to deal with the amplified vibrations from all the drives waking at once, plus the huge current sag. In a JBOD scenario, however, where each disk is written to individually, or just X drive + parity drive, it makes sense.
Our customers were given rights to tape backup on their main system; we were a closed-loop network. Unfortunately, they backed up every six weeks, something their IT management decided on their own. The problem was that some of that data needed to be accessed a year or two later. They kept that for court cases of fraud, since we imaged every letter and document that went out. Fortunately for them, I was copying to encrypted CDs once a month and handing them to the manager, which saved the company millions of dollars against lawsuits. We did have SCSI failures after 10+ years, which mostly sounded like dental drills, very noticeable. I had 5 separate unique drives swappable and 5 computers ready too. Imaged from 2004 to 2018.
My Plex server has four Exos 20TB drives. With the drives spun down, the server pulls about 45 watts, with them working it's about 62 watts. It would probably be lower if I bought a more efficient PSU.
I am sure other power meter devices are available, but it is curious that "Kill A Watt" devices don't seem to be on sale in Europe. A pity, since those look nicer to use than the generic ones available here.
This comes at just the right point in time; I was about to buy 4 drives for my home NAS. I would also be really interested in how ZFS reacts to drives being spun down, or how to tune it to allow longer timeouts. Or will a read/write NVMe cache let the drives respond later? Moreover, the noise floor of Exos vs. e.g. IronWolf drives in different power states is very important to me. I can read the dB values from the data sheet but don't know if I will hear it while sitting next to it 😅
I have an array of 20 12TB IronWolf Pros using ZFS with a 2TB NVMe drive as read cache, and noise and power consumption are also considerations for me. I have my drives set to go beddy-byes after 30 minutes of no access. When they are sleeping and I access data stored in the read cache, the drives don't need to spin up. Though sometimes they do and sometimes they don't; I think it's to do with a specific setting dictating the period after which TrueNAS should check for a new version of the cached files, or maybe it's something to do with specific file types that check for new versions. But most of the time when I'm accessing media files, the drives will not spin up. Sometimes when accessing documents the drives will spin up, but not always. All the important data I use every day is stored on an SSD array, so access is fast, random writes and reads are pretty good, and there is rarely any delay in access times. In terms of the write issue that was mentioned, with ZFS it is not an issue, as RAM is used as write cache. So files copied to the NAS when the drives are spun down are written to RAM first and then to the drives once they wake up and are accessible again, which is a big part of why ZFS is so great. Overall, considering my spinning rust is only cold storage that is not accessed very often, putting the drives to sleep when not in use is absolutely saving me a bunch of money on the power bill. Literally hundreds of watts, especially considering I have 20 drives compared to the 4 used in this video.
Wow, what a coincidence: yesterday I calculated the power consumption of 10 HDD (3.5"), SSD or NVMe drives. And here's a surprise: 1 NVMe disk is fast (Kingston enterprise) but can also consume 14W of power! For comparison, an SSD (also Kingston enterprise) consumes 2W peak. 🙂
A full shutdown is definitely the best way. I have my NAS set to shut down around 10PM and the BIOS configured to turn it on an hour or two before I'm "up and at it".
That's why I ended up going back to mergerfs+SnapRAID. Most of my data does not need to be spinning around 24/7. I, for example, cannot sleep all drives: I have about 60+ services running on my servers, so they would be accessing, reading and writing 24/7 if I used a one-stop solution. That's why only media is on HDDs (mergerfs) and everything else is on an NVMe RAID. Then I can benefit from each drive sleeping separately when not in use.
I think this test is a bit too narrow in scope. I understand you're trying to refute a specific claim about wattage, but there should be a lot more to it. As another commenter mentioned, there's going to be a significant noise and temperature difference even with a small number of drives, where power savings won't be a big factor. What should be looked at is the option of having both SSD and HDD in separate arrays, the SSD array always on and the HDD array on only at certain times. That should be easy to implement and test in two different NAS systems; it might be worthwhile to also test a jugaad of this in a single system. In a scenario where you might want a few tens of TB "to exist" but don't need more than maybe 10TB at constant availability, the total difference can be significant.
@@rpungello Spinning up and spinning down the drive definitely wears out the bearings... as to how much, and whether it will keep the drive from functioning, that depends on the technology, its age, and a bunch of factors that are pretty hard to consolidate.
As far as I know, no one has actually tested it on a large scale. People often cite datacenters, but datacenters want their hardware to be in use all the time, so they won't be powering down like that. Sure it'll wear the drives, but does it matter? No idea. I used to run my NAS with them powering off and can't say I noticed any extra failures rather than the usual, but that's only one guy with 15 drives. We need someone with many (hundreds?) of drives to do a proper test, but they also need to not care about using the start/stop half of the drives as they probably won't want drives they're making use of stopping randomly.
@@PeterBrockie Aye, I oversaw a few hundred racks for an oil and gas company worth billions. We had enough NetApp storage shelves (30 racks total) and thousands of drives that I was able to perform a study on the failure rates of the various brands of HDD (HGST, Toshiba and Seagate) with pretty conclusive results, but nothing was ever spun down or put into a low-power mode to test something like this; it was maximum throughput at all times. The only time power savings were taken into consideration was when we were looking at server refreshes and what hardware could be consolidated down into virtual systems from physical hosts. Power was relatively cheap; it was rack/facility space in the data center that really cost you an arm and a leg if you didn't own the building itself. Honestly, it could go either way. You usually see the highest power draw and most abuse taking place during POST, but there are so many other factors to control for in this type of test; usage patterns of the drives and heat/vibration over extended periods of time are usually the biggest causes of wear on a drive. If your NAS doesn't see consistent high utilization, powering it off or putting it into a low-power mode is probably the play, especially if it's a home lab where you're paying marked-up electricity rates and not getting power at wholesale.
1:39 It'd have been easier to just type: hdparm -Y /dev/sd? -- the question mark is a single-character wildcard, matching all drives that begin with /dev/sd and are followed by one character. It wouldn't match e.g. /dev/sda1 since that's two characters after /dev/sd. Just leaving this as a hint for people who like handy, quick CLI tips.
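To be precise, the shell expands the `?` glob before hdparm ever sees it; the same single-character matching can be demonstrated with Python's fnmatch (the device names below are just examples):

```python
from fnmatch import fnmatch

# '?' matches exactly one character, so whole-disk nodes match
# while partition nodes like /dev/sda1 (two trailing characters) do not.
candidates = ["/dev/sda", "/dev/sdb", "/dev/sda1", "/dev/sdb2"]
matched = [c for c in candidates if fnmatch(c, "/dev/sd?")]
print(matched)  # ['/dev/sda', '/dev/sdb']
```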
With my Synologys getting Drive sync, Hyper Backup, and a slew of BitTorrents, I think maybe I have 2-4 hours a day when MAYBE they aren't active. Turn off the lights when you're not in a room! I do have Sonoff S31s running Tasmota on most of my house, especially now with the holiday lights. My switch rack with 3 switches, fiber modem, router and an oscillating fan uses more power, and I'm not shutting those down. IMO!
I would save a lot of power and gain a lot of comfort if my UniFi gateway could send Wake-on-LAN packets. The WoL for my remote PC is currently running on my NAS in Home Assistant. If I could power off my NAS and wake it with WoL, that would be great.
I think that the BRAND of the drives can make quite a significant impact on the results, given that the data we already have from Backblaze, for example, puts Seagate at a higher AFR compared to the other makes/models. That being said, on my 4-bay QNAP TS-453Be NAS, when I spin down the four HGST 6 TB SATA drives I have in there, it drops from ~50 W down to ~17 W. So the power savings that come with spinning the drives down end up being quite substantial. On my main Proxmox server, however, where I have 36 drives, the chances that I will actually be able to spin down the drives are pretty much zero, as there is ALWAYS some disk activity going on, for a variety of reasons.
My backup NAS is automatically awake for 12 hours a week, when all the backup/maintenance magic takes place. My feeling is that it's adequate for my use case.
2.225 watts were saved in sleep mode per drive... A lot of these newer drives have SSD cache, which takes power, and then you have the entire controller board. I think you're right from a logical perspective: a real sleep state should draw virtually zero power, less than Wake-on-LAN. This is probably something the specification is missing for SATA and other buses. Really powering down the device would show it as disconnected from the bus.
The Exos X16 14TB drive he seems to use can draw as little as 1.06 watts in its deepest sleep. He just forgot to use the right tool: the internal idle state needs to be set in firmware with SeaChest.
@tx2gnd It seems kind of dumb that you need to use the manufacturer's tools to push the drive to a lower power state... This is Linux, where you can actually just give the OS control.
@@xephael3485 Not dumb at all. Take a look at the flags and they make sense. These are enterprise drives; you control them by changing flags in the firmware, then use your OS (Linux) to manage them. The tool is command-line for Linux too.
Been meaning to buy a new NAS enclosure to run TrueNAS instead of keeping on using an old desktop, partly for power efficiency, plus a bump in performance... but I still wasn't quite there on spinning down drives. Will look at Wake-on-LAN instead, perhaps; I don't need it running all the time. It's just that my old desktop is too old. xD Intel Haswell. I did the whole stupid thing of upgrading it before considering getting something new... the PSU was blown so I got a new one, procured RAM for it, got a new network card because the integrated Ethernet was behaving weirdly... and then my power bill came and holy sh*t. xD
Take a look at the German Unraid community. There, power usage is a big thing because of the high electricity costs. I also use an Unraid NAS myself because of the concept of spinning down all the disks that are not currently in use.
Before I watch, I'm just gonna say my HGST drives don't respond to spin-down commands, in my experience. If they did, I'd totally do it, because those 60 drives are pulling a constant 515W from the wall.
It's no different than when a system boots: it undergoes peak power draw for a few seconds as everything powers on. Even with 24 drives, the most it would likely pull is around 240 watts, well within the capability of pretty much any PSU for this type of application. The only time you may run into an issue like this is if you have a dedicated GPU and an undersized PSU from the get-go; then you can run into over-draw scenarios. Otherwise it shouldn't be an issue 99% of the time.
@BigHeadClan I'm not using these specific drives, but I don't believe that. I've seen far fewer drives trip 500W power supplies when the controller starts them all at the same time.
@@xephael3485 15 years of IT, 5 of which were in a data center, and I've never seen that happen a single time. Any time I've even heard of it happening, it's because the system wasn't spec'd correctly (usually too large a GPU for the PSU to handle) or the quality of the hardware is sub-par, as the cheapest PSUs rarely hit their advertised 500 watt ratings or can support that type of power draw if it's only connected over the 12V rail. But even the least efficient HDDs shouldn't be pulling more than 20 watts or so during startup; if you're tripping the PSU during POST from excessive power draw, something else is going on.
@BigHeadClan That's probably because you're dealing with systems whose controllers are preprogrammed to spin up the drives half a second or a second apart... Generally you don't find commercial systems that haven't taken this into account.
@@BigHeadClan Telling 24 or so motors to start up at the exact same time basically gives you a black hole of energy draw... just saying. It's easily mitigated, though, by staggering them by a fraction of a second. If you build a home system with multiple drives, you can easily run into the power-spike issue.
What are y'all's opinions on wear and tear on vertically oriented HDDs vs. horizontal (flat)? With vertical, the platter accelerates with gravity when going down and then decelerates against gravity going up, and it does that 5400, 7200, or 10000 times per minute, depending on spindle speed. A horizontal HDD, by contrast, is parallel to Earth's surface, so it seems the rotational speed would be less influenced by gravitational force. It's why I prefer NAS and server chassis that use horizontal orientations only, but idk if I'm worrying about nothing.
You still have a gravitational force acting on the drive. For vertical orientation I suspect the acceleration and deceleration forces the platters experience mostly cancel each other out. What really matters more IMO is how well balanced the platters are. That they spin so fast means even very tiny imperfections cause vibrations. Whether you orient the drive horizontal or vertical, these forces are much stronger than that of gravity. In the grand scheme of things I don't think orientation makes a difference
@@anthonydiiorio He has that in all of his videos. I like to think it's his sense of humor trolling us or something.
@@jordanc8926 It's a weird YouTube bug. Techno Tim and I both keep having that happen, and we don't know why, because it's not in the original.
Thanks, Tom. I learned so much from you over the years!
Yep, a more mathematical approach is needed here. Otherwise the comparison he did is just a kind of feeling.
Exactly. I've set my 15 drives to 3-hour timers, and I haven't had a single drive failure in the 10 years these drives have been in service.
@@tx2gnd damn man... Why do you have to go full techie? You just pulled the S3 power state out of your bag!
The spindles did come to a stop with that command.
Love this test!! I have missed these little nerdy things from Tom! Like when you put electricity directly to the RAM on a FreeNAS server.
What about PSU inefficiency, how is that taken into account?
I run the latest TrueNAS with 20 HDDs on a 7900 12 core CPU w/128GB DDR5. I spin down the drives after 2 hrs of inactivity which seems a reasonable compromise of power savings and wear and tear. Within +/- 10% my idle power usage is 100W and under read/writes is 200W (convenient that the numbers work out so well). I keep my NVME drives always on (of course) so no problems running Docker apps which only write to these. It is a good compromise. No bad HDDs after 3+ years.
Interesting topic. This is something I dealt with extensively not only because it saves power but also because I have my home lab at my desk and I need it to be quiet.
My home lab, one ATX Fujitsu Siemens with an i3 6100T and an Optiplex MFF with an i3 7100T, idles at 37 Watts. It only spins up my disks when doing a backup once a day. I have 4 HDDs 2.5" and 3 HDDs 3.5". When they all run simultaneously they consume double or triple the wattage. With my spin-down measures my home lab costs me 120€ a year, so I would say it's worth it. Especially when you use (old) consumer-grade disks.
I've been running 3 8TB (7200 rpm) IronWolf Pros for over a year now with "hdparm -S 2" (spin down after 10 seconds of no activity). This takes my overall server power draw from around 38W at the wall to 16W, as measured by a smart plug in Home Assistant, saving me 22 watts, or about 7.3W per drive, or ~192 kWh each year in total.
The spun-down drives are my media library, so there is a noticeable delay of about 1-3 seconds when I need to access this media, but that is something I've decided to live with.
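For anyone puzzled by "-S 2 = 10 seconds": per the hdparm man page, -S values from 1 to 240 are multiples of 5 seconds, and 241 to 251 are multiples of 30 minutes. A small sketch of the encoding, plus the annual-savings arithmetic from the comment above:

```python
def hdparm_s_to_seconds(value: int) -> int:
    """Decode an hdparm -S standby-timeout value into seconds."""
    if 1 <= value <= 240:
        return value * 5              # multiples of 5 seconds
    if 241 <= value <= 251:
        return (value - 240) * 1800   # multiples of 30 minutes
    raise ValueError("unsupported -S value")

print(hdparm_s_to_seconds(2))    # 10 (seconds)

watts_saved = 38 - 16            # at-the-wall delta from the comment
kwh_per_year = watts_saved * 24 * 365 / 1000
print(round(kwh_per_year, 1))    # 192.7
```

Which matches the ~192 kWh/year figure in the comment.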
Best is to use MergerFS with SnapRAID for media only. That is the best of both worlds, to be honest. Only a disk that has data on it is woken up, and only when someone is watching or importing data. Everything else on all my other servers is on NVMe drives.
@@karliszemitis3356 I just run them as separate disks, no RAID etc.
Two identical backup systems would be good test 🤔
One that always spins and one that puts the drives to sleep. Identical backup operations for both systems and a whole lot of time…
For user facing systems, we don’t spin down the drives. After some experimenting we got lots of user complaints about the server was “always” super slow. Totally different story for backup and non-user systems.
I do it different way:
5:00 - wakeonlan NAS
5:15 - perform all backups etc. from Proxmox (fully on SSD) to NAS
6:00 - perform all backups from all laptops (Time Machine)
7:00 - replicate NAS to remote location
9:00 - shutdown NAS
This NAS draws around 50W while working, 0W while shut down. So 6h a day at 50W and 18h a day at 0W, which is around 900Wh saved per day.
Plus: Proxmox takes much less power when no backup is happening.
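The schedule above works out like this (a sketch using the ~50 W and 6-hours-awake figures from the comment):

```python
watts_on = 50   # NAS draw while awake (figure from the comment)
hours_on = 6    # awake hours per day under the cron schedule

daily_wh_scheduled = watts_on * hours_on        # 300 Wh/day
daily_wh_always_on = watts_on * 24              # 1200 Wh/day
saved_wh_per_day = daily_wh_always_on - daily_wh_scheduled

print(saved_wh_per_day)                         # 900
print(round(saved_wh_per_day * 365 / 1000))     # 328 kWh/year (approx)
```

So the saving is 900 Wh per day, roughly 330 kWh a year, before counting the lower Proxmox draw outside backup windows.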
A lot of work to save 900 Wh a day. Plus that energy probably costs a lot less than your labor doing all that work; it's a catch-22.
@@Dirtyharry70585 You mean the crontab written once and forgotten is too much? And tasks scheduled on Proxmox and forgotten? Wow.
I did experiments like this when I worked at Netapp. It was a long time ago..back when 4TB drives were the largest drives ever made. I didn't see any difference in longevity. It was the quality controls of the manufacturer that determined longevity more than anything..that and temperature. We experimented with sleeping drives because the cost of running our lab each day with HVAC was astronomical, thousands of dollars each day.
Thanks for the numbers. Off topic but curious - where has the homelab show podcast gone? haven't seen it in a long time.
I have an 8 HDD, 5 SSD setup in a 6-core 11500 NAS. One drive on its own might get accessed over the span of the day. The entire server idles at ~62W, GPU and all. If I ask the drives to stay spun up I'm looking at about 110W. My other server used to run off a 65W chip and had a system idle of 60W with 14 disks; with the disks spun up it would idle at 144W. What's more important, though, is that it was not striped storage (other than the SSDs). The spike in power doesn't mean anything for the power bill, it's transient, and I feel like some people are going to take this as an 'um-actually' later (it's good that you mentioned it, but I was hoping you would mention it in regards to wear).
If you're running striped storage it's a totally different situation to a JBOD. When you ask that striped pool to sleep then wake for writes, you have to deal with the amplified vibrations from all the drives waking at once plus the huge current sag. However, in a JBOD scenario where each drive is written to individually, or just X drive + parity drive, it makes sense.
Our customers were given rights to tape backup on their main system; we were a closed-loop network. Unfortunately, they backed up every six weeks, something their IT management decided on their own. The problem was some of that data needed to be accessed a year or two later. They kept that for court cases of fraud, since we imaged every letter and document that went out. Fortunately for them, I was copying to encrypted CDs once a month and handing them to the manager, which saved the company millions of dollars against lawsuits. We did have SCSI failures after 10+ years, which mostly sounded like dental drills, very noticeable. I had 5 separate unique drives swappable and 5 computers ready too. Imaged from 2004 to 2018.
My Plex server has four Exos 20TB drives. With the drives spun down, the server pulls about 45 watts, with them working it's about 62 watts. It would probably be lower if I bought a more efficient PSU.
Good video, right to the point thanks!
I am sure other power meter devices are available, but it is curious that "Kill-a-watt" devices don't seem to be on sale in Europe. Pity, since those ones look nicer to use than the generic ones available here.
Good video. Maybe you should run a few of those experiments and in a year or two tell everyone what the results were.
Comes at the very right point in time. I was just about to buy 4 drives for my home nas.
I would also be really interested in how ZFS reacts to drive being spun down or how to tune it to allow longer timeouts.
Or Will a read/write cache nvme allow the drives to respond later.
Moreover the noise floor of exos vs e.g. ironwolf drives in different power states is very important to me. I can read the dB values from the data sheet but don’t know if I will hear it while sitting next to it 😅
I have an array of 20 12TB Ironwolf Pro's using ZFS with an 2TB nvme drive as read cache. And noise and power consumption are also considerations for me.
I have my drives set to go beddy-byes after 30 minutes of no access. When they are sleeping and I access data stored in the read cache, the drives don't need to spin up. Though, sometimes they do and sometimes they don't. I think it's to do with a specific setting dictating the period after which TrueNAS should check for a new version of the cached files. Or maybe it's something to do with specific file types that check for new versions. But most of the time when I'm accessing media files, the drives will not spin up. Sometimes when accessing documents, the drives will spin up, but not always.
All the important data I use every day is stored on an SSD array, so access is fast, random writes and reads are pretty good, and there is rarely any delay in access times.
In terms of the write issue that was mentioned, with ZFS it is not an issue as the RAM is used as write cache. So files copied to the NAS when the drives are spun down are written to the RAM first and then to the drives once they wake up and are accessible again. Which is a big part of why ZFS is so great.
Overall, considering my spinning rust is only cold storage that is not accessed very often, putting them to sleep when not in use is absolutely saving me a bunch of money on the power bill. Literally hundreds of Watts. Especially considering I have 20 drives compared to the 4 used in this video.
Wow, what a coincidence: yesterday I calculated the power consumption of 10 HDD (3.5"), SSD, and NVMe drives. And here's a surprise: one NVMe disk is fast (Kingston Enterprise) but can also consume 14W of power! For comparison, the SSD (also Kingston Enterprise) consumes 2W at peak. 🙂
Yeah, but at that point you have the issues that come with SSDs when it comes to writing to the drive.
as if nvme drives are not ssd...
Entire shutdown is definitely the best way. I have my NAS set to shutdown around 10PM and BIOS configured to turn on an hour or two before I'm "up and at it".
That's why I ended up going back to MergerFS + SnapRAID. Most of my data does not need to be spinning around 24/7. I, for example, cannot sleep any drives with a one-stop solution: I have about 60+ services running on my servers, so they would be accessing, reading, and writing 24/7. That's why only media is on HDDs (MergerFS) and everything else is on NVMe RAID. Then I benefit from each drive sleeping separately when not in use.
in a world, where wattage and amperage are a thing... there also must be kilogrammage/poundage, degreesage, lumenage, secondage ...
I think this test is a bit too narrow in scope. I understand you're trying to refute a certain specific claim about wattage, but there should be a lot more to it. As another commenter had mentioned - there's going to be a significant noise & temperature difference even in a scenario of a small number of drives where power savings won't be a big factor.
What should be looked at is the option of having both SSD and HDD in separate arrays,
SSD array always on and the HDD array only at certain times.
Should be easy to implement and test in two different NAS systems, might be worthwhile to also test a jugaad of this in a single system.
In a scenario where you might want a few tens of TB "to exist", but you don't need more than maybe 10TB at constant-availability then the total difference can be significant.
Also very interested in hearing whether anybody has tested the wear & tear of allowing HDD power saving vs. fully spun up, heads engaged 24/7
@@rpungello Spinning up and spinning down the drive definitely wears out the bearings. As to how much, and whether it will keep the drive from functioning, that depends on the technology, age, and a bunch of factors that are pretty hard to consolidate.
As far as I know, no one has actually tested it on a large scale. People often cite datacenters, but datacenters want their hardware to be in use all the time, so they won't be powering down like that. Sure it'll wear the drives, but does it matter? No idea. I used to run my NAS with them powering off and can't say I noticed any extra failures rather than the usual, but that's only one guy with 15 drives.
We need someone with many (hundreds?) of drives to do a proper test, but they also need to not care about using the start/stop half of the drives as they probably won't want drives they're making use of stopping randomly.
@@PeterBrockie Aye. I oversaw a few hundred racks for an oil and gas company worth billions. We had enough NetApp storage shelves (30 racks total) and thousands of drives that I was able to perform a study on the failure rates of the various brands of HDD (HGST, Toshiba, and Seagate), with pretty conclusive results. But nothing was ever spun down or taken into a low-power mode to test something like this; it was maximum throughput at all times.
Only time power savings were taken into consideration was when we were looking at server refreshes and what hardware could be consolidated down into virtual systems from physical hosts. Power was relatively cheap, it was rack/facility space in the Data Center that really cost you an arm and a leg if you didn't own the building itself.
Honestly, it could go either way. You usually see the highest power draw and most abuse taking place during POST, but there are so many other factors to control for in this type of test; usage patterns of the drives and heat/vibration over extended periods of time are usually the biggest causes of wear on a drive. If your NAS doesn't see consistently high utilization, powering it off or putting it into a low-power mode is probably the play, especially if it's a home lab where you're paying marked-up electricity rates and not getting them at wholesale.
now lets talk about 24 drives spun down
And 24 000 drives spun down.
1:39 It'd have been easier to just type: hdparm -Y /dev/sd? -- the question mark is a single-character wildcard, matching all drives that begin with /dev/sd and are followed by one character. It wouldn't match e.g. /dev/sda1 since that's two characters after /dev/sd.
Just leaving this as a hint for people who like handy, quick CLI tips.
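That `?` glob behaves the same in Python's fnmatch module (which follows shell wildcard rules), if you want to convince yourself which device nodes a pattern like /dev/sd? selects:

```python
from fnmatch import fnmatch

# '?' matches exactly one character, just like in the shell
print(fnmatch("/dev/sda", "/dev/sd?"))    # True  - whole-disk node
print(fnmatch("/dev/sda1", "/dev/sd?"))   # False - partition (two chars after "sd")
```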
With my Synologys getting drive sync, Hyper Backup, and a slew of BitTorrents, I think maybe I have 2-4 hours a day when MAYBE they aren't active. Turn off the lights when you're not in a room! I do have Sonoff S31s running Tasmota on most of my house, especially now with the holiday lights. My switch rack with 3 switches, fiber modem, router, and an oscillating fan uses more power, and I'm not shutting those down. IMO!
It seems to me that turning things on and off is always stressful for electronics, even for an ordinary lamp.
I would save a lot of power and win a lot of comfort if my unifi gateway could sent Wake on LAN packets. My WoL for my remote PC is currently running on my NAS in Home Assistant. If I could power off my NAS, and wake it with WoL, that would be great.
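If the gateway can't send them, building the magic packet yourself is simple enough to run from almost anywhere. A minimal Python sketch (the packet format is 6 bytes of 0xFF followed by the target MAC repeated 16 times; UDP port 9 is the usual convention):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16   # 6 + 96 = 102 bytes total

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

print(len(magic_packet("aa:bb:cc:dd:ee:ff")))  # 102
```

The MAC address here is obviously a placeholder; the NIC's BIOS/firmware also has to have WoL enabled for the target machine to actually wake.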
I think that the BRAND of the drives can make quite a significant impact on the results given that the data that we already have from Backblaze, for example, already puts Seagate at a higher AFR compared to the other makes/models.
That being said -- on my 4-bay QNAP TS-453Be NAS, when I spin down the four HGST 6 TB SATA drives I have in there, it will drop down from ~50 W down to ~17 W.
So, the power savings that comes with spinning the drives down ends up being quite a bit more substantial.
On my main Proxmox server, however, where I have 36 drives, the chances that I will be able to actually spin down the drives is pretty much next to zero, as there is ALWAYS some disk activity going, for a variety of reasons.
My backup NAS is automatically awake for 12 hours a week, when all the backup/maintenance magic takes place. My feeling is that it's adequate for my use case.
Instructions unclear HDD is sleeping with the fishes
Interesting, I would have thought the drives would draw much less than 4.4W sleeping, but I guess not.
That was probably what the power supply itself was pulling
2.225 watts were saved per drive in sleep mode... A lot of these newer drives have an SSD cache which takes power, and then you have the entire controller board.
I think you're right from a logical perspective. A real sleep state should draw virtually zero power. It should be less than a wake on LAN. This is probably something the specification is missing for SATA and other buses. Really powering down the device would show it as being disconnected from the bus
The Exos X16 14TB drive he seems to use can draw as little as 1.06 Watt in its deepest sleep. He just forgot to use the right tool: he needs to set the internal idle state in firmware with SeaChest.
@tx2gnd it seems kind of dumb you need to use manufacturer's tools to push the drive to a lower power state... This is Linux where you can actually just give the OS control
@@xephael3485 Not dumb at all. Take a look at the flags and they make sense. These are enterprise drives; you control them by changing flags in the firmware, then use your OS (Linux) to manage them. The tool is command line for Linux too.
Been meaning to buy a new NAS enclosure to run TrueNAS instead of keeping on using an old desktop, partly because of power efficiency, plus a bump in performance... but I still wasn't quite sold on spinning down drives. Will look at wake-on-LAN instead, perhaps. I don't need it running all the time.
It's just that my old desktop is too old. xD Intel Haswell.
I did the whole stupid thing of upgrading it before considering getting something new... the PSU was blown so I got a new one, procured RAM for it, got a new network card because the integrated Ethernet was behaving weirdly... and then my power bill came and holy sh*t. xD
Take a look at the German Unraid community. There, power usage is a big thing because of the high electricity costs. I also use an Unraid NAS myself because of the concept of spinning down all the disks that are not currently in use.
For me, it's not worth the wear and tear of spin up/down for the small power savings. Cool experiment though, good to know.
Before I watch, I'm just gonna say my HGST drives don't respond to "spin down" commands from my experience. If so, I'd totally do it because those 60 drives are pulling a constant 515W from the wall.
I notice the hgsts do go to sleep in docks, but I've never deliberately sent a command.
@churblefurbles huh, maybe I could try again sometime
I've been using power-down on HDDs since '98 and guess what, they are still working 🤔 Might be the consumerism bullshit at play here 🤭
Lawrence Systems.... science 🔭 👍🤣
When you wake up drives you might not want to wake them all up at the same time unless you have a power supply that can take abuse.
It's no different than when a system boots: it will undergo peak power draw for a few seconds as everything powers on and boots up. Even with 24 drives, the most it would likely pull is around 240 watts, well within the capability of pretty much any PSU for this type of application.
The only time you may run into an issue like this is if you have a dedicated GPU and an undersized PSU from the get-go; then you can run into over-draw scenarios. Otherwise it shouldn't be an issue 99% of the time.
@BigHeadClan I'm not using these specific drives but I don't believe that. I've seen far fewer drives tripping 500 w power supplies when the controller starts them all at the same time.
@@xephael3485 15 years of IT, 5 of which were in a data center, and I've never seen that happen a single time. Any time I've even heard of it happening, it's because the system wasn't spec'd correctly (usually too large a GPU for the PSU to handle) or the hardware quality is sub-par, as the cheapest PSUs rarely hit the advertised 500-watt rating or can support that type of power draw if it's only connected over the 12V rail.
But even the least efficient HDDs shouldn't be pulling more than 20 watts or so during start-up; if you're tripping the PSU during POST from excessive power draw, something else is going on.
@BigHeadClan That's probably because you're dealing with systems whose controllers are preprogrammed to spin up the drives half a second or a second apart from each other...
Generally you don't have commercial systems which haven't taken this into account.
@@BigHeadClan Telling 24 or so motors to start up at the exact same time basically gives you a black hole of energy draw... just saying.
It's easily mitigated, though, by just staggering them by a fraction of a second. If you build a home system with multiple drives, you can easily run into the power spike issue.
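Staggering is easy to do in software too, if the controller doesn't handle it for you. A generic sketch (the hdparm command in the comment is illustrative, not prescriptive; the point is just serializing the wake-ups):

```python
import time

def staggered(actions, delay_s=0.5):
    """Run wake-up actions one at a time with a short delay between them,
    so the peak inrush is one drive's worth rather than all drives at once."""
    results = []
    for act in actions:
        results.append(act())
        time.sleep(delay_s)
    return results

# Illustrative usage: each action would normally shell out to something like
# `hdparm --read-sector 0 /dev/sdX` to force a given drive awake.
order = staggered([lambda d=d: d for d in ("sda", "sdb", "sdc")], delay_s=0.01)
print(order)  # ['sda', 'sdb', 'sdc']
```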
What are y'all's opinions on wear and tear on vertically oriented HDDs vs. horizontal (flat)? With vertical, the platter accelerates with gravity when going down, then decelerates against gravity going up, and it does that thousands of times (5400, 7200, 10000 RPM) per minute. Whereas a horizontal HDD is parallel to Earth's surface, so it seems the rotational speed would be less influenced by gravitational force. It's why I prefer NAS and server chassis that use horizontal orientations only, but idk if I'm worrying about nothing.
You still have a gravitational force acting on the drive. For vertical orientation I suspect the acceleration and deceleration forces the platters experience mostly cancel each other out.
What really matters more IMO is how well balanced the platters are. That they spin so fast means even very tiny imperfections cause vibrations. Whether you orient the drive horizontal or vertical, these forces are much stronger than that of gravity.
In the grand scheme of things I don't think orientation makes a difference