Few things:
1. You can look at modern software-based storage solutions like Ceph and Gluster, and the recommendation there is to use an HBA and not a RAID card. These solutions are for enterprise, not home use.
2. A cheap RAID card will simply "melt" if you try to connect tons of SATA SSDs (I'm talking about all the LSI 92xx-based PCIe 2.0 cards).
3. When it comes to redundancy, if you have large disks (8+ TB) and one of them breaks and you replace it, the RAID card doesn't know what data needs to be rebuilt onto the new replacement/hot-spare disk, so the rebuild process will take DAYS and your entire system will be slow, even on expensive RAID cards. Compare that with ZFS, which basically only copies the relevant parts.
4. On systems based on ZFS there is a process called "scrubbing" which runs once a week/month (depending on your configuration) and checks all the data in your storage pool. If something is wrong, it will fix it without any issues and without halting your system. Seen anything like that with any RAID cards?
5. Regarding the battery - with ZFS (and a few other solutions) you don't need it. You just need an "SLOG" device (which most of the time means a single small SSD with power protection built in). You can read about it here: www.servethehome.com/what-is-the-zfs-zil-slog-and-what-makes-a-good-one/ - so this covers the enterprise redundancy issue.
6. As more and more companies and storage solutions use NVMe (you will see the first mechanical hard disks with NVMe U.2/M.2 connectors this year, btw), the RAID/HBA solutions are becoming irrelevant. You can search and see that there are no RAID cards for NVMe devices, for example.
Hi Hetz Biz Thank You very much! that was a lot :-) RAID controllers actually do "scrubbing"-ish tasks too - it is just called other things. I took this from the Lenovo M5015: "Features: Patrol Read for media scanning and repairing. Patrol read is a background sentry service designed to proactively discover and correct media defects (bad sectors) that arise normally as a disk drive ages. The service issues a series of verify commands, and if a bad block is discovered, the card's firmware will use RAID algorithms to recreate the missing data and remap the sector to a good sector. The task is interruptible based on controller activity and host operations. The firmware also provides an interface where the patrol read task can be initiated, set up for continuous operation, and terminated from a management application. Patrol read can be activated by manual command or automatically." Thank you for watching! :-)
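The scrub / patrol-read idea both comments describe can be sketched in a few lines. This is a toy model of my own, not ZFS code or LSI firmware: each block carries a checksum, the scrubber verifies every block, and a corrupted block is rewritten from a redundant (here, mirror) copy.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A "disk" is a list of (data, stored_checksum) blocks; we keep a mirror copy.
primary = [(b"block-%d" % i, checksum(b"block-%d" % i)) for i in range(4)]
mirror = list(primary)

# Simulate silent corruption (bit rot) on the primary copy of block 2:
# the data changes but the stored checksum still describes the old data.
primary[2] = (b"garbage", primary[2][1])

def scrub(disk, redundant):
    """Verify every block's checksum; repair bad blocks from the redundant copy."""
    repaired = []
    for i, (data, stored) in enumerate(disk):
        if checksum(data) != stored:   # corruption detected
            disk[i] = redundant[i]     # rewrite from the good copy
            repaired.append(i)
    return repaired

print(scrub(primary, mirror))   # -> [2]
print(primary == mirror)        # -> True
```

A real scrub or patrol read is the same loop spread over days of background I/O; the difference debated above is whether the checksum lives in the filesystem (ZFS) or in the drive/controller firmware.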
Nice explanations! I do think you missed one thing, though. With HBAs and ZFS, you won't get corrupt files. You might lose files, but you won't get corrupt ones, due to the copy-on-write mechanism. That is something that's quite important to me :)
Bart Kuipers: from a “security” point of view, bit rot is FAR more critical and important than “losing files”, which is never going to happen anyway in critical applications (UPS and write-back caching etc.)... Please remember that with RAID you can recreate data if a disk is missing - but you have no way of knowing if it is “the proper data”, which is especially clear when data and parity do not align... So RAID is actually potentially making matters much worse!
@@BartKuipersdotcom I know - I am a happy ZFS user :) The comment was targeted much more towards Morten (and the rest of the viewers)... To focus on the fact that with the large (many TB --> PB) pools being deployed today (vs. 15-25 years ago when RAID had its heyday), what RAID provides is in many instances a false sense of security..
"As we all know, modern hard disk drives do internal bad sector management inside their firmware; that is, when the drive detects a physically damaged or unreliable physical sector, it replaces the bad one with a good one from the reserved sector store. This mechanism eliminates the need for the OS to do bad sector management itself. Users can follow the development of the sector replacement count from the S.M.A.R.T. item called "reallocated sector count"." If there is bit rot, the drive will be the first to try to fix it, with the checksum it saves with every block. The RAID controller also does this task: "Patrol Read for media scanning and repairing. Patrol read is a background sentry service designed to proactively discover and correct media defects (bad sectors) that arise normally as a disk drive ages. The service issues a series of verify commands, and if a bad block is discovered, the card's firmware will use RAID algorithms to recreate the missing data and remap the sector to a good sector. The task is interruptible based on controller activity and host operations. The firmware also provides an interface where the patrol read task can be initiated, set up for continuous operation, and terminated from a management application. Patrol read can be activated by manual command or automatically." - this is from a Lenovo M5015 card, 9 years old. So yes, using an HBA you do not get the services that the RAID card provides, so you need ZFS, BTRFS or ReFS to do that for you.
HBA cards are essentially just extra ports, and drives are seen by the OS as individual drives. RAID cards are software and hardware that (as you said) present the drives to the OS in certain PROPRIETARY configurations. For modern operating systems and for normal uses, an HBA is less headache and pain for all involved. Very simple and straightforward. RAID cards have a huge problem: if your RAID card has a certain firmware, hardware revision and driver version and it FAILS, you have to get the same exact RAID card with the same exact firmware, hardware revision and driver version. If an HBA fails, you can pretty much use any HBA, as long as all the drives the OS is expecting to be there are there. An example is setting up a software RAID array with the drives connected to the HBA: even though the HBA has nothing to do with that RAID array, the OS is just looking for those drives, so if any HBA can get those drives to show up for the OS, you are golden. You touched on the fact that HBA cards have a performance hit on system resources and the overhead required. GREAT VIDEO
Hi Chuckles Nuts Thank You very much! I have actually shown moving a drive from an x3650 M2 to an x3650 M3 and M4, and only lately found that it did not work with the M5 - so with RAID cards there is also some backwards compatibility. And I did not even mess with the firmware; it kind of just worked on the older servers. Thank you for watching! :-)
About the performance impact of resilvering: a "hardware" controller will sync the whole disk (e.g. 10TB) even when only 1 TB is used - ZFS will only resync the used space, so it's much, much faster. Also: while resilvering, the "hardware" controller throttles the resync too, and normal operations will be slower..
Yes, and if you actually used more of the storage of those 10TB drives you put in, ZFS would be slowing down your system way more - and getting back deleted files is a no-go.
ZFS has no problems with hundreds of TB per pool (with 10+ TB drives, that's actually not much nowadays). Getting back deleted files? You lost me there.. What do you mean exactly?
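The resilver difference in this thread can be illustrated with a toy model (my own sketch, not actual controller or ZFS logic): a block-level RAID rebuild must reconstruct every block on the replacement disk because it has no idea which blocks hold data, while an allocation-aware rebuild copies only the blocks that are in use.

```python
DISK_BLOCKS = 10_000           # toy stand-in for a 10 TB disk
used = set(range(1_000))       # only 10% of the disk actually holds data

def raid_rebuild_blocks(total: int) -> int:
    # A block-level controller cannot tell used from free blocks,
    # so it must reconstruct every block on the replacement disk.
    return total

def zfs_resilver_blocks(used_blocks: set) -> int:
    # An allocation-aware filesystem walks its own metadata and
    # copies only the blocks that are actually referenced.
    return len(used_blocks)

print(raid_rebuild_blocks(DISK_BLOCKS))  # -> 10000
print(zfs_resilver_blocks(used))         # -> 1000
```

With real multi-TB drives the 10:1 ratio here translates directly into rebuild time: a mostly empty pool resilvers in a fraction of the time a full-disk rebuild takes.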
It's worth noting that those flash drives on the front of the EMC can probably do 3GB/s each, which would saturate a 200Gbps network card with just 8 drives. Also, from my own research, enterprise flash drives usually have some capacitors on the drive itself to allow the in-flight data that arrives in the drive's cache to complete the journey. Given the latency and speed of those things, they probably don't view power loss to the system as the same problem it once was.
@@MyPlayHouse Sure, if it's going through the CPU. RDMA is probably the preferred way to do this: load the data from disk to memory and then straight out through the NIC. I believe some newer stuff (I've seen it on Serve The Home) can directly access the NVMe disks, so they have quite powerful ARM chips on the NIC itself - it's basically a tiny computer that does one thing.
If you are paying a per-core license, CPU cycles are expensive (if your server is an Oracle enterprise server, you can get 5-10 good RAID controllers for just 1 core license). If CPU is not an issue, you can use ZFS for storage.
There are some RAID cards like the Avago 9300 series and above (I promise I'm not paid by them!) which can do HBA (aka JBOD) mode at the same time as RAID mode. It is an expensive way of doing things, but if you need some drives presented directly to the OS (like OS drives) and others handled exclusively by the RAID card (like big R6 storage blocks), then you can do it on one card. This means you get a performance boost by being able to use the RAID cache for the JBOD drives, so you can get even faster OS drives. The battery backup isn't necessary for RAID caching to work - it's just strongly recommended, and can be overridden with the "Always write back without BBU" option. Some HBA / JBOD controllers can detect drive failures and generate the same kind of alerts that you get from RAID cards, as they are actually based around the same controller chip - just with different features on them (and chips missing alongside them). PS: the large card you showed will, I think, take 256 drives on those connectors through an internal-to-chassis SAS expander, and is not limited to just the 16 that you mentioned.
Hi Claire Clough Thank You very much! The HP P441 card I showed,, can be both a Raid card and a HBA,,, very expensive HBA :-) Thank you for watching! :-)
Hello Sir, I have watched and enjoyed your videos for a while. I have a question for you. I have an x3650 M2 Type 7947, my first rackmount server to play with. I believe I have to upgrade the default RAID card in order to support using 2.5" 2TB+ drives inside. I would also like to add the additional backplane to make the server run 12 drives. Can you please suggest a few internal RAID or HBA options?
Hi andyhello23 Thank You very much! It's good for some stuff. I asked Dell; they would not recommend software RAID unless you were on a really tight budget. Thank you for watching! :-)
I can say that HBAs are used in enterprise environments when you connect the server with an HBA card to a SAN... Here we have IBM xSystem servers connected to a DS3500 (very old... but rock solid) and a Storwize 5000 with SAS HBA cards. It's very fast... and the RAID stuff is managed in the SAN.
About the security comparison: please read up on journaling file systems again :) In ZFS specifically there's the ZIL. Also: how many faulty batteries have you replaced on "hardware" controllers? :)
Hi salat Thank You very much! Two batteries; they usually last the server's lifespan, in Lenovos at least - okay, with HPs you have a point there. Thank you for watching! :-)
Happy New Year, really enjoyed your video. Currently working on a Supermicro X8DTU-F, a 4-bay 1U server; I need an HBA or RAID card (not sure which) to hold the OS. Searched your eBay store but not sure what I need. Any advice would be appreciated, thanks.
I picked up a couple of HP branded SAS drives for one of my servers but it turns out they are from a HP 3PAR storage box which uses a custom format of 520 bytes per block making them useless inside a server attached to a SAS HBA or Raid card. The only option to get them working will be to attempt a low level format to change them to 512 bytes per block. Apparently EMC do similar things...
@@MyPlayHouse Which SAS HBA are you using? I have an old ProLiant and the old HP Smart Array doesn't have the ability to present the raw disks to the OS. There are cheap IBM 3650's here that I could pick up to do it - but the card would need to be built in or on board, as the servers I know of for sale have been stripped, and whilst I could get them going again, it would be another job to do..
Hi Morten, thanks for your video, it's been really helpful!! I'm building a small "NAS" for home use only with Unraid in an HP MicroServer Gen8 I got for free! I need all drives presented directly, as with an HBA; do you think buying a card such as an HP H220 or HP H240 would be better than getting an HP P420 and forcing it into HBA mode? With Unraid it's a problem if it cannot get the correct temperature of all drives, so I think an HBA card is better for this usage than a RAID card, but I might be wrong as I'm not an expert! Thanks!
I think some of your examples might be incorrect in some cases. HW RAID cards can have more problems with data consistency, because the HW RAID is not application/data aware and in a way "tricks" the application: it acknowledges to the application that data is safe when it *might* not be... in your example, the cache has a time limit and is not safe forever. Additionally, in clustered application environments, where your application workload might be distributed, if a node with a HW RAID failure tells the application it has completed a data transaction, but the data is stuck in the RAID cache and hasn't yet been flushed to disk, the application will continue to think the data was committed, and the workload may migrate to another node in the cluster assuming, falsely, that the data was committed when it was not. From an application standpoint - and this is part of what ZFS does so well - it allows you to handle the low-level storage in a "data aware" way. A very simple example of this is when you do a data consistency check. With HW RAID, it does its checks across every single stripe/logical block, even if the blocks are empty, because it doesn't know which ones are actually in use. With ZFS, if you do a data consistency check ("scrub" in ZFS terms) on an "empty" data pool, it will finish in less than 1 second, because it knows there's no "real data." I know that is a somewhat impractical example (empty data sets are not very useful), but it helps as a mental exercise to understand how powerful "data aware" storage technology can be. On the matter of performance, there are a lot of different variables to consider: the number of I/O transactions, the size of the I/O transactions, latency, the number of times data is copied while moving from source to final destination, etc.
Performance can be a complicated subject, so I'm not saying my tiny comment here is comprehensive, BUT think about this one aspect: a typical HW RAID controller is a dual-core, maybe quad-core (?) processor that does 1~2GHz clock speed (although it does have the advantage of being specifically designed for storage I/O) vs typical server CPU resources of 10+ cores (Westmere-EP dual socket was 12 cores) to >50 cores (a modern dual-socket system), often running 2~3GHz. So it is possible, at least with modern multi-core servers, that your CPU resources are much more abundant than what you will have in your HW RAID controller. So, with software technologies like ZFS (with an HBA card) that use the CPU, you might be able to do a lot more I/O (and ZFS does do a lot more: checksums on all data, compression, etc.). But you are right in that software storage like ZFS does take away some of the CPU resources from the applications. HW RAID does have some advantages: because it responds from the controller+cache, transaction latency can be much lower than HDDs, which can be very important for some applications. But if latency is important, you can solve it with SSDs+HBA these days. Another consideration is features. When you buy a HW RAID controller, it comes with a fixed set of features, and the manufacturers can really only put so many features into it, because all of that has to be programmed into the firmware, which is relatively tiny. Software technologies like ZFS have an evolving set of features. If a new feature is added to ZFS, I can keep the same hardware and upgrade ZFS to get new features. And the new feature possibilities are "endless", since software can have access to all CPU cores and all the RAM in the system. For example, ZFS by default uses up to 50% of your RAM for the ARC (cache)... compare the 1GB or 2GB of cache available on HW RAID to typical server system RAM (32GB to 1~2TB). Also, HBAs have been around for a long time.
Even way back in the days when I was a SunOS or Solaris system admin, I worked on servers with SCSI drives that used software RAID technologies like DiskSuite, which had to use SCSI HBAs. But that was a time before multi-core servers; it was multi-socket at best in some systems, and CPU resources were less plentiful. So, back then, having HW RAID really did seem superior to the software RAID technologies using the CPU; and this was also *before* ZFS came out. But that was a long time ago, and since then, CPU / RAM resources in servers have multiplied many times, while HW RAID has not evolved as fast. As always, love your channel... thanks for making a video about this topic!
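The cache-acknowledgement hazard described in the comment above can be modelled in a few lines. This is a deliberately simplified sketch of my own, not any real controller's behavior: a write-back controller acknowledges as soon as data reaches its cache, so an application (or cluster node) that trusts the acknowledgement can believe data is durable when it is not.

```python
class WriteBackController:
    """Toy RAID controller: acknowledges writes from cache, flushes to disk later."""
    def __init__(self):
        self.cache = []   # volatile cache RAM (what the BBU protects)
        self.disk = []    # durable storage

    def write(self, data):
        self.cache.append(data)
        return "ACK"      # acknowledged before the data ever reaches the disk

    def flush(self):
        self.disk.extend(self.cache)
        self.cache.clear()

ctrl = WriteBackController()
assert ctrl.write("txn-1") == "ACK"  # the application now believes txn-1 is safe

# Power is lost (or the node dies) before the cache is flushed;
# without a working battery, the cache contents simply vanish:
ctrl.cache.clear()
print(ctrl.disk)   # -> []  txn-1 was acknowledged but never committed
```

This is exactly the window the BBU (or a ZFS SLOG with power-loss-protected flash) exists to close: the acknowledgement is only honest if the cached data survives until the flush.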
For FreeNAS, are there any suggestions for HBA cards? Used ones for under 100, maybe lower? I saw Dell cards for about 50 bucks, but I'm afraid they could be incompatible.
@@MyPlayHouse Yeah, but I'm thinking more about the driver and software compatibility, since FreeNAS doesn't support every piece of hardware (like NICs). But for now I'm just using the standard SATA ports on my Asus server board and the Supermicro backplane of the hot-swap case.
From your video I understand that the BBU on the RAID card is there to protect the RAID card's cache RAM contents when the RAID array crashes and becomes non-responsive - which implies that RAID disk arrays crash regularly. How regularly? So you are taking away one point of failure by being able to add disk redundancy via the RAID card, but adding a new point of failure in RAID hangs/lockups - do you mean that can happen independently of a random computer hang? Which makes me realise an uninterruptible power supply alone is not enough if you have a RAID card RAM cache. Approximately how long does a hardware RAID card add to the cold boot time of a computer - seconds? Or longer?
spinning rust is dead...long live SSD drives :) I hate RAID controller cards, prefer HBA, the video was very fun to watch this morning with my coffee :) Your new title is Dr Whiteboard :)
Hi Unkyjoe's Playhouse Large drives have a few good years still. Let's see when a 10TB SSD is something you get for $400-ish - I think that's at least two years out; a 4TB SSD is still $550. I prefer RAID cards for local storage in servers - just wish I could get one with SHR.. :-) Thank you for watching! :-)
Unkyjoe, why do you hate RAID cards? I've been in IT about the same amount of time as you, and RAID cards have saved my butt far more times than they've caused problems.
Hi Eric Yost No - as the card did not see the drives (or many of them), I do not expect it to pass through a drive it does not see. And I got another card that is in IT mode. Thank you for watching! :-)
Hello My PlayHouse, thank you for the excellent presentation! I'm a little out of my depth with some of the information; I still have a lot to learn. Question: I have a Dell T7500 server, 48GB RAM, 2x X5670 CPUs and 2x 500GB SSDs, and the motherboard has PCIe 2.0 and maxes out at 3Gbps, I think. I am researching either a PCIe 3.0 express card at 6Gbps, an HBA at 6Gbps, or a RAID card at 6Gbps (on a very modest budget $). The main criteria are SSD read/write speed for GPU gaming, and reliability. Could you give me some insight on which might be better in my setup?
One thing that most RAID cards cannot do, which ZFS and MD-RAID can, is create a RAID / VDEV across cards. Also, an issue you have seen yourself is the need to always keep the firmware on the RAID cards up to date: replacing a card with one on a different firmware revision can cause issues importing 'foreign configurations' on the disks.
Hi bwzes03 Thank You very much! Yes, if you need more drives than the ports on one card can handle, you need SAS expanders - those are actually a bit like an HBA in front of the RAID card. Thank you for watching! :-)
I have a dilemma: I acquired some 3TB SAS drives and I'd like to put them to use in a regular PC. What's the actual hardware I'd need in order to get the PC to recognize the drives? I'm using an HP Z400 workstation for starters, and I'd like to put 2 of those 3TB drives in there. Some say I need a SAS to SATA converter, but it didn't work for me. Others say a SAS to SATA interposer, and some say I need a SAS controller card. I just need to use the drives for storage. Thanks in advance.
Hi BlaGGah You can do it with a SAS HBA or RAID card - you need to make sure the card can handle >2TB drives; some older ones do not. And a cable like this: amzn.to/2IUOBQ2 Thank you for watching! :-)
So if I had a project that was mission critical (absolutely cannot lose any data or have any corruption), speed was necessary, redundancy is preferred, and the drives are planned to be kept in offline storage after being filled with archival data - would I be better off with an HBA or a RAID controller? I would like to use a computer consisting of an AMD 2400G CPU, 16GB DDR4 3000MHz RAM and up to 8x 10TB HDDs, with a redundant PSU and an M.2 SSD for the OS, in a short-depth 4U server chassis, with a UPS on the rack and a 10-gig NIC. I have not purchased anything, so suggestions are most definitely welcome, and if you do suggest something different, please give reasons. Ty all so much!
@@MyPlayHouse Ideally, but I can't make a decision on HBA vs RAID - or rather, hardware vs software RAID 6 support. I just want it to work... and with computers it's never that easy lol
A true hardware-based RAID card, PCIe 2.0 or 3.0, for a PC running Windows 10 - where can I get one that is reasonably priced? No host bus adapter cards. Internal mini-SAS.
@MyPlayHouse You can get a host bus adapter, or a proprietary server card that you can reflash to serve as a host bus adapter, but finding a true on-card hardware RAID controller with drivers for Windows 10 is quite a challenge. I don't want to use operating system resources to do software RAID on my PC.
One thing you forgot to mention: what if the card fails? RAID is not a full standard. It just gives instructions on what to do, but not how to do it. This means that different manufacturers use their own mechanisms to achieve that goal, making different cards and RAID arrays incompatible with each other. This allows for vendor lock-in that keeps your data hostage. HBAs, however, being dumb, allow another HBA from another vendor to be used to reinstate the connections, eliminating the problem of your data being vendor-locked.
Hi pseudonymity0000 Yes, that is true - if your RAID card dies, it's best to get one like it. But if an HBA card dies in a production environment, I would also try to get one just like it, to not mess with the OS, file system and so on. Thank you for watching! :-)
Bad example. No one uses an HBA card with JBOD disks; usually a logical device is presented to the HBA from a disk array, where the disks are already divided into the necessary RAID level and all of them are managed by the array. In this case, the RAID controller does not have any advantages with external drives.
Are you able to do some performance tests, e.g. RAID 6 in software vs. hardware? I really doubt that "hardware" controllers ("hardware" in quotes, because it's only software on a small embedded controller with its own small CPU, etc. anyway) are faster than software on modern hardware (and with modern I'd say less than 15 years old).
Hi salat Thank You very much! And now take out your graphics card - it's just dedicated hardware that calculates stuff for your screen, and your CPU can do that too, and often the CPU is as expensive as the GPU, so it must be as fast at it. Thank you for watching! :-)
@@salat Yes, proper hardware-accelerated RAID cards are faster when compared feature-for-feature, 1:1. Otherwise your CPU needs to calculate a lot just for the RAID structure and FS features. The GPU analogy works.
Some of these HBA cards are really expensive, so I don't think there is much of a saving going to an HBA, but it is nice to use the software. I have a PERC H710P in my T620, so I may want to get an HBA card instead. I can get a used one for about $50.00 US.
Hello friend, I was wondering if you could find the time to help me get my 7979 server (bios 1.19) running. Currently it can boot up to windows, but so slowly! The rom diagnostics passes everything but gets stuck on the memory test. I have tried swapping dimms, even trying a different brand. Any ideas? Thank you so much!
Hi Reuben Sheffel No, not as such. Did you put the RAM in in the right order? No, no ideas :-/ Take out everything not needed! Maybe replace the CPU cooling paste - it might make it really slow if it has none. Glad you liked the video :-) Thank you for watching! :-)
@@MyPlayHouse Thanks for the tips! I tried those and no luck. I'll bet the board got cooked, something in the ram pathway. Keep on making great videos, and thanks again!
Hi Morten! Remember me? 😁 I'm the MojangYang. I don't expect you to, because it has been months. I have a question to confirm something: if my server and the RAID controller die, can I theoretically connect the SAS drives to an HBA controller on a different device and access my data?
Hi ralph ups Thank You very much! That day might come :-) But I think AIs will be as smart as us for less than a second, and after that they will be bored by just how slow we are to work with. Thank you for watching! :-)
Yeah, hardware RAID is fast. RAID 5 and 6 often use a hardware XOR engine for parity data. Hybrid RAID cards don't have this feature though, so beware. Checksumming of drive sectors can happen on-card if you low-level format SAS drives to use it (don't confuse it with bigger sector formatting though). Software RAID (BTRFS, ZFS and ReFS) uses the CPU for checksumming. The battery backup on the cache ensures that it can use true write-back caching and not pseudo write-back, like software RAID does. Consistency checks create much less of a performance hit on the system than with, for example, ReFS. On top of that, you can't defragment a ZFS filesystem. However, two things software RAID is known to be faster at are tiering (or SSD caching) and handling SSDs. As you say, the hardware RAID controller doesn't know anything about your data, and it can't send TRIM commands to flash-based SSDs.
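The XOR parity the comment above mentions is simple enough to demonstrate directly (a minimal RAID-5-style sketch, not any card's actual firmware): parity is the byte-wise XOR of the data drives, and XOR-ing the surviving drives with the parity reconstructs a lost drive.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data drives, one stripe each; parity is what the XOR engine computes.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# Drive 1 dies; rebuild its stripe from the survivors plus the parity:
rebuilt = xor_blocks(d0, d2, parity)
print(rebuilt == d1)   # -> True
```

A hardware card runs this XOR in a dedicated engine at line rate; software RAID burns CPU cycles on the same arithmetic (RAID 6 adds a second, Galois-field parity on top).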
The main thing to add is that if you use an HBA with a storage OS (like ZFS-based systems, Synology, etc.) and set it up with caching drives, it will duplicate the caching/power-off protection that a dedicated RAID controller has.
I feel like the explanation left a bit to be desired, tbh. If you already know what it is and you deal with this sort of thing all the time, it's kind of redundant (excuse the pun), but for someone building their first FreeNAS box it might be confusing. For instance, at 7:17: the BBU protects the data in the RAM in case of power loss; it's not responsible for performance (some cards may disable caching features if they detect the lack of a BBU, and there may or may not be a way to force the cache) or for allowing the card to work - in fact you can run it, even with cache features enabled, without a BBU. There are other things you said that I don't agree with and that factually aren't correct XD. One thing I can say is that in theory HBAs should be cheaper, but especially on the used market it can be the opposite: no one wants old RAID cards, especially those that can't be flashed into IT mode. Think of really old cards like an HP P400/P410 - those are almost tech waste nowadays, but if they were HBAs they would still be useful today, and such cards end up on eBay for next to nothing, while even the cheapest RAID card you can flash to IT mode will cost you double or triple that, let alone a proper HBA, or even worse a brand new one - just because people are aware of these flashable cards, and with the demand, sellers take advantage and prices go up. That being said, I still use some RAID cards for specific applications where I don't need data resiliency, only redundancy. Especially if you're using filesystems like NTFS/EXT4, hardware RAID is perfectly fine, but I also see a lot of overhead on many controllers where mdadm on Linux or even dynamic RAID on Windows can perform better, so it's always a good idea to benchmark any hardware before committing to it.
Hi Dash Tesla Thank You very much! That was a lot. Yes, prices of used HBAs are high because of FreeNAS and Unraid and other applications that rely on software rather than hardware for the disk handling. Thank you for watching! :-)
HBA is definitely not new. We just didn't often call them HBAs way back when. SCSI HBAs were extremely common before the turn of the century. Fibre Channel HBAs have been common since the late 80s. RAID controllers came later, then started going away again as people started to use software RAID (like ZFS) that did things a RAID controller couldn't.
Hi Jesper Monsted Thank You very much! A lot of features can be built into software RAID, but it is all taking performance away from the CPU. Thank you for watching! :-)
From my experience, an HBA of the same generation is many times faster on big writes than a RAID card; on tiny writes with high I/O, the RAID controller is faster. For home use an HBA is almost always faster.
Mr Morten Hjorth, sir, good to hear that your video editing guy has turned down (or off) the background music in the video. Now I can hear your walkthrough instructions clearly.😃
RAID cards are not hardware (well, they are in that they are a card with chips on it, but the RAID is still done in software) - they are just software outside of the OS. I think that's important, as people sometimes think that because it's hardware, it's above things like software bugs etc.
I think you're confused about what the distinction is about. It's about whether it runs as software on your CPU, or on its own dedicated hardware. While there's still software that runs on that dedicated hardware, that has nothing to do with the distinction being made.
Hi Andrew Joy You are right: RAID cards are hardware which is controlled by software. It's very specialized and does not have to worry about sharing resources with other systems, and the little it does, it does very efficiently and fast. Thank you for watching! :-)
Hi David Wilson AHH the F**,, you are right!,, no chance of fixing that now :-/ Thank You very much! glad you liked the video :-) Thank you for watching! :-)
I rewatched this, because I wanted to review this topic. Wow, you are such an incredible Teacher and Educator. Glad that I found out about your channel years ago. You've helped so many people become more knowledgeable about technology.
Hi Poe Lemic
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Well, I think you covered almost everything useful to any potential newcomers. Very nice job done!
Hi Jeff Chen
Thank You very much! I can see here in the comments that there were many more points :-)
Thank you for watching! :-)
Morten, you are just such a great resource for knowledge - much appreciated that you spent so much time sharing that knowledge. Keep up the good work!
Hi WoTpro
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Yay. It's the weekend and there is a new video from you!
Hi STRAUSS Technik
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Thank you for making this video! It really helped me out a lot; I was definitely confused about the difference before this video. Good thing I bought the right card for what I need, RAID 5 (it's an LSI RAID card with battery; got it for $8 and I also updated the firmware). I just got my 4x 2TB SAS drives at $15 each in the mail today. I can't wait to see what kind of CrystalDiskMark benchmark I get. I need a better hard drive mounting system; I'm trying to jam all this into a T3500 workstation.
Wait, what about SAS expander cards, and can you use them standalone? What about RAID card IT mode? (From what I understand, that makes the RAID card act like an HBA.)
SAS expander cards can be used with both HBAs and RAID cards, but not on their own.
I'm researching buying a refurbished server and converting it to a NAS (using FreeNAS/TrueNAS). This was immensely helpful, since TrueNAS needs an HBA and I didn't understand the differences.
Thanks a bunch
Hi Bush Lee
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Thanks for your time.
Hi muwaga micheal
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Few things:
1. You can look at modern software-based storage solutions like Ceph and Gluster; the recommendation there is to use an HBA and not a RAID card. These solutions are for enterprise, not home use.
2. A cheap RAID card will simply "melt" if you try to connect tons of SATA SSDs (I'm talking about all the LSI 92xx-based PCIe 2.0 cards).
3. When it comes to redundancy: if you have large disks (8+ TB) and one of them breaks and you replace it, the RAID card doesn't know what data needs to be rebuilt onto the replacement/hot-spare disk, so the rebuild process will take DAYS and your entire system will be slow, even on expensive RAID cards. Compare that with ZFS, which basically only copies the relevant parts.
4. On systems based on ZFS there is a process called "scrubbing" which runs once a week/month (depending on your configuration) and checks all the data in your storage pool. If something is wrong, it will fix it without any issues and without halting your system. Seen anything like that with any RAID cards?
5. Regarding the battery - with ZFS (and a few other solutions) you don't need it. You just need an "SLOG" device (most of the time a single small SSD with power protection built in). You can read about it here: www.servethehome.com/what-is-the-zfs-zil-slog-and-what-makes-a-good-one/ - so this covers the enterprise redundancy issue.
6. As more and more companies and storage solutions move to NVMe (you will see the first mechanical hard disks with NVMe U.2/M.2 connectors this year, btw), the RAID/HBA solutions are becoming irrelevant. You can search and see that there are no RAID cards for NVMe devices, for example.
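The rebuild-time difference in point 3 is easy to put rough numbers on. Here is a back-of-the-envelope sketch; the throughput and fill-level figures are assumptions for illustration, not vendor data:

```python
# A conventional RAID controller rebuilds every sector of the replacement
# disk, while a ZFS resilver only rewrites allocated blocks. This toy
# function estimates both cases (decimal units, sustained throughput).

def rebuild_hours(capacity_tb: float, used_fraction: float,
                  throughput_mb_s: float, full_disk: bool) -> float:
    """Estimated rebuild time in hours at a fixed sustained throughput."""
    data_tb = capacity_tb if full_disk else capacity_tb * used_fraction
    seconds = data_tb * 1e12 / (throughput_mb_s * 1e6)
    return seconds / 3600

# A 10 TB disk at 150 MB/s that is only 20% full:
hw = rebuild_hours(10, 0.2, 150, full_disk=True)    # whole-disk rebuild
zfs = rebuild_hours(10, 0.2, 150, full_disk=False)  # used space only
print(f"full rebuild: {hw:.1f} h, resilver: {zfs:.1f} h")
# full rebuild: 18.5 h, resilver: 3.7 h
```

With realistic fill levels and today's disk sizes the gap only widens, which is the commenter's point about multi-day rebuilds.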
Hi Hetz Biz
Thank You very much! that was a lot :-)
RAID controllers actually do "scrubbing" - ish; it is just called other things. I took this from the Lenovo M5015 feature list:
Patrol Read for media scanning and repairing
Patrol read is a background sentry service designed to proactively discover and correct media defects (bad sectors) that arise normally as a disk drive ages. The service issues a series of verify commands and if a bad block is discovered, the card's firmware will use RAID algorithms to recreate the missing data and remap the sector to a good sector. The task is interruptible based on controller activity and host operations. The firmware also provides an interface where the patrol read task can be initiated, set up for continuous operation, and terminated from a management application. Patrol read can be activated by manual command or automatically.
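The "use RAID algorithms to recreate the missing data" step in that quote is, for RAID 5, just XOR parity. A minimal sketch with toy byte strings (single missing block only):

```python
# RAID-5-style parity: the parity block is the XOR of the data blocks,
# so any single missing block equals the XOR of all the surviving ones.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"data"
parity = xor_blocks([d0, d1, d2])

# Pretend d1 went bad: firmware rebuilds it from the survivors + parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
print("rebuilt block:", rebuilt)  # b'BBBB'
```

Real controllers do this per stripe in firmware, then remap the bad sector, as the quoted feature description says.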
Thank you for watching! :-)
Nice explanations! I do think you missed one thing though. With HBAs and ZFS, you won't get corrupt files. You may miss files, but not get corrupt files, due to the copy-on-write mechanism. That is something that's quite important to me :)
Bart Kuipers, from a "security" point of view - bit rot is FAR more critical and important than "losing a file", which is never going to happen anyway in critical applications (UPS, write-back caching etc.)...
Please remember that with RAID you can recreate data if a disk is missing - but you have no way of knowing if it is "the proper data", which becomes especially clear when data and parity do not align... Then RAID is actually potentially making matters much worse!
@@tonypilborg True, bitrot is an issue, but fortunately, with ZFS you don't really have to worry about that anymore :)
@@BartKuipersdotcom I know - I am a happy ZFS user :)
The comment was targeted much more towards Morten (and the rest of the viewers)... to focus on the fact that with the large (many TB to PB) pools being deployed today (vs. 15-25 years ago when RAID had its heyday), what RAID provides is in many instances a false sense of security.
"As we all know, modern hard disk drives do internal bad sector management inside their firmware; that is, when the drive detects a physically damaged or unreliable physical sector, it replaces the bad one with a good one from the reserved sector store. This mechanism eliminates the need for the OS to do bad sector management itself. Users can track the sector replacement count via the S.M.A.R.T. attribute called 'reallocated sector count'."
If there is bitrot, the drive will be the first to try to fix it, with the checksum it saves with every block. The RAID controller also does this task:
"Patrol Read for media scanning and repairing: Patrol read is a background sentry service designed to proactively discover and correct media defects (bad sectors) that arise normally as a disk drive ages. The service issues a series of verify commands, and if a bad block is discovered, the card's firmware will use RAID algorithms to recreate the missing data and remap the sector to a good sector. The task is interruptible based on controller activity and host operations. The firmware also provides an interface where the patrol read task can be initiated, set up for continuous operation, and terminated from a management application. Patrol read can be activated by manual command or automatically." - this is from a Lenovo M5015 card, 9 years old.
So yes, using an HBA you do not get the services that the RAID card provides, so you need ZFS to do that for you, or Btrfs, ReFS...
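The per-block checksum idea that ZFS, Btrfs and ReFS use can be sketched in a few lines. This is a simplified in-memory model for illustration, not any real on-disk format:

```python
# Store a checksum alongside each block, verify it on every read,
# and repair from a redundant copy when the checksum mismatches.

import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

block = b"important payload"
stored_ck = checksum(block)
mirror = block  # redundant copy, e.g. the other side of a mirror vdev

# Simulate silent bitrot on the primary copy:
data = b"importAnt payload"

if checksum(data) != stored_ck:           # detection on read
    assert checksum(mirror) == stored_ck  # the mirror copy is still good
    data = mirror                         # self-heal from redundancy
print("read returned:", data)  # b'important payload'
```

The drive's own ECC and the controller's patrol read catch unreadable sectors, but only a filesystem-level checksum like this catches a sector that reads back fine yet contains the wrong bits.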
HBA cards are essentially just extra ports, and drives are seen by the OS as individual drives. RAID cards are software and hardware that (as you said) present the drives to the OS in certain PROPRIETARY configurations.
For modern operating systems and for normal uses, an HBA is less headache and pain for all involved. Very simple and straightforward. RAID cards have a huge problem: if your RAID card - with a certain firmware, hardware revision and driver version - FAILS, you have to get the same exact RAID card with the same exact firmware, hardware revision and driver version.
Now if an HBA fails, you can pretty much use any HBA, as long as all the drives the OS is expecting to be there are there. An example is setting up a software RAID array with the drives connected to the HBA: even though the HBA has nothing to do with that RAID array, the OS is just looking for those drives, so if any HBA can get those drives to show up for the OS, you are golden.
You touched on the fact that HBA setups take a performance hit from the system resources and overhead required. GREAT VIDEO
Hi Chuckles Nuts
Thank You very much! I have actually shown moving a drive from an x3650 M2 to an x3650 M3 and M4, and only lately found that it did not work with the M5. So with RAID cards there is also some backwards compatibility. I did not even mess with the firmware; it kind of just worked on the older servers.
Thank you for watching! :-)
This was a good informational video - Doesn't really matter that you spelled Performance wrong !!!!
Keep Up The Good Work !!!!
Hi Tom Chambers
Thank You very much! Yes, I fucked that up a little :-) it must be some bitrot...
Thank you for watching! :-)
He had all the right letters, just not necessarily in the right order - (Morecambe & Wise joke)
You must pre form before you per form.
About the performance impact of resilvering: a "hardware" controller will resync the whole disk (e.g. 10 TB) even when only 1 TB is used, whereas ZFS will only resync the used space - so it's much, much faster. Also: while resilvering, the "hardware" controller throttles the resync too, and normal operations will be slower.
Yes, and if you actually used more of the storage on those 10 TB drives you put in, ZFS will be slowing down your system way more, and getting back deleted files is a no-go.
ZFS has no problems with hundreds of TB per pool (with 10+ TB drives, that's actually not much nowadays). Getting back deleted files? You lost me there.. What do you mean exactly?
It's worth noting that those flash drives on the front of the EMC can probably do 3GB/s each, which would saturate a 200Gbps network card with just 8 drives.
Also, from my own research, enterprise flash drives usually have some capacitors on the drive itself to allow in-flight data that arrives in the drive's cache to complete the journey. Given the latency and speed of those things, they probably don't view power loss to the system as the same problem it once was.
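The arithmetic behind "saturate a 200Gbps network card with just 8 drives" checks out, assuming the 3 GB/s per-drive figure and ignoring protocol overhead:

```python
# 8 drives at 3 GB/s each, converted from gigabytes to gigabits.
drives = 8
per_drive_gb_s = 3                         # GB/s per flash drive (assumed)
total_gbit_s = drives * per_drive_gb_s * 8  # bytes -> bits
print(total_gbit_s, "Gbit/s")  # 192 Gbit/s, just under a 200 Gbps link
```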
But if you expect high-performance storage on your server, it might eat up a lot of the server's CPU resources.
@@MyPlayHouse Sure, if it's going through the CPU. RDMA is probably the preferred way to do this, load the data from disk to memory and then straight out through the NIC. I believe some newer stuff (I've seen on Serve The Home) can directly access the nvme disks, so they have quite powerful arm chips on the NIC itself - it's basically a tiny computer that does one thing.
ZFS can also run as fast as the old RAID cards; you just have to tune the sync/commit settings or use a ZIL (SLOG) device.
If you are paying a per-core license (if your server is an Oracle enterprise server, you can get 5-10 good RAID controllers for just one core license).
If CPU is not an issue, you can use ZFS for storage.
There are some RAID cards like the Avago 9300 series and above (I promise I'm not paid by them!) which can do HBA (aka JBOD) mode at the same time as RAID mode. It is an expensive way of doing things, but if you need some drives presented directly to the OS (like OS drives) and others handled exclusively by the RAID card (like big R6 storage blocks), then you can do it on one card. This means you get some performance boost by being able to use the RAID cache for the JBOD drives, so you can get even faster OS drives. The battery backup isn't necessary for RAID caching to work - it's just strongly recommended, and can be overridden with the "Always write back without BBU" option.
Some HBA/JBOD controllers can detect drive failures and generate the same kind of alerts that you get from RAID cards, as they are actually based around the same controller chip - just with different features enabled (and chips missing alongside them).
PS: the large card you showed will, I think, take 256 drives on those connectors through an internal-to-chassis SAS expander, and is not just limited to the 16 that you mentioned.
Hi Claire Clough
Thank You very much! The HP P441 card I showed can be both a RAID card and an HBA - a very expensive HBA :-)
Thank you for watching! :-)
Hello Sir, I have watched and enjoyed your videos for a while. I have a question for you. I have an x3650 M2 Type 7947, my first rackmount server to play with. I believe I have to upgrade the default RAID card in order to support 2.5" 2TB+ drives inside. I would also like to add the additional backplane to make the server run 12 drives. Can you please suggest a few internal RAID or HBA options?
Yes, the x3650 M2 had a RAID controller that could only do 2TB. The M1014, M1015, M5014 and M5015 from the x3650 M3 will do bigger drives.
@@MyPlayHouse Thank You, Sir. Keep up the good work!
Good to learn what are the pluses for why enterprise needs a raid card.
Hi andyhello23
Thank You very much! It's good for some stuff. I asked Dell; they would not recommend software RAID unless you are on a really tight budget.
Thank you for watching! :-)
I can say that HBAs are used in enterprise environments when you connect the server with an HBA card to a SAN... Here we have IBM xSystem servers connected to a DS3500 (very old, but rock solid) and a Storwize 5000 with SAS HBA cards. It's very fast... and the RAID stuff is managed in the SAN.
Hi Matias M.
Yes, but then you have a storage controller that does what the RAID card would do, and some more.
Thank you for watching! :-)
I want say I love your videos 😍
Hi Barakat Al-Hamzi
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
About the security comparison: please read up on journaling file systems again :) In ZFS specifically there's the ZIL. Also: how many faulty batteries have you replaced on "hardware" controllers? :)
Hi salat
Thank You very much! Two batteries; they usually last the server's lifespan, at least on Lenovo. Okay, on HPs you have a point there.
Thank you for watching! :-)
Happy New Year, really enjoyed your video. Currently working on a Supermicro X8DTU-F, a 4-bay 1U server; I need an HBA or RAID card to hold the OS, not sure which. Searched your eBay store but not sure what I need. Any advice would be appreciated, thanks.
Happy New Year! And thank you very much, and best of luck.
I picked up a couple of HP-branded SAS drives for one of my servers, but it turns out they are from an HP 3PAR storage box, which uses a custom format of 520 bytes per block, making them useless inside a server attached to a SAS HBA or RAID card. The only option to get them working will be to attempt a low-level format to change them to 512 bytes per block. Apparently EMC do similar things...
You are in luck, I did a video on that two years back: th-cam.com/video/BbtPPH3W7nU/w-d-xo.html
That might help you.. :-)
@@MyPlayHouse Which SAS HBA are you using? I have an old ProLiant, and the old
HP Smart Array doesn't have the ability to present the raw disks to the OS. There are cheap IBM 3650s here that I could pick up to do it - but the card would need to be built in or on board, as the servers I know are for sale have been stripped, and while I could get them going again, it would be another job to do...
Hi Morten,
thanks for your video, it's been really helpful!!
I'm building a small "NAS" for home use only with Unraid in an HP MicroServer G8 I got for free!
I need all drives presented as if through an HBA; do you think buying a card such as an HP H220 or HP H240 would be better than getting an HP P420 and forcing it into HBA mode?
With Unraid it's a problem if it cannot get the correct temperature of all drives, so I think an HBA card is better for this usage than a RAID card, but I might be wrong as I'm not an expert!
Thanks!
Use whatever card you can get the cheapest; it is just forwarding the drives to the operating system, so it does not need to be fancy...
An HBA is also great for running ZFS (FreeNAS) on ESXi.
Hi udm
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
This video helped a lot
Hi Alexander Ivanchev
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
18:05 So, new hard drive lights your OS on fire in HBA. Gotcha!
Hi Joel Doxtator
I think you got the most important, out of the video :-)
Thank you for watching! :-)
I think some of your examples might be incorrect in some cases. HW RAID cards can have more problems with data consistency because the HW RAID is not application/data aware, and in a way "tricks" the application: it acknowledges to the application that data is safe when it *might* not be... In your example, the cache has a time limit and is not safe forever. Additionally, in clustered application environments, where your application workload might be distributed, if a node with a HW RAID failure tells the application it has completed a data transaction, but the data is stuck in the RAID cache and hasn't yet been flushed to disk, the application will continue to think the data was committed, and the workload may migrate to another node in the cluster falsely assuming the data was committed when it was not. From an application standpoint, part of what ZFS does so well is that it allows you to handle the low-level storage in a "data aware" way. A very simple example of this is when you do a data consistency check. With HW RAID, it does its checks across every single stripe/logical block, even the empty blocks, because it doesn't know which ones are actually in use. With ZFS, if you do a data consistency check ("scrub" in ZFS terms) on an "empty" data pool, it will finish in less than 1 second because it knows there's no "real data." I know that is somewhat of an impractical example (empty data sets are not very useful), but it helps as a mental exercise to understand how powerful "data aware" storage technology can be.
On the matter of performance, there are a lot of different variables to consider: the number of I/O transactions, the size of the I/O transactions, latency, the number of times data is copied while moving from source to final destination, etc. Performance can be a complicated subject, so I'm not saying my tiny comment here is comprehensive, BUT think about this one aspect: a typical HW RAID controller has a dual-core, maybe quad-core (?) processor at 1-2 GHz clock speed (although it does have the advantage of being specifically designed for storage I/O), vs. typical server CPU resources of 10+ cores (a Westmere-EP dual socket was 12 cores) to 50+ cores (a modern dual-socket system), often running 2-3 GHz. So it is possible, at least with modern multi-core servers, that your CPU resources are much more abundant than what you will have in your HW RAID controller. So software technologies like ZFS (with an HBA card) that use the CPU might be able to do a lot more I/O (and ZFS does do a lot more: checksums on all data, compression, etc.). But you are right in that software storage like ZFS does take away some of the CPU resources from the applications. HW RAID does have some advantages: because it responds from the controller+cache, transaction latency can be much lower than with HDDs, which can be very important for some applications. But if latency is important, you can solve it with SSDs+HBA these days.
Another consideration is features. When you buy a HW RAID controller, it comes with a fixed set of features, and the manufacturers can really only put so many features into it, because all of that has to be programmed into the firmware, which is relatively tiny. Software technologies like ZFS have an evolving set of features. If a new feature is added to ZFS, I can keep the same hardware and upgrade ZFS to get new features. And the new feature possibilities are "endless", since software has access to all CPU cores and all the RAM in the system. For example, ZFS by default uses up to 50% of your RAM for the ARC (cache)... compare the 1GB or 2GB of cache available on HW RAID to typical server system RAM (32GB to 1-2TB).
Also, HBAs have been around for a long time. Even way back in the days when I was a SunOS or Solaris system admin, I worked on servers with SCSI drives that used software RAID technologies like DiskSuite, which had to use SCSI HBAs. But that was before multi-core servers; it was multi-socket at the time in some systems, and CPU resources were less plentiful. So back then, having HW RAID really did seem superior to the software RAID technologies using the CPU; and this was also *before* ZFS came out. But that was a long time ago, and since then, CPU/RAM resources in servers have multiplied many times while HW RAID has not evolved as fast.
As always, love your channel... thanks for making a video about this topic!
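The "data aware" scrub point above can be sketched with a toy model; all names and numbers here are made up for illustration. The filesystem-level scrub walks only the blocks its allocator knows are in use, while a controller-level consistency check has no allocation map and must verify every stripe:

```python
# Toy model of scrub/consistency-check work, measured in blocks touched.

total_blocks = 1_000_000            # size of the array in blocks
allocated = {42: b"a", 99: b"b"}    # only two blocks actually hold data

def hw_consistency_check_work() -> int:
    # The controller has no allocation map: it verifies every block.
    return total_blocks

def zfs_scrub_work() -> int:
    # A filesystem-level scrub checksums only allocated blocks.
    return len(allocated)

print(hw_consistency_check_work(), "vs", zfs_scrub_work())  # 1000000 vs 2
```

This is why a scrub of a nearly empty ZFS pool finishes almost instantly while a controller consistency check always takes time proportional to raw capacity.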
Hi Art of Server
Thank You very much! This was too long :-) and this is clearly a religion...
Thank you for watching! :-)
For FreeNAS, are there any suggestions for HBA cards? Used ones for under 100, maybe lower? I saw Dell cards for about 50 bucks, but I'm afraid they could be incompatible.
Yeah, Dell has used other connectors than most others... :-/
@@MyPlayHouse Yeah, but I was thinking more about driver and software compatibility, since FreeNAS doesn't support every piece of hardware (like NICs). For now I'm just using the standard SATA ports on my ASUS server board and the Supermicro backplane of the hot-swap case.
From your video I understand that the BBU on the RAID card is there to protect the RAID card's cache RAM contents when the RAID array crashes and becomes non-responsive - which implies that RAID disk arrays crash regularly. How regularly?
So you are taking away one point of failure by being able to add disk redundancy via the RAID card, but adding a new point of failure in the form of a RAID hang/lockup - do you mean that can happen independently of a random computer hang?
Which makes me realise an uninterruptible power supply alone is not enough if you have a RAID card RAM cache.
Approximately how long does a hardware RAID card add to the boot-from-cold time of a computer - seconds? Or longer?
Hi Ri Anders
As far as I know, it is only in the event of power loss.
Thank you for watching! :-)
Spinning rust is dead... long live SSDs :) I hate RAID controller cards and prefer HBAs. The video was very fun to watch this morning with my coffee :) Your new title is Dr. Whiteboard :)
Hi Unkyjoe's Playhouse
Large drives have a few good years still. Let's see - when a 10TB SSD is something you get for $400-ish... I think that's at least two years out. A 4TB SSD is still $550.
I prefer RAID cards for local storage in servers; I just wish I could get one with SHR... :-)
Thank you for watching! :-)
Unkyjoe, why do you hate RAID cards? I've been in IT about the same amount of time as you, and RAID cards have saved my butt far more times than they've caused problems.
If your spinning drives are rusting, move from your location and get another drive or brand. You’re doing it wrong.
In Proxmox with your HP DAS, did you put your RAID card in IT mode to pass through the HDDs?
Hi Eric Yost
No, as the card did not see the drives, or many of them; I do not expect it to pass through a drive it does not see. And I got another card that is in IT mode.
Thank you for watching! :-)
Hello My PlayHouse,
Thank you for the excellent presentation! I'm a little out of my depth with some of the information; I still have a lot to learn. Question:
I have a Dell T7500 workstation, 48GB RAM, 2x X5670 CPUs and 2x 500GB SSDs, and the motherboard has PCIe 2.0 and maxes out at 3 Gbps, I think. I am researching either a PCIe 3.0 express card at 6 Gbps, an HBA at 6 Gbps, or a RAID card at 6 Gbps (on a very modest budget). The main criteria are SSD read/write speed for gaming and reliability. Could you give me some insight on which might be better in my setup?
It looks like the Dell T7500 has PCIe 2.0; you cannot improve that. Maybe look at my latest video, where I put NVMe drives in an old server.
One thing that most RAID cards cannot do, which ZFS and MD-RAID can, is create a RAID array / vdev across cards.
Also, an issue you have seen yourself is the need to always keep the firmware on RAID cards up to date; replacing a card with one on a different firmware revision can cause issues importing 'foreign configurations' from the disks.
Hi bwzes03
Thank You very much! Yes, if you need more drives than the ports on one card can handle, you need SAS expanders; those are actually a bit like an HBA in front of the RAID card.
Thank you for watching! :-)
I have a dilemma: I acquired some 3TB SAS drives and I'd like to put them to use in a regular PC. What hardware would I actually need to get the PC to recognize the drives? I'm using an HP Z400 workstation for starters, and I'd like to put 2 of those 3TB drives in there. Some say I need a SAS-to-SATA converter, but it didn't work for me. Others say a SAS-to-SATA interposer, and some say I need a SAS controller card. I just need to use the drives for storage. Thanks in advance.
Hi BlaGGah
You can do it with a SAS HBA or RAID card; you need to make sure the card can handle >2TB drives - some older ones do not - and a cable like this: amzn.to/2IUOBQ2
Thank you for watching! :-)
So if I had a project that was mission-critical (absolutely cannot lose any data or have any corruption), where speed is necessary, redundancy is preferred, and the drives are planned to be kept in offline storage after being filled with archival data, would I be better off with an HBA or a RAID controller? I would like to use a computer consisting of an AMD 2400G CPU, 16GB of DDR4-3000 RAM and up to 8 - 10TB HDDs, with a redundant PSU, an M.2 SSD for the OS, a short-depth 4U server chassis, a UPS in the rack and a 10-gig NIC. I have not purchased anything, so suggestions are most definitely welcome, and if you do suggest something different, please give reasons. Thank you all so much!
Looks like a DIY server build ;-)
@@MyPlayHouse Ideally, but I can't make a decision on HBA vs. RAID, or rather hardware vs. software RAID 6 support. I just want it to work... and with computers it's never that easy lol
It also depends on your OS; some systems like software RAID better than others. ESXi does not like it at all, but FreeNAS and Proxmox love it.
A true hardware-based RAID card, PCIe 2.0 or 3.0, for a PC running Windows 10 - where can I get one that is reasonably priced? No host bus adapter cards. Internal mini-SAS.
I get mine on eBay, Amazon or AliExpress - wherever the price is right.
@MyPlayHouse You can get a host bus adapter, or a proprietary server card that you can reflash to serve as a host bus adapter, but finding a true on-card RAID controller with drivers for Windows 10 is quite a challenge. I don't want to use operating system resources to do software RAID on my PC.
One thing you forgot to mention. What if the card fails?
RAID is not a full standard. It just gives instructions on what to do, but not how to do it. This means that different manufacturers will use their own mechanisms to achieve that goal, making different cards and RAID arrays incompatible with each other. This allows for vendor lock-in to keep your data hostage.
HBAs, however, being dumb, allow another HBA from another vendor to be used to reinstate the connections, eliminating the problem of your data being vendor-locked.
Hi pseudonymity0000
Yes, that is true; if your RAID card dies, it's best to get one like it. But if an HBA card dies in a production environment, I would also try to get one just like it, to not mess with the OS, file system and so on.
Thank you for watching! :-)
Bad example. No one uses an HBA card with JBOD disks; usually a logical device is presented to the HBA from a disk array, where the disks are already divided into the necessary RAID level and all of them are managed by the array. In that case, the RAID controller does not have any advantages with external drives.
Hi alros1990
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Are you able to do some performance tests, e.g. RAID 6 in software vs. hardware? I really doubt that "hardware" controllers ("hardware" in quotes, because it's only software on a small embedded controller with its own small CPU anyway) are faster than software on modern hardware (and by modern I'd say less than 15 years old).
Hi salat
Thank You very much! And now take out your graphics card - it's just dedicated hardware that calculates stuff for your screen, and your CPU can do that too. Often the CPU is as expensive as the GPU, so it must be as fast at it.
Thank you for watching! :-)
A graphics card isn't a complete computer that runs its own OS - a RAID controller is.
@@salat Yes, proper hardware-accelerated RAID cards are faster when compared feature-for-feature. Otherwise your CPU needs to calculate a lot just for the RAID structure and FS features. The GPU analogy works.
Some of these HBA cards are really expensive, so I don't think there is much of a saving in going to an HBA, but it is nice to use the software. I have a PERC 710P in my T620, so I may want to get an HBA card instead. I can get a used one for about 50.00 US.
Hi C MJ
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Morten, you should have some t-shirts made for your channel; I would definitely buy one if you did.
Hi Ben King
Thank You very much! I would like to have some stuff to sell :-) But not sure about t-shirts... maybe.
Thank you for watching! :-)
Thanks, thanks, thanks. Now I finally get it.
You are very welcome! :-)
Good info! So if people flash cards into IT mode, they actually turn a RAID card into an HBA?
Hi Henk Kok
Thank You very much! Sometimes it makes good sense; other times you ruin a perfectly good RAID card.
Thank you for watching! :-)
Hello friend, I was wondering if you could find the time to help me get my 7979 server (BIOS 1.19) running. Currently it can boot into Windows, but so slowly! The ROM diagnostics pass everything but get stuck on the memory test.
I have tried swapping DIMMs, even trying a different brand.
Any ideas?
Thank you so much!
Hi Reuben Sheffel
No, not as such. Did you put the RAM in in the right order? Otherwise, no ideas :-/ Take out everything not needed! Maybe replace the CPU cooling paste; it might make it really slow if it has none. Glad you liked the video :-)
Thank you for watching! :-)
@@MyPlayHouse
Thanks for the tips! I tried those and no luck. I'll bet the board got cooked, something in the ram pathway.
Keep on making great videos, and thanks again!
Hi Morten! Remember me? 😁 I'm MojangYang. I don't expect you to, because it has been months.
I have a question for confirmation: if my server and the RAID controller die, can I theoretically connect the SAS drives to an HBA controller on a different device and access my data?
Yes, and import the RAID configuration. I have done it.
thanks very much
Hi Prozied
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
starts at 3:00
Hi Michael Stepniewski
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Please could you do a video of that HP xw4600-type workstation? 🙌
Hi ralph ups
There is a video on the HP workstation somewhere... :-)
Thank you for watching! :-)
thank you 🙏🏻🙏🏻🙏🏻🙏🏻🙏🏻
Hi Mohammad Efhami Sisi
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
I think Sophia the AI has just fallen in love with it.😚
Hi ralph ups
Thank You very much! That day might come :-) But I think AIs will be as smart as us for less than a second, and after that they will be bored by just how slow we are to work with.
Thank you for watching! :-)
Yeah, hardware RAID is fast. RAID 5 and 6 often use a hardware XOR engine for parity data. Hybrid RAID cards don't have this feature though, so beware. Checksumming of drive sectors can happen on-card, if you low-level format SAS drives to use it (don't confuse it with larger-sector formatting though). Software RAID (Btrfs, ZFS and ReFS) uses the CPU for checksumming. The battery backup on the cache ensures that it can use true write-back caching and not pseudo write-back, like software RAID does. Consistency checks create much less of a performance hit on the system than with, for example, ReFS. On top of that, you can't defragment a ZFS filesystem. However, two things software RAID is known to be faster at are tiering (or SSD caching) and handling SSDs. As you say, the hardware RAID controller doesn't know anything about your data, and it can't send trim commands to flash-based SSDs.
Hi Tomas Kjersgaard
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
You have a dope closet.
Thank You very much,, :-)
The main thing to add is that if you use an HBA with a storage OS like ZFS, Synology, etc., and set it up with caching drives, it will duplicate the caching/power-off protection that a dedicated RAID controller has.
Hi thtadthtshldntbe
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
I feel like the explanation left a bit to be desired, to be honest. If you already know what this is and deal with it all the time it's kind of redundant (excuse the pun), but for someone building their first FreeNAS box it might be confusing. For instance, at 7:17: the BBU protects the data in the RAM in case of power loss; it's not responsible for performance or for allowing the card to work. (Some cards may disable caching features if they detect the lack of a BBU, and there may or may not be a way to force the cache on, but you can run a card with cache features enabled without a BBU.) There were other things you said that I don't agree with and that factually aren't correct XD.

One thing I can say is that in theory HBAs should be cheaper, but especially on the used market it can be the opposite. No one wants old RAID cards, especially those that can't be flashed into IT mode. Think of really old cards like the HP P400/P410: those are almost e-waste nowadays, but if they were HBAs they would still be useful today, and such cards end up on eBay for next to nothing. Meanwhile, even the cheapest RAID card you can flash to IT mode will cost you double or triple that, let alone a proper HBA, or worse, a brand-new one. Because people are aware of these flashable cards, demand is up, sellers take advantage, and prices go up.

That being said, I still use some RAID cards for specific applications where I don't need data resiliency, only redundancy. Especially if you're using filesystems like NTFS/ext4, hardware RAID is perfectly fine. But I also see a lot of overhead on many controllers, where mdadm on Linux or even dynamic disks RAID on Windows can perform better, so it's always a good idea to benchmark any hardware before committing to it.
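[Editor's note] The BBU point above can be sketched in a few lines. This is a toy model (hypothetical class and method names, not real controller firmware) of why the battery matters for write-back caching: the card acknowledges a write as soon as it lands in cache RAM, and the battery is what keeps that RAM alive through a power loss until it can be flushed:

```python
class Controller:
    """Toy RAID controller: models write-back caching and a BBU."""

    def __init__(self, has_bbu):
        self.has_bbu = has_bbu
        self.cache = []   # dirty blocks ACKed but only held in cache RAM
        self.disk = []    # blocks actually committed to disk

    def write(self, block):
        # Write-back: ACK immediately once the block is in cache (fast).
        self.cache.append(block)

    def power_loss(self):
        # Without a battery, the dirty cache contents are simply gone.
        if not self.has_bbu:
            self.cache.clear()

    def power_restore(self):
        # If the battery preserved the cache, flush it to disk on boot.
        self.disk.extend(self.cache)
        self.cache.clear()

safe = Controller(has_bbu=True)
risky = Controller(has_bbu=False)
for c in (safe, risky):
    c.write("data")
    c.power_loss()
    c.power_restore()
```

After the power cycle, `safe` has the acknowledged write on disk while `risky` has silently lost it, which is exactly why cards without a BBU often refuse to enable write-back mode.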
Hi Dash Tesla
Thank You very much! That was a lot,, Yes, prices of used HBAs are high,, because of FreeNAS and unRAID,, and other applications that rely on software rather than hardware for the disk handling.
Thank you for watching! :-)
HBA is definitely not new. We just didn't often call them HBAs way back when. SCSI HBAs were extremely common before the turn of the century. Fibre Channel HBAs have been common since the late 80s. RAID controllers came later, then started going away again as people started to use software RAID (like ZFS) that did things a RAID controller couldn't.
Hi Jesper Monsted
Thank You very much! A lot of features can be built into software RAID,, but it is all taking performance away from the CPU.
Thank you for watching! :-)
@@MyPlayHouse or hogging RAM like crazy (like ZFS). Great videos man, keep them coming!
In my experience, an HBA of the same generation is many times faster on big writes than a RAID card. On tiny writes with high I/O, a RAID controller is faster. For home use an HBA is almost always faster.
Hi Michael Jones
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
thank you sooooooooooo much
You are Soooooooo welcome :-)
Mr. Morten Hjorth, sir,
Good to hear that your video editing guy has turned down, or off, the background music in the video. Now I can hear your walkthrough instructions clearly. 😃
It wasn't loud - I think you're just being picky.
Hi ralph ups
Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
RAID cards are not really "hardware" (well, they are in that they are a card with chips on it, but the RAID is still done in software). They are just software running outside of the OS. I think that's important, as people sometimes think that because it's hardware it's above things like software bugs, etc.
I think you're confused about what the distinction is about. It's about whether it runs as software on your CPU, or on its own dedicated hardware. While there is still software running on that dedicated hardware, that has nothing to do with the distinction being made.
Hi Andrew Joy
You are right,, RAID cards are hardware which is controlled by software. It's very specialized, and does not have to worry about sharing resources with other systems,, and the little it does, it does very efficiently and fast.
Thank you for watching! :-)
Holy shit dude, you've got a mini data center at your home... 3:41 - why did you trace the physical cards out onto the paper...? lmfaooo
Hi S To
Thank You very much! I do weird stuff to entertain you and also me :-)
Thank you for watching! :-)
I still use old-school hardware RAID.
And that works great. There are solutions where I see a good use case for software RAID. But not always where it is used.
It should have been performance rather than preformance...
Hi David Wilson
AHH the F**,, you are right!,, no chance of fixing that now :-/ Thank You very much! glad you liked the video :-)
Thank you for watching! :-)
Hardware RAID isn't as flexible with mixed drives as an HBA using software RAID...
Hi Brian Froeber
You are right about that! glad you liked the video :-)
Thank you for watching! :-)
True. This is one of the advantages you get with the RAID card. Fewer mistakes down the road ..
You'd need to redo the video, as facts/prices vary constantly.
Yarh,, It does not matter much in the longer run.
A lot of ads, Morten [translated from Norwegian]
I do not understand?