Couldn't you argue, though, that the processing power of any modern server-grade CPU will be greater than that of whatever comes with your RAID card? Being responsible for server builds is something I've never had to do, so my experience is minimal, but how much of a hit does software RAID actually put on a CPU? Is there any way to cap or limit the amount of CPU that software RAID can consume?

In my experience, it comes down to performance. When you are using hardware RAID, the RAID processing is done by a separate hardware device. When you are using motherboard RAID or software RAID, all the processing is done by the CPU instead, taking resources away from other applications and processes.
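On the "cap it" question: Linux md doesn't expose a direct CPU limit (the parity work runs in kernel threads), but it does let you throttle how aggressively a resync/rebuild runs, which is where most of the pain usually shows up. A rough sketch of reading and lowering those limits follows; the 50 MB/s figure is just an illustration, not a recommendation.

    # Minimal sketch (Python 3, run as root to change values). The
    # dev.raid.speed_limit_* sysctls throttle resync/rebuild bandwidth
    # in KB/s per device; they are not a hard CPU cap.
    from pathlib import Path

    SPEED_MIN = Path("/proc/sys/dev/raid/speed_limit_min")
    SPEED_MAX = Path("/proc/sys/dev/raid/speed_limit_max")

    def show_limits():
        # Print the current resync throttle values.
        print("speed_limit_min (KB/s):", SPEED_MIN.read_text().strip())
        print("speed_limit_max (KB/s):", SPEED_MAX.read_text().strip())

    def cap_resync(max_kb_per_sec=50_000):
        # Lower the ceiling so a rebuild leaves headroom for real I/O.
        # Not persistent across reboots; persist via sysctl.conf if needed.
        SPEED_MAX.write_text(str(max_kb_per_sec))

    if __name__ == "__main__":
        show_limits()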
I have two production servers running software RAID in Ubuntu, but they both just store backup images for a Windows server. If I needed to access data on that server often, I would have gone with a hardware RAID controller. For what it's worth, those two servers have been rock solid with no issues whatsoever.
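If anyone wants to keep an eye on arrays like that, the kernel reports their state in /proc/mdstat; here's a quick sketch that flags degraded arrays by looking for the "_" marker md uses for a missing member (only an illustration, not the only way to do it):

    # Sketch: scan /proc/mdstat and report any degraded md arrays.
    # A healthy status looks like "[2/2] [UU]"; a degraded one like "[2/1] [U_]".
    import re
    from pathlib import Path

    def degraded_arrays(mdstat="/proc/mdstat"):
        bad = []
        current = None
        for line in Path(mdstat).read_text().splitlines():
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                bad.append(current)
        return bad

    if __name__ == "__main__":
        broken = degraded_arrays()
        print("Degraded arrays:", broken if broken else "none")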
The other PITA with SW RAID is the weekly resync on a Sunday evening!

Remove the cron?
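Or tame it rather than ripping it out: the distro's mdadm package typically just schedules an array check, and you can start, inspect, or abort that yourself through sysfs. A rough sketch, assuming a single array named md0 (adjust the device name to yours):

    # Sketch: drive the md "check" (scrub) by hand via sysfs instead of
    # waiting for the packaged cron job. Needs root to write.
    from pathlib import Path

    SYNC_ACTION = Path("/sys/block/md0/md/sync_action")

    def current_action():
        # Returns e.g. "idle", "check", "resync", or "recover".
        return SYNC_ACTION.read_text().strip()

    def start_check():
        # Kick off a scrub at a time of your choosing.
        SYNC_ACTION.write_text("check")

    def stop_check():
        # Abort a running check if it lands at a bad time.
        SYNC_ACTION.write_text("idle")

    if __name__ == "__main__":
        print("md0 sync_action:", current_action())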
Why would I do that? There are more benefits to the resync than disadvantages.

We use hardware RAID with everything; the performance benefits are too good to pass up, and controllers are cheap.
We experimented with KVM on software RAID, and the I/O results were abysmal.
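"Abysmal" is worth putting a number on; running the identical random-write test against both setups makes the comparison concrete. A rough sketch using fio (assumes fio is installed; the test file path is just an example and should sit on the array or guest disk being measured):

    # Sketch: run a 4K random-write test and report IOPS so the two
    # setups can be compared like for like.
    import json
    import subprocess

    def random_write_iops(path="/mnt/test/fio.bin", runtime_s=60):
        cmd = [
            "fio", "--name=randwrite", "--filename=" + path,
            "--size=1G", "--rw=randwrite", "--bs=4k", "--iodepth=32",
            "--ioengine=libaio", "--direct=1", "--time_based",
            "--runtime=" + str(runtime_s), "--output-format=json",
        ]
        result = json.loads(subprocess.run(cmd, capture_output=True,
                                           text=True, check=True).stdout)
        # Sum write IOPS across jobs (only one job is defined here).
        return sum(job["write"]["iops"] for job in result["jobs"])

    if __name__ == "__main__":
        print("randwrite IOPS:", random_write_iops())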
These were my thoughts entirely. If you are going out to grab 4+ disks, why even consider software? Take the strain off the CPU entirely, get a good RAID card, and you're done. All our servers run RAID10, and you can add SSD cache to greatly improve performance. Software RAID really is silly.

I don't know why there is even a debate about this. Hardware RAID is better, so you use software RAID when you have a two-bay server like a blade, or when you want to be cheap and not spend money on a decent controller. End of story.