
Best disk storage/RAID server setup for VPS host server?

ICPH

Member
Hello,

I found that my hosting server requires more I/O than one classic 7200 RPM disk can handle.

I think the I/O of two 7200 RPM disks is the minimum I should have.

Three 7200 RPM drives would give quite sufficient I/O, with some room to spare.

There are SSDs, but it's said they are limited on writes (I have about 5 MB/s of data written).

I think data redundancy is required (data kept in two places by the RAID), so that if one drive fails, the array keeps working. Please, what are your ideas on a VPS reseller node disk setup that fits the above requirements and is not too expensive?

What do you think would be an optimal and cost-effective disk storage setup for a VPS host server? (disk types, RAID level, partitioning)
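
Here is the rough math behind my two-to-three-disk estimate, assuming about 80 random IOPS per 7200 RPM drive and the usual RAID write penalties (rules of thumb, not measurements of my workload):

Code:
# Rough I/O estimate for a few candidate layouts.
# Assumptions (rules of thumb): ~80 random IOPS per 7200 RPM drive,
# RAID write penalty of 2 for RAID 1/10, 4 for RAID 5, 6 for RAID 6,
# and a 70% read / 30% write workload.

IOPS_PER_DRIVE = 80
READ_RATIO = 0.7

def effective_iops(drives, write_penalty):
    raw = drives * IOPS_PER_DRIVE
    # Each logical write costs `write_penalty` back-end operations.
    return raw / (READ_RATIO + (1 - READ_RATIO) * write_penalty)

layouts = {
    "1 drive, no RAID":  (1, 1),
    "2 drives, RAID 1":  (2, 2),
    "4 drives, RAID 10": (4, 2),
    "3 drives, RAID 5":  (3, 4),
    "4 drives, RAID 6":  (4, 6),
}

for name, (drives, penalty) in layouts.items():
    print(f"{name:18s} ~{effective_iops(drives, penalty):.0f} IOPS")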
 
Last edited by a moderator:

Xenfinity

New Member
Verified Provider
RAID 5 is popular because of its low cost combined with enhanced performance.  You need three drives to make RAID 5 work, and it tolerates one drive failure.

(Comparison of RAID levels)
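
In short, the trade-off is usable capacity versus guaranteed fault tolerance; here is a quick summary (performance is left out, since it depends heavily on the controller and workload):

Code:
# Usable capacity and guaranteed fault tolerance of common RAID levels,
# ignoring performance (which depends on the controller and workload).

def usable_fraction(level, drives):
    return {"RAID 0": 1.0,
            "RAID 1": 1.0 / drives,
            "RAID 5": (drives - 1) / drives,
            "RAID 6": (drives - 2) / drives,
            "RAID 10": 0.5}[level]

# (level, minimum drives, failures the array is guaranteed to survive)
configs = [("RAID 0", 2, 0), ("RAID 1", 2, 1), ("RAID 5", 3, 1),
           ("RAID 6", 4, 2), ("RAID 10", 4, 1)]

for level, drives, survives in configs:
    print(f"{level:8s} min {drives} drives, "
          f"{usable_fraction(level, drives):.0%} usable, "
          f"guaranteed to survive {survives} failure(s)")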

I still think that the panic over the flash writing limit on SSDs is unduly alarmist.  If you need more I/O performance, you could set up some solid-state drives for caching reads and writes to and from your hard drives.
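
To put that write limit in perspective, you can sanity-check a 5 MB/s write stream against a drive's rated endurance (TBW). The ratings below are hypothetical examples, not any particular drive's spec:

Code:
# Back-of-the-envelope endurance check: how long would 5 MB/s of writes,
# sustained 24/7, take to reach a drive's rated endurance (TBW)?
# The TBW figures are hypothetical examples, not any particular drive's spec.

write_rate_mb_s = 5
tb_written_per_day = write_rate_mb_s * 86400 / 1e6   # ~0.43 TB/day

for rated_tbw in (150, 300, 600, 1200):
    years = rated_tbw / tb_written_per_day / 365
    print(f"{rated_tbw:4d} TBW rating -> ~{years:.1f} years at a constant "
          f"{write_rate_mb_s} MB/s")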

Nick
 

TruvisT

Server Management Specialist
Verified Provider
Anything under RAID 10 for a VPS host node is stupid.

RAID 5 will be nothing but trouble for you. Avoid it and just get RAID 10 and a decent RAID card.
 

raindog308

vpsBoard Premium Member
Moderator
There are SSDs, but it's said they are limited on writes (I have about 5 MB/s of data written).
Whether you mean 5 megabytes/second or 5 megabits/second... that should be a fraction of what an SSD can deliver, no? I really don't understand what you're saying here.
 

Xenfinity

New Member
Verified Provider
Anything under RAID 10 for a VPS host node is stupid.


RAID 5 will be nothing but trouble for you. Avoid it and just get RAID 10 and a decent RAID card.
Care to elaborate why RAID 5 will be nothing but trouble?

Whether you mean 5 megabytes/second or 5 megabits/second... that should be a fraction of what an SSD can deliver, no? I really don't understand what you're saying here.
Indeed, SSDs should have no trouble at all with a consistent 5 MB/s of data to write, since many can write a hundred times faster than that.

Nick
 

DomainBop

Dormant VPSB Pathogen
Care to elaborate why RAID 5 will be nothing but trouble?
Higher rate of failure, among other things.

What do you think would be an optimal and cost-effective disk storage setup for a VPS host server?
If I was giving advice...

RAID10: Best

RAID6: Better than RAID5 due to lower rate of failure

RAID5: Better than RAID1 but keep 2 spares on hand because the disks seem to fail in pairs

RAID1: LOL, look at me, I'm a summer host with a brand new AMD Opteron 4334 from OVH,  480 Mbps DDoS protection fark yeah! Deadpool in either September when school starts or when I max out mommy's credit card, whichever comes first!

NO RAID: Bought me a $25 AMD X4 at Datashack.  Those suckers buying my 2G/$2 plans will never know it's not really RAID10
 

drmike

100% Tier-1 Gogent
RAID6 I saw in an ad recently... rare to see it mentioned.

Big trick everyone and their shady mother plays is SSD caching.

Then there are similar PCI-E memory/disk solutions... but people aren't RAIDING those today...

As a provider your best bet is 4 spinning drives + hardware RAID controller + SSD cache.  It isn't a miracle worker though.
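
Rough math on why the SSD cache isn't a miracle worker; the access times are ballpark assumptions, not measurements:

Code:
# Why an SSD cache isn't a miracle worker: the average service time is
# hit_rate * ssd_latency + (1 - hit_rate) * hdd_latency, so the misses
# dominate quickly. Latencies are ballpark assumptions, not measurements.

SSD_MS = 0.1   # assumed SSD access time (ms)
HDD_MS = 10.0  # assumed 7200 RPM disk access time (ms)

for hit_rate in (0.99, 0.95, 0.90, 0.75, 0.50):
    avg_ms = hit_rate * SSD_MS + (1 - hit_rate) * HDD_MS
    print(f"{hit_rate:.0%} hit rate -> ~{avg_ms:.2f} ms average access, "
          f"~{1000 / avg_ms:.0f} IOPS per outstanding request")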

Answer as always is pure SSD (expensive) or a server with way more drives/spindles.
 

TruvisT

Server Management Specialist
Verified Provider
A new way people are going is 1 TB SSDs in software RAID 1. Seen that around.

Oh the days of 15K SATAs in RAID 10.
 

Francisco

Company Lube
Verified Provider
Jesus tits on Christ, do not go RAID5.

If you use drives over 1TB in size you're going to be in pain when you have to replace one. We learned the hard way about using RAID5 with larger disks. It doesn't even matter if you use 'enterprise' drives; they all fail (we've seen higher failure rates on enterprise drives, even). The only drives that have given us zero issues were Hitachis.

Go RAID6 if you need the space, but ideally RAID10.

If you insist on RAID5, you'd best read up on gddrescue.
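
For a sense of why bigger disks hurt: a rebuild has to read every sector of every surviving drive, and at the commonly quoted consumer rating of one unrecoverable read error per 10^14 bits, the odds of tripping over one get ugly fast. Rough sketch only; the URE rate and sizes are assumptions, not your drives' specs:

Code:
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# reading every surviving drive during a RAID 5 rebuild. Assumes the
# commonly quoted consumer-drive rating of 1 URE per 1e14 bits read;
# enterprise drives are typically rated an order of magnitude better.

URE_RATE = 1e-14  # assumed errors per bit read

def p_ure_during_rebuild(drive_tb, surviving_drives):
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    return 1 - math.exp(bits_read * math.log1p(-URE_RATE))

for tb in (1, 2, 4):
    # 4-drive RAID 5: rebuilding one drive means reading the other three in full
    print(f"4x {tb} TB RAID 5 rebuild: ~{p_ure_during_rebuild(tb, 3):.0%} "
          f"chance of at least one URE")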

Francisco
 
Last edited by a moderator:

AMDbuilder

Active Member
Verified Provider
I would also recommend using hardware RAID 10 in most situations, especially with larger drives.  The larger your array, the longer the rebuild times and the greater the risk of a second failure. All drives WILL fail; it's just a question of time.
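
For a sense of scale: rebuild time is roughly drive capacity divided by the sustained rebuild rate; the rates below are assumptions for illustration only:

Code:
# Rough rebuild-time estimate: drive capacity / sustained rebuild rate.
# The rates are assumptions for illustration; a loaded production array
# usually rebuilds far slower than an idle one.

def rebuild_hours(drive_tb, rate_mb_s):
    return drive_tb * 1e6 / rate_mb_s / 3600

for tb in (1, 2, 4):
    idle = rebuild_hours(tb, 100)  # assumed near-idle rebuild rate (MB/s)
    busy = rebuild_hours(tb, 20)   # assumed rate while still serving VPS load
    print(f"{tb} TB drive: ~{idle:.0f} h rebuild idle, ~{busy:.0f} h under load")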

The only area where I currently question the use of RAID 10 is with SSD drives.  Just to be clear, there is NO question in my mind that RAID should always be used; it's just a question of what is appropriate for the situation.

If you look at talks such as http://www.research.ibm.com/haifa/conferences/systor2011/present/session5_talk2_systor2011.pdf or Intel's notes here: http://www.intel.com/support/motherboards/server/sb/CS-030395.htm, it does make you question whether RAID 5 is still a bad thing.
 

clouds4india

New Member
If budget permits, go for 4x SSDs in RAID 10; otherwise 4x 7200 RPM HDDs in RAID 10 (LSI controller + BBU) is the best option.

Then comes RAM: go for 48 GB / 72 GB.
Then for the processor: for a config like this you could go for 2x L5520 / L5639 and such.
 

willie

Active Member
I'm worrying a bit about the concept of RAID10 instead of RAID6 or RAID60 on big storage servers.  RAID6 by definition can survive any 2-drive failure, while almost 50% of 2-drive failures will kill a RAID10.  I do think it's ok to take a backup or storage server offline for recovery in case of a drive failure.  That's different from an online server that has to stay up during the recovery, at possibly higher risk due to disk contention with running services.
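
The exact fraction depends on how wide the array is; here is a quick way to count it, assuming plain mirrored pairs:

Code:
from itertools import combinations

# Fraction of simultaneous two-drive failures that destroy a RAID 10,
# assuming simple mirrored pairs (drive 0+1, 2+3, ...): the array is lost
# only when both drives of the same pair fail. RAID 6 survives any two.

def fatal_fraction(drives):
    pairs = {(i, i + 1) for i in range(0, drives, 2)}
    combos = list(combinations(range(drives), 2))
    fatal = sum(1 for c in combos if c in pairs)
    return fatal / len(combos)

for n in (4, 8, 16):
    print(f"{n:2d}-drive RAID 10: {fatal_fraction(n):.0%} of two-drive "
          f"failures are fatal")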
 
There's nothing wrong with RAID5, I've been using it for almost 17 years with my home stuff. I wouldn't mind going to RAID10 but I don't feel like rebuilding my arrays. 

You should never consider RAIDing SSD drives, unless you are prepared for write exhaustion to come early due to parity calculation, read patrol, and other pesky vendor-only things ;-)

If you really need the speed, get a hardware RAID controller, like a MegaRAID card, and use a single SSD drive for caching purposes (you need a firmware license to do this; it's worth the effort), and you can still use your normal SATA drives in whatever configuration you wish.

Be advised, SATA (if I remember correctly, ATAPI only allowed this as well) drives do not support disconnected writes, only disconnected reads are supported. You also only get 1 transaction at a time for writes, so in order to get more writes, you need to add more disks. SAS/SCSI has tagged command queuing, which gives you 128 concurrent writes to disk.
 
Last edited by a moderator:

willie

Active Member
RAID5 was much safer 17 years ago than it is now.  Today's drives are large enough that if a drive in a big array fails, the rebuild time is long enough that there's a significant chance of another drive failing during the rebuild.  Drive failures aren't independent events: 1) drives tend to fail in bunches, and 2) rebuilding a RAID is an exceptionally heavy workload which will be hard on marginal drives.  These considerations are why RAID6 was developed.  Add to that the way budget VPS hosts tend to use cheap consumer drives, and RAID5 or RAID50 are both asking for trouble.
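
Even treating failures as independent gives a non-zero baseline, and the two effects above (clustered failures and rebuild stress) only push the real number up. The AFR and rebuild rate here are assumptions for illustration:

Code:
# Baseline chance of another whole-drive failure during a RAID 5 rebuild,
# treating failures as independent. As noted above they are not, and the
# rebuild workload itself stresses marginal drives, so treat these numbers
# as a floor. AFR and rebuild rate are assumptions for illustration.

AFR = 0.05           # assumed annual failure rate per drive
REBUILD_MB_S = 30    # assumed sustained rebuild rate under load

def p_second_failure(drive_tb, surviving_drives):
    hours = drive_tb * 1e6 / REBUILD_MB_S / 3600
    p_per_drive = AFR * hours / (365 * 24)
    return hours, 1 - (1 - p_per_drive) ** surviving_drives

for tb in (1, 2, 4):
    hours, p = p_second_failure(tb, surviving_drives=7)  # e.g. an 8-drive RAID 5
    print(f"{tb} TB drives: ~{hours:.0f} h rebuild, ~{p:.2%} baseline chance "
          f"one of the 7 survivors also fails")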
 
It really depends on the drive, firmware, and controller card. But you're making a mountain out of a molehill; I know /plenty/ of people who still use SCSI RAID5, and currently SAS-based RAID5 with multiple hot spares.

Earlier versions of RAID6 were total hackjobs and had vendor-specific quirks that did unusual things on the card itself.

Some/most/all SATA drives were terrible at failing in an array due to onboard firmware bugs (older Seagates wouldn't transmit FAILURE messages for up to 90 seconds; SAS/SCSI drives did not have this problem).

Rebuilding an array on SATA disks is going to take a long time because only one write transaction can be outstanding at a time, unlike SAS/SCSI. I believe /some/ controllers set WCE to 0 when rebuilding an array on SATA disks, which disables write caching, making writes brutally slow.
 