amuck-landowner

SSD guru needed. Looking for an SSD with good Garbage Collection.

KuJoe

Well-Known Member
Verified Provider
Basically I need to buy six SSDs for my personal server and I'll be running them in RAID10 with a hardware RAID controller on an OS that does not support TRIM (and even if it did, I've read that hardware RAID would negate it anyway).

I've read that SSDs with SandForce controllers have good background garbage collection, but I'm looking for some suggestions based on others' experience.

My goal is to get six 180-256GB SSDs for less than $100 each, so let me know what you would recommend, taking the lack of TRIM into account. Thanks! :)
 

dcdan

New Member
Verified Provider
For a home server I'd definitely go for five used Intel S3500s in RAID-6. This will actually give you better fault tolerance than RAID-10. The only downside is that the RAID controller has to be able to do RAID-6 at decent speeds.
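The arithmetic behind that claim, as a rough sketch (equal-size 240GB drives assumed purely for illustration; `raid6`/`raid10` are hypothetical helpers, not real tooling):

```python
# Back-of-the-envelope comparison of the two layouts in this thread:
# 5 drives in RAID-6 vs 6 drives in RAID-10. Assumes equal-size drives
# and ignores controller overhead; these numbers are illustrative only.

def raid6(drives, size_gb):
    # RAID-6 keeps two parity blocks per stripe, so usable capacity is
    # (n - 2) drives, and ANY two simultaneous failures are survivable.
    return {"usable_gb": (drives - 2) * size_gb, "guaranteed_failures": 2}

def raid10(drives, size_gb):
    # RAID-10 mirrors pairs: usable capacity is n/2 drives. One failure
    # is always survivable; a second is fatal only if it hits the
    # surviving half of the same mirror pair.
    return {"usable_gb": drives // 2 * size_gb, "guaranteed_failures": 1}

print(raid6(5, 240))   # {'usable_gb': 720, 'guaranteed_failures': 2}
print(raid10(6, 240))  # {'usable_gb': 720, 'guaranteed_failures': 1}
```

Same usable space from one fewer drive, with two failures guaranteed survivable instead of one.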
 

DomainBop

Dormant VPSB Pathogen

alexh

New Member
Corsair Force GS have SandForce controllers.  I've been using them in one server and haven't had any problems or seen any performance degradation (the drives themselves have been in continuous use in a production environment for about 1 1/2 years). 

NewEgg is selling manufacturer refurbished 240GB GS's on eBay for $85.  http://www.ebay.com/itm/Manufacturer-Recertified-Corsair-Force-Series-GS-CSSD-F240GBGS-RF2-2-5-240GB-SA-/380958096901?pt=US_Internal_Hard_Disk_Drives&hash=item58b2df3e05
I've heard bad things about SF controllers in the past. I'm using an Agility 3 right now and can't complain, but I know a lot of drives were DOA due to initial firmware issues. Samsung has been picking up speed though, with many people using the 840 Evo and Pro. I'd recommend the 840 or 850 Pro for their MLC NAND. The new 850 Pros are rated at 150TB write endurance and consistently benchmark at >500MB/s read/write.

Edit: 840/850 Pro have their own garbage collection. They've been tested thoroughly in RAID configurations and perform well. I see them as a cheaper alternative to Intel's DC series, or even a better alternative with the new 850s out. 
 
Last edited by a moderator:

KuJoe

Well-Known Member
Verified Provider
Corsair Force GS have SandForce controllers.  I've been using them in one server and haven't had any problems or seen any performance degradation (the drives themselves have been in continuous use in a production environment for about 1 1/2 years). 

NewEgg is selling manufacturer refurbished 240GB GS's on eBay for $85.  http://www.ebay.com/itm/Manufacturer-Recertified-Corsair-Force-Series-GS-CSSD-F240GBGS-RF2-2-5-240GB-SA-/380958096901?pt=US_Internal_Hard_Disk_Drives&hash=item58b2df3e05
Those were the ones I was looking at before I opened this thread. :)

I've heard bad things about SF controllers in the past. I'm using an Agility 3 right now and can't complain, but I know a lot of drives were DOA due to initial firmware issues. Samsung has been picking up speed though, with many people using the 840 Evo and Pro. I'd recommend the 840 or 850 Pro for their MLC NAND. The new 850 Pros are rated at 150TB write endurance and consistently benchmark at >500MB/s read/write.

Edit: 840/850 Pro have their own garbage collection. They've been tested thoroughly in RAID configurations and perform well. I see them as a cheaper alternative to Intel's DC series, or even a better alternative with the new 850s out. 
Samsung 840 Pros are way out of my price range. Even their 128GB SSDs are too expensive for my budget.

For a home server I'd definitely go for five used Intel S3500s in RAID-6. This will actually give you better fault tolerance than RAID-10. The only downside is that the RAID controller has to be able to do RAID-6 at decent speeds.
I will only do RAID10, so RAID5 and 6 are not an option for me with SSDs (everything I've read shows that parity-based RAID is not good for SSDs). The RAID card I'm using is a Dell H700.

The price is right for me. I'll research these some more. :)
 
Last edited by a moderator:

pcan

New Member
I will only do RAID10, so RAID5 and 6 are not an option for me with SSDs (everything I've read shows that parity-based RAID is not good for SSDs).
The "RAID5 is bad for SSDs" claim is a myth. RAID5 may be better than RAID10, depending on the data size and other considerations. See, for example, the third chapter of this paper: http://cesg.tamu.edu/wp-content/uploads/2012/02/hotstorage13.pdf

RAID5 is worse than RAID10 for small writes; for bigger writes, RAID5 imposes less wear on the array than RAID10.
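To put rough numbers on that, here is a sketch of the usual write-amplification arithmetic (idealized textbook counts, not measurements from the linked paper):

```python
# Idealized physical-write counts per logical write (wear, not latency).

def raid10_writes(blocks):
    # Mirroring: every logical block is written twice.
    return 2 * blocks

def raid5_small_write():
    # Read-modify-write of a single block: the controller reads old data
    # and old parity (2 reads, no wear), then writes new data and new
    # parity (2 writes). Same wear as RAID10, but 4 I/Os instead of 2,
    # which is why small random writes perform worse on RAID5.
    return 2

def raid5_full_stripe(n, blocks):
    # A full-stripe write on an n-disk RAID5 writes n-1 data blocks plus
    # 1 parity block: amplification is n / (n - 1), e.g. 1.25x for n=5.
    stripes = blocks / (n - 1)
    return stripes * n

print(raid5_full_stripe(5, 4))  # 5.0 physical writes for 4 logical blocks
print(raid10_writes(4))         # 8 physical writes for the same 4 blocks
```

So for large sequential writes, a 5-disk RAID5 wears the flash about 1.25x, versus RAID10's constant 2x.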

On my production systems, after analysis and some tests, I run the SSD array in RAID5.

By the way, I would not be overly concerned about SSD wear. I see the main "failure mechanism" of modern SSDs as: "let's buy new ones at 8x the size, 2x the performance, and 50% the price of the old ones".
 

KuJoe

Well-Known Member
Verified Provider
@pcan I'm sure most of my writes will be small since it's just for personal RDP usage with a private game server or 2 on it. One of the VMs will have drives replicated in real time to another server so lots of small writes there.


I haven't ruled out the idea of used drives either since it's just for personal usage.
 
Parity calculations are brutal for SSD write exhaustion: the larger the drives, the more parity is written to the parity disk(s). A large number of SAS/SATA RAID controllers do not send TRIM commands because firmware bugs cause stalls, and even the ones that do pass TRIM through to the physical drives use vendor-specific 'hacks' to make it work (scary).

Best bet is to use normal HDDs with a few SSDs for metadata caching on the actual controller itself.
 

KuJoe

Well-Known Member
Verified Provider
Parity calculations are brutal for SSD write exhaustion: the larger the drives, the more parity is written to the parity disk(s). A large number of SAS/SATA RAID controllers do not send TRIM commands because firmware bugs cause stalls, and even the ones that do pass TRIM through to the physical drives use vendor-specific 'hacks' to make it work (scary).

Best bet is to use normal HDDs with a few SSDs for metadata caching on the actual controller itself.
Can you elaborate some more on the metadata caching part?
 

KuJoe

Well-Known Member
Verified Provider
Also, should I even consider those hybrid drives or just go with 10k SAS drives if I don't do SSDs?
 
Decent RAID cards can use an SSD as a cache in front of normal spinning SATA drives. It requires a license on the card, and frequently accessed data gets promoted to the SSD automatically. Please see:

http://www.lsi.com/products/raid-controllers/pages/default.aspx#tab/product-family-tab-3

There are OS-level options too, such as 'flashcache' (written by Facebook and pretty good).

Stay away from software RAID or generic onboard RAID (which is still software RAID) and use an actual RAID controller that sits on your PCIe bus.
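A toy sketch of the idea behind CacheCade/flashcache, purely to illustrate the concept of hot blocks being promoted to a small fast device (real implementations handle write-back, dirty tracking, persistence, and much more; all names here are made up):

```python
from collections import OrderedDict

# Toy model of an SSD read cache in front of slow HDDs: hot blocks are
# promoted to the small, fast SSD; cold blocks are evicted LRU-style.
# Illustrative only -- not how flashcache or CacheCade is actually built.

class SSDCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, LRU order
        self.hits = self.misses = 0

    def read(self, block_id, hdd):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as recently used
            return self.cache[block_id]
        self.misses += 1
        data = hdd[block_id]                   # slow path: HDD read
        self.cache[block_id] = data            # promote block to SSD
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

hdd = {i: f"block-{i}" for i in range(1000)}   # pretend backing disks
cache = SSDCache(capacity_blocks=8)
for _ in range(10):                            # hot working set of 4 blocks
    for b in range(4):
        cache.read(b, hdd)
print(cache.hits, cache.misses)  # 36 4
```

Once the working set fits in the cache, nearly every read is served from the fast tier; that's the whole pitch of controller- or OS-level SSD caching.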
 
Some PERC controllers are LSI- or Adaptec-based, though I'm unsure about your specific one. Can you check whether it supports anything like I mentioned?
 

alexh

New Member
Also, should I even consider those hybrid drives or just go with 10k SAS drives if I don't do SSDs?
I'd avoid hybrid drives, since your controller is a rebranded LSI model. It seems to support CacheCade. Instead of 10K SAS, I'd look for some WD Black drives on sale. They do well in RAID, benchmark well, and have been very reliable. If you go the CacheCade route, you can also selectively place things onto your SSD, while leaving IO-intensive/write-heavy apps on the hard disk.
 

KuJoe

Well-Known Member
Verified Provider
I think I'll just go with the 10k SAS drives since the SSDs sound like they are going to be a complete gamble on whether or not they will work a year or two down the road without TRIM.
 