# SSD guru needed. Looking for an SSD with good Garbage Collection.



## KuJoe (Sep 11, 2014)

Basically I need to buy 6 SSDs for my personal server and I'll be running them in RAID10 with a hardware RAID controller on an OS that does not support TRIM (and even if it did, I've read that the hardware RAID would negate it anyway).

I've read that SSDs with SandForce controllers have good Background Garbage Collection, but I'm looking for some suggestions based on others' experience.

My goal is to get 6 180-256GB SSDs for less than $100 each, so let me know what you would recommend, taking the lack of TRIM into account. Thanks!


----------



## rmlhhd (Sep 11, 2014)

These should do the job nicely.

http://www.ebay.com/itm/Samsung-256GB-SATA-6Gb-s-Notebook-Desktop-2-5-SSD-Dell-0T5YVC-MZ7PC256HAFU-/271595346503?pt=US_Solid_State_Drives&hash=item3f3c581a47


----------



## dcdan (Sep 11, 2014)

For a home server I'd definitely go for 5 used S3500s in RAID-6. This will actually give you better fault tolerance than RAID-10. The only downside is that the RAID controller has to be able to do RAID-6 at decent speeds.
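For comparison, here is a quick sketch of usable capacity versus guaranteed fault tolerance for the two layouts (assuming same-size 256GB drives in both; `raid10` and `raid6` are illustrative helpers, not a real tool):

```python
def raid10(n_drives, size_gb):
    # mirrored pairs: capacity is halved; guaranteed to survive any 1 failure
    return n_drives * size_gb // 2, 1

def raid6(n_drives, size_gb):
    # two drives' worth of parity; survives any 2 simultaneous failures
    return (n_drives - 2) * size_gb, 2

print(raid10(6, 256))  # (768, 1)
print(raid6(5, 256))   # (768, 2)
```

Same usable space with one fewer drive, and any two drives can fail, which is the fault-tolerance argument in a nutshell.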


----------



## DomainBop (Sep 11, 2014)

Corsair Force GS have SandForce controllers.  I've been using them in one server and haven't had any problems or seen any performance degradation (the drives themselves have been in continuous use in a production environment for about 1 1/2 years). 

NewEgg is selling manufacturer refurbished 240GB GS's on eBay for $85.  http://www.ebay.com/itm/Manufacturer-Recertified-Corsair-Force-Series-GS-CSSD-F240GBGS-RF2-2-5-240GB-SA-/380958096901?pt=US_Internal_Hard_Disk_Drives&hash=item58b2df3e05


----------



## alexh (Sep 11, 2014)

DomainBop said:


> Corsair Force GS have SandForce controllers.  I've been using them in one server and haven't had any problems or seen any performance degradation (the drives themselves have been in continuous use in a production environment for about 1 1/2 years).
> 
> NewEgg is selling manufacturer refurbished 240GB GS's on eBay for $85.  http://www.ebay.com/itm/Manufacturer-Recertified-Corsair-Force-Series-GS-CSSD-F240GBGS-RF2-2-5-240GB-SA-/380958096901?pt=US_Internal_Hard_Disk_Drives&hash=item58b2df3e05


I've heard bad things about SF controllers in the past. I'm using an Agility 3 right now and can't complain, but I know a lot of drives were DOA due to initial firmware issues. Samsung has been picking up speed though, with many people using the 840 Evo and Pro. I'd recommend the 840 or 850 Pro due to their MLC NAND. The new 850 Pro is rated at 150TB of write endurance and benchmarks at >500MB/s read/write consistently.

Edit: 840/850 Pro have their own garbage collection. They've been tested thoroughly in RAID configurations and perform well. I see them as a cheaper alternative to Intel's DC series, or even a better alternative with the new 850s out.
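A quick back-of-the-envelope check on that 150TB endurance rating (the 20GB/day write volume below is an assumed workload for a personal server, not a measurement):

```python
# Rough lifetime estimate from a rated write-endurance figure.
# 150TB is the 850 Pro rating mentioned above; 20GB/day is assumed.
RATED_ENDURANCE_TB = 150
DAILY_WRITES_GB = 20

days = RATED_ENDURANCE_TB * 1000 / DAILY_WRITES_GB
years = days / 365
print(f"~{days:.0f} days (~{years:.1f} years) to exhaust the rating")
```

Even with RAID10's 2x write amplification, the rating would take roughly a decade to burn through at that pace.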


----------



## KuJoe (Sep 11, 2014)

DomainBop said:


> Corsair Force GS have SandForce controllers.  I've been using them in one server and haven't had any problems or seen any performance degradation (the drives themselves have been in continuous use in a production environment for about 1 1/2 years).
> 
> NewEgg is selling manufacturer refurbished 240GB GS's on eBay for $85.  http://www.ebay.com/itm/Manufacturer-Recertified-Corsair-Force-Series-GS-CSSD-F240GBGS-RF2-2-5-240GB-SA-/380958096901?pt=US_Internal_Hard_Disk_Drives&hash=item58b2df3e05


Those were the ones I was looking at before I opened this thread. 



alexh said:


> I've heard bad things about SF controllers in the past. I'm using an Agility 3 right now and can't complain, but I know a lot of drives were DOA due to initial firmware issues. Samsung has been picking up speed though, with many people using the 840 Evo and Pro. I'd recommend the 840 or 850 Pro due to their MLC NAND. The new 850 Pro is rated at 150TB of write endurance and benchmarks at >500MB/s read/write consistently.
> 
> Edit: 840/850 Pro have their own garbage collection. They've been tested thoroughly in RAID configurations and perform well. I see them as a cheaper alternative to Intel's DC series, or even a better alternative with the new 850s out.


Samsung 840 Pros are way out of my price range. Even their 128GB SSDs are too expensive for my budget.



dcdan said:


> For a home server I'd definitely go for 5 used S3500s in RAID-6. This will actually give you better fault tolerance than RAID-10. The only downside is that the RAID controller has to be able to do RAID-6 at decent speeds.


I will only do RAID10, so RAID5 and 6 are not an option for me with SSDs (everything I've read shows that parity-based RAID is not good for SSDs). The RAID card I'm using is a Dell H700.



rmlhhd said:


> These should do the job nicely.
> 
> http://www.ebay.com/itm/Samsung-256GB-SATA-6Gb-s-Notebook-Desktop-2-5-SSD-Dell-0T5YVC-MZ7PC256HAFU-/271595346503?pt=US_Solid_State_Drives&hash=item3f3c581a47


The price is right for me. I'll research these some more.


----------



## Serveo (Sep 11, 2014)

How about the Crucial MX100? We use them in HW RAID-1.

http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review


----------



## dcdan (Sep 11, 2014)

Serveo said:


> How about the Crucial MX100? We use them in HW RAID-1.
> 
> http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review


Without TRIM, expect 15-20 MB/s write speed per drive (as well as significantly degraded read speeds, down to 20-30 MB/s per drive). All Crucial M500/M550/MX100 and Samsung Evos are like that.


----------



## pcan (Sep 11, 2014)

KuJoe said:


> I will only do RAID10, so RAID5 and 6 are not an option for me with SSDs (everything I've read shows that parity-based RAID is not good for SSDs).


The "RAID5 is bad for SSDs" claim is a myth. RAID5 may be better than RAID10, depending on the data size and other considerations. See, for example, the third chapter of this paper: http://cesg.tamu.edu/wp-content/uploads/2012/02/hotstorage13.pdf

RAID5 is worse than RAID10 for small writes; for bigger writes, RAID5 imposes less wear on the array than RAID10.

On my production systems, after analysis and some tests, I run the SSD array in RAID5.

By the way, I would not be overly concerned about SSD wear. I see the main "failure mechanism" of modern SSDs as: "let's buy new ones at 8x the size, 2x the performance, and 50% of the price of the old ones".
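The small-write versus big-write trade-off above can be sketched with a toy write-amplification model. This is a simplification that ignores controller caching and partial-stripe alignment, and it counts physical writes only (NAND wear), not total I/Os:

```python
# Toy model: physical writes per logical write (what wears the NAND).
# Simplified: no controller caching, no partial-stripe effects.

def write_amp_raid10():
    return 2.0  # every logical write hits both halves of a mirror

def write_amp_raid5_small():
    # read-modify-write: 2 extra reads, but only data + parity are written
    return 2.0

def write_amp_raid5_full_stripe(n_disks):
    # one parity block written per (n_disks - 1) data blocks
    return n_disks / (n_disks - 1)

print(write_amp_raid5_full_stripe(6))  # 1.2, vs RAID10's constant 2.0
```

So for large sequential writes a 6-disk RAID5 writes 1.2x the logical data where RAID10 writes 2x; for small writes the wear is comparable, but RAID5 pays two extra reads per write, which is where the latency penalty comes from.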


----------



## KuJoe (Sep 11, 2014)

@pcan I'm sure most of my writes will be small since it's just for personal RDP usage with a private game server or two on it. One of the VMs will have its drives replicated in real time to another server, so lots of small writes there.


I haven't ruled out the idea of used drives either since it's just for personal usage.


----------



## Deleted (Sep 11, 2014)

Parity calculations are brutal for SSD write exhaustion: the larger the drives, the more parity is written to the parity disk(s). A large number of SAS/SATA RAID controllers do not send TRIM commands because firmware bugs can cause stalls, and the ones that do support TRIM to the physical drives use vendor-specific 'hacks' to get it to work (scary).

Best bet is to use normal HDD's and a few SSDs for metadata caching on the actual controller itself.


----------



## KuJoe (Sep 11, 2014)

Monkburger said:


> Parity calculations are brutal for SSD write exhaustion: the larger the drives, the more parity is written to the parity disk(s). A large number of SAS/SATA RAID controllers do not send TRIM commands because firmware bugs can cause stalls, and the ones that do support TRIM to the physical drives use vendor-specific 'hacks' to get it to work (scary).
> 
> 
> Best bet is to use normal HDD's and a few SSDs for metadata caching on the actual controller itself.


Can you elaborate some more on the metadata caching part?


----------



## KuJoe (Sep 11, 2014)

Also, should I even consider those hybrid drives or just go with 10k SAS drives if I don't do SSDs?


----------



## Deleted (Sep 11, 2014)

Decent RAID cards can use an SSD as a cache in front of normal perpendicular SATA drives. It requires a license on the card. Hot data gets cached depending on access patterns. Please see:

http://www.lsi.com/products/raid-controllers/pages/default.aspx#tab/product-family-tab-3

There are OS-level options too, such as 'flashcache' (written by Facebook and pretty good).

Stay away from software-based RAID or generic onboard RAID (which is still software RAID) and use an actual RAID controller that sits on your PCIe bus.


----------



## KuJoe (Sep 11, 2014)

My RAID controller is a Dell H700.


----------



## Deleted (Sep 11, 2014)

Some PERC controllers used to be LSI- or Adaptec-based; I'm unsure about your specific controller. Can you check whether it supports anything like what I mentioned?


----------



## KuJoe (Sep 11, 2014)

I know the 1GB cache versions support CacheCade, I think it's called.


----------



## alexh (Sep 12, 2014)

KuJoe said:


> Also, should I even consider those hybrid drives or just go with 10k SAS drives if I don't do SSDs?


I'd avoid hybrid drives, since your controller is a rebranded LSI model. It seems to support CacheCade. Instead of 10K SAS, I'd look for some WD Black drives on sale. They do well in RAID, benchmark well, and have been very reliable. If you go the CacheCade route, you can also selectively place things onto your SSD, while leaving IO-intensive/write-heavy apps on the hard disk.


----------



## KuJoe (Sep 12, 2014)

I think I'll just go with the 10k SAS drives since the SSDs sound like they are going to be a complete gamble on whether or not they will work a year or two down the road without TRIM.


----------



## Jonathan (Sep 12, 2014)

KuJoe said:


> My RAID controller is a Dell H700.





Monkburger said:


> Some PERC controllers used to be LSI or Adaptec based, unsure if your specific controller. Can you see if it supports anything like I mentioned?


The H700 is built on LSI's 2108 chip IIRC.


----------



## Serveo (Sep 13, 2014)

dcdan said:


> Without trim, expect 15-20 MB/s write speed per drive (as well as significantly degraded read speeds, down to 20-30 MB/s per drive). All Crucial M500/M550/MX100 and Samsung Evos are like that.


It has TRIM


----------



## KuJoe (Sep 13, 2014)

Serveo said:


> It has TRIM


Isn't TRIM an OS/kernel function?


----------



## Serveo (Sep 13, 2014)

KuJoe said:


> Isn't TRIM an OS/kernel function?


True, let me rephrase that: it has its own garbage collection function called "Active Garbage Collection". More on it here: http://forums.crucial.com/t5/tkb/articleprintpage/tkb-id/[email protected]/article-id/48


----------



## KuJoe (Sep 13, 2014)

Looks like Crucial just jumped to the top of my list. 

EDIT: Dang, out of my price range.


----------



## Serveo (Sep 13, 2014)

256GB for €89. That's at the price level of a WD Black spindle. How much do you want to pay? (-;


----------



## KuJoe (Sep 13, 2014)

Under $100 USD each. This is for a personal server that I only plan to use for testing, development, a Windows jump box, and a few private game servers for me and 6 close friends so the less I can spend on it the better.


----------



## Coastercraze (Sep 13, 2014)

Maybe have a look at Mushkin?


----------



## KuJoe (Sep 13, 2014)

Coastercraze said:


> Maybe have a look at Mushkin?


How is their BGC? I didn't see anything that stood out in regard to it.


----------



## Deleted (Sep 14, 2014)

alexh said:


> I'd avoid hybrid drives, since your controller is a rebranded LSI model. It seems to support CacheCade. Instead of 10K SAS, I'd look for some WD Black drives on sale. They do well in RAID, benchmark well, and have been very reliable. If you go the CacheCade route, you can also selectively place things onto your SSD, while leaving IO-intensive/write-heavy apps on the hard disk.


I would avoid using SATA drives in a RAID environment, for these reasons:

- If I remember correctly, SATA drives only have one outstanding write transaction at a time, while SAS drives have as many as your Tagged Command Queue depth allows (usually 128 or 256 depending on the drive). NCQ is not the same thing as TCQ, and TCQ is still superior for perpendicular drives.

- SATA drives can't write and read at the same time on the same transaction.

- SAS drives have 2 dedicated ASICs on the drive I/O card: one to handle I/O and Tagged Command Queuing/reordering, and another that deals with head tracking (which is important under load).

- SATA drives are affected by I/O vibration degradation, i.e. more drives means more vibration, which can kill your throughput (see this excellent Sun article that describes this behavior: http://web.archive.org/web/20090831133200/http://blogs.sun.com/brendan/entry/unusual_disk_latency; look at the comments as well). SAS drives have a dedicated processor that mitigates this better than plain SATA drives.
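The queue-depth point can be illustrated with a toy model: a deeper tagged-command queue lets the drive reorder requests and shorten its average seek, which raises effective IOPS. The service time and reordering factor below are round-number assumptions for illustration, not measurements of any real drive:

```python
# Toy model: effective IOPS of a spinning disk, where queue reordering
# shrinks the average per-I/O service time. All numbers are assumptions.

def effective_iops(service_ms=8.0, reorder_factor=1.0):
    # reorder_factor < 1.0 models shorter average seeks once the drive
    # can reorder a deep queue of tagged commands
    return 1000.0 / (service_ms * reorder_factor)

shallow = effective_iops()                 # queue depth 1: ~125 IOPS
deep = effective_iops(reorder_factor=0.5)  # assumed 2x seek cut: ~250 IOPS
print(shallow, deep)
```

The exact gain depends on the workload and the drive's firmware, but the direction is the point: one-command-at-a-time drives leave reordering gains on the table.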


----------

