SandForce TRIM Issue & Corsair Force Series GS (240GB) Review
by Kristian Vättö on November 22, 2012 1:00 PM EST

But How About Incompressible Data and TRIM?
I mentioned earlier that TRIM has never functioned properly in SandForce SSDs when the drive is tortured with incompressible data. Incompressible data has never been a strength of SandForce: it is of course written just like compressible data, but when the whole design is built around the assumption that data will be compressed on the fly, there are trade-offs in performance with data that can't be. SandForce has said that its third-generation controllers should bring vast improvements to incompressible performance, but we have no concrete numbers as of yet.
To test how TRIM behaves with incompressible data, I filled the Force GS with incompressible data and then tortured it with incompressible 4KB random writes (100% LBA space, QD=32) for 60 minutes:
| Corsair Force GS Resiliency (AS SSD, 6Gbps) | Read Speed (5.0.2) | Read Speed (5.0.3) | Write Speed (5.0.2) | Write Speed (5.0.3) |
|---|---|---|---|---|
| Clean | 494.1MB/s | 507.6MB/s | 270.5MB/s | 266.8MB/s |
| After Torture | 372.3MB/s | 501.1MB/s | 74.9MB/s | 156.2MB/s |
| After TRIM | 479.8MB/s | 506.0MB/s | 220.2MB/s | 150.3MB/s |
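For readers who want to run a similar torture pass themselves, here is a rough Python wrapper around fio on Linux. This is only a sketch of the methodology described above, not the exact tooling behind the numbers in the table, and the device path is hypothetical (the run is destructive to whatever is on that drive):

```python
import subprocess

DEVICE = "/dev/sdX"   # hypothetical target drive; running this erases its contents

def fio(*extra):
    # All flags below are standard fio options; fio must be installed.
    base = ["fio", "--filename=" + DEVICE, "--direct=1",
            "--ioengine=libaio", "--refill_buffers", "--randrepeat=0"]
    subprocess.run(base + list(extra), check=True)

# 1) Fill the drive sequentially with pseudo-random (effectively incompressible) data
fio("--name=fill", "--rw=write", "--bs=128k", "--iodepth=1")

# 2) Torture: 4KB random writes across 100% of the LBA space at QD=32 for 60 minutes
fio("--name=torture", "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--norandommap", "--time_based", "--runtime=3600")
```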
With firmware 5.0.2, both read and write speed degrade when the drive is tortured. Read speed doesn't degrade as much as write speed, but there is still a clear drop in performance. Fortunately, TRIM mostly restores read performance, so there doesn't seem to be a problem similar to what we saw with compressible data. Write performance, on the other hand, is restored but not fully: after TRIM it's about 81% of clean-state performance, which isn't bad but not ideal either.
Firmware 5.0.3 brings some changes to how incompressible data is dealt with. TRIM still doesn't work properly, but as I've said before, I believe that's down to the way the controller and firmware were designed, meaning there isn't really a way to fix it. The good news is that write speed doesn't degrade nearly as much after torture as it did with firmware 5.0.2, and read speed stays on par with clean-state performance. On the other hand, TRIM doesn't restore performance at all; in fact, it degrades write speed slightly, though the difference is small enough not to raise any real concern. We saw similar behavior with HD Tach.
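For reference, the "about 81%" figure comes straight from the table; a few lines of Python reproduce the after-TRIM retention numbers for both firmwares:

```python
# AS SSD results from the table above (MB/s)
clean   = {"read": {"5.0.2": 494.1, "5.0.3": 507.6},
           "write": {"5.0.2": 270.5, "5.0.3": 266.8}}
trimmed = {"read": {"5.0.2": 479.8, "5.0.3": 506.0},
           "write": {"5.0.2": 220.2, "5.0.3": 150.3}}

for op in ("read", "write"):
    for fw in ("5.0.2", "5.0.3"):
        pct = 100 * trimmed[op][fw] / clean[op][fw]
        print(f"{fw} {op}: {pct:.0f}% of clean-state speed after TRIM")

# 5.0.2 write: ~81% of clean, i.e. "restored but not fully"
# 5.0.3 write: ~56% of clean, i.e. TRIM does not restore incompressible write speed
```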
Conclusion
SSD performance is all about trade-offs. As you improve one area, you generally weaken another. For example, you can opt for a large logical block size and get super fast sequential write speeds. The flip side is that random write speed will be horrible. Another good example is SandForce. They have chosen to concentrate on performance with compressible data, which has resulted in a trade-off with incompressible data performance.
Since it's generally impossible to have everything in one package, creating a good firmware and SSD is all about finding the right balance. SandForce's approach in firmware 5.0.3 is a step in the right direction, but it's far from perfect. TRIM now restores read speed after torture, but in exchange write speed takes a hit. I'm more satisfied with this behavior because the degradation in write speed is smaller, and it seems that sequential writes and idle time will help restore performance back to its clean state. With firmware 5.0.2, read speed degraded for good; TRIMing the drive again and running HD Tach several times didn't show any improvement.
What I'm more worried about is the TRIM behavior with incompressible data. With 5.0.2, TRIM at least worked somewhat, as performance was better after TRIM than after torture. Sure, with 5.0.3 write speed doesn't drop as low after torture as it did with 5.0.2, but since most SSDs are used in TRIM-supported environments, I would rather take the worse worst-case performance and a partially working TRIM.
Hopefully SandForce will be able to find the right balance in a future firmware: working TRIM regardless of the nature of the data. 5.0.3 is a good start, but I feel it concentrates too much on fixing one problem and as a result creates a bunch of new ones.
56 Comments
FunnyTrace - Wednesday, November 28, 2012 - link
Yes, I did read an article on Tweaktown about this in August 2012.

JellyRoll - Friday, November 23, 2012 - link
OMG...SandForce does not do dedupe (deduplication). It does not "has to check if the data is used by something else"!! The drive is unaware of the actual file usage above the device level. That is a host-level responsibility.
I cannot believe that this article was not vetted before it was posted.
Kristian Vättö - Friday, November 23, 2012 - link
SandForce does deduplication at the device level. It doesn't look for actual files like the host does, because it's all ones and zeros to the controller. However, what it does is look for similar data patterns.

For example, if you have two very similar photos which are 5MB each, the controller may not write 10MB. Instead, it will only write, let's say, 8MB to the NAND because some of the data is duplicate, and the whole idea of deduplication is to minimize NAND writes.
If you go and delete one of these photos, the OS sends a TRIM command telling the drive that the LBA is no longer in use and its data can be erased. What makes SandForce more complicated is that the two photos don't necessarily map to their own separate copies of the data, so the controller needs to check that the data behind the LBA it's about to erase is not also referenced by another LBA. Otherwise you might end up erasing a portion of the other photo as well.
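To illustrate the kind of bookkeeping that implies, here is a toy Python sketch of a reference-counted mapping table. This is purely illustrative; SandForce has never published the details of its actual mapping scheme, so treat the structure and names here as assumptions:

```python
import hashlib

class ToyDedupFTL:
    """Toy block-mapping table with content deduplication (illustration only)."""

    def __init__(self):
        self.lba_to_hash = {}   # logical block -> content fingerprint
        self.refcount = {}      # fingerprint -> number of LBAs referencing it
        self.store = {}         # fingerprint -> physical block contents

    def write(self, lba, data):
        self.trim(lba)                          # drop any previous mapping first
        fp = hashlib.sha256(data).hexdigest()
        if fp in self.store:
            self.refcount[fp] += 1              # duplicate: reuse the existing block
        else:
            self.store[fp] = data               # unique: write a new physical block
            self.refcount[fp] = 1
        self.lba_to_hash[lba] = fp

    def trim(self, lba):
        """On TRIM, the physical block may only be erased once no other LBA
        still points at it -- the check described in the comment above."""
        fp = self.lba_to_hash.pop(lba, None)
        if fp is None:
            return
        self.refcount[fp] -= 1
        if self.refcount[fp] == 0:
            del self.store[fp]
            del self.refcount[fp]

ftl = ToyDedupFTL()
ftl.write(0, b"photo-data")
ftl.write(1, b"photo-data")   # deduplicated: only one physical copy is kept
ftl.trim(0)                   # the physical copy survives, LBA 1 still needs it
assert ftl.store              # data is still present
```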
JellyRoll - Friday, November 23, 2012 - link
I challenge you to offer one document that supports your assertion that SandForce does deduplication. There isn't any, as it does not. Feel free to link to the technical document that supports your claims in your reply.

SandForce supports compression, not deduplication.
Here is a link to documentation and product data sheets.
http://www.lsi.com/products/storagecomponents/Page...
Kristian Vättö - Friday, November 23, 2012 - link
SandForce/LSI has published very little about the technology behind DuraWrite and how it works, but a combination of technologies including compression and deduplication is what they have told us.

http://www.anandtech.com/show/2899/3
JellyRoll - Friday, November 23, 2012 - link
Linking an Anandtech article is not proof that SandForce does deduplication. A quick Google will reveal that there is no other source, outside of Anandtech, that claims that they offer deduplication on the current series of processors.

As a matter of fact, that article is the only other reference to deduplication and SandForce that can be found.
There was a mistake made in that article.
JellyRoll - Friday, November 23, 2012 - link
As a matter of fact, if deduplication were to apply to the SandForce series of processors, then incompressible data would also see decreases in write amplification. SandForce is very public that they have "to follow the same rules" with incompressible data as everyone else, i.e. they suffer the same amount of write amplification.

Since SandForce controllers only exhibit performance-enhancing and endurance-increasing benefits from compressible data, that alone indicates that deduplication is not in use.
Deduplication can be applied regardless of the compressibility of the data.
extide - Saturday, November 24, 2012 - link
Incompressible data generally doesn't have duplications in it... that's kinda what makes it incompressible... I mean the whole POINT of compression is removing duplications!

JellyRoll - Saturday, November 24, 2012 - link
If you have two matching sets of data, be they incompressible or not, they would be subject to deduplication. It would merely require mapping to the same LBA addresses.

For instance, if you have two files that consist of largely incompressible data but are still carbon copies of each other, they are still subject to data deduplication.
extide - Wednesday, November 28, 2012 - link
That could also be considered compression. Take 2 copies of the same MP3 file and put them into a zip file, how big is the zip file? Pretty close to the size of one copy...
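A rough way to see that overlap, sketched in Python with the standard zlib module. One caveat (an assumption worth stating): DEFLATE only finds repeats within its 32KB window, so the two copies have to sit close together, whereas block-level deduplication has no such distance limit.

```python
import os
import zlib

blob = os.urandom(16 * 1024)              # random data: incompressible on its own
single = len(zlib.compress(blob, 9))
double = len(zlib.compress(blob + blob, 9))

print(f"one copy compressed:  {single} bytes")
print(f"two copies compressed: {double} bytes")
# The doubled input compresses to roughly the size of a single copy, because the
# second copy is encoded as back-references into the first. Repeats farther apart
# than DEFLATE's 32KB window would not shrink like this; dedupe would still catch them.
```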