SandForce TRIM Issue & Corsair Force Series GS (240GB) Review
by Kristian Vättö on November 22, 2012 1:00 PM EST

Firmware 5.0.3 to the Rescue

SandForce was already well aware of the TRIM issue, which allowed it to work on a fixed firmware before the problem gained much visibility. The new firmware carries version number 5.0.3, although manufacturers may rename the update to fit their own firmware naming schemes. Availability of the update depends entirely on the manufacturer, as each has its own validation process, but so far I've seen at least Corsair, Kingston, and SanDisk offering the updated firmware for their drives. Again, I would like to point out that not all SF-2281 based SSDs are affected; there are plenty still using the older 3.x.x firmware with working TRIM.
To test whether TRIM finally works, I'm using the same methods as on the previous page. Here's what performance looks like after 20 minutes of torture:
There are no significant differences from the 5.0.2 firmware. Read speed still degrades, but as I said, this is most likely how the controller and firmware were designed, meaning there isn't really a way to fix it.
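For readers who want to run a similar read sweep themselves, here's a rough sketch of what an HD Tach-style pass does: read a fixed-size chunk at evenly spaced offsets across the whole drive and log the speed at each point. This is only an approximation under assumed conditions (a Linux block device path like /dev/sdX and root privileges; HD Tach itself is a Windows tool and works differently under the hood), not the tool used for the graphs in this review.

```python
import os, time

DEVICE = "/dev/sdX"        # hypothetical block device; point this at the SSD under test
SAMPLES = 256              # number of measurement points across the LBA range
CHUNK = 1 * 1024 * 1024    # 1 MiB read at each sample point

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)       # total device size in bytes

for i in range(SAMPLES):
    offset = (size // SAMPLES) * i
    offset -= offset % 4096               # keep reads sector-aligned
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.perf_counter()
    os.read(fd, CHUNK)
    elapsed = time.perf_counter() - start
    # note: a rigorous benchmark would bypass the page cache (O_DIRECT);
    # these one-off reads are a quick approximation only
    print(f"{offset / 1e9:8.2f} GB  {CHUNK / elapsed / 1e6:7.1f} MB/s")

os.close(fd)
```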
Next I TRIMed the drive:
Read speed is mostly, though not fully, restored, and TRIM is definitely working better than it did in 5.0.2 and earlier. It's actually normal for performance after TRIM to be a few percent shy of clean performance, so the behavior we're seeing here is fairly common. However, we now have a new quirk: write speed degradation. As you can see in the first graph, write speed after torture was 398MB/s. After TRIMing the drive, the average write speed drops to ~382MB/s. Generally the write speed is around 400MB/s, but there are at least two dozen dips where performance drops to as low as ~175MB/s.
I TRIMed the drive again to see if there would be any improvement:
And there is a ~9MB/s improvement in average write speed. Write speed still drops below 200MB/s on several occasions, but overall the number of dips is much smaller. With more sequential writes and idle time, write speed should recover to close to clean-state performance.
I also ran ATTO to see if it would replicate our HD Tach results:
Read speed is restored after TRIM, which is what our HD Tach tests showed as well.
When tested with ATTO, write speed doesn't actually degrade aside from at the 32KB transfer size, though similar behavior occurs with the 5.0.2 firmware. The graph above can be a bit hard to read because the lines cross each other, so I double-checked the results by looking at the raw numbers reported by ATTO, and there were no major differences. Again, keep in mind that ATTO doesn't write anywhere near as much data as HD Tach does. Aside from the dips, performance with HD Tach was similar to clean state, so it's possible that ATTO simply doesn't write enough data to show the dips.
Comments
FunnyTrace - Wednesday, November 28, 2012 - link
Yes, I did read an article on Tweaktown about this in August 2012.

JellyRoll - Friday, November 23, 2012 - link
OMG... SandForce does not do dedupe (deduplication). It does not have to "check if the data is used by something else"! The drive is unaware of the actual file usage above the device level. That is a host-level responsibility.
I cannot believe that this article was not vetted before it was posted.
Kristian Vättö - Friday, November 23, 2012 - link
SandForce does deduplication at the device level. It doesn't look for actual files like the host does because it's all ones and zeros to the controller. However, what it does is look for similar data patterns. For example, if you have two very similar photos that are 5MB each, the controller may not write 10MB. Instead, it will only write, let's say, 8MB to the NAND because some of the data is duplicated, and the whole idea of deduplication is to minimize NAND writes.
If you go and delete one of these photos, the OS sends a TRIM command telling the controller that the LBA is no longer in use and can be erased. What makes SandForce more complicated is the fact that the photos don't necessarily have their own LBAs, so you need to check that the LBA you're about to erase isn't mapped to any other LBA. Otherwise you might end up erasing a portion of the other photo as well.
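To make that bookkeeping concrete, here's a toy model of a dedupe-aware mapping layer. It's written purely from the description above, not from anything SandForce has published, so treat it only as an illustration of why a TRIM can't simply erase a block without checking whether other LBAs still reference it:

```python
import hashlib

class DedupFTL:
    """Toy flash translation layer with block-level deduplication.
    Illustrative only; real controllers work on NAND pages and keep
    these tables in firmware, not Python dicts."""

    def __init__(self):
        self.lba_to_phys = {}    # logical block -> physical block id
        self.phys_data = {}      # physical block id -> stored data
        self.refcount = {}       # physical block id -> number of LBAs pointing at it
        self.hash_to_phys = {}   # data fingerprint -> physical block id

    def write(self, lba, data):
        self.trim(lba)                           # drop any old mapping for this LBA first
        h = hashlib.sha256(data).digest()
        if h in self.hash_to_phys:               # duplicate data: no new NAND write needed
            phys = self.hash_to_phys[h]
        else:                                    # unique data: allocate a new physical block
            phys = len(self.phys_data)
            self.phys_data[phys] = data
            self.hash_to_phys[h] = phys
            self.refcount[phys] = 0
        self.lba_to_phys[lba] = phys
        self.refcount[phys] += 1

    def trim(self, lba):
        """TRIM must check whether any other LBA still uses the physical block."""
        phys = self.lba_to_phys.pop(lba, None)
        if phys is None:
            return
        self.refcount[phys] -= 1
        if self.refcount[phys] == 0:             # only erase once nothing references it
            h = hashlib.sha256(self.phys_data.pop(phys)).digest()
            del self.hash_to_phys[h]
            del self.refcount[phys]

ftl = DedupFTL()
ftl.write(0, b"photo-data")
ftl.write(1, b"photo-data")      # duplicate: stored once, refcount == 2
ftl.trim(0)                      # must NOT erase the block; LBA 1 still needs it
print(len(ftl.phys_data))        # 1 -> the shared data survives the TRIM
```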
JellyRoll - Friday, November 23, 2012 - link
I challenge you to offer one document that supports your assertion that SandForce does deduplication. There isn't any, as it does not. Feel free to link to the technical document that supports your claims in your reply. SandForce supports compression, not deduplication.
Here is a link to documentation and product data sheets.
http://www.lsi.com/products/storagecomponents/Page...
Kristian Vättö - Friday, November 23, 2012 - link
SandForce/LSI has published very little about the technology behind DuraWrite and how it works, but a combination of technologies including compression and deduplication is what they have told us.

http://www.anandtech.com/show/2899/3
JellyRoll - Friday, November 23, 2012 - link
Linking an AnandTech article is not proof that SandForce does deduplication. A quick Google search will reveal that there is no other source, outside of AnandTech, that claims they offer deduplication on the current series of processors. As a matter of fact, that article is the only other reference to deduplication and SandForce that can be found.
There was a mistake made in that article.
JellyRoll - Friday, November 23, 2012 - link
As a matter of fact, if deduplication applied to the SandForce series of processors, then incompressible data would also see decreases in write amplification. SandForce is very public that they have "to follow the same rules" with incompressible data as everyone else, i.e., they suffer the same amount of write amplification. Since SandForce controllers only exhibit performance-enhancing and endurance-increasing benefits with compressible data, that alone indicates that deduplication is not in use.
Deduplication can be applied regardless of the compressibility of the data.
extide - Saturday, November 24, 2012 - link
Incompressible data generally doesn't have duplications in it... that's kinda what makes it incompressible... I mean the whole POINT of compression is removing duplications!

JellyRoll - Saturday, November 24, 2012 - link
If you have two matching sets of data, be they incompressible or not, they would be subject to deduplication. It would merely require mapping to the same LBA addresses. For instance, if you have two files that consist of largely incompressible data, but they are still carbon copies of each other, they are still subject to data deduplication.
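A quick illustration of that distinction (nothing to do with SandForce's internals, just the general principle): random data is essentially incompressible, yet two identical copies of it are trivial for a dedupe layer to detect, because their fingerprints match.

```python
import hashlib, os, zlib

block = os.urandom(1024 * 1024)   # 1 MiB of random, effectively incompressible data

# Compression gets nowhere with random data (the output is about the same size or larger)...
print(f"zlib output: {len(zlib.compress(block, 9)) / len(block):.1%} of the original")

# ...but an exact duplicate is trivial to detect by fingerprint, compressible or not.
copy_a, copy_b = block, bytes(block)
print("fingerprints match:", hashlib.sha256(copy_a).digest() == hashlib.sha256(copy_b).digest())
# A dedupe-aware mapping layer would store one physical copy and point both logical copies at it.
```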
extide - Wednesday, November 28, 2012 - link
That could also be considered compression. Take 2 copies of the same MP3 file and put them into a zip file, how big is the zip file? Pretty close to the size of one copy...
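The overlap between the two techniques is easy to sketch (illustrative only; LZMA is used here because its match window is large enough to span both copies, which the small window of the deflate used inside zip generally is not):

```python
import lzma, os

mp3 = os.urandom(1024 * 1024)     # stand-in for an already-compressed 1 MiB file

one = lzma.compress(mp3)          # one copy: random data barely shrinks at all
two = lzma.compress(mp3 + mp3)    # two identical copies back to back

print(f"one copy:   {len(one):,} bytes compressed")
print(f"two copies: {len(two):,} bytes compressed (close to one copy, not double)")
```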