Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. To standardize testing across multiple NAS units, the following commands were used to mount the NFS and CIFS shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER
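
As a concrete illustration, a hypothetical invocation against a NAS at 192.168.1.100 might look like the sketch below (the IP address, share names, and mount points are placeholders, not the actual test configuration). Note that CIFS mounts generally require credentials, or a guest option, passed via -o; the generic command above omits this:

# hypothetical values - substitute your own NAS IP, share names and credentials
mkdir -p /mnt/nas_nfs /mnt/nas_cifs
mount -t nfs 192.168.1.100:/shares/test /mnt/nas_nfs
mount -t cifs //192.168.1.100/test /mnt/nas_cifs -o username=admin,password=secret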

Note that these mount commands differ slightly from the ones used in our previous NAS reviews. We have also shifted from IOMeter to IOZone for evaluating performance under Linux. The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
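
For reference, the options in this command break down as follows (per the iozone man page):

# -a           run the full automatic test suite
# -c           include close() in the timing calculations
# -z           with -a, also test small record sizes for large files
# -R           generate an Excel-compatible report
# -g 2097152   cap the maximum file size at 2097152 KB (2 GB)
# -U <mount>   unmount and remount the mount point between tests
# -f <file>    temporary file to run the tests against
# -b <file>    write the binary Excel output to this file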

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
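
The -U flag in the command above already forces an unmount/remount between tests, which clears the client cache for the file under test; for ad-hoc runs, the Linux page cache can also be flushed manually. A minimal sketch (requires root; this is a standard kernel mechanism, not something specific to this setup):

sync                                # flush dirty pages out to the NAS first
echo 3 > /proc/sys/vm/drop_caches   # drop the page cache, dentries and inodes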



Readers interested in the hard numbers can refer to the CSV program output here. These numbers will gain relevance as we benchmark more NAS units with similar configurations.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

Some scenarios exhibit client caching effects, and these are evident in the gallery below.

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; comparing against the actual CSV outputs linked above makes the affected entries obvious.

Seagate Business Storage 8-Bay - Linux Client Performance (MBps)

IOZone Test        CIFS    NFS
Init Write           67     68
Re-Write             68     65
Read                 26    103
Re-Read              25    102
Random Read          16     45
Random Write         65     65
Backward Read        15     36
Record Re-Write     660    671
Stride Read          24     76
File Write           65     67
File Re-Write        65     68
File Read            18     74
File Re-Read         18     72
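
The per-test averages in the table above can be reproduced from the raw CSV outputs with a short awk pass. A sketch for the Init Write row, assuming the usual iozone -R report layout (a quoted section title, a header row of record sizes, then one row per file size with throughput in KB/s); the filename placeholder matches the command given earlier:

awk '/"Writer report"/ { grab = 1; getline; next }   # skip the title and header row
     grab && !NF { exit }                            # a blank line ends the section
     grab { for (i = 2; i <= NF; i++) { sum += $i; n++ } }
     END { if (n) printf "Init Write average: %.0f MBps\n", sum / n / 1024 }' \
    <NAS_NAME>_CIFS_CSV.csv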

 

Comments

  • buffhr - Friday, March 14, 2014

    I can see the merit in a single point of contact; however, at $5.1k USD it is overpriced IMO. Sure, the disks are roughly 50% of the cost, but that still leaves $2.5k disk-less for a system that does not support SSH and has practically no ecosystem or encryption...
  • Samus - Friday, March 14, 2014

    I can't believe what a hit encryption has on read performance. 25MB/s as opposed to 102MB/s? Holy...
  • extide - Friday, March 14, 2014

    It would be a lot better if they used a CPU with AES-NI
  • max1001 - Friday, March 14, 2014

    Look at the CPU. There's your answer.
  • Ammohunt - Thursday, March 20, 2014

    I agree; $5k buys a lot of JBOD that you can hang off an existing server and configure however you want: ZFS, tgtd, SMB, CIFS, NFS, etc.
  • Haravikk - Friday, March 21, 2014

    For such a large investment I'm pretty surprised by the lack of attention to detail here. There is no hardware support for encryption, which is crazy; my ~$250 Synology DS212j has an ARM processor with hardware encryption, so why doesn't a $5000+ machine? Also, 2x gigabit ethernet seems pretty meagre these days when any serious data users will be (or should be) investing in 10 gigabit ethernet at the very least, and while the controllers are pricey it would fit well within the huge premium here.

    I mean, I'm nearly finished building a DIY storage box; it's not racked (since I'm building it around a tower case), but it has 15 hot-swappable 3.5" hard drive bays. I'm using it for direct attached storage and it's coming in around $800 or so, but I don't think a small form factor motherboard sufficient to run ReadyNAS would push me much higher after swapping out the DAS parts. I dunno, for $5000+ I would think an enterprise oriented product should be able to do a lot better than what I can build myself! Even if I switched everything for enterprise parts I'd still come in under.
  • tech6 - Friday, March 14, 2014

    It seems to have become the norm that companies release products with half-finished software and expect their customers to be their beta testers. Why would any business in their right mind pay $5K for an unfinished product when there are much better alternatives available?
  • Sadrak85 - Friday, March 14, 2014

    Did a back-of-the-envelope calculation a while back; the 2.5" ones just make more sense if you need maximum storage at the moment. That said, when we have the next gen of HDDs filled with helium and holding 10+ TB apiece, 3.5" all the way.
  • Sadrak85 - Friday, March 14, 2014

    I eat my words, the 2.5" ones are 9U for 50 drives...which is fewer TB/U, if you can accept the units. This one can make sense after all.
  • Samus - Friday, March 14, 2014

    I've replaced all our Seagate Constellation.2 drives over the past 3 years with Hitachis, as they have failed like clockwork in our HP ML380 that came equipped with them.

    When I get the replacement back from HP, I put a Hitachi in the cage, install it in the server, and put the Constellation on eBay, where I usually get $50. That's all they're worth, apparently.

    I love Seagate, but between their load/unload cycle-happy desktop drives that have a pre-determined death, and their ridiculously poor-quality SAS drives, I just hope their SSDs are their saving grace, because my how the mighty have fallen since the 7200.7 days.
