Single Client Performance - CIFS and NFS on Linux

A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client. In order to standardize the testing across multiple NAS units, the following parameters were used to mount the NFS and Samba shares:

mount -t nfs NAS_IP:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER

mount -t cifs //NAS_IP/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER
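These bare mounts deliberately accept the kernel's default negotiation for protocol version and transfer sizes. For readers who want reproducible settings, a hedged sketch of how the same mounts might be pinned explicitly (the IP address, share names, mount points and option values below are illustrative assumptions, not the settings used in this review):

```shell
# Illustrative only: NAS_IP, share paths and option values are assumptions.
# The commands are built as strings and printed so the sketch is safe to run.
NAS_IP=192.168.1.100

# NFS: pin the protocol version and read/write sizes instead of negotiating.
NFS_CMD="mount -t nfs -o vers=3,rsize=65536,wsize=65536 $NAS_IP:/volume1/test /mnt/nfs"

# CIFS: supply credentials explicitly; defaults vary across kernel versions.
CIFS_CMD="mount -t cifs -o username=guest,password= //$NAS_IP/test /mnt/cifs"

echo "$NFS_CMD"
echo "$CIFS_CMD"
```

In a real run the strings would be executed (as root) rather than echoed; the point is that pinning options removes one source of variation between NAS units.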

Note that these are slightly different from the mount commands used in our previous NAS reviews. We have also shifted from IOMeter to IOZone for evaluating performance under Linux. The following IOZone command was used to benchmark the CIFS share:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
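One common mitigation for client-side caching is to flush the Linux page cache before each run (IOZone's `-U` flag serves a similar purpose by unmounting and remounting the share between tests). A minimal wrapper sketch, assuming root access when actually executed; the mount point is a placeholder, and `DRY_RUN` defaults to printing the commands instead of running them:

```shell
# Illustrative wrapper: flush the client page cache before an IOZone run so
# read numbers measure the NAS, not local RAM. Requires root to run for real.
MOUNT_POINT=/mnt/nas_cifs        # assumed mount point, not the review's path
DRY_RUN=${DRY_RUN:-1}            # default to dry run for safety

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"                # dry run: print the command instead
    else
        "$@"
    fi
}

run sync                                          # flush dirty pages to disk
run sh -c 'echo 3 > /proc/sys/vm/drop_caches'     # drop page/dentry/inode caches
run iozone -aczR -g 2097152 -U "$MOUNT_POINT" -f "$MOUNT_POINT/testfile"
```

This does not eliminate caching inside IOZone's smaller-than-RAM test files, which is why the small-file read numbers in the gallery still show inflated figures.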



Readers interested in the hard numbers can refer to the CSV program output here. These numbers will gain relevance as we benchmark more NAS units in similar configurations.

The NFS share was also benchmarked in a similar manner with the following command:

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv

Some scenarios exhibit client caching effects, and these are evident in the gallery below.

The IOZone CSV output can be found here for those interested in the exact numbers.

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects; referring to the actual CSV outputs linked above makes the affected entries obvious.

Seagate Business Storage 8-Bay - Linux Client Performance (MBps)

IOZone Test        CIFS    NFS
Init Write           67     68
Re-Write             68     65
Read                 26    103
Re-Read              25    102
Random Read          16     45
Random Write         65     65
Backward Read        15     36
Record Re-Write     660    671
Stride Read          24     76
File Write           65     67
File Re-Write        65     68
File Read            18     74
File Re-Read         18     72
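The averages above are derived from the per-size throughput values in the IOZone CSV output. A minimal sketch of that arithmetic, using awk over a hypothetical two-column excerpt (file size, throughput in KBps) rather than IOZone's real multi-column layout:

```shell
# Toy reproduction of the averaging step: mean throughput across all
# file/record sizes, converted from KBps to MBps. The numbers are made up;
# the real IOZone CSV has one throughput column per record size.
cat > sample.csv <<'EOF'
65536,102400
131072,110592
262144,98304
EOF

awk -F, '{ sum += $2; n++ } END { printf "%.0f MBps\n", sum / n / 1024 }' sample.csv
# prints: 101 MBps
```

Averaging across every file and record size is what lets a handful of cache-inflated small-file results skew a test's summary figure, which is why the raw CSV remains the authoritative reference.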


Comments

  • lorribot - Friday, March 14, 2014 - link

    Sorry, but the comment "Most users looking for a balance between performance and redundancy are going to choose RAID-5" is just plain stupid if you value your data at all. Ask anyone serious in enterprise storage and they will tell you RAID 6 is a must with SATA disks over 1TB. SATA is just pants when it comes to error detection, and the likelihood of one disk failing and then finding a second one fail with previously undetected errors when you try a rebuild is quite high.
    Rebuild times are often longer; I have seen 3TB drives stretch into a third day.
    So on an 8-disk system you are now looking at only 6 disks, and you really want a hot spare, so now you are down to just 5 disks and 20TB raw; formatted, this is going to be down to 19TB. Where has that 32TB storage system gone?
    If you are doing SATA drives you need shelves of them, the more the merrier, to make any kind of sense in the business world.
  • Penti - Saturday, March 15, 2014 - link

    Audience?

    I don't quite get who the target audience for this is; surely a rack-mount NAS must mean SMB/enterprise, but I can't really see this fitting there. Lack of encryption is just one point, but at this price it surely lacks in many other regards: it has no 10GbE and no RAID controller (rebuild time seems to be ridiculous). The software doesn't really seem up for small enterprises. What is this appliance supposed to be used against? iSCSI is its main feature, but what use is it at this speed? No proper remote management on hardware that costs around 2500 USD? That is using a 42 dollar processor? I don't get this product; what are you supposed to use it for?
  • ravib123 - Saturday, March 15, 2014 - link

    We often use Openfiler or other Linux-based NAS/SAN platforms.

    Looking at this configuration I agree that most with an 8 disk array who are looking for maximum storage space would use RAID5, normally we use more disks and RAID10 for improved performance.

    My curiosity is how CPU- and memory-bound this thing must be, but I saw no mention of these being limiting factors. The performance is far below most configurations I've used with 8 disks in RAID5 (with a traditional RAID card).
  • Penti - Saturday, March 15, 2014 - link

    The thing is that you can get pretty decent hardware at 2000-2500 USD: say a barebones Intel/Supermicro with IPMI/IPKVM (BMC), a lower-end Xeon processor with AES-NI and all that, and a case with hot-swap bays and two PSUs. No problem running 10GbE, Fibre Channel or 8 disks (you might need an add-on card or two). I would expect them to at least spend more than 500 for CPU, RAM and board on appliances in this price range. It's not like the software and case itself is worth 2500 USD, plus whatever markup they have on their drives.
  • SirGCal - Sunday, March 16, 2014 - link

    Well, I used retired hardware and built a RAID6 (RAIDZ2) box with 8 drives, 2TB each, with nothing more than a case to hold them and a $41 internal 4-port SATA controller card. Downloaded Ubuntu, installed the ZFS packages, configured the array, and set up monitoring. Now I have a fully functional Linux rig with SSH, etc., and ~11,464,525,440 1K blocks (roughly 11TB usable).

    I have another 23TB array usable using 4TB drives and an actual, very expensive, 6G, 8 port RAID card. The ZFS rig is right there in performance, even using slower (5400 RPM) drives.

    So you can do it as cheap as you like and get more functionality than this box offers. Need multiple NICs? Throw 'em in. Need ECC? Server boards are just as available. Need U-factor? Easy enough. I agree with the others; I don't see the $2k+ justification in cost... Even if they had the 'self-encrypting' versions for $400 each, that's $3200, leaving $1900 for the hardware... Eww...
  • alyarb - Thursday, March 20, 2014 - link

    Half-assed product. Why is it only 30 inches deep? You could fit another row of disks if you used the entire depth of the rack. Assuming you have a meter-deep rack, of course, but who doesn't?

    I just want an empty chassis with a backplane for 3 rows of 4 disks. I want to supply the rest of the gear on my own.
