Basic SAS Architecture

The main advantages of SAS over parallel SCSI are its point-to-point architecture and much smaller cabling requirements. Modern parallel SCSI operates on a shared bus (every device shares the total bus bandwidth and is limited by the slowest device), with bandwidth ranging from 160MB/s to 320MB/s. SAS currently runs at 3Gbit/s, with 6Gbit/s in the works; after 8b/10b data encoding, that works out to 300MB/s and 600MB/s respectively. More importantly, that bandwidth is per device, so it is unlikely in the near term that any single device will be able to saturate the available bandwidth. One other huge advantage of SAS is its ability to interoperate with SATA devices. Most of the SAS enclosures sold today offer the ability to mix SAS and SATA drives, which allows for endless possibilities in architecting a storage infrastructure. Below is the SAS roadmap from the SCSI Trade Association.
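
As a rough illustration of where the 300MB/s and 600MB/s figures come from, here is a minimal sketch of the 8b/10b arithmetic (the line rates are the ones quoted above; the function name is ours, not from any SAS specification or library):

```python
# Rough conversion from a SAS line rate to usable bandwidth.
# 8b/10b encoding puts 10 bits on the wire for every 8 bits of data,
# so only 80% of the raw line rate carries payload.

def usable_bandwidth_mb_per_s(line_rate_gbit: float) -> float:
    raw_bits_per_s = line_rate_gbit * 1_000_000_000
    data_bits_per_s = raw_bits_per_s * 8 / 10   # strip 8b/10b overhead
    return data_bits_per_s / 8 / 1_000_000      # bits -> bytes -> MB

for rate in (3.0, 6.0):
    print(f"{rate} Gbit/s -> {usable_bandwidth_mb_per_s(rate):.0f} MB/s")
# 3.0 Gbit/s -> 300 MB/s, 6.0 Gbit/s -> 600 MB/s
```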


As you can see from the diagram below, the SAS standard is made up of six distinct layers, each with a specific purpose.


The lowest layer of SAS is the physical layer, which consists of the cables, connectors, and electrical characteristics for the SAS transmitter and receiver in the SAS phy. A SAS phy combines physical layer, phy layer, and link layer functions. A phy at the initiator and a phy at the target make up a physical link, and when multiple phys are grouped together into a single port, the result is referred to as a "wide port". As each additional pair of phys is connected to form another physical link, the aggregate bandwidth of the port increases incrementally.
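
To make the wide-port idea concrete, here is a small sketch of how aggregate port bandwidth scales with the number of connected phys (the class and names are hypothetical, purely for illustration, and the per-phy figure assumes the 3Gbit/s rate discussed above):

```python
# Hypothetical model of a SAS port: a narrow port has one phy, a wide port
# groups several phys, and each connected phy (one physical link) adds its
# full bandwidth to the port.

PER_PHY_MB_S = 300  # 3 Gbit/s phy after 8b/10b encoding

class SasPort:
    def __init__(self, phy_count: int):
        self.phy_count = phy_count  # 1 = narrow port, >1 = wide port

    @property
    def aggregate_bandwidth_mb_s(self) -> int:
        return self.phy_count * PER_PHY_MB_S

for phys in (1, 2, 4):
    kind = "wide" if phys > 1 else "narrow"
    print(f"{kind} port, {phys} phy(s): {SasPort(phys).aggregate_bandwidth_mb_s} MB/s")
# 1 phy: 300 MB/s, 2 phys: 600 MB/s, 4 phys (a common x4 wide port): 1200 MB/s
```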


The link layer is the third lowest layer; it interfaces with the SAS phy layer below it and the SAS port layer above it. Its main purpose is to control the SAS phy layer in order to manage connections with other SAS devices. Next in the stack is the port layer, which receives requests from the transport layer, interprets them, selects link layers (which in turn select the phys used to establish connections), and forwards the requests to the selected link layer for transmission.

The second highest layer is the transport layer. It receives requests from the application layer, constructs frames, and sends them to the port layer; on the receive side it validates incoming frames and notifies the application layer. Last but certainly not least is the application layer, whose main purpose is to create tasks for the transport layer to process.
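
Putting the upper layers together, a request conceptually flows down the stack as described in the last two paragraphs. The sketch below is a hypothetical illustration of that flow (not real driver code, and the selection policy is deliberately trivial):

```python
# Hypothetical sketch of a request moving down the SAS layer stack:
# application -> transport -> port -> link/phy.

class LinkLayer:
    """Controls a phy and manages the connection to the remote device."""
    def __init__(self, phy_id: int):
        self.phy_id = phy_id

    def transmit(self, frame: str) -> None:
        print(f"link/phy {self.phy_id}: opened connection, sent {frame}")

class PortLayer:
    """Receives frames from the transport layer and selects a link layer."""
    def __init__(self, links: list[LinkLayer]):
        self.links = links

    def forward(self, frame: str) -> None:
        link = self.links[0]          # trivial selection policy for the sketch
        link.transmit(frame)

class TransportLayer:
    """Builds frames for application-layer tasks and hands them to the port."""
    def __init__(self, port: PortLayer):
        self.port = port

    def send_task(self, task: str) -> None:
        frame = f"FRAME[{task}]"      # construct the frame
        self.port.forward(frame)

# Application layer: creates a task for the transport layer to process.
stack = TransportLayer(PortLayer([LinkLayer(phy_id=0), LinkLayer(phy_id=1)]))
stack.send_task("READ LBA 0")
```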

Comments

  • yyrkoon - Friday, February 2, 2007 - link

    When are you guys going to do some reviews on consumer grade equipment? Well, let me clarify: 'consumer grade' with on-card RAID processor(s). For instance, right now I'm in the market for an 8+ port RAID HBA, but would like to know if buying a Highpoint 16 port SATA RAID HBA would really be any worse than getting an Areca 8 port HBA for ~$200 USD more. 3Ware, from what I understand, offers the best Linux/Unix support, or does it? If so, would it really make much of a difference in a SOHO application?

    I personally would like to see a comparison of the latest Promise, Highpoint, Areca, 3Ware, etc. controllers. In short, there is a lot out there for a potential buyer, such as myself, to get lost in, and basically I am interested in reliability first, speed second (to a point).

    Anyhow, I just thought I'd point out that while you guys do cover a lot in this area, you seem to have a gap where I think it really matters most to your readers (the home PC / enthusiast / SOHO crowd).
  • mino - Saturday, February 3, 2007 - link

    I would stay away from Highpoint.
    We have had several instances of their RAID HBAs (new ones!) consistently going down AND corrupting the whole RAID5 array under some workloads. For the money, one is better off with a QuadFX ASUS board than going with Highpoint-like solutions.
    Areca is pretty much on a different level, of course...
  • yyrkoon - Sunday, February 4, 2007 - link

    Again, this only reinforces what I've said: we need a good article on which HBAs are good for reliability, etc.
  • mino - Sunday, February 4, 2007 - link

    Any 3Ware, Areca, LSi, Adaptec solution should be just fine.

    Most people do not actually need RAID5 for home usage, and it is usually cheaper to go _software_ RAID1 with every drive in the array attached to a different controller. In such a scenario, even the cheapest or onboard controller offers fault tolerance comparable to high-end RAID5 solutions.

    However, the simplest way to go is really two NAS RAID5 boxes mirroring each other.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    Any 3Ware, Areca, LSi, Adaptec solution should be just fine.


    I would rule out Adaptec and the older LSI chipsets still available (under several brands like Intel for example). We replaced a bunch of Intel 6 & 8 port controllers with top of the line 8-port Adaptec SATA II controllers.

    The performance of the Intel controllers (with LSI chipsets) was terrible. We got about 8-13MB/s sequential writes with RAID 10 arrays, and tested using a lot of different drives. The Adaptec products are a lot better in regard to speed, but keep dropping drives. This seems to be a common problem, but they have no solution.

    I've previously used 3ware without any problems, and would gladly test Areca if they were available here.
  • yyrkoon - Sunday, February 4, 2007 - link

    Why would I want to spend $1300+ per 5-disk array (minus drives) when I could build my own system much cheaper and use the hardware/software I wanted? Just because I don't know which HBAs are more reliable than others (because I obviously can't afford to buy them all) doesn't mean I'm an idiot ;)
  • Bob Markinson - Friday, February 2, 2007 - link

    Interesting review!
    I would have liked to see a comparison with latest-generation 15K SCSI drives rather than 10K SCSI drives to see the true SAS interface performance advantage over SCSI. Furthermore, the ServeRAID 6M comes in two versions - one with 128 MB cache and the other with 256 MB cache. Also, there were performance issues with early 7.xx firmware/software revisions on the 6M at high IO loads - hopefully you ran the tests with the most recent firmware. Write-back cache was enabled on the 6M, right?

  • Lifted - Tuesday, February 6, 2007 - link

    Based on the title of the article, Promise VTrak J300S, you are expecting too much. The "comparison" was more like an ad for the product. What is the point in comparing 10K U320 vs 15K SAS? It's supposed to tell us what, exactly? You clearly need to look elsewhere for a SAS vs U320 comparison if that's what you were expecting here. This was more for kicks I think, and perhaps to make the J300S look better than ____ ??? I don't get it, it's just a storage enclosure. The RAID adapters and drives are what determine performance, so why was this apples-to-oranges "performance" review thrown into an enclosure article?

    Odd, quite odd.
  • fjeske - Friday, February 2, 2007 - link

    Isn't it a bit unfair to use old IBM 10K SCSI drives in this comparison? None of the now-Hitachi drives show good performance on Storagereview.com. Compare to Seagate's Cheetah 15K.5 and I think you'll see a difference.

    Also, how was the SCSI setup done? Attaching 12 drives to one U320 bus will obviously saturate it. Servers usually pair them when connecting this many drives.
  • cgaspar - Friday, February 2, 2007 - link

    SAS and SCSI drives have disk write caches disabled by default, as the drives' caches are not battery-backed. IDE and SATA drives frequently have write caching enabled by default. This makes writes much faster, but if you lose power, those writes the drive claimed were committed will be lost, which can be a very bad thing for a database. I'd suggest disabling the write cache on the SATA drives and re-testing (if you still have the gear); I suspect the results will be illuminating.
