21st Century Server Choices

Lots of people base their server form factor choice on what they are used to buying. Critical database applications equal a high-end server; less critical applications get a midrange server. High-end machines used to find a home at larger companies, while cheaper servers were typically attractive to SMEs. I am oversimplifying, but those are the clichés that pop up when you speak of server choices.

Dividing the market into who should or should not buy high-end servers is so... 20th century. Server buying decisions today are a lot more flexible and exciting for those who keep an open mind. In the world of virtualization, your servers are just resource pools of networking, storage, and processing. Do you buy ten cheap 1U servers, four higher-performance 2U servers, one “low cable count” blade chassis, or two high-end servers to satisfy the needs of your services?

A highly available service can be set up with cheap and simple server nodes, as Google and many others show us every day. On the flip side of the coin, you might be able to consolidate all your services onto just a few high-end machines, reducing management costs while taking advantage of the advanced RAS features these kinds of machines offer. It takes a detailed study to determine which strategy is best for your particular situation, so we are not saying that one strategy beats all the others. The point is that the choice between cheap clustered nodes and a few high-end machines cannot be answered by simply looking at the size of the company you work for or the "mission critical level" of your service. There are corner cases where the choice is clear, but that is not the case for the majority of virtualized datacenters.

So is buying high-end servers, as opposed to buying two or three times as many 2-socket systems, an interesting strategy for your virtualized cluster if you are not willing to pay a premium for RAS features? Until very recently, the answer was simple: no. High-end quad-socket systems easily cost three times as much or more, yet never offered twice the performance of dual-socket systems. There are many reasons for that. If we focus on Intel, the MP series was always based on mature but not cutting-edge technology. Quad-socket systems also have more cache coherency overhead, and the engineering choices favor reliability and expandability over performance. That results in slower but larger memory subsystems, and sometimes lower clock speeds too. The result was that the performance advantage of the quad-socket system was in many cases minimal.

At the end of 2006, dual Xeon X5300 systems were more than a match for the quad Xeon X7200 systems. And recently, dual Xeon 5500 servers made the massive Xeon 7400 servers look slow. The most important reason why these high-end systems were still bought was their superior RAS features. Other reasons include the fact that some decision makers never really bothered to read the benchmarks carefully and simply assumed that a quad-socket system would automatically be faster, since that is what the OEM account manager told them. You cannot even blame them: a modern CIO has to bury his head in financial documents, must solve HR problems, and is constantly trying to explain to upper management why the complex IT systems are not aligned with the business goals. Getting the CIO down from the “management penthouse” to the “cave down under”, also called the datacenter, is no easy task. But I digress.

Virtualization can shatter the old boundaries between midrange and high-end servers. High-end systems can become interesting for the rest of us, the people who do not normally consider these expensive machines. The condition is that the high-end systems can consolidate more services than the dual-socket systems, so their performance must be much better. How much better? If we just focus on capital investment, we get the figures below.

Type        Server      CPUs        Memory              Approx. Price
Midrange    Dell R710   2x X5670    18 x 4GB = 72GB     $9000
Midrange    Dell R710   2x X5670    16 x 8GB = 128GB    $13000
High-end    Dell R910   4x X7550    64 x 4GB = 256GB    $32000
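
To put a rough number on the capital-cost gap, the sketch below simply divides the high-end price by each midrange price from the table above. The prices are the approximate figures quoted there, not exact quotes, so treat the resulting multiples as ballpark numbers for how much performance the high-end box has to deliver to close the gap on purchase price alone.

```python
# Ballpark capital-cost comparison based on the approximate prices in the
# table above (assumed list prices, not exact quotes).
midrange_prices = {
    "Dell R710, 2x X5670, 72GB": 9000,
    "Dell R710, 2x X5670, 128GB": 13000,
}
high_end_price = 32000  # Dell R910, 4x X7550, 256GB

for config, price in midrange_prices.items():
    factor = high_end_price / price
    # The R910 has to deliver roughly this multiple of a single R710's
    # performance to break even on purchase price alone.
    print(f"{config}: R910 must deliver about {factor:.1f}x the performance")
```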

So these numbers seem to suggest that we need 2.5 to 3 times better performance. In reality, that does not need to be the case. The TCO of two high-end servers is most likely a bit better than that of four midrange servers. The individual components like the PSU, fans, and motherboard should be more reliable and thus result in less downtime and less time spent replacing those components. Even if that is not the case, it is statistically more likely that a component fails in a cluster with more servers, and thus more components. Fewer cables and fewer hypervisor updates should also help. Of course, the time spent managing the VMs is probably more or less the same.
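
To illustrate the "more servers means more chances for something to fail" argument, here is a minimal sketch; the per-server annual failure probability is a made-up assumption purely for illustration, not a measured figure.

```python
# Probability that at least one server in the cluster fails within a year,
# assuming independent failures and a hypothetical 5% annual failure
# probability per server (an assumption for illustration only).
p_fail_per_server = 0.05

for n_servers in (2, 4):
    p_at_least_one = 1 - (1 - p_fail_per_server) ** n_servers
    print(f"{n_servers} servers: {p_at_least_one:.1%} chance of at least one failure per year")
```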

While a full TCO calculation is not the goal of this article, it is pretty clear to us that a high-end system should outperform the midrange dual-socket systems by at least a factor of two to be an economical choice in a virtualization cluster where hardware RAS capabilities are not the only priority. There is a strong trend towards guaranteeing the availability of the (virtual) machine with easy-to-configure and relatively cheap software techniques such as VMware's HA and Fault Tolerance. The availability of your service is then guaranteed by using application-level high availability such as Microsoft's clustering services, load-balanced web servers, Oracle fail-over, and other similar (but still affordable) techniques.

The ultimate goal is not keeping individual hardware running but keeping your services running. Of course, hardware that fails too frequently will place a lot of stress on the rest of your cluster, so that is another reason to consider this high-end hardware... if it delivers on the price/performance front. Let us take a closer look at the hardware.

Comments

  • Ratman6161 - Wednesday, August 11, 2010 - link

    Many products license on a per-CPU basis. For Microsoft anyway, what they actually count is the number of sockets. For example, SQL Server Enterprise retails for $25K per CPU. So an old 4-socket system with single cores would be 4 x $25K = $100K. A quad-socket system with quad-core CPUs would be a total of 16 cores, but the pricing would still be 4 sockets x $25K = $100K. It used to be that Oracle had a complex formula for figuring this, but I think they have now also gone to the simpler method of just counting sockets (though their enterprise edition is $47.5K).

    If you are using VMware, they also charge per socket (last I knew), so two dual-socket systems would cost the same as a single 4-socket system. The thing is, though, you need to have at least two boxes in order to enable the high availability (i.e. automatic failover) functionality.
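
    The per-socket arithmetic described in the comment above can be sketched in a few lines; the $25K-per-socket figure is the commenter's ballpark number, not an official price list:

    ```python
    # Per-socket licensing cost, using the approximate $25K/socket figure
    # mentioned in the comment above (ballpark, not an official price list).
    price_per_socket = 25_000

    configs = {
        "two dual-socket servers": 2 * 2,   # 4 sockets total
        "one quad-socket server":  1 * 4,   # 4 sockets total
    }

    for name, sockets in configs.items():
        print(f"{name}: {sockets} sockets -> ${sockets * price_per_socket:,}")
    # Core count does not enter into it: a quad-socket box with quad-core
    # CPUs (16 cores) is still licensed as 4 sockets under this model.
    ```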
  • Stuka87 - Wednesday, August 11, 2010 - link

    For VMware they have a few pricing structures. You can be charged per physical socket, or you can get an unlimited socket license (which is what we have, running on seven R910's). You just need to figure out if you really need the top tier license.
  • semo - Tuesday, August 10, 2010 - link

    "Did I mention that there is more than 72GHz of computing power in there?"

    Is this ebay?
  • Devo2007 - Tuesday, August 10, 2010 - link

    I was going to comment on the same thing.

    1) A dual core 2GHz CPU does not equal "4GHz of computing power" - unless somehow you were achieving an exact doubling of performance (which is extremely rare if it exists at all).

    2) Even if there was a workload that did show a full doubling of performance, performance isn't measured in MHz & GHz. A dual-core 2GHz Intel processor does not perform the same as a 2GHz AMD CPU.

    More proof that the quality of content on AT is dropping. :(
  • mino - Wednesday, August 11, 2010 - link

    You seem to know very little about the (40yrs old!) virtualization market.
    It flourishes from *commoditising* processing power.

    While clearly meant as a joke, that statement of Johan's is much closer to the truth than most market "research" reports on x86.
  • JohanAnandtech - Wednesday, August 11, 2010 - link

    Exactly. ESX resource management lets you reserve CPU power in GHz. So for ESX, two 2.26 GHz cores are indeed a 4.5 GHz resource.
  • duploxxx - Thursday, August 12, 2010 - link

    Sure, you can count resources together as much as you want... virtually. But in the end a single process is still only able to use the max GHz a single CPU core can offer; a faster core just finishes the request sooner. That is exactly why those Nehalem and Gulftown parts still hold up against the huge core count of Magny-Cours.
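
    The difference between the aggregate-GHz view and the per-thread ceiling discussed in this exchange can be sketched as follows; the socket, core, and clock numbers are illustrative assumptions (a hypothetical 4-socket, 8-core, 2.26 GHz box), not the exact review configuration:

    ```python
    # Aggregate "GHz" as a hypervisor resource pool counts it, versus the
    # clock ceiling a single busy thread actually experiences.
    # Illustrative numbers only (hypothetical 4-socket, 8-core, 2.26 GHz box).
    sockets, cores_per_socket, clock_ghz = 4, 8, 2.26

    pool_capacity_ghz = sockets * cores_per_socket * clock_ghz
    single_thread_ceiling_ghz = clock_ghz

    print(f"Resource pool capacity: {pool_capacity_ghz:.1f} GHz")
    print(f"One single-threaded request still tops out at {single_thread_ceiling_ghz} GHz")
    ```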
  • maeveth - Tuesday, August 10, 2010 - link

    So I have nothing at all against AnandTech's recent articles on virtualization; however, so far all of them have only looked at virtualization from a compute density point of view.

    I am currently the administrator of a VMware environment used for development work, and I run into I/O bottlenecks FAR before I ever run into a compute bottleneck. In fact, computational power is pretty much the LAST bottleneck I run into. My environment currently holds just short of 300 VMs, OS varies. We peak at approximately 10-12K IOPS.

    From my experience you always have to look at potential performance in a virtual environment from a much larger perspective. Every bottleneck affects others in subtle ways. For example, if you have a memory bottleneck, either host or guest based, you will further impact your I/O subsystem, though you should aim not to have to swap. In my opinion your storage backend is the single most important factor when determining large-scale-out performance in a virtualized environment.

    My environment has never once run into a CPU bottleneck. I use IBM x3650/x3650M2 with Dual Quad Xeons. The M2s use X5570s specifically.

    While I agree having impressive magnitudes of "GHz" in your environment is kinda fun, it hardly says anything about how that environment will perform in the real world. Granted, it is all highly subject to workload patterns.

    I also want to make it clear that I understand that testing on such a scale is extremely cost prohibitive. As such I am sure AnandTech, and Johan specifically, is doing the best he can with the resources he is given. I just wanted to throw my knowledge out there.

    @ELC
    Yes, software licensing is a huge factor when purchasing ESX servers. ESX is licensed per socket. It's a balancing act that depends on your workload, however. A top-end ESX license costs about $5500/year per socket.
  • mino - Wednesday, August 11, 2010 - link

    However, IMO storage performance analysis is pretty much beyond AT's budget ballpark by an order of magnitude (or two).

    There is a reason this space is so happily "virtualized" by storage vendors AND customers to a "simple" IOPS number.
    It is a science on its own. Often closer to black (empiric) magic than deterministic rules ...

    Johan,
    on the other hand, nothing prevents you from mentioning this sad fact:

    Except for edge cases, a good virtualization solution is built from the ground up with:
    1. SLAs
    2. the storage solution
    3. licensing considerations
    4. everything else (like processing architecture) dictated by the previous three
  • JohanAnandtech - Wednesday, August 11, 2010 - link

    I can only agree, of course: in most cases the storage solution is the main bottleneck. However, this is also a result of the fact that most storage solutions out there are not exactly speed demons. Many storage solutions consist of overengineered (and overpriced) software running on outdated hardware. But things are changing quickly now. HP, for example, seems to recognize that a storage solution is very similar to a server running specialized software. There is more: with a bit of luck, Hitachi and Intel will bring some real competition to the table (currently STEC has almost a monopoly on enterprise SSDs). So your number 2 is going to tumble down :-).
