The Quad Opteron Alternative

Servers with the newest Intel six-core Xeon hit the market in April. The fastest six-core Xeons offered up to twice the performance of the six-core Opteron “Istanbul”. The reason was that the age of the integer core in AMD's Opteron was starting to show: while the floating point unit got a significant overhaul in 2007 with the quad-core AMD "Barcelona" chip, the integer part was still a tuned version of the K8, launched back in 2003. AMD partly compensated with large improvements in multi-core performance scaling: HT Assist, faster CPU interconnects, larger L3 caches, and so on.

To counter this lower per-core performance, AMD focused its efforts on the "Magny-Cours" MCMs, which scaled even better thanks to HyperTransport 3.0 and four DDR3 memory channels per socket. AMD’s twelve-core processors launched at the end of March 2010, but servers based on these “Magny-Cours” Opterons were hard to find, so for a few months Intel dominated the midrange and high-end server market. HP and Dell told us they would launch their "Magny-Cours" servers in June 2010. That is history now, and server buyers once again have an alternative to the ubiquitous Xeon servers.

AMD’s strategy for making its newest platform attractive is pretty simple: be very generous with cores. For example, you get twelve Opteron cores at 2.1GHz for the price of a six-core Xeon at 2.66GHz (see our overview of SKUs). In our previous article, we measured that, on average, a dual socket twelve-core Opteron is competitive with a similar Xeon server. It is a pretty muddy picture, though: the Opteron wins in some applications, the Xeon in others. The extra DDR3 memory channel and the resulting higher bandwidth make the Opteron the choice for most HPC applications. The Opteron has a small advantage in OLAP databases, and the virtualization benchmarks are a neck-and-neck race. The Xeon wins in applications like rendering, OLTP, and ERP, although again by a small margin.

But if the AMD platform really wants to lure away significant numbers of customers, AMD will have to do better than being slightly faster or slightly slower. There are many more Xeon based servers out there, so AMD Opteron based servers have to rise above the crowd. And they do: the “core generosity” doesn’t end with offering more cores per socket. All Opteron 6100s are quad socket capable, and the price per core stays the same whether you want 12, 24, or 48 cores in your machine. AMD says it has “shattered the 4P tax, making 2P and 4P processors the same price.”
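To make that claim concrete, here is a minimal sketch of the price-per-core arithmetic. The list price below is a hypothetical placeholder, not an actual SKU price (see the SKU overview for real numbers):

    # Because 2P- and 4P-capable Opteron 6100 parts carry the same list price,
    # total CPU cost scales linearly with core count and the price per core
    # stays constant. The price used here is a hypothetical placeholder.
    def price_per_core(price_per_socket, cores_per_socket):
        return price_per_socket / cores_per_socket

    HYPOTHETICAL_PRICE = 1000.0  # placeholder list price per twelve-core Opteron 6100
    for sockets in (1, 2, 4):
        cores = sockets * 12
        total = HYPOTHETICAL_PRICE * sockets
        print(f"{cores} cores: {total:.0f} total, "
              f"{price_per_core(HYPOTHETICAL_PRICE, 12):.2f} per core")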

So dual socket Opteron servers are OK, offering competitive performance at a slightly lower price most of the time. Nice, but not a head turner. The really interesting servers of the AMD platform should be the quad socket ones: for a small price premium you get twice as many DIMM slots and processors as in a dual socket Xeon server. That means a quad socket Opteron 6100 positions itself as a high-end alternative to a dual Xeon 5600 server. If we take a quick look at the actual pricing of the large OEMs, the picture becomes very clear.

Compared to the DL380 G7 (72GB) specced above, the Dell R815 offers twice the amount of RAM while offering, theoretically, twice as much performance. The extra DIMM slots pay off: if you want 128GB, the dual Xeon servers have to use the more expensive 8GB DIMMs.
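A quick sketch of that DIMM math, assuming 18 DIMM slots for the dual-socket DL380 G7 and 32 for the quad-socket R815:

    # Smallest standard DIMM size (in GB) that reaches a target capacity,
    # given a server's DIMM slot count. Slot counts assumed: 18 for the
    # dual-socket DL380 G7, 32 for the quad-socket Dell R815.
    def min_dimm_size(target_gb, slots, sizes=(2, 4, 8, 16)):
        for size in sizes:
            if size * slots >= target_gb:
                return size
        raise ValueError("target capacity not reachable with these DIMM sizes")

    for name, slots in (("DL380 G7", 18), ("R815", 32)):
        print(f"{name}: {min_dimm_size(128, slots)}GB DIMMs for 128GB")
    # The DL380 G7 needs 8GB DIMMs; the R815 gets there with cheaper 4GB DIMMs.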

[Image: quad Opteron, Dell style: the PowerEdge R815]
Comments

  • pablo906 - Saturday, September 11, 2010 - link

    High performance Oracle environments are exactly what's being virtualized in the Server world yet it's one of your premier benchmarks.

    /edit should read

    High performance Oracle environments are exactly what's not being virtualized in the Server world yet it's one of your premier benchmarks.
  • JohanAnandtech - Monday, September 13, 2010 - link

    "You run highly loaded Hypervisors. NOONE does this in the Enterprise space."

    I agree. Isn't that what I am saying on page 12:

    "In the real world you do not run your virtualized servers at their maximum just to measure the potential performance. Neither do they run idle."

    The only reason we run with highly loaded hypervisors is to measure the peak throughput of the platform, like VMmark does. We know that is not real-world and does not give you a complete picture. That is exactly why there are pages 12 and 13 in this article. Did you miss those?
  • Per Hansson - Sunday, September 12, 2010 - link

    Hi, please use a better camera for pictures of servers that cost thousands of dollars.
    In full size the pictures look terrible, with way too much grain.
    The camera you use is a prime example of how far marketing has managed to take these things:
    10MP on a sensor that is 1/2.3" (6.16 x 4.62 mm, 0.28 cm²).
    A used DSLR with a decent 50mm prime lens plus a tripod really does not cost that much for a site like this.

    I love server pron pictures :D
  • dodge776 - Friday, September 17, 2010 - link

    I may be one of the many "silent" readers of your reviews Johan, but putting aside all the nasty or not-so-bright comments, I would like to commend you and the AT team for putting up such excellent reviews, and also for using industry-standard benchmarks like SAPS to measure throughput of the x86 servers.

    Great work and looking forward to more of these types of reviews!
  • lonnys - Monday, September 20, 2010 - link

    Johan -
    You note for the R815:
    Make sure you populate at least 32 DIMMs, as bandwidth takes a dive at lower DIMM counts.
    Could you elaborate on this? We have an R815 with 16x2GB and are not seeing the expected performance for our very CPU-intensive app; perhaps adding another 16x2GB might help?
  • JohanAnandtech - Tuesday, September 21, 2010 - link

    This comment you quoted was written in the summary of the quad Xeon box.

    16 DIMMs is enough for the R815 on the condition that you have one DIMM in each channel. Maybe you are placing the DIMMs wrongly? (Two DIMMs in one channel, zero DIMMs in the other?)
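The rule of thumb in that reply is easy to sanity-check. A minimal sketch, assuming the R815's four sockets with four DDR3 channels each:

    # One DIMM per channel is the minimum for full bandwidth on the R815:
    # 4 sockets x 4 DDR3 channels = 16 channels, so 16 DIMMs can be enough,
    # but only if they are spread one per channel.
    SOCKETS, CHANNELS_PER_SOCKET = 4, 4
    TOTAL_CHANNELS = SOCKETS * CHANNELS_PER_SOCKET

    def unpopulated_channels(dimms_per_channel):
        """dimms_per_channel: DIMM count for each of the 16 channels."""
        return sum(1 for count in dimms_per_channel if count == 0)

    balanced = [1] * TOTAL_CHANNELS              # one DIMM in every channel
    lopsided = [2, 0] * (TOTAL_CHANNELS // 2)    # pairs crammed into half the channels
    print(unpopulated_channels(balanced))  # 0 -> full bandwidth
    print(unpopulated_channels(lopsided))  # 8 -> bandwidth takes a dive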
  • anon1234 - Sunday, October 24, 2010 - link

    I've been looking around for some results comparing maxed-out servers but I am not finding any.

    The Xeon 5600 platform clocks the memory down to 800MHz whenever 3 DIMMs per channel are used, and I believe in some/all cases the full 1066/1333MHz speed (depends on the model) is only available when one DIMM per channel is used. This could be huge compared with an AMD 6100 solution at 1333MHz all the time, or a Xeon 7560 system at 1066MHz all the time (although some vendors clock down to 978MHz on some systems, the IBM HX5 for example). I don't know if this makes a real-world difference on typical virtualization workloads, but it's hard to say because reviewers rarely try it.

    It does make me wonder about your 15-DIMM 5600 system: 3 DIMMs per channel @ 800MHz on one processor, 2 DPC @ full speed on the other. Would it have done even better with a balanced memory config?

    I realize you're trying to compare like to like, but if you're going to present price/performance and power/performance ratios, you might want to consider how these numbers are affected if I have to use slower 16GB DIMMs to get the memory density I want, or if I have to buy twice as many VMware licenses or Windows Datacenter processor licenses because I've purchased twice as many 5600-series machines.
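A rough sketch of the peak-bandwidth arithmetic behind that comparison, using the channel counts and speeds mentioned above (theoretical peaks only: channels x transfer rate x 8 bytes per transfer):

    # Theoretical peak DDR3 bandwidth per socket. Channel counts: 3 for the
    # Xeon 5600, 4 for the Opteron 6100 and the Xeon 7560.
    def peak_bw_gbs(channels, mega_transfers):
        return channels * mega_transfers * 8 / 1000  # GB/s

    configs = [
        ("Xeon 5600, 3 DPC @ 800", 3, 800),
        ("Xeon 5600, 1 DPC @ 1333", 3, 1333),
        ("Opteron 6100 @ 1333", 4, 1333),
        ("Xeon 7560 @ 1066", 4, 1066),
    ]
    for name, channels, mts in configs:
        print(f"{name}: {peak_bw_gbs(channels, mts):.1f} GB/s per socket")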
  • nightowl - Tuesday, March 29, 2011 - link

    The previous post is correct in that the Xeon 5600 memory configuration is flawed. You are running the processor in a degraded state due to the unbalanced memory configuration as well as the differing memory speeds.

    The Xeon 5600 processors can run at 1333MHz (with the correct DIMMs) with up to 4 ranks per channel. Going above this clocks the memory down to 800MHz, which does result in a performance drop for the applications being run.
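That limit is simple to encode. A sketch of the rank-per-channel rule as described in the comment above (real BIOSes add more corner cases than this):

    # Rule of thumb for Xeon 5600 (Westmere-EP) DDR3 speed, per the comment
    # above: up to 4 ranks per channel can run at 1333 MT/s with the right
    # DIMMs; beyond that the channel clocks down to 800 MT/s.
    def ddr3_speed(ranks_per_channel, dimm_rated_mts=1333):
        if ranks_per_channel <= 4:
            return min(dimm_rated_mts, 1333)
        return 800

    print(ddr3_speed(2))  # 1333
    print(ddr3_speed(6))  # 800 -> noticeable performance drop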
  • markabs - Friday, June 8, 2012 - link

    Hi there,

    I know this is an old post, but I'm looking at putting 4 SSDs in a Dell PowerEdge and had a question for you.

    What RAID card did you use with the above setup?

    Currently a new Dell PowerEdge R510 comes with a PERC H700 RAID card with 1GB cache, and this is connected to a hot-swap chassis. Dell wants £1500 per SSD (crazy!), so I'm looking to buy 4 Intel 520s and set them up in RAID 10.

    I just wanted to know which RAID card you used, whether you had any trouble with it, and what RAID setup you used.

    many thanks.

    Mark
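For what it's worth, the RAID 10 layout in that question is easy to sketch: the four drives form two mirrored pairs that are then striped together. The 240GB drive size below is a hypothetical value for illustration only:

    # RAID 10 with four SSDs: two mirrored pairs, striped. Usable capacity is
    # half the raw total; reads can be served by all four drives, writes must
    # hit both members of a pair. The 240GB drive size is hypothetical.
    def raid10_usable_gb(drive_gb, drives):
        assert drives >= 4 and drives % 2 == 0, "RAID 10 needs an even number (>= 4) of drives"
        return (drives // 2) * drive_gb

    print(raid10_usable_gb(240, 4))  # 480GB usable from 4 x 240GB drives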
  • ian182 - Thursday, June 28, 2012 - link

    I recently bought a G7 from www.itinstock.com and, if I am honest, it is perfect for my needs. I don't see the point in the higher-end ones when it works out a lot cheaper to buy the parts you need and add them to the G7.
