No more mysteries: Apple's G5 versus x86, Mac OS X versus Linux
by Johan De Gelas on June 3, 2005 7:48 AM EST
Workstation, yes; Server, no.
The G5 is a gigantic improvement over the previous CPU in the PowerMac, the G4e. It is one of the most superscalar CPUs ever, and it has all the characteristics that could give Apple the edge, especially now that the clock speed race between AMD and Intel is over. However, there is still a lot of work to be done.

First of all, the G5 needs lower-latency access to memory, because right now, its integer performance leaves a lot to be desired. The Opteron and Xeon have better integer engines, and the Pentium 4/Xeon in particular has a better branch predictor too. The Opteron's memory subsystem runs circles around the G5's.
Secondly, it is clear that the G5's FP performance, despite its access to 32 architectural registers, needs good optimisation. Only one of our flops tests was "Altivectorized", which means that the GCC compiler needs to improve quite a bit before it can turn the many open source programs out there into super fast applications on the Mac. In contrast, the Intel compiler can vectorize all 8 tests.
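To make concrete what being "Altivectorized" means, here is a minimal sketch (our own illustration, not code from the flops suite) of a scalar kernel next to a hand-vectorized AltiVec version of the same loop. This is the rewrite that an autovectorizing compiler has to perform on its own:

#include <altivec.h>

/* Scalar version: the straight-line code GCC 3.3 generates. */
void saxpy_scalar(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Hand-"Altivectorized" version: four floats per instruction.
   Assumes x and y are 16-byte aligned and n is a multiple of 4.
   Compile with -maltivec (-faltivec on Apple's GCC). */
void saxpy_altivec(float *y, const float *x, float a, int n)
{
    union { float f[4]; vector float v; } ua = {{ a, a, a, a }};
    vector float va = ua.v;                     /* splat a into all four lanes */

    for (int i = 0; i < n; i += 4) {
        vector float vx = vec_ld(0, &x[i]);     /* load 4 floats at once */
        vector float vy = vec_ld(0, &y[i]);
        vec_st(vec_madd(va, vx, vy), 0, &y[i]); /* fused a*x + y, 4 lanes */
    }
}

The hand-written version leans on alignment and trip-count guarantees that a compiler must prove or guard against, which is a large part of why GCC lagged the Intel compiler here.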
AltiVec, or the Velocity Engine as Apple calls it, can make the G5 shine in workstation applications. A good example is Lightwave, where the G5 takes on the best x86 competition in some situations and remains behind in others.
The future looks promising in the workstation market for Apple, as the G5 has a lot of unused potential, and the increasing market share of the Power Mac should tempt developers to put a little more effort into Mac optimisation.
The server performance of the Apple platform is, however, catastrophic. When we asked Apple for a reaction, they told us that some database vendors, Sybase and Oracle, have found a way around the threading problems. We'll try Sybase later, but frankly, we are very sceptical. The whole "multi-threaded Mach microkernel trapped inside a monolithic FreeBSD cocoon with several threading wrappers and coarse-grained threading access to the kernel" construction, with a "backwards compatibility" millstone around its neck, sounds like a bad fusion recipe for performance.
Workstation apps will hardly mind, but the performance of server applications depends greatly on the threading, signalling and locking engine. I am no operating system expert, but with the data that we have today, I think that a PowerPC-optimised Linux such as Yellow Dog is a better idea for the Xserve than Mac OS X Server.
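For the curious, the kind of microbenchmark that exposes this is easy to write. Below is a minimal sketch (thread count and file names are purely illustrative, not our actual benchmark) that times raw pthread create/join cost, exactly the path that the layers described above sit on:

/* Illustrative pthread create/join timer.
   Build: gcc -std=c99 -O2 mini.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define NTHREADS 1000                 /* arbitrary, illustrative count */

static void *worker(void *arg)
{
    (void)arg;                        /* empty body isolates create/join cost */
    return NULL;
}

int main(void)
{
    struct timeval t0, t1;
    pthread_t tid;

    gettimeofday(&t0, NULL);
    for (int i = 0; i < NTHREADS; i++) {
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d create/join pairs: %.3f s (%.1f us per thread)\n",
           NTHREADS, secs, secs * 1e6 / NTHREADS);
    return 0;
}

An OS that makes thread creation, signalling or locking expensive shows it immediately in a loop like this, long before a database stresses the same primitives with real work.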
References
Threading on OS X: http://developer.apple.com/technotes/tn/tn2028.html
Mac OS X Basics: http://developer.apple.com/documentation/macosx/index.html
116 Comments
Rosyna - Friday, June 3, 2005
Actually, for better or worse the GCC Apple includes is being used for most Mac OS X software. OS X itself was compiled with it.

elvisizer - Friday, June 3, 2005
rosyna's right. i'm just not sure if there IS any way to do the kind of comparison you seem to've been shooting for (pure competition between the chips with as little else affecting the outcome as possible). you could use the 'special' compilers on each platform, but those aren't used for compiling most of the binaries you buy at CompUSA.
elvisizer - Friday, June 3, 2005
why didn't you run some tests with YD Linux on the G5?! you could've answered the questions you posed yourself! argh.
and you definitely should've included After Effects. "we don't have access to that software"? what the heck is THAT about?? you can get your hands on a dual 3.6 Xeon machine, a dual 2.5 G5, and a dual 2.7 G5, but you can't buy a freaking piece of Adobe software at retail?!
some seriously weird decisions being made here.
other than that, the article was ok. it re-confirmed suspicions i've had for a while about OS X Server handling large numbers of threads. My OS X servers ALWAYS tank hard with lots of open sessions, so i keep them around only for emergencies. They are so very easy to admin, tho, that they're still attractive to me for small workgroup sizes. like last month, I had to support 8 people working on a daily magazine being published at E3, literally inside the convention center. OS X Server was perfect in that situation.
Rosyna - Friday, June 3, 2005
There appears to be either a typo or a horrible flaw in the test. It says you used GCC 3.3.3, but OS X comes with gcc version 3.3 20030304 (Apple Computer, Inc. build 1809). If you did use GCC 3.3.3, then you were giving the PPC a severe disadvantage, as the stock GCC has almost no optimizations for PPC while it has many for x86.
Eug - Friday, June 3, 2005
"But do you really think that Oracle would migrate to this if it wasn't on a par?"[Eug dons computer geek wannabe hat]
There are lots of reasons to migrate, and I'm sure absolute performance isn't always the primary concern. We won't know the real performance until we actually see tests on Oracle/Sybase.
My uneducated guess is that they won't be anywhere near as bad as the artificial server benches might suggest, but OTOH, I could easily see Linux on G5 significantly besting OS X on G5 for this type of stuff.
i.e., the most interesting test I'd like to see is Oracle on the G5, with both OS X and Linux, compared to Xeon and Opteron with Linux.
And yeah, it would be interesting to see what gcc 4 brings to the table, since 3.3 provides no autovectorization at all. It would also be interesting to see how xlc/xlf does, although that doesn't provide autovectorization either. Where are the autovectorizing IBM compilers that were supposed to come out???
melgross - Friday, June 3, 2005
As none of us has actual experience with this, none of us can say yes or no. But do you really think that Oracle would migrate to this if it wasn't on a par? After all, Ellison isn't on Apple's board anymore, so there's nothing to prove there.
I also remember that, going back to Apple's G4 Xserves, their performance was better than that of the x86 crowd, and the Sun servers as well. Those tests were on several sites. It's been a while, though.
JohanAnandtech - Friday, June 3, 2005
querymc: Yes, you are right. The --noaltivec flag and the comment that AltiVec was enabled by default in the GCC 3.3.3 compiler docs made me believe there is autovectorization (or at least "scalarisation"). As I wrote in the article, we used -O2 and then tried a bucketload of other options like -ffast-math, -mtune=G5 and others I don't remember anymore, but it didn't make any big difference.

querymc - Friday, June 3, 2005
The SSE support would probably also be improved by using GCC 4 with autovectorization, I should note. There's a reason it does poorly in GCC 3. :)

querymc - Friday, June 3, 2005
Johan: I didn't see this the first time through, but you need to make a slight clarification to the floating point stuff. There is no autovectorization capability in GCC 3.3. None. There is limited support for SSE, but that is not quite the same, as SSE isn't SIMD to the extent that AltiVec is. If you want to use the AltiVec unit in otherwise unaltered benchmarks, you don't have a choice other than GCC 4 (and you need to pass a special flag to turn it on). Also, what compiler flags did you pass on each platform? For example, did you use -ffast-math?
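(For reference, a loop of the shape below only gets SIMD code from GCC 4 when vectorization is switched on explicitly; the flag spellings are from the GCC 4 documentation, and the invocations should be read as illustrative, not as the article's actual build lines.)

/* GCC 4 only; GCC 3.3 has no -ftree-vectorize at all:
     gcc -std=c99 -O2 -ftree-vectorize -maltivec vecadd.c   (PowerPC/AltiVec)
     gcc -std=c99 -O2 -ftree-vectorize -msse2   vecadd.c    (x86/SSE2)   */
void vecadd(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)   /* four adds per SIMD op when vectorized */
        dst[i] = a[i] + b[i];
}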
JohanAnandtech - Friday, June 3, 2005
Melgross: Apple told me that most Xserves in Europe are sold as "do it all" machines: a little web server (Apache), a Sybase database, Samba and so on. They didn't have any client with heavy traffic on the web server, so nobody complains. Sybase/Oracle seem to have done quite a bit of work to get good performance out of Mac OS X, so it will be interesting to see how they managed to solve those problems. But I am sceptical that Oracle/Sybase runs faster on Mac OS X than on Linux.