No more mysteries: Apple's G5 versus x86, Mac OS X versus Linux
by Johan De Gelas on June 3, 2005 7:48 AM EST - Posted in Mac
Micro CPU benchmarks: isolating the FPU
"But you can't compare an Intel PC with an Apple. The software might not be optimised the right way." Indeed, it is clear that Final Cut Pro, owned by Apple, or Adobe Premiere, which is far better optimised for the Intel PC, are not very good choices for comparing the G5 with the x86 world. So, before we start with application benchmarks, we ran a few micro benchmarks, compiled on all platforms with the same gcc 3.3.3 compiler.
The first one is flops. Flops, programmed by Al Aburto, is a very floating-point-intensive benchmark. Analyses show that this benchmark contains:
- 70% floating point instructions;
- only 4% branches; and
- only 34% memory instructions.
Al Aburto, about Flops:
"Flops.c is a 'C' program which attempts to estimate your system's floating-point 'MFLOPS' rating for the FADD, FSUB, FMUL, and FDIV operations based on specific 'instruction mixes' (see table below). The program provides an estimate of PEAK MFLOPS performance by making maximal use of register variables with minimal interaction with main memory. The execution loops are all small so that they will fit in any cache."
Flops shows the maximum double-precision power that the core has by making sure that the program fits in the L1 cache. Flops consists of 8 tests, and each test has a different, but well-known, instruction mix. The most frequently used instructions are FADD (addition), FSUB (subtraction) and FMUL (multiplication). We used "gcc -O2 flops.c -o flops" to compile flops on each platform.
| MODULE | FADD | FSUB | FMUL | FDIV | PowerMac G5 2.5 GHz | PowerMac G5 2.7 GHz | Xeon Irwindale 3.6 GHz | Xeon Irwindale 3.6 GHz w/o SSE-2* | Xeon Gallatin 3.06 GHz | Opteron 250 2.4 GHz |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 50% | 0% | 43% | 7% | 1026 | 1104 | 677 | 1103 | 1033 | 1404 |
| 2 | 43% | 29% | 14% | 14% | 618 | 665 | 328 | 528 | 442 | 843 |
| 3 | 35% | 12% | 53% | 0% | 2677 | 2890 | 532 | 1088 | 802 | 1955 |
| 4 | 47% | 0% | 53% | 0% | 486 | 522 | 557 | 777 | 988 | 1856 |
| 5 | 45% | 0% | 52% | 3% | 628 | 675 | 470 | 913 | 995 | 1831 |
| 6 | 45% | 0% | 55% | 0% | 851 | 915 | 552 | 904 | 1030 | 1922 |
| 7 | 25% | 25% | 25% | 25% | 264 | 284 | 358 | 315 | 289 | 562 |
| 8 | 43% | 0% | 57% | 0% | 860 | 925 | 1031 | 910 | 1062 | 1989 |
| Average | | | | | 926 | 998 | 563 | 817 | 830 | 1545 |
The results are quite interesting. First of all, the gcc compiler isn't very good at vectorising. By vectorising, we mean generating SIMD (SSE, Altivec) code. From the numbers, it seems that gcc was only capable of using Altivec in one test, the third one. In this test, the G5 shows real superiority over the Opteron and especially the Xeons.
The really funny thing is that the new Xeon Irwindale performed better when we disabled SSE-2 support and used the "-mfpmath=387" option. It seems that gcc makes a real mess when it tries to optimise for SSE-2 instructions. One can, of course, use the Intel compiler, which produces code that is up to twice as fast, but the Intel compiler isn't widely used in the real world.
Also interesting is that the 3.06 GHz Xeon performs better than the 3.6 GHz Xeon Irwindale. Running completely out of the L1 cache, Flops is hurt badly by the high 4-cycle latency of Irwindale's L1 cache. On the Gallatin Xeon, which is similar to Northwood, Flops benefits from the very fast 2-cycle latency.
The conclusion is that the Opteron has, by far, the best FPU, especially when more complex instructions such as FDIV (division) are used. When the code uses something close to the ideal mix of 50% FADD/FSUB and 50% FMUL and is optimised for Altivec, the G5 can flex its muscles. The G5's normal FPU is rather mediocre, though.
Micro CPU benchmarks: isolating the Branch Predictor
To test branch prediction, we used the benchmark "Queens". Queens is the very well-known problem of placing n chess queens on an n x n board such that no queen can attack another. The exhaustive search strategy for finding such a placement is the algorithm behind this benchmark, and it contains some very branch-intensive code. Queens has about:
- 23% branches
- 45% memory instructions
- no FP operations
| | RUN TIME (sec) |
|---|---|
| PowerMac G5 2.5 GHz | 134.110 |
| Xeon Irwindale 3.6 GHz | 125.285 |
| Opteron 250 2.4 GHz | 103.159 |
At 2.7 GHz, the G5 was just as fast as the Xeon. It is pretty clear that, despite the enormous 31-stage pipeline, the excellent branch predictor of the Xeon's Pentium 4 core is capable of keeping the damage to a minimum. The Opteron's branch predictor seems to be at the level of the G5's: the G5's branch misprediction penalty is about 30% higher, and the Opteron does about 30% better.
The G5 as workstation processor
It is well known that the G5 is a decent workstation CPU. The G5 is probably the fastest CPU when it comes to Adobe After Effects and Final Cut Pro, as this kind of software was made to run on a PowerMac. Unfortunately, we didn't have access to that kind of software. First, we test with Povray, which is not optimised for any architecture and is single-threaded.
| | Povray (seconds) |
|---|---|
| Dual Opteron 250 (2.4 GHz) | 804 |
| Dual Xeon DP 3.6 GHz | 1169 |
| Dual G5 2.5 GHz PowerMac | 1125 |
| Dual G5 2.7 GHz PowerMac | 1049 |
Povray runs mostly out of the L1 and L2 caches and mirrors almost perfectly what we witnessed in our Flops benchmarks. As long as there are few or no Altivec or SSE-2 optimisations present, the Opteron is by far the fastest CPU. The G5's FPU is still quite a bit better than the Xeon's.
The next two tests are the only 32-bit ones, done in Windows XP on the x86 machines.
| | Lightwave 8.0 Raytrace | Lightwave 8.0 Tracer Radiosity |
|---|---|---|
| Dual Opteron 250 (2.4 GHz) | 47 | 204 |
| Dual Xeon DP 3.6 GHz | 47.3 | 180 |
| Dual G5 2.5 GHz PowerMac | 46.5 | 254 |
The G5 is capable of competing in only one of the two tests. Lightwave's rendering engine has been meticulously optimised for SSE-2, and the "Netburst" architecture prevails here. We have no idea how much attention the software engineers gave to Altivec, but it doesn't seem to be much. This might, of course, be a result of Apple's small market share.
| | Cinema 4D Cinebench |
|---|---|
| Dual Opteron 250 (2.4 GHz) | 630 |
| Dual Xeon DP 3.6 GHz | 682 |
| Dual G5 2.5 GHz PowerMac | 638 |
| Dual G5 2.7 GHz PowerMac | 682 |
Maxon has invested some time and effort to get the Cinema 4D engine running well on the G5, and it shows. The G5 competes with the best x86 CPUs.
Comments
Joepublic2 - Monday, June 6, 2005 - link
Wow, pixelglow, that's an awesome way to advertise your product. No marketing BS, just numbers!

pixelglow - Sunday, June 5, 2005 - link
I've done a direct comparison of G5 vs. Pentium 4 here. The benchmark is cache-bound, minimal branching, maximal floating point and designed to minimize use of the underlying operating system. It is also single-threaded, so there's no significant advantage to dual procs. More importantly, it uses Altivec on the G5 and SSE/SSE2 on the Pentium 4, and also compares against different compilers, including the autovectorizing Intel ICC.

http://www.pixelglow.com/stories/macstl-intel-auto...
http://www.pixelglow.com/stories/pentium-vs-g5/
Let the results speak for themselves.
webflits - Sunday, June 5, 2005 - link
"From the numbers, it seems like gcc was only capable of using Altivec in one test,"

Nonsense!
The Altivec SIMD unit only supports single-precision (32-bit) floating point, and the benchmark uses double-precision floating point.
webflits - Sunday, June 5, 2005 - link
Here are the results on a dual 2.0 GHz G5 running 10.4.1, using the stock Apple gcc 4.0 compiler.

[Session started at 2005-06-05 22:47:52 +0200.]
FLOPS C Program (Double Precision), V2.0 18 Dec 1992
Module Error RunTime MFLOPS
(usec)
1 4.0146e-13 0.0163 859.4752
2 -1.4166e-13 0.0156 450.0935
3 4.7184e-14 0.0075 2264.2656
4 -1.2546e-13 0.0130 1152.8620
5 -1.3800e-13 0.0276 1051.5730
6 3.2374e-13 0.0180 1609.4871
7 -8.4583e-11 0.0296 405.4409
8 3.4855e-13 0.0200 1498.4641
Iterations = 512000000
NullTime (usec) = 0.0015
MFLOPS(1) = 609.8307
MFLOPS(2) = 756.9962
MFLOPS(3) = 1105.8774
MFLOPS(4) = 1554.0224
frfr - Sunday, June 5, 2005 - link
If you test a database, you have to disable the write cache on the disk on almost any OS, unless you don't care about your data. I've read that OS X is an exception because it allows the database software control over it, and that MySQL indeed does use this. This would invalidate all your MySQL results except for OS X. Besides, all serious databases run on controllers with battery-backed write cache (and with the write cache on the disks disabled).
nicksay - Sunday, June 5, 2005 - link
It is pretty clear that there are a lot of people who want Linux PPC benchmarks. I agree. I also think that if this is to be a "where should I position the G5/Mac OS X combination compared to x86/Linux/Windows" article, you should at least use the default OS X compiler. I got flops.c from http://home.iae.nl/users/mhx/flops.c to do my own test. I have a stock 10.4.1 install on a single 1.6 GHz G5.

In the terminal, I ran:
gcc -DUNIX -fast flops.c -o flops
My results:
FLOPS C Program (Double Precision), V2.0 18 Dec 1992
Module Error RunTime MFLOPS
(usec)
1 4.0146e-13 0.0228 614.4905
2 -1.4166e-13 0.0124 565.3013
3 4.7184e-14 0.0087 1952.5703
4 -1.2546e-13 0.0135 1109.5877
5 -1.3800e-13 0.0383 757.4925
6 3.2374e-13 0.0220 1320.3769
7 -8.4583e-11 0.0393 305.1391
8 3.4855e-13 0.0238 1258.5012
Iterations = 512000000
NullTime (usec) = 0.0002
MFLOPS(1) = 736.3316
MFLOPS(2) = 578.9129
MFLOPS(3) = 866.8806
MFLOPS(4) = 1337.7177
A quick add-n-divide gives my system an average result of 985.43243.
985. On a single 1.6 G5.
So, the oldest, slowest PowerMac G5 ever made almost matches a top-of-the-line dual 2.7 G5 system?
To quote, "Something is rotten in the state of Denmark." Or should I say the state of the benchmark?
Eug - Saturday, June 4, 2005 - link
BTW, about the link I posted above:

http://lists.apple.com/archives/darwin-dev/2005/Fe...
The guy who wrote that is the creator of the BeOS file system (and who now works for Apple).
It will be interesting to see if this is truly part of the cause of the performance issues.
Also, there is this related thread from a few weeks back on Slashdot:
http://hardware.slashdot.org/article.pl?sid=05/05/...
profchaos - Saturday, June 4, 2005 - link
The statement about Linux kernel modules is incorrect. It is a popular misconception that kernel modules make the Linux kernel something other than purely monolithic. The module loader links module code in kernelspace, not in userspace, the advantage being dynamic control of the kernel memory footprint. Although some previously kernelspace subsystems, such as devfs, have recently been rewritten as userspace daemons, such as udev, the Linux kernel is for the most part a fully monolithic design.

The theories that fueled the monolithic vs. microkernel flame wars of the mid-90s were nullified by the rapid ramping of single-thread performance relative to memory subsystems. From the perspective of the CPU, it takes years for a context switch to occur, since modifying kernel data structures in main memory is so slow relative to anything else. Userspace context switching is based on IPC in microkernel designs, and may require several context switches in practice.

As you can see from the results, Linux 2.6 wipes the floor with Darwin, just the same as it does with several of the BSDs (especially OpenBSD and FreeBSD 4.x) and its older cousin, Linux 2.4. It's also anyone's guess whether the Linux 2.6 systems were using pthreads (from NPTL) or linuxthreads in glibc. It takes a heavyweight UNIX server system, which today means IBM AIX on POWER, HP-UX on Itanium, or to a lesser degree Solaris on SPARC, to best Linux 2.6 under most server workloads.

Eug - Saturday, June 4, 2005 - link
Responses/Musings from an Apple developer.

http://ridiculousfish.com/blog/?p=17
http://lists.apple.com/archives/darwin-dev/2005/Fe...
Also:
They claim that making a new thread is called "forking". No, it’s not. Calling fork() is forking, and fork() makes processes, not threads.
They claim that Mac OS X is slower at making threads by benchmarking fork() and exec(). I don’t follow this train of thought at all. Making a new process is substantially different from making a new thread, less so on Linux, but very much so on OS X. And, as you can see from their screenshot, there is one mySQL process with 60 threads; neither fork() nor exec() is being called here.
They claim that OS X does not use kernel threads to implement user threads. But of course it does - see for yourself.
/* Create the Mach thread for this thread */
PTHREAD_MACH_CALL(thread_create(mach_task_self(), &kernel_thread), kern_res);
They claim that OS X has to go through "extra layers" and "several threading wrappers" to create a thread. But anyone can see in that source file that a pthread maps pretty directly to a Mach thread, so I’m clueless as to what "extra layers" they’re talking about.
They guess a lot about the important performance factors, but they never actually profile mySQL. Why not?
orenb - Saturday, June 4, 2005 - link
Thank you for a very interesting article. A follow-up on desktop and workstation performance would be very appreciated... :-)

Good job!