"It looks the same on the powerpoint slide, but they are very different". The place is Austin, where an AMD engineer is commenting on the slides describing the Zen and Skylake schematics. In Portland, the Intel representatives could not agree more: "the implementation matters and is completely different". "We have to educate our customers that they can not simply compare AMD's 32 core with our 28 cores".

This morning kicks off a very interesting time in the world of server-grade CPUs. Officially launching today is Intel's latest generation of Xeon processors, based on the "Skylake-SP" architecture. The heart of Intel's new Xeon Scalable Processor family, the "Purley" 100-series processors incorporate all of Intel's latest CPU and network fabric technology, not to mention a very large number of cores.

Meanwhile, a couple of weeks back AMD soft-launched their new EPYC 7000 series processors. Based on the company's Zen architecture and scaled up to server-grade I/O and core counts, EPYC represents an epic achievement for AMD, once again putting them into the running for competitive, high performance server CPUs after nearly half a decade gone. EPYC processors have begun shipping, and just in time for today's Xeon launch, we also have EPYC hardware in the lab to test.

Today's launch is a situation that neither company has been in for quite a while: Intel hasn't faced serious competition in years, and AMD hasn't been able to compete. As a result, both companies are taking the other's actions very seriously.

In fact we could go on for much longer than our quip above in describing the rising tension at the headquarters of AMD and Intel. For the first time in six years (!), a credible alternative to the newly launched Xeon is available. Indeed, the new Xeon "Skylake-SP" launches today, and the yardstick for it is not the previous Xeon (E5 version 4), but rather AMD's spanking new EPYC server CPU. The two CPUs are without a doubt very different: microarchitecture, ISA extensions, memory subsystem, node topology... you name it. The end result is that once again we have the thrilling task of finding out how the processors compare and for which applications their various trade-offs make sense.

The only similarity is that both server packages are huge. Above you can see the two new Xeon packages (one with and one without an Omni-Path connector), both of which are as big as a keycard. And below you can see how a single EPYC CPU fills the hand of AMD's CEO, Dr. Lisa Su.

Both are 64 bit x86 CPUs, but that is where the similarities end. For those of you who have been reading Ian's articles closely, this is no surprise. The consumer-focused Skylake-X is the little brother of the newly launched Xeon "Purley", both of which are cut from the same cloth that is the Skylake-SP family. In a nutshell, the Skylake-SP family introduces the following new features: 

  1. AVX-512 (many different variants of the ISA extension are available; see the sketch after this list)
  2. A 1 MB (instead of a 256 KB) L2 cache with a non-inclusive L3
  3. A mesh topology to connect the cores and L3-cache slices together
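
A quick note on point 1: AVX-512 is a family of subsets rather than a single extension, so software has to probe for each subset it wants to use. Below is a minimal sketch in C of such a probe, assuming GCC or Clang, whose __builtin_cpu_supports() builtin exposes the relevant CPUID feature flags:

    /* Minimal sketch: probe which AVX-512 subsets the running CPU
       advertises. __builtin_cpu_supports() (GCC/Clang) reads CPUID
       and caches the result; feature names follow the GCC docs. */
    #include <stdio.h>

    int main(void)
    {
        /* Skylake-SP implements the F, CD, VL, BW and DQ subsets;
           other AVX-512 parts (e.g. Knights Landing) ship a different mix. */
        printf("avx512f  : %s\n", __builtin_cpu_supports("avx512f")  ? "yes" : "no");
        printf("avx512cd : %s\n", __builtin_cpu_supports("avx512cd") ? "yes" : "no");
        printf("avx512vl : %s\n", __builtin_cpu_supports("avx512vl") ? "yes" : "no");
        printf("avx512bw : %s\n", __builtin_cpu_supports("avx512bw") ? "yes" : "no");
        printf("avx512dq : %s\n", __builtin_cpu_supports("avx512dq") ? "yes" : "no");
        return 0;
    }

On the compile side, GCC's -march=skylake-avx512 target enables those same five subsets at build time.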

Meanwhile AMD's latest EPYC Server CPU was launched a few weeks ago:

On the package are four silicon dies, each one containing the same 8-core silicon we saw in the AMD Ryzen processors. Each silicon die has two core complexes, each of four cores, and supports two memory channels, giving a total maximum of 32 cores and 8 memory channels on an EPYC processor. The dies are connected by AMD’s newest interconnect, the Infinity Fabric...
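
A practical consequence of the four-die package is that the operating system sees each die as its own NUMA node, with Infinity Fabric links between them. As a minimal sketch (assuming Linux with libnuma installed; numa_max_node() and numa_distance() are standard libnuma calls, and numa_map.c is just an illustrative filename), server software can dump the topology it has been handed:

    /* Minimal sketch: print the NUMA topology the OS exposes. On a
       multi-die package like EPYC, each die typically appears as its
       own node, and the distance matrix reflects the extra Infinity
       Fabric hop between dies. Build with: gcc numa_map.c -lnuma */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA API not available on this system\n");
            return 1;
        }
        int nodes = numa_max_node() + 1;
        printf("NUMA nodes reported: %d\n\nDistance matrix (ACPI SLIT):\n", nodes);
        for (int a = 0; a < nodes; a++) {
            for (int b = 0; b < nodes; b++)
                printf("%4d", numa_distance(a, b)); /* 10 = local node */
            printf("\n");
        }
        return 0;
    }

On a single EPYC 7000-series chip this should report four nodes, one per die; a dual-socket system doubles that to eight.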

In the next pages, we will be discussing the impact of these architectural choices on server software. 

219 Comments


  • psychobriggsy - Tuesday, July 11, 2017 - link

    Indeed it is a ridiculous comment, and it puts the earlier crying about the older Ubuntu and GCC into context - just an Intel fanboy.

    In fact Intel's core architecture is older, and GCC has been tweaked a lot for it over the years - a slightly old GCC might not get the best out of Skylake, but it will get a lot. Zen is a new core, and GCC has only recently got optimisations for it.
  • EasyListening - Wednesday, July 12, 2017 - link

    I thought he was joking, but I didn't find it funny. So dumb.... makes me sad.
  • blublub - Tuesday, July 11, 2017 - link

    I kinda miss Infinity Fabric on my Haswell CPU - it seems to only have one die - so why is that missing on Haswell when Ryzen is an exact copy?
  • blublub - Tuesday, July 11, 2017 - link

    You actually sound similar to JuanRGA at SA
  • Kevin G - Wednesday, July 12, 2017 - link

    @CajunArson The cache hierarchy is radically different between these designs, as is the port arrangement for dispatch. Scheduling on Ryzen is split between execution resources, whereas Intel favors a unified approach.
  • bill.rookard - Tuesday, July 11, 2017 - link

    Well, that is something that could be figured out if they (anandtech) had more time with the servers. Remember, they only had a week with the AMD system, and much like many of the games and such, optimizing is a matter of run test, measure, examine results, tweak settings, rinse and repeat. Considering one of the tests took 4 hours to run, having only a week to do this testing means much of the optimization is probably left out.

    They went with a 'generic' set of relative optimizations in the interest of time, and these are the (very interesting) results.
  • CoachAub - Wednesday, July 12, 2017 - link

    Benchmarks just need to be run on as level a playing field as possible. Intel has controlled the market so long, software leans their way. Who was optimizing for Opteron chips in 2016-17? ;)
  • theeldest - Tuesday, July 11, 2017 - link

    The compiler used isn't meant to be the most optimized; instead, it's trying to be representative of actual customer workloads.

    Most customer applications in normal datacenters (not google, aws, azure, etc) are running binaries that are many years behind on optimizations.

    So, yes, they can get better performance. But using those optimizations is not representative of the market they're trying to show numbers for.
  • CajunArson - Tuesday, July 11, 2017 - link

    That might make a tiny bit of sense if most of the benchmarks run were real-world workloads and not C-Ray or POV-Ray.

    The most real-world benchmark in the whole setup was the database benchmark.
  • coder543 - Tuesday, July 11, 2017 - link

    The one benchmark that favors Intel is the "most real-world"? Absolutely, I want AnandTech to do further testing, but your comments do not sound unbiased.
