65 Comments

  • JocPro - Thursday, August 18, 2016 - link

    Great detective work, guys!
  • ImSpartacus - Friday, August 19, 2016 - link

    Yeah, I love this kind of thing. Really fantastic read.
  • ddriver - Friday, August 19, 2016 - link

    Seems fairly overbuilt. I wonder if it was a requirement to run the chip stably, which would add to the platform cost. If so, let's hope the final product won't need it.
  • Kevin G - Friday, August 19, 2016 - link

    It is over-engineered because it's meant for internal AMD validation/testing. Retail products will of course be different.
  • DanNeely - Friday, August 19, 2016 - link

    Interesting, especially the x15 or x16 PCIe slot. I didn't think non-power-of-two configs were allowed.
  • Kevin G - Friday, August 19, 2016 - link

    I've seen references to 12 lanes being part of the spec, but I have not seen such an implementation in the wild.
  • DanNeely - Friday, August 19, 2016 - link

    Thinking out loud a bit: it'd be an expensive Rube Goldberg, but a 16-lane PLX chip with a QoS function that let it favor one of the downstream ports over the other could maybe do it. The GPU would have 16 lanes to the PLX regardless, but 1/16th of the PLX's bandwidth would be reserved for the management link. The management controller in turn would be an x1 device connected to the second 16-lane port of the PLX (which is the expensive overkill part).
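    As a rough sanity check on that split, here's a back-of-envelope sketch assuming PCIe 3.0 rates (8 GT/s per lane with 128b/130b encoding); the figures are illustrative, not from the article:

    ```python
    # Back-of-envelope check of the PLX idea above.
    # Assumption: PCIe 3.0 at 8 GT/s per lane, 128b/130b encoding.
    GB_PER_LANE = 8 * 128 / 130 / 8   # ~0.985 GB/s of payload per lane

    upstream_lanes = 16                # x16 link from the PLX to the host
    total = upstream_lanes * GB_PER_LANE
    mgmt = total / 16                  # 1/16th reserved for the management link
    gpu = total - mgmt

    print(f"upstream: {total:.1f} GB/s, GPU: {gpu:.1f} GB/s, mgmt: {mgmt:.2f} GB/s")
    # upstream: 15.8 GB/s, GPU: 14.8 GB/s, mgmt: 0.98 GB/s
    ```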
  • ZeDestructor - Friday, August 19, 2016 - link

    The two black 8-pin connectors are PCIe 8-pin, not EPS12V. This means that each CPU has just the one EPS12V connection, but a 6-pin and an 8-pin PCIe, which raises a few interesting questions as to what exactly that board needs at least 750W of available power for...
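    For scale, a rough tally of the nominal connector ratings (commonly cited spec values, assumed here purely for illustration) shows how much 12V headroom that connector complement implies:

    ```python
    # Nominal 12V ratings commonly cited for each connector type (assumed).
    RATINGS_W = {
        "EPS12V 8-pin": 336,   # four 12V pairs at ~7A each
        "PCIe 8-pin":   150,
        "PCIe 6-pin":    75,
    }

    # Per the comment: one EPS12V, one PCIe 8-pin and one PCIe 6-pin per CPU.
    per_cpu = ["EPS12V 8-pin", "PCIe 8-pin", "PCIe 6-pin"]
    total = 2 * sum(RATINGS_W[c] for c in per_cpu)
    print(f"both sockets, before counting the 24-pin: {total} W")  # 1122 W
    ```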
  • Ian Cutress - Friday, August 19, 2016 - link

    That's a fair point - I didn't consider PCIe 8-pin because we typically never see those on a motherboard (if a server board needs PCIe power, it's usually a 6-pin, and any 8-pin is EPS12V).
  • ZeDestructor - Friday, August 19, 2016 - link

    The transparent one is EPS12V, the black one is PCIe... looks like more updating is needed :P
  • SunLord - Friday, August 19, 2016 - link

    I wonder if this points to AMD offering an ability similar to Intel's new Knights Landing socket setup. Maybe a Radeon Pro SSG in P2 using NVDIMMs, because why not go full crazy :P
  • looncraz - Friday, August 19, 2016 - link

    I assume they are there to provide all the extra juice anyone could ever want while overclocking those cores. Engineering boards are usually very heavily overbuilt; this seems to be no exception.

    And, of course, they will need all that power if they intend to run a few reference RX 480s on it :p
  • prisonerX - Saturday, August 20, 2016 - link

    It's a server motherboard. People run multiple high-powered processing units from the PCIe bus, and they want ample power via the PCIe ports. Hacks like the PCIe cables you see with ATX PSUs don't cut it in this environment.

    So the answer is the motherboard doesn't need it, the customers do.
  • prisonerX - Sunday, August 21, 2016 - link

    They're probably for PCIe 4.0 support: http://www.tomshardware.com/news/pcie-4.0-power-sp...
  • ZeDestructor - Monday, August 22, 2016 - link

    Yup. Now that that's out, letting the engineers overload (not really) the connectors to push 9A down each pin, pushing the full 1500W through 11 + 2 pins is well within the realm of possibility.
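    A quick sanity check on that figure, assuming a 12V rail, the 9A-per-pin number above, and reading "11 + 2" as 13 current-carrying pins (illustrative only):

    ```python
    # Sanity check: 12V rail, 9A per pin, "11 + 2" read as 13 power pins.
    VOLTS, AMPS_PER_PIN, PINS = 12, 9, 11 + 2
    per_pin = VOLTS * AMPS_PER_PIN
    print(f"{per_pin} W per pin, {per_pin * PINS} W total")
    # 108 W per pin, 1404 W total -- the right ballpark for "the full 1500W"
    ```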

    I really, really hope this makes it to the consumer end - wire mobo once, never touch power wiring until major mobo upgrades happen...
  • prisonerX - Monday, August 22, 2016 - link

    Now if only motherboards would ditch ATX and start taking a single 12V line...
  • ZeDestructor - Monday, August 22, 2016 - link

    Just mention the idea of that and people start raising their pitchforks at Dell/HP/Lenovo for having gone all-12V with Skylake, claiming that it's proprietary, and not good for the consumer, etc.

    For myself, I wish they'd go all-12V everywhere and just move the 3.3V and 5V generation for HDDs to the HDD backplanes (and put backplanes into all cases while they're at it!).
  • Alexvrb - Tuesday, August 23, 2016 - link

    When it's done by OEMs and they don't all agree on a standard, it IS proprietary! People aren't necessarily against going all-12V; they just want it to be an open standard. Anyway, even in the case of the OEMs' recent "all 12V" designs, that's just for the PSU - the mainboards supply other voltage outputs to devices as needed. You can't change THAT unless you want to break compatibility with everything you plug into the board.

    Supposedly it's more efficient this way or something. I think that depends on the PSU, though!
  • evolucian911 - Monday, August 22, 2016 - link

    PCIe 4.0 300-500W power delivery?
  • ZeDestructor - Monday, August 22, 2016 - link

    That info wasn't out when I was musing :)
  • Arnulf - Friday, August 19, 2016 - link

    ‘ALL SATA CONNS CONNECTED TO P1’ which indicates the first processor has direct control.

    P1 is the **2nd** CPU.

    P0 is the first CPU.
  • Ian Cutress - Friday, August 19, 2016 - link

    First as in how the board looks the way it is oriented.
    Compared to the zeroth, obviously! :D

    Joking aside, you are right. Updated.
  • Hachi0Hachi - Friday, August 19, 2016 - link

    Interesting article and interesting motherboards. Hope that AMD hits a good stride with Zen. I always get excited when Intel pumps out a new Tick or Tock or whatever, yet am usually left wanting after I see all the benchmarks. My 5-year-old 2600K is still somehow relevant. They need some real competition again. Good work Ian.
  • mapesdhs - Sunday, August 21, 2016 - link

    Just curious, what speed does your 2600K run at? I'm still building 5GHz 2700K systems; plenty of value there even today (at that speed it gives 880 in CB R15, the same score as a stock 6700K), and it's crazy easy to OC SB/SB-E without the need for whacko cooling. That, and 3930K setups, which are even cheaper on the CPU side (bagged one recently for 95 UKP; another sold last week on eBay for 72 UKP), though it's harder to find X79 boards this year. Also won a 4960X for 205, which was good, and a P9X79-E WS for ~200 to go with it.

    Just so unimpressed with newer models; restricted PCIe, cooling issues, crazy pricing, etc.
  • Michael Bay - Friday, August 19, 2016 - link

    Does that mean desktop Zen is coming Q3 2017?
  • Trixanity - Friday, August 19, 2016 - link

    No, it means desktop Zen is launching in Q4 2016 with general availability in Q1 2017.
  • slickr - Friday, August 19, 2016 - link

    Desktop parts will come first. If you actually read the first article about Zen, you'd know this already.

    AMD will introduce Desktop first, again with products shipping in 2017, so I'm expecting Q1 2017.
  • Iynx - Friday, August 19, 2016 - link

    Promising for enthusiasts when the dev motherboard has a "volt adjust for overclocking" label on it. The extra power connectors all over the server boards seem like fairly belt-and-braces over-engineering; one appears to be connected to nothing more than two fan headers, but that said, it's hard to tell on a multi-layer board.
    What I'm most interested in is what the "field" and "gearbox" add-in cards are, if they require both a PCIe x16 slot and an extra sideband management header.
  • jjj - Friday, August 19, 2016 - link

    8 memory channels would be interesting, as Intel has just 4 on their 24-core SKU, for 102GB/s.
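    As a point of reference, peak DDR4 bandwidth is just channels × transfer rate × 8 bytes; the quoted 102GB/s lines up with four channels of DDR4-3200 (an assumption, used here only to illustrate the scaling):

    ```python
    # Peak DDR4 bandwidth = channels x transfer rate (MT/s) x 8 bytes/transfer.
    def peak_bw_gb_s(channels: int, mt_s: int) -> float:
        return channels * mt_s * 8 / 1000

    print(peak_bw_gb_s(4, 3200))   # 102.4 GB/s -- matches the figure quoted above
    print(peak_bw_gb_s(8, 2400))   # 153.6 GB/s -- 8 channels of DDR4-2400
    ```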

    In the press release about the demo, AMD states "With dedicated PCIe® lanes for cutting-edge USB, graphics, data and other I/O, the AMD AM4 platform will not steal lanes from other devices and components."

    We'll see how many memory channels and PCIe lanes the 8-core Zen offers. I expect only 2 memory channels, as that kinda makes sense in consumer and saves power, area and system cost. For PCIe lanes, obviously we want lots, but they have to mind the cost too. Zen has to cover the $150-350 market properly; they can't price it high where there are no volumes - $150 or even less for 4 cores, of course. Hopefully the pricing doesn't disappoint. I guess they could add some Special Edition SKU at higher clocks at $499, and even bring a 16-core server SKU into consumer above that.

    Any clue about power consumption for Broadwell-E at 3GHz in Blender? With something "normal", not something very tight. Just 2 memory channels and less cache could lead to interesting power numbers for an 8-core Zen. Assuming they don't hit a hard wall at 3GHz and that the perf is there.
  • jjj - Friday, August 19, 2016 - link

    Correction: Broadwell-E at 3GHz max power - not in Blender, so in Prime95.
  • Kevin G - Friday, August 19, 2016 - link

    With 8 channels of DDR4, bandwidth would scale based upon the number of DIMMs in a system. Fully populated with fast memory, there would be enough for a low-end/midrange GPU. It wouldn't surprise me if AMD leveraged this socket for an HPC part based around their GPU architecture.
  • smilingcrow - Friday, August 19, 2016 - link

    Haswell:
    E5-2628 V3 85W 2.5 – 3GHz, Max Turbo @8 Core = 2.8GHz ~$700 (OEM only)
    E5-2667 V3 135W 3.2 – 3.6GHz, Max Turbo @8 Core = 3.4GHz, $2,057
    Broadwell:
    E5-2620 V4 85W 2.1 – 3GHz, Max Turbo @8 Core = 2.3GHz $417
    E5-2667 V4 135W 3.2 – 3.6GHz, Max Turbo @8 Core = 3.5GHz, $2,057

    Note: the Broadwell 8-core is underwhelming because the focus is more on 10+ cores, where they impress much more.
    So a Zen with a max boost of 3GHz on 8 cores at 95W and seemingly decent IPC would be an amazing comeback, if the price is right and turbo for 2 or 4 cores is around 3.5GHz or more.
    Those expecting 3.5GHz with all 16 threads under full load are being optimistic, even at 125W.
    Looking at pricing versus a Xeon, a Zen 8-core @ 3GHz is probably close to an E5-2630 V4 85W (10 cores @ 2.4GHz max), which is $667.
    Of course the AMD motherboards should be a lot cheaper than X99 boards; desktop boards take the Xeon chips as well as the Broadwell-E i7s.
  • jjj - Sunday, August 21, 2016 - link

    On pricing: Broadwell-E is 246.3 mm² I believe, while Skylake GT2 4C is half that, with some 40% of the die being the GPU. Broadwell-E's die is aimed at servers and ends up much bigger than it could be if aimed at consumers.
    Size-wise, Zen 8-core should be closer to Skylake, if AMD focused on density and of course adjusted for process. We don't know how big the core is, and if the southbridge is integrated that adds some area, but AMD should be able to have both reasonable pricing and good margins. Pricing it high in consumer, when they can easily do better, doesn't serve AMD's interests or ours.
    Normality would be many cores without a GPU for folks that use a discrete GPU, and fewer cores plus a GPU in APUs for people that don't need discrete. With Zen, AMD can offer that normality. Intel doesn't have a many-core die aimed at consumers, and if Zen performs they'll need one, since Broadwell-E at sane prices would be uncomfortable for their financials.
    If Zen performs, AMD can harm both Skylake and Broadwell-E in systems with a discrete GPU: hit Skylake by offering more cores and Broadwell-E by offering much, much better prices.

    I even hope (but don't expect) that they do a notebook 8-core SKU, paired with a discrete GPU. There is no reason not to have more than 4 cores in notebooks; if they can find the right balance between perf and power, why not? In notebook workstations Intel is pushing 4-core Xeons, and high-end gaming is a growing notebook segment, so why not address those segments this way. The ASPs would be nice for AMD, and if they reach a reasonable perf/power balance it would be a major marketing asset.
  • Gadgety - Friday, August 19, 2016 - link

    So, stuff seems to be happening. Something like this: four of the Vega HBM2 versions of the SSG Pro Duo with a 1TB M.2 SSD each, for a $50,000 desktop. Will it be 4K VR capable? Yes, but too heavy to carry in your backpack, or "I stumbled and fell backwards onto my PC during VR gaming."
  • MrSpadge - Friday, August 19, 2016 - link

    That's the price you have to pay to haul the big guns!
  • R3MF - Friday, August 19, 2016 - link

    32c/64t

    Does that mean AMD is planning an MCM for server SKUs using four Zen SoCs, each of which is composed of two four-core modules? Sounds complicated!
  • R3MF - Friday, August 19, 2016 - link

    That would suggest dual-channel memory access for each of the four Zen SoCs on the MCM, hence eight memory channels in aggregate.
  • milli - Friday, August 19, 2016 - link

    I think we can safely assume that it will be an MCM, especially considering that new fast interconnect developed for Zen.
  • BMNify - Monday, August 29, 2016 - link

    It's most likely they just took the ARM CCN network interconnect rather than doing a new custom interconnect. Remember, they delayed the drop-in Cortex SoC, they didn't scrap it...
  • BMNify - Monday, August 29, 2016 - link

    If I recall, the generic ARM CCN fabric IP does 256 megabytes/s (1Gb/s) for up to 4x4 clusters today, and is capable of 4 cores x 8/16 with the newer designs.
  • BMNify - Monday, August 29, 2016 - link

    Or rather, up to 256 GB/s (1 Terabit/s) was more likely; I read the spec long ago and am on Android, so I won't search now...
  • slickr - Friday, August 19, 2016 - link

    They've said that the CPU is multipurpose and scales all the way from low end to high end; they'll have everything from notebook-level CPUs to server CPUs from it. The way the CPU design was described in the other thread suggests 2x4 cores, able to produce 16 threads, for the desktop high-end version.

    So for the server it's likely to be the same.
  • Kevin G - Friday, August 19, 2016 - link

    You are forgetting that there are two sockets here. Thus each socket would house two chips, with each chip having 8 Zen cores. This is what AMD did with the Opteron 6200/6300 series to reach 32 cores.
  • JMC2000 - Friday, August 19, 2016 - link

    From leaked/rumored die shots, Summit Ridge uses two 4-core + cache modules, Naples will use 4 of them.
  • BMNify - Monday, August 29, 2016 - link

    No. It's 1 chip per socket, 4 cores per cluster and currently up to 4 clusters, probably with a 256 GByte/s ARM CCN interconnect fabric.
  • Cooe - Thursday, May 6, 2021 - link

    Lol, you were SO wrong about all the "ARM interconnect" BS. xD
  • extide - Saturday, August 20, 2016 - link

    No, I am pretty sure there are 2 16-core dies, with 4 memory channels to each one.
  • extide - Saturday, August 20, 2016 - link

    Which is very similar to what they have done in the past - they put two 8-core Bulldozer chips in an MCM for 16 cores, and they even put two of the old 6-core chips together for 12 cores. I think they did it with the original 4-core chips too.
  • BMNify - Monday, August 29, 2016 - link

    As they delayed the drop-in Cortex SoC, they took the existing, tried-and-tested ARM CCN IP as the most cost-effective option - they don't have cash to throw away, and it's a data-throughput interconnect better than the antiquated and power-hungry x86 interconnects.
  • jhh - Friday, August 19, 2016 - link

    One question is whether they have included many of the features Intel has added for improved virtualization support, such as direct DMA into cache (DPDK support), improved end-of-interrupt processing in VMs without significant work required in the host OS, and other similar changes.
  • slickr - Friday, August 19, 2016 - link

    We don't know yet, I suspect we'll find out in late 2016, early 2017.
  • Einy0 - Friday, August 19, 2016 - link

    I was wondering the same thing. This thing could be a beast for VMs. I certainly hope they have worked on their virtualization extensions; virtualization would seemingly be the largest application to take advantage of this kind of core/thread density.
  • nunya112 - Friday, August 19, 2016 - link

    In regards to chipset support: it's all on the CPU, so no SB will be needed unless it interfaces with SATA or external USB controllers - hence why they are having trouble with the track lengths on the motherboards.
  • SunLord - Friday, August 19, 2016 - link

    Has anyone seen any rumors for how many pins the new server socket has?
  • The_Assimilator - Friday, August 19, 2016 - link

    Interesting layout of the IO panel on that prototype board - in particular, the fact that the audio outputs seem to be located closest to the CPU socket, whereas on current boards the trend is to put them at the other end for isolation purposes. I wonder if there's some special sauce there, or if it's just done this way for prototyping purposes.
  • ThortonBe - Friday, August 19, 2016 - link

    The article says, "with access to Elpida memory", but didn't Micron buy Elpida in 2013?
  • mdw9604 - Friday, August 19, 2016 - link

    Until we get some clarity around what their performance/watt is with Zen - which has been AMD/IBM's Achilles heel and their problem with getting any traction in the server/HPC market for the past 10 years - this whole Zen buildup is a waste of everybody's time.
  • Michael Bay - Saturday, August 20, 2016 - link

    Performance was much more important as a factor; then again, those two things are closely connected.
    TDP-wise, look at how the 480 is doing. Zen will be better due to the shrink, but not amazingly better.
  • mdw9604 - Saturday, August 20, 2016 - link

    In data centers where core counts can hit the millions, cycles/watt is a big deal. Power costs are massive and will make or break a chip maker's chances. If they can't get close to current Xeons, they will be marginalized into niche areas. I am pulling for them. We need a break in Intel's monopoly on the high-end x86 market.
  • Shinzon - Monday, August 22, 2016 - link

    There is so much misinformation and speculation in this article.

    So let's clear things up a bit.

    It's 8 channels for the memory.

    Each CPU has 128 lanes of PCIe 3.0 (NOT 4.0), so for a 2-socket system you get a total of 256 PCIe lanes, and 128 of these lanes are used for the interconnect between the 2 sockets (yes, this is the interconnect).
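    If that layout is right, the raw socket-to-socket bandwidth is easy to estimate (a sketch, assuming standard PCIe 3.0 rates of 8 GT/s per lane with 128b/130b encoding; none of these figures are confirmed):

    ```python
    # Rough socket-to-socket bandwidth if 128 PCIe 3.0 lanes form the link.
    GB_PER_LANE = 8 * 128 / 130 / 8   # ~0.985 GB/s per lane, per direction
    lanes = 128
    print(f"~{lanes * GB_PER_LANE:.0f} GB/s each way between sockets")  # ~126 GB/s
    ```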
  • BMNify - Monday, August 29, 2016 - link

    A CPU on a PCI (card) interconnect is so 1990s :-)

    Enterprise needs 256 GByte/s+ interconnects for many-cluster chips like these, in a world where UHD1 and soon (before 2020) UHD2/8K video is multicast-streamed to Japanese consumers.
  • Ian Cutress - Monday, December 12, 2016 - link

    Source? If it's not a source worth trusting, then it's just conjecture. Plenty of sites post leaks and fail to post retractions - everything we've said here can be determined from known, confirmed facts from announcements/sources and reverse-engineering what you see here. So please, cc me in an email with a valid source.
  • unimatrix725 - Sunday, August 28, 2016 - link

    I read on another PC tech site that it's called GMI. The SB is apparently integrated. I have commented previously that there is little information, even on Wikipedia, about Global Memory Interface or Interconnect. I also recall HyperTransport is gone now - I think replaced with GMI?
  • BMNify - Monday, August 29, 2016 - link

    Except of course AMD use 4 cores per cluster and have a 4-cluster limit per chip, so it's 16 per chip - 32 cores and 64 threads for the dual-socket board.
