
  • Dr. Swag - Tuesday, April 11, 2017 - link

    Is it just me that sees "[table]" in the test bed and setup part? :P
  • Ian Cutress - Tuesday, April 11, 2017 - link

    As always, still writing as the embargo passes :D Always down to the wire, then over the wire, and a hop skip and a jump into a fast typing frenzy of fastidious fire.
  • Dr. Swag - Tuesday, April 11, 2017 - link

    No worries :). I can only imagine how much testing you've been doing ever since the Ryzen 7 launch :D

    You guys are still my favorite review site. Keep up the good work!
  • AndrewJacksonZA - Tuesday, April 11, 2017 - link

    Thanks Doc! I pretty much always prefer your in-depth analysis to other authors on other review sites. I really enjoy the way you do and word things - I put it down to a combination of your academic and pro-OC background. :-)

    Cheers,
    Andrew
  • ianmills - Tuesday, April 11, 2017 - link

Ian, it would be great if there was some easy way to see only the parts of the review that have been updated. Perhaps a diff can be done somehow ;)
  • EasyListening - Tuesday, April 11, 2017 - link

    Hey you rhymed!
  • leexgx - Thursday, April 13, 2017 - link

ECC compatibility (not support, at the moment): you can use ECC RAM, but the ECC functionality is disabled (don't know when or if AMD will enable it on non-workstation CPUs)
  • MajGenRelativity - Tuesday, April 11, 2017 - link

    You mention a Ryzen 1400X in the conclusion, but I only see a 1500X and a 1400 at the beginning of the review. I do see a 1500X in the chart, so maybe you mean that?

    Paragraph just before "On the Benchmark Results"
  • Ian Cutress - Tuesday, April 11, 2017 - link

    I did indeed. :) Updated.
  • MajGenRelativity - Tuesday, April 11, 2017 - link

    I figured as much, unless you had access to a secret CPU I'd never seen before :P
  • 802Shaun - Tuesday, April 11, 2017 - link

    Just a friendly correction: It should be "200% more threads" instead of 300% more. Thanks for the article!
  • Ian Cutress - Tuesday, April 11, 2017 - link

    Updated :)
  • ChristopherF - Tuesday, April 11, 2017 - link

    Wouldn't 300% be correct since 12 is 3x 4?
  • mickulty - Tuesday, April 11, 2017 - link

    It has 300% of the threads, but has 200% *more* threads.
  • buri - Tuesday, April 11, 2017 - link

Your logic is correct if you say "3 times", but with percentages you always refer to the amount *more*. In this case it is 8 threads more, which is 200% of the original 4 threads. 6 threads more would have been 150% more, and so on.
  • MobiusPizza - Tuesday, April 11, 2017 - link

    It's either 200% MORE threads or 300% the original thread count.
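For anyone still unsure, the two phrasings the thread is juggling can be pinned down with a trivial sketch (hypothetical helper names, just restating the arithmetic above):

```python
def percent_of(new, old):
    # What percentage of the old value the new value represents
    return new / old * 100

def percent_more(new, old):
    # Percentage increase from the old value to the new value
    return (new - old) / old * 100

# 12 threads vs. the i5's 4 threads: 300% OF the original, but 200% MORE
print(percent_of(12, 4))    # 300.0
print(percent_more(12, 4))  # 200.0
# And 6 threads more (10 vs. 4) would indeed be 150% more
print(percent_more(10, 4))  # 150.0
```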
  • ddriver - Tuesday, April 11, 2017 - link

    Nice to see AMD not only back in the game, being competitive in trivial computing workloads, but also offering TWICE the value where PERFORMANCE really MATTERS - rendering, encoding, compiling and other time intensive workloads.
  • MajGenRelativity - Tuesday, April 11, 2017 - link

    Indeed it is. I recently built a PC with a 1700X, and my friend said it was a massive improvement over his old FX 6350
  • SquarePeg - Tuesday, April 11, 2017 - link

I'm wondering how low AMD's pricing will go on R3 chips. If they can bring a 4C/4T low-end R3 to market for $99 that can overclock decently, then the FX 6300/6350 will have a true successor.
  • MajGenRelativity - Tuesday, April 11, 2017 - link

    That's also something I'm interested in. Right now it reaches down to $170, which is still in the upper mid-range, but if it can go even further down, I'd be pretty happy with it.
  • hojnikb - Tuesday, April 11, 2017 - link

Any word on an R5 1400 review? Would be interesting to see how 1/2 the L3 cache hits performance.
  • Ian Cutress - Tuesday, April 11, 2017 - link

    We've been promised a sample to arrive soon. I'm off some of next week, so after then :)
  • Omega215D - Tuesday, April 11, 2017 - link

    Just steal it from LinusTechTips... You know he has that stuff just lyin' around after fumbling with it =)
  • msroadkill612 - Thursday, April 13, 2017 - link

No, but we have evidence of the effect of doubled L3 on Ryzen cores, if that helps.

Current 4-core Ryzens have double the L3 per core that 8-core Ryzens do.
  • Infy2 - Tuesday, April 11, 2017 - link

    Shame 7700K is not among the results so we can't compare Ryzen 4C/8T to the best of Intel's 4C/8T.
  • ddriver - Tuesday, April 11, 2017 - link

    Slightly better in games, marginally behind in intensive computations.
  • Ian Cutress - Tuesday, April 11, 2017 - link

    For CPU tests, it's in our Benchmark database: www.anandtech.com/bench

    I still need to run our gaming tests on a whole raft of CPUs, something to do the rest of this month!
  • 0ldman79 - Tuesday, April 11, 2017 - link

Any chance the formatting could be corrected so the FX line can be directly compared to the Ryzen line?

Some of the benchmarks overlap but the formatting is different, so a direct comparison isn't currently possible in a single window. We have to open two windows to compare.
  • Ian Cutress - Friday, April 14, 2017 - link

    All the old data was on Windows 7, the new data is Windows 10. I've made it so in the same window the scores are comparable on the same OS.
  • milli - Tuesday, April 11, 2017 - link

    Something really weird is going on with the Rise of the Tomb Raider and Ryzen. There's a huge drop in performance in DX12 for Ryzen+nVidia.
    I mean, how on earth can the A10-7890K be faster than the 1500X? That game or nVidia's drivers need updating.
  • lefty2 - Tuesday, April 11, 2017 - link

Yeah, and with the RX 480 the i5 7400 scores better than the i5 7600 (by a huge margin)! That makes no sense.
  • sharrken - Wednesday, April 12, 2017 - link

    AdoredTV did a very interesting video about exactly this issue, called "Ryzen of the Tomb Raider". In pretty extensive testing they show that something is definitely wrong with Nvidia cards in DX12 on Tomb Raider.

    https://www.youtube.com/watch?v=0tfTZjugDeg

    On a Ryzen 1800X system, crossfire RX 480's beat out an overclocked Titan X, 90fps on the 480's and only 80fps on a Titan X - which is just ridiculously wrong when you look at the relative GPU power.

    Some other people have run more tests, and a similar thing is happening in The Division, so it seems highly likely that Nvidia has some strange issues somewhere along the line with DX12.

    https://www.reddit.com/r/Amd/comments/62n813/inspi...
  • milli - Wednesday, April 12, 2017 - link

    It's also happening in Battlefield 1, Deus Ex: MD & Total War: W.

    https://www.computerbase.de/2017-03/amd-ryzen-1800...

    Are nVidia drivers not detecting Ryzen CPU's correctly or is it foul play?
  • mdw9604 - Thursday, April 13, 2017 - link

Poor AVX implementation with AMD and the driver.
  • milli - Thursday, April 13, 2017 - link

    What has AVX to do with nVidia's DX12 drivers???
  • bug77 - Tuesday, April 11, 2017 - link

    Really great job not throwing intel power consumption in there for comparison. /s
  • Ian Cutress - Tuesday, April 11, 2017 - link

    Mainly because that part of the discussion was purely to do with CCX arrangement and core loading.

    But sure, because you asked so nicely. /s They've been added.
  • bug77 - Tuesday, April 11, 2017 - link

    Thanks.
  • Phiro69 - Tuesday, April 11, 2017 - link

    Could you comment on your Chromium Compile benchmark a bit; I'd like to use it as part of a pitch on why our compile farm needs replacing (e.g. "look what a $249 cpu can do").
    What OS did you build under, I'm guessing Windows 10 from your earlier statements in the full article?
    Did you follow these directions for the most part? https://chromium.googlesource.com/chromium/src/+/m...
    If so (and you used Windows 10), then you used Visual Studio? Which version and which license of VS?

    Thanks! Great review!
  • Ian Cutress - Tuesday, April 11, 2017 - link

Win 10 x64 Pro v1607, Build 14393.953. VS Community 2015.3 with the Win10 SDK. I basically followed the instructions in that link. :)
  • Phiro69 - Tuesday, April 11, 2017 - link

    Thank you Ian!
Maybe at some point, as part of your benchmark description, you could link to a page showing basic benchmark setup instructions (exactly the level of information you provided above, not step-by-step hand-holding). I know I wonder whether I've configured my builds correctly when I put together new systems; I buy the parts based on benchmarks but never really validate that they perform at that level / that I have things set correctly.
  • qupada - Tuesday, April 11, 2017 - link

    I was curious about this too. Obviously a direct comparison between your Windows test and my Linux one is going to be largely meaningless but I felt the need to try anyway. Since Linux is all I have, this is what we get.

    My Haswell-EP Xeon E5-1660v3 - approximately an i7-5960X with ECC RAM, and that CPU seems to be oft-compared to the 1800X you have put in your results - clocks in at 78:36 to compile Chromium (59.0.3063.4), or 18.31 compiles per day (hoorah for the pile of extra money I spent on it resulting in such a small performance margin). However that's for the entire process, from unpacking the tarball, compiling, then tarring and compressing the compiled result. My machine is running Gentoo, it was 'time emerge -OB chromium' (I didn't feel like doing it manually to get just the compile). Am I reading right you've used the result of timing the 'ninja' compile step only?

    I only ask because there definitely could be other factors in play for this one - for the uninitiated reading this comment, Chromium is a fairly massive piece of software, the source tar.xz file for the version I tried is 496MB (decompressing to 2757MB), containing around 28,000 directories and a shade under 210,000 files. At that scale, filesystem cache is definitely going to come into play, I would probably expect a slightly different result for a freshly rebooted machine versus one where the compile was timed immediately after unpacking the source code and it was still in RAM (obviously less of a difference on an SSD, but probably still not none).

It is an interesting test metric though, and again I haven't done this on Windows, but there is a chunk in the middle of the process that seems to be single-threaded on a Linux compile (probably around 10% of the total wall clock time), so it is actually quite nice that it will benefit from both multi-core and single-core performance and boost clocks.

    Also with a heavily multi-threaded process of that sort of duration, probably a great test of how long you get before thermal throttling starts to hurt you. I have to admit I'm cheating a bit by watercooling mine (not overclocked though) so it'll happily run 3.3GHz on a base clock of 3.0 across all eight cores for hours on end at around ~45°C/115°F.
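As a side note, the "compiles per day" figure quoted above is just the wall-clock time for one compile folded into 24 hours; a quick sketch (helper name hypothetical, using the 78:36 from this comment):

```python
def compiles_per_day(minutes, seconds=0):
    # Fold one compile's wall-clock time into a 24-hour day
    total_seconds = minutes * 60 + seconds
    return 86400 / total_seconds

# 78:36 for the Xeon E5-1660 v3 above
print(round(compiles_per_day(78, 36), 2))  # 18.32 (the comment rounds to 18.31)
```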
  • rarson - Tuesday, April 11, 2017 - link

    14393.969 was released March 20th, any reason you didn't use that build?
  • Ian Cutress - Friday, April 14, 2017 - link

    Because my OS is already locked down for the next 12-18 months of testing.
  • Konobi - Tuesday, April 11, 2017 - link

I don't know what's up with those FPS numbers in Rocket League 1080p. I have ye olde FX-8350 @ 4.8GHz and a GTX 1070 @ 2.1GHz, and I get 244fps max and 230fps average at 1080p Ultra.
  • Ian Cutress - Tuesday, April 11, 2017 - link

    I'm running a 4x4 bot match on Aquadome. Automated inputs to mimic gameplay and camera switching / tricks, FRAPS over 4 minutes of a match.
  • jfmonty2 - Wednesday, April 12, 2017 - link

    Why Aquadome specifically? It's been criticized for performance issues compared to most of the other maps in the game, although the most recent update has improved that.
  • Ian Cutress - Friday, April 14, 2017 - link

    On the basis that it's the most strenuous map to test on. Lowest common denominator and all that.
  • Adam Saint - Tuesday, April 11, 2017 - link

    "Looking at the results, it’s hard to notice the effect that 12 threads has on multithreaded CPU tests"

    Perhaps you mean *not* hard to notice? :)
  • coder543 - Tuesday, April 11, 2017 - link

    I agree. That was also confusing.
  • Arbie - Wednesday, April 12, 2017 - link

    I agree - this is implying the reverse of what was probably meant. And it's still broken.
  • coder543 - Tuesday, April 11, 2017 - link

    "For $250, the top Ryzen 5 1600X gives six cores and twelve threads of AMD’s latest microarchitecture, while $250 will only get you four cores and no extra threads for the same price."

    You're missing a word in here. That word is "Intel". Right now, the opening paragraph contains one of the most confusing sentences ever written, because the only brand mentioned is AMD, where $250 simultaneously gets you 6 cores and 12 threads *and* only 4 cores? Please update this paragraph to show that Intel only gets you 4 cores.
  • Arbie - Tuesday, April 11, 2017 - link

    I agree - you really need to add "with Intel". This is a theme statement for the entire article and worth fixing.
  • CaedenV - Tuesday, April 11, 2017 - link

    Well, that review was surprising.
I am looking to re-do my system in the next year or so, and I thought for sure that the R5 would be the no-brainer pick. But that seems not to be the case. If on a tight budget, it looks like the i3 is the all-around value king, offering great single-thread performance and decent light-to-moderate gaming. The i5 still reigns king for non-production work while sitting at about the same price point as the R5. And if doing production work, the R7 really makes more sense, as it is not that much more expensive while offering much better render performance. I somehow thought that the R5 would be better priced against the i5, just as the R7 stomps all over the i7 chips.

    So now when I look at building my next PC the real question is how much production work I plan on doing. If it is a lot then the R7 is the way to go. But if I am just doing media consumption and gaming then perhaps the Intel i5 will still be the best option. Hmm... maybe I'll just wait a bit longer. I mean, my i7 2600 still keeps chugging along and keeping up. The real temptation to upgrade is DDR4, USB-C, m.2, and PCIe v3. Seeing more 10gig Ethernet would also be a big temptation for an upgrade, but I think we are still 2-3 years out on that. Any up-tick in raw CPU performance is really a secondary consideration these days.
  • snarfbot - Tuesday, April 11, 2017 - link

    That is a strange position indeed, as it conflicts with all the data in the review.

    In other words lol wut?
  • Meteor2 - Wednesday, April 12, 2017 - link

    Indeed. My conclusion was 'wow, AMD have knocked it out of the park'. Same or better gaming, far better production.
  • Cooe - Monday, March 1, 2021 - link

    What happened to your "Ryzen 5 will be shit" comments from all over the OG Ryzen 7 review???
  • Drumsticks - Tuesday, April 11, 2017 - link

    It's not really hard to figure out. If you just do media and gaming, stick with Intel.

    If you rely on your home PC for any significant measure of production work, you should probably be buying the most expensive Ryzen chip you can.
  • gerz1219 - Tuesday, April 11, 2017 - link

    Yeah, for the longest time I've maintained separate rigs for gaming and video work, but I'm in the process of building a hybrid machine and the Ryzen 7 chips came out at just the right time. I just ordered an 1800X for my new workstation/gaming/VR rig. Gaming performance is somewhat important to me, but I can handle lower frame rates in certain games versus the 7700K because for my post-production video work, I need those extra cores and threads. For the longest time Intel was able to charge whatever they wanted at the high-end and prices had gotten ridiculous, so the 7-series fills a huge niche.

    However, it seems less clear where the Ryzen 3 and 5 chips will fit in. People who only use their machines for games won't see very many of the benefits of the Zen architecture, but they're saddled with the weaknesses of relatively slower single-threaded performance, and AMD isn't competing on price.
  • msroadkill612 - Thursday, April 13, 2017 - link

You did luck out; you are the perfect Ryzen demographic.

I suspect that teamed with a Vega 8GB HBM card & a PCIe SSD, it will blow you away by Xmas.

But the 1600 6-core comes close mostly, for ~$250 less.
  • Shadowmaster625 - Tuesday, April 11, 2017 - link

    The main reason to buy a 7600K over Ryzen is so you can actually go above 4.1GHz. Given how easy it is to clock a 7600K at 4.7GHz or even higher, it is highly disingenuous to not include overclocked results on the graphs.
  • sor - Tuesday, April 11, 2017 - link

    I think the overclocking niche is aware that they can do better. I agree that more data is better, but I certainly don't think there's any responsibility for Anandtech to provide overclocking results for either platform.

Maybe they'll follow up with a comparison of how Ryzen 5 overclocks against the competition.
  • Meteor2 - Wednesday, April 12, 2017 - link

    How much does OC'ing help? Presumably not at all with gaming unless you're on a 1080 or higher, and how does it help multi-threaded production workloads?
  • Notmyusualid - Tuesday, April 18, 2017 - link

My thoughts exactly; my buddy's 7600K runs 24/7 @ 5GHz on a 240mm closed-loop rad.

    It was the snappiest computer I've yet used...
  • dhotay - Tuesday, April 11, 2017 - link

    *shoo-in

    https://www.merriam-webster.com/dictionary/shoo-in
  • Achaios - Tuesday, April 11, 2017 - link

    "We have already shown in previous reviews that the Zen microarchitecture from AMD is around the equivalent of Intel’s Broadwell microarchitecture"

    I don't think so, Ian. Case in point:

    1. Intel Core i7-7700K @ 4.20GHz- 4.50 GHz Turbo (KABY LAKE): 2,595 MARKS PASSMARK SINGLE THREADED
    2. Intel Core i7-6950X @ 3.00GHz- 3.50 GHz Turbo (BROADWELL): 2,135 MARKS PASSMARK SINGLE THREADED
    3. AMD 1800X 3.6 GHz - 4.0 GHz Turbo(RYZEN): 1,952 MARKS PASSMARK SINGLE THREADED

    Out of curiosity, I benched my own 4770k at 4.5 GHZ, the frequency I game on:

    4. Intel 4770K 3.50 GHz - 4.53 GHz OC (HASWELL): 2610 MARKS PASSMARK SINGLE THREADED

    http://imgur.com/FrHmYlG

    It's not even the bloody equivalent of Haswell, man, much less that of Broadwell.
  • sor - Tuesday, April 11, 2017 - link

    No, you're cherry picking. It's pretty well documented that IPC is about broadwell level, if you want to get into a benchmark posting war you'll run out of material far sooner. I can even find huge wins for Ryzen, but I'm not going to cherry pick those to try to show a big discrepancy.
  • Achaios - Tuesday, April 11, 2017 - link

How about you go ahead and cherry-pick to prove me wrong on single-threaded performance. Oh wait, you can't, b/c Ryzen is slow as molasses in January.

    https://www.cpubenchmark.net/singleThread.html
  • MrSpadge - Tuesday, April 11, 2017 - link

Apart from WinRAR 5.2, that's pretty slippery molasses:
    http://www.zolkorn.com/en/amd-ryzen-7-1800x-vs-int...
  • fanofanand - Tuesday, April 11, 2017 - link

    And the cherry picking continues.

    "How about you go ahead and cherrypick to prove me wrong on Single Threaded performance"
  • Lord-Bryan - Tuesday, April 11, 2017 - link

Ummmm Achaios, I don't think Anandtech is the right place for all that sleazy fan war.
  • ultima_trev - Tuesday, April 11, 2017 - link

    My 1800X gets 2,062 in Passmark single thread. Not that Passmark is the lone dictator of IPC.
  • rarson - Tuesday, April 11, 2017 - link

    According to your data, Haswell IPC is equal to Kaby Lake IPC.
  • Meteor2 - Wednesday, April 12, 2017 - link

    Um, you need to divide by frequency to get IPC. Clue's in the name.
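To illustrate Meteor2's point with the Passmark numbers quoted earlier in the thread: dividing each single-threaded score by its turbo clock gives a crude points-per-GHz proxy for IPC. This is only a rough sketch, since sustained clocks vary and Passmark is a single benchmark:

```python
# (Passmark single-thread score, turbo GHz) as quoted in the thread above
scores = {
    "i7-7700K (Kaby Lake)": (2595, 4.5),
    "i7-6950X (Broadwell)": (2135, 3.5),
    "R7 1800X (Zen)":       (1952, 4.0),
}

for chip, (score, ghz) in scores.items():
    print(f"{chip}: {score / ghz:.0f} points/GHz")
```

On this crude normalization Broadwell actually comes out highest per clock, which is why a raw single-thread score on its own says more about frequency than about IPC.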
  • Ammaross - Wednesday, April 12, 2017 - link

    So, you're saying that your 4770K beat out a 7700K in single threaded. And now you argue "IPC" with this benchmark? Go home, you're drunk.
  • zeeBomb - Tuesday, April 11, 2017 - link

    This is good. This is great...this is AWESOME
  • sor - Tuesday, April 11, 2017 - link

    This is pretty great. When Ryzen is behind, it is not often and not far. When it is ahead, it is far ahead.
  • TadzioPazur - Tuesday, April 11, 2017 - link

    "Looking at the results, it’s hard to notice the effect that 12 threads has on multithreaded CPU tests."
    (..)it's hard NOT to notice(...)"?
  • bmasumian - Tuesday, April 11, 2017 - link

    Your Amazon link to Ryzen 5 isn't working.
  • AndrewJacksonZA - Tuesday, April 11, 2017 - link

    Page 5 - FCAT - Graph Heading/Text Error

    The text says "For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p"
    but the graph heading says "System: FCAT Processing ROTR 1440p GTX1080 Data"
  • happy medium - Tuesday, April 11, 2017 - link

This review could not have purposely made the Ryzen CPUs look any better.
This site is ruined.
  • Drumsticks - Tuesday, April 11, 2017 - link

    Care to elaborate?
  • mmegibb - Tuesday, April 11, 2017 - link

    This review is consistent with what I saw with the Ryzen 7 reviews on multiple sites. In multi-threaded tests, Ryzen beats Intel because, well, it has more threads. Single thread performance lags Intel. It's up to the user to figure out price/performance for their particular needs.

    I'm with Drumsticks, please elaborate. This kind of drive-by comment does nothing to advance the conversation.

    Anandtech's suite of benchmarks is one of the best.
  • fanofanand - Tuesday, April 11, 2017 - link

Not only is Anandtech's benchmark suite the best, Ian is the best CPU reviewer in the business, and quite probably in the world. I would love to know what reviewer out there has a better understanding of uArches and the experience of a professional overclocker who pushed every component to its limits. Ian's experience and background are ideal for reviewing CPUs, and after having read the entire thing I didn't detect even a whiff of bias. Claims of bias towards AMD at Anandtech? That's a new one to me!
  • ddriver - Tuesday, April 11, 2017 - link

    Well, the JS benchmarks are pointless really.

The benches leave much to be desired.

Too much emphasis is put on games. Do 50% of people really use computers primarily for games?

Too little on practical tests; the number-crunching tests use software barely anyone uses.

People need to see performance in Premiere, After Effects, Cubase, Pro Tools, V-Ray and similar.

    Bias towards AMD however is hilarious, it is quite the opposite actually.
  • th3ron - Tuesday, April 11, 2017 - link

You seem to forget this is a review of $250 budget CPUs. No one's going to be running pro apps like the ones you listed on CPUs like these. The number-crunching tests are there for comparison with the more expensive CPUs. I don't think anyone's ever bought a CPU based on its WinRAR score.

A lot of people will use these CPUs for gaming, so lots of gaming benchmarks make sense.
  • ddriver - Tuesday, April 11, 2017 - link

People who use WinRAR most likely do not make logical considerations, because if they did, they wouldn't be using garbage like WinRAR.

It is not a budget product; it is mid-range. And it is perfectly capable of doing a good job in content creation and such, at a great value. Most of the software used in this review can barely make use of 4 threads, making such tests 66.66% pointless. Most of the tests that can actually scale to utilize the chip use software that barely anyone uses or that isn't even practically useful to begin with. And contrary to your beliefs, that doesn't accurately translate into performance in software that people actually use.
  • wolfemane - Wednesday, April 12, 2017 - link

Get off your high horse. People with midrange CPUs aren't going to use pro software? Are you really serious? I know far more people on i5s and older AMD CPUs who use Premiere and After Effects. My wife and I use the Adobe suite regularly. Both our systems are running 2500Ks. She is a photographer and has been using mid-range components for just as long. I've been using Premiere, After Effects, and Adobe Media Encoder longer than she's been using Photoshop. Adobe makes it pretty cheap to use their software. When they went to a monthly rate for their entire suite with free cloud-based storage for $25 a month (I think it's a bit more expensive now), we jumped on it. The cloud storage alone is worth twice that.

    There's no way in hell I'd drop $500+ just on a processor and Intel has made it impossible for mid to low budget builders to afford 6+ cores. But with a pretty quick 6c/12t CPU, I'll be going with AMD for our next range of CPUs, which is coming soon. Sandy bridge is getting old. Just waiting for AM4 mITX.
  • calken99 - Wednesday, April 12, 2017 - link

Are you really telling me that the 70,000 i5 computers in the business where I work don't run any pro applications? That's just one small corporation. Business purchases will dwarf the domestic market's annual CPU sales.
  • Meteor2 - Sunday, April 16, 2017 - link

    I was trying to get some numbers on this. I think consumer computers outnumber corporate by a good two to one. Not sure the average age is that different either -- most places I've seen have been holding on to their PCs for years. Core 2s abound!
  • Bp_968 - Sunday, April 16, 2017 - link

This is the point I try to make all the time to console players saying PCs cost too much and require too much upgrading. My i7 970 or 980 (I forget!) is still playing modern games wonderfully at 2560x1440 with a GTX 970. We recently built a PC for my neighbor out of spare parts and he ended up with a Core 2 Quad (Q6600 maybe?) with 6GB RAM and a GTX 460. He quickly upgraded to a GTX 1050 and now it easily stomps his PS4 (and probably the PS4 Pro).

I'm with one of the previous posters about chipset accessories. It won't be CPU speed that causes me to upgrade, it will be me wanting access to new features (PCIe 4, USB-C, USB 3.1/3.2, NVMe, Intel's DDR/SSD hybrid memory interface, etc.).

I also expect Intel to respond, at least in the Ryzen 7 market. I really hope it means Intel will finally start offering 6-8 core CPUs at non-silly price points.
  • mmegibb - Tuesday, April 11, 2017 - link

    The choice of software hardly matters when what you are looking for is a collection of software that exercises the entire CPU subsystem: the cores, caches, memory, etc. As th3ron mentions, what matters is finding the deltas between CPUs.

    And yes, in spite of your snobbery, probably 50% of people reading this want to size their system for gaming. Gaming is the limiting case for my home builds.
  • psychobriggsy - Wednesday, April 12, 2017 - link

    Indeed gaming is important for many people.

    What the reviews show is that for a mixed-use system, the gaming aspect is not significantly behind Intel alternatives (obviously a couple of outliers, but that applies in both directions). However for the other uses, Ryzen is a complete win. It's good enough, rather than the pile of fail that Bulldozer core CPUs were. And indications are that games are getting more multithreaded over time, so buying a 4C product is limiting future gaming.

    It's clear that Intel will have to enable SMT in their i5 refreshes this year now, as that should let them claw something back in the 'partial multithreaded' use cases (apps that can't scale indefinitely with extra cores but top out at 4-8 threads).
  • IanHagen - Tuesday, April 11, 2017 - link

    I completely agree on that. I'd love to see more compiling benchmarks too. It's coming to the point where people who are buying a CPU for productivity are taking decisions drawn upon conclusions heavily influenced by gaming performance.
  • RafaelHerschel - Wednesday, April 12, 2017 - link

That 50% of people use a fast CPU for gaming is a very conservative estimate. For regular office work or media consumption an inexpensive CPU is fast enough; the current Intel Celeron and Pentium CPUs (or the AMD equivalents) offer much better value for most people. Because of marketing, i3 and i5 CPUs sell well.

    And there are more gamers than professionals who use software that benefits from fast CPUs.
  • ddriver - Wednesday, April 12, 2017 - link

Dunno about that; of all the people I know who have powerful machines, all do professional work, even those who game. Then again, the selection of my acquaintances has to do with their skills, and I have to admit I have zero interest in interacting with someone who only plays games.

I also know that 99% of the games on the market cannot utilize 66.66% of that chip.

    So you end up putting 50% of the review emphasis on tests that can only utilize 1/3 of the chip.

It is like... testing a sports car and putting 50% of the emphasis on its use as a hearse, where it will never be driven anywhere near its potential.
  • mmegibb - Wednesday, April 12, 2017 - link

    Man, ddriver, you are an elitist jerk. "I have zero interest in interacting with someone who only plays games". Also, "People who use winrar most likely do not make logical considerations, because if they did, they wouldn't be using garbage like winrar".

    Why are you like that?
  • vladx - Wednesday, April 12, 2017 - link

    Don't mind ddriver, he's just a pathetic troll who tries too hard.
  • Meteor2 - Wednesday, April 12, 2017 - link

    I imagine the proportion of PCs containing higher than i5-7400s bought by consumers used for gaming is much higher than 50%.

    *Not* talking about business buys here, I'm talking about people spending their own money.
  • Meteor2 - Wednesday, April 12, 2017 - link

    D'oh, I just replied to ddriver. What was I thinking.
  • SkipPerk - Wednesday, May 3, 2017 - link

These are low-end CPUs. People use those for gaming and web surfing. I have a proper Xeon machine at work like a normal person. Not to mention, you reference video software: what tiny percentage of computer users ever owns or uses video software? That is a tiny industry. It reminds me of the silly YouTube reviews where the reviewer assumes everyone is editing videos, when less than one percent of us will ever do so.

Most people buying non-Xeon CPUs really will be using basic software (MS Office, WinZip,...) or games. The only time I have used non-Xeon CPUs for work was when I had software that loved clock speed. Then I got a bunch of 6-cores and overclocked them (it was funny to watch the guys at Microcenter as I bought ten $1k CPUs and cheesy AIO water coolers). Otherwise one uses the right tool for the job.
  • AndrewJacksonZA - Tuesday, April 11, 2017 - link

    On the last page, "On The Benchmark Results"
    "Looking at the results, it’s hard to notice the effect that 12 threads has on multithreaded CPU tests."
    Don't you mean that it's NOT hard to notice?
  • Drumsticks - Tuesday, April 11, 2017 - link

    I didn't see the 7600k in gaming benchmarks, was that a mistake/not ready, or is it on purpose?

    Thanks for the review guys! This new benchmark suite looks phenomenal!
  • mmegibb - Tuesday, April 11, 2017 - link

    I was disappointed not to see the i5-7600k in the gaming benchmarks. Perhaps it wouldn't be much different than the i5-7600, but I have sometimes seen a difference. For my next build, it's looking like it's between the 1600x and the 7600k.
  • fanofanand - Tuesday, April 11, 2017 - link

    "Platform wise, the Intel side can offer more features on Z270 over AM4"

    Aside from Optane support, what does Z270 offer that AM4 doesn't?
  • MajGenRelativity - Tuesday, April 11, 2017 - link

    Z270 has more PCIe lanes off the chipset for controllers and such, which AM4 does not have
  • fanofanand - Tuesday, April 11, 2017 - link

    I won't disagree with that, but I'm not sure a few extra PCIe lanes is considered a feature. Features are typically something like M.2 support, built-in WiFi, things like that. The extra PCIe lanes allow for MORE connected devices, but are a few extra PCIe lanes really considered a feature anymore? With Optane being worthless for 99.99999% of consumers, I'm just not seeing where Z270 gives more for the extra money.
  • JasonMZW20 - Tuesday, April 11, 2017 - link

    Let's do a rundown:

    Ryzen + X370
    20 (3.0) + 8 (2.0)
    Platform usable total: 28

    Core i7 + Z270
    16 + 14 (all 3.0)
    Platform usable total: 30

    Intel's Z270 spec sheet is a little disingenuous: yes, it does have a maximum of 24 lanes, but 10 are reserved for actual features like SATA and USB 2.0/3.x. 14 can be used by a consumer, giving you a total of 2 NVMe x4 + 1 NVMe x2, leaving x4 for other things like actual PCIe slots. That 3rd NVMe slot may share PCIe lanes with a PCIe add-in slot, if configured that way.

    Ryzen PCIe config (20 lanes): 1x16, 2x8 for graphics and x4 NVMe (or x2 SATA when NVMe is not used)

    Core i7 config (16 lanes): 1x16, 2x8, or 1x8+2x4 for graphics

    They're actually pretty comparable.
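    The tally above can be reduced to a quick back-of-the-envelope script (lane counts taken straight from this comment; treat them as approximate, since board vendors allocate chipset lanes differently):

```python
# Rough tally of platform-usable PCIe lanes, using the counts
# quoted above (approximate; boards allocate chipset lanes differently).
platforms = {
    "Ryzen + X370":   {"cpu PCIe 3.0": 20, "chipset PCIe 2.0": 8},
    "Core i7 + Z270": {"cpu PCIe 3.0": 16, "chipset PCIe 3.0": 14},
}

def usable_lanes(config):
    """Total lanes a consumer can actually attach devices to."""
    return sum(config.values())

for name, config in platforms.items():
    print(f"{name}: {usable_lanes(config)} usable lanes")
# Ryzen + X370: 28, Core i7 + Z270: 30
```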
  • mat9v - Tuesday, April 11, 2017 - link

    No, not more PCIe lanes; the ones from the chipset are virtual, since they all go to the CPU through the DMI bus, which is equivalent to (at best) 4 lanes of PCIe 3.0. Both chips (Intel and AMD) offer 16 lanes from the CPU for the graphics card, but Zen also offers 4 lanes for NVMe. The chipsets are connected by DMI (on Intel) and 4 lanes of PCIe 3.0 (on AMD), so that is equal; from those DMI lanes Intel offers 24 virtual lanes of PCIe 3.0 (a laugh and a half), while AMD quite correctly offers 8 lanes of PCIe 2.0 (equivalent to 4 lanes of PCIe 3.0).
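    The "equivalent" claim in this comment is easy to check with the usual published per-lane throughput figures (~500 MB/s for PCIe 2.0 after 8b/10b encoding, ~985 MB/s for PCIe 3.0 after 128b/130b encoding; both are approximations):

```python
# Approximate usable per-lane bandwidth after encoding overhead.
PCIE2_MB_S = 500   # PCIe 2.0: 5 GT/s with 8b/10b encoding
PCIE3_MB_S = 985   # PCIe 3.0: 8 GT/s with 128b/130b encoding

chipset_downstream = 8 * PCIE2_MB_S  # X370's 8 lanes of PCIe 2.0
chipset_uplink     = 4 * PCIE3_MB_S  # 4 lanes of PCIe 3.0 (DMI 3.0 is
                                     # electrically the same width/rate)

# The downstream lanes can just about saturate the uplink, which is why
# "8 lanes of 2.0" and "4 lanes of 3.0" come out roughly equal.
print(chipset_downstream, chipset_uplink)  # 4000 3940
```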
  • psychobriggsy - Wednesday, April 12, 2017 - link

    Indeed. If a user is going to need more than that, they're more likely going to be plumping for a HEDT system anyway. AMD's solution is coming in a bit, but that should be able to ramp up the IO significantly.
  • marecki - Tuesday, April 11, 2017 - link

    PDF Opening
    Can you link this PDF file?
  • Ratman6161 - Tuesday, April 11, 2017 - link

    Hmmm. Food for thought. So I've been sticking with my trusty old i7-2600K. I use VMware Workstation to run VMs on my desktop quite a lot. I've always figured a lot of threads were helpful so that the VMs and the host OS aren't competing for resources. But the VMs themselves aren't doing anything particularly intense, and when I'm not running VMs, probably any old i5 level of performance is good enough. So... for my particular purposes it seems like the Ryzen 5 1600X might be the way to go and save a bunch of money while I'm at it?
    More than adequate for my desktop needs and more cores/threads than an i7 when running VMs... and way cheaper. First CPU I've seen that's got me kind of tempted.
  • cheshirster - Tuesday, April 11, 2017 - link

    That's where 1700 might look better.
  • IanHagen - Tuesday, April 11, 2017 - link

    I too am a heavy virtualization user and I'd say pick the 1700 if you can. More physical and logical cores are going to make a big difference for you.
  • Ratman6161 - Tuesday, April 11, 2017 - link

    I'm actually kind of looking for cheap. The Ryzen 5 1600X has more cores and threads than my trusty old i7-2600K, and it's $140 less than the 1700. I'm actually considering going even cheaper and getting the 1600 instead of the 1600X. The main difference between the 1600 and 1600X seems to be clock speed, and... they are unlocked, so why not save $30 more and get the 1600?
  • Ratman6161 - Tuesday, April 11, 2017 - link

    PS: I'm thinking on going cheap with the CPU and using some of the savings on more RAM.
  • psychobriggsy - Wednesday, April 12, 2017 - link

    1600 comes with a cooler, 1600X doesn't, so bear that in mind during price comparisons.
  • IanHagen - Tuesday, April 11, 2017 - link

    Then I'd say get the 1600 and overclock it (:
  • lakedude - Thursday, April 13, 2017 - link

    Ryzen is not offering much in the way of OC headroom. Sure, the chips are unlocked, but they are already pushed pretty hard, unlike the Celeron 300A from back in the day...
  • Ratman6161 - Thursday, April 13, 2017 - link

    True, the OC headroom doesn't seem that great. But for anyone willing to do a mild overclock (which I am) it seems like a no-brainer to choose the 1600 over the 1600x. The only difference seems to be clock speed other than the cooler (which I probably wouldn't use) and I'm betting that even pushing it a bit, the two would end up at the same maximum speed.
  • msroadkill612 - Thursday, April 13, 2017 - link

    I hear the stock Ryzen air cooler is pretty good, u sure u wanna bother w/ DIY cooling?
  • Ratman6161 - Tuesday, April 18, 2017 - link

    OK, so Friday (4/16) I actually picked up a Ryzen 5 1600 and an Asrock AB350 Pro 4 motherboard so I'm now speaking from some actual experience.
    So far it's fully living up to my expectations. Regular office work is very fast and smooth (but for what I'm doing an i3 would be too). Running three VMs at the same time, though, as I hoped, it's still fast and smooth, even with each VM assigned 4 cores and even when some of the VMs are actually doing something. So from that alone I think I chose well. The CPU was full price at $219, but the motherboard was only $39 as part of a bundle deal (Microcenter). Throw in 32 GB RAM for $210 and overall it was a cheap upgrade.
    Cooling and Overclocking: I'd disagree (partly) with the included cooler being that good. It does a good job of cooling the CPU, but it's rather loud compared to what I'm used to. I was using a Corsair all-in-one liquid cooling system (H55 with dual custom fans running at very low speeds) on the old 2600K, so I'm used to a near-silent system except when the fans really ramped up during extended stability testing. With the included cooler I've only been able to get it stable at 3.7 GHz. At 3.8 things get weird. But I'm also not turning up the voltage until I have my liquid cooler back. I ordered the Ryzen bracket for it from Corsair and I'm still waiting on it to come in.
    But whatever overclock I get is just a bonus. It works great for me at stock speeds.
    Memory: I got 16 GB x2 Crucial DDR4-2400 DIMMs, which are dual-sided. No problem getting them to run at their rated 2400, but no luck at 2666. I was able to tighten up the timings a bit from the rated 16-16-16 to 15-15-15 without changing any voltages. I know that's not that fast, but for my purposes quantity is what counts most.
    I'm not a gamer so I can't say anything about that.
    However... I do miss the onboard GPU from my old Intel chip. Why? Well, I was running three monitors. My old GTS 450 PCIe card only supports two, but I used to plug the third (and could have done a 4th) into the processor graphics. So now I'm down to only 2 monitors. I suppose I could buy a newer but cheap video card that supports more than 2?
  • SkipPerk - Wednesday, May 3, 2017 - link

    You can get a ton of cheap card options on ebay that have three video outputs. The AMD ones tend to be cheaper, but there are some nVidia as well. I think I had a 7750 once that was single slot and had three outputs. Tons of the dual slot cards have dual DVI and HDMI. I got a deal on a bunch of GTX 650's a while back that had that config and they supported triple monitor setups beautifully. I might be wrong on that model number now that I think about it. In any case, there are always good, cheap video cards on eBay.
  • MrSpadge - Tuesday, April 11, 2017 - link

    Ian, there are lots of graphs in the gaming section. I think that's rather hard to read. You could combine the average FPS and 99th percentile into the same graph. Not sure how to make it look pretty, but since both graphs mostly carry the same message, that would make it obviously more compact.
  • Icehawk - Tuesday, April 11, 2017 - link

    They are way too dense, I skipped all of the gaming pages... and I'm a gamer.
  • milkod2001 - Tuesday, April 11, 2017 - link

    Will you be updating results to Bench? I'd like to see how the 1600X, 1700X and 1800X stack up vs my existing Haswell 4770K.
  • zodiacfml - Tuesday, April 11, 2017 - link

    Praise you for the RX 480 benches there! Finally! Being GPU limited, the advantage of Intel chips is small. Many users might not be able to take advantage of the extra cores, but in a few years it will have its value.
  • th3ron - Tuesday, April 11, 2017 - link

    People posted the same thing when the 8150 launched, and we know how that turned out.
  • SkipPerk - Wednesday, May 3, 2017 - link

    The 8150 was not bad once the price came down. I bought one for $160 years ago, and I still use it in a secondary machine. It is a nice little chip for the money. My only regret is that I ran it at 4.8 GHz for years, and now I need to run it at 3.6 or lower or it gets strange. It was a fine chip compared to the i5-2500.
  • 10101010101010 - Tuesday, April 11, 2017 - link

    Why not overclock the K? The whole battle is core speed vs core count, so it couldn't be more disingenuous to completely strip away the main reason for buying the Intel chip.
  • cheshirster - Tuesday, April 11, 2017 - link

    Most gaming tests are GPU bound.
    In rendering, OC won't help the K win.
    And there is the R5 1600 for $220 that can work at 1600X+ level.
  • none12345 - Tuesday, April 11, 2017 - link

    This is by far the most comprehensive Ryzen article I've seen to date since the release over a month ago.

    Variety of single threaded workloads, multithreaded workloads, and games with multiple graphics cards from multiple vendors at multiple resolutions.

    Thank you.

    Finally, a real-world gaming review of Ryzen with multiple resolutions on multiple GPUs from multiple vendors.

    I've been waiting for a review like this for the last 5 weeks.
  • krumme - Tuesday, April 11, 2017 - link

    Yep. Good job on the game benchmarks. Actually usable numbers: 99th percentile and time under 30 fps. Thanx.

    Keep up the good work.
  • Meteor2 - Wednesday, April 12, 2017 - link

    Yes, it's an excellent set of benchmarks and review.
  • Notmyusualid - Tuesday, April 18, 2017 - link

    Good work missing out the 7600K results...
  • mmegibb - Tuesday, April 11, 2017 - link

    Yes, couldn't agree more. When I think back many years, I'm amazed at the number of quality reviews I've read at Anandtech.
  • lilmoe - Tuesday, April 11, 2017 - link

    So, unless you're a gamer AND you have a 120hz monitor, Ryzen 5 literally butchers the Core i5.

    It's also very interesting that there's barely any gap in single-threaded performance in JavaScript. I don't believe Intel has an IPC advantage in non-encoding workloads; it's more likely all in software. Software needs optimization, and things should be ironed out in the near future.

    Now, all we just need is Apple and other OEMs to jump ship. It was a good ride, Intel. Good riddance.
  • mmegibb - Tuesday, April 11, 2017 - link

    Wow! I can't wait to get my hands on a 1600x and watch it grow arms and chop an Intel processor to bits right before my eyes.
  • th3ron - Tuesday, April 11, 2017 - link

    Console emulation is another area where Intel's IPC advantage is well known. Just check the PCSX2 and RPCS3 forums.

    Fanboy fantasies don't replace facts.
  • Haawser - Tuesday, April 11, 2017 - link

    R5 1600- Best VFM processor since the i5-2500K.

    $219 for 6C/12T, with a Wraith Spire thrown in ? Easy OC to 3.9-4.0 ? You can't beat that with a stick. Let alone with anything Intel sell for a similar price.
  • mmegibb - Tuesday, April 11, 2017 - link

    Where did you get your info about OC'ing the 1600? I haven't seen much about OC'ing the Ryzen chips, at least in these initial comprehensive reviews (I haven't searched much either). I still haven't decided between the 7600K and the 1600X, and how the 1600X overclocks will be a factor.
  • haukionkannel - Tuesday, April 11, 2017 - link

    All Ryzens in all tests have been running between 3.9-4.1 in all overclocking tests. So it does not matter which Ryzen you get; the OC performance is the same. Must be because of the manufacturing process. This could be a beast if made in Intel's factories ;)
  • MrSpadge - Tuesday, April 11, 2017 - link

    No, such a hard and consistent speed limit is usually down to the chip design. If it were "just the silicon lottery" there'd be more spread, like you see with Intel's chips.
  • Outlander_04 - Tuesday, April 11, 2017 - link

    https://www.youtube.com/watch?v=3VvwWTQKCZs
    i5 and Ryzen 5 at good OC's
  • cheshirster - Wednesday, April 12, 2017 - link

    People are already buying the 1600; it runs a 3.8 OC on the box cooler.
    The 1600X has no room to overclock at all.
  • cheshirster - Wednesday, April 12, 2017 - link

    +1600
  • bobbozzo - Tuesday, April 11, 2017 - link

    Hi Ian, on the last page,
    Rise of the Tomb Raider’s benchmark is notorious for having each of its three _seconds_ perform differently

    I think that should be 'scenes' not seconds.

    Thanks!
  • mmegibb - Tuesday, April 11, 2017 - link

    In the gaming benchmarks, Intel generally has higher average framerates. But, interestingly, in the 99th percentile and time-spent-under-60fps, Ryzen usually tops Intel. To me, this translates into an overall smoother and more consistent game play experience with the Ryzens. Is that right?

    I've been on Intel processors for years. In fact, my son still is doing heavy-duty 1080P gaming with an OC'd i5-2500k. But, soon I'm going to replace that beloved CPU, and I want to buy a Ryzen just to upset the apple cart and do something different.

    I've been disappointed in the Ryzen 7 reviews as far as gaming is concerned. But, this review gives me hope. I'm really thinking that triple the threads of the i5-7600k with only a small loss of gaming performance is the way to go. Especially with DX12 getting more common.
  • Achaios - Tuesday, April 11, 2017 - link

    As a gamer, what you are primarily interested in is single-threaded performance, simply because there's a host of games out there that depend on it:

    1. All World of Warcraft versions.
    2. All Total War versions.
    3. Starcraft II.
    4. Civilization games.

    ...and so on. The OP is just giving you a review tailored to make Ryzen shine, whereas in fact it is still an inferior CPU for gaming due to inferior single-threaded performance.

    Very few games use more than 2-4 cores, so that makes Ryzen largely irrelevant at the moment. It will also be irrelevant in the future, when games begin utilizing more than 4 cores, because by then there will be far better Intel and AMD processors.
  • mmegibb - Tuesday, April 11, 2017 - link

    Yes, gaming reviews have forever hammered the idea that all that matters is single-threaded performance.

    However, as I mentioned, this review shows the 1600X beating the 7600 in 99th percentile and time under 60 fps, even in games like GTA V. Those are very important benchmarks for perceived gaming quality. You didn't speak to that at all; you just repeated the boilerplate about "single core" that we all know.

    Also, I don't think it will be too far in the future when more games use DX12, and that seems to make a big difference.

    I think I'm getting a whiff of "intel-fanboy" from your post.
  • Meteor2 - Wednesday, April 12, 2017 - link

    You're quite right. The R7 was a bit 'meh'; it loses a bit too much against the much-higher-clocked i7s. The R5, however, really stands out, with better 99th percentile performance and better potential for more to come. (Contrary to popular belief, i5s are clocked slower, not faster, than i7s.)
  • Cooe - Monday, March 1, 2021 - link

    And yet it was YOU screaming from the rooftops before it came out about just how shit it was going to be... -_-
  • mat9v - Tuesday, April 11, 2017 - link

    But then those old games will happily run at over 200 fps even on Ryzen. Would you care to comment?
    Can you actually feel the difference in how they run, or are you just having a number orgasm?
  • Reflex - Tuesday, April 11, 2017 - link

    Not really sure why I should care about how Ryzen performs in games that are several years old and that even budget CPU/GPU combinations can run more than adequately.
  • cheshirster - Wednesday, April 12, 2017 - link

    But Civilization supports DX12, and it is here in the tests with a clear win for Zen.
  • MisterJitter - Thursday, April 13, 2017 - link

    Now, as a preface, I am speculating here, but do you truly believe higher single-core frequencies will continue to be the future of CPUs that are already pushing the limits? For example, do you believe Intel's next high-end gaming CPU is going to be 6-7 GHz? I don't think so... Technology used to increase exponentially, until now. I truly believe that if gaming performance is going to increase at the rate it has over the past 10 years, it's going to be because devs finally start coding for multiple threads instead of relying on ONE workhorse.
  • FriendlyUser - Tuesday, April 11, 2017 - link

    Best CPU ever.
  • eldakka - Thursday, April 13, 2017 - link

    IMHO, that crown still belongs to the Celeron 300A. 50% overclockable (450MHz) was standard, and with the right riser card you could enable it for dual-socket systems.

    In today's terms, it'd be like buying two G4560s (3.5 GHz, 2 cores, 4 threads, $63 RRP each), overclocking them by 50% on air cooling (no fancy water) to 5.25 GHz, and sticking them in a dual-socket motherboard. That gives 4C/8T at 5.25 GHz for ~$130, compared to an i7-7700K at 4C/8T and 4.2 GHz for ~$340.
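    The arithmetic in that analogy checks out, give or take rounding (note a strict 50% bump on 3.5 GHz lands at 5.25 GHz; all clocks and prices are the ones quoted in the comment):

```python
# Sanity-check the Celeron 300A analogy using the figures quoted above.
celeron_oc_mhz = 300 * 1.5        # the classic 300A -> 450 MHz overclock

g4560_oc_ghz = 3.5 * 1.5          # hypothetical 50% OC of a G4560
pair_cost = 2 * 63                # two G4560s at $63 RRP each
i7_7700k_cost = 340               # i7-7700K price quoted in the comment

print(celeron_oc_mhz)             # 450.0
print(g4560_oc_ghz)               # 5.25
print(pair_cost, i7_7700k_cost)   # 126 340
```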
  • jrs77 - Tuesday, April 11, 2017 - link

    Single-threaded, the i5 still beats the R5, and the i5 comes with an iGPU, which is pretty much mandatory for office PCs and small workstations.

    Sorry to say, but the AMD R5 is pretty much useless for the majority of users, as the most-used software is still single-threaded (Office, Photoshop, etc).

    Let's wait and see what the new AMD APUs will have to offer.
  • stockolicious - Tuesday, April 11, 2017 - link

    "Let's wait and see what the new AMD APUs will have to offer."

    It's the Ryzen CPU that replaces Bulldozer, connected to a great iGPU - this is where Intel is going to have a rough time, as they don't do top-shelf graphics. When it's released, it's hard to believe AMD won't have by far the best APU they have ever made, and maybe the best on the market. This might even get them some high-end design wins that eluded them during the Bulldozer times.
  • Icehawk - Tuesday, April 11, 2017 - link

    Good point, my company isn't going to spend more for an AMD system for our regular users and a video card (even junk) would likely tip the cost against them. I do think some of our devs might like these and there we can justify the extra $.
  • Krysto - Wednesday, April 12, 2017 - link

    Ryzen APUs are coming.
  • deltaFx2 - Tuesday, April 11, 2017 - link

    @jrs77: Talk about strawman arguments. "as the most used software is still singlethreaded" Just because you vehemently assert it doesn't make it true. All the MT workloads tested in Ian's suite are real workloads people use. I have 14 "Chrome Helper" threads running on my laptop as I type this, just to point out the obvious. The software that continues to be single-threaded is the software for which the cost of an MT implementation outstrips the gain. Office is 1T (I'll take your word for it) because it works perfectly fine on Atom or Excavator. I don't think Photoshop is a workload that holds people up most of the time either. Here's the other thing: folks who use Photoshop for a living also likely do video editing, rendering and so forth, sometimes at the same time as Photoshop. See Ian's review of various workloads that do this.

    iGPU: That is a fair point for the 4C part. For the hex-core, you're getting into the same usage space as the 8C: content creators. Then again, who buys desktops these days for office work? Most offices I know of give their employees laptops + docking stations. It's only gamers, content creators, and CAD folks that buy desktops. These guys also buy graphics cards to go with their rigs.
  • psychobriggsy - Wednesday, April 12, 2017 - link

    One benchmark that used to be done was multiple apps at the same time.

    For example, a browser benchmark running alongside a video encode.

    This can show real-world use cases a lot better. It would also show off better MT implementations; in this case Ryzen would fare a lot better (either by having SMT in the 4C/8T part, or more cores plus SMT in the 6C/12T part), even where the Intel equivalent would do okay when doing one task only.
  • masouth - Wednesday, April 12, 2017 - link

    To add to that, even certain tools/functions in Photoshop are multi-threaded. Most blurs are, as are color-mode conversions, just to name a couple.

    As usual, YMMV depending on how often you use those, but it IS there, and more cores/threads offer a very real benefit for people that do use them.
  • Meteor2 - Wednesday, April 12, 2017 - link

    Most users could easily get by with Celerons. I'm not sure what your point is.
  • ChubChub - Tuesday, April 11, 2017 - link

    At $250 what should you get? A 1400, and use the extra cash + saved cash on the motherboard to get a better GPU.
  • davide445 - Tuesday, April 11, 2017 - link

    1600x or 1600 will be part of my new rig.
    It's really clear from this review that AMD optimized its CPUs for serious tasks (where the real lasting growth in the PC market lies) and modern gaming titles (DX12, the future), leaving sufficient-to-good performance for the rest.
    Minimizing production costs can boost sales and withstand possible dumping by Intel.
    IMHO a clever strategy, since they don't need to serve ALL of the market, just to lead the most profitable part, and that's for sure not the casual e-sports gamer.
  • ImperfectLink - Tuesday, April 11, 2017 - link

    Cinebench 10 and 11.5 tables are mixed up. It's 11.5 first with the decimals and 10 with the thousands.
  • farmergann - Tuesday, April 11, 2017 - link

    You chose to finish the article with, "...the Intel CPU is still a win here." A sentence that simply doesn't belong in any Ryzen vs Sky/Kaby comparison, much less as the final statement. What a joke of a shill you must be. BTW, your own testing reveals that tasks and games truly dependent on single-thread IPC find Broadwell DT the victor over newer Intel garbage, yet you mention Broadwell here as though it were dated... pitiful.
  • Maleorderbride - Tuesday, April 11, 2017 - link

    Read more than eight words and you will see that he refers to DX9 and DX11 specifically, which of course benefit far less from more CPU cores. DX12 is generally a win for AMD. What's the problem?
  • farmergann - Tuesday, April 11, 2017 - link

    The problem is clearly laid out in the OP. Pitiful that an i5 can be so thoroughly trounced yet moronic shills such as this author still go out of their way to make laughable attempts at rationalizing the defunct intel product.
  • Icehawk - Tuesday, April 11, 2017 - link

    Yay, we finally are at a point where AMD is a viable choice. It will be interesting to see what/if Intel fires back. If I was buying a new PC right now it would be a tough choice because I do a fair amount of HEVC encoding but am primarily a gamer.
  • psychobriggsy - Wednesday, April 12, 2017 - link

    If you do both at the same time, then the 1600's additional two cores and SMT will really help hide the effect of the encoding on gaming.
  • Falck - Tuesday, April 11, 2017 - link

    Great review! Just another typo on page 3:

    "As the first consumer GPU to use HDM, the R9 Fury is a key moment in graphics..."

    I think it's HBM?
  • Maleorderbride - Tuesday, April 11, 2017 - link

    Why did the i5-7600K get dropped from the majority of the benchmarks (or their results)? It seems rather odd to not report the data with the same set of CPUs for every benchmark.

    Minor typo, but I believe in the Conclusion you mean to say " Looking at the results, it’s hard NOT to notice "
  • Outlander_04 - Tuesday, April 11, 2017 - link

    Is there going to be a follow-up article where you compare Ryzen performance with 3200 MHz RAM?
    It does make a difference
  • psychobriggsy - Wednesday, April 12, 2017 - link

    What's the cost differential of such RAM versus a more reasonable (when considering CPUs in this price range) option?
  • trivor - Tuesday, April 11, 2017 - link

    If you're going to be doing anything other than gaming (and only 1080p gaming), then Ryzen is a very good pick. When you're talking about video transcoding (one of my primary uses for my higher-end computers), the Ryzen 5 takes the i5 to town.
  • Joe Shmoe - Tuesday, April 11, 2017 - link

    Nice to see these chips tested with sensible GPU solutions.
    The GTX 1080 and above Nvidia cards (though AMD has yet to release anything as powerful) have been used by every site on the planet to test Ryzen chips;
    it took Jim on the AdoredTV YouTube channel to actually show that the lack of asynchronous compute hardware (which is not built into Nvidia cards) and/or the Nvidia drivers are kneecapping Ryzen chips in 1080p game benchmarking, in DX12, vs Kaby Lake i7s.
    Nvidia are just rubbish at DX12 for the money, and this will not improve, no matter how many transistors they throw at it, without async compute hardware.
    Most experienced users I know are going to buy an R5 1600 (non-X),
    clock it to 3.8 GHz on all 6 cores, slap in an RX 580 when they drop to £200-ish, and not actually worry about benchmarks.
    It will game fine at 1080p compared to what they are running now.
    The whole i7 'gaming chip' argument is moot:
    until ~20 months ago, Intel marketed i5s as gaming chips and the extra price on i7s was for a productivity edge.
    (5* consumer chips at a massive price hike, but they are a lot more capable of pro work.)
    I don't know anybody who uses a 7700K for anything, frankly.
    The whole system-price thing has got beyond a joke.
  • Meteor2 - Wednesday, April 12, 2017 - link

    I think exactly the same thing: R5 1600 + RX 580 is going to be unbeatable value for money.
  • deltaFx2 - Tuesday, April 11, 2017 - link

    @Ian Cutress: It would be helpful to know whether any of the workloads above use AVX-256, just to know how prevalent they are in common code. For example, does your 3DPM code use AVX-256? Also, when you run legacy tests, are the binaries legacy too, or do you recompile when applicable?
  • beerandcandy - Wednesday, April 12, 2017 - link

    This looks like it might be a good start for AMD to get back in the game. This isn't the normal way you try to do these things; mostly you want to show your best CPU "DESTROY" the competitor's best CPU. If AMD doesn't do that, it sucks, because they won't be able to compete in the halo CPU product areas. This will also confine them to a limited market space and force them into less profitable situations.
  • pandemonium - Wednesday, April 12, 2017 - link

    I am curious why comparable Intel 2011-3 CPUs weren't included. The i7-6800K would be in nearly direct competition with the 1800X based on cores and MSRP.
  • ET - Wednesday, April 12, 2017 - link

    Thanks for the comprehensive testing. I was missing some Core i7 results for comparison in some tests, such as the compilation test.
  • Lechelou - Wednesday, April 12, 2017 - link

    Leonard Nimoy voiced Civ IV, not V. Just sayin'.
  • madwolfchin - Wednesday, April 12, 2017 - link

    Someone at AMD should rethink the positioning of the R7 1700 vs the R5 1600X. The 1600X is faster in single-threaded work and about even with the 1700 in multi-threaded applications. Why would anyone buy the 1700, which is much more expensive?
  • Outlander_04 - Wednesday, April 12, 2017 - link

    Because you can OC it and have the same performance as an 1800X
  • Meteor2 - Wednesday, April 12, 2017 - link

    I've only read as far as the test-bed set-up page: I want to say a MASSIVE thank you to MSI for supplying the GTX1080s. Top stuff, and that won't be forgotten.

    Back to reading...
  • vladx - Wednesday, April 12, 2017 - link

    Wow, so Anandtech has now turned into AMD shills. Not only did you conveniently exclude or ignore the 7700K, but you also skipped the 7600K in the gaming benchmarks to paint Ryzen in a better light than reality actually reflects. I understand you had to finish the article ASAP, but Anandtech was all about quality articles and you really should've published the article when you had all the facts.
  • Outlander_04 - Wednesday, April 12, 2017 - link

    The information is out there
    https://www.youtube.com/watch?v=3VvwWTQKCZs
  • vladx - Wednesday, April 12, 2017 - link

    That wasn't my point; readers shouldn't have to go elsewhere to compare with CPUs that were excluded due to bias.
  • Meteor2 - Wednesday, April 12, 2017 - link

    What relevance has a $340 CPU got to a $250 CPU review?
  • vladx - Wednesday, April 12, 2017 - link

    I'd say a ton more relevant than the $499 Ryzen 7 1800X, which didn't get excluded.
  • psychobriggsy - Wednesday, April 12, 2017 - link

    Yes, it's in the same product line, so people can see how it compares.

    Which seems to be roughly around 80% of the 1800X, for around half the price.
  • vladx - Wednesday, April 12, 2017 - link

    And 7700k is more relevant for gaming which was the subject at hand so there you go.
  • Meteor2 - Wednesday, April 12, 2017 - link

    You didn't answer my question...
  • vladx - Wednesday, April 12, 2017 - link

    I just did: the 7700K is more relevant than the 1800X in gaming benchmarks, and as the competition it should've been included if a $499 CPU from AMD is included.
  • psychobriggsy - Wednesday, April 12, 2017 - link

    The 7700K is at a different price point; it was rightly compared in the Ryzen 7 reviews.

    Regardless, it would still lose in the multithreaded benchmarks, whilst having a small extra advantage in the gaming results.
  • vladx - Wednesday, April 12, 2017 - link

    The Ryzen 1800X is even more expensive than the 7700K and yet got included in the gaming benchmarks; ironic, considering the 7700K is much more relevant for gaming.

    Sorry, but the bias and double standards are obvious in the article.
  • Meteor2 - Wednesday, April 12, 2017 - link

    Benchmarks: I love the new Chromium compilation and straightforward 4K h264->h265 tests. Bravo.
  • Lehti - Wednesday, April 12, 2017 - link

    As it is, the Ryzen 5 lineup is a compelling purchase, even for us Europeans, who usually get awful deals on PC components. If AMD released it as an APU lineup, however, this would be a no-brainer for everyone. And yes, I know that Ryzen 5 chips are basically binned Ryzen 7s, but still...
  • vladx - Wednesday, April 12, 2017 - link

    Can't agree there. I'm from Europe, and the 7600K is only around $20 more expensive than the Ryzen 1500X, and it beats it in a lot of games and also fares better in office workloads. And that $20 at least evens out when you consider that Intel B250 motherboards are $20-40 cheaper than competing AMD B350 mainstream offerings.
  • Meteor2 - Wednesday, April 12, 2017 - link

    Where in Europe are you? In the UK the R5 1500X is $80 cheaper than the i5-7600K.
  • t.s - Wednesday, April 12, 2017 - link

    You're focusing on single-threaded apps/games, again and again and again. Not everyone uses their computer for ST apps/games, and when you factor that in, the R5 is a very compelling product.
  • vladx - Thursday, April 13, 2017 - link

    That's true, but enthusiasts and prosumers are maybe 2% of the market; the other 98% won't use their PC for more than browsing the web, watching movies and basic office work.
  • deltaFx2 - Friday, April 14, 2017 - link

    @vladx : That's true but if browsing the web, watching movies, and basic office work is all you do, Excavator is perfectly fine for you. I expect people buying 7700k or R5 or even some i5s have some workloads that justify the expense. This is especially true of Ryzen as the current crop need an external GPU. IIRC Raven Ridge desktop parts come only next year.
  • Sages - Wednesday, April 12, 2017 - link

    Would it be possible to do another power review with ryzen like you guys did with Carrizo in 2016? Or do you guys not have the equipment for that? For me as a electrical engineering student that article was very interesting. Keep up the good work!!
  • TheJian - Wednesday, April 12, 2017 - link

    "A side note on OS preparation. As we're using Windows 10, there's a large opportunity for something to come in and disrupt our testing. So our default strategy is multiple: disable the ability to update as much as possible, disable Windows Defender, uninstall OneDrive, disable Cortana as much as possible, implement the high performance mode in the power options, and disable the internal platform clock which can drift away from being accurate if the base frequency drifts (and thus the timing ends up inaccurate)."

    Or you could have just used Win7, since half the world still uses it and doesn't plan on quitting it until well after 2020. Until I see someone actually REVIEW these chips with Win7, they won't get my money. Vulkan runs on everything, and I have no need for DX12 (on top of everything you guys just mentioned, plus all the tracking etc etc). With no APU involved here, you should test Win7 if only to show whether there are any differences. Considering most of the issues seem to be BIOS fixes (motherboard), there's no reason to abandon Win7 in testing. Again, half of your readers are using it, more than 2x Win10, and many of these people are stuck because they don't know how to get back to Win7...ROFL

    I can't wait for everyone to drop DX12 once whatever they're working on currently is done (many are too far along to quit right now in a current game build). I think we'll see many more Vulkan/DX11-only announcements later this year. No point in a dev supporting an API (DX12, I mean) tied to an OS that was given away for a year and still can't hit half of Win7's user base, not to mention many woke up and were accidentally "upgraded" to Win10 with no ability to get back reliably. Uninstalling it doesn't always work (and many don't own Win7 discs to get back, in cases like Dell etc for some people) ;)

    But hey, keep acting like win7 doesn't exist much to the chagrin of your readers.
  • vladx - Wednesday, April 12, 2017 - link

    "Until I see someone actually REVIEW these chips with win7, they won't get my money."

    Who are you punishing here but yourself? You probably shouldn't anyway, because neither Ryzen nor Kaby Lake is supported on Win7.
  • bodonnell - Wednesday, April 12, 2017 - link

    Good luck with that. Since Microsoft is only supporting Kaby Lake and Ryzen on Windows 10 I guess you'll be sticking with 2015 and older technology for a while. I bet you desperately hung onto Windows XP too...
  • _zenith - Thursday, April 13, 2017 - link

    ... except for the fact that many games are console ports now, aaaaannd those are often - usually - already written for DX12, and this will only become more so once Xbox Scorpio is released, with its special DX12 hardware optimisations.
  • Notmyusualid - Tuesday, April 18, 2017 - link

    @zenith

    Go ahead and release a DX12-only game.

    Let me know how you get on with sales...
  • Arbie - Wednesday, April 12, 2017 - link

    When I do upgrade it will be with AMD. Even where (and if) Intel offers a little more performance per dollar, AMD has amazingly reduced the difference to the point where I can accept it in order to help fuel competition. If the market does not reward AMD for their valiant effort in Zen, the company may be forced to give up. It seems impossible for them to come from behind yet again in such a high-stakes arena. Then Intel will really slack off, and several years from now we'll ALL be worse off than if they were still duking it out.

    Everyone has to make their own decision, and I couldn't buy the Excavator etc fiascos, but the AMD product is now a real contender - and we need to keep them there.
  • bodonnell - Wednesday, April 12, 2017 - link

    Agreed. I updated my main rig a couple years ago and Intel was really the only option at the time, but if I was in the market now I would definitely be looking at a Ryzen 5 as keeping AMD around is better for consumers. For the money where Ryzen 5 lags it doesn't lag by much (and honestly legacy software that is single threaded was made to work on much lower performance cores) and where it shines (multi-threaded performance) it often beats price comparable Intel processors by a healthy margin.
  • BrokenCrayons - Wednesday, April 12, 2017 - link

    Ryzen 5 is an interesting CPU, worth a careful look given the outcome of the benchmarks in this article. Modern workloads seem to be much more likely to use more than one thread, and legacy workloads that are single threaded would perform perfectly well on just about any modern CPU, so it really isn't a difficult choice to look into a Ryzen 5 if you fall into its price bracket. AMD's APU offerings in the future might offer a better value for some customers, since the price of a Ryzen CPU doesn't currently include graphics. People happy with iGPU performance would either require a dedicated graphics card purchase or reuse one they already have available to build a complete system around a Zen-based processor, so those sorts might be better off waiting until the APU versions are released later this year, or they might be compelled to purchase a competing Intel product with an iGPU.
  • bodonnell - Wednesday, April 12, 2017 - link

    Can't wait to see what AMD does with the Zen core in the mainstream and mobile markets. A well balanced quad core design with a good Polaris based iGPU will be all most consumers need for their day to day use.
  • OddFriendship8989 - Wednesday, April 12, 2017 - link

    Is there a reason you don't put the 7700K in these charts? I mean, if you're going to put in the 1700X and 1800X, you should put in the 7700K too. Plus, at just $80 more, it's honestly a CPU being considered by people looking at the 1600X.
  • vladx - Wednesday, April 12, 2017 - link

    Reason is obvious: AnandTech has an AMD bias.
  • Outlander_04 - Thursday, April 13, 2017 - link

    Unlike yourself, and your well respected neutrality ?
  • vladx - Thursday, April 13, 2017 - link

    Yes as someone with both a 7700k system and a 1700X system I can safely call myself unbiased as I hold no special loyalty towards any brand.
  • Cooe - Monday, March 1, 2021 - link

    Liar liar, pants on fire lol.
  • cryosx - Thursday, April 13, 2017 - link

    they were testing against the direct competition (i5s) and the rest of the ryzen family. Makes sense, though I guess having the i7s in would be a nice touch.
  • Nightsd01 - Wednesday, April 12, 2017 - link

    "[Ryzen] has 50% more cores and 200% more threads"

    Wouldn't it be 300% more threads? 12 threads is 3x more than 4
  • Outlander_04 - Thursday, April 13, 2017 - link

    200% MORE
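
    (The distinction is easy to trip over; a quick sketch in Python makes "3x as many" vs "200% more" explicit:)

```python
# "Nx as many" compares the totals directly, while "X% more"
# measures the increase relative to the baseline.
base_threads = 4     # i5-7600K: 4 cores / 4 threads
ryzen_threads = 12   # R5 1600X: 6 cores / 12 threads

times_as_many = ryzen_threads / base_threads                        # 3.0x as many
percent_more = (ryzen_threads - base_threads) / base_threads * 100  # 200% more

print(times_as_many, percent_more)  # 3.0 200.0
```

    So 12 threads is 3x as many as 4, but only 200% more - the article's wording was right.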
  • cvearl - Thursday, April 13, 2017 - link

    Curious what method on GTAV was used. I get in the 70's at those settings on my RX480 all VH settings on an i7 2600k.
  • ianmills - Thursday, April 13, 2017 - link

    Dolphin benchmarks still missing!
  • Notmyusualid - Tuesday, April 18, 2017 - link

    @ianmills

    I was looking for them too.
  • msroadkill612 - Thursday, April 13, 2017 - link

    Ryzen's tough for cheapskates. It's a nice cheap 4-core, but gee, for so little more you get a 6-core 1600.
  • msroadkill612 - Thursday, April 13, 2017 - link

    zen is just amds first act.

    The second is naples - gluing multiple ryzens together using their excellent new plumbing.

    The third is Vega (GPU has long been AMD's focus - in A10 APUs e.g.).

    The last and seismic act is a CCX with a single Zen core block of four cores & the same 4MB L3 cache, and a Vega GPU, possibly with HBM2 VRAM.

    Its not very new ground for them. its very similar to the architecture solutions needed for the a10.

    Incidentally, i heard a great debate about "dark coding?".

    coders love using the gpu for compute when they can, cos it shifts heat away from stressed processors. cooler processors can then run faster. IE, they try and shift the load around the circuitry to avoid generating hot spots.

    The conclusion omitted to say "if u consider pc software static.... Then buy intel"

    new generation and paradigm, or the last tart up of the old generation. Your choice.

    There are of course wins and losses on both sides, but we know that we are measuring the best of the old against the lowest point of the rapidly improving new (see the Ashes of the Singularity tweak).

    & as above, its just act 1 now.
    The author's haughty dismissal of AMD's top AM4 chipset features is outrageously deceptive (as said in posts here).
  • msroadkill612 - Thursday, April 13, 2017 - link

    It's worth noting that AMD mobos historically tend to support many CPU revisions, whereas an Intel CPU upgrade for a user is bound to require a new mobo.

    "ryzen 2" will probably simply drop into an am4 mobo.

    Its also interesting that many or all the mobos i have seen are ready to go for an apu - video connects onboard e.g.

    this indicates that raven ridge exists now, for pre-production mobo testing.
  • msroadkill612 - Thursday, April 13, 2017 - link

    I dont believe amd would go to all the trouble of doing zen and vega, and then merge zen with prev gen polaris for the APU.

    it doesnt make sense from several perspectives. amd philosophy, the architecture seen in ryzen - 2 zen core blocks on a ccx simply becomes one core block and one vega gpu on a single ccx.

    it will be a hell of a piece of silicon.
  • Johan Steyn - Friday, April 14, 2017 - link

    Ryzen 9 is not that far fetched. Looking at the server part coming soon, a Ryzen extreme could be happening, especially for workstations. Maybe it might even fit AM4, although that's unlikely with quad channels; I do not think the current SoC has enough pins. So maybe we might get a Ryzen 9 with plenty of cores and quad channel memory.
  • drajitshnew - Friday, April 14, 2017 - link

    Dear Ian,
    Please clarify a point. You have mentioned that both AMD & Intel have 16 CPU PCIe lanes, but AMD offers 4 pcie lanes for storage from the CPU. If the chipset is loaded this could have an impact on the following 3 situations,
    1. If the motherboard manufacturer routes those lanes from m2 to PCIe. Then those could be used, as storage, adding a GPU for GPGPU or a 10GbE NIC for use for a UHD media server, or AIC format storage.
    2. With a heavily loaded chipset, a NVMe drive like a 1 TB samsung 960 pro or comparable, may show improved performance, specially in sequential transfers.
    3. For a long lived system a large X-point or Z-nand or 3d SLC may show significant latency advantages.
  • cvearl - Sunday, April 16, 2017 - link

    You have odd 480 results on GTA V. Are you using the final run (with the jet) from the built in test? My 480 scores in the mid 70's using your settings on that run with an i7 2600k.
  • cvearl - Sunday, April 16, 2017 - link

    Looking back at my GTX 1060 SC results (before I replaced with my 480) It had similar results to what you show here (Assuming the final run of the built in test). Am I to understand that the 480 gets a better result on i7 than Ryzen?
  • Polacott - Monday, April 17, 2017 - link

    My experience with AMD processors is that they have aged perfectly. I mean, AMD processors got more support and performance as apps and the OS were updated to take advantage of more threads as the years passed. I would get the Ryzen 1600X without any doubts over the i5.
  • rmlarsen - Monday, April 17, 2017 - link

    Unfortunate typo: In the conclusion it says "Looking at the results, it’s hard to notice the effect that 12 threads has on multithreaded CPU tests. " I believe the author meant to write "it's hard NOT to notice".
  • Kamgusta - Tuesday, April 18, 2017 - link

    Why in the Earth nobody ever considers the i7-7700?!?! And keep on putting the Ryzen CPUs only against the i5-7600K and/or the i7-7700K?

    i7-7700 has the same clocks as the i5-7600K, but double the threads and 2MB more L3. It consumes a lot less power than the i7-7700K and no more power than the i5-7600K. You can picture it as a more powerful i5-7600K or as a slightly less powerful i7-7700K (but far more efficient).

    If anyone is torn between R6-1600 and i5-7600K then the i7-7700 is, quite ironically, the best choice.
  • Ratman6161 - Tuesday, April 18, 2017 - link

    So over the weekend I upgraded my system from an i7-2600K to a Ryzen 5 1600. First off, I couldn't care less about gaming, so I'll put that out there up front. I can buy (in order of real world price)
    i7 7700K for $300
    ryzen 1600x would have been $249
    Ryzen 5 1600 was $219
    i5 7600K for $210.
    I went with the R5 1600. For highly multi threaded tasks (remembering I don't care anything about games) the six core R5's compare very favorably with the i7 7700K, even though most of the comparisons you see match them up against the i5. And the big difference between the 1600 and the 1600X is clock speed...and they are unlocked. So for me the 1600 ended up being a no-brainer.
    So for us non-gamers anyway, i'd disagree with the i7 7700K being the best choice.
    Also, when comparing prices, look at the platform price including motherboards. I got an Asrock AB350 Pro 4 for $39 bundled with the CPU so total price $258. Cheapest 7600K bundle at the same place: $315, cheapest 7700K bundle $465.
  • Kamgusta - Tuesday, April 18, 2017 - link

    Are you replying to me? I talked about i7-7700 (NOT K), not i7-7700K.
    Which copes very well with some budget DDR4-2400 and a budget H270 board with no penalties whatsoever.
  • msroadkill612 - Wednesday, May 3, 2017 - link

    Interesting. ta for sharing. pretty awesome price for the 1600/mobo bundle.

    How did the Intel mobo compare for functionality, do you think?
  • loguerto - Friday, April 21, 2017 - link

    9 is not prime :)
  • LawJikal - Friday, April 21, 2017 - link

    What I'm surprised to see missing... in virtually all reviews across the web... is any discussion (by a publication or its readers) on the AM4 platform's longevity and upgradability (in addition to its cost, which is readily discussed).

    Any Intel Platform - is almost guaranteed to not accommodate a new or significantly revised microarchitecture... beyond the mere "tick". In order to enjoy a "tock", one MUST purchase a new motherboard (if historical precedent is maintained).

    AMD AM4 Platform - is almost guaranteed to, AT LEAST, accommodate Ryzen "II" and quite possibly Ryzen "III" processors. And, in such cases, only a new processor and BIOS update will be necessary to do so.

    This is not an insignificant point of differentiation.
  • systemBuilder - Friday, April 28, 2017 - link

    I believe the Ryzen core is 20% slower than the Intel core, in instructions per clock. A hyperthread is only about 30% as fast as a full core. With both of these factors thrown in, 6 Ryzen Cores = 5 Intel cores. So the advantage of Ryzen is actually miniscule. It's why I sold all of my AMD stocks in February.
  • willis936 - Thursday, July 27, 2017 - link

    "sold all of my AMD stocks in February"

    I'm cringing.
  • systemBuilder - Friday, April 28, 2017 - link

    Ryzen's cores are 20% slower than Intel's. A hyperthread is only worth (at best) 30% as much as a full core. Therefore, Intel offers 4 cores, AMD offers 6 * 0.8 * 1.3 = 6.24 cores, a decent bump but obviously not significant because few if any games are set up to use more than 8 cores, which in the best case for AMD would be (6 + 0.3 + 0.3)*0.8 = 5.28 cores, a small bump.
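
    (For what it's worth, that arithmetic can be written out as a tiny model - keeping in mind the 0.8 IPC ratio and 0.3-per-hyperthread factor are this commenter's assumptions, not measured numbers:)

```python
# Crude throughput model using the poster's assumed factors:
# each physical core counts as 1, each busy SMT thread adds 0.3,
# and the total is scaled by an assumed 0.8 relative IPC.
IPC_RATIO = 0.8    # assumed Ryzen-vs-Intel IPC (the poster's figure)
SMT_UPLIFT = 0.3   # assumed extra throughput per hyperthread

def effective_cores(physical, smt_threads_busy, ipc=IPC_RATIO):
    return (physical + SMT_UPLIFT * smt_threads_busy) * ipc

print(effective_cores(6, 6))  # all 12 threads busy: (6 + 1.8) * 0.8 = 6.24
print(effective_cores(6, 2))  # an 8-thread game:    (6 + 0.6) * 0.8 = 5.28
```

    Plug in different factors and the conclusion swings a lot - which is exactly why the 20% IPC figure is worth challenging.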
  • Cooe - Monday, March 1, 2021 - link

    Except Zen 1 was only about ≈5% slower in IPC vs Kaby Lake, not 20%...
  • msroadkill612 - Monday, May 1, 2017 - link

    Some thoughts from a ~newb are that if 8 cores are the new black, then maybe 16GB (or 2GB per core) of RAM isn't as generous as it seems?

    Also, its a new paradigm. Tasks which taxed the cpu and thus historically avoided (software raid e.g.), can now be embraced with ~impunity.

    "Normal" CPUs can handle 16 jobs before a queue forms, commonly, an increase by a factor of 8 for a prospective upgrader.
  • Gothmoth - Tuesday, May 2, 2017 - link

    "...affords a comfortable IPC uplift over Broadwell....."

    yeah does it?

    what is comfortable??.... 10%.... who are you trying to kid here?
  • msroadkill612 - Thursday, May 4, 2017 - link

    I still don't get what the deal with AM4 mobos and a pair of M.2 PCIe 3.0 NAND SSDs in RAID 0 is?

    the x370 (but not the x350) chipset seems to allow an extra 4x pcie3 lanes, directly linked to the cpu (not shared lanes via the chipset), for one or 2 x onboard m.2 sockets.

    But it's never made clear, to me anyway, whether if you use 2 M.2 drives each gets 2 lanes of PCIe 3.0, and therefore they are perfectly matched, as desired by RAID 0.

    Surely it's not just me that finds a 4GBps storage resource exciting?

    (e.g. see storage in specs on link re m.2)

    https://www.msi.com/Motherboard/X370-XPOWER-GAMING...

    https://www.msi.com/Motherboard/X370-XPOWER-GAMING...

    I suspect it translates to 2 x 2-lane PCIe 3.0 links - 2GBps for each M.2 NVMe SSD socket, which, surreally, is less than e.g. a Samsung NVMe SSD's maxed-out ability of 2.5GB+/s each.

    Drives are now too fast for the interface :)

    A pair of nand nvme ssds could individually max out each of the 2, 2 pci3 lane sockets (2 GB each), for a total of up to 4GBps read AND WRITE (normally write is much slower than read on single drives). Thats just insane storage speed vs historical norms - a true propeller head would kill for that.

    I also hear ssdS are so reliable now, that the risks of raid 0 are considerably diminished.

    IMO, a big question prospective ~server & workstation ryzen users should be asking, is if they can manage w/ 8 lanes of pcie3 for their gpu - which seems entirely possible?

    "Video cards do benefit from faster slots, but only a little. Unless you are swapping huge textures all the time, even 4x is quite close to 16x because the whole point of 8GB VRAM is to avoid using the PCIe at all costs. Plus many new games will pre-load textures in an intelligent manner and hide the latency. So, running two 8x SLI/CF is almost identical to two 16x cards. The M.2 drives are much faster in disk-intensive workloads, but the differences in consumer workloads (load an application, a game level) are often minimal. You really need to understand the kind of work you are doing. If you are loading and processing huge video streams, for example, then M.2 is worth it. NVMe RAID0 is even more extreme. Will the CPU keep up? Are you reaching a point of diminishing returns? And if you do need such power, you should consider a separate controller to offload the checksuming and related overhead, otherwise you will need 1 core just to keep up with the RAID array."

    (interesting last line - w/ 8 cores the new black, who cares?)

    This would free up 8x pcie3 lanes for a high end add in card if a big end of town app requires it.

    So yeah, re a raid 0 using 2 m.2 slots onboard a suitable 2xm.2 slot am4 mobo, do I get what i need for proper raid0?

    i.e.

    each slot is 2GBps, so my RAID pair is evenly matched, and the pair theoretically capable of 4GBps before bandwidth is saturated?
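
    (A back-of-envelope check, assuming each M.2 slot really did get two dedicated PCIe 3.0 lanes from the CPU - which is the open question here:)

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# so each lane carries roughly 0.985 GB/s of payload one way
# (before protocol overhead).
LINE_RATE_GT = 8.0      # PCIe 3.0 line rate per lane, GT/s
ENCODING = 128 / 130    # 128b/130b encoding efficiency

def pcie3_bandwidth_gbytes(lanes):
    # divide by 8 to convert gigabits to gigabytes
    return LINE_RATE_GT * ENCODING / 8 * lanes

print(round(pcie3_bandwidth_gbytes(2), 2))  # ~1.97 GB/s for an x2 slot
print(round(pcie3_bandwidth_gbytes(4), 2))  # ~3.94 GB/s for a full x4 slot
```

    So even at two lanes per drive, each SSD tops out just under 2GB/s - below what a 960 Pro can read on a full x4 link, which matches the complaint above.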
  • msroadkill612 - Thursday, May 4, 2017 - link

    PS re my prev post

    specifically from the link

    "• AMD® X370 Chipset
    ....
    • 2 x M.2 ports (Key M)
    - M2_1 slot supports PCIe 3.0 x4 (RYZEN series processor) or PCIe 3.0 x2 (7th Gen A-series/ Athlon™ processors) and SATA 6Gb/s 2242/ 2260 /2280/ 22110 storage devices
    - M2_2 slot supports PCIe 2.0 x4 and SATA 6Gb/s 2242/ 2260 /2280 storage devices
    • 1 x U.2 port
    - Supports PCIe 3.0 x4 (RYZEN series processor) or PCIe 3.0 x2 (7th Gen A-series/ Athlon™ processors) NVMe storage
    * Maximum support 2x M.2 PCIe SSDs + 6x SATA HDDs or 2x M.2 SATA SSDs + 4x SATA HDDs."

    It sure seems to be saying the 2nd M.2 port would be a PCIe 2.0 port, and the first M.2 port uses the whole 4 PCIe 3.0 lanes linked to the CPU.

    thats sad if so - it means no matched pair for raid 0 onboard. only a separate controller would do.

    I cannot see why - why can't the 4 PCIe 3.0 lanes be shared evenly?
  • asuchemist - Wednesday, May 17, 2017 - link

    Every review I read has different results but same conclusion.
  • rogerdpack - Tuesday, March 27, 2018 - link

    "hard to notice" -> "hard not to notice" I think...
