89 Comments

  • eragonhyd - Tuesday, January 28, 2020 - link

    Wonder if their next-gen APUs can skip RDNA1.0 and jump straight to v2.0. Given that both Zen3 and RDNA2 are late 2020 launches, this seems within the realm of possibility.
  • Hul8 - Tuesday, January 28, 2020 - link

    I'd think that with the additional tuning required to keep power consumption down, APUs will continue to be around a year behind the dGPUs.

    There's also the fact that since APUs are lower cost parts, whenever a new architecture involves use of a new - or even leading edge - process node, use of that node for these kinds of cost-sensitive products might not make financial sense for a long time. (See: All Ryzen generation APUs until now.)
  • Spunjji - Wednesday, January 29, 2020 - link

    Agreed re: GPUs arriving late in APUs. Seems to be a cadence they're happy with, and the majority of their target customers just won't know or care.
  • Kangal - Wednesday, January 29, 2020 - link

    Yeah, AMD is doing things wisely.
    They're innovating and cashing in on their success, funnelling that back to pay down their debts, making good deals with chip fabs, improving brand perception, and gradually increasing their R&D.

    They're focused on: firstly Servers (big money), then Desktop CPU (scale), then Dedicated GPU (reputation), then Budget APUs (efficiency), and lastly Mobile (optimisation).

    So, their Mobile division was ~1.5 years behind their Server progress, is slightly less behind now, and hopefully in a few years it will be only ~0.5 years behind (a same-year release). But yeah, 2020 Mobile will reuse the old Vega GPU and old Zen2 cores on the mature 7nm node. It would probably be late 2021 before we could expect further optimisation: use of the newer 7nm+ node and Zen2-refresh (aka Zen3) cores, probably with a Navi-refresh GPU. In other words, they optimise for the things that will turn a profit, and then try to retrofit those technologies into more budget and low-power products. They are sticking with the "monolithic design" for the mobile chipsets (and perhaps a few APUs), which demands further time for optimisations.

    I believe AMD is actually working on bringing the Infinity Fabric and "chiplet design" to the mobile space. Overall it is the less efficient design, but since they have the edge in lithography, they will leverage that to level the field. What it will mean is that their "mobile chiplets" will be marginally slower and marginally thirstier than Intel's, but they will have double the core count and a much more capable iGPU, and they will be faster and cheaper to produce; overall, they will be more successful.

    Intel's only way to bounce back is if they can utilise their next-gen architecture that's supposedly faster and more efficient than Zen3, and produce it on their "10nm+" node (which is similar to TSMC's 7nm+), and include a more capable iGPU, and do it all profitably, and... etc etc. There are too many "and ifs" in there. The more likely scenario seems to be that Intel becomes the company that sells higher-priced products that are slower and less efficient, while burning through too much money on, basically, marketing.
  • TouchdownTom - Wednesday, January 29, 2020 - link

    I was with you until you suggested that the next Ryzen Mobile (5000) would again be on Zen 2 and 7nm. I would be absolutely shocked if they did this, and it would smash most of the progress they have made thus far on laptop chips. The next gen of mobile is going to be on 7nm+ and Zen 3. If they can't manage to make that work by this time next year, then they will and should delay the chip's release until they can make it work. The real question is whether AMD can get 2nd-gen RDNA into the chips, or if they only use 1st-gen RDNA instead. The idea of them using Vega again is simply nonsense--AMD would be uncharacteristically lazy if that occurred. I think Zen 3 gets announced alongside 7nm+ desktop Ryzen 4000 by mid 2020.
  • Kangal - Friday, January 31, 2020 - link

    Ryzen Mobile 4000 should have been a 2019 product, but it's been pushed to 2020.
    Obviously, AMD doesn't have as many resources as Nvidia or Intel, so it is understandable: they're busy. I expect AMD either does a refresh of Ryzen Mobile 4000 in the next 12 months, or pushes forward to its successor.

    So I'm stretching the definitions a little here, but be prepared to be slightly disappointed with AMD's mobile chipsets in the next 11 months. Zen3, as things are looking, is going to be an "evolution" instead of a "revolution" (a la Zen1 to Zen+). Also, the 7nm+ node won't offer much improvement in technology; I expect the main benefits will be that it's cheaper for AMD to purchase, has a lower defect rate, and offers better stability. So even combined, we might not see too much of an improvement. What I'm anxious about is the iGPU and drivers. Hopefully those get improved drastically in the next 9-12 months; otherwise, we will be waiting almost 18-24 months. Obviously, things are cyclical in development and release.

    I was hoping for a GPD Win 3 in late 2019, one that's powered by a 7nm quad-core Zen2 (5W-15W) with a capable Navi-8 iGPU. Obviously that hasn't happened. So to get that meaningful upgrade over the Intel Core M/Y-i3, or Core U-i7, or AMD Ryzen V1605B... we're going to have to wait for things to align: 7nm+ Zen3 and 7nm+ RDNA2. And that is going to happen by the end of this year. So it will take another 12 months (hopefully earlier) for that to trickle down into a mobile APU. Which means the PS5 and XBV aren't going to look very impressive in 2021; tech moves on!

    Remember, the mobile sector is NOT a priority for AMD, and they're also limited in resources. That's not to trivialise their efforts or their mobile products, but they will get better as the company starts expanding again.
  • 29a - Wednesday, January 29, 2020 - link

    I think so too, since the next-gen consoles will probably use RDNA 2.
  • Frenetic Pony - Tuesday, January 28, 2020 - link

    Kind of... weird. Is it bad wording? Why would you have a "refresh" of Navi AND "next generation RDNA architecture" in the same year?
  • Ian Cutress - Tuesday, January 28, 2020 - link

    Different market segments probably
  • SaberKOG91 - Tuesday, January 28, 2020 - link

    It'll be interesting to see what AMD have learned from optimizing Vega for the 4000 series APUs. I've been wondering if AMD did a few more rounds of design on Navi 20 after the failed tape-out in 2018. It seems like once they knew Navi 20 wouldn't be ready mid-2019 that they were better off using the time to release a better product in 2020, using Radeon VII as their fall-back in their higher-end slot for 2019. I'd have to imagine that extra year of optimization will yield some interesting results.

    I won't be surprised if Navi 20 is double the performance of Navi 10 at a similar TDP, drawing from the Vega optimizations on 7nm and the enhancements of 7nm+. Navi 10 will get a similar treatment for the refresh. Though, the next AMD GPU on 5nm in 2021 is what I'm most looking forward to.
  • M8Hacker - Wednesday, January 29, 2020 - link

    I'm also curious about the Vega enhancements, with the new APUs sporting Vega at a 59% improvement that isn't tied entirely to node enhancements.

    AMD has said before that both Vega and RDNA will continue to exist. Having owned both products, I can tell you that Vega is still superior on compute while Navi just throws frames out at an insane rate given the difference between the two in raw horsepower.

    But I expect we'll see more of RDNA this year, along with the new ray-tracing RDNA2, in addition to the compute-happy Vega to round out demand.

    The fact that Vega is better at compute may also be part of the reason for it being in APUs. As software development advances to offload more of the processing to the GPU, even for non-graphical workloads, this could prove to be a good move, allowing the Vega APUs to age much better.

    Anyway, every time I wonder why AMD has made a decision lately, a few months later I find myself thinking "oh, that's really clever", so I'm excited about this year.
  • SaberKOG91 - Wednesday, January 29, 2020 - link

    I think Vega made a lot of sense considering the pre-existing die shrink to 7nm. They were able to spend all of their time optimizing for power rather than having to do a shrink or add new features. Navi was a whole new architecture on a brand new node, most likely started long before the 7nm Vega shrink. I think someone over there was smart enough to realize that the cost of optimizing power on a small-die design was worth it and that they needed to focus on catching up to Nvidia in efficiency.
  • sing_electric - Wednesday, January 29, 2020 - link

    I think your assessment is the best explanation of Radeon VII that I've seen, because for a while their GPU strategy looked VERY scattershot: release Vega on 14nm, don't scale it down the product stack... keep Polaris (which are basically RX 4xx chips from mid-2016) around... then make a 7nm Vega part for data/compute... then a 12nm shrink of Polaris... and then finally a consumer 7nm Vega, all before releasing Navi.

    It just seemed like a lot of work making new masks for not a lot of payoff.
  • del42sa - Thursday, January 30, 2020 - link

    yes, for 8-9 years they were not able to fix all of GCN's shortcomings, and now miraculously they've made it so effective :-D I wonder what stuff you guys are taking ;-) seriously
  • 335 GT - Thursday, January 30, 2020 - link

    GCN has always been about compute. Once they found it wouldn't scale in games they were kinda stuck. I doubt GCN will be going anywhere soon.
  • uefi - Tuesday, January 28, 2020 - link

    Hopefully AMD follows the Nvidia Super route of refreshing with higher-binned dies, instead of merely higher clocks on the same die.
  • Veradun - Wednesday, January 29, 2020 - link

    dies will all be better after a year of producing them
  • Hul8 - Tuesday, January 28, 2020 - link

    Easy:

    Introduce 2-4 halo RDNA2 parts at the top as 6800/XT and 6700/XT.

    Refresh or rebrand 5700 series into 6600, 5600 into 6500, etc.
  • nevcairiel - Wednesday, January 29, 2020 - link

    That's been the AMD GPU strategy since forever anyway. Add one or two actual new products at the top, fill the middle and bottom with crappy rebrandings.
  • M8Hacker - Wednesday, January 29, 2020 - link

    Pretty sure this is not what they did with Navi...
  • TheinsanegamerN - Wednesday, January 29, 2020 - link

    Performance-wise, they sure did. They launched a nuVega 64 (the 5700 XT), the nuVega 56 (the 5600 XT) and the nu580 (the 5500 XT). The 5500 and 5600 were both badly overpriced, maybe $10-15 cheaper than the previous-gen parts with no noticeable advantage. The high end was left high and dry. Again.
  • Spunjji - Wednesday, January 29, 2020 - link

    So not what they did, then.

    Moving somebody else's goalposts is still moving the goalposts. The comment was about re-branding, not about how you personally feel aggrieved by AMD's inability to compete at the high end.
  • nevcairiel - Wednesday, January 29, 2020 - link

    Exception that proves the rule?
    They sure did it consistently before that. And it sounds like they may go back to it.
  • Hul8 - Wednesday, January 29, 2020 - link

    Even if AMD didn't do it with Navi, they've done rebrands and mild refreshes multiple times in the past decade. The one exception doesn't preclude them doing it now.

    The comment was about 2020, and why they would refer to both "Navi refresh" and "RDNA2" as two distinct entities.

    Trying to figure out what's behind AMD's PR material gets much harder if you dismiss things too easily; bury your head in the sand and go "Navi wasn't a rebrand so AMD doesn't do rebrands, la-la-la-la-la-la-la..."
  • Korguz - Wednesday, January 29, 2020 - link

    Hul8, and Nvidia hasn't done this as well??
  • Hul8 - Wednesday, January 29, 2020 - link

    Your fanboyism is showing. Why are you bringing up Nvidia, when this news item and discussion is about AMD and their plans - trying to speculate what AMD will do?
  • Hul8 - Wednesday, January 29, 2020 - link

    As an aside, I've been on AMD graphics since 2014, because I felt their products and prices made the most sense for me. I'm not a big spender, though.

    I also hope they'll catch up to Nvidia across the product stack and can put up a fight even once Nvidia progresses to ~7nm.

    I just don't identify with a company - they're all in it to make money (off us).
  • Hul8 - Wednesday, January 29, 2020 - link

    *since 2010, actually. Forgot about the 5850...
  • Korguz - Thursday, January 30, 2020 - link

    Hul8, I'm just pointing out that you seem to blame AMD for rebranding.. but keep in mind, Nvidia does it as well.. in some cases worse than AMD.
    Fanboy of Nvidia? Hardly: out of the 6 comps I have, 2 run 1060s and the other 4 run AMD cards, from a 5870 to a 5970.
  • Hul8 - Thursday, January 30, 2020 - link

    @Korguz Why do you keep bringing Nvidia into this? Everyone knows their shitty practices, but they in no way redeem AMD of theirs.

    While rebrands seem to be the unavoidable reality of the GPU market - especially for OEMs, since they want constant "new" products - that doesn't mean that *each and every company* that does them shouldn't be held accountable. Each separately, and without regard to "he did it first" or "he did it too" (which are defenses only applicable to the playground).
  • Korguz - Thursday, January 30, 2020 - link

    I never said they did.. but to blame one company for one thing, when another company does the same, and not acknowledge it, is kind of dumb... and some may think of you as a fanboy yourself...

    but whatever...
  • Hul8 - Saturday, February 1, 2020 - link

    @Korguz
    I chose to not acknowledge Nvidia facts because I was resisting your attempt to derail the discussion, which was about AMD's GPU plans. Not their past. Not Nvidia. Not the GPU market at large. Not sharing the blame.

    I also wasn't actively "blaming" AMD, but using their history as a building block for some speculation. If you want to make any guesses as to what a company is likely to do, you have to consider what they've said publicly and what they've done in the near past.

    All companies should be held to the same high standards, and I would voice that opinion - and blame companies - as long as it was on-topic to the discussion at hand.
  • Hul8 - Wednesday, January 29, 2020 - link

    Now correcting myself:

    Tom's Hardware clarified with AMD (https://www.tomshardware.com/news/amds-navi-to-be-...):

    The "refresh" Dr. Lisa Su was referring to seems to be less of a "refresh" as computer enthusiasts understand it, and more a "refresh" as investors understand it - a new line of products.
  • Yojimbo - Tuesday, January 28, 2020 - link

    What is the official explanation of what RDNA2 is supposed to be? Maybe RDNA2 is going into the consoles and they are refreshing Navi for PC?

    RDNA2 is a 7nm+ part whereas Navi is a 7nm part. According to TSMC there are significant design characteristic differences between the two processes; it's not like going from 16nm to 12nm. I'm just making the fracture at what seems the most natural place: APU versus discrete. I guess they could fracture somewhere in their discrete GPU stack, but surely they'd be saving less money that way; I just don't know how much less. I would think that, rather than paying down debt, it would make more sense to bring RDNA2 up and down their stack once they created a 7nm+ discrete GPU based on the RDNA2 architecture, but I don't really know.

    That should be the case if they expected significant market share from the parts. But if RDNA2 is the high end and the Navi refresh the low end, and they would expect bigger market share on the low end, then what gives? So if the split does happen within the discrete GPU stack, then I think the best conclusion would be that RDNA2 does not provide much architectural performance increase over RDNA; otherwise surely having it up and down their lineup would be worth it. So it would offer mostly ray tracing and variable rate shading and other features not considered critical for the low-end PC gaming market.
  • SaberKOG91 - Tuesday, January 28, 2020 - link

    Given that RDNA2 will likely start as a high-performance, low-volume part in the consumer space, as well as a high-margin, higher-volume part in the professional/datacenter space, I think AMD will more than break even. Besides, the long-term investment of demonstrating that AMD can compete at the high end with Nvidia is worth every penny. A Navi refresh on 7nm+ is a low-risk, low-cost way to keep the mid-to-low range lineup well represented as well. It's really no different than selling Polaris and Vega at the same time, with the exception that I believe RDNA2 will deliver what Vega could not.
  • Veradun - Wednesday, January 29, 2020 - link

    Just an example lineup:

    RX6900XT rdna2
    RX6900 rdna2
    later on RX6800XT rdna2 (third cut of the chip, like 5600XT is for Navi10)

    RX6700XT rdna (tweaked 5700XT - might be 16Gbps RAM and/or higher clocks)
    RX6700 rdna (tweaked 5700)
    RX6600XT rdna (tweaked 5600XT)
    later on RX6500XT rdna (tweaked 5500XT)
  • JKflipflop98 - Wednesday, January 29, 2020 - link

    As the consumer, I think I'd rather have the new 2.0 architecture even if it is a mid-range card. I'm guessing the 2.0 will be the "big boy" on the high-end of the stack and the revised 1.0 will be entry-level to mid-range. Seems silly to me as one of the ways you save money during fabrication is chopping down your defective high-end parts to mid and low ranges.

    Whatever, if the beancounters say that's the way to go, then that's the way they'll go. Sadly.
  • Korguz - Wednesday, January 29, 2020 - link

    It's called die harvesting.. AMD, Intel and Nvidia have been doing it for ages now.. it's not a silly way to save money; why throw out a die if only part of it is bad but the rest is still usable? Aka.. an Athlon X4 made into an X3 because 1 core had defects....
  • extide - Wednesday, January 29, 2020 - link

    Because we will get the 6000 series. The existing 5500, 5600 and 5700 will get 'refreshed' into 6000-series equivalents -- perhaps with a mild die rework or process update -- and then Big Navi will be released at the top as RDNA2.
  • SolarBear28 - Tuesday, January 28, 2020 - link

    If RDNA 2 supports ray tracing, it makes sense that it would debut on cards that are actually powerful enough to do it (aka Big Navi), while the rest of the lineup gets a refresh, probably without ray tracing.
  • Yojimbo - Tuesday, January 28, 2020 - link

    But the coin has two sides. If RDNA2 offered improvements other than ray tracing and VRS it would make sense to bring those improvements to their entire lineup rather than making a refresh of a lesser architecture. AMD now should have the money to do it and increase their brand and their competitive position. They could strip the ray tracing out of the die and bring it out, just like NVIDIA did with Turing. Perhaps they intend to do that in 2021 and bring out the Navi refresh in 2020.
  • SaberKOG91 - Tuesday, January 28, 2020 - link

    The 16XX series vs the 20XX series is a closer analog to this. The 16XX series was a completely different die design than the 20XX series. It not only doesn't have RT cores, but the Tensor cores were replaced with just FP16 support.
  • Yojimbo - Wednesday, January 29, 2020 - link

    We don't know how tensor cores work so we can't really say that.
  • SaberKOG91 - Wednesday, January 29, 2020 - link

    We can, though. If you do a scatter plot of die size vs shader count, the linear fit for the 16XX dies is y = 0.164x + 32 (R^2 = 1.00) and the linear fit for the 20XX dies is y = 0.134x + 134 (R^2 = 0.998). While I find it curious that the 16XX dies actually have larger shaders than the 20XX dies (optimization for higher clock speeds?), it's clear that there is a distinct difference in their makeup.
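
    If you want to check the numbers yourself, here's a minimal sketch. The die areas and full-die shader counts are assumptions pulled from public Turing spec sheets (TU102/TU104/TU106 for 20XX, TU116/TU117 for 16XX), so the exact coefficients may differ a little from mine:

    import numpy as np

    # (full-die shader count, die area in mm^2) -- assumed public spec-sheet values
    rtx_dies = np.array([(4608, 754), (3072, 545), (2304, 445)], dtype=float)  # TU102/104/106
    gtx_dies = np.array([(1536, 284), (1024, 200)], dtype=float)               # TU116/117

    for name, dies in (("20XX", rtx_dies), ("16XX", gtx_dies)):
        shaders, area = dies[:, 0], dies[:, 1]
        # Least-squares linear fit: area = slope * shaders + intercept.
        # Slope approximates area per shader; intercept approximates the fixed
        # per-die blocks (memory controllers, NVENC, hubs, etc.).
        slope, intercept = np.polyfit(shaders, area, 1)
        residuals = area - (slope * shaders + intercept)
        r2 = 1 - residuals.var() / area.var()
        print(f"{name}: area = {slope:.3f} * shaders + {intercept:.0f}  (R^2 = {r2:.3f})")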
  • Yojimbo - Wednesday, January 29, 2020 - link

    They took out the RT cores, and you have no idea how large the RT cores are. Memory controllers also change across the various GPUs, as does the number of NVENC units, I believe. What we do know, however, is that the tensor cores have always increased in lock-step with the shaders. If tensor cores are separate cores that were taken out of the 16 series cards, then the die size per shader for the 16 series should be smaller, not larger as you found. Plus, RT cores seem to scale with shaders as well, so the proportionality constant should be even smaller for the 16 series. So it seems to me that you showed either that the tensor cores exist in the 16 series but not the 20 series, or that your method or data is insufficient.
  • TheinsanegamerN - Wednesday, January 29, 2020 - link

    That's a lot of words to say "No, Saber, I don't like your math, you can't know more than me, you're WRONG!!!!11!!!!"

    If the 1660 line had tensor cores, then deep learning, DLSS, and ray tracing would be available on them. If they were firmware locked, someone would have opened them up by now.
  • Yojimbo - Thursday, January 30, 2020 - link

    No it's not. I gave a well-reasoned argument.

    "If the 1660 line had tensor cores, then deep learning, DLSS, and raytracing would be available on them."

    No. Volta has tensor cores but not ray tracing. And NVIDIA has good reason to lock tensor core operations out of the 1600 line, even if the lock is artificial or mostly artificial. When has anyone ever opened up NVIDIA's or AMD's locked FP64 in the past? Besides, it doesn't have to be firmware locked; they could make minor changes to physically disrupt the alternate data paths used by the tensor cores, if that were indeed how the tensor cores are done. It seems to me, looking at the differences between Pascal and Volta, and at the die sizes and performance of AI accelerators compared to Volta, that there is an awful lot shared between the tensor cores and the shaders. What exactly is shared I don't know, but at a certain point it makes a lot more sense to leave them in there instead of doing the massive amount of work to create new SMs that don't include the tensor core circuitry.
  • SaberKOG91 - Wednesday, January 29, 2020 - link

    Actually, I can account for all of the changes to memory controllers, NVLink, and RT cores just in the change in area from the Y-intercept. The baseline area usage of the 20XX is 4x larger than that of the 16XX, which is more than enough to double the width of the GDDR memory controller, add the HBM controller, add NVLink, add more NVENC, and still have room left over for the RT cores. I was convinced by the die shots on day 1 that the RT cores are not actually integrated into the shaders: there's a huge new block of silicon that isn't accounted for by any of the other features the 20XX supports and isn't present in the 16XX die shots, or Volta before that. And the number of RT cores available can easily be explained by some being disabled or removed for each die size. Having them scale with the number of CUDA cores is not all that surprising as an engineer.

    I don't have to demonstrate anything regarding Tensor cores. Nvidia themselves say they replaced them with FP16.

    No, the 16XX series doesn't have to have smaller shaders just because features are removed. The clock speeds are a fair bit higher on the 16XX, which can easily be accomplished with bigger transistors. We also don't know whether their FP16 implementation is a separate unit or a reorganization of the shaders to support native FP16 instead of emulating it through FP32, which would also add area.
  • Yojimbo - Thursday, January 30, 2020 - link

    I'm not talking about the RT cores, I'm talking about the tensor cores. And yes, it is necessary to demonstrate something regarding them. NVIDIA is talking about their functionality, and from a marketing standpoint; it would be foolish to believe it is giving an engineering explanation of how its secret sauce is accomplished. If the 20xx shaders + tensor cores take up less die area than the equivalent number of 16xx shaders, then why did they bother building the 16xx shaders? Not for power efficiency or performance, we can see that. The clock speeds seem to be in line with Pascal clock speeds across the stack. Note that the 2080 Super has the fastest clock speeds among all Turing cards, just like the 1080 does for the Pascal cards.

    I'm not saying you are definitely wrong. I'm saying you have not said anything that moves me from my original statement: we don't know. My feeling is that the tensor cores most likely share quite a bit with the shaders. From what I remember of NVIDIA's diagrams, they show RT and shader operations happening simultaneously, and integer and FP32 operations happening simultaneously, but when tensor operations are in progress that is all that is going on. I think that gives us a big clue, along with the idea I got from looking at the die sizes of Pascal and Volta and a very rough consideration of the performance and die sizes of AI accelerators compared to Volta.
  • SaberKOG91 - Thursday, January 30, 2020 - link

    I've spent a lot of time digging around for confirmation from developers and other folks, and I will revise a few of my previous statements. First: Tensor Cores and RT Cores don't exist as separate blocks. Tensor Cores are just another way to move data around inside of an SM to perform matrix operations efficiently; each SM can be used as 8 tensor cores, but does not have entirely separate and dedicated hardware for it. RT Cores do much the same thing, but rely on a new bit of logic to perform BVH traversal efficiently. Second: the area that I thought was dedicated to RT Cores is in fact the new "high-speed hub".

    I do still believe my analysis that says that the 16XX dies have larger shaders, but I don't see them as being a simple migration from Volta or Pascal, namely because they do have the scheduler changes and SM organization changes made for Turing that aren't present in those two.

    So how do I justify it? Well, let's start with RTX. If you use the card without RT, it's a little bit faster than the 10XX series. If you use DLSS instead of other AA, RTX can be even faster than the 10XX because the operations take a lot less time to compute. That's good enough for most people to buy it. If you use RT, though, you'll see a 20+% decrease in performance. For the 2080 Ti, 20% is enough to bring it down to between a 1080 and a 1080 Ti. For the 2060, it's closer to 30%, which brings it down to a 1660, or between a 1060 and a 1070. If you had RT on a 1660, it would drop to 1650 performance, which is between a 1050 Ti and a 1060. At the high end of Turing the cost of enabling RT can kind of be ignored, but at the low end it would make doing anything other than RT a lot more constrained. That's not good for gaming or for marketing. So I can see why RT would be disabled entirely for the 16XX series.

    What about tensor cores? Well, I think that's mostly a money grab. Having tensor cores on the 16XX series would mean that their Quadro counterparts would sell a lot better, taking sales away from the much higher-margin 20XX-equivalent Quadro cards.
  • Yojimbo - Saturday, February 1, 2020 - link

    I believe RT cores do not have SIMD/SIMT compute elements. Whether any of the shader cores are used in the computation I do not know, but I do expect there to be special compute cores. As for the tensor cores, I think most likely they are a rewiring of the existing shaders, as I said. However, it's not correct to say the cores "do not exist". That's like saying neutrons don't exist because they are made of quarks, or trees don't exist because they have chloroplasts and chloroplasts can exist on their own.
  • SaberKOG91 - Saturday, February 1, 2020 - link

    They only added silicon for BVH accelerators to groups of shaders because the traversal is not efficiently computed otherwise. The rest of RT math is all floating point and can be represented as SIMD vector operations. Once you have scheduled an RT program against an SM, the rest of the resources are unavailable for further computation. This is why we see such a heavy drop in performance with RT enabled.

    All of Nvidia's marketing around simultaneous operations relies on different SMs running different kinds of shader programs. The only actual operations that you can run in parallel within a group of 8 shaders are int32 and fp32 operations. Once those shaders are allocated as Tensor Cores, it appears you can't use them for int32 either. You definitely can't do tensor ops and RT ops at the same time within an SM and I'm pretty sure the int32 resources are used for RT ops.

    I meant that Tensor Cores don't exist as distinctly separate hardware like the marketing spin would have us believe. Of course they are physically represented as changes to the structure within an SM.
  • neblogai - Wednesday, January 29, 2020 - link

    AMD might do a simple refresh of the Navi 10 cards. Right now the 5600 XT is on the heels of the 5700, so they may just do something like a 5700 XT refresh that is Navi 10 but with a boost from 16Gbps memory and higher GPU clocks, and a 5700 refresh with the same VRAM but higher GPU clocks.
  • Hul8 - Wednesday, January 29, 2020 - link

    Or they could do a "cost down", "performance down" version of 5700/XT:

    Use the existing Navi 10 GPU, but pair it with cheaper memory and lower-end power delivery, maybe cooling, to generate "6600" series cards. They'd not only be cheaper in order to hit that x600 series price range, but would also offer less headroom to overclock to levels similar to the possible "6700 non-XT".

    The 6600 non-XT could even be based on 5600 XT, so the memory bandwidth would differentiate 6600 and 6600 XT more.
  • haukionkannel - Wednesday, January 29, 2020 - link

    This is what I was also thinking.
    The low end will continue to be produced without ray tracing (aka RDNA1) and the high end will have ray tracing (aka RDNA2). They may also overlap!
    You can either get a faster $300 GPU without ray tracing or a slower $300 GPU with ray tracing.
  • Spunjji - Wednesday, January 29, 2020 - link

    I'm of the opinion that's how Nvidia should have done it too, instead of dragging RTX all the way down to the 2070 and 2060.
  • Alistair - Tuesday, January 28, 2020 - link

    I don't mind waiting 1 or 2 years between releases, but only if it is a full product stack. We still don't have 1080 Ti performance from AMD 3 years later; 3.5 years to catch up is a bit sad. Kind of a sign of the times: GPUs are just not improving fast enough anymore (Nvidia too).
  • Cellar Door - Wednesday, January 29, 2020 - link

    So don't wait, buy the current flagship and stop complaining.
  • Alistair - Wednesday, January 29, 2020 - link

    I bought a 1080 4 years ago, and the 5700 XT is hardly an upgrade. I've been waiting a long time, so I might as well complain after 4 years... I hope the 5800 XT is coming soon, in the second half of this year.
  • Spunjji - Wednesday, January 29, 2020 - link

    I'm confused about why it's a bad thing that your high-end card still provides high-end performance 4 years later. It's not like games have suddenly become more demanding and we're all drowning in low frame rates.

    The 5700XT is a good upgrade for the people who are still running Maxwell-generation GPUs. It's simply not meant for you.
  • Alistair - Wednesday, January 29, 2020 - link

    It is a bad thing. I don't care about keeping an old video card. We are drowning in low frame rates, you just aren't playing the latest games. Try Red Dead Redemption 2 for example.
  • Korguz - Wednesday, January 29, 2020 - link

    By "we".. you mean a few?? No one I know is drowning in low frame rates. The main problem is.. upgrading what one has isn't worth it, because either it's too expensive.. or the performance increase isn't worth the price.
  • TheinsanegamerN - Wednesday, January 29, 2020 - link

    Screw off. Consumers are more than allowed to complain when companies stop innovating.
  • Korguz - Wednesday, January 29, 2020 - link

    TheinsanegamerN, that's funny.. where were all these consumers complaining about Intel when they stopped innovating?? When they stuck the mainstream market at quad cores?? Minor performance increases year over year, and they kept charging more each year.. it wasn't until Zen that Intel gave the mainstream more than 4 cores.. and Intel STILL really isn't innovating... as wilsonkf said in the Intel financials article: "Zen is out for 3 years. Intel should have done something big, not keep pushing the same arch on 14nm+++ only adding 2 more cores". Even now.. Intel STILL isn't innovating... they just keep rehashing the same CPUs.
  • Dizoja86 - Wednesday, January 29, 2020 - link

    Korguz, are you new here? People are always ranting about Intel.
  • Korguz - Wednesday, January 29, 2020 - link

    nope.. just pointing something out to theinsanegamerN
  • lilkwarrior - Wednesday, January 29, 2020 - link

    4K means far more pixels to push, along with much more expensive hardware, and most gamers don't play at 4K; current GPUs are more than enough for 1080p (what most do play). Accordingly, there is less demand for new GPUs.

    For Nvidia it's different. They'd be cannibalizing themselves with little return at this point when their cards are next-gen ready but the next-gen hasn't arrived!
  • Alistair - Wednesday, January 29, 2020 - link

    There's never been such high demand for GPUs; Borderlands 3 at 1440p barely stays above 60fps on ultra with an RX 5700 or GTX 1080.
  • Alistair - Wednesday, January 29, 2020 - link

    In case my comment wasn't clear: like most enthusiast gamers, I have a 144Hz monitor, so 60fps doesn't cut it.
  • TheWereCat - Wednesday, January 29, 2020 - link

    Then use High instead of Ultra?
  • TheinsanegamerN - Wednesday, January 29, 2020 - link

    Or say "AMD needs to get off their backside and compete with nvidia's entire product stack,not 1/3rd of it."
  • Spunjji - Wednesday, January 29, 2020 - link

    I think it's pretty clear by this point in time that AMD would love to take a slice of Nvidia's high-end pie *if they could*.

    They can't, though - whether it be due to limitations in their high-end designs (Fury X, Vega) or a simple lack of funds to get new designs to market quickly (where we're at now).
  • Xyler94 - Wednesday, January 29, 2020 - link

    90% of people don't buy GPUs above $400. Why should AMD, when they're facing financial trouble, try to appease "I'll just buy Nvidia anyways" enthusiasts? There was a point where AMD cards were better in every segment, but people still bought Nvidia cards. (I think it was the GTX 400 era for Nvidia.)

    Be 100% real with me: you wouldn't buy an AMD card even if it had better performance at the high end, would you?
  • Alistair - Wednesday, January 29, 2020 - link

    Exactly; compete with everything. I'm a long-time AMD customer (the 7870 that went into the PS4 was awesome, the 280X for BF4 was awesome), and then I bought a GTX 1080 and have been waiting for 4 years...
  • Spunjji - Wednesday, January 29, 2020 - link

    Drop a few settings, then. The visual difference between most High and Ultra settings in games is somewhere between "none" and "keep looking, maybe squint a little, you'll definitely see it eventually".

    For everyone else who can't stand having to change a setting or two, there's already the 2080Ti.
  • Alistair - Wednesday, January 29, 2020 - link

    The 2080 Ti isn't a choice, it's a giant waste of money. Most people don't realize that you only get about 14 percent more performance going from the 2070S to the 2080S, and another 14 percent going from the 2080S to the 2080 Ti. That's ridiculously low, and has never happened before in history.
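
    Do the math: taking those two 14 percent steps as given (my rough numbers, not official ones), they compound to only about 30 percent from the 2070S all the way to the 2080 Ti:

    # Two ~14% steps: 2070S -> 2080S -> 2080 Ti
    step = 1.14
    print(f"total uplift: {step * step:.2f}x")  # ~1.30x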
  • Alistair - Wednesday, January 29, 2020 - link

    Also, at ultra in RDR2 you get sub-60fps, and even dropping a ton of settings you'll still only get 70-75 fps with a 2080 Ti. So if you play the latest games (there are about 10 of them like this now), you'll get low fps with the current gen of cards.
  • ProDigit - Wednesday, January 29, 2020 - link

    Exciting! 2015 graphics in 2020!
  • TristanSDX - Wednesday, January 29, 2020 - link

    Refreshed Navi must be at least 40% faster to compete with mid-range Ampere.
  • Spunjji - Wednesday, January 29, 2020 - link

    I won't be holding my breath, but it sure would be nice.
  • deksman2 - Wednesday, January 29, 2020 - link

    It's not impossible.
    Some rumours seem to suggest "big Navi" is twice as fast as the 5700 XT... now, IF that has any merit whatsoever (I'm not saying it does, I'm just hypothesising here), then RDNA 2 has to have about 32% higher IPC, with another 12% boost coming from clock increases (thanks to 7nm+).
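
    Note that those two factors alone only multiply out to ~1.48x, so a full 2x would also need scaling from elsewhere (more CUs and/or bandwidth). A quick sanity check of the arithmetic, using only the hypothetical numbers above:

    ipc_gain = 1.32    # hypothesised RDNA 2 IPC uplift
    clock_gain = 1.12  # hypothesised clock uplift from 7nm+
    target = 2.0       # rumoured "twice as fast as a 5700 XT"

    per_cu = ipc_gain * clock_gain  # ~1.48x from IPC and clocks alone
    print(f"IPC x clocks: {per_cu:.2f}x; remaining factor for 2x: {target / per_cu:.2f}x")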

    It's not unprecedented for GPU uArchs of a new generation to introduce radical changes/improvements in IPC, all the way up to 50% at the same TDP.

    So, depending on how good RDNA 2 actually is (which we won't know until it's released), it could very well end up being an Ampere competitor as well.
  • nt300 - Friday, February 7, 2020 - link

    From what we know based on various sources, RDNA2 is a new GPU uArch with a complete cache-system overhaul, Variable Rate Shading and several other power-efficiency enhancements. I can't say for certain how fast this rumoured Big Navi is going to be, but I am quite confident it's on par with or faster than the RTX 2080 Ti.
  • Korguz - Monday, February 10, 2020 - link

    source ??
  • eastcoast_pete - Wednesday, January 29, 2020 - link

    The danger with announcing a new (hopefully better) line like RDNA2 now is that people like me will wonder whether to wait on a dGPU purchase until both NVIDIA and AMD have rolled out their new architectures. After all, many of us don't plan on shelling out several hundred dollars or euros more than once every couple of years.
    As for APUs, I hope AMD will roll out a PC version of the new console APUs they are making for the PS5 and the next Xbox. That might be interesting for an HTPC.
  • Spunjji - Wednesday, January 29, 2020 - link

    Given that it still hasn't happened yet, I'm not confident that it ever will - the economics of it just don't make sense.
  • deksman2 - Wednesday, January 29, 2020 - link

    If your current hw is adequate for your purposes and you can wait until RDNA 2 is rolled out later in 2020, then you should definitely wait.

    If your hw has aged less than gracefully and you are in immediate need of an upgrade, then you might as well buy what's presently available.

    Something better will always be around the corner; however, if you've waited this long, and with new GPU uArchs literally on the verge of being released, you might as well wait (if nothing else, it will allow the current generation of GPUs to drop in price).
  • Hul8 - Wednesday, January 29, 2020 - link

    If you need a new GPU (current one broke) or "need" it now, then buy. (But go in knowing there'll be buyer's remorse later.)

    Otherwise - with real-time ray tracing costly to implement, facing no competition, and of no use in most games - if you can make do with your current setup, a GPU from the next couple of generations will probably be a better purchase.
  • nt300 - Friday, February 7, 2020 - link

    The term "Refresh" can be interpreted differently from one another. In this context, what Dr. Lisa Su is saying is we will refresh our entire GPU lineup with new RDNA2 based graphics cards. Basically there's no more RDNA1 GPUs coming, as RDNA2 is the replacement moving forward.
  • Korguz - Monday, February 10, 2020 - link

    And you have a source for this.. or is it just your speculation based on what you have read??
