NVIDIA Scrubs GeForce RTX 4080 12GB Launch; 16GB To Be Sole RTX 4080 Card
by Ryan Smith on October 14, 2022, 1:20 PM EST. Posted in:
- GPUs
- GeForce
- NVIDIA
- Ada Lovelace
- RTX 4080
In a short post published on NVIDIA’s website today, the company has announced that it is “unlaunching” their planned GeForce RTX 4080 12GB card. The lowest-end of the initially announced RTX 40 series cards, the RTX 4080 12GB had attracted significant criticism since its announcement for bifurcating the 4080 tier between two cards that didn’t even share a common GPU. Seemingly bowing to the pressure of those complaints, NVIDIA has removed the card from their RTX 40 series lineup and cancelled its November launch.
NVIDIA’s brief message reads as follows:
The RTX 4080 12GB is a fantastic graphics card, but it’s not named right. Having two GPUs with the 4080 designation is confusing.
So, we’re pressing the “unlaunch” button on the 4080 12GB. The RTX 4080 16GB is amazing and on track to delight gamers everywhere on November 16th.
If the lines around the block and enthusiasm for the 4090 is any indication, the reception for the 4080 will be awesome.
NVIDIA is not providing any further details about their future plans for the AD104-based video card at this time. However, given the circumstances, it’s reasonable to assume that NVIDIA intends to launch the card at a later date under a different name.
NVIDIA GeForce Specification Comparison

| | RTX 4090 | RTX 4080 16GB | RTX 4080 12GB (Cancelled) |
|---|---|---|---|
| CUDA Cores | 16384 | 9728 | 7680 |
| ROPs | 176 | 112 | 80 |
| Boost Clock | 2520MHz | 2505MHz | 2610MHz |
| Memory Clock | 21Gbps GDDR6X | 22.4Gbps GDDR6X | 21Gbps GDDR6X |
| Memory Bus Width | 384-bit | 256-bit | 192-bit |
| VRAM | 24GB | 16GB | 12GB |
| Single Precision Perf. | 82.6 TFLOPS | 48.7 TFLOPS | 40.1 TFLOPS |
| Tensor Perf. (FP16) | 330 TFLOPS | 195 TFLOPS | 160 TFLOPS |
| Tensor Perf. (FP8) | 660 TFLOPS | 390 TFLOPS | 321 TFLOPS |
| TDP | 450W | 320W | 285W |
| L2 Cache | 72MB | 64MB | 48MB |
| GPU | AD102 | AD103 | AD104 |
| Transistor Count | 76.3B | 45.9B | 35.8B |
| Architecture | Ada Lovelace | Ada Lovelace | Ada Lovelace |
| Manufacturing Process | TSMC 4N | TSMC 4N | TSMC 4N |
| Launch Date | 10/12/2022 | 11/16/2022 | Never |
| Launch Price | MSRP: $1599 | MSRP: $1199 | Was: $899 |
Taking a look at the specifications of the cards, it’s easy to see why NVIDIA’s core base of enthusiast gamers was not amused. While both RTX 4080 parts shared a common architecture, they did not share a common GPU. Or, for that matter, common performance.
The RTX 4080 12GB, as it was, would have been based on the smaller AD104 GPU, rather than the AD103 GPU used for the 16GB model. In practice, this would have left the 12GB model delivering only about 82% of the 16GB model’s shader/tensor throughput, and just 70% of its memory bandwidth, a sizable performance gap that NVIDIA’s own pre-launch figures all but confirmed.
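Both of those ratios can be sanity-checked from the table above: FP32 throughput for these GPUs works out to 2 FLOPS (one FMA) per CUDA core per clock, and memory bandwidth is the per-pin data rate multiplied by the bus width. A quick illustrative sketch of the arithmetic, using only the table’s figures:

```python
# Back-of-the-envelope check of the RTX 4080 16GB vs. 12GB gap,
# computed purely from the published specifications above.

def fp32_tflops(cuda_cores: int, boost_clock_mhz: int) -> float:
    """FP32 throughput: 2 FLOPS (one FMA) per CUDA core per clock."""
    return 2 * cuda_cores * boost_clock_mhz * 1e6 / 1e12

def bandwidth_gbps(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Memory bandwidth in GB/s: per-pin data rate x bus width in bytes."""
    return gbps_per_pin * bus_width_bits / 8

tf_16gb = fp32_tflops(9728, 2505)    # ~48.7 TFLOPS
tf_12gb = fp32_tflops(7680, 2610)    # ~40.1 TFLOPS
bw_16gb = bandwidth_gbps(22.4, 256)  # 716.8 GB/s
bw_12gb = bandwidth_gbps(21.0, 192)  # 504.0 GB/s

print(f"Shader throughput: {tf_12gb / tf_16gb:.0%}")  # ~82%
print(f"Memory bandwidth:  {bw_12gb / bw_16gb:.0%}")  # ~70%
```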
NVIDIA, for its part, is no stranger to overloading a product line in this fashion, with similarly named parts delivering unequal performance and the difference denoted solely by their VRAM capacity. It’s a practice that started with the GTX 1060 series and continued with the RTX 3080 series. However, the performance gap between the RTX 4080 parts was far larger than anything NVIDIA had previously done, drawing a good deal more attention to the problems that come from having such disparate parts share a common product name.
Drawing equal criticism was NVIDIA’s decision to sell an AD104 part as an RTX 4080 card in the first place. Traditionally in NVIDIA’s product stack, the next card below the xx80 card is some form of xx70 card. And while video card names and GPU identifiers are essentially arbitrary, NVIDIA’s early performance figures painted a picture of a card that would have performed a lot like what most people would expect from an RTX 4070: trailing the better RTX 4080 by 20% or more, and landing on par with the last-generation flagship, the RTX 3090 Ti. In other words, there has been a great deal of suspicion within the enthusiast community that NVIDIA was attempting to sell what otherwise would have been the RTX 4070 as an RTX 4080, with a higher price to match.
In any case, those plans are now officially scuttled. Whatever NVIDIA has planned for their AD104-based RTX 40 series card is something only the company knows at this time. Meanwhile, come November 16th, when the RTX 4080 series launches, the 16GB AD103-based cards will be the only offerings available, with prices starting at $1199.
100 Comments
nandnandnand - Friday, October 14, 2022
>4070 with 6 GB
bruh
Byte - Friday, October 14, 2022
Historically, the Ti cards were mid-cycle refresh cards. So a 4075 for $899 and a 4075 Ti for $999, or a 4070 for $599 and a 4070 Ti for $699, are all possible.
philehidiot - Monday, October 17, 2022
They don't need to. They're conceding that the naming is confusing, not that the pricing is wrong. They could rename it the GTX69SpaceDocker and charge the same. That they're "unlaunching" is either because an increased delay was required to allow rebranding or, more likely, because they have now seen the market response and decided it's in their best interests to let the 30 series sell out before launching what is now the 4070.
But these prices and this energy consumption mean this is another skip generation for me. I'm on a Vega 64 and the most I've used my GPU for of late is Hashcat. Quite happy with older games, minus the DRM that breaks them.
My PC uses around 350W whilst gaming. That's okay. Nvidia laughed at the 250W TBP of the Vega 64 when it came out. I will NOT be buying a card that uses circa 300W... because I generate my own 'leccy and you'd be surprised how much more aware you are of usage when you're breeding your own angry pixies. You actually start to care about power factor...
meacupla - Friday, October 14, 2022
The RTX 4090 is completely CPU-bound at 1080p, and it even manages to get bottlenecked at 1440p in some titles.
I think you would be wasting your money if you don't play at 4K with an RTX 4090.
Makaveli - Friday, October 14, 2022
lol you can bet your bottom dollar there will be people pairing a 4090 with Coffee Lake CPUs at 1080p/1440p and then complaining in forums about it.
StevoLincolnite - Friday, October 14, 2022
I am tempted to pair it up with my Core 2 Quad Q6600 @ 3.6GHz + 16GB DDR2 RAM.
Because screw normal common sense.
webdoctors - Friday, October 14, 2022
I still have that CPU in the closet for turning into an NVR, but not happy with the power draw :(
James5mith - Saturday, October 15, 2022
You aren't happy with the power draw on a Core 2 Quad Q6600? It was a 105W TDP part, back when TDP was actually the maximum power draw.
God forbid you want to go past "low midrange" on modern CPUs; all of them start at 140W+ TDP, and TDP is just a low-end estimate for power draw now.
SirDragonClaw - Monday, October 17, 2022
NVRs spend most of their time at low CPU usage with a few spikes every now and then. A modern system (something like a 12th Gen i3) will average out to under 30 watts of power draw. A Q6600 running as an NVR uses well over 120 watts on average (I know, I had one).
In the UK, the difference in power cost for these two devices is about $288 USD per year at current power prices.
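That annual figure holds up as rough arithmetic: a ~90W average difference running around the clock is roughly 790 kWh per year. A quick illustrative sketch, where the electricity rate is an assumption (~$0.365/kWh, in line with late-2022 UK prices) inferred from the quoted total rather than a figure from the comment itself:

```python
# Rough annual running-cost comparison for an always-on NVR box,
# using the average wattages quoted above. The unit price is an
# assumed rate (~late-2022 UK prices), not a figure from the thread.

HOURS_PER_YEAR = 24 * 365    # always-on appliance
RATE_USD_PER_KWH = 0.365     # assumed UK unit price

def annual_cost_usd(avg_watts: float) -> float:
    """Annual electricity cost for a device at a given average draw."""
    kwh_per_year = avg_watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * RATE_USD_PER_KWH

q6600_cost = annual_cost_usd(120)  # old Core 2 Quad Q6600 NVR
i3_cost = annual_cost_usd(30)      # modern low-power system

print(f"Q6600: ${q6600_cost:.0f}/yr, i3: ${i3_cost:.0f}/yr, "
      f"difference: ${q6600_cost - i3_cost:.0f}/yr")  # ~$288/yr difference
```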
Tomatotech - Sunday, October 16, 2022
Raspberry Pi? Total package about $50 or less if you can buy used. Obviously depends on how many cameras, whether they are HD/SD, and the frame rate. Uses about 10W at 70% CPU connected via Ethernet. (Mine uses 2W while running Pi-hole etc.; Ethernet uses less power than Wi-Fi.)