NVIDIA Scrubs GeForce RTX 4080 12GB Launch; 16GB To Be Sole RTX 4080 Card
by Ryan Smith on October 14, 2022 1:20 PM EST - Posted in GPUs, GeForce, NVIDIA, Ada Lovelace, RTX 4080
In a short post published on NVIDIA’s website today, the company has announced that it is “unlaunching” its planned GeForce RTX 4080 12GB card. The lowest-end of the initially announced RTX 40 series cards, the RTX 4080 12GB had attracted significant criticism since its announcement for bifurcating the 4080 tier between two cards that didn’t even share a common GPU. Seemingly bowing to the pressure of those complaints, NVIDIA has removed the card from its RTX 40 series lineup and cancelled its November launch.
NVIDIA’s brief message reads as follows:
The RTX 4080 12GB is a fantastic graphics card, but it’s not named right. Having two GPUs with the 4080 designation is confusing.
So, we’re pressing the “unlaunch” button on the 4080 12GB. The RTX 4080 16GB is amazing and on track to delight gamers everywhere on November 16th.
If the lines around the block and enthusiasm for the 4090 is any indication, the reception for the 4080 will be awesome.
NVIDIA is not providing any further details about its future plans for the AD104-based video card at this time. However, given the circumstances, it’s a reasonable assumption that NVIDIA intends to launch it at a later time, with a different part number.
NVIDIA GeForce Specification Comparison

| | RTX 4090 | RTX 4080 16GB | RTX 4080 12GB (Cancelled) |
|---|---|---|---|
| CUDA Cores | 16384 | 9728 | 7680 |
| ROPs | 176 | 112 | 80 |
| Boost Clock | 2520MHz | 2505MHz | 2610MHz |
| Memory Clock | 21Gbps GDDR6X | 22.4Gbps GDDR6X | 21Gbps GDDR6X |
| Memory Bus Width | 384-bit | 256-bit | 192-bit |
| VRAM | 24GB | 16GB | 12GB |
| Single Precision Perf. | 82.6 TFLOPS | 48.7 TFLOPS | 40.1 TFLOPS |
| Tensor Perf. (FP16) | 330 TFLOPS | 195 TFLOPS | 160 TFLOPS |
| Tensor Perf. (FP8) | 660 TFLOPS | 390 TFLOPS | 321 TFLOPS |
| TDP | 450W | 320W | 285W |
| L2 Cache | 72MB | 64MB | 48MB |
| GPU | AD102 | AD103 | AD104 |
| Transistor Count | 76.3B | 45.9B | 35.8B |
| Architecture | Ada Lovelace | Ada Lovelace | Ada Lovelace |
| Manufacturing Process | TSMC 4N | TSMC 4N | TSMC 4N |
| Launch Date | 10/12/2022 | 11/16/2022 | Never |
| Launch Price | MSRP: $1599 | MSRP: $1199 | Was: $899 |
Taking a look at the specifications of the cards, it’s easy to see why NVIDIA’s core base of enthusiast gamers was not amused. While both RTX 4080 parts shared a common architecture, they did not share a common GPU. Or, for that matter, common performance.
The RTX 4080 12GB, as it was, would have been based on the smaller AD104 GPU, rather than the AD103 GPU used for the 16GB model. In practice, this would have left the 12GB model delivering only about 82% of the 16GB model’s shader/tensor throughput, and just 70% of its memory bandwidth, a sizable performance gap that NVIDIA’s own figures ahead of the launch had all but confirmed.
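For those who want to check the math, here is a minimal sketch deriving those two ratios from the spec table above. The TFLOPS and memory figures are NVIDIA's published numbers; the helper function is purely for illustration:

```python
# Derive the 12GB-vs-16GB performance ratios from the published specs.

def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) times bus width, in bytes."""
    return data_rate_gbps * bus_width_bits / 8

shader_ratio = 40.1 / 48.7                     # FP32 TFLOPS, 12GB vs 16GB
bw_16 = bandwidth_gb_s(22.4, 256)              # RTX 4080 16GB: ~716.8 GB/s
bw_12 = bandwidth_gb_s(21.0, 192)              # RTX 4080 12GB: ~504.0 GB/s

print(f"Shader/tensor throughput: {shader_ratio:.0%}")   # ~82%
print(f"Memory bandwidth:         {bw_12 / bw_16:.0%}")  # ~70%
```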
NVIDIA, for its part, is no stranger to overloading a product line in this fashion, with similarly named parts delivering unequal performance and the difference denoted solely by their VRAM capacity. This was a practice that started with the GTX 1060 series and continued with the RTX 3080 series. However, the performance gap between the two RTX 4080 parts was far larger than in any of those prior cases, bringing a good deal more attention to the problems that come from having such disparate parts share a common product name.
Drawing equal criticism has been NVIDIA’s decision to sell an AD104 part as an RTX 4080 card to begin with. Traditionally in NVIDIA’s product stack, the next card below the xx80 card is some form of xx70 card. And while video card names and GPU identifiers are essentially arbitrary, NVIDIA’s early performance figures painted a picture of a card that would have performed a lot like what most people would expect from an RTX 4070: trailing the better RTX 4080 by 20% or more, and landing on par with the last-generation flagship, the RTX 3090 Ti. In other words, there has been a great deal of suspicion within the enthusiast community that NVIDIA was attempting to sell what otherwise would have been the RTX 4070 as an RTX 4080, with a higher price to match.
In any case, those plans are now officially scuttled. Whatever NVIDIA has planned for its AD104-based RTX 40 series card is something only the company knows at this time. Meanwhile, come November 16th when the RTX 4080 series launches, the 16GB AD103-based cards will be the only offerings available, with prices starting at $1199.
100 Comments
meacupla - Sunday, October 16, 2022 - link
Donate it to a PC recycling charity. Use a low power or mobile CPU based system. They are not even that expensive for older models.

Eidigean - Friday, October 14, 2022 - link
I have an 8700k with a Radeon 580 that could use a GPU upgrade.

I also have an i7-4960x that still doesn't suck (OC'd from 3.6 GHz to 4.0 GHz and still under-volted) with a GTX 1070 ($200 eBay special 3 years ago) that might also benefit from a GPU upgrade. My 30" 2560x1600 10-bit panel only runs at 60fps, so I don't expect to be CPU bound.
Not looking to heat my house with a 4090, likely getting a 4080, but the Ivy Bridge-E 4960x system used to have two GTX 580's in SLI, so the 1000W power supply with four 12V rails is up for it.
The DDR3 2133 CAS 9 memory in there has lower latency than most newer kits. Totally getting my money's worth 9 years later. The DDR4 3200 CAS 14 memory in the Coffee Lake rig is pretty close in latency (divide the CAS timing by the RAM speed to get the latency).
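For reference, a minimal sketch of that rule of thumb: absolute CAS latency in nanoseconds is CAS cycles divided by the memory clock, and a DDR module's transfer rate is twice that clock.

```python
# Absolute CAS latency in ns: cycles / I/O clock (MHz) * 1000.
# A DDR module's transfer rate (MT/s) is twice its I/O clock, hence the 2000.

def cas_latency_ns(cas_cycles: int, transfer_rate_mts: int) -> float:
    return 2000 * cas_cycles / transfer_rate_mts

print(f"DDR3-2133 CL9:  {cas_latency_ns(9, 2133):.2f} ns")   # ~8.44 ns
print(f"DDR4-3200 CL14: {cas_latency_ns(14, 3200):.2f} ns")  # ~8.75 ns
```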
Flunk - Saturday, October 15, 2022 - link
No point in buying new high end GPUs for 8 year old CPUs because you'll be CPU bound all the time.

Samus - Sunday, October 16, 2022 - link
You'd be best off to get a 3070Ti or 3080 for $600-$700 to pair with that 8700k. I wouldn't blow $1000+ on a modern CPU when you can get the last gen (which, let's be honest, is only 2 years old) for half the price while it will still be relevant for years.

Samus - Sunday, October 16, 2022 - link
According to a friend who just got a 4090, his CPU (12700) bottlenecks the card at 3440x1440 in BF2042. He can confirm this because if he underclocks the card, the performance stays the same. The CPU cores aren't even pegged, but that's not the problem; the software isn't feeding the CPU enough to keep up with the GPU, and this is a common problem across all games because desktop CPUs have become so powerful that software can't efficiently use them.

meacupla - Sunday, October 16, 2022 - link
Yeah, your friend is going to need a 7900X3D or 13900K to unlock more potential in the 4090. And to think that the 4090 is actually still handicapped using GDDR6X.
Fallen Kell - Saturday, October 15, 2022 - link
"The GPU market is in a terrible place and needs a hard reset."That only will happen if there is competition that competes at the high end with product on the shelves...
NextGen_Gamer - Friday, October 14, 2022 - link
In my mind, a small price drop and a rename to GeForce RTX 4070 Ti would be completely fine. Correct me if I am wrong, but the card as it was (4080 12GB) used the full AD104 die, meaning if it were just renamed 4070, it would leave no room for a Ti unless NVIDIA moved to the much bigger AD103 chip. So, keep all the specs as they were, drop the price to $700, call it 4070 Ti, and launch a cut-down configuration at $599 called the 4070. Both of those numbers are a $100 increase over 3070/3070 Ti MSRP prices, but that falls in line with the 40xx series in general being more expensive across the board.

haukionkannel - Friday, October 14, 2022 - link
4090 is selling so well that I don't expect price cuts, only a name change.
The cuts to CUDA cores on the "4080 12GB" model would have put it squarely in 4060 Ti territory. If the "4080 12GB" model wanted to be a "4070", at the very least, it would require 50% of what the 4090 packs. The 4090 has 16384 CUDA cores.
RTX 4080 12GB vs RTX 4090: 7680/16384 = 46.9%
RTX 3060 Ti vs RTX 3090: 4864/10496 = 46.3%
For a real "4070", the CUDA core difference would have to be around 56.2% compared against a 4090. Or around 9200 CUDA cores at a minimum.