Both the GeForce 256 and the Savage 2000 offer T&L support. Is this the next big step in 3D rendering?
Well, first off I think it’s important that people actually see the Savage 2000 in action, supporting hardware-based T&L, before any decisions or commentary can be made about their particular implementation. While we believe GeForce has some significant shortcomings on the rendering feature set and fill-rate side of things, there is no doubt that nvidia has made substantial progress in geometry acceleration (witness some very nice demos they were showing on the GeForce hardware). Until S3 shows off the Savage 2000 running demos, and more importantly real applications, that utilize their hardware T&L solution, I remain doubtful that their implementation is actually any faster than a good CPU. Maybe I’ll be surprised, but I’m not holding my breath. That being said, I really think only one vendor (nvidia) has a compelling geometry solution. But as I said previously, you cannot take advantage of significant geometry acceleration unless it is coupled with equally impressive fill-rate: even if you can generate triangles quickly, how are you going to fill them just as quickly? So, for the particular implementation of geometry acceleration on GeForce, I do not believe this to be the "next big step" in 3D rendering.
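The balance argument above can be made concrete with a rough back-of-the-envelope check: the fill rate a rasterizer must sustain scales with triangle throughput times the average pixel area per triangle. This is a minimal sketch with purely illustrative numbers (triangle rate, pixel counts, and overdraw factor are assumptions, not figures from the interview):

```python
# Rough balance check between geometry rate and fill rate: fast triangle
# setup is wasted if the chip cannot fill the resulting pixels at the same
# pace. All numbers below are illustrative assumptions.

def required_fill_rate(triangles_per_sec, avg_pixels_per_triangle, overdraw=1.0):
    """Pixels/sec the rasterizer must sustain to keep up with geometry."""
    return triangles_per_sec * avg_pixels_per_triangle * overdraw

# Hypothetical: 10M triangles/sec at ~50 visible pixels each, 2x overdraw.
needed = required_fill_rate(10e6, 50, overdraw=2.0)
print(f"Required fill rate: {needed / 1e6:.0f} MP/s")  # Required fill rate: 1000 MP/s
```

Under these assumed numbers, keeping a 10 Mtri/s geometry engine busy would demand a gigapixel-class fill rate, well beyond the announced specifications of that generation of hardware.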
If not, what are the big upcoming advances for 3D?
Well, clearly this is a very open-ended question. But, relative to the next cycle of 3D accelerators, we believe full-scene spatial anti-aliasing to be the next big advance in 3D. There are very few instances in the history of 3D acceleration when a feature has been offered that requires no software development support and yet can dramatically improve the overall visual quality of a game or application. The people who buy a GeForce product will, by and large, see relatively small performance and visual quality improvements for the overwhelming majority of games available in the near future. In contrast, our customers who buy T-Buffer enabled hardware with our real-time full-scene spatial anti-aliasing capability will be immersed in an incredibly powerful "out of box experience" that has rarely been seen in the history of PC 3D graphics – a person’s library of games will be substantially improved the moment they plug in the T-Buffer enabled hardware. This is really quite exciting for the industry, which historically has forced customers to wait some time before games actually take advantage of a new hardware capability.
John Carmack said the following about the GeForce 256 in a recent .plan update:
It is fast. Very, very fast. It has the highest fill rate of any card we have ever tested, has improved image quality over TNT2, and it gives timedemo scores 40% faster than the next closest score with extremely raw beta drivers.
The throughput will definitely improve even more as their drivers mature.
For max framerates in OpenGL games, this card is going to be very hard to beat.
Once again, 3dfx has always been the speed king, but this sounds pretty tough to beat and John certainly knows his stuff. Do you think you'll be able to do it again? How and why?
Well, John C. has not yet tested our next generation product, so the fact that GeForce beats anything that’s currently on the market is really no big surprise. The GeForce has an advantage with its onboard geometry capability at lower resolutions and color depths; however, we believe that our next generation product will substantially outperform the GeForce when running at the resolutions and color depths that gamers demand. We have always placed a big emphasis on being the premiere hardware platform for the Quake games, and expect our next generation of products to continue to live up to that high standard.
The announced 480 MP/s fill rate seems a bit lower than many had expected for the GeForce 256. Doing the math, this seems to point towards a 120 MHz core clock, which is lower than their existing TNT2 parts. Since 3dfx uses the same fab plant, do you anticipate any clock speed issues?
Clock frequency is much more a function of the logic design and physical implementation than fab, so the fact that the GeForce runs at a low clock frequency certainly does not mean our next generation product will also.
Do you think NVIDIA might be downplaying their specifications for now until a product is ready or until after 3dfx has made an announcement?
Well, anything is possible, I guess, in this hyper-competitive market. But I think everyone who follows this market knows that, over time, nvidia has consistently promised more than they actually deliver, and we don’t expect that to be any different this time around either. If you consider that nvidia has touted GeForce as something that will "revolutionize the world," I think you’ll agree that nvidia is once again over-hyping a product.