43 Comments
paulsiu - Tuesday, March 1, 2005 - link
I am not sure I like this product at this price point. If it were $50 it would make sense, but as another poster pointed out, the older and faster 6200 with real memory is about $10 more.
The marketing is also deceptive: "6200 TurboCache" sounds like it would be faster than the 6200.
In addition, this so-called innovative use of system memory sounds like nothing more than integrated video. OK, it's faster, but aren't you increasing CPU load?
The review also uses an Athlon 64 4000+; I am doubtful that users who buy an A64 4000+ are going to skimp on the video card.
Paul
guarana - Thursday, January 27, 2005 - link
I was forced to go for a 6200TC 64MB (up to 256MB) solution about a week ago. I had to upgrade my motherboard to a PCIe version and had to get the cheapest card I could find in the PCIe flavour.
I must say it's a lot better than the FX5200 card I used to have ... I am running it with only 256MB of system RAM, so it's not running at optimal performance, but I can run UT2003 with everything set to HIGH at 1280x1024 :)
A few stutters when the game actually starts, but after about 10 seconds the game runs smooth and without any issues ... don't know the exact FPS though :)
I score about 12000 points in 3DMark2001 at stock clocks (yeah, 3DMark2001 is old, but it's all I could download overnight).
Will let you know what happens when I finally get another 256MB in the damn thing.
Jeff7181 - Wednesday, December 22, 2004 - link
I don't like this... why would I want the one that costs over $100 when I can get the 6200 for $110-210, which has its own dedicated memory and performs better? It's stupid to replace the current 6200 with this pile. It would be fine as a $50-75 card, or for use in a laptop or an HTPC... but don't replace the current 6200 with this.
icarus4586 - Friday, December 17, 2004 - link
I have a laptop with a 64MB Mobility Radeon 9600 (350MHz GPU, 466MHz DDR, 128-bit RAM), and I can run Far Cry at 1280x800 high settings, Doom 3 at 1024x768 high settings, Halo at 1024x768 high settings, and Half-Life 2 at 1280x800 high settings, all at around 30fps.
This is, obviously, an AGP solution. I don't really know how it does it. I was very surprised at what it could pull off, especially the high resolutions, with only 64MB onboard.
What's going on?
Rand - Friday, December 17, 2004 - link
Have you heard whether the limited PCI-E x16 bandwidth of the i915 applies to the i925X/925XE chipsets also?
Also, I'm curious whether you've done any testing on the nForce4 with only one DIMM, so as to limit the system bandwidth and get some indication of how the GeForce 6200TC scales in performance with greater/lesser system memory bandwidth available.
Rand - Friday, December 17, 2004 - link
DerekWilson-"As far as I understand Hypermemory, it is not capable of rendering directly to system memory."
In the past ATI has indicated all of the R300 derived cores are capable of writing directly to a texture in system memory.
At the very least HyperMemory implementation on the Radeon Express 200G chipset must be able to do so, as ATI supports implementations without any local RAM they have to be capable of rendering to system memory to operate.
The only difference I've noticed in the respective implementations thus far is that nVidia's Turbocache lowest local bus size if 32-bit, whereas ATI's implementation only supports as low as 64bit so the smallest local RAM they can use is 32MB. (Well, they can use no local RAM also, though that would obviously be considerably slower)
DerekWilson - Thursday, December 16, 2004 - link
And you can bet that NVIDIA's Intel chipset will have a nice, speedy PCIe implementation, optimized for SLI and TurboCache, as well.
PrinceGaz - Thursday, December 16, 2004 - link
Yeah, this does all seem to make some sort of sense now. But not much sense, as I can't see why Intel would deliberately limit the bandwidth of the PCIe bus they were pushing so heavily. Unless the 925 chipset has a full bi-directional 4GB/s, and the 3 down/1 up is something they decided to impose on the cheaper 915 to differentiate it from the high-end 925.
I guess it's safe to assume nVidia implemented bi-directional 4GB/s in the nForce4, given that they were also working on graphics cards that would be dependent on PCIe bandwidth. And unless there was a good reason for VIA, ATI, and SiS not to do so, I would imagine the K8T890, RX480/RS480, and SiS756 will also be full 4GB/s both ways.
DerekWilson - Thursday, December 16, 2004 - link
NVIDIA tells us it's a limitation of the 915. Looking back, they also heavily indicated that "some key chipsets" would support the same bandwidth as NVIDIA's own bridge solution at the 6 series launch. If you remember, their solution was really a 4GB/s total bandwidth solution (overclocked AGP 8x to "16x", giving half the PCIe bandwidth) ... Their diagrams all showed a 3 down / 1 up memory flow. But they didn't explicitly name the 915 at the time.
PrinceGaz - Thursday, December 16, 2004 - link
#28- See page 2 of the article, the text just above the diagram near the bottom of the page: "Even on the 915 chipset from Intel, bandwidth is limited across the PCI Express bus. Rather than a full 4GB/s up and 4GB/s down, Intel offers only 3GB/s up and 1GB/s down..."
#25- I'd also always assumed that all PCIe x16 slots could support 4GB/s both ways; this is the first time I've heard otherwise. And it isn't even 4/1, it's 3/1 according to the info given.
Derek- is this limited PCIe x16 bandwidth common to all chipsets?
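For reference, those figures work out roughly as follows. This is only a back-of-the-envelope sketch: the 4GB/s-per-direction number is the PCIe 1.x x16 theoretical peak, and the 3GB/s / 1GB/s split is the figure quoted from the article.
    # Rough PCIe 1.x x16 bandwidth arithmetic for the numbers in this thread.
    lanes = 16
    transfers_per_s = 2.5e9      # 2.5 GT/s per lane (PCIe 1.x)
    encoding = 8 / 10            # 8b/10b encoding overhead
    per_direction_gb = lanes * transfers_per_s * encoding / 8 / 1e9
    print(f"full x16 link: {per_direction_gb:.1f} GB/s each way")        # ~4.0 GB/s
    i915_up, i915_down = 3.0, 1.0    # GB/s, as quoted in the article
    print(f"915 total: {i915_up + i915_down:.1f} GB/s "
          f"vs {2 * per_direction_gb:.1f} GB/s theoretically possible")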
DerekWilson - Thursday, December 16, 2004 - link
We tested the 32MB, 64-bit, $99 version of the card that "supports" a 128MB framebuffer.
#31 is correct -- the maximum of 112 or 96MB (or 192MB for the 256MB version) of system RAM is not statically mapped. It's always available to the system under 2D operation. Under 3D, it's not likely that the entire framebuffer would be absolutely full at any given time anyway.
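Put another way, the local/borrowed split for the parts mentioned works out like this (a small sketch; the labels are just shorthand for the cards discussed):
    # TurboCache "supported framebuffer" = local RAM + dynamically borrowed system RAM.
    # The borrowed portion is only claimed when 3D work actually needs it.
    skus = {
        "16MB / 32-bit (supports 128MB)": (16, 128),
        "32MB / 64-bit (supports 128MB)": (32, 128),
        "64MB / 64-bit (supports 256MB)": (64, 256),
    }
    for name, (local_mb, supported_mb) in skus.items():
        print(f"{name}: up to {supported_mb - local_mb}MB taken from system RAM")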
Alphafox78 - Thursday, December 16, 2004 - link
Doesn't it dynamically allocate the extra memory it needs? So this would just affect games then, if they needed more, not regular apps that don't need lots of video memory.
rqle - Thursday, December 16, 2004 - link
So the total cost of these cards is the card price + (the price of 128MB worth of DDR at the time)?
Maverick2002 - Thursday, December 16, 2004 - link
I'm likewise confused. At the end of the review they say:
"There will also be a 64MB 64-bit TC part (supporting 256MB) available for $129 coming down the pipeline at some point, though we don't have that part in our labs just yet."
Didn't they just test this card???
KalTorak - Thursday, December 16, 2004 - link
#25 - huh? (I have no idea what that term means in the context of PCIe, and I know PCIe pretty well...)
KayKay - Thursday, December 16, 2004 - link
I think this is a good product. I think it could be a very good part for companies like Dell if they include it in their systems: cheaper than the X300 SEs they currently include, but with better performance, and it will appeal to that type of customer.
mczak - Wednesday, December 15, 2004 - link
#24, from the description it sounds like the Radeon IGP has no problem using sideport and system memory simultaneously for rendering directly into (the interleaved mode sounds exactly like part of all buffers would be allocated in system memory, though maybe that's not what is meant).
IntelUser2000 - Wednesday, December 15, 2004 - link
WTF!! I never knew Intel's 915 chipsets used a 4/1GB/s implementation of PCI Express!! Even AnandTech's own articles didn't say that; they said 4/4.
DerekWilson - Wednesday, December 15, 2004 - link
As far as I understand Hypermemory, it is not capable of rendering directly to system memory.
Also, when Hypermemory needs to go allocate system RAM for anything, there is a very noticeable performance hit.
We tested the 16MB/32-bit and the 32MB/64-bit versions.
The 64MB version available is only 64-bit ... NVIDIA uses four 8M x 16 memory chips.
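That chip configuration checks out with some simple arithmetic (assuming the 8M x 16 organization given above):
    # Four 8M x 16 DRAM chips: total capacity and aggregate bus width.
    chips = 4
    words_per_chip = 8 * 1024 * 1024   # 8M addressable 16-bit words
    bits_per_word = 16
    capacity_mb = chips * words_per_chip * bits_per_word / 8 / (1024 * 1024)
    bus_width = chips * bits_per_word
    print(f"{capacity_mb:.0f}MB of local RAM on a {bus_width}-bit bus")   # 64MB, 64-bit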
Cybercat - Wednesday, December 15, 2004 - link
Basically, this is saying that this generation's $90 part is no better than last generation's $90 part. That's sad. I was hoping the performance leap of this generation would be felt through all segments of the market.
mczak - Wednesday, December 15, 2004 - link
#12, IGP would indeed be interesting. In fact, TurboCache seems quite similar to ATI's HyperMemory/sideport in their IGP.
Cygni - Wednesday, December 15, 2004 - link
In other news, nForce4 (2 months ago) and Xpress 200 (1 month ago) STILL aren't on the market. Good lord. Talk about paper launches from ATI and Nvidia...
ViRGE - Wednesday, December 15, 2004 - link
OK, I have to admit I'm a bit confused here. Which cards exactly did you test, the 6200/16MB (32-bit) and the 6200/32MB (64-bit), or what? And what about the 6200/64MB, will it be a 64-bit card, or a whole 128-bit card?
Cybercat - Wednesday, December 15, 2004 - link
What does 2 ROP stand for? :P *blush*
PrinceGaz - Wednesday, December 15, 2004 - link
#15- I've got a Ti4200, but I'd never call it nVidia's best card. It is still the best card you can get in the bargain-bin price range it is now sold at (other cards at a similar price are the FX5200 and Radeon 9200), though supplies of new Ti4200s are very limited these days.
#12- Thanks Derek for answering my question about higher resolutions. As only the front buffer needs to be in the onboard memory (because it's absolutely critical that the memory accessed to send the signal to the display is always available without any unpredictable delay), that means even the 16MB 6200 can run at any resolution, even 2560x1600 in theory, though performance would probably be terrible as everything else would need to be in system memory.
housecat - Wednesday, December 15, 2004 - link
Another Nvidia innovation done right.
MAValpha - Wednesday, December 15, 2004 - link
I would expect the 6200 to blow the Ti4200 out of the water, because the FX5700/Ultra is considered comparable to the GF4Ti. By comparison, many places are pitting the 6200 against the higher-end FX5900, and it holds its own.
Even with the slower TurboCache, it should still be on par with a 4600, if not a little bit faster. Notice how the more powerful version beats an X300 across the board, a card derived from the 9600 series?
DigitalDivine - Wednesday, December 15, 2004 - link
How about raw performance numbers pitting the 6200 against nVidia's best graphics card IMO, the Ti4200?
plk21 - Wednesday, December 15, 2004 - link
I like seeing such an inexpensive part playing newer games, but I'd hardly call it real-world to pair a $75 video card with an Athlon 64 4000+, which Newegg lists at $719 right now.
It'd be interesting to see how these cards fare with a more realistic system for them to be paired with, i.e. a Sempron 2800+.
sphinx - Wednesday, December 15, 2004 - link
I think this is a good offering from NVIDIA. Passive cooling is a VERY good solution in my line of work: one less thing I have to worry about silencing, as I use my PC to make money, not for playing games. Don't get me wrong, I do like to play an occasional game from time to time, but I use my Xbox for gaming. When this card comes out, I'll get one.
DerekWilson - Wednesday, December 15, 2004 - link
#9, it'll only use 128MB if a full 128MB is needed at the same time -- which isn't usually the case, but we haven't done an in-depth study on this yet. Also, keep in mind that we still tested at the absolute highest quality settings with no AA/AF (except Doom 3, which even used 8x AF). We were not seeing slideshow framerates. The FX5200 doesn't even support all the features of the FX5900, let alone the 6200TC. Nor does the FX5200 perform as well at equivalent settings.
IGP is something I talked to NVIDIA about. This solution really could be an Intel Extreme Graphics killer (in the integrated market). In fact, with the developments in the marketplace, Intel may finally get up and start moving to create a graphics solution that actually works. There are other markets to look for TurboCache solutions to show up in as well.
#11 ... The packaging issue is touchy. We'll see how vendors pull it off when it happens. The cards do run as if they had a full 128MB of RAM, so that's very important to get across. We do feel that talking about the physical layout of the card and the method of support is important as well.
#8, 1600x1200x32 only requires that 7.5MB be stored locally. As was mentioned in the article, only the FRONT buffer needs to be local to the graphics card. This means that the depth buffer, back buffer, and other render surfaces can all be in system memory. I know it's kind of hard to believe, but this card can actually draw everything directly into system RAM from the pixel pipes and ROPs. When the buffers are swapped to display the back buffer, what's in system memory is copied into graphics memory.
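A minimal sketch of that arithmetic (assuming 4 bytes per pixel and no surface padding, which real drivers may add):
    # Why only the front buffer has to fit in local RAM at 1600x1200x32.
    width, height, bytes_per_pixel = 1600, 1200, 4
    mib = 1024 * 1024
    front_buffer = width * height * bytes_per_pixel   # must stay local for scanout
    back_buffer  = width * height * bytes_per_pixel   # can live in system RAM
    z_buffer     = width * height * 4                 # depth/stencil, also remote
    print(f"front buffer only: {front_buffer / mib:.1f} MiB")                       # ~7.3 MiB
    print(f"front + back + Z:  {(front_buffer + back_buffer + z_buffer) / mib:.1f} MiB")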
It really is very cool for a low-performance budget part.
And we might see higher-performance versions of TurboCache in the future ... though NVIDIA isn't talking about them yet. It might be nice to have the possibility of an expanded framebuffer with more system RAM if the user wanted to enable that feature.
TurboCache is actually a performance-enhancing feature. It's just that it's enhancing the performance of a card with either 16MB or 32MB of onboard RAM and either a 32- or 64-bit memory bus ... :-)
DAPUNISHER - Wednesday, December 15, 2004 - link
"NVIDIA has defined a strict set of packaging standards around which the GeForce 6200 with TurboCache supporting 128MB will be marketed. The boxes must have text, which indicates that a minimum of 512MB of system RAM is necessary for the full 128MB of graphics RAM support. It doesn't seem to require that a discloser of the actual amount of onboard RAM be displayed, which is not something that we support. It is understandable that board vendors are nervous about how this marketing will go over, no matter what wording or information is included on the package."More bullsh!t deceptive advertising to bilk uninformed consumers out of their money.
MAValpha - Wednesday, December 15, 2004 - link
#7, I was thinking the same thing. This concept seems absolutely perfect for an nForce5 IGP, should NVidia decide to go that route. And, once again, NVidia's approach to budget seems superior to ATI's, at least at an initial glance. A heavily-castrated 6200TC running off SHARED RAM STILL manages to outperform a full X300? Come on, ATI, get with it!
I gotta wonder, though: this solution seems unbelievably dependent on "proper implementation of the PCIe architecture." This means that the card can never be coupled with HSI for older systems, and transitional boards will have trouble running the card (Gigabyte's PT880 with converted PEG, for example; the PT880 natively supports AGP). Does this mean that a budget card on a budget motherboard will suffer significantly?
mindless1 - Wednesday, December 15, 2004 - link
IMO, even (as low as) $79 is too expensive. Taking 128MB of system memory away on a system budgeted to include one of these would typically leave 384MB, robbing the system of memory to pay nVidia et al. for a part without (much) memory of its own.
I tend to disagree with the slant of the article too; it's not necessarily a good thing to try pushing modern gaming eye candy at the expense of performance. What looks good isn't a crisp and anti-aliased slideshow, but a playable game. Even someone just beginning at gaming can discern the lag when fragging it out.
We're only looking at current games now; the bar for performance will be raised, but the cards are memory-bandwidth limited due to the architecture. These might look like a good alternative for someone who went and paid $90 for an FX5200 from Best Buy last year, but in a budget system it's going to be tough to justify ~$80-100 when a few bucks more won't rob one of system memory or as much performance.
Even so, historically we've seen that initial price points do fall, and it's better to see modern feature support than a rehash of an FX5xxx.
PrinceGaz - Wednesday, December 15, 2004 - link
nVidia's marketing department must be really pleased with coming up with the name "TurboCache". It makes it sound like it's faster than a normal card without TurboCache, whereas in reality the opposite is true. Uninformed customers would probably choose a TurboCache version over a normal version, even if they were priced the same!
----
Derek- does the 16MB 6200 have limitations on what resolutions can be used in games? I know you wouldn't want to run it at 1600x1200x32 in Far Cry, for instance, but in older games like Quake 3 it should be fast enough.
The thing is that the frame buffer at 1600x1200x32 requires 7.3MB, so with double-buffering you're using up a total of 14.65MB, leaving just 1.35MB for the Z-buffer and anything else it needs to keep in local memory, which might not be enough. I'm assuming the frame the card is currently displaying must be held in local memory, as well as the frame being worked on.
The situation is even worse with anti-aliasing, as the frame buffer for the frame being worked on is multiplied in size by the level of AA. At 1280x960x32 with 4xAA, that single frame buffer alone is 18.75MB, meaning it won't fit in the 16MB 6200. It might not even manage 1024x768 with 4xAA, as the two frame buffers would total 15MB (12MB for the one being worked on, 3MB for the one being displayed).
It would be interesting to know what the resolution limits for the 16MB (and 32MB) cards are, with and without anti-aliasing.
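For reference, the buffer sizes quoted above work out like this (assuming 4 bytes per pixel and that a multisampled buffer scales linearly with the AA factor):
    # Reproduces the anti-aliased frame buffer figures quoted above.
    mib = 1024 * 1024
    def buffer_mib(width, height, aa_factor=1):
        return width * height * 4 * aa_factor / mib
    print(f"1280x960, 4xAA working buffer: {buffer_mib(1280, 960, 4):.2f} MiB")   # 18.75
    print(f"1024x768, 4xAA working buffer: {buffer_mib(1024, 768, 4):.2f} MiB")   # 12.00
    print(f"1024x768 display buffer:       {buffer_mib(1024, 768):.2f} MiB")      #  3.00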
Spacecomber - Wednesday, December 15, 2004 - link
I may be way off base with this question, but would this sort of GPU lend itself well to some sort of integrated, onboard graphics solution? Even if it isn't integrated directly into the main chipset (or chip for Nvidia), could it simply be soldered to the motherboard somewhere?
Somehow this seems to make more sense to me as a use for this technology than putting it on a dedicated video card, especially if the price point is not that much less than a regular 6200.
bamacre - Wednesday, December 15, 2004 - link
Great review.
Wow, almost 50 fps in HL2 at 10x7, that is pretty good for a budget card.
I'd like to see MS, ATI, and Nvidia get more people into PC gaming; that would make for better and cheaper games for those of us who are already loving it.
DerekWilson - Wednesday, December 15, 2004 - link
Actually, nForce4 + AMD systems are looking better than Intel non-925XE based systems for TurboCache parts. We haven't looked at the 925XE yet though ... that could be interesting. But overhead hurts utilization a lot on a serial bus, and having more than 6.4GB/s from memory might not be that useful.
The efficiency of getting bandwidth across the PCI Express bus will still be the main bottleneck in systems, though. Chipsets need to implement PCI Express properly and well. That's really the important part. The 915 chipset is an example of what not to do.
jenand - Wednesday, December 15, 2004 - link
TurboCache and HyperMemory cards should do better on Intel-based systems, as they do not need to go via the HTT to get to the memory. So I agree with #3: show us some i925X(E) tests. I'm not expecting higher scores on the Intel systems, however, just a larger gain from this type of technology.
manno - Wednesday, December 15, 2004 - link
Any chance we can see numbers for this thing on an Intel system with higher-bandwidth DDR2 memory? Maybe even overclocked DDR2?
R3MF - Wednesday, December 15, 2004 - link
"The next thing that we're waiting to see is a working implementation of virtual memory for the graphics subsystem. The entire graphics industry has been chomping at the bit for that one for years now."3DLabs VP10 GPU has a feature that allows system memory to be treated as virtual memory for the GPU, and it has been out 12 months or more.
faboloso112 - Wednesday, December 15, 2004 - link
I'm an ATI fanboi, but good job nVidia! Your products keep on getting better and better! (Though I don't plan on downgrading to any of these from my 9800 Pro... it's still a great budget card.) nVidia has certainly won in this generation of the graphics card wars... let's see what's going to happen when the next-gen lineup comes out (though I doubt that'll be anytime soon).