GeForce GTX 260 reloaded vs. the Radeon HD 4870 1GB

The cheapskates of the world seemed to enjoy our recent look at relatively inexpensive video cards, and rightly so. For perhaps the first time in recent memory, we found considerable value in graphics cards that cost less than $100. But today we separate the bargain-minded from the merely cheap by considering a pair of high-end graphics cards that offer substantial value in their own right. Tight competition between AMD and Nvidia has resulted in two new video cards that redefine their end of the market for just a smidgen under 300 bucks. Rather stealthily, Nvidia has ramped up the performance of its GeForce GTX 260 by enabling an additional thread processing cluster and cranking up clock speeds. Meanwhile, AMD has made its Radeon HD 4870 even more potent by doubling the onboard memory to a full gigabyte. The question is: Has Nvidia done enough to catch up with AMD, or will the additional memory allow the Radeon to hold on to its performance advantage? We aim to find out.

The GeForce GTX 260—reloaded

As you may know if you’re a hopeless geek/regular reader, the GeForce GTX series of graphics cards is based on Nvidia’s GT200 graphics processor. The GeForce GTX 280 is the full-on version of the GT200, while the less expensive GTX 260 has two of its ten thread processing clusters and one of its eight ROP partitions disabled. Disabling parts of a chip for product segmentation purposes is a common practice, and it can provide a fitting home for chips with portions that are less than perfect, so this basic plan makes sense. Trouble is, Nvidia didn’t count on the Radeon HD 4870 outgunning the GTX 260 at a lower price, which is just what the 4870 did upon its debut. Nvidia has responded in several ways, first by cutting prices and, more recently, by quietly changing the spec on the GTX 260 so that nine of its ten thread processing clusters (TPCs) are enabled.

The additional cluster gives the GTX 260 a little more graphics power, raising its stream processor count from 192 to 216 and its peak texture filtering capacity from 64 texels per clock to 72. In addition to that, prevailing clock speeds are up quite a bit. Let me give you a couple of examples.

The handsomely stickered card you see above is the GeForce GTX 260 AMP²! Edition from Zotac. (I would like to thank Zotac for making me type AMP²!, since I could use the exercise.) Not only does it have an additional TPC, but its clock speeds are quite a bit higher than those of early GTX 260s. The AMP²! has a 649MHz GPU core, 1404MHz shaders, and 896MB of GDDR3 memory at 1053MHz, up from 576/1242/999MHz on the first wave of GTX 260 cards. The core and shader clocks are also, I should note, higher than the stock clocks for the GeForce GTX 280, which runs at 602/1296/1107MHz, although the GTX 280’s stock memory clock remains higher.

What this means, essentially, is that the revamped GTX 260 now offers almost all of the performance of the original GTX 280 at under $300, if you count the Zotac AMP²!’s $299.99 price at Newegg as “under $300.” Happily, there aren’t any mail-in rebates at all involved in that price, and Zotac even throws in a copy of the excellent Race Driver GRID. What’s not to like?

Well, maybe one thing. Why the devil did Nvidia hang on to the GTX 260 name when they decided to upgrade their product this substantially? I have it on good authority that the number “270” was available to them, conveniently located between 260 and 280. The mystery of product naming deepens and becomes more opaque with each passing day.

But, hey, faster games and stuff, so I’ll get over it. I’m just going to call the version with 216 SPs the GeForce GTX 260 Reloaded for clarity’s sake.

Zotac covers the AMP²! with a limited lifetime warranty, but you’ve gotta register via Zotac’s website within 30 days of purchase or the warranty drops to a two-year term. That’s the deal for North America, at least; other regions are different. Also, confusingly, going to Zotac’s main website and telling it you’re from the U.S. will take you to a site that says warranty registration isn’t open for U.S. customers. Instead, you have to go directly to www.zotacusa.com in order to register. Zotac does have U.S.-based tech support and RMA processing, though, along with toll-free tech support via phone from 9AM to 6PM Pacific.

EVGA’s take on the GTX 260 Reloaded has the same memory clocks as the Zotac, but its core and shader clocks are a little more conservative at 626/1350MHz. The EVGA costs a little more, too, at $309.99, but you do get the comfort of 24×7 toll-free tech support and EVGA’s Step-Up plan, which allows users to trade in their cards for credit toward an upgrade within 90 days of purchase. Like Zotac, EVGA offers a lifetime warranty, but only if you register within 30 days. Otherwise, the term is just one year. EVGA doesn’t bundle a game with its card, but folks who register via its website will get a free copy of 3DMark Vantage Advanced Edition, which, frankly, ain’t GRID.

Despite the GTX 260 Reloaded’s gains in GPU power, the cards themselves still require only two six-pin PCIe power plugs, unlike the GTX 280, which pairs a six-pin connector with an eight-pin one.

In addition to the examples above, several other brands are selling GTX 260 Reloaded cards for around 300 bucks—less in a few cases. For instance, Zotac’s much lower-clocked AMP! (not squared) Edition card is $289.99, although it’s hard to see the point unless you’re on a very specific budget. The new GTX 260s are pushing down prices on the older, eight-cluster versions, as well. MSI is selling an eight-cluster GTX 260 for $239.99, and it comes with a $40 mail-in rebate.

The 4870 doubles up on RAM

AMD’s answer to all of this is the card pictured below:

Pretty much looks like any old Radeon HD 4870, but the difference is simple: this one has twice as much GDDR5 memory onboard, a full gigabyte. We’ve been reviewing video cards with a gig of RAM for a while now and puzzling over when that much memory would really be necessary. Finally, we’re starting to see cases where 512MB clearly won’t suffice, and the Radeon HD 4870 GPU is powerful enough to perform well in some of those situations when given enough memory, as you’ll see in the following pages. Even if having more RAM onboard is largely future-proofing for upcoming games, AMD arguably needed to match the competition, and the GTX 260 has had 896MB of RAM since its introduction—close enough to a gig for most intents and purposes and well more than the 512MB on the first 4870s.

Although our example of the 4870 1GB is a reference card from AMD, a number of Radeon board vendors are now selling these cards. Like many others, Diamond’s rendition is $299.99 at Newegg, no rebate required. Trouble is, Diamond’s warranty term is only a year, and even that short window of coverage becomes invalid if you don’t register the product online within 30 days. So, yeah. That’s as much fun as impromptu dentistry. You might do better by going with the Asus version for $289.99. Asus covers its cards for three years and—miracle of miracles!—tracks them via serial number, so no registration is required to get warranty service.

I really don’t care to be talking so much about warranty terms and mail-in rebates, but these things have become bigger issues in recent years. Mail-in rebates have spread like a cancer, especially among Nvidia’s partners, making video card pricing anything but straightforward. Meanwhile, board vendors have skimped on support, introducing these bogus warranty registration requirements. Both tactics rely on the same insight: inevitably, many customers won’t fulfill the exact paperwork requirements, thus saving the vendor some money. In some cases, both rebates and warranties seem to require original receipts, which are more or less non-existent when you’re buying something online. So here we are, trying to help you tiptoe through a minefield. Props to Asus for doing the right thing here, and boo to Diamond for doing the exact opposite.

As one might expect, the 512MB versions of the 4870 are dropping in price as the 1GB cards arrive. Quite a few of them are available for $269.99, some with rebates attached on top of that, if you enjoy games of chance.

All of which sets us up for a renewed comparison of the Radeon HD 4870 1GB with the “reloaded” GeForce GTX 260. Both look like good options, and the real values may be in the cards they’ve essentially replaced and pushed down in price. As is our custom, we’ve included a whole raft of cards for comparison, ranging from the lowly GeForce 9500 GT to the exotic Radeon HD 4870 X2.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor Core 2 Extreme QX9650 3.0GHz
System bus 1333MHz (333MHz quad-pumped)
Motherboard Gigabyte GA-X38-DQ6
BIOS revision F9a
North bridge X38 MCH
South bridge ICH9R
Chipset drivers INF update 8.3.1.1009
Matrix Storage Manager 7.8
Memory size 2GB (4 DIMMs)
Memory type Corsair TWIN2X40966400C4DHX DDR2 SDRAM at 800MHz
CAS latency (CL) 4
RAS to CAS delay (tRCD) 4
RAS precharge (tRP) 4
Cycle time (tRAS) 12
Command rate 2T
Audio Integrated ICH9R/ALC889A
with RealTek 6.0.1.5618 drivers
Graphics Radeon HD 4670 512MB GDDR3 PCIe with Catalyst 8.53-080805a-067874E-ATI drivers
Diamond Radeon HD 3850 512MB PCIe with Catalyst 8.8 drivers
Asus Radeon HD 4850 512MB PCIe with Catalyst 8.8 drivers
Diamond Radeon HD 4870 512MB PCIe with Catalyst 8.9 drivers
Radeon HD 4870 1GB PCIe with Catalyst 8.9 drivers
Palit Radeon HD 4870 X2 2GB PCIe with Catalyst 8.9 drivers
Zotac GeForce 9500 GT ZONE 512MB GDDR3 PCIe with ForceWare 177.92 drivers
EVGA GeForce 9600 GSO 384MB PCIe with ForceWare 177.92 drivers
BFG GeForce 9600 GT OCX 512MB PCIe with ForceWare 177.92 drivers
Palit GeForce 9800 GT 1GB PCIe with ForceWare 177.92 drivers
GeForce 9800 GTX+ 512MB PCIe with ForceWare 177.92 drivers
Palit GeForce GTX 260 896MB PCIe with ForceWare 178.13 drivers
Zotac GeForce GTX 260 (216 SPs) AMP²! Edition 896MB PCIe with ForceWare 178.13 drivers
XFX GeForce GTX 280 1GB PCIe with ForceWare 178.13 drivers
Hard drive WD Caviar SE16 320GB SATA
OS Windows Vista Ultimate x64 Edition
OS updates Service Pack 1, DirectX March 2008 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMM vendors.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The theory—and practice

To see how the changes to the GeForce GTX 260 affect its position in the grand scheme, here’s a comparison of some recent graphics cards. Note that we’ve derived these numbers from the actual clock speeds of the cards we’re testing rather than from the stock clock speeds established by the GPU vendors.

                         Peak        Peak bilinear  Peak bilinear  Peak       Peak shader arithmetic
                         pixel       texel          FP16 texel     memory     (GFLOPS)
                         fill rate   filtering rate filtering rate bandwidth  Single-   Dual-
                         (Gpixels/s) (Gtexels/s)    (Gtexels/s)    (GB/s)     issue     issue

GeForce 9500 GT           4.4         8.8            4.4            25.6        90       134
GeForce 9600 GSO          6.7        26.6           13.3            38.5       259       389
GeForce 9600 GT          11.6        23.2           11.6            62.2       237       355
GeForce 9800 GT           9.6        33.6           16.8            57.6       339       508
GeForce 9800 GTX+        11.8        47.2           23.6            70.4       470       705
GeForce 9800 GX2         19.2        76.8           38.4           128.0       768      1152
GeForce GTX 260          16.1        36.9           18.4           111.9       477       715
GeForce GTX 260 216 SPs  18.1        46.7           23.3           117.9       607       910
GeForce GTX 280          19.3        48.2           24.1           141.7       622       933
Radeon HD 4650            4.8        19.2            9.6            16.0       384        —
Radeon HD 4670            6.0        24.0           12.0            32.0       480        —
Radeon HD 3850           11.6        11.6           11.6            57.6       464        —
Radeon HD 4850           10.0        25.0           12.5            63.6      1000        —
Radeon HD 4870           12.0        30.0           15.0           115.2      1200        —
Radeon HD 4870 X2        24.0        60.0           30.0           230.4      2400        —

Like I said before, the GTX 260 Reloaded comes very, very close to the original GeForce GTX 280. By contrast, the Radeon HD 4870’s theoretical throughput is unchanged by the addition of more RAM.
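For the curious, the table’s figures fall straight out of unit counts and clock speeds. Here’s a sketch using the GTX 260 Reloaded as tested: the unit counts (28 ROPs, 72 texture filtering units, a 448-bit memory bus, 216 SPs) are the GT200’s with one TPC and one ROP partition disabled, and the clocks are the Zotac AMP²!’s. The half-rate FP16 filtering and the two- and three-flop-per-clock shader figures reflect the assumptions behind our peak numbers.

```python
# Theoretical peaks for the GTX 260 "Reloaded" (216 SPs), derived from the
# Zotac AMP²!'s clocks: 649MHz core, 1404MHz shaders, 1053MHz GDDR3.
core_mhz, shader_mhz, mem_mhz = 649, 1404, 1053

pixel_fill    = 28 * core_mhz / 1000          # Gpixels/s: 28 ROPs x core clock
texel_rate    = 72 * core_mhz / 1000          # Gtexels/s: 72 filter units x core clock
fp16_rate     = texel_rate / 2                # FP16 filtering runs at half rate
mem_bw        = 448 / 8 * mem_mhz * 2 / 1000  # GB/s: 448-bit bus x DDR data rate
gflops_single = 216 * shader_mhz * 2 / 1000   # MAD = 2 flops per SP per clock
gflops_dual   = 216 * shader_mhz * 3 / 1000   # MAD + MUL = 3 flops per SP per clock
```

Run the same arithmetic with 24 ROPs, 64 filter units, 192 SPs, and 576/1242/999MHz clocks, and you get the original GTX 260’s row.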

In theory, it would seem that the GTX 260 Reloaded should have the edge in fill rate and texturing capacity, while the 4870 1GB ought to have more shader power. However, these things are often complicated in actual use by the quirks and varying efficiencies of the GPU architectures in question. Here’s how things shake out when we measure them with 3DMark’s directed benchmark tests.

This is pretty much the inverse of what we expected: the 4870 1GB proves faster in the color and texture fill rate tests, while the GTX 260 Reloaded takes three out of the four shader processing tests. Funny how that works. Another intriguing result: the GTX 260 Reloaded outruns the GTX 280 in the GPU cloth and particles benchmarks. This result suggests Nvidia may be using only a subset of the GT200’s thread processing clusters for certain types of work, such as vertex processing. In that case, the GTX 260 Reloaded’s higher shader clocks would matter more than the GTX 280’s additional TPC.

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. We’ve chosen to test at display resolutions of 1280×1024, 1680×1050, and 1920×1200, which were the three most popular resolutions in our hardware survey. We enabled image quality enhancements like 4X antialiasing and 16X anisotropic filtering, and we also added 2560×1600 to the list for the very fastest cards, in order to really stress them.

Nvidia gains some ground on AMD here, since CoD4 doesn’t appear to benefit much at all from the 4870 1GB’s additional video memory. The GTX 260 Reloaded is true to form, performing quite a bit better than the original GTX 260—and almost as well as the GTX 280. That’s enough of a boost to push the GTX 260 Reloaded past the 4870 1GB, although you probably would only notice the difference at 2560×1600.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested with most of Episode Two‘s in-game image quality options turned up, including HDR lighting. Reflections were set to “reflect world,” and motion blur was disabled.

The trend says the GTX 260 Reloaded scales a little better to the highest resolutions here, but we’re averaging over 60 FPS at 2560×1600, so I’m not sure it matters much.

Enemy Territory: Quake Wars

We tested this game with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” Shadows and smooth foliage were enabled, but soft particles were disabled. Again, we used a custom timedemo recorded for use in this review.

Once more, even the highest resolution tested isn’t enough to confound the high-end cards. Still, the GTX 260 Reloaded slightly outperforms the Radeon HD 4870 1GB at higher resolutions.

Crysis Warhead

Rather than use a timedemo, I tested Crysis Warhead by playing the game and using FRAPS to record frame rates. Because this way of doing things can introduce a lot of variation from one run to the next, I tested each card in five 60-second gameplay sessions. The benefit of testing in this way is that we get more info about exactly how the cards performed, including low frame rate numbers and frame-by-frame performance data. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
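The statistics we pull from those FRAPS sessions are simple to derive from per-frame render times. A minimal sketch, using made-up illustration numbers rather than anything from our actual test data:

```python
# Sketch: summarizing FRAPS-style per-frame timing data. FRAPS can log how
# long each frame took to render (in ms); average and minimum frame rates
# follow directly from those gaps. The sample data here is hypothetical.

def fps_stats(frame_times_ms):
    """Given per-frame render times in milliseconds, return (avg_fps, min_fps)."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    # The "minimum frame rate" is the slowest single frame, expressed as FPS.
    min_fps = 1000.0 / max(frame_times_ms)
    return avg_fps, min_fps

# Each 60-second gameplay session yields one such list; we run five per card.
session = [16.7, 20.0, 25.0, 33.3, 16.7]  # ms per frame (illustrative)
avg, low = fps_stats(session)
```

Averaging five sessions smooths out run-to-run variation, while the per-frame numbers let us plot exactly where and how badly a card stumbles.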

We first used Warhead‘s “Mainstream” quality level for testing, which is the second option on a ladder that has four steps. The “Gamer” and “Enthusiast” settings are both higher quality levels.

Both the GTX 260 Reloaded and the 4870 1GB perform acceptably here, with minimum frame rates of 35 and 29 FPS, respectively. The GTX 260 Reloaded is faster, though, as is the GeForce 9800 GTX+, surprisingly enough.

Switching to a higher resolution and quality level tends to clarify things a little, and what we learn is intriguing. The 4870 1GB appears to benefit from its additional video memory here. Also, the Radeon HD 4870 X2 really seems to struggle, just as it did in our first Warhead test. Apparently, AMD doesn’t have CrossFire working properly with this game yet.

Race Driver GRID

I tested this absolutely gorgeous-looking game with FRAPS, as well, and in order to keep things simple, I decided to capture frame rates over a single, longer session as I raced around the track. This approach has the advantage of letting me report second-by-second frame rates for our entire test session. All of the game’s quality settings were maxed out while we tested.

Check out that frame rate plot for the Radeon HD 4870. That’s a classic example of what happens when you run out of video memory. GRID was virtually unplayable at certain points around the track, where frame rates dropped deep into the single digits. The exact same Radeon HD 4870 GPU with 1GB of memory, however, produced performance easily superior to any of the GeForces.

Blu-ray HD video decoding and playback

One of the things that buying a new graphics card will get you that an older card or integrated graphics solution might not have is decode and playback acceleration for HD video, including Blu-ray discs. The latest GPUs include dedicated logic blocks that offload from the CPU much of the work of decoding the most common video compression formats. To test how well these cards perform that function, we used CyberLink’s new release 8 of PowerDVD, a Lite-ON BD-ROM drive, and the Blu-ray disc version of Live Free or Die Hard. Besides having the “I’m a Mac” guy in it, this movie is encoded in the AVC format (which includes H.264 video compression) at a 28Mbps bit rate.

In order to really stress these cards, we installed a lowly Core 2 Duo E4300 processor (a dual-core 1.8GHz CPU) in our test rig, and we asked the cards to scale up our 1080p movie to fit the native 2560×1600 resolution of our Dell 30″ monitor. We then recorded CPU utilization over a duration of 100 seconds while playing back chapter four of our movie.

The pattern we’ve seen before with low-end cards holds true here: the Radeons tend to be more effective at offloading the decode task from the CPU than the GeForces. The differences aren’t huge, though, and they’re accentuated by the slow CPU. With the usual fast quad-core processor in our test system, you’re looking at CPU utilization closer to 5% while playing a Blu-ray movie.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Half-Life 2 Episode Two at 1680×1050 resolution, using the same settings we did for performance testing.

In spite of the fact that the GeForce GTX 260 Reloaded is based on a much larger chip, its power use is lower than the Radeon HD 4870’s, especially at idle, where Nvidia has worked wonders. The GTX 260 Reloaded-based system draws 30W less at the wall socket than our otherwise-identical system equipped with the 4870 1GB. AMD does seem to be making progress, though. The newer 1GB version of the 4870 draws less power at idle than the 512MB model.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Some of the quieter cards, as you can see, fell below the ~40 dB threshold of our sound level meter, which robs us of exact data but points to a happy trend toward quiet coolers. The 4870 1GB and GTX 260 Reloaded were both able to register on our meter, but just barely. Neither card is annoyingly loud, and both exhaust hot air through openings in their expansion slot covers. Oddly enough, the GTX 260 Reloaded is quite a bit quieter than the GTX 280.

GPU temperatures

Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. These temperatures were recorded while running the “rthdribl” demo in a window.

The 4870 1GB keeps alive the Radeon HD 4000-series tradition of high GPU temperatures. Its only close company on the Nvidia side of the aisle comes from a passively cooled GeForce 9500 GT and the GTX 280.

Conclusions

Despite the fact that these are tremendously complex chips with hundreds of millions of transistors, AMD and Nvidia have achieved a remarkable amount of parity in their GPUs. In terms of image quality, overall features, performance, and even price, the Radeon HD 4870 1GB and the GeForce GTX 260 “Reloaded” are practically interchangeable. That fact represents something of a comeback for Nvidia, since the older GTX 260 cost more than the 4870 and didn’t perform quite as well. If anything, the GTX 260 Reloaded was a smidgen faster than the 4870 1GB overall in our test suite.

The GTX 260 is based on a much larger chip with a wider path to memory, which almost certainly means it costs more to make than the 4870, but as a consumer, you’d never know it when using the two products, so I’m not sure it matters much for our purposes. Even the GTX 260’s power consumption is lower than the 4870’s, and its noise levels are comparable.

In the grand scheme, Nvidia may have a slight edge on AMD in some difficult-to-quantify ways. For instance, GeForce cards generally performed better with the newest game we tested, Crysis Warhead, and that’s not a surprising thing to see. Nvidia seems to do a better job of working with developers and ensuring good compatibility between its GPUs and new-from-the-box games. By contrast, the Radeons performed below our expectations in Warhead, and judging by the performance we saw from the Radeon HD 4870 X2, the CrossFire multi-GPU scheme isn’t yet working properly with this title. Nvidia may gain an additional advantage if and when we see PhysX-enabled games come to pass, but I wouldn’t factor that into a purchasing decision today.

On the other hand, AMD releases new drivers more often and has much better support for multiple monitors with CrossFire than the kludgy arrangement Nvidia uses with SLI. Also, CrossFire is broadly compatible with Intel chipsets, while Nvidia generally restricts SLI to nForce-based motherboards. So I dunno.

One thing I do know is that, whichever one you prefer, both of these cards are wicked fast for 300 bucks. In fact, you probably don’t need either one of them unless you plan on using it with a nice, big monitor with a resolution of at least 1920×1080 or 1920×1200. Heck, even then, you can get by very well in almost all of today’s games with something like a $170 Radeon HD 4850. What you get when you step up to one of these $300 cards is substantially more GPU power, memory bandwidth, and longevity potential. That may not always be apparent, but it may sometimes be painfully so. Only in a couple of cases—Warhead at 1920×1200 and GRID at 2560×1600—did we see the 4870 1GB’s extra memory make a difference versus the 4870 512MB. But the difference in GRID was night and day. Personally, given the choice, I’d pony up the extra cash for the 1GB card, just for the peace of mind. But I’m probably crazy for saying so.

Comments closed
    • StuG
    • 11 years ago

    I personally think that its perfectly resonable for TR to compare cards that are $300 each, but what they needed to do was to point out the the 260 was heavily overclocked somwhere and multiple times, cause a HD4870 heavily overclocked would have wooped in this article. End Point.

      • Silicondoc
      • 11 years ago

      NO, red fanboy , the 260’s are NOT overclocked, the proof is on the GB’s chart on page 4, which show 111 and 117 for the mem bandwidth.
      Sorry, even the MSI gtx260 cited by link for price and rebate at newegg earlier than page 4, has a 121 GB, since it’s core is over @ 620 from the factory.

      SO THE BOTTOM LINE IS YOU’RE WRONG AND A LIAR –

      Now apologize to your fellow enthusiasts.

    • Kretschmer
    • 11 years ago

    I appreciate the high quality of TR reviews, and this one was no exception.

    One point of curiosity, though:

    Why was a reference 4870 1GB compared to a heavily-overclocked 260v2? I can see it from a “best performance at a given price point” perspective, but doesn’t this distort the basic premise of the article? It would seem to skew results by how quick secondary manufacturers are to market with aggressively-overclocked cards. When the 4870 1GB OC (why wasn’t the 1GB offering called 4875?) cards come to market, a consumer wouldn’t be able to use this review to compare offerings.

    Perhaps include a reference card for each article flagship product in future reviews?

    • Byte Storm
    • 11 years ago

    It appears to me the many of you are arguing the wrong points.

    “Overclocked” – Whether factory performed or user performed, is just that, Overclocked. Period. End-of-Story.

    Before the other side says, “YAY someone gets my point!!!” You point is well, pointless. This review is between 2 cards of similar price points ~$299. Clocking doesn’t mean anything. Comparing it to a different card will also do nothing. Choosing to compare these 2 cards is a price for power comparison, and nothing more. Stop your whining about, “Why didn’t he compare similarly clocked cards!!! BLARGH!!!”

    Enough, you are all very tiring to listen too, while a majority are completely ignoring this fact. (Some of you thankfully got it)

      • sammorris
      • 11 years ago

      The most difficult aspect of this is the differing price markets. The 216core GTX260 is almost no more in the US than the HD4870, whereas in the UK it costs far more. If you’re american, buy the Geforce. If you’re british, buy the 4870, brand/pre-overclocked irrelevant, that’s just what you do.

        • MadManOriginal
        • 11 years ago

        Complicating it even more is that in the US the 512MB 4870 is a fair amount less than the GTX 260-216, and the old GTX260 is even less than that, and there are factory-oc’d 4870 1GB for the same $300.

        • Xaser04
        • 11 years ago

        If you look about a bit you can get a BFG GTX260 Maxcore OC for £204.

        This makes it only slightly more expensive than the HD4870 1gb.

          • sammorris
          • 11 years ago

          I’ve never seen one anywhere near that cheap. Presumably we’re talking ebay here.

    • End User
    • 11 years ago

    My 8800 GTS 640MB 112SP was unable to deal with S.T.A.L.K.E.R. Clear Sky in DX 10 mode @ 1920×1200 so I was thinking about upgrading prior to this article. My local dealer had one Core 216 in stock so I picked it up. The Core 216 provides playable frame rates in DX 10 mode @ 1920×1200 with all the settings maxed.

    I tweaked it up to AMP²! levels using the supplied OC’ing utility.

      • ish718
      • 11 years ago

      Well yea cleary sky is quite demanding in DX10.
      What cpu are you using?

      • sammorris
      • 11 years ago

      The enhanced dynamic lighting for Clear Sky is outrageous. Turning that off seems to triple frame rates.

    • ish718
    • 11 years ago

    These cards are too close in performance in most cases, to actually argue over which one is better, it doesn’t make sense really. I’m not going to kill someone over a 5 fps difference
    It comes down to which company you prefer, Nvidia or ATI/AMD

      • robspierre6
      • 11 years ago

      Actually, at 1680×1050 res.,both the 4870’s beat both the 280gtx and the 216 core in all the tested games.
      But, the 48701Gb version doesn’t scale as it should at higher resolutions

        • ish718
        • 11 years ago

        I don’t think you understand my point.

        • NeXus 6
        • 11 years ago

        They may beat the 280 but they aren’t that much faster. The 4870 and GTX 260/280 are overkill for most games at 1680×1050 anyway, so who cares? Save some money and get a 4850 or 9800 GTX+ if you play at that resolution.

    • Chrispy_
    • 11 years ago

    I’ll be labelled a fanboy for saying this but when all other factors are equal I just seem to prefer nvidia cards.

    It’s hard to qualify but I’ll try to give you my opinion:

    The Catalyst Control Centre upsets me – slow to load, sometimes the services would crash, and the whole driver installation still seems messier with a Radeon. The skinning of the interface lacks the only option I’d want (OS integration) and I’m loathe to say that pictures of Ruby and adverts in my drivers page are welcome.

    My particular sample group of graphics cards also favours nVidia in terms of reliability, but perhaps I’ve just been lucky with nVidia and unlucky with ATi. I’d like to say that my sample size is bigger than most as I get through maybe 100 graphics cards a year and have experience with about 500 machines which get replaced at a rate of about a dozen per month. I’ve had more application and game issues with ATi/AMD as a whole but I know AMD have improved their game a lot in the past 12 months.

    I’m sure it happens with nVidia cards too, but some ATi board partners (eg PoweColor and Sapphire) have caused me trouble in the past with custom BIOSes. Sure, getting the right drivers to work isn’t rocket science but it’s the little things like knowing that Windows can identify your board correctly that make a difference, all other things being equal. I’m guessing that nVidia was just historically stricter with it’s partners which is why I haven’t come across it on nVidia boards to date.

      • sammorris
      • 11 years ago

      Both manufacturers have their own good and bad companies in my experience. I tend not to see good results witth Powercolor, Connect3D and Gecube with ATI (and that also includes my own experiences for powercolor), and the same seems true of Leadtek and Point of View for nvidia. However, ultimately most cards are built to the reference design by a middle-man, typically Foxconn, so physically the problems aren’t usually manufacturer-specific. Software can be a problem, but I’ve never encountered it.
      It’s logical for anyone to avoid a company after multiple bad experiences, and that is probably one of the biggest causes of fanboyism. Brushing that aside, however, ATI up until this generation have fallen behind both in performance and driver quality, unquestionably. The earlier HD3870 drivers (8.1 to 8.3) were appalling with powerplay not working properly (stuck in 2D mode!), microstutter even with only a single card, and no 3dfx glide or working DirectX8 support so older games didn’t work properly, if at all. Pleasingly, all those problems have been rectified as of the 4800 series however, and I’d recommend it every bit as much as its nvidia counterpart.

      • swaaye
      • 11 years ago

      NV’s CP is superior for one reason only IMO and that is the wonderful automated application 3D setting profiles. They must have patented that because ATI probably would’ve copied it otherwise. Those profiles make setting up specific game settings so convenient and easy.

      Otherwise, I don’t care for either more than the other. They both make awesome hardware. And I think both companies’ drivers are of the same level of quality.

    • Damage
    • 11 years ago

    Ok, folks. One last shot here for several of you who keep posting incessantly, being rude to one another, and flame-spamming the discussion.

    l33t-g4m3r, Meadows, pmonti80, and sammorris, we’ve heard from you enough to know where you stand. You need to end your participation here so other voices can get a word in edgewise. If you do not, you will be banned.

    Others who are rude or who start going crazy with posts will be banned, as well.

    Also, I am going to go nuke some posts where people have violated our rule against personal attacks on one another. If your post has gone missing, that’s why.

      • sammorris
      • 11 years ago

      My apologies, I didn’t think I’d posted anything out of order, but it is a madhouse in here. I also seem to have attracted gl6350’s attention, at least, seeing as a private message turned up with nothing but two profanities in the subject… :S

      Anyway, I don’t think I need to say any more on this subject, so I’ll go hide now 😛

      • flip-mode
      • 11 years ago

      You need to post a disclaimer: “Warning, reading comments for this article may cause brain damage and loss of intelligence. Posting responses to these comments may be detrimental to humanity.”

    • Fighterpilot
    • 11 years ago

    Well, there’s plenty of flaming going on at Anand over this comparison test and its results…yikes!
    Personally I think TR tests are “good enough” to show if one card or another is clearly superior. I don’t think that was the case here.
    With regard to the use of overclocked cards: why not overclock the ATI card by a similar percentage as the test Nvidia card is above factory specs?
    If the OC version has, say, 10% higher engine and memory speeds than the standard specs, surely it wouldn’t be hard to overclock the ATI card by a similar amount?
    That would seem to take care of any advantage of an overclocked part against a stock one.
    Also, TR’s poll of the most common resolution used by its members came out at 1680×1050…the 4870 won all but one game test, including an absolute thrashing in GRID.
    Given that, I’d say the 4870 is the faster card.

      • Meadows
      • 11 years ago

      I’ll repeat this one last time before everyone gets bored of me:

      Both cards are shipped that way. Warranty is provided in each case.
      If you overclock the ATI card, you void that warranty, and you end up comparing practice to theory (not everyone would overclock theirs). Whether you like it or not, that Zotac card is stock-clocked, because it runs that given way without user tinkering or intervention required, and the company covers it. It’s only fitting that a stock card battles a stock card, then.

        • sammorris
        • 11 years ago

        The most common resolution might be 1680×1050, but the most common graphics card will NOT be this series. These are high-end cards more typically found in 1920×1200 systems. Comparing which card wins at lower resolutions is pointless, as both will be above the rate at which a monitor will refresh, typically 60Hz. The only exceptions to this are Crysis and STALKER: Clear Sky at max settings.

    • MadManOriginal
    • 11 years ago

    I initially had the same problem understanding results in the last video card review, for the 4670, where the 9600GT was more or less the same as the 9800GT. Now, I know the 8800GT/9800GT isn’t all that much faster, but the fact that they were tied in many benchmarks made me wonder…sure enough, it turns out the 9600GT was an overclocked version. It didn’t get too much attention because those weren’t the featured cards in that article, though.

    Also, going back through TR reviews, the original 9600GT review was on an oc’d Palit card but stock clocks were also included so including both is not something that hasn’t been done before.

    Damage, we know that the particular card you review may be beyond your control, but it’s clear you knew to do ‘stock’ clocks before, don’t go changing on us now! 🙂
    =====
    One more note about user oc’ing wrt ATi cards….they include overclocking in the drivers, for gosh sakes. Sure, it’s not ‘factory oc’d’ (the silliest sham on reference cards ever) but it’s right. there. in the drivers….come on people :p

      • pmonti80
      • 11 years ago

      I had the same feelings as you. It has been done before; I don’t understand what the big problem is.

    • The Dark One
    • 11 years ago

    For what it’s worth, Charlie at the Inq says nVidia is saving the 270 and 290 model names for the die-shrunk versions of the current chips.

      • ew
      • 11 years ago

      New product names for new products. That is madness! Can’t be true.

    • Meadows
    • 11 years ago

    Damage (or anyone), we have a situation. Both gl6350 and bx450 have started spamming after they’ve done their “private message” business on the forum side, their messages are similar on both parts of the site.

    Time to nuke.

      • MadManOriginal
      • 11 years ago

      Yeah, ad hominem attacks at least have to be contained within a post that contains other content as well :-/

      • Damage
      • 11 years ago

      Both have been banned. I’m tempted to ban some more people today, including the obvious/ridiculous fanboys. I suggest you all simmah down!

    • DrDillyBar
    • 11 years ago

    Penn and Teller FTW!
    (doh! misplaced post = response to #102)

    • MaxTheLimit
    • 11 years ago

    When performance results at anything but the highest of resolutions are so very close, it comes down to minor things when choosing a card in this range. I don’t like CrossFire or SLI, so my choice was between the 4870 (the 1GB version, because it’s pretty close to the same price and is just that little bit better) and the GTX 260 216. I ended up going with the 260 because I was able to find one cheaper, and shipped for free. I had trouble finding a 4870 I liked in stock for cheap; all the cheap ones (at the time) were out of stock. I’m betting most people would be happy with either option, really. No game out right now pushes these cards until you get to the maximum resolution they are capable of, so unless you play at the highest res of the card, you won’t even be able to see the difference. I like that there are options for fans of either company, so that both camps are happy. Guys like me who have no favorite company are happy to have a lot of options! Everybody can be happy!

      • Kurotetsu
      • 11 years ago

      I agree.

      The biggest difference I could see in the actual game benchmarks between the 4870 1GB and the 260 Reloaded was 7 milliseconds! I mean, what the hell? People are throwing fits over 7 milliseconds of difference? That basically means there’s no difference at all unless you’re one of them super elite douchebag gamers who’ve deluded themselves into believing they can perceive reality in milliseconds.

      Same performance. Same price. Flip a coin.

        • Meadows
        • 11 years ago

        You do perceive reality in milliseconds. Many speculations suggest the human eye takes in roughly 100 samples per second on average, which, if true, works out to 10 milliseconds per frame. I expect your ears to be even finer.
        There’s not much of a difference, but it grows at the highest-definition settings (2560×1600 with 4x antialiasing), which is just what the target audience is likely to be using this card for, if you think about it. And that difference is generally more than mere placebo, to nVidia’s advantage.
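The milliseconds-versus-fps confusion in this subthread comes down to a simple reciprocal; a quick sketch (the ~100-samples-per-second figure above is speculation, so treat all the numbers as illustrative only):

```python
def frame_time_ms(fps):
    """Per-frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

def frame_rate(frame_ms):
    """Frame rate for a given per-frame time in milliseconds."""
    return 1000.0 / frame_ms

# A fixed 7 ms gap in frame time matters more at high frame rates:
print(frame_time_ms(60))                  # ≈ 16.7 ms per frame
print(frame_rate(frame_time_ms(60) + 7))  # ≈ 42.3 fps
print(frame_rate(frame_time_ms(30) + 7))  # ≈ 24.8 fps
```

In other words, whether a given millisecond difference is perceptible depends on the baseline frame time it is added to, which is why quoting it as an fps delta can be misleading.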

          • Krogoth
          • 11 years ago

          The bottleneck of your entire optical system is your visual cortex. FYI, it takes up a large portion of your brain.

          I am not going to get into the fact that most modern games are hard-coded to be capped at 60-85FPS. As long as any of these GPUs can deliver that range without any noticeable dips or hiccups, you are not going to notice the difference. IIRC, DX10-era GPUs are pretty consistent in terms of IQ.

          The choice between them boils down to other features and vendor rebates.

          • MaxTheLimit
          • 11 years ago

          I dunno. Those with the deep pockets for a huge display with resolution ranging up to the 2560×1600 level wouldn’t blink too much at stepping up to a 280. If you are spending that much cash on a display, I’d say your GPU price level could stretch a bit higher than the $300 mark if you are a gamer. I’d say $400-500 on a GPU is common for the elite crowd that owns displays that do that sort of resolution.

          I mean that’s AT LEAST a grand into display right?

          • Kurotetsu
          • 11 years ago

          Crap, crap, crap.

          Umm, I’m not sure what the blue hell was wrong with me when I typed that post, but ‘milliseconds’ should’ve been ‘frames per second’. I must have wires crossed in my brain or something.

          Luckily, it looked like nobody noticed. >_>

            • shaq_mobile
            • 11 years ago

            i noticed, but i wasn’t going to say anything since no one else seemed to notice. i think we all kind of assumed you meant to say frames per second and had somehow translated that into a matter of timing.

            btw i <3 all graphics cards, for they are all beautiful in gods eyes.

    • PRIME1
    • 11 years ago

    You can get the BFG GTX260 for even less than $300 (AR)
    §[<http://www.newegg.com/Product/Product.aspx?Item=N82E16814143148<]§
    Clocked even higher than the Zotac (oh noes!).

      • Usacomp2k3
      • 11 years ago

      No you can’t. You can get a GTX 260 though.

        • PRIME1
        • 11 years ago

        Thanks Mom I will fix the spelling 😉

    • MrJP
    • 11 years ago

    What a hilarious discussion. These cards are more or less equivalent in performance. How can some people get so heated up when you can pretty much guarantee that if you blind-tested them with these two cards (or any under/over-clocked variant for that matter) in their own systems at the resolutions they play at, they wouldn’t be able to tell the difference? Just choose the one you like best or is cheapest where you live, and save the angst for something important.

    In any case, value-wise 4850 Crossfire pwns them both, anyway 😉

      • MadManOriginal
      • 11 years ago

      Because “equivalent in performance and price” isn’t an arguable interweb discussion that anyone can ‘win.’ 🙁

      • ew
      • 11 years ago

      Your voice of reason is not welcome here Mr.!

      It would be interesting to do a blind test, though, to see if someone could tell the difference, or if one group using one card had a better experience than another group using the other card.

    • PerfectCr
    • 11 years ago

    l[<"The handsomely stickered card you see above is the GeForce GTX 260 AMP²! Edition from Zotac. (I would like to thank Zotac for making me type AMP²!, since I could use the exercise.)"<]l lol, I just spit water onto my monitor, literally.

      • DrDillyBar
      • 11 years ago

      I LOL’d really really loud myself…

    • MadManOriginal
    • 11 years ago

    Note to Scott: add ‘testing a factory oc’d card with no standard speed card in the same test’ to the list of topics that are good for lots of comments and page views 🙂

      • PRIME1
      • 11 years ago

      If anything TR needs to post more controversial articles, really get the comments system flying.

        • MadManOriginal
        • 11 years ago

        I suppose it depends upon one’s definition of controversial. The short list off the top of my head includes anything related to Apple or Microsoft.

    • Krogoth
    • 11 years ago

    1GiB of video memory = pointless for the most part (GRID seems to require more than 512MiB of VRAM at higher resolutions+AA).

    The GPU will be obsolete long before games “required” that amount for optimal performance.
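    To put the VRAM point in rough numbers: even a toy estimate of just the color and depth render targets shows why high resolution plus AA eats into 512MiB (this ignores textures, back buffers, resolve buffers, and driver overhead, so the figures are illustrative, not measured from GRID):

```python
def render_target_mib(width, height, msaa_samples, bytes_per_pixel=4):
    """Toy estimate: color + depth/stencil render-target footprint in MiB."""
    color = width * height * bytes_per_pixel * msaa_samples
    depth = width * height * 4 * msaa_samples  # 24-bit depth + 8-bit stencil
    return (color + depth) / 2**20

# 2560x1600 with 4x MSAA: roughly 125 MiB for render targets alone,
# before any textures or geometry are resident in VRAM.
print(round(render_target_mib(2560, 1600, 4)))
```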

    These days it is silly to spend more than $299 on a video card, when the 4850, 9800GTX+, 4670, 9500 and the like all yield sufficient performance. That is, unless you are feeding a 30″ monitor; even in that case, the $249-299 category is quite sufficient.

    I find flame wars over factory-overclocked products to be very silly. Here is how I envision the people on both ends:
    §[<http://img.photobucket.com/albums/v248/Krogoth255/RAGED.jpg<]§
    FYI, the normal 260 Mk. II performs like 5-10% better than the 260. It still manages to tailgate the 280 in most benches and outrun the 4870.

    • ew
    • 11 years ago

    I realize there is a lot of complaining going on here but I’ve got to say that it would have been nice to see 4850×2 in there which also goes for about $300. I think that combo is probably the clear winner at that price. (But with the obvious disadvantage of requiring crossfire.)

      • Damage
      • 11 years ago

      I agree. Interesting card. We may have to snag one to test, if we have time.

      • ew
      • 11 years ago

      Oh, and 2x Nvidia’s $150 card too to be fair.

      • PRIME1
      • 11 years ago

      Well, a 9800GX2 has been out for a while, and that costs less than $300.

        • sammorris
        • 11 years ago

        I’d take a GTX260 maxcore or 4870 over a 9800GX2 any day of the week.

    • l33t-g4m3r
    • 11 years ago

    Since nobody is being honest about this snafu, let me explain it exactly how I see it.

    step1: sell limited supply overclocked card at newegg for 299$
    never mind 299$ is the price point of the regular cards, and the other OC’d cards are much higher, or that there is a 75/100 mhz difference.

    step2: give OC’d card to site for review. enjoy higher numbers.

    step3: run out of OC’d cards supply 1 week after review, or raise price through the roof.

    step4: everybody buys normal 260’s at 299$ thinking they perform like TR’s review.

    step5: big paycheck for everyone involved.

    • S_D
    • 11 years ago

    Ok, fair enough, the argument that the two products tested here are at the same price point is a very valid reason for their comparison, and I accept that wholeheartedly.

    However, to say that Diamond wouldn’t send you a faster sample whereas nVidia did is playing into PR departments’ hands a little too much. I’d wager that I could find overclocked versions of /[

      • pmonti80
      • 11 years ago

      That is why I asked for underclocked (reference card) benchmarks as well as factory overclocked benchmarks. This way TechReport avoids this kind of suspicion of fanboyism and such.

        • PRIME1
        • 11 years ago

        Why should they cripple a card just to make a few fanboys happy?

        Slower GTX260s cost less and they still compete as well with the 4870. The HardOCP review confirmed this.

        In fact they even recommend the older GTX260 over the 4870.

        End of story.

          • pmonti80
          • 11 years ago

          How much extra work would it have taken TechReport to also test the card at its reference clocks (in addition to the Zotac factory clocks), instead of testing, say, the 9600 GSO or 9500 GT?

          With that you’d have a complete comparison to offer the readers, and you’d avoid suspicions. I mean, where is the problem?

            • Damage
            • 11 years ago

            You’re free to post your opinion, and you’ve more than gotten your say here. But please stop posting every few minutes. You’re just trolling and spamming now, degrading the quality of the discussion. We’ll have to ban you if you can’t control your over-posting impulse.

    • PRIME1
    • 11 years ago

    I think the point most people are missing is…..

    r[

    • pmonti80
    • 11 years ago

    Edit: my mistake. Delete this post

    • PRIME1
    • 11 years ago

    The “reference” 4870 only has 512MB of RAM, so TR should take some pliers and remove the “over memory” so that these tests are fair…..

      • sammorris
      • 11 years ago

      What? Both cards are in the review… :S

      • l33t-g4m3r
      • 11 years ago

      so funny I forgot to laugh.

      If you are going to use OC’d cards, then TR should have used the POWERCOLOR AX4870 vs the OC’d 260.

      §[<http://www.newegg.com/Product/Product.aspx?Item=N82E16814131120<]§ @299$

        • Damage
        • 11 years ago

        Ok, several points, and then I’ll be done with this. Please see my earlier post about why we test actual consumer products where possible and how the “overclocked” tag is a misnomer. Now I’ll address your complaints.

        First, AMD saw fit to send us this 4870 1GB as a review sample. It’s somewhat surprising that AMD’s fans are objecting to its use.

        Second, we are of course happy to test faster 4870 1GB cards when they are available to us. We did actually invite Diamond to send us a card for testing in this review, but they declined. We can revisit that if a card becomes available. However…

        Finally and most notably, an additional 50MHz for the 4870 1GB wouldn’t have made any real difference in the overall outcome of the review. If you read my conclusions and the body of the review carefully and look at the results, I think you’ll understand that. The rough parity between AMD and Nvidia I described wouldn’t be upset by a few percentage points of increase for the 4870. If anything, it might bring the 4870 up to speed with the GTX 260 Reloaded at higher resolutions in CoD4, Episode Two, Warhead, and Quake Wars. Beyond performance, the Zotac GTX 260 draws less power and comes with a sweet, sweet game in GRID, while the 4870 1GB cards (including the one you linked) don’t have games bundled. If you’re really upset that our verdict is “these cards are roughly equal,” consider those realities. I think we’ve been tremendously fair to AMD.

      • PRIME1
      • 11 years ago

      It is funny how 2 people can totally miss the point. Well more sad than funny really.

        • l33t-g4m3r
        • 11 years ago

        I just updated my previous reply, and no, nobody is missing any point.

        You’re just an overzealous nvidia fan, trying to find any excuse to justify skewed benchmarks.

          • PRIME1
          • 11 years ago

          You are just an overzealous ATI fan upset that your card lost.

            • l33t-g4m3r
            • 11 years ago

            wrong. I just don’t like how every other website used default clocked cards.
            This is cheating in my book.
            the 4870 didn’t lose either. look at the grid score.

            PS. if TR hadn’t used the OC’d nvidia card, their score would have been MUCH lower.
            Look at the other reviews.
            You can’t say that using a pre-overclocked card didn’t make a difference.

            • Meadows
            • 11 years ago

            1) TR used a default-clocked card. They didn’t overclock the Zotac. It’s sold this way by default. Go back to school if that’s a tall order to grasp.

            2) I personally don’t play GRID, so I couldn’t care less – for me, ATI lost this one.

            • l33t-g4m3r
            • 11 years ago

            wrong. factory overclocked is still overclocked.
            It is not the standard clock speed of the other 260/216 cards.

            • Meadows
            • 11 years ago

            That’s not real overclocking, because they’re still the manufacturers.
            Overclocking is when YOU, the CUSTOMER, one-up the manufacturer to try your luck and break your warranty.

            Manufacturer-defined speeds, even if well above what nVidia declared for a chip’s debut, are still a sort of factory-default.

            • l33t-g4m3r
            • 11 years ago

            That card is the only card with those clock speeds.
            In fact, the EVGA card with similar speeds costs $500.
            The supply of these cards is probably very limited too.

            The normal 260’s are 575/1998 MHz.
            That’s a 75MHz difference in clock speed, and a 100MHz difference in memory speed.

            durr hurr. that’s not cheating. hurr durr :rolleyes:

            IMO, Penn and Teller need to do an episode on this review.
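For perspective, the clock deltas quoted above work out to fairly modest percentages; a quick sketch (the clock figures are taken from the post, not verified against any spec sheet):

```python
# Reference GTX 260 clocks (MHz) and the deltas quoted in the post.
reference = {"core": 575, "memory": 1998}
delta = {"core": 75, "memory": 100}

for part, base in reference.items():
    pct = 100.0 * delta[part] / base
    # Works out to roughly +13.0% core and +5.0% memory.
    print(f"{part}: +{delta[part]} MHz on {base} MHz = +{pct:.1f}%")
```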

            • Meadows
            • 11 years ago

            You’re starting to bore me. You still can’t provide a valid rebuttal, and you’re still accusing Zotac of “cheating” when they’re following an accepted business model.

            • l33t-g4m3r
            • 11 years ago

            Your response is nothing more than an over the top troll.
            I’m not accusing _[

            • forthefirsttime
            • 11 years ago

            common sense says that, since most readers of this site can buy those two cards for the same price, it’s a perfectly legitimate comparison

            • l33t-g4m3r
            • 11 years ago

            According to that logic, the POWERCOLOR AX4870 should have been used instead of the standard 4870.
            Here’s a shocker: it wasn’t.

            The truth is that this review had nothing to do with being a legitimate comparison.
            It is biased, pure and simple.
            There isn’t any other way to possibly see it.

            If the card had been underclocked to reference speed, and BOTH results were included, I would say otherwise.
            That’s not how it happened, though.

            • swaaye
            • 11 years ago

            I dunno. The way I see it is that you are taking the differences between two closely performing GPUs way, way too seriously. They are in the same class and change places occasionally. A relatively tiny difference in clocks from stock isn’t going to make a night and day difference.

            These days the market gets saturated with cards that mostly differ from the ATI/NV “official” clocks anyway. Maybe not right off the bat, but it happens soon enough as all the little card companies try to differentiate themselves. And who’s to say the default clocks even matter if much of the stock you can actually buy is operating higher anyway?

            A review that uses cards at the “default” clocks would probably be optimal, but calling it all biased worthlessness otherwise is really strong. It just doesn’t matter that much in the end.

            • Meadows
            • 11 years ago

            Why underclock a stock-clocked card?

            • MadManOriginal
            • 11 years ago

            Why compare a factory-overclocked card (YEA, THAT’S RIGHT, THE COMPANIES MAKING THEM CALL THEM OVERCLOCKED, FORGOT THAT DID YA? YAY FOR CAPS) to a base-spec clocked card?

            • Meadows
            • 11 years ago

            Because they’re priced the same, and thus they’re perfect, unmitigated competition.

            • MadManOriginal
            • 11 years ago

            Yea, you’ve said that before. Your opinion (note, *opinion*) isn’t any more right than anyone else’s although you clearly think it is, we’ll let that one slide for now though. But you also neatly avoided answering why they shouldn’t be considered overclocked when the companies themselves call them overclocked. Please answer that one 🙂

            • Meadows
            • 11 years ago

            The companies don’t use the word in its actual meaning, but it’s useful for marketing purposes.
            Note how TR puts it in quotes every single time “factory-overclocked” appears anywhere – it’s not really right. Even assuming that it is, Scott tested cards at stock specified speeds, so you remain with no valid argument.

            • Krogoth
            • 11 years ago

            Overclocking = operating components beyond their engineered specifications.

            Factory overclocking is still overclocking. The difference from standard overclocking is that it is done on the vendor’s assembly line, and the vendor is *willing* to cover said product under its warranty.

            • eitje
            • 11 years ago

            is the card tested available in stores for me to purchase – as tested?

            • PRIME1
            • 11 years ago

            The Zotac card has not been overclocked by TR. It’s the correct speed right out of the box.

            • pmonti80
            • 11 years ago

            I think everyone knows that (anyone having read the article that is).

          • Meadows
          • 11 years ago

          The benchmarks aren’t skewed.
          While the GeForce doesn’t follow nVidia’s original guidelines, which is acceptable, these products are still FACTORY DEFAULTS.
          Zotac sells this card AT THE TESTED SPEED with warranty.
          AMD’s card is also sold AT THE TESTED SPEED.

          Both cards COST THE SAME. They are a FAIR MATCH.
          Please tell me if I should emphasize it even further for you to finally understand.

            • pmonti80
            • 11 years ago

            It’s not about understanding or not. There are reasons for disagreeing even if I’m not a fanboy. I stated them in my answer to Damage and #90 gave his reasons too.

            • MaxTheLimit
            • 11 years ago

            I’m not going to touch this debate too much, because it seems to be one that could go on for a bit. I would have been interested, though, to see what sort of results a moderately overclocked 4870 1GB would have yielded. It seems inevitable that they will be along shortly, and seeing what sort of results we might expect when they start showing up would be kind of interesting. I’m sure that both AMD and nvidia will continue to wage war over prices in this range. It seems to make sense that soon we’ll see some company offer an at least matching price on an overclocked 4870 1GB compared to the GTX 260 216 revision. Results like that could have been simulated manually, and I wouldn’t have minded seeing them. Maybe in a future review, when the cards actually start showing up, I guess.

      • ish718
      • 11 years ago

      *Oops clicked reply by accident***

      Meh, I’m good with a 512MB card for now; most gamers don’t even play at resolutions high enough to utilize 1GB of video memory anyway.

      By the time 1GB becomes standard, there will be much more powerful GPUs out, like the HD5870 or GTX 380 or w/e they’ll be called.

      But good review nevertheless…

    • robspierre6
    • 11 years ago

    It’s unfair to compare a 17%-overclocked card to a stock-clocked 4870. You could’ve used a DIAMOND XOC 4870 or a PowerColor PCX one.

      • grantmeaname
      • 11 years ago

      they’re the same price. How is it even vaguely unfair?

      besides, one of the inherent problems with the HD 4870 is that it’s pushing the frequency limits of its architecture. The GT200 has a ton more overclocking headroom, and using an overclocked Ampsquaredexclamationpoint card is a pretty fair way to represent that.

        • MadManOriginal
        • 11 years ago

        I don’t know what your evidence is for the 4870 being on the edge of its clock speed, but I do know that plenty of people on forums have gotten them to do a pretty easy 10% oc.

        • robspierre6
        • 11 years ago

        Same price?? The PowerColor 4870 1GB PCX costs $299, cheaper than the stock-clocked 216-core GTX 260 at $314.

        check the price @ newegg

        My PowerColor 4870 512MB is clocked at 882/1020. So, if you don’t own a 4870 card, don’t make any biased statements, please.

          • PRIME1
          • 11 years ago

          Stop being such a tool.

          §[<http://www.newegg.com/Product/Product.aspx?Item=N82E16814143155<]§
          BFG GTX260 216 OC - $274 AR. Cheaper & faster than the 4870.

            • MadManOriginal
            • 11 years ago

            I know! Let’s keep this thread going forever with price comparisons, because we all know prices stay rock steady over time.

            • Rza79
            • 11 years ago

            Rebates ……… pfffff
            Basically this means you have to pay $299.99, and maybe in a couple of months you’ll get $25 back. On top of that, they’re temporary.
            In my opinion, reviews shouldn’t look at rebates, especially since they’re only for the USA.

            §[<http://www.newegg.com/Product/Product.aspx?Item=N82E16814131120<]§
            §[<http://www.newegg.com/Product/Product.aspx?Item=N82E16814500068<]§
            It's only $10 difference (3%) ...

            • darksynth
            • 11 years ago

            You call him a tool, then proceed to post a MIA? (lol, as if you will ever see that money)

            Tool indeed.

      • PRIME1
      • 11 years ago

      It’s unfair because ATI loses……

      • forthefirsttime
      • 11 years ago

      ask yourself something.. does the average user care or even know if the product they buy is overclocked?

      no. they don’t. the only thing that matters is price, and these two cards are selling for the same price for the majority of people who read this site. hell, this review isn’t even recommending one card over the other.. they’re just saying they’re comparable.. and overclocked or not that’s still true.

      overclock both cards and they’re still both very similar performers.. what’s the big deal? 1-5 fps isn’t going to change your opinion.. the package included with the card and warranty matter much more at that stage anyway.

        • robspierre6
        • 11 years ago

        The AMP version from Zotac rebates at $299, and the PowerColor 4870 1GB PCX rebates at the exact same price. Check Newegg.

        Secondly, if that’s the case, then let’s compare a 4870 clocked at 850/100, mine is clocked at “882/1020”, to a reference 280gtx and see which one wins.

          • Usacomp2k3
          • 11 years ago

          There is no such thing as a 280gtx. There is a GTX 280 though.

          • forthefirsttime
          • 11 years ago

          your reply doesn’t even address the points I brought up..

          a lot of people don’t want to change their clock speeds after they buy their card, and for those people, this review is relevant.

    • xtalentx
    • 11 years ago

    I have been trying to resist buying a new video card for a LONG time now. I run a 1900×1200 monitor and my 8800gt just doesn’t cut it for giving me all the toys at that resolution.

    Every time I read another comparison article, I am one step closer to buying a 1GB 4870…

    Must fight urge… Must not do it…

      • sammorris
      • 11 years ago

      dooooo itttttt. 🙂
      At 1920×1200 you’re only likely to need the 512MB version.

      • pmonti80
      • 11 years ago

      Be a man and resist the urge, you won’t die from not buying the card. 😉

    • pani_alex
    • 11 years ago

    you put the 9600 gso as 512MB, isn’t it only 384?

      • grantmeaname
      • 11 years ago

      there are two. look at the news.
      §[<https://techreport.com/discussions.x/15660<]§

      • Damage
      • 11 years ago

      You’re correct. I tested the 384MB version. I’ve corrected the listing in the review. Thanks.

    • sammorris
    • 11 years ago

    Where would a graphics thread be without its fanboys?
    I’ll heartily agree that the 4870s get pretty damn hot, but people seem to have forgotten that they actually run cooler than the old X1800s and X1900s.
    As for ‘lack of features’, I’m curious. Given that the HD4870s actually do more than the GTX200s outside gaming – better HD video acceleration, hardware acceleration of CAD software, and so on, that just sounds like clutching at straws.
    Usacomp: I did actually, and it’s bizarre; the price difference is £120 vs £175 over here, which is much more justifiable.

    • PRIME1
    • 11 years ago

    I have no problem with reviewers using higher clocked cards as long as they are off the shelf parts.

    That would be like complaining about non stock coolers or any other non stock extra.

    As long as it’s that way out of the box, that’s how it should be benched.

      • Convert
      • 11 years ago

      Yeah I think it’s fine, but the other card should be OCed out of the box too when possible.

      This comparison was really apples to limes spec-wise: one with more memory, the other with more clock speed. Interesting nonetheless, but still not something that really tells the full story here. Since the gains from an extra 1GB were almost nonexistent, I would be more interested in seeing what an OCed card could do. Although the above question still needed to be answered, and TR did a good job of doing that.

      Based on price though those two cards are kind of what you are looking at with choices as apparently there are only a couple of OCed AMD cards out there according to newegg.

        • pmonti80
        • 11 years ago

        The problem is that this comparison (if done based on price) is only useful this week and only in the USA. Prices vary wildly with time and location. The correct thing to do is use the non-overclocked model or underclock this one and post the results too. And finally, make judgements according to both results and not on today’s price.

          • PRIME1
          • 11 years ago

          These are both cards that can be bought in store with these specs for around the same price.

          Which is exactly how cards should be benchmarked.

    • rpsgc
    • 11 years ago

    I’m sorry but I just don’t buy into this review… no performance difference between 512MB and 1GB HD4870? I call that bullshit.

    Anandtech’s and HardOCP’s reviews clearly show otherwise, and these are not some shady unknown sites! So something is obviously wrong here.

    Anandtech: “So, what’s the bottom line? This is currently the card to get.”

    HardOCP: “Given the street prices though, there is no room for the 512MB Radeon HD 4870 any more in the enthusiast’s box as the 1GB 4870 has pushed it aside.”

    Bit-tech: “When you compare it to the Radeon HD 4870 512MB, which retails for around £170 these days, you still appear to be getting a good buy as well. It costs around eight percent more and in some circumstances you’re getting performance increases much bigger than that”

      • pmonti80
      • 11 years ago

      As I said in another message this is the reason why I’m confused and am asking for a non-overclocked benchmarking of this card.

      • Damage
      • 11 years ago

      Please remove the profanity from your post or it will be modded down soon. Thanks.

        • DrDillyBar
        • 11 years ago

        May I suggest “hogwash”

    • Usacomp2k3
    • 11 years ago

    Did anyone else notice the huge gap in price and small gap in performance between the $300 4870 and the $150 4850?

    • PRIME1
    • 11 years ago

    No surprise really as the standard 260 was beating the standard 4870.

    Given the lack of features on the 4870 (not to mention the heat issues) it was hard to recommend anyways.

      • Meadows
      • 11 years ago

      I admire your fanaticism and I favour nVidia myself, but it’s hard to bash the HD 4870 when you need a retouched, “factory-overclocked” product to compete /[

      • robspierre6
      • 11 years ago

      The whole test is inaccurate. The 260gtx does lose at 22″ res. in all the tested games except for Crysis. But again, the gtx card is overclocked; it should’ve been compared to an overclocked 4870.

      And you are a silly fanboi.

        • Meadows
        • 11 years ago

        Maybe he is a silly fanboy, but you’re a retarded fanboy and also an annoying troll at that.

          • robspierre6
          • 11 years ago

          Listen meadows, the 4870 does better than the new 216-core 260gtx. Check the review @ anandtech.
          Secondly, comparing an overclocked card to a stock-clocked one is unfair and inaccurate.
          Thirdly, it’s him who’s trolling. It’s well known that Nvidia wouldn’t have released the new card if the original 260gtx was competitive against the 4870. From the 4870 review @ techreport, I should remind you that at 22″ resolution the 4870 beat the 280gtx in 5 out of 6 games.

          Ohhh and lastly, your comments mark you as a fanatical, biased nvidia fanboi.

            • sammorris
            • 11 years ago

            I’m an ATI owner, and I can say without question the 216-core GTX260 is going to be the faster product, just from seeing all the benchmarks from reliable sites. Anandtech have made too many mistakes in the past to be truly believable for all tests (like, for instance, a Freezer 7 Pro cooling better than an Ultra-120).
            Fanboy or otherwise, the 260s are FASTER cards than the 4870s, but whether they’re specifically BETTER or not ultimately depends on the price point. In my opinion they all score about the same, the new 260 is more expensive but it’s practically a 280, the older one is only slightly faster than the 4870, and only slightly more expensive. The bum deal is the 1GB 4870 IF you use a lower res. Use 2560×1600 and either card will do fine, since the 260s have ample memory anyway.

            • Meadows
            • 11 years ago

            Both cards were stock-clocked. Scott didn’t overclock either one of them.

            • MadManOriginal
            • 11 years ago

            Arguing semantics must be one of your top 3 hobbies 🙂

    • thermistor
    • 11 years ago

    #27. Agreed. Q = m Cp (T2-T1).

    The analogy is putting a motorcycle radiator on a big block chevy. As long as you can raise the maximum coolant temperature to 600F (just a guess) instead of the regular 200F, you’ll get identical heat dissipation as the standard radiator.

    AMD just wants us to burn our fingers on the radiator cap. 🙂

    Not that I’m complaining…my next card will likely be a 4850 with a better than stock cooling solution.
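
    thermistor’s radiator analogy can be checked against the equation he quotes, Q = m·Cp·(T2−T1): shrink the coolant flow (a smaller radiator) and the same heat still gets rejected, provided you let the temperature delta grow. A minimal sketch, with purely illustrative numbers:

    ```python
    # Q = m_dot * Cp * (T2 - T1): heat carried away by a coolant stream.
    # m_dot: mass flow (kg/s), Cp: specific heat (J/kg.K), T2 - T1: temperature rise (K).
    def heat_rejected_w(m_dot_kg_s, cp_j_per_kg_k, t_out_c, t_in_c):
        return m_dot_kg_s * cp_j_per_kg_k * (t_out_c - t_in_c)

    # A small flow with a big temperature rise moves as much heat
    # as a large flow with a small rise (numbers are illustrative):
    big_radiator = heat_rejected_w(0.10, 4186, 45, 25)    # 20 K rise
    small_radiator = heat_rejected_w(0.02, 4186, 125, 25) # 100 K rise
    print(big_radiator, small_radiator)  # both about 8372 W
    ```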

    • sammorris
    • 11 years ago

    To give some idea, yes it’s rather stupid, but I played through ‘Call me Ishmael’ on 2560×1600 all Enthusiast on my 4870X2 – no AA obviously, that halves the frame rate at this resolution. Generally I was getting around 23-26fps, if an explosion went off right in my face it’d dip to 18-19 or so, and on one or two random occasions the fps dropped and the GPU left dual mode briefly, I think that’s probably my HDD not catching up though, as it’s only when something is brought into the draw distance that this can happen, and my HDD only has 0.8% free space… lmao.
    It’s worth pointing out that in XP where crossfire doesn’t work, I only got 16-17fps using the same setting. AA performance, for fun only, was 8fps in XP, 11 in Vista. Bear in mind the performance hit taken from DirectX10.

    Thresher: two 4850s can handle almost anything at 2560 res, and if you can find them cheap, you will get performance well above that of a GTX280 for little more than the cost of a GTX260.

    • Thresher
    • 11 years ago

    Any way to get these charts combined with the earlier 4870X2 review? I am seriously considering going with two 4850s and it would be nice to have it all in one chart so I don’t have to switch back and forth.

    • sammorris
    • 11 years ago

    The GTX260 is impressive, and with newer drivers it’s once again notably faster than the ATI counterpart, and up until recently would be a good buy, but the recent increase in nvidia’s prices again is unfortunate, and renders the 4870 a far better value purchase.
    In my experience, ‘just in case’ crossfire never happens, as by the time the owner considers a second card, for similar money they can just sell the first card and get another single card that fares better. Right now, I’d say if you use up to 1920×1200, an HD4870 or GTX260 will do just fine. If you use a 30″ though, I’d strongly recommend a 4870X2 right off the bat.
    Also, what’s with techreport not getting CF working for Warhead? I managed it, but it’s the first game where CF is completely disabled if you use windows XP. You need Vista to use crossfire on Warhead, but once you do, you get a significant performance (and fan noise…) boost.

    • sdack
    • 11 years ago

    I wonder how the GPU temperatures are being measured. The consistent pattern of ATI GPUs running hotter than Nvidia GPUs might just be a difference in how (or where) temperature is measured by each maker.

    This difference gives ATI quite a disadvantage and Nvidia an advantage. When in doubt, I personally would always choose a card with a lower temperature, because it reflects its power consumption and MTBF (mean time between failures), too.

      • Meadows
      • 11 years ago

      ATI cards have slower fans by default, because their fan configs are too loud and they want quiet and pretty cards instead of cool ones. NVidia doesn’t have this issue.

      AMD also claims that the high temperatures of Radeons shouldn’t cause issues, but you can read about quite a few failures even here on TR that point to the contrary. Generally the RAM on those cards was the culprit – indeed, I believe AMD when they say the chips are fine, but RAM is different, it’s much less tolerant.

      • S_D
      • 11 years ago

      Sorry, I don’t agree with you that a lower temperature results in lower power consumption. Whilst it’s true that a GPU’s thermal output can indicate its power usage, it’s the efficiency of the cooling solution that can distort this, i.e. it’s possible for a higher power-consuming GPU to run cooler just because it’s got a better cooler on board…

        • Saribro
        • 11 years ago

        It is, however, true that for a specific device under a constant load, higher temperature will lead to higher consumption. (That’s also why datasheets mention power consumption at a specific temperature.)

          • sammorris
          • 11 years ago

          Techreport have listed the 4870X2’s load temperature as being only 83 Celsius. That is only doable in single-GPU mode or with a higher forced fan speed; normal load for me is 86-88. However, again, my X1900XT is over two years old and has run at 87-92C load all its life, for a year with the stock cooler and 18 months with a fanless HR-03. No sign of it going southside yet. It all depends on the stability of the architecture. If a 3870 got to 88C it would artifact and crash. A 4870 doesn’t seem to care at all.

          • ChronoReverse
          • 11 years ago

          It works the other way around. Power consumption is also the heat that must be dissipated.

          The temperature is the equilibrium point where the cooling matches the heat output (power consumption). Therefore increasing the power consumed increases the heat that must be dissipated which leads to higher temperatures.

          Note that the equilibrium point does depend on the cooling solution as well which is why the same heat output (and thus power consumption) can have different temperatures. Even a 1W chip with insulation instead of cooling would easily bust past 100 C
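
          ChronoReverse’s equilibrium point can be sketched with a simple thermal-resistance model, T_die = T_ambient + P × R_theta, where R_theta (°C/W) characterizes the cooler. The wattage and resistance figures below are illustrative assumptions, not measured values for any card:

          ```python
          # Equilibrium temperature: the point where heat removed matches heat produced.
          # r_theta_c_per_w is the cooler's thermal resistance in degrees C per watt.
          def equilibrium_temp(ambient_c, power_w, r_theta_c_per_w):
              return ambient_c + power_w * r_theta_c_per_w

          # The same 130 W load through two different (hypothetical) coolers:
          print(equilibrium_temp(25, 130, 0.35))  # stronger cooler -> about 70.5 C
          print(equilibrium_temp(25, 130, 0.50))  # weaker cooler  -> about 90.0 C
          ```

          Same power draw, different equilibrium temperatures, which is why a hot-running card isn’t automatically a power-hungry one.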

        • sdack
        • 11 years ago

        That is not what I wrote. Perhaps read again.

      • grantmeaname
      • 11 years ago

      Actually, lately the ATi GPUs have consistently run hotter than nVidia ones. Nobody denies that (well, except you). It’s not really a difference in power consumption between them, though. I think ATi’s aiming for the HTPC market, going for slower (quieter) fans and hotter chips, as opposed to faster (louder) fans and cooler chips, which is what nVidia tuned for.

      As for MTBF, there’s no way to compare the two from one manufacturer to the other solely based on GPU temperature. Within one architecture, though, you can.

        • sdack
        • 11 years ago

        You’re talking nonsense here. Firstly, I have not denied anything. Instead I am being sceptical, something you still need to recognize. And being sceptical, I question the information given. Can you see this now?

        Nothing you write has any relevance. Your explanations are useless as they only blur the facts. So when in doubt and no facts are at hand, one should always choose the cooler product. Do you understand this? Does it make sense to you?

        Now, if you want to tell me something that I also want to know, then perhaps explain how exactly both makers measure the temperature. I still believe this is an important bit of information that needs to be investigated. All card makers use more or less a reference design; a mistake or a significant difference here and you will find it in all products. So if you have any useful information then let us have it.

    • Fighterpilot
    • 11 years ago

    The Core 216 is obviously a damn good card, don’t think there’s any serious question about that… the important news though is it costs over $100 less than the slower GTX260 that debuted only a short while ago, and for that you have to thank the excellent performance and pricing of the HD4870.

      • S_D
      • 11 years ago

      Yup,

      Ironically nVidia fanboys have a lot to thank ATI for… 🙂

      • MadManOriginal
      • 11 years ago

      That’s a good point, the original GTX260 at $200 might be the real story here.

        • sammorris
        • 11 years ago

        HD4870 is £170, the GTX260 is £185, the GTX260+ is £235, the GTX280 is £300. Right now the only GTX260 with the big value sticker on it is the original GTX260. The maxcore is kind of justifiable, but the GTX280 just seems rendered completely pointless, especially given the proximity of the HD4870X2 to its price.

    • Ondrej Scerbej
    • 11 years ago

    “The Radeon HD 4870 1GB is a better buy than both the GTX 260 and core 216 variant.”

    http://www.anandtech.com/video/showdoc.aspx?i=3415&p=10

      • pmonti80
      • 11 years ago

      This is the reason why I’m confused and asked for a non-overclocked benchmarking of this card.

    • srg86
    • 11 years ago

    I’m starting to resent all this cheapskate business; I don’t need the power of these graphics cards, as I don’t play games. Even my current X800GTO (with DDR) meets my needs.

      • Saribro
      • 11 years ago

      Yeah, I was going to mention that too. Some may have considered it to be funny at first, but now it’s just getting offensive.
      It’s not just about (not) needing the performance either; “a mere 100 bucks more” means 6 or 7 weeks’ worth of fuel for me. $250 is not a trivial expense for a whole lot of people, and that includes a lot of enthusiasts.

    • bogbox
    • 11 years ago

    The best of these two is the 4870 1GB card: more memory, faster memory (3600 MHz is a lot), more features, DX 10.1 (one step closer to DX 11).

    The name, “260”, is good because it uses most of the same basic features as the old 260 (it’s just OCed on the shaders a bit), and most manufacturers love keeping the same name (what’s stopping them from selling the old 260?).

    Average customers (99%) like the same name (too many names are confusing) and don’t care too much. And those that care know the difference.

    Other sites said the extra memory makes more of a difference. Anyway, good review.

      • Meadows
      • 11 years ago

      99% of all statistics are made up on the spot.
      I bet you were made up on the spot too.

      I was beginning to miss your ATIfanboyness.

        • bogbox
        • 11 years ago

        99% of all statistics are made up on the spot.
        I bet you were made up on the spot too.

        “ATIfanboyness” isn’t funny, really not funny. Try harder, be more inventive pls. BTW, is “ATIfanboyness” even a word?

          • Meadows
          • 11 years ago

          A lot of things you say aren’t words either, and don’t get me started on your interpunction.

        • xtalentx
        • 11 years ago

        Actually 93.7% of all statistics are made up. Just to be accurate.

          • sammorris
          • 11 years ago

          What I do know is that graphics specifications are meaningless. 4GHz GDDR5, great! Oh no wait, the GTX280’s “ancient” GDDR3 still beats it.

            • bogbox
            • 11 years ago

            Not in grid :))

            • sammorris
            • 11 years ago

            very true, but GRiD is just one game… 🙂

    • Fighterpilot
    • 11 years ago

    Interesting review, thanks TR.
    Re the OC speeds….Here’s what Anand had to say about the stock clocked
    GTX260 216.
    “The Radeon HD 4870 1GB is a better buy than both the GTX 260 and core 216 variant.”
    http://www.anandtech.com/video/showdoc.aspx?i=3415&p=10

    • pmonti80
    • 11 years ago

    You should seriously stop comparing pre-overclocked cards from Nvidia (or ATI) to non-overclocked normal cards from ATI (or Nvidia).

    Edit: Upon reading this again I realize that I’ve been a bit rude. I’ve posted on #70 what I should have really posted here.

      • S_D
      • 11 years ago

      Agreed.

      This seems to have become de facto practice for the nVidia PR department with regard to only sending out pre-overclocked cards for comparison. TR really should maintain its integrity here and highlight this ambiguity, as it’s not a level playing field, particularly as ‘TREE’ highlights for us in the UK.

        • Thresher
        • 11 years ago

        At the very least, down clock it to stock speeds just to let us have a look at the regular performance as well.

      • marvelous
      • 11 years ago

      Yup, I agree. Either both of them overclocked or none at all.

      • HurgyMcGurgyGurg
      • 11 years ago

      For the sake of comparison I agree. If the review is about trying to figure out whether the GTX 260 Reloaded is better than the 1 GB Radeon HD 4870 it should be both stock clocked.

      However, for real-world comparison and value, this overclocked GTX 260 Reloaded costs just as much as the stock 1GB Radeon HD 4870, so it still makes sense as to why it was chosen.

        • pmonti80
        • 11 years ago

        The prices are not the same in the USA as in the rest of the world. But the answer is easy: just put the results at both speeds. If overclocked and non-overclocked results are easy to find in the same graphs, everyone can find the results that matter for him/her.
        There has been an ongoing trend at Tech Report lately of mixing things up a little. For everyone’s sake I would prefer making things totally clear and benchmarking in both pre-overclocked mode and normal mode.

          • thecoldanddarkone
          • 11 years ago

          I’m sorry but most of the users on this board are from the US/Canada. So it’s relevant. It’s not their job to analyze the entire market. You are buying the video card and it’s your job to come up with the price/performance ratio for your area.

      • Jigar
      • 11 years ago

      I couldn’t agree more…

      • MadManOriginal
      • 11 years ago

      Agreed.

      Also on the Fillrate, Texel, memory bandwidth etc chart, could you include the cards’ clock and memory speeds?

        • sammorris
        • 11 years ago

        It is a little skewed, especially as no card overclocks the same so you can’t really estimate what the “real” results are.

          • PRIME1
          • 11 years ago

          These are off-the-shelf parts. No overclocking by the user at all. Why should TR waste time trying to find a card just to make the 4870 look better?

            • pmonti80
            • 11 years ago

            The problem is this is not the reference card, and Techreport is making recommendations based on results from a non-reference card.

            • Tamale
            • 11 years ago

            Who cares about reference cards? It’s not like the average user can buy them. Consumers buy branded products from EVGA, XFX, Zotac, Sapphire, PowerColor, etc etc etc

            • pmonti80
            • 11 years ago

            Yes, but usually it’s easier to find products from vendors that use the reference cooler and reference clock speeds.

      • Damage
      • 11 years ago

      I’d like to challenge your premises, because I think they’re wrong. There are good reasons we tested like we did, and you misstate the reality here.

      First, the GTX 260 card we tested was not “overclocked,” and I don’t believe we used that word to describe it. Overclocking is where you, as the user, turn up the clock beyond the product’s rated clock speed and potentially void your warranty. The Zotac GTX 260 we tested came out of the box at the speed we tested and has lifetime warranty coverage at that speed.

      Second, the Zotac is an actual consumer product, selling at Newegg. It’s common practice among Nvidia partners to sell cards at higher speeds than Nvidia’s baseline recommendation, and this card isn’t some strange outlier because of its higher stock speeds. In fact, many of the cards we tested for this review and others, Radeons and GeForces, have clock speeds that vary from the reference clocks, since this practice is so common.

      Third, the Zotac GTX 260 is priced at $299.99, exactly the same as the Radeon HD 4870 1GB from multiple vendors (which run at the clocks we tested). It is directly competing in the market against the Radeon and is thus a fair basis for comparison. I even spent some time in the beginning of the review setting up the exact pricing and warranty coverage terms in order to give the reader a sense of how things match up.

      You really overstate the case when you say we tested an “overclocked” card and imply that this isn’t a fair or real-world comparison. The opposite is true, and your objection is based on the GPU makers’ baseline clock speed, an almost irrelevant number for the consumer.

      If everybody wants to see it, I have no real objection to underclocking the cards we test to the GPU provider’s baseline speed in the future, but if I were to do so, I’d remind readers that those results are theoretical and of secondary interest.

        • pmonti80
        • 11 years ago

        The problem for me is that I’m from Spain, and prices here are usually very different. So the totally valid argument about prices in the USA is not valid everywhere.
        The best thing to do, in my opinion, is to post the “theoretical” results as well so we can compare when the prices are not the same. With both results on screen there is no basis for suspicions of any kind and everyone can be happy.

          • Damage
          • 11 years ago

          We have always done our price comparisons based on local prices. If you have another set of options available, feel free to look at the performance data and adjust accordingly when you make your own decisions.

            • pmonti80
            • 11 years ago

            It’s your site; do what you think is best.
            But I think it would be better if you always posted both reference and non-reference card results so people can get a better idea. That way price variations can be more easily adapted to.

            • Meadows
            • 11 years ago

            No, actually, that doesn’t make logical sense. You will almost never buy a GeForce video card that runs at nVidia’s reference specs, so what’s the point in highlighting those specs? There is no point.

            TR do well in benchmarking the products exactly the way they are sold. If the Zotac is not available in your area, check the clock speeds of your card and the Zotac, find the percentage of difference, and approximate the results for that particular card.

            If Scott were to underclock Zotac’s card, he would only achieve two things:
            1) show performance levels that _[
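
            Meadows’ “find the percentage of difference and approximate” suggestion amounts to linear clock-ratio scaling. A rough sketch; the clock speeds and frame rate below are illustrative (576 MHz is Nvidia’s baseline GTX 260 core clock, 648 MHz the Zotac’s rated speed), and real games rarely scale perfectly linearly:

            ```python
            # Approximate a card's frame rate at a different core clock by
            # scaling the benchmarked result by the clock ratio (linear assumption).
            def approximate_fps(measured_fps, tested_core_mhz, target_core_mhz):
                return measured_fps * (target_core_mhz / tested_core_mhz)

            # If the Zotac managed 60 fps at 648 MHz, a reference-clocked card
            # at 576 MHz would land somewhere near:
            print(round(approximate_fps(60.0, 648, 576), 1))  # about 53.3 fps
            ```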

            • pmonti80
            • 11 years ago

            To that I have to respectfully disagree. I find it’s usually easier to find products from vendors that use the reference cooler and reference clock speeds, hence the reason for my posts. Contrary to what you may believe, I’m not pro-ATI or pro-Nvidia.

            • robspierre6
            • 11 years ago

            This test is unfair. Comparing an overclocked card to a reference one is unfair.
            You could’ve used an Asus TOP card or any other overclocked ATI card to compare it with the AMP gtx.
            An overclocked 4870 does beat a reference 280gtx. Does that make the 4870 faster than the 280gtx??

            • Meadows
            • 11 years ago

            Both cards are exactly 300 dollars. I call that a fair comparison. You have no argument to counter that one.

            Your comment was interesting the first time, but this is the fourth or fifth time you spew the same garbage on the same news post, so I hereby ask you to shut your yap. Damage has already asked you not to troll about this “issue”, by the way.

            • sammorris
            • 11 years ago

            That makes sense. If both cards are $300 it’s a fair trial 🙂 Then again, I was never really an advocate of pre-overclocked GPUs, it seemed so easy to do it yourself…

            • Meadows
            • 11 years ago

            Yes – but there’s no warranty if you do that. Of course, you could always lie and say the card fried itself.

            • MadManOriginal
            • 11 years ago

            Perhaps rather than typing prices statically in to the article they could be done as links to a pricesearch engine entirely, or typed in at today’s prices along with a ‘check current prices here’ link to a pricesearch engine. *edit* I guess you kind of do this by having the prices as a link to Newegg but the price is still typed in with no pricesearch link.

            Wrt the overclocked versus stock issue, I don’t like saying test more stuff for the sake of it but having a reference clocked part in a test like this is important so I’d say underclocking should be done. This is going to be TR’s reference review for the GTX 260 Core 216 so it should reflect the reference speeds and include the overclocked speeds too if you want. This is as opposed to a ‘X_ card roundup’ article like you sometimes do where it’s a bunch of the same cards with different coolers, speeds, or non-reference designs, in that case the comparison is between different specific models so whatever speed they come is fine.

            • pmonti80
            • 11 years ago

            Your explanation is miles better than mine, nothing to add.

      • robspierre6
      • 11 years ago

      He could’ve used a 4870 TOP from Asus or an XOC one from Diamond.

      • l33t-g4m3r
      • 11 years ago

      TR has been showing a bit of bias toward nvidia recently.

      In the previous 4870 review, TR devoted an entire section to 3DMark and the PhysX benchmark, which is irrelevant to the 4870 considering PhysX is Nvidia-only.
      (Not to mention 3DMark has proven time and time again to be an untrustworthy source of numbers.)

      TR also skipped reporting on ALL of the audio/video features of the 4870, while in the Nvidia 200 article they reported its capabilities, which happen to be far less than what the 4870 can do.
      See here for the full details of the 4870:
      http://www.xbitlabs.com/articles/video/display/ati-radeon-hd4850_10.html There seems to be /[

        • Meadows
        • 11 years ago

        Would you shut up already?

          • l33t-g4m3r
          • 11 years ago

          No, you. (-_-)

            • MadManOriginal
            • 11 years ago

            Agreed. 10characters=worthwhile

            • DrDillyBar
            • 11 years ago

            nm, my brain was broken

    • Meadows
    • 11 years ago

    Scott, did you use DirectX 9 in the Crysis Warhead tests? The game is essentially the same – it’s worth noting that it has ugly bugs in DirectX 10 though, particularly with texture caching, which increases the load on the hardware for no reason and introduces weird hiccups in specific areas.

    According to other review sites, even the original GTX 260 should do better than what you measured. I’ve tested both modes, I also use Vista 64, and I can tell you that this time DX10 is a sacrilege of all that is gaming; it’s been ruined. Even my sound was crackling (Creative) during DX10 mode in some intensive scenes, probably due to the obscure way memory was handled, but in DX9 mode my paltry overclocked 8800 GT let me finish the game very (very) playably at “enthusiast” at 1280×960.

    • Meadows
    • 11 years ago

    It looks like I’m still right and nVidia wins the HD crowd with the new GTX cards, winning at high antialias/resolution tests.

    Now they need to “reload” the GTX 280 too, since it’s too close to the improved 260.

    • TREE
    • 11 years ago

    Thanks for this comparison; I’ve been looking into getting either of these two cards to replace my 8800 GTX dino. It does look like the GTX 260 will be my next purchase, even though I am very eager to move back to the ATI front just to have that bit of future-proofing via CrossFire. That being said, I’ve never taken the opportunity or even considered adding a second card to my system when a single card’s performance starts to become an issue in the latest games, so I guess it’s what some would call marketing hype.

    The performance of the GTX 260(+) is really good, but… that’s a factory-overclocked card and I live in Britain, so the price is not going to be the £200 or £220 of the HD 4870 1GB and will be more like £250, plainly because the card is pre-overclocked. Last but not least, there is also the pain of finding a card that actually has the 216 shaders. So maybe Nvidia will be losing out plainly because of very bad European marketing and exploitation, which seems to be occurring as a result of their unclear naming and pricing scheme over here.

      • robspierre6
      • 11 years ago

      It’s a 17% overclocked 260gtx… He should’ve compared the overclocked gtx card to an overclocked 4870…

        • Damage
        • 11 years ago

        Posting the same thing over and over is trolling. Your complaint has been registered (and I’ve responded to it). Please stop posting dupes.

          • sammorris
          • 11 years ago

          I too have responded in my own post. Man, some people are blind… 😛

      • TREE
      • 11 years ago

      Ok i’ve given up on looking for a GTX 260+ card as the price here in the UK, even for a stock clocked card, is extortionate (£240 and up).

    • kvndoom
    • 11 years ago

    Looks like 1GB is a waste unless you have a 30″ monitor. Kudos to Nvidia for making the 280 pretty much obsolete. I agree that “270” would’ve been a more appropriate name, but they aren’t exactly known for being logical in the naming department anyway…

      • ssidbroadcast
      • 11 years ago

      Scott you should’ve just called it the GTX270 anyways. Maybe the other journalists would follow suit.

      Don’t pay attention to the green man behind the curtain!!

    • Nitrodist
    • 11 years ago

    Who needs a 512MiB card when we have 256MiB cards that play the current games just fine!

      • Vasilyfav
      • 11 years ago

      Exactly! No one will ever need more than 640MB GDDR RAM!

      • sammorris
      • 11 years ago

      Nitrodist: See the GRiD results page and tell me 512MB is enough. 🙂

        • Saribro
        • 11 years ago

        ahum, 4xAA/16xAF/2560*1600…
        I’d label it as an outlier.

          • sammorris
          • 11 years ago

          Outlier it may be, but it’s the setting I use, and I’m not the only one either.

    • Vasilyfav
    • 11 years ago

    The 4780 doubles up on RAM. Huh? That was a quick correction :>
