ATI’s Radeon X1000 series GPUs

WE’VE BEEN WAITING quite a while for ATI’s new generation of graphics chips. It’s no secret that the R500-series GPUs are arriving later than expected, and fans of the company have nervously speculated about the cause of the delay. ATI chose to build its new series of graphics chips using 90nm process technology, and going with new process tech has always been risky. Some folks fretted that ATI may have run into the same sort of problems at 90nm that made Intel’s Pentium 4 “Prescott” processors famously hot, power hungry, and unable to hit their projected clock speeds. In a related vein, others fussed over rumors that ATI’s new high-end R520 architecture was “only” 16 pipes wide, compounding the process technology risk. If the R520 couldn’t hit its clock speed targets, it could have a difficult time keeping pace with its would-be rival, NVIDIA’s GeForce 7800, whose wider 24-pipe design makes it less dependent on high clock frequencies. As the weeks dragged on with no sign of ATI’s new GPUs, the rumor mill began circulating these concerns ever more urgently.

Two weeks ago today, Rich Heye, VP and GM of ATI’s desktop business unit, stood up in front of a room full of skeptical journalists and attempted to defuse those concerns. The problem with R520, he told us, was neither a snag caused by TSMC’s 90nm process tech nor a fundamental design issue. The chip was supposed to launch in June, he said, but was slowed by a circuit design bug—a simple problem, but one that was repeated throughout the chip. Once ATI identified the problem and fixed it, the R520 gained 150MHz in clock frequency. That may not sound like much if you’re thinking of CPUs, but in the world of 16-pipe-wide graphics processors, 150MHz can make the difference between competitive success and failure.

With those concerns addressed, ATI proceeded to unveil not just R520, but a whole family of Radeon graphics products ranging from roughly $79 to $549, based on three new GPUs that share a common heritage. It is one of the most sweeping product launches we’ve ever seen in graphics, intended to bring ATI up to feature parity with NVIDIA—and then some. Read on as we delve into the technology behind ATI’s new GPU lineup and then test its performance head to head against its direct competition.

Shader Model 3.0 and threading
Probably the most notable feature of the ATI R500-series graphics architecture is its support for the Shader Model 3.0 programming model. Shader Model 3.0 lives under the umbrella of Microsoft’s DirectX 9 API, the games and graphics programming interface for Windows. SM3.0 is the most advanced of several programming models built into DX9, and it’s the one used by all NVIDIA products in the GeForce 6 and 7 series product lines. SM3.0’s key features include a more CPU-like programming model for pixel shaders, the most powerful computational units on a GPU. SM3.0 pixel shaders must be able to execute longer programs, and they must support dynamic flow control within those programs—things such as looping and branching with conditionals. These pixel shaders must also do their computational work with 128-bit floating-point precision—32 bits of floating-point precision per color channel for the red, green, blue, and alpha channels.

ATI’s new GPUs support all of these things, including 32-bit precision per color channel. That’s a step up in precision from ATI’s previous DirectX 9-class graphics processors, all of which did internal pixel shader calculations with 24 bits of FP precision. Unlike NVIDIA’s recent GPUs, the R500 series’ pixel shaders will not accept a “partial precision” hint from the programmer and cut back pixel shader precision to 16 bits per channel for some calculations in order to save on resources like internal chip register space. Instead, R500 GPUs do all pixel shader calculations with 32-bit precision. The GPU can, of course, store data in lower precision texture formats, but the internal pixel shader precision doesn’t change.

The move from 24 to 32 bits of precision establishes a nice baseline for the future, but virtually no applications have yet layered enough rendering passes on top of one another to cause 24-bit precision to become a problem. As we have learned over the life of the GeForce 6 and 7 series, Shader Model 3.0’s true value doesn’t come in the form of visual improvements over the more common SM2.0, but in higher performance. The same sort of visual effects possible in SM3.0 are generally possible in 2.0, but they’re not always possible in real time. Through judicious use of longer shader programs with looping and dynamic branching, applications may use SM3.0 to enable new forms of eye candy.
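
To see why precision only starts to matter once many passes pile up, here’s a tiny numpy sketch. Neither FP24 nor the GPUs’ actual blending hardware is representable in numpy, so 16-bit versus 32-bit floats merely illustrate the general principle that small contributions get lost at lower precision; the pass count and layer value below are made up.

```python
import numpy as np

# Toy illustration of precision drift across many layered passes. GPUs here
# compute in FP24 or FP32, which numpy can't represent, so float16 vs.
# float32 only demonstrates the principle.
def accumulate(dtype, passes=200):
    color = np.array(0.5, dtype=dtype)
    contribution = np.array(0.00013, dtype=dtype)   # one faint layer per pass
    for _ in range(passes):
        color = (color + contribution).astype(dtype)
    return float(color)

print("float32:", accumulate(np.float32))   # ~0.526, close to the exact 0.5 + 200*0.00013
print("float16:", accumulate(np.float16))   # stays at 0.5 -- each contribution rounds away
```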

In order to take best advantage of Shader Model 3.0’s capabilities, ATI has equipped the R520’s pixel shader engine with the scheduling and control logic capable of handling up to 512 parallel threads. Threads are important in modern graphics architectures because they’re used to keep a GPU’s many execution resources well fed; having lots of threads standing by for execution allows the GPU to mask latency by doing other work while waiting for something relatively slow to happen, such as a texture access. A Shader Model 3.0 GPU may have to wait for the result of a conditional (such as an if-else statement) to be returned before proceeding with execution of a dependent branch, so such latency masking becomes even more important with the addition of dynamic flow control.
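
As a back-of-envelope illustration of why a deep thread pool matters, here’s a generic latency-hiding calculation. This is not ATI’s scheduler, and the cycle counts are invented for illustration.

```python
import math

# Generic latency-hiding arithmetic: if a thread has `alu_cycles` of math to
# chew on between texture fetches, and a fetch takes `fetch_latency` cycles
# to return, roughly this many threads must be in flight to keep the shader
# units busy. The numbers are invented for illustration.
def threads_needed(fetch_latency, alu_cycles):
    return math.ceil(fetch_latency / alu_cycles) + 1

print(threads_needed(fetch_latency=200, alu_cycles=4))    # ~51 threads in flight
print(threads_needed(fetch_latency=200, alu_cycles=40))   # shader-heavy code needs fewer
```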


A block diagram of the R520 architecture (Source: ATI)

Despite the ability for programs to branch and loop, though, Shader Model 3.0 GPUs retain their parallelism, and that pulls against the efficient execution of “branchy” code. Pixels (or more appropriately, fragments) are processed together in blocks, and when pixels in the same block take different forks of a branch, all pixels must traverse both forks of that branch. (The ones not affected by the active branch are simply masked out during processing.) In the R520, pixels are grouped into threads in four-by-four blocks, which ATI says is much finer grained threading than in competing GPUs.

To illustrate the improved efficiency of its architecture, ATI offers this example of a shadow mapping algorithm using an if-else statement:

The GPU with large thread sizes must send lots of pixels down both sides of a branch, and thus it doesn’t realize the benefits of dynamic flow control. A four-by-four block, like in the R520, is much more efficient by comparison.
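
Here’s a toy model of that effect: a circular “shadow” region drives an if-else, and the frame is shaded in square tiles of various sizes. The tile shapes, shader costs, and shadow shape are all invented, so only the trend across tile sizes is meaningful.

```python
# Toy model of branch divergence cost vs. thread size. A circular shadow
# covers part of a 512x512 frame; shadowed pixels take an expensive branch,
# lit pixels a cheap one, and any tile containing both kinds must run both
# branches for every pixel in it.
def frame_cost(tile, size=512, cheap_ops=4, expensive_ops=40):
    cx = cy = size / 2
    r2 = (size * 0.3) ** 2
    shadowed = [[(x - cx) ** 2 + (y - cy) ** 2 < r2 for x in range(size)]
                for y in range(size)]
    total = 0
    for ty in range(0, size, tile):
        for tx in range(0, size, tile):
            kinds = {shadowed[y][x]
                     for y in range(ty, ty + tile)
                     for x in range(tx, tx + tile)}
            per_pixel = ((expensive_ops if True in kinds else 0)
                         + (cheap_ops if False in kinds else 0))
            total += per_pixel * tile * tile
    return total

# 4x4 = 16 pixels (R520's thread size) vs. the much larger blocks reported
# for NVIDIA's NV40 and G70.
for tile in (4, 16, 32, 64):
    print(f"{tile}x{tile} tiles: {frame_cost(tile):,} shader ops")
```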

Comparing the scheduling and threading capabilities of R520 to the competition isn’t the easiest thing to do, because NVIDIA hasn’t offered quite as much detail about exactly how its GPUs do things. NVIDIA seems to rely more on software, including the real-time compiler in its graphics drivers, to assist with scheduling. Yet NVIDIA says that its GPUs do indeed have logic on board for management of threads and branching, and that they keep “hundreds” of threads in flight in order to mask latency. As for threading granularity, some clever folks have tested the NV40 and G70 and concluded that the NV40 handles pixels in blocks of 4096, while the G70 uses blocks of 1024. NVIDIA claims those numbers aren’t entirely correct, however, pegging the NV40’s thread size at around 880 pixels and the G70’s at roughly a quarter of that. In fact, the G70’s pipeline structure was altered to allow for finer-grained flow control. The efficiency difference between groups of 200-some pixels and 16 pixels in ATI’s example above is pretty stark, but how often and how much this difference between the G70 and R520 will matter will depend on how developers use flow control in their shaders.

 

A new memory controller
Along with the reworked pixel shader engine, ATI has designed a brand-new memory controller for the R520. Past ATI graphics chips used a centralized memory controller located in the center of the chip, and this placement caused problems. Lots of wires had to come into the center of the chip, and high wire density caused hotspots. The R520’s new memory controller adopts a more distributed form of organization: a ring topology.


ATI’s ring-style memory controller (Source: ATI)

Communication is handled by means of a pair of 256-bit rings running in opposite directions around the periphery of the chip. That makes for 512 bits of internal bandwidth, but external bandwidth to memory remains 256 bits, as in ATI’s past high-end GPUs. ATI expects this new design to be capable of much higher memory clock speeds. The new memory controller also offers finer access granularity by dividing its external memory interface up into eight 32-bit channels rather than four 64-bit channels. This memory controller’s “smart” arbitration logic is programmable, and ATI expects to tune it on a per-application basis in the future using the Catalyst A.I. facility of its graphics drivers.
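
Here’s a rough sketch of why finer channel granularity helps with scattered accesses. The 32-byte and 64-byte minimum transfers below stand in for eight 32-bit and four 64-bit channels with an assumed burst length; they are illustrative numbers, not GDDR3 specifics.

```python
import random

# Rough sketch of access granularity: scattered 16-byte reads served by a
# memory system whose smallest transfer is one "granule." The granule sizes
# and address mapping are assumptions for illustration only.
def bytes_moved(requests, granule):
    touched = {b // granule for addr, size in requests for b in (addr, addr + size - 1)}
    return len(touched) * granule

random.seed(1)
reqs = [(random.randrange(0, 1 << 20, 16), 16) for _ in range(1000)]   # scattered 16B reads
useful = sum(size for _, size in reqs)
for label, granule in (("8 x 32-bit channels", 32), ("4 x 64-bit channels", 64)):
    print(f"{label}: {bytes_moved(reqs, granule)} bytes moved for {useful} useful bytes")
```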

The R520 is tweaked in a number of other ways to help improve memory performance, as well. The texture, color, Z, and stencil caches are now fully associative, making them more effective than in the past. The chip’s Z compression has been revamped to achieve higher compression and maintain it longer. And the GPU’s hidden surface removal technique for its hierarchical Z buffer now uses floating-point math for more precise operation.

A grab-bag of other enhancements
There’s much more to R520 and its derivatives, of course, but time is running short, so I’m going to bust out the machine gun and spray you with bullet points to cover the rest of the highlights. Many of them are quick but notable. Among them:

  • HDR support with AA — R520 and the gang can do filtering and blending of 16-bit per color channel floating-point texture formats, allowing for easier use of high-dynamic-range lighting effects—just like the GeForce 6 and 7 series. Unlike NVIDIA’s GPUs, the R500 series can also do multisampled antialiasing at the same time, complete with gamma-correct blends. ATI’s new chips also support a 10:10:10:2 integer buffer format that offers increased color precision at the expense of alpha transparency precision (see the packing sketch after this list). This is a smaller 32-bit format, but it should still be suitable for high-dynamic-range rendering.

    Interestingly enough, ATI claims the Avivo display engine in the R500 series can do tone mapping, converting HDR images to display color depths with no performance penalty. Unfortunately, DirectX 9 doesn’t currently offer a way to expose this capability to developers, and it’s not in the drivers yet, but apparently the hardware can do it.

  • Better edge and texture antialiasing — ATI’s new “Adaptive AA” feature mirrors NVIDIA’s supersampled transparency AA, taking care of cases where alpha transparency creates jaggies that aren’t on polygon edges. This one is enabled via a simple checkbox in the driver.

    Another driver checkbox addresses a long-standing complaint of mine: angle-dependent anisotropic filtering. The R500 series includes a new, higher quality anisotropic filtering method that’s not angle dependent. Most newer GPUs haven’t included the ability to turn off angle-dependent aniso, and I’m pleased to see that ATI has made it happen.

  • Avivo video and display engine — Avivo is a branding concept that encompasses the whole of the “video pipeline,” from the capture stage using ATI’s Theater products to the decoding, playback, and display output stages that happen on a GPU. Avivo still uses pixel shaders rather than a dedicated video processing unit, but ATI is bringing over algorithms and capabilities from its Xilleon line of consumer electronics products in order to improve the Radeon’s video and display features. Avivo features end-to-end 10-bit integer color precision per channel for both analog and digital displays, and ATI proudly declares that Avivo will have the best H.264 acceleration around. We will have to cover Avivo in more depth in a future face-off against NVIDIA’s PureVideo.
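
As promised above, here’s a quick sketch of how a 10:10:10:2 value might be packed into and unpacked from 32 bits. ATI’s exact bit ordering isn’t specified here, so the layout below (alpha in the top two bits, red in the low ten) is an assumption for illustration.

```python
# Packing and unpacking a 10:10:10:2 color value into 32 bits. The bit
# layout is assumed for illustration; the article does not give ATI's
# exact channel ordering.
def pack_rgb10a2(r, g, b, a):
    """r, g, b in [0, 1] quantized to 10 bits; a quantized to 2 bits."""
    to_n = lambda v, bits: min(int(round(v * ((1 << bits) - 1))), (1 << bits) - 1)
    return (to_n(a, 2) << 30) | (to_n(b, 10) << 20) | (to_n(g, 10) << 10) | to_n(r, 10)

def unpack_rgb10a2(word):
    return ((word & 0x3FF) / 1023.0,
            ((word >> 10) & 0x3FF) / 1023.0,
            ((word >> 20) & 0x3FF) / 1023.0,
            ((word >> 30) & 0x3) / 3.0)

w = pack_rgb10a2(0.25, 0.5, 0.75, 1.0)
print(hex(w), unpack_rgb10a2(w))   # 1023 color steps per channel, only 4 alpha steps
```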

I’ve left out more details than I care to admit, but it’s time to turn our attention to the composition of the R500 family.

 

The R500 family
Because the R500 series decouples the different stages of the rendering pipeline from one another, ATI’s chip designers have been able to allocate their transistor budgets in some interesting ways. We’ve seen this sort of thing before with NVIDIA’s newer chips. For instance, the GeForce 6600 notably includes eight pixel shaders but only four “back end” units, or ROPs. Here’s a quick look at how ATI has divvied up execution resources inside of the first three R500-series GPUs and how the competing NVIDIA chips compare.

                         Vertex    Pixel     Texture   Render      Z          Max.
                         shaders   shaders   units     back-ends   compare    threads
Radeon X1300 (RV515)     4         4         4         4           4          128
Radeon X1600 (RV530)     5         12        4         4           8          128
Radeon X1800 (R520)      8         16        16        16          16         512
GeForce 6200 (NV44)      3         4         4         2           2/4        ?
GeForce 6600 (NV43)      3         8         8         4           8/16       ?
GeForce 6800 (NV41)      5         12        12        8           8/16       ?
GeForce 7800 GT (G70)    7         20        20        16          16/32      ?
GeForce 7800 GTX (G70)   8         24        24        16          16/32      ?

Comparing these very different architectures from ATI and NVIDIA isn’t an exact science, of course, but I’ve tried to get it right. Because we don’t know precisely how many threads each NVIDIA GPU can handle, I’ve left that question unanswered in the table. Also, the number of Z compares per clock gets tricky. As you can see, I’ve put a split into the NVIDIA GPUs’ Z compare capabilities. That’s because the NV4x and G70 GPUs can handle either one Z/stencil and one color pixel per clock, or they can do two Z/stencil pixels per clock. It’s a tradeoff. The R500 series actually has a similar quirk; it can do two Z pixels per clock when multisampled antialiasing is enabled, twice as many as indicated in the table.


ATI’s three architectures compared (Source: ATI)

Anyhow, if we concentrate on the new ATI GPUs, we can see a couple of very strait-laced designs in the RV515 and the R520. Both have the same number of pixel shaders, texture units, render back-ends, and Z-compare units—all nicely balanced and politically correct, though not necessarily the optimal use of transistors.

Then we have the wild one of the bunch, the RV530. With five vertex shaders, 12 pixel shaders, eight Z-compare units, and only four texture units and render back ends, this design is wildly asymmetrical and generally disrespectful to its elders. The RV530 may also be a more optimal means of spending a transistor budget than the other two chips, although it is a pretty radical departure from the norm. The RV530 is something of a statement by ATI about where games are going, and the emphasis is decidedly on shaders. The X1600 cards based on this chip should be especially good at shader-heavy games with lots of complex geometry and shadowing, but they may be relatively weak in older games that rely on lots of texturing or just raw fill rate. RV530-based products may also suffer when subjected to the rigors of heavy anisotropic filtering or edge antialiasing, as in our test suite for this article. Whoops.

NVIDIA has gradually embraced more asymmetry between pixel shaders and other resources in its GPUs over time, culminating in the G70’s use of 24 pixel shaders and 16 ROPs. As far as we know, NVIDIA’s current architectures don’t decouple everything ATI’s new designs do. For instance, the number of texturing units is tied to the number of pixel shaders, and the render back-end and Z-compare unit are tied together inside what NVIDIA calls a ROP. Perhaps for this reason, or perhaps because it’s just crazy, no GPU from NVIDIA has ever had a 3:1 ratio of pixel shaders to render back-ends like the RV530. I’ll be curious to see whether the RV530 succeeds; it may be too far ahead of its time.

It’s not in the table above, but I should mention the memory controllers on the three chips. R520 has the full 512-bit internal, 256-bit external memory controller with a ring topology. RV530’s memory controller retains the ring topology but halves the bandwidth to 256 bits internally and 128 bits to memory. The humble RV515 doesn’t have a ring-style memory controller, but it can support one, two, or four 32-bit memory channels.

The chips
Speaking of transistor budgets—which is a sure-fire way to pick up the chicks—let’s have a look at where ATI’s three new chips wound up.

                       Transistors   Process      Approx. die
                       (millions)    size (nm)    size (sq. mm)
Radeon X1300 (RV515)   105           90           95
Radeon X1600 (RV530)   157           90           132
Radeon X1800 (R520)    321           90           263
GeForce 6200 (NV44)    75            110          110
GeForce 6600 (NV43)    143           110          156
GeForce 6800 (NV41)    190           110          210
GeForce 7800 (G70)     302           110          333

No table like this one would be possible without a heaping helping of disclaimers, so let’s get started. First, it seems that ATI and NVIDIA estimate transistor counts using different methods, so the numbers here aren’t necessarily entirely comparable. I didn’t count them myself. Second, the die size measurements you see were produced by me, and are not entirely, 100% accurate. I used a plastic ruler, and I didn’t measure fractions of a millimeter beyond the occasional .5 increment when really obvious. That said, these numbers should be more accurate than some others I’ve seen bandied about, so there you go.

  
Left: R520 (top) versus G70 (bottom)
Right: RV530 (top) versus NV41 (bottom)

Obviously, ATI’s move to 90nm process tech gives it the ability to squeeze in more transistors per square millimeter, as the numbers suggest. Die size is related pretty directly to the cost of producing a chip, so ATI looks to have an advantage in each segment of the market. However, that advantage may be mitigated by less-than-stellar yields on these 90nm chips, so who knows?
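
Here’s the quick density math behind that claim, using the transistor counts and ruler-measured die sizes from the table above, so treat the output as ballpark figures only.

```python
# Rough transistor density from the table above (transistor counts in
# millions, die sizes from the author's plastic-ruler measurements).
chips = {
    "RV515 (90nm)": (105, 95),
    "RV530 (90nm)": (157, 132),
    "R520  (90nm)": (321, 263),
    "NV44 (110nm)": (75, 110),
    "NV43 (110nm)": (143, 156),
    "NV41 (110nm)": (190, 210),
    "G70  (110nm)": (302, 333),
}
for name, (mtrans, mm2) in chips.items():
    print(f"{name}: {mtrans / mm2:.2f} M transistors per sq. mm")
```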

The real difficulty of handicapping things here comes in trying to sort out which of these new GPUs competes with which chip from NVIDIA. Truth be told, NVIDIA has already taped out multiple GPUs, presumably lower-end GeForce 7-series parts, to compete with ATI’s new offerings. Only the G70 and the R520 are sure-fire direct competitors from the same generation. On that front, note that the G70 packs 24 pixel shader pipelines into only 302 million transistors, while the R520’s sixteen pipes weigh in at 321 million transistors. That’s quite the difference. NVIDIA says the G70 would translate to about 226 mm2 at 90nm, were it to make the leap. The G70 hasn’t made that leap, though.

In terms of transistor counts and basic capabilities, the RV530 falls somewhere between the NV41 chip powering the GeForce 6800 and the NV43 GPU on GeForce 6600 cards. Similarly, the RV515 falls between the NV43 and NV44, so direct competitors among NVIDIA’s GPUs aren’t easily identified. That leaves us to compare the actual cards based on these chips on the basis of price and performance, which is what we’ll do next.

 

Cards in the X1000 series
Like I said, ATI is unleashing a whole range of cards based on these three chips. Here’s a look at all of ATI’s proposed models, from $79 on up.

                           GPU     Core clock   Memory clock   Memory size   Suggested
                                   (MHz)        (MHz)          (MB)          retail price
Radeon X1300 HyperMemory   RV515   450          1000           32 (128)      $79
Radeon X1300               RV515   450          500            128           $99
Radeon X1300               RV515   450          500            256           $129
Radeon X1300 Pro           RV515   600          800            256           $149
Radeon X1600 Pro           RV530   500          780            128           $149
Radeon X1600 Pro           RV530   500          780            256           $199
Radeon X1600 XT            RV530   590          1380           128           $199
Radeon X1600 XT            RV530   590          1380           256           $249
Radeon X1800 XL            R520    500          1000           256           $449
Radeon X1800 XT            R520    625          1500           256           $499
Radeon X1800 XT            R520    625          1500           512           $549

That’s pretty much a sweep, save for a gaping hole between $249 and $449 that could use some attention. Most of ATI’s current Radeon lineup should go the way of the dodo once these cards filter out into the market in sufficient volume. Surely something will come along to replace the current Radeon X800 XL at $299-349, but we don’t yet know what that will be.

ATI supplied us with four cards from its new lineup for review. Since we’ve only had a week and a half with the cards, we had to leave out the Radeon X1300 Pro. Our test suite for the other cards wouldn’t be appropriate for the X1300 Pro or its competitors. We’ll address it in a future review.


The Radeon X1600 XT

Further up the ladder, we have the Radeon X1600 XT. This card packs 256MB of RAM and 12 pixel pipes running at 590MHz for a price of $249. LCD fanatics, please note the dual DVI outputs. ATI says the X1600XT is a competitor for the GeForce 6600 GT, but that’s not quite right for this card. The more direct competitor is probably the GeForce 6800, a 12-pipe card that’s selling for $239 or so—sometimes less. I expect that ATI’s board partners will discount somewhat from the list price, so the comparison seems apt.


The Radeon X1800 XL

In the meaty part of the product lineup is the Radeon X1800 XL. This R520-based beast packs 256MB of memory and a 500MHz GPU clock. ATI’s suggested list price on this puppy is $449. That would probably make its most direct competition the GeForce 7800 GT, although some 7800 GT cards can presently be found for under $400.


The Radeon X1800 XT is almost exactly the same length as the GeForce 7800 GTX


…but with a double-width cooler

Finally, there’s the Radeon X1800 XT. The XT rides on the same basic board design as the XL, but with a larger dual-slot cooler. The 512MB version, pictured above, will list at $549. That puts it into the high end of GeForce 7800 GTX territory. Word has it that NVIDIA is preparing a 512MB version of the 7800 GTX to go toe to toe with the Radeon X1800 XT, but we’ll have to compare the 512MB Radeon X1800 XT to the 256MB GeForce 7800 GTX for now.

You may be wondering when these products will actually be available for purchase. All I can tell you right now is what ATI has told me, so let me hit you with the actual slide from the actual ATI presentation, with actual dates. Actually.

Three of the cards, including the X1800 XL, should be available today, if all goes according to plan. The rest are slated to arrive in November. ATI hasn’t given us any projections about how broad or deep availability of these new cards will be, so we will have to wait and see about that. The red team’s track record on product availability has been rocky in the past year or so, but perhaps this round will be different.

Doubling up on Radeon X1000 cards via CrossFire
Yes, the Radeon X1000 series will get its own version of CrossFire. We don’t have too many details on it yet, but we do know several things. High-end X1000 CrossFire rigs will still require dedicated master cards and external DVI cables in order to work, but will support much higher display resolutions—up to 2048×1536 at 70Hz or better. The dual-link DVI output capabilities of the Avivo display engine will team up with a new, more powerful compositing chip on the master cards to make this happen. ATI says this compositing engine will also be fast enough to deliver CrossFire’s SuperAA modes with no performance hit.
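
A quick back-of-envelope shows why the dongle needs a dual-link DVI connection at that resolution; the ~5% blanking overhead assumed below is a rough figure, not an exact timing standard.

```python
# Why 2048x1536 at 70Hz needs dual-link DVI for the CrossFire dongle: a
# single TMDS link tops out at a 165MHz pixel clock. The blanking overhead
# here is an assumption for illustration.
def pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=1.05):
    return width * height * refresh_hz * blanking_overhead / 1e6

clk = pixel_clock_mhz(2048, 1536, 70)
print(f"required pixel clock ~{clk:.0f} MHz vs. 165 MHz single-link DVI limit")
# -> roughly 231 MHz, hence the dual-link output and the beefier compositing chip
```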

Meanwhile, the X1300 series may not even require master cards, because it will use PCI Express to pass data between two graphics cards, as can NVIDIA’s low-end GPUs.

We don’t yet have a definite ETA on X1800 or X1600 master cards, unfortunately, but ATI says they’re on the way.

 

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

(Where two values are given, the first applies to the nForce4 SLI system used for the NVIDIA cards and the second to the Radeon Xpress 200 CrossFire system used for the ATI cards.)

Processor                 Athlon 64 X2 4800+ 2.4GHz
System bus                1GHz HyperTransport
Motherboard               Asus A8N-SLI Deluxe / ATI CrossFire reference board
BIOS revision             1013 / 080012
North bridge              nForce4 SLI / Radeon Xpress 200P CrossFire Edition
South bridge              (none; single-chip nForce4) / RS450
Chipset drivers           SMBus driver 4.45, SATA IDE driver 5.34 / SMBus driver 5.10.1000.5, SATA IDE driver 5.0.0.2
Memory size               1GB (2 DIMMs)
Memory type               OCZ EL PC3200 DDR SDRAM at 400MHz
CAS latency (CL)          2
RAS to CAS delay (tRCD)   2
RAS precharge (tRP)       2
Cycle time (tRAS)         8
Hard drive                Maxtor DiamondMax 10 250GB SATA 150
Audio                     Integrated nForce4/ALC850 with Realtek 5.10.0.5900 drivers / Integrated RS450/ALC880 with Realtek 5.10.00.5152 drivers
Networking                NVIDIA Ethernet driver 4.82; Marvell Yukon 8.39.3.3 drivers; VIA Velocity v24 drivers (across the two systems)
Graphics                  XFX GeForce 6800 256MB PCI-E with ForceWare 78.03 drivers / Radeon X1600 XT PCI-E with Catalyst 8.173.1-05921a-026915E drivers
                          GeForce 6800 Ultra 256MB PCI-E with ForceWare 78.03 drivers / Radeon X850 XT PCI-E with Catalyst 8.162.1-050811a-026057E drivers
                          Dual GeForce 6800 Ultra 256MB PCI-E with ForceWare 78.03 drivers / Dual Radeon X850 XT PCI-E with Catalyst 8.162.1-050811a-026057E drivers
                          XFX GeForce 7800 GT 256MB PCI-E with ForceWare 78.03 drivers / Radeon X1800 XL PCI-E with Catalyst 8.173.1-05921a-026915E drivers
                          Dual XFX GeForce 7800 GT 256MB PCI-E with ForceWare 78.03 drivers
                          MSI GeForce 7800 GTX 256MB PCI-E with ForceWare 78.03 drivers / Radeon X1800 XT PCI-E with Catalyst 8.173.1-05921a-026915E drivers
                          Dual MSI GeForce 7800 GTX 256MB PCI-E with ForceWare 78.03 drivers
OS                        Windows XP Professional (32-bit)
OS updates                Service Pack 2

Thanks to OCZ for providing us with memory for our testing. If you’re looking to tweak out your system to the max and maybe overclock it a little, OCZ’s RAM is definitely worth considering.

All of our test systems were powered by OCZ PowerStream 520W power supply units. The PowerStream was one of our Editor’s Choice winners in our last PSU round-up.

Unless otherwise specified, the image quality settings for both ATI and NVIDIA graphics cards were left at the control panel defaults.

The test systems’ Windows desktops were set at 1280×1024 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Pixel-pushing power
We’ll begin with a look at the basic math on pixel-pushing power that helps determine overall performance. These numbers become less important over time as asymmetric designs like the Radeon X1600 XT become more common, but they’re still useful to consider.

                                  Core clock   Pixels/   Peak fill rate   Textures/   Peak fill rate   Memory        Memory bus     Peak memory
                                  (MHz)        clock     (Mpixels/s)      clock       (Mtexels/s)      clock (MHz)   width (bits)   bandwidth (GB/s)
Radeon X1600 XT                   590          4         2360             4           2360             1380          128            22.1
GeForce 6800                      325          12        3900             12          3900             700           256            22.4
GeForce 6600 GT                   500          4         2000             8           4000             1000          128            16.0
Radeon X800                       400          12        4800             12          4800             700           256            22.4
GeForce 6800 GT                   350          16        5600             16          5600             1000          256            32.0
Radeon X800 XL                    400          16        6400             16          6400             980           256            31.4
GeForce 6800 Ultra                425          16        6800             16          6800             1100          256            35.2
GeForce 7800 GT                   400          16        6400             20          8000             1000          256            32.0
Radeon X1800 XL                   500          16        8000             16          8000             1000          256            32.0
Radeon X850 XT                    520          16        8320             16          8320             1120          256            35.8
Radeon X850 XT Platinum Edition   540          16        8640             16          8640             1180          256            37.8
XFX GeForce 7800 GT               450          16        7200             20          9000             1050          256            33.6
Radeon X1800 XT                   625          16        10000            16          10000            1500          256            48.0
GeForce 7800 GTX                  430          16        6880             24          10320            1200          256            38.4
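
The peak figures above fall straight out of the clock speeds and unit counts. Here’s a quick sketch reproducing a few rows; the memory clocks are the effective data rates from the table, and GB/s are decimal gigabytes to match it.

```python
# Reproducing the table's peak numbers: pixel fill = core clock x pixels per
# clock, texel fill = core clock x texture units, bandwidth = effective
# memory clock x bus width.
def peaks(core_mhz, pixels_per_clk, textures_per_clk, mem_mhz, bus_bits):
    mpixels = core_mhz * pixels_per_clk
    mtexels = core_mhz * textures_per_clk
    gbps = mem_mhz * 1e6 * (bus_bits / 8) / 1e9
    return mpixels, mtexels, gbps

cards = {
    "Radeon X1800 XT":  (625, 16, 16, 1500, 256),
    "GeForce 7800 GTX": (430, 16, 24, 1200, 256),
    "Radeon X1600 XT":  (590,  4,  4, 1380, 128),
}
for name, spec in cards.items():
    mp, mt, bw = peaks(*spec)
    print(f"{name}: {mp} Mpixels/s, {mt} Mtexels/s, {bw:.1f} GB/s")
```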

The Radeon X1600 XT has more pixel shaders running at a much higher clock speed than the GeForce 6800 and a similar amount of memory bandwidth, but it has just over half the pixel and texel fill rate of the 6800. As I’ve noted, the X1600 XT’s relative performance should vary quite a bit depending on the application. The X1600 XT’s challenge is compounded by the fact that it’s competing in our tests against an XFX card that’s “overclocked in the box” at 350MHz core and 900MHz memory clock speeds. The clock speeds on consumer products based on NVIDIA chips are often faster than NVIDIA’s reference spec, and that’s the case here.

Meanwhile, the Radeon X1800 XL matches up well against the stock GeForce 7800 GT, with exactly the same texel fill rate and memory bandwidth. Unfortunately for ATI, our 7800 GT test subjects are also “overclocked” cards from XFX that run at slightly higher clock speed, giving the 7800 GT cards a bit of an edge.

The most interesting matchup may be at the high end, where the Radeon X1800 XT squares off against the GeForce 7800 GTX. Here, ATI counters the brute force of GeForce 7800 GTX’s 24 pixel shaders and texture units with higher clock speeds. The Radeon X1800 XT’s 16 pipes run nearly 200MHz faster, yielding a theoretical peak texel fill rate nearly the same as the 7800 GTX. ATI’s top-end card leads by a wide margin in terms of peak memory bandwidth, nearly 10GB/s faster than the 7800 GTX.

So are the cards faithful to these numbers? Let’s see.

Few of the cards approach their theoretical peak limits in the single-textured fill rate test, save for the X1600 XT, whose limits are relatively low. With multiple textures per pixel, the cards get much closer to their peak capabilities. The Radeon X1800 XL magically pushes a few ticks past its theoretical peak throughput, but it still can’t catch the juiced XFX 7800 GT. The X1800 XT matches the GeForce 7800 GTX almost exactly, slightly behind it at lower resolutions, but slightly above it at 1600×1200.

Overall, it’s very close. That sets the stage for our game benchmarks, which are up next.

 

Doom 3
We’ve conducted our testing almost exclusively with 4X antialiasing and a high degree of anisotropic filtering. We generally used in-game controls when possible in order to invoke AA and aniso. In the case of Doom 3, we used the game’s “High Quality” mode in combination with 4X AA.

Our Delta Labs demo is typical of most of this game: running around in the Mars base, shooting baddies. The imaginatively named “trdemo2” takes place in the game’s Hell level, where the environment is a little more varied and shader effects seem to be more abundant.

It’s a disappointing start for the new ATI cards in Doom 3. OpenGL has long been an Achilles’ heel for Radeons, and the problem persists with the X1000 series.

 

Far Cry
Next up is Far Cry, which takes advantage of Shader Model 3.0 to improve performance. The game also has a path for ATI’s Shader Model 2.0b. Our first demo takes place in the jungle with lots of dense vegetation and even denser mercenaries. All of the quality settings in the game’s setup menu were cranked to the max. The second demo is relatively simpler in terms of geometry, but includes lots of “heat shimmer” effects.

The R500-series GPUs show us a little something in this first demo, outrunning the NVIDIA cards decidedly. The Radeon X1800 XT stomps the GeForce 7800 GTX and is nearly as fast as two 7800 GT cards in SLI. The X1800 XL outruns the GeForce 7800 GTX, too. Ow. Meanwhile, the Radeon X1600 XT still looks overmatched against the GeForce 6800.

Things balance out a little in the Volcano level, with an intriguing separation of sorts. The uber-high-end X1800 XT outperforms the GeForce 7800 GTX, but the still-pretty-high-end X1800 XL is matched almost evenly against the 7800 GT. The not-quite-as-high-end X1600 XT, though, can’t quite run with the GeForce 6800.

 

The Chronicles of Riddick: Escape from Butcher Bay
This OpenGL-based game has a Shader Model 3.0-ish mode for NVIDIA cards, but that mode exacts a big performance penalty, so I ran all cards with the SM2.0 path.

Another OpenGL game, another beating for the Radeon X1000 series. The fancy-pants Radeon X1800 XT gets a soul-compressing wedgie from the GeForce 6800 Ultra.

 

Splinter Cell: Chaos Theory
We’re using the 1.04 version of Splinter Cell: Chaos Theory for testing, and that gives us some useful tools for comparison. This new revision of the game includes support for Shader Model 2.0, the DirectX feature set used by Radeon X850 XT cards. The game also includes a Shader Model 3.0 code path that works on the newer NVIDIA and ATI GPUs.

In our first test, we enabled the game’s parallax mapping and soft shadowing effects. In the second, we’ve also turned on high-dynamic-range lighting and tone mapping, for some additional eye candy. Due to limitations in the game engine (and in NVIDIA’s hardware), we can’t use HDR lighting in combination with antialiasing, so the second test was run without edge AA.

The new Radeon GPUs handle themselves well in this Shader Model 3.0 game. Once more, the X1800 XT puts the hurt on the GeForce 7800 GTX, while the X1800 XL is more evenly matched with the 7800 GT. Here, the X1600 XT even puts up a good fight. Notice that when we turn off antialiasing and crank up high dynamic range lighting, the GeForce 7800 cards get relatively stronger—not by much, but it’s enough to push the 7800 GT past the X1800 XL.

 

Battlefield 2
We tested the next few games using FRAPS and playing through a level of the game manually. For these games, we played through five 90-second gaming sessions per config and captured average and low frame rates for each. The average frames per second number is the mean of the average frame rates from all five sessions. We also chose to report the median of the low frame rates from all five sessions, in order to rule out outliers. We found that these methods gave us reasonably consistent results, but they are not quite as solid and repeatable as other benchmarks.
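
For clarity, here’s how those summary numbers are computed from the five sessions; the frame-rate values below are made up for illustration.

```python
from statistics import mean, median

# Summarizing FRAPS runs: the reported average is the mean of the sessions'
# average frame rates, and the reported low is the median of the sessions'
# minimum frame rates, which discards one-off outliers.
sessions = [
    {"avg": 61.2, "low": 34}, {"avg": 58.9, "low": 12},   # 12 is an outlier
    {"avg": 60.4, "low": 33}, {"avg": 62.0, "low": 36},
    {"avg": 59.7, "low": 31},
]
reported_avg = mean(s["avg"] for s in sessions)
reported_low = median(s["low"] for s in sessions)
print(f"average FPS {reported_avg:.1f}, low FPS {reported_low}")
```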

Yet again, the Radeon X1800 XT outruns the GeForce 7800 GTX, but among cards that cost less than $500, the NVIDIA products are somewhat faster.

F.E.A.R. demo
The F.E.A.R. demo looks purty, but that comes at the cost of frame rates. We actually had to drop back to 1024×768 resolution in order to hit playable frame rates, although we did have all of the image quality settings in the game cranked.

The new Radeons deliver consistently higher average frame rates than the green team’s competing cards in this new game, but the median low scores deflate the ATI cards a bit. The minimum frame rates on the NVIDIA cards are very similar, all told.

Guild Wars
I played through an awful lot of battles with Plague Devourers outside of Old Ascalon in order to bring you these scores. The charm does wear off after your 50th battle with a big scorpion-looking thing, slightly diluting the crack-like addictiveness of this online RPG.

Chalk this one up as a win for the GeForce cards. Here, the X1600 XT receives a brutal noogie at the hands of the GeForce 6800. This game looks nice, but you’re not exactly knee-deep in advanced pixel shader effects when playing it.

 

3DMark05
Here’s one test where we didn’t feel the need to crank up 4X antialiasing and a high degree of anisotropic filtering to keep the high-end cards occupied. 3DMark05 also makes ample use of pixel shader effects and fairly complex geometry. All told, this test should allow the Radeon X1600 XT to strut its stuff.

As one might expect, the Radeon X1600 XT turns in a much better performance here than in most of our other tests, giving the slightly long-in-the-tooth GeForce 6800 a reason to contemplate its existence.

It’s a split on the R520-based cards, with the X1800 XT smoking the GeForce 7800 GTX, especially at lower resolutions, and the X1800 XL trailing the GeForce 7800 GT. Now we’ll look at the individual scores from the three games that make up 3DMark05’s overall score.

Relative performance changes a little from one test to the next, but stays reasonably true to the picture painted by 3DMark’s overall score.

 

3DMark05 (continued)
Now we’ll look at the synthetic feature tests from 3DMark05.

The GeForce cards dominate 3DMark05’s relatively simple pixel shader test, but the tables turn in the vertex shader benchmarks, where the new Radeons steal the show. The X1600 XT’s irreverent personality comes out when it uses its five vertex shaders at 590MHz to somehow upstage the X1800 XL.

 

ShaderMark
ShaderMark should allow us to look at pixel shader performance in quite a bit more detail. This program runs a whole host of different shader programs and tracks performance in each.

This one’s a back-and-forth battle between the two high-end cards, and it’s hard to declare a winner. The GeForce 7800 GTX seems to be faster in more of the tests than the X1800 XT, but the new Radeon crunches through some of the slower, more intensive shaders faster.

I should mention that I’ve included scores for the HDR shaders here, but they weren’t really running right on the X1000-series cards. There were slight but obvious visible problems with these shaders on the new Radeons, and I expect that a fix in either the application or ATI’s drivers could affect performance.

Before we move on, let’s take a look at the slow-motion instant replay from ShaderMark’s shadow mapping tests.

Here’s a pixel shader program that uses flow control, and the results are striking. The GeForce cards all suffer a minor performance penalty when flow control is in use, while the Radeon X1000-series cards show a big jump in performance. With flow control active, the ATI cards are also much faster than their NVIDIA counterparts. This is the sort of shader that can benefit from ATI’s finer threading granularity for looping and branching, obviously.

ShaderMark will also allow us to quantitatively analyze image quality, believe it or not. It does so by comparing output from the graphics card to the output of Microsoft’s DirectX 9 reference rasterizer, so this is more of a quantitative analysis of the deviation from Microsoft’s standard than anything else. The number is reported as the mean square error in the card’s output versus the reference image.
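
In other words, the score is a plain per-pixel mean square error against the reference image. Here’s a minimal sketch of that comparison; the synthetic frames stand in for real captures, and equal weighting of the channels is an assumption rather than ShaderMark’s documented method.

```python
import numpy as np

# Mean square error between a card's output and a reference rasterizer's
# output, computed over 8-bit RGB frames. The frames below are synthetic
# stand-ins for captured images.
def mse(image, reference):
    diff = image.astype(np.float64) - reference.astype(np.float64)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, size=(480, 640, 3), dtype=np.uint8)
card = ref.copy()
card[::7, ::7, 0] += 1          # a tiny, sparse 1-LSB deviation in red
print(f"MSE vs. reference: {mse(card, ref):.6f}")   # 0 would be a perfect match
```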

Because of the obvious visual problems noted above, I’ve excluded the HDR shaders from our cumulative average. Otherwise, the ATI and NVIDIA GPUs are equally faithful to the Microsoft reference rasterizer.

 

Adaptive antialiasing
I haven’t had time to do a full test and write-up of the X1000 series’ antialiasing performance, but I have confirmed that the new GPUs stay true to the sample patterns and modes used by the Radeon X800 series. In fact, there’s little change aside from the ability to do antialiasing in combination with high-dynamic-range image formats and adaptive antialiasing.

Adaptive AA is basically a clone of NVIDIA’s transparency AA, introduced in the GeForce 7800. This feature handles difficult cases where alpha transparency is used to create a see-through object like a fence, screen, grill, or grate. Rather than capture all new images, I’ve lifted some of them from my GeForce 7800 GTX review. If anything about transparency AA has changed in NVIDIA’s latest drivers, it won’t be reflected here.

Here’s a completely pathological case where the multiple layers of the chain link fence look like a scattered mess without adaptive/transparency AA. Things get progressively better as you scroll down through the shots of different modes.


Radeon X850 XT PE – 4X multisampled edge AA


Radeon X1800 XT – 4X multisampled edge AA, no adaptive AA


GeForce 7800 GTX – 4X multisampled edge AA, no transparency AA


Radeon X1800 XT – 2X multisampled edge AA, adaptive AA


GeForce 7800 GTX – 4X multisampled edge AA, multisampled transparency AA


Radeon X1800 XT – 4X multisampled edge AA, adaptive AA


GeForce 7800 GTX – 4X multisampled edge AA, supersampled transparency AA


Radeon X1800 XT – 6X multisampled edge AA, adaptive AA


GeForce 7800 GTX – 8xS multisampled/supersampled AA, supersampled transparency AA

I’d say both ATI’s 2X adaptive AA and NVIDIA’s multisampled transparency AA modes are pretty well useless. ATI’s 4X adaptive AA does a good job, though, just as NVIDIA’s 4X supersampled transparency AA does. ATI’s 6X adaptive AA mode looks pretty good—not as nice as NVIDIA’s 8xS with supersampled transparency AA, but 8xS mode has always carried a pretty big performance penalty. Unfortunately, we ran out of time before we could test adaptive AA performance. We’ll have to look at that in a future article.

 

Power consumption
We measured total system power consumption at the wall socket using a watt meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The idle measurements were taken at the Windows desktop, and cards were tested under load running a loop of 3DMark05’s “Firefly Forest” test at 1280×1024 resolution.

Please note that these numbers aren’t as pure as the driven snow. Because we wanted to include CrossFire and SLI, we kept each brand of cards with its respective platform here, the ATI cards with the Radeon Xpress 200 motherboard and the NVIDIA cards with the nForce4 SLI mobo. Differences in power consumption between these motherboards will influence the overall result.

Even with the caveats, I think we can draw some provisional conclusions. Systems based on the new Radeons uniformly draw more power than the NVIDIA competition, which isn’t entirely a shock given the higher clock rates for the ATI GPUs. The system based on the Radeon X1800 XL, which has the lowest clock speed of the three, has power requirements under load similar to the GeForce 7800 GT-based rig. The X1800 XT system, with its stratospheric GPU clock speed, requires 25W more system power than the nForce4/GeForce 7800 GTX system.

These new Radeons also seem to draw quite a bit more power at idle than the GeForce cards—or the Radeon X850 XT, which is on the same motherboard as the Radeon X1000 cards. The X1800 XT, in particular, is pulling an awful lot of juice. The R500 GPUs do clock gating to reduce idle power consumption, and they also ramp down clock speeds a small amount when the GPU is idle. Still, I wish ATI had used more extensive dynamic clock speed adjustments to help cut idle power use further.

Noise levels
We used an Extech model 407727 digital sound level meter to measure the noise created (primarily) by the cooling fans on our two test systems. The meter’s weightings were set to comply with OSHA standards. I held the meter approximately two inches above the tops of the graphics cards, right between the systems’ two PCI Express graphics slots.

The sound levels the meter picked up track pretty well with my perception of the X1000 series’ coolers. The two X1800 cards are reasonably quiet. The XT’s dual-slot cooler can be loud when it kicks into high gear as the system powers on, but otherwise, it just whispers. I suppose it might crank itself up inside of a warm case, but it was a model citizen on our open test bench. The X1800 XL, though, had an annoying habit of kicking its cooling fan into high gear for no apparent reason when idle on the Windows desktop.

The X1600 XT is another story. This is a loud card all of the time, whether idle or running a game. The fan just runs fast enough to make quite a bit of noise all day long, more than either of the X1800 cards do under load. ATI may need to put a beefier cooler on this one in order to keep fan speeds in check.

 
Conclusions
I should start by saying that we’ve not been able to test many of the R500-series GPU architectures’ new features as extensively as we usually would due to lack of time, including the chips’ Avivo video and display engine and some facets of edge and texture antialiasing. We also left out a minor feature known as the Radeon X1300 Pro. We’ll have to address those things at a later date, when time permits.

That said, we’ve learned quite a bit about ATI’s new GPUs in the preceding pages. We should probably break things down into pieces in order to make sense of it all.

From a pure graphics technology standpoint, the Radeon X1000 series of graphics processors doesn’t break new ground with bold innovations, but it does give ATI nearly every feature that the GeForce 7 series GPUs have had over the Radeon X800s. Not only that, but ATI has added a number of worthwhile capabilities, including multisampled antialiasing with high-dynamic-range color modes, “free” tone mapping via the Avivo display engine, and much finer-grained batch sizes for dynamic flow control in Shader Model 3.0. ATI has also caught up with NVIDIA on the internal chip architecture front by decoupling the computational units responsible for the various stages of the graphics pipeline from one another, allowing more flexibility for the development of adventurous variations on the core GPU architecture, like the RV530.

If the R520 architecture and derivatives have a weakness, it may be performance in current applications given the number of transistors. The R520 apparently requires more transistors for what’s basically a 16-pipe design than NVIDIA’s G70 does with 24 pixel shaders and texture units. ATI has made up the deficit with higher GPU and memory clock speeds, but that tradeoff leads to higher power consumption than G70, even though ATI’s GPU is manufactured on a smaller fab process. Then again, the Radeon X1800 XT outperforms the GeForce 7800 GTX in many cases. That performance gap may grow if future applications begin making extensive use of shaders with flow control, where ATI’s architecture is more efficient. However, I certainly wouldn’t expect that to happen overnight, and it may not happen during the lifespan of G70, R520, and their offspring.

These considerations are less of a concern in the middle of the graphics market than they are at the very high end, where ATI has resorted to extreme clock speeds to counter NVIDIA’s widest GPU architecture. Our measured power consumption under load on the Radeon X1800 XL system was within three watts of the GeForce 7800 GT-based system. That’s hardly a reason to panic. Smart use of asymmetric GPU design, as in the RV530 (if that proves to be smart), may help ATI achieve higher performance with fewer transistors and smaller power envelopes in the future, as well.

ATI faced some daunting challenges with this new GPU generation, and it has largely succeeded in meeting them. If it delivers the products on schedule, ATI could lay claim to bragging rights for having been first to deliver a top-to-bottom range of next-gen GPUs on 90nm process technology. As for the cards as products, well, let’s look at them one by one.

At the very high end of the market, the Radeon X1800 XT is indeed a worthy competitor for the GeForce 7800 GTX. In Direct3D games, the X1800 XT is usually faster than the 7800 GTX. Unfortunately, ATI’s weak showing in OpenGL games keeps the X1800 XT from capturing the undisputed heavyweight title. I also have a few concerns about the likely extent of the Radeon X1800 XT’s availability in the market, given its relatively high GPU clock speeds. We will have to wait and see about that.

There are also a number of GeForce 7800 GTX cards on the market now with higher clock speeds than the MSI cards we tested. I chose the MSI cards simply because I wanted a matched pair of cards for use in SLI, and I didn’t have a pair of “overclocked in the box” cards on hand. NVIDIA’s board partners have been pretty aggressive about ramping up the G70’s clock speeds and standing behind those cards with lifetime warranties, and those cards might give the Radeon X1800 XT more of a run for its money in Direct3D apps—at the expense of higher power consumption, of course.

We may have a rematch between the Radeon X1800 XT 512MB and a new 512MB version of the GeForce 7800 GTX shortly, too, that could produce a clear champ.

The battle between the Radeon X1800 XL and the GeForce 7800 GT is close, but I’d have to give the edge to the 7800 GT. On balance, the 7800 GT is faster, though there are occasions, like in Far Cry and the F.E.A.R. demo, where the X1800 XL outruns even the GeForce 7800 GTX. Assuming the Radeon X1800 XL can soon hit the lower price points that some 7800 GT cards are hitting, it ought to be a very competitive product.

The Radeon X1600 XT confounds me. Admittedly, our test suite was best suited for high-end graphics cards, and the extensive use of antialiasing and anisotropic filtering probably hurt the X1600 XT’s standing in our results. Still, this card lists for $249 and has 256MB of memory onboard, which is territory where I’d expect to be able to use high-quality edge and texture antialiasing in current games. Only in select cases can the Radeon X1600 XT keep pace with NVIDIA’s like-priced offering, the GeForce 6800. I appreciate ATI’s boldness in choosing an asymmetrical GPU design with the RV530; five vertex pipes and 12 pixel shaders at 590MHz aren’t easily discounted. However, in light of the performance we’ve seen and the size of the chip, this feels more like a $179 product to me—more of a true GeForce 6600 GT competitor. The X1600 XT may age well as more shader-laden games take hold, but I wouldn’t cough up $249 for one now when the GeForce 6800 can be had for less. 

Comments closed
    • BoBzeBuilder
    • 10 years ago

    Well, here we are in the spring of 2009, and this once $549.99 card is now utterly obsolete. Isn’t that something?

      • SecretMaster
      • 9 years ago

      Duh

    • Chrispy_
    • 13 years ago

    Well, here we are in July 2006 and the X1800XT is available in 256MB guise for around $250 (£160)

    At this price it’s significantly cheaper than a Geforce 7800GTX and generally outperforms it based on the newest generation of games.

    A close match for the similarly priced 7900GT, the nearest competitor in today’s market.

    • Amun
    • 14 years ago

    Below is a link follow it. It proves that Core speed and mem speed start to mean nothing when you increase res and aa and have more memory capcity, funny a x800xl 512mb thrashes a x850xt … LOL.

    Now correct me if i am wrong but the X1800XT is 512 mb and the current 7800gtx is 256mb… when nvidia releases a 512mb 490 core and 1300 mem nvidia card ( which you can buy as a standard OC) and see what the performance is like. oh and let put aside the fact the ATI isnt out for another month… ATI has won nothing yet.

    http://www.amdzone.com/modules.php?op=modload&name=Sections&file=index&req=viewarticle&artid=182&page=2

    • Delta9
    • 14 years ago

    So basically the X5800XT, I mean X1800XT is a stop gap fuck up. Wake me up when the r580 comes out. If it wasn’t broken it would be an extremely compelling product, instead it’s neck and neck with Nvidia and sucks more juice than a Vegas hooker.

    • unmake
    • 14 years ago

    Oh, Canada.

    ATI doesn’t look very competitive this product cycle, and I doubt they can manage the sort of turn-around they pulled off circa the Radeon 8500. They’ll probably be facing some (frivolous) shareholder suits..

    • Palek
    • 14 years ago

    I just read the x1x00 review on Anandtech, and found their power draw measurements (under load) to be quite different from TR’s. Here’s an abbreviated list of both (difference to 7800GT baseline result in brackets):

    TR


      • Pete
      • 14 years ago

      Xbit used the 512MB version, no? Edit: NM, 512MB for both.

    • Palek
    • 14 years ago

    It’s a little early to bury the x1600. For one thing, Scott pointed out that the x1600 will really be able to stretch its feet when next generation shader-heavy games become the norm. Early critics of the new x1000 series mustn’t forget that initial driver releases for new GPU architectures generally leave a lot of room for performance improvements. Also, the heavily programmable nature of the x1000 chips lends itself well to per-game performance optimizations – and by that I refer to real optimizations that do not degrade quality, not cheats.

    The power consumption of these new chips is a little worrying, but maybe this can also be improved upon with future drivers that more aggressively throttle back the frequency or turn off certain sections when idle. TSMC (or whoever is manufacturing the chips) could also tweak their 90nm process to reduce leakage.

    All in all, surely this wasn’t the perfect execution of a product launch many were expecting to counter nVidia’s recent success, but then hardly anyone ever gets it 100% right. The 7800 was an exception not the norm. You need only to look at the 5800 dustbuster fiasco or the handicapped video acceleration in the 6800 to realize this.

      • deathBOB
      • 14 years ago

      I agree. Ati seems pretty good at wringing extra performance out of its cards with improved drivers. While I am not blown away by performance I am not dissapointed. Even my little 6600GT has seen some big improvements with newer driver releases.

      Looking at the specs of the X1600 Ati has a nice little platform to update and optimize.

      On the other hand I can get a nice X800 card for a similar price. At this time the X1600 is a crappy deal, but I think it will improve in the future.

      Also: To TR and any other review site, please think of us color blind folks. Considering most of your readers are men, I would imagine more than a few have trouble like I do. Yellow and Green are not good colors to use when comparing things, please use colors that are more different. Its really annoying to have to sit and stare at a benchmark to figure out what is going on.

        • Usacomp2k3
        • 14 years ago


          • Vrock
          • 14 years ago

          Ah, nevermind.

      • Koly
      • 14 years ago

      Yes it’s a little early to bury the X1600. IF Ati found some MAJOR power optimizations, it would be a great value card for $100. As a healthy successor to X600 and X700 series. This is the prices range where 6600GT could end too after NVIDIA introduced it’s next mainstream generation. The X1300 should be dropped altogether or even better, integrated into a chipset.

      No amount of future shader heavy games will save X1600 as midrange option. They might rely more on pixel shaders, but they certainly won’t use much less texturing and memory bandwidth than today’s games. The 4 texturing units and 128bit bus of X1600 (and X1300) is so 2003. Ati should quickly move its X1800 or some other R520 derivative into the mainstream and produce a 24 or 32 pipe high end chip. There is room on the 90nm process. But again, that requires some drastic cuts of power consumption.

      • indeego
      • 14 years ago

      3Dmark don’t lie <.< Like, ever.

    • tehmaster
    • 14 years ago

    Nvidia should vapor release a “Nvidia x1801 ultravapor” series card, basically they could take a couple of their engineering samples paint them red, double the memory, throw a doublewide cooler on it and clock the hell out of it and send it to some review sites. They could then claim back the throne claiming that they will be shipping shortly. It may even work as a business tactic(especially the name).

    • link626
    • 14 years ago

    ati is teh suck.

    they cheat by bumping up the core frequencies way high to 650+mhz. i can imagine these cards burning up shortly after the 1 year warranty is up

      • Fighterpilot
      • 14 years ago

      hehehe… another pouting NVidiot who refuses to take “yes” for an answer.

      ATI “cheated ” by making their memory and clockspeeds faster….lol

      Just suck it up dude….ATI X1800XT is the new King…..read it and weep.

        • Lucky Jack Aubrey
        • 14 years ago

        Right now, X1800XT is the King of Vaporware. Let’s hope that ATi gets that situation straightened out and makes the X1800XT available to consumers on 11/5, as promised.

        • Krogoth
        • 14 years ago

        X1800XT is vaporware right now, and it gives rougly the same performance as a factory OC’ed 7800GTX. It looks like X1800XT is close to it’s celling, unless there’s a reviewer brave enough to push it. I don’t think Damage would want to push something that’s techincally vaporware at this point.

        This cycle actually has more in common with the previous generation. To me it seems the X1800XT=X800XT PE at launch. I remember the massive hype and flamewars associated with it. There were never too many units launched into the market. The units that made it were often overinflated in cost.

          • MadCatz
          • 14 years ago

          Um…you cannot possibly compare the x1800XT to an overclocked 7800GTX. That is like saying back in the old days that my Ti4200 overclocked up to Ti4600 speeds. Yeah it did, but don’t forget that they x1800XT can overclock as well :). In this case an overclocked x1800XT would beat an overclocked 7800GTX. Of course this probably will never matter since the card will never exist.

            • Beomagi
            • 14 years ago

            ok then, how about comparing it to a stock asus 7800GTX atop?

    • spworley
    • 14 years ago

    The high power and heat of the X1000 series are surprising, I was expecting lower than before due to 90nm.

    TR: I’d love to see that useful and interesting power usage chart include the same system with an old old low-wattage card just for an idle-only baseline. As it stands, we can’t tell what fraction of that power is due to the video card. Slap an old Matrox Millenium or something in there (which shouldn’t take more than a watt or two?) and we’ll know a good baseline to subtract.

    • Lazier_Said
    • 14 years ago

    Mostly disappointing.

    The X1600 series is utterly unattractive against previous-generation cards, whether that's a 6600GT or an X800GT.

    The X1800XL is certainly an improvement over the previous X850, but that $449 SRP sure doesn’t look attractive with 7800GTs all over the place at $350.

    The X1800XT is impressive, but it sure looks like a paper launch. If these are actually available at SRP in a month, that’s a more appealing card than a $475 7800GTX.

    • Koly
    • 14 years ago

    This is a complete and utter disappointment. The X1800XT is the only one that is competitive from a performance point of view; the rest is simply sh.t. I don't get it. What is ATI thinking? They introduce a new generation of graphics cards with the same (X1800XL) or much worse (X1600XT, X1300 Pro) performance than their last one, at higher prices and with absolutely terrible heat output.

    It's like:

    X1300/Pro: did you buy a 9600 Pro or XT two or three years ago? Buy it again for the same price today!

    X1600XT: anybody who bought this one for $250 would have to be completely brainwashed. An X800 or X800GTO with its REAL 12 pipes coupled to real 12 texturing units and a real 256-bit memory bus would simply mop the floor with this crap. For almost $100 less. Instead of a 12×1 card you can buy a 12×0.33 one. The only thing I regret is that Scott didn't include an X800 GT/GTO/Pro/XL in the review. The historic moment when a hardware company introduces a new-generation product that is slower, more expensive, and even unavailable could have been exposed beautifully.

    X1800XL: its performance is for all practical purposes equivalent to an X850XT, which sells for only a little more than $300. MSRP for an X1800XL is $450. Nice try.

    And I can't get over the power consumption. I have an X800 Pro, and the amazing thing is that it outputs less heat than the 9800 Pro I had before. With the AC Silencer it runs passively cooled when idle. Now look at the X1600XT. It consumes more power at idle than an X850XT, while an X800 Pro will run circles around it when actually in use. ATI really did f..k this one up. I mean this whole generation.

      • PLASTIC SURGEON
      • 14 years ago

      Give it time. NVIDIA f***ed up the FX series also. It took some time for the NV30 architecture to mature. The jump to 0.13 micron paid off for them a few years later. Expect the same from ATI….

        • Koly
        • 14 years ago

        Well, the FX series at least outperformed the GeForce4s and MXes.

          • tu2thepoo
          • 14 years ago

          The GeForce FX 5600s and 5200s were slower than the GeForce4 Ti 4400/4200s in most cases, if I remember right. I think the 5800 Ultra had a hard time beating the Ti4600 when it was first released, too. I don't think the GeForce FXs got considerably faster than the GeForce4s until the second-generation NV3x chips (5950/5700, etc.) were released.

          Not that I think you’re wrong – just pointing out…uh… stuff.

    • Sargent Duck
    • 14 years ago

    Maybe just don't have the internet plugged in?

    Or if that doesn't work, there are some *cough cough hack* copies of Half-Life 2 that don't need Steam.

      • Chryx
      • 14 years ago

      whoa, that’s damned impressive.


    • BearThing
    • 14 years ago

    Hmmmm….

    As far as I can tell, ATI didn’t even give Nvidia a reason to introduce the 7xxx series mid-range cards that must be in the works. The price points on the new ATI cards are crazy. Donning my tinfoil hat, I can only speculate that they are trying to limit demand because yields are the suck.

    Of course, ATI says that R580 is on schedule and will be here in the not too distant future. (And why should we have any reason to doubt that?) Assuming R580 does arrive in early ’06, it would push all these cards down on the pricing chain, and everything would make more sense.

    For everyone who simply must have an ATI card RIGHT NOW, here is the most compelling reason to buy one that I can see:

    http://www.newegg.com/Product/Product.asp?Item=N82E16814102595

    For all intents and purposes, you get an X850XT PE for $200. Buy an X800GTO2 today and enjoy fine performance on current games 'til next spring. By then the real R5xx cards might actually be available at reasonable price points. And they might even have shipped CrossFire!

      • Proesterchen
      • 14 years ago

      Look at the RV530 to see how far a threefold increase in shader power alone will carry the R580, or not.

    • nagashi
    • 14 years ago

    With twice the RAM on the X1800XT vs. the 7800GTX, as well as it running 300MHz faster, doesn't it seem like ATI is using up any potential pricing advantage from its smaller core on memory costs? I can't imagine that RAM is cheap…

    • FireGryphon
    • 14 years ago

    The X1000 series (it feels weird to say that) is competitive. That's what counts.

      • PRIME1
      • 14 years ago

      Price and availability should count for something.

        • FireGryphon
        • 14 years ago

        Yes, but if they can at least keep up technically, there’s still competition in the graphics card market. Note that what my comment really said was, “This is a late, less-than-stellar showing from ATI. At least they did it.”

    • blitzy
    • 14 years ago

    well, it’s better than I was expecting but not enough to take ATI out of second place

    • Convert
    • 14 years ago

    No contest when it comes to the X1800XT. That thing kicks arse. What is the deal with the rest of the line, though? Sure, they would be competitive if their prices were reasonable. I would rather have the 7800GT/GTX for the price of an X1800XL. Same goes for the X1600XT.

    • nagashi
    • 14 years ago

    You might note that at a couple of points the X1800XT is able to beat SLI rigs. If it were a clean sweep, I might agree with you, but since these new cards are occasionally able to beat SLI rigs, I'd say it's definitely useful for people to see the comparison visually.
    EDIT: This was supposed to be a reply to #26.
    Many apologies.

    • PerfectCr
    • 14 years ago

    We waited this long, for *this*? I am all for competition, but really, my overclocked 6600GT is running all the games I play at decent frame rates. Why spend $500 to see uber-high frame rates?

    Sure, if I had the money I'd spend it, but really. ATI is really behind here and needs to come out with something revolutionary, not evolutionary. Will there ever be a Radeon 9700 Pro-like product again?

      • daniel4
      • 14 years ago

      The scheduler inside the card is pretty revolutionary, since it's the first of its kind for a PC GPU and is a huge reason why the 16-pipe R520 can be so much faster than the 24-pipe 7800GTX. Imagine if ATI were able to put out a 32-pipe version, with that scheduler improving efficiency to the point that there could very well be an actual doubling in performance from what we're seeing now.
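      To make the latency-hiding idea behind that scheduler concrete, here is a toy Python model. It is not ATI's actual ultra-threading logic; the FETCH_LATENCY and ALU_WORK cycle counts are invented, and it only illustrates why keeping many pixel batches in flight keeps the shader ALUs busy while other batches wait on texture fetches.

      # Toy model of latency hiding via thread scheduling. This is NOT ATI's
      # actual scheduler; it only shows the general idea that many batches in
      # flight let the ALUs keep working while some batches wait on texture fetches.

      FETCH_LATENCY = 20   # cycles a batch waits on a texture fetch (made-up number)
      ALU_WORK      = 4    # cycles of shader math between fetches (made-up number)

      def alu_utilization(num_batches, total_cycles=10_000):
          ready_at = [0] * num_batches   # cycle at which each batch can run again
          busy = cycle = 0
          while cycle < total_cycles:
              runnable = [i for i, t in enumerate(ready_at) if t <= cycle]
              if runnable:
                  b = runnable[0]                      # pick any ready batch
                  busy += ALU_WORK                     # do some shader math
                  cycle += ALU_WORK
                  ready_at[b] = cycle + FETCH_LATENCY  # batch now stalls on a fetch
              else:
                  cycle += 1                           # nothing ready: ALUs sit idle
          return busy / cycle

      for n in (1, 2, 4, 8, 16):
          print(f"{n:2d} batches in flight -> ALU busy ~{alu_utilization(n):.0%}")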

        • Krogoth
        • 14 years ago

        FYI, in case you have not read the complete article: the 7800GTX only has 16 pipes for pixels, while having 24 pipes for textures. Also, the X1800XTs operate at a much higher clock speed and have more memory bandwidth.

          • Koly
          • 14 years ago

          FYI, it’s completely the other way round, 7800GTX has 24 pixel pipelines and 16 ROPs. Go and read the article.

            • Krogoth
            • 14 years ago

            I stand corrected. 🙂

            • Krogoth
            • 14 years ago

            Damn double post; can someone delete this? 🙁

        • PerfectCr
        • 14 years ago

        Then how do you explain the absolutely dismal OpenGL performance?

          • Flying Fox
          • 14 years ago

          Extremely poor OpenGL drivers dating back to god knows when (rumour has it that they are trying to rewrite the whole thing from the ground up).

          • PLASTIC SURGEON
          • 14 years ago

          You can bet your collective ass that if the vast majority of games being released were of the OpenGL variety, ATI would indeed focus more on OpenGL support. At this point, D3D runs the majority of current and upcoming titles, so for them it's not a major issue as long as they compete… whatever that means in the grand scheme of things.

    • indeego
    • 14 years ago

    I like how the PriceGrabber links for me listed prices only for NVIDIA's cards. 'Nuff said.

    • maxxcool
    • 14 years ago

    *yawn* Good solid review, ho-hum product. Maybe the next-gen 520s will be a bit more interesting.

    • Usacomp2k3
    • 14 years ago

    Good read, as always.
    I have to disagree with a lot of the comments here: the X1800XT is a beast of a card, simply blowing the 7800GTX away in most of the DirectX games. That's impressive, if I do say so myself.
    However, the vast suckage on the part of ATI in the realm of OpenGL (Quake 4, anyone?) will make it less widely desired in the high end, IMHO. Too bad there's not a GLmark05 😉

      • PLASTIC SURGEON
      • 14 years ago

      ATI has always taken the safe approach in regards to D3D and OpenGL support, the reasoning being that more games are released using D3D than OpenGL. And the release dates and details of upcoming PC titles suggest the same will hold for most of 2005-2006.

    • cappa84
    • 14 years ago

    “We may have a rematch between the Radeon X1800 XT 512MB and a new 512MB version of the GeForce 7800 GTX shortly, too, that could produce a clear champ.”

    Has a 512MB 7800GTX been mentioned anywhere by NVIDIA?

    If so, more info please 😛

    • Proesterchen
    • 14 years ago

    The only mildly interesting product I see is the X1800 XL, and even that only if it comes way down in price, gets over the broken-chip feel it has to it, and escapes the availability issues that will certainly abound.

    Don't get me wrong, some of the things ATI implemented this round are nice, especially the HDR+AA and HQ AF options, but I don't see them helping much; performance is too much of a mixed picture.

    • Illissius
    • 14 years ago

    Hmm. Is it the extra memory and bandwidth making that much of a difference for the X1800XT? I don't see much else that could be causing it — the 7800GT/X1800XL and 7800GTX/X1800XT are both pretty evenly matched in terms of pixel power.
    EDIT: According to the AnandTech review, the X1800XL can only keep 128 threads in flight like the lower-end cards, and only the XT gets 512. If true, this could certainly be another factor…

    Also, as previously mentioned, I would be most highly unsurprised if NVIDIA were to release and ship a 7800 Ultra right around the time the X1800XT actually starts becoming available, with the 80-series drivers to accompany it. The question is whether it'll still be 110nm (rather more likely, in my estimation) or whether they'll make the jump to 90nm as well…

    The biggest disappointment has to be the X1600XT. Back when its specs were leaked, I was honestly expecting NVIDIA to get whopped — 12 pipes at 600MHz looked like something to go toe-to-toe with the 6800 Ultra, and at $200-300, nearly double the performance of the 6800. Instead, even in 3DMark05 it only somewhat outperforms it, and gets soundly beaten elsewhere. Meh.

      • daniel4
      • 14 years ago

      I highly doubt that is the real midrange card we were looking for. It's basically just an X1300 Pro with higher-clocked memory. That just doesn't seem right to me.

    • Logan[TeamX]
    • 14 years ago

    Wow, I’m admittedly quite disappointed with the performance of the R5XX chips. I hope this is just a driver issue and can be resolved soon enough.

    I mean, my X800XL still holds up quite nicely against everything else there, especially when overclocked, for a fraction of the price. Good to know.

    • Hattig
    • 14 years ago

    The only cards that were interesting to me were the cheap X1300-based cards, sadly missing from the review. The X1600 series looks like a good future mid-range card for modern games as well.

    Oh, and thanks for the review. One suggestion for the charts with lots of datasets: make all the NVIDIA cards a shade of green and all the ATI cards a shade of orange/red? At the moment they are very hard to read.

    Looks like they are good at DirectX-based games, but OpenGL is letting ATI down again.

    If ATI sorts out the yield issues, then the small die sizes will give them an advantage.

    Once ATI is using the new R520 cores, I expect we will start seeing some overclocked-by-default cards being released. Might be worth holding off a few months to see what happens, and to see what NVIDIA's 7x00 midrange is like.

      • Illissius
      • 14 years ago

      Seconded.

        • Corrado
        • 14 years ago

        And thirded

          • bhtooefr
          • 14 years ago

          And fourthed.

            • odizzido
            • 14 years ago

            That gets my vote too. It's a good idea.

            • danny e.
            • 14 years ago

            i have to agree with that also

            • Dissonance
            • 14 years ago

            There are only so many shades of each color at our disposal 😉

            • Usacomp2k3
            • 14 years ago

            Being color-blind, a red-green combo isn't going to be the best for me. I can come close enough to picking the reviewed cards out of the yellow that I'd just as soon stick with that.

      • BobbinThreadbare
      • 14 years ago

      Am I the only one who likes the yellow bars, with the reviewed subjects emphasized?

    • froydnj
    • 14 years ago

    Hey Scott, I appreciate all the work you do to churn out these reviews.

    However, when you're testing new cards like this, would you mind leaving the SLI configurations out of the main graphs, or breaking them out separately? They make the charts harder to read.

      • sbarash
      • 14 years ago

      Totally agree. No good reason to include SLI stats…

      -Stephen

        • pureevilmatt
        • 14 years ago

        The more cards, the better.

        I can see what you're saying about it skewing the graph and making it a little harder to read, but more is almost always better, comparatively speaking. Case in point: in some benches the X1800 XT beats some SLI setups. I think that's fairly important.

          • froydnj
          • 14 years ago

          That information should certainly be made available; it is important information. (Save $500!) But it probably happens rarely enough that it would not be a big deal if it were detailed in a second, different graph.

        • wierdo
        • 14 years ago

        I can see a reason, but perhaps in a separate grouping or even on separate pages.

      • wierdo
      • 14 years ago

      fully agree

      • crichards
      • 14 years ago

      I agree – I find the graphs difficult to read with the SLI stuff included.

      • Convert
      • 14 years ago

      Granted, it takes an extra second at most to sort it out. I want SLI results there, though, and CrossFire whenever it is released. Basically it's just another card on the list, so you shouldn't be having any problems.

      Next you guys will want ATI in their own box and NVIDIA in another.

    • SpiffSpoo
    • 14 years ago

    Too bad there are no X1300s, but their performance should be less than an X1600's, so who cares =)

    The X1600s were a disappointment; they could have made those much better and upped the price on them. That would have been a better idea.

    The X1800XL seems good, but it should either be faster for its price or cost less. I say $400 to start instead of $450 with its performance right now. The X1800XT is a beast; sure, its core speed is high, but its RAM speed is even more insane.

    An OC section would have been nice, but the limited time with the cards prevented that, at least I think. I bet the X1800XL might be worth the money if it OCs nicely. And CrossFire with the X1800s should be better than it was with the X800s. CrossFire with the X800s doesn't seem like a bad idea now either, since the X1600s aren't up to the performance of the X800s.

    Although I am not going to buy one of these cards because I have AGP, if I were buying one I would be setting my eyes on an X1800XT over the 7800GTX, maybe in CrossFire when it comes out. And blah to the X1600s.

    • quarantined
    • 14 years ago

    With only 16 pipes and marginal improvements overall, I can’t say I really care how badly they might paper launch this round. So no paper cuts for me on this one.

    • mdalli
    • 14 years ago

    I will not be buying an ATI card in the foreseeable future for one big reason — buggy and overly-complicated-to-download drivers.

    NVIDIA has one simple driver file for its cards, and the version number is easy to check. I bought an ATI card two years ago, and the "latest" drivers were filled with bugs. No thanks.

    • Lord.Blue
    • 14 years ago

    Overall a good review. Hopefully this "release" will be in time to save ATI, but then again, they already have a steady income stream that will start pouring in even more come November (Xbox 360).

      • PRIME1
      • 14 years ago

      ATI probably already got paid for the Xbox. Microsoft got smart and is making the chip itself; ATI was more of a consultant.

        • kilkennycat
        • 14 years ago

        ATI will be receiving per-unit royalties on the Xbox 360 GPU. However, all is not roses — guess who M$$ has to (er) "consult" if there are functionality or non-process-related yield problems? Also, I wonder if the R5xx "design bug" is in the first batches of the Xbox 360 GPU…? There are already warnings not to obstruct the air flow of the Xbox 360. When the air vents get filled up with lint, we should see a nice stream of Xbox 360 returns. Hopefully the owners will have taken out extended warranties when they bought the units.

    • tehmaster
    • 14 years ago

    How come there's no overclocking??? None of the sites overclocked as far as I can tell. An NDA restriction??? I am really interested in the XL's abilities in this area.

      • Krogoth
      • 14 years ago

      Overclocking is a complete wildcard and can't be used in an objective review. This isn't the e-penis Olympics, and it's likely that the X1800XT is close to its clock ceiling.

      • lethal
      • 14 years ago

        Supposedly the X1800XLs are the "bugged" cores made before the final core revision taped out, so the XL cores should already be very close to their limit. That also explains why there aren't any XTs available now, since the revised core has just started production… so all the "new" cards are actually based on a tape-out made months ago.

        • tehmaster
        • 14 years ago

        These "bugged" cores in the XL line are supposed to be available today; if ATI is releasing them, I think they really slipped up…

    • PLASTIC SURGEON
    • 14 years ago

    I think we all would have liked to see the 24-pipeline version of the R520… MAYBE IN 6 MONTHS? 😉

    • l33t-g4m3r
    • 14 years ago

    When's the AIW coming out?

    • Krogoth
    • 14 years ago

    ATI is a little too late to save itself in the high-end market. The X1800XT at stock might be faster than the 7800GTX by 20%, but some of the recent factory-OC'ed 7800GTXs will have roughly the same performance. It looks like the leaked benches from earlier weren't too far off, back when the X1800XT had its old clock speed.

    The other thing screwing ATI up is not having card availability for another month after the product announcement. CrossFire cards and capable boards are still MIA. It just gives NVIDIA the perfect time to release a 7800U part (480+/1400/512MB?).

    The X1600XT does pretty well for a $199 card, despite having a disadvantage in fill rate against the 6800NU. It gives the 6600GT a run for its money.

    • matnath1
    • 14 years ago

    Scott:

    Where are the HL2 benchies??? What about AGP?

    I see no compelling reason to buy this generation whatsoever.

    I’m sticking with my X800XT (bought it from tiger direct for $225).


      • Krogoth
      • 14 years ago

      AGP is dying in the high-end market; the folks at both ATI and NVIDIA have already made that clear. Unless you have an NF3 chipset, it's probably better to simply upgrade your system if you want to take advantage of the new performance cards.

        • matnath1
        • 14 years ago

        I guess we all know that the big two want to sell more graphics cards and mobos, but ATI could have cornered the HIGH-end AGP market by releasing the R520 with the Rialto bridge chip…

      • lemonhead
      • 14 years ago

      I agree; I'd imagine it would be fair to say there are a heck of a lot more folks playing HL2/CS: Source than D3. Props for the BF2, though.

        • Dissonance
        • 14 years ago

        The problem with using Half-Life 2 is that Steam likes to auto-update in ways you can’t opt out of. If an update happens halfway through testing, you’ve gotta retest everything with the latest version. It’s rather annoying, to say the least.

          • PerfectCr
          • 14 years ago

          Actually, that's not true; there is an option in Steam to stop automatic updates.

            • Damage
            • 14 years ago

            …that doesn’t work.

            • tfp
            • 14 years ago

            Could you unplug the machine from the network while running the benchmark? Would HL2 let you do that?

            I have no idea; I don't have the game.

            • matnath1
            • 14 years ago

            You can play the game in offline mode.

            • matnath1
            • 14 years ago

            I simply meant that you (Scott) show HL2 in your test bed of benchies (along with Riddick, BF2, the FEAR demo, etc.), but that the numbers never materialized in your review. I'm just interested in seeing how much faster the X1800 architecture runs this game vs. the X800 series, in order to see how much more efficient the new architecture really is. For example, the X1800 XL is similarly specced/clocked to the X800 XT: a 500MHz core and 500MHz memory with 16 pipes. Does this buy us a few more frames per second or not? (This game was optimized for ATI by Valve and is a good counterweight to Doom 3, which was optimized for NVIDIA.) An OVERALL combined benchmark of all of these games would also be an interesting thing to analyze, since it would average OpenGL against D3D and let us see who the BIG BIG DOG REALLY IS!!!
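            For what it's worth, one common way to build that kind of combined score is to normalize each game's frame rate to a reference card and take the geometric mean, so no single title dominates. The sketch below assumes that approach; the game list and every frame-rate figure in it are hypothetical placeholders, not results from this review.

            from math import prod   # requires Python 3.8+

            def overall_index(card_fps, reference_fps):
                # Ratio of the candidate card to the reference in each game,
                # combined with a geometric mean so no one title dominates.
                ratios = [card_fps[game] / reference_fps[game] for game in reference_fps]
                return prod(ratios) ** (1.0 / len(ratios))

            # Hypothetical numbers purely for illustration:
            reference = {"Half-Life 2": 90.0, "Doom 3": 80.0, "Battlefield 2": 70.0}
            candidate = {"Half-Life 2": 105.0, "Doom 3": 64.0, "Battlefield 2": 75.0}

            print(f"Overall index vs. reference: {overall_index(candidate, reference):.2f}")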

        • BearThing
        • 14 years ago

        “I agree, i’d imagine it would be fair to say they’re a heck of lot more folks playing HL2/CS source then D3.”

        Well, for the next couple weeks anyway. After that, who knows:

        https://techreport.com/onearticle.x/8857

      • Lazier_Said
      • 14 years ago

      Re: HL2 benches. I don't think HL2 is a very important benchmark. It already runs so fast on the last generation of cards that further improvements are of only academic interest.

      Re: AGP. If they'll sell you a modern AGP card, you won't have an incentive to go buy a new PCI-E platform. Just coincidentally, ATI and NVIDIA are both in the PCI-E chipset business. That stinks if you already have a 3GHz+ AGP platform, but that's the way it is.

      Re: The appeal of this generation. Until cutting-edge games like FEAR come out, I don't see it either. Current titles already run like butter on a 6800U-level card, so why pay for something faster?

      • xxldirty
      • 14 years ago

      Totally. This was about the first time I skipped a TR review and went to other sites, and it was because of no HL2 benchies.

    • PRIME1
    • 14 years ago

    Another paper launch. (I checked Newegg.)

    The numbers were disappointing.

    I doubt their high-end card will even compete against a 7800GTX 512 with the 8x.xx drivers, or even against the standard cards once the new drivers are released.

      • Krogoth
      • 14 years ago

      I'm willing to bet that within the coming weeks NVIDIA will silently launch the 7800U. It's basically a dual-slot-cooler version of the 7800GTX with 512MB of faster GDDR3 memory. The dual-slot cooler allows it to achieve higher clocks.

    • daniel4
    • 14 years ago

    Is that really an X1600XT? I mean really, it's nice that a card with that low a fill rate even compares to a 6800, but I would have thought that ATI was going to be a little more aggressive with its top mainstream competitor.

    Also, there seems to be a huge difference between the XL and the XT. Could the memory speed advantage really be that big?

    • MorgZ
    • 14 years ago

    Great review, Scott.

    It seems like ATI has struggled to get these cards out and, as you say, I'd be surprised if there are many of the top-of-the-range cards available to purchase anytime soon.

    It will be interesting to see how NVIDIA responds. I suspect they will release their new, higher-clocked card pretty soon, and given the availability of the 7800GTX, and the fact that I suspect they've been holding back some good-clocking chips since that product became available, they should have a fair number of those cards to sell.

      • madlemming
      • 14 years ago

      NVIDIA released 'official' betas of its new version 80 drivers this morning. I don't imagine they will change any of the outcomes, but they will narrow the margins where NVIDIA is losing.

      Nice review. ATI wins at DirectX and NVIDIA wins at OpenGL; the more things change…

        • mongoosesRawesome
        • 14 years ago

        I'd like to see a comparison with the new 80-series driver once it is out of beta.

    • Vrock
    • 14 years ago

    Nothing spectacular here. I see overall parity between the 7800/X1800XL and the 7800GTX/X1800XT.

    I'm going to check Pricewatch now.

    *edit: Nothing found on Pricewatch in a quick search. The ATI store is also devoid of any X1000-series cards.

    • Spotpuff
    • 14 years ago

    Too little too late.

    Guess I’m stuck w/ my X800XL for a while 🙁

    Too bad it's having issues at 1920×1200 resolutions, too.

    • lethal
    • 14 years ago

    Typo on page 3, in the table… the last GPU reads "6800GTX (G70)".

    • rgreen83
    • 14 years ago

    Can we say “finally”?

      • mongoosesRawesome
      • 14 years ago

      Not until we can actually buy them.
