AMD’s Radeon HD 7970 graphics processor

Oh, man. Just a few days before Christmas, AMD uncorked a massive jug of holiday cheer in the form of the Radeon HD 7970 graphics card. Sloshing around inside? The world’s first GPU produced on a 28-nm manufacturing process. This incredibly fine new production process has allowed AMD to cram more transistors—and thus more graphics horsepower of virtually every sort—into this puppy than any graphics chip to come before. While many kids were looking forward to the latest Xbox 360 game under the tree on Christmas morning, the Radeon HD 7970 delivers nearly fifteen times the texel filtering speed of Microsoft’s venerable game console, to name one key graphics rate. I don’t want to dwell on it, but this new Radeon is nearly an order of magnitude more powerful than an Xbox 360 in nearly every respect that matters.

Ok, so I kinda do want to dwell on it, but we need to move on, just like the former ATI has done since creating the Xbox 360’s GPU.

This new Radeon’s true competitors, of course, are the other PC graphics processors on the market, and it has nearly all of them beaten on paper. The chip behind the action is known as “Tahiti,” part of AMD’s “Southern Islands” lineup of next-gen GPUs. As a brand-new design, Tahiti is, of course, infused with all of the latest features—and a few new marketing buzzwords, too. The highlights alone are breathtaking: 2048 shader ALUs, a 384-bit memory interface, PCI Express 3.0, support for DirectX 11.1, and a hardware video encoding engine. Tahiti features the “Graphics Core Next” (note to Rory Read: time to stop letting engineers name these things) shader architecture that promises more efficient scheduling and thus higher delivered throughput, especially for non-graphics applications.


A vague functional block diagram of the Tahiti GPU. Source: AMD.

If the prior paragraph wasn’t sufficient to impress you, perhaps the block diagram above will do the trick. One of the themes of modern GPUs is massive parallelism, and nowhere is that parallelism more massive than in Tahiti. Honestly, the collection of Chiclets above leaves much to be desired as a functional representation of a GPU, especially the magic cloudy bits that represent the shader cores. Still, the basic outlines of the thing are obvious, if you’ve looked over such diagrams in the past. Across the bottom are six memory controllers, each with a pair of 32-bit memory channels. Running up and down the center are the shader or compute units, of which there are 32. Flanking the CUs are eight ROP partitions, each with four color and 16 Z/stencil ROP units. The purple bits represent cache and buffers of various types, which are a substantial presence in Tahiti’s floorplan.

We will get into these things in more detail shortly, but first, let’s take a quick look at how Tahiti stacks up, in a general sense, versus the DirectX 11 GPUs presently on the market.

The Tahiti GPU between my thumb and forefinger

| | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader ALUs | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fabrication process node |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GF114 | 32 | 64/64 | 384 | 2 | 256 | 1950 | 360 | 40 nm |
| GF110 | 48 | 64/64 | 512 | 4 | 384 | 3000 | 520 | 40 nm |
| Cypress | 32 | 80/40 | 1600 | 1 | 256 | 2150 | 334 | 40 nm |
| Barts | 32 | 56/28 | 1120 | 1 | 256 | 1700 | 255 | 40 nm |
| Cayman | 32 | 96/48 | 1536 | 2 | 256 | 2640 | 389 | 40 nm |
| Tahiti | 32 | 128/64 | 2048 | 2 | 384 | 4310 | 365 | 28 nm |

The most immediate comparison we’ll want to make is between Tahiti and the chip it succeeds, the Cayman GPU that powers the Radeon HD 6900 series. At 4.3 billion, its transistor count doesn’t quite double Cayman’s, but Tahiti is easily the most complex GPU ever. Tahiti improves a bunch of key graphics resources by at least a third over Cayman, including texture filtering capacity, memory interface width, and number of shader ALUs. Even so, Tahiti is a smaller chip than Cayman, and it carries on AMD’s recent practice of building “mid-sized” chips to serve the upper portions of the market. As you can see, Nvidia’s GF110 still dwarfs Tahiti, although Tahiti crams in more transistors courtesy of TSMC’s 28-nm fabrication process.

Of course, Tahiti is just the first of a series of GPUs, and it will contribute its DNA to at least two smaller chips still in the works. One, code-named “Pitcairn,” will supplant Barts and drive the Radeon HD 7800 series of graphics cards in a more affordable (think $250 or less) portion of the market. Below that, another chip, known as “Cape Verde,” will at last relieve the Juniper GPU of its duties, which have included both the Radeon HD 5700 series and the re-branded 6700 series. Although we believe both of these new chips are imminent, we don’t yet know exactly when AMD plans to introduce them. Probably before they arrive, AMD will unleash at least one additional card based on Tahiti, the more affordable Radeon HD 7950.

There is one other code name in this collection of Southern Islands. At its press event for the 7970, AMD simply showed the outline of a pair of islands along with the words, “Coming soon.” The rest isn’t too hard to parse out, since the contours of the islands pictured match those of New Zealand—which also happens to be the name of the rumored upcoming dual-GPU video card based on a pair of Tahiti chips. New Zealand will probably end up being called the Radeon HD 7990 and serving the very high end of the market by being really, truly, obnoxiously, almost disturbingly powerful. We’re curious to see whether New Zealand will be as difficult to find in stock at Newegg as Antilles, also known as the Radeon HD 6990. Maybe, you know, the larger land mass will help folks locate it more consistently.

Absent any additional code names, we’re left to speculate that AMD may rely on older chips to serve the lower reaches of the market. The recent introduction of the mobile Radeon HD 7000M series, based entirely on Cypress derivatives, suggests that’s the plan, at least for a while.

The one card: Radeon HD 7970

We’ve discussed Tahiti’s improvements in key graphics specs versus Cayman, but AMD has another bit of good news in store, too. Onboard the Radeon HD 7970, Tahiti will flip bits at a pretty good clip: 925MHz. That’s up slightly from the highest default clocks for products based on Cypress (the 5870 at 850MHz) and Cayman (the 6970 at 880MHz). The 7970 has the same 5500 MT/s memory speed as its predecessor, so it will rely on 50% more memory channels to provide additional bandwidth.
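If you want to check the math on that, the peak bandwidth figures fall out of the bus width and transfer rate directly. Here is a quick back-of-the-envelope calculation in Python using the numbers quoted above:

```python
# Back-of-the-envelope memory bandwidth check, using the specs quoted above.
# GB here means 10^9 bytes.

def memory_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak DRAM bandwidth in GB/s: bus width in bytes times transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mts * 1e6 / 1e9

hd6970 = memory_bandwidth_gbs(256, 5500)   # Cayman: 256-bit interface at 5500 MT/s
hd7970 = memory_bandwidth_gbs(384, 5500)   # Tahiti: 384-bit interface at 5500 MT/s

print(f"Radeon HD 6970: {hd6970:.0f} GB/s")      # ~176 GB/s
print(f"Radeon HD 7970: {hd7970:.0f} GB/s")      # ~264 GB/s
print(f"Increase: {hd7970 / hd6970 - 1:.0%}")    # 50%, from the two extra memory controllers
```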

The 7970’s combination of clock speeds and per-clock throughput gives it the highest theoretical memory bandwidth, texture filtering rate, and shader arithmetic rate of any single-GPU video card. Thus, AMD has taken direct aim at Nvidia’s single-chip flagship, the GeForce GTX 580, by pricing the Radeon HD 7970 at $549. That price will get you a card with 3GB of GDDR5 memory onboard, enough to drive multiple displays at high resolutions, and it undercuts the 3GB versions of the GTX 580, which are selling for just under $600 at Newegg right now. AMD says it is shipping cards into the channel now, and the plan of record is for formal availability to start on January 9th. We wouldn’t be surprised to see cards for sale before the official date, though, if they make it to the right retailers.


Left to right: Radeon HD 5870, 6970, and 7970

At 10.75″, the 7970 matches the length of its two predecessors almost exactly. However, the deletion of one DVI port has opened up additional real estate on the expansion plate cover for venting. This change, along with the use of a somewhat larger blower pushing air across the card’s vapor chamber-based cooler, should improve cooling efficiency and allow for more air movement at lower fan speeds—and thus lower noise levels.

The downsides of this config are all related to the removal of that DVI port. What remains are two mini-DisplayPort outputs, an HDMI port, and one dual-link DVI output. To offset the loss of the second DVI port, AMD is asking board makers to pack two adapters in the box with every 7970: one HDMI-to-DVI cable and one active mini-DP-to-DVI converter. That config should suffice for folks wanting to run a three-way Eyefinity setup on 1080p displays or the like, but I believe neither of those adapters supports dual-link DVI, so folks hoping to drive multiple 30″ monitors via DVI may have to seek another solution.

Incidentally, like the 6970 before it, the 7970 should in theory be able to drive as many as six displays concurrently when its DisplayPort outputs are multiplied via a hub. Unfortunately, the world is still waiting for DisplayPort hub solutions to arrive. AMD tells us it is working with “multiple partners” on enabling such hubs, and it expects some products to arrive next summer.

Although the 7970’s clock speeds are fairly high to start, AMD claims there’s still quite a bit of headroom left in the cards and in their power delivery hardware. The GPUs have the potential to go over 1GHz, with “a good chunk” capable of reaching 1.1GHz or better. The memory chips, too, may be able to reach 6500 MT/s. In addition to giving end users some healthy overclocking headroom, that sort of flexibility could allow AMD’s board partners to build some seriously hopped-up variants of the 7970.

A revised graphics architecture

The biggest change in Tahiti and the rest of the Southern Islands lineup is undoubtedly the shader core, the computational heart of the GPU, where AMD has implemented a fairly major reorganization of the way threads are scheduled and instructions are executed. AMD first revealed partial details of this “Graphics Core Next” architecture at its Fusion Developer Summit last summer, so some information about Tahiti’s shader architecture has been out there for a while. Now that the first products are arriving, we’ve been able to fill in most of the rest of the details.

As we’ve noted, Tahiti doesn’t look like too much of a departure from its Cayman predecessor at a macro level, as in the overall architecture diagram on page one. However, the true difference is in the CU, or compute unit, that is the new fundamental building block of AMD’s graphics machine. These blocks were called SIMD units in prior architectures, but this generation introduces a very different, more scalar scheme for scheduling threads, so the “SIMD” name has been scrapped. That’s probably for the best, because terms like SIMD get thrown around constantly in GPU discussions in ways that often confuse rather than enlighten.

In AMD’s prior architectures, the SIMDs are arrays of 16 execution units, and each of those units is relatively complex, with either four (in Cayman) or five (in Cypress and derivatives) arithmetic logic units, or ALUs, grouped together. These execution units are superscalar—each of the ALUs can accept a different instruction and operate on different data in one clock cycle. Superscalar execution can improve throughput, but it relies on the compiler to manage a problem it creates: none of the instructions being dispatched in a cycle can rely on the output of one of the other instructions in the same group. If the compiler finds dependencies of this type, it may have to leave one or more of the ALUs idle in order to preserve the proper program order and obtain the correct results.

The superscalar nature of AMD’s execution units has been both a blessing and a curse over time. On the plus side, it has allowed AMD to cram a massive amount of ALUs and FLOPS into a relatively small die area, since it’s economical in terms of things like chip area dedicated to control logic. The downside is, as we’ve noted, that those execution units cannot always reach full utilization, because the compiler must schedule around dependencies.
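To make that scheduling problem concrete, here is a toy sketch in Python (my own illustration, not AMD's actual compiler logic) of packing operations into VLIW bundles. Independent work fills the slots nicely; a dependent chain leaves most of them empty:

```python
# Toy VLIW bundling illustration: pack independent operations into bundles of up
# to 4 slots (Cayman-style). An op cannot share a bundle with anything it depends
# on, so dependency chains leave execution slots idle.

def pack_vliw(ops, deps, width=4):
    """ops: op names in program order; deps: dict mapping an op to the ops it needs."""
    bundles, done = [], set()
    while len(done) < len(ops):
        # An op is eligible only if everything it depends on finished in an earlier bundle.
        bundle = [op for op in ops
                  if op not in done and deps.get(op, set()) <= done][:width]
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

# Independent work, like the four components of a pixel: one full bundle.
print(pack_vliw(["r", "g", "b", "a"], {}))            # [['r', 'g', 'b', 'a']]

# A dependent chain a -> b -> c -> d: one op per bundle, three of four slots wasted.
chain = {"b": {"a"}, "c": {"b"}, "d": {"c"}}
print(pack_vliw(["a", "b", "c", "d"], chain))         # [['a'], ['b'], ['c'], ['d']]
```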

Folks in the know at AMD, including Graphics CTO Eric Demers, have consistently argued that these superscalar execution units have not been a problem for graphics simply because the machine maps well to graphics applications. For instance, DirectX-compliant GPUs typically process pixels in two-by-two blocks known as quads. Each pixel is treated as a thread, and 64-thread groups known as “wavefronts” (Nvidia’s rough equivalent is the 32-thread “warp”) are processed together. In an architecture like Cypress, a wavefront could be dispatched to a SIMD array, and each of the 16 execution units would handle a single thread or pixel per clock, working through the wavefront over four cycles. As I understand it, then, the four components of a pixel can be handled in parallel across the superscalar ALUs: red, green, blue, and alpha—and, in the case of Cypress, a special function like a transcendental in that fifth slot, too. In each clock cycle, then, a SIMD array can process every component of 16 of those pixels at once, and over four cycles it works through the whole wavefront with very full utilization of the available ALU resources.

The problems come when moving beyond the realm of traditional graphics workloads, either with GPU computing or simply when attempting to process data that has only a single component, like a depth buffer. Then, the need to avoid dependencies can limit the utilization of those superscalar ALUs, making them much less efficient. This dynamic is one reason Radeon GPUs have had very high theoretical FLOPS peaks but have sometimes had much lower delivered performance.

Logical block diagram of a Tahiti CU. Source: AMD.

In a sense, Tahiti’s compute unit is the same basic “width” as the SIMDs in Cayman and Cypress, capable of processing the equivalent of one wavefront per clock cycle. Beneath the covers, though, many things have changed. The most basic execution units are actually wider than before, 16-wide vector units (also called SIMD-16 in the diagram above), of which there are four. Each CU also has a single scalar unit to assist, along with its own scheduler. The trick here is that those vec16 execution units are scheduled very much like the 16-wide execution units in Nvidia’s GPUs since the G80—in scalar fashion, with each ALU in the unit representing its own “lane.” With graphics workloads, for instance, a pixel’s components would be handled sequentially in its lane, with the red component processed by one instruction, the green by the next, and so on. In the adjacent lanes of the same vec16 execution unit, the other pixels in that wavefront would be processed at the same time, in the same one-component-per-instruction fashion. At the end of four clocks, each vec16 unit will have executed an instruction across all 64 threads of a wavefront. Since the CU has four of those execution units, it is capable of processing four wavefronts in four clock cycles—as we noted, the equivalent of one wavefront per cycle. Like Cayman, Tahiti can process double-precision floating-point datatypes for compute applications at one quarter the usual rate, which is, ahem, 947 GFLOPS in this case, just shy of a teraflop.
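Those arithmetic rates are easy to sanity-check from the ALU count and clock speed quoted earlier. A quick calculation:

```python
# Sanity check of Tahiti's peak shader arithmetic rates aboard the Radeon HD 7970.
alus          = 2048     # shader ALUs: 32 CUs x 4 vec16 units x 16 lanes
clock_ghz     = 0.925    # 925MHz core clock
flops_per_alu = 2        # a fused multiply-add counts as two FLOPS

sp_gflops = alus * flops_per_alu * clock_ghz   # single precision
dp_gflops = sp_gflops / 4                      # double precision at one quarter rate

print(f"Peak SP: {sp_gflops / 1000:.2f} TFLOPS")   # ~3.79 TFLOPS
print(f"Peak DP: {dp_gflops:.0f} GFLOPS")          # ~947 GFLOPS, as noted above
```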

For graphics, the throughput of the new CU may be similar to that of Cypress or Cayman. However, the scalar, lane-based thread scheduling scheme simplifies many things. The compiler no longer has to detect and avoid dependencies, since each thread is executed in an entirely sequential fashion. Register port conflicts are reduced, and GPU performance in non-traditional workloads should be more stable and predictable, reaching closer to those peak FLOPS throughput numbers more consistently. If this list of advantages sounds familiar to you, well, it is the same set of things Nvidia has been saying about its scheduling methods for quite some time. Now that AMD has switched to a similar scheme, the same advantages apply to Tahiti.

That’s not to say the Tahiti architecture isn’t distinctive and, in some ways, superior to Nvidia’s Fermi. One unique feature of the Tahiti CU is its single scalar execution unit. Nvidia’s shader multiprocessors each include special function hardware, and one may be tempted to draw parallels. However, AMD’s David Nalasco tells us Tahiti handles special functions like transcendentals in the vec16 units, at a very nice rate of four ops per clock cycle. The scalar unit is a separate, fully programmable ALU. In case you’re wondering, it’s integer-only, which is why it doesn’t contribute to Tahiti’s theoretical peak FLOPS count. Still, Nalasco says this unit can do useful things for graphics, like calculating a dot product and forwarding the results for use across multiple threads. This unit also assists with flow control and handles address generation for pointers, as part of Tahiti’s support of C++-style data structures for general-purpose computing.


An overview of the Tahiti cache hierarchy. Source: AMD.

Another place where Tahiti stands out is its rich complement of local storage. The chip has tons of SRAM throughout, in the form of registers (260KB per CU), hardware caches, software-managed caches or “data shares,” and buffers. Each of these structures has its own point of access, which adds up to formidable amounts of total bandwidth across the chip. Also, Tahiti adds a hardware-managed, multi-level read/write cache hierarchy for the first time. There’s a 32KB L1 instruction cache and a 16KB scalar data cache shared across each group of four CUs and backed by the L2 caches. Each CU also has its own L1 texture/data cache, which is fully read/write. Meanwhile, the CU retains the 64KB local data share from prior AMD architectures.

Nvidia has maintained a similar split between hardware- and software-managed caches in its Fermi architecture by allowing the partitioning of local storage into 16KB/48KB of texture cache and shared memory, or vice-versa. Nalasco points out, however, that the separate structures in Tahiti can be accessed independently, with full bandwidth to each.

Tahiti has six L2 cache partitions of 128KB, each associated with one of its dual-channel memory controllers, for a total of 768KB of L2 cache, all read/write. That’s the same amount of L2 cache as in Nvidia’s Fermi, although obviously Tahiti’s last-level caches service substantially more ALUs. The addition of robust caching should be a big help for non-graphics applications, and AMD clearly has its eye on that ball. In fact, for the first time, an AMD GPU has gained full ECC protection—not just of external DRAMs like in Cayman, but also of internal storage. All of Tahiti’s SRAMs are single-error correct, double-error detect protected, which means future FirePro products based on this architecture should be vying in earnest for deployment in supercomputing clusters and the like against Nvidia’s Tesla products. Nvidia has a big lead in the software and tools departments with CUDA, but going forward, AMD has the assistance of both Microsoft, via its C++ AMP initiative, and the OpenCL development ecosystem.

How this architecture stacks up

Understanding the basics of an architecture like this one is good, but in order to truly grok the essence of a modern GPU, one must develop a sense of the scale involved when the basic units are replicated many times across the chip. With Tahiti, those numbers can be staggering. Tahiti has 33% more compute units than Cayman has SIMDs (32 versus 24), with a third more peak FLOPS and a third higher texture sampling and filtering capacity, clock for clock.

If you’d like to look at it another way, just four of Tahiti’s CUs would add up to the same pixel-shading capacity as an entire R600, the chip behind the Radeon HD 2900 XT (though Tahiti has much more robust datatype support and a host of related enhancements).

Today’s quad-core Sandy Bridge CPUs can track eight threads via simultaneous multi-threading (SMT), but GPUs use threading in order to keep their execution units busy on a much broader scale. Each of Tahiti’s CUs can track up to 40 wavefronts in flight at once. Across 32 CUs, that adds up to 1280 wavefronts, or 81,920 threads in flight at 64 threads per wavefront. Meanwhile, by Demers’ estimates, Tahiti’s L1 caches have an aggregate bandwidth of about 2 TB/s, while the L2s can transfer nearly 710 GB/s at 925MHz.
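For the curious, those figures hang together with some simple per-clock assumptions. The bytes-per-clock numbers below are my own guesses, chosen because they reproduce Demers' estimates; they are not official AMD specs:

```python
# Rough reconstruction of Tahiti's aggregate on-chip numbers at 925MHz.
# The bytes-per-clock figures are assumptions that reproduce the estimates
# quoted above, not published AMD specifications.
clock_ghz         = 0.925
cus               = 32
wavefronts_per_cu = 40
threads_per_wave  = 64

l1_bytes_per_clk_per_cu   = 64    # assumed L1 delivery per CU per clock
l2_bytes_per_clk_per_part = 128   # assumed per L2 partition per clock (6 partitions)

print("Wavefronts in flight:", cus * wavefronts_per_cu)                     # 1280
print("Threads in flight:   ", cus * wavefronts_per_cu * threads_per_wave)  # 81,920
print(f"L1 aggregate: {cus * l1_bytes_per_clk_per_cu * clock_ghz / 1000:.1f} TB/s")  # ~1.9 TB/s
print(f"L2 aggregate: {6 * l2_bytes_per_clk_per_part * clock_ghz:.0f} GB/s")         # ~710 GB/s
```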

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (Gtexels/s) | Peak bilinear FP16 filtering (Gtexels/s) | Peak shader arithmetic (TFLOPS) | Peak rasterization rate (Mtris/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 280 | 19 | 48 | 24 | 0.6 | 602 | 142 |
| GeForce GTX 480 | 34 | 42 | 21 | 1.3 | 2800 | 177 |
| GeForce GTX 580 | 37 | 49 | 49 | 1.6 | 3088 | 192 |
| Radeon HD 5870 | 27 | 80 | 40 | 2.7 | 850 | 154 |
| Radeon HD 6970 | 28 | 85 | 42 | 2.7 | 1760 | 176 |
| Radeon HD 7970 | 30 | 118 | 59 | 3.8 | 1850 | 264 |

In terms of key graphics rates, the Tahiti-driven Radeon HD 7970 eclipses the Cayman-based Radeon HD 6970 and the Fermi-powered GeForce GTX 580 in nearly every respect. The exceptions are the ROP rates and the triangle rasterization rate.

ROP rates, of course, include the pixel fill rate and, more crucially these days, the amount of blending power for multisampled antialiasing. The 7970 is barely faster than the 6970 on this front because it sports the same basic mix of hardware: eight ROP partitions, each capable of outputting four colored pixels or 16 Z/stencil pixels per clock. Rather than increasing the hardware counts here, AMD decided on a reorganization. In previous designs, two ROP partitions (or render back-ends) were associated with each memory controller, but AMD claims the memory controllers were “oversubscribed” in that setup, leaving the ROPs twiddling their thumbs at times. Tahiti’s ROPs are no longer associated with a specific memory controller. Instead, the chip has a crossbar allowing direct, switched communication between each ROP partition and each memory controller. (The ROPs are not L2 cache clients, incidentally.) With this increased flexibility and the addition of two more memory controllers, AMD claims Tahiti’s ROPs should achieve up to 50% higher utilization and thus efficiency. Higher efficiency is a good thing, but the big question is whether Tahiti’s relatively low maximum ROP rates will be a limiting factor, even if the chip does approach its full potential more frequently. The GeForce GTX 580 still has quite an advantage in max possible throughput over the 7970, 37 to 30 Gpixels/s.
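The fill rate comparison is just ROP count times clock speed. Here is the arithmetic; the GTX 580's 48 ROPs and 772MHz core clock are Nvidia's published figures:

```python
# Peak pixel fill rate: color ROPs x core clock.
def fill_rate_gpixels(rops, clock_mhz):
    return rops * clock_mhz / 1000

print(f"Radeon HD 7970:  {fill_rate_gpixels(32, 925):.1f} Gpixels/s")  # ~29.6, rounded to 30 above
print(f"GeForce GTX 580: {fill_rate_gpixels(48, 772):.1f} Gpixels/s")  # ~37.1, with 48 ROPs at 772MHz
```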

Tahiti’s peak polygon rasterization rates haven’t improved too much on paper, either. It still has dual rasterizers, like Cayman before it. Rather than trying to raise the theoretical max throughput, which is already quite high considering the number of pixels and polygons likely to be onscreen, AMD’s engineers have focused on delivered performance, especially with high degrees of tessellation. Geometry expansion for tessellation can have one input but many outputs, and that can add up to a difficult data flow problem. To address this issue, the parameter caches for Tahiti’s two geometry engines have doubled in size, and those caches can now read from one another, a set of changes Nalasco says amounts to tripling the effective cache size. If those caches become overwhelmed, they can now spill into the chip’s L2 cache, as well. If even that fails, AMD says vaguely that Tahiti is “better” when geometry data must spill into off-chip memory. This collection of tweaks isn’t likely to allow Tahiti to match Fermi’s distributed geometry processing architecture step for step, but we do expect a nice increase over Cayman. That alone should be more than sufficient for everything but a handful of worst-case games that use polygons gratuitously and rather bizarrely.

In addition to everything else, Tahiti has a distinctive new capability called partially resident textures, or in the requisite TLA, PRTs. This feature amounts to hardware acceleration for virtual or streaming textures, a la the “MegaTexture” feature built into id Software’s recent game engines, including the one for Rage. Tahiti supports textures up to 32 terabytes in size, and it will map and filter them. Large textures can be broken down into 64KB tiles and pulled into memory as needed, managed by the hardware.
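To give a sense of the bookkeeping involved, here is a minimal, entirely hypothetical sketch in Python of the tile-residency idea: splitting a huge texture into 64KB tiles and faulting them in as they are touched. It illustrates the concept only; it is not AMD's hardware interface or API.

```python
# Hypothetical sketch of partially resident texture bookkeeping: a very large
# texture is split into 64KB tiles, and only the tiles actually sampled get
# pulled into memory. Illustrative only; not AMD's actual PRT mechanism.

TILE_BYTES      = 64 * 1024
BYTES_PER_TEXEL = 4                                          # e.g. RGBA8
TILE_DIM        = int((TILE_BYTES // BYTES_PER_TEXEL) ** 0.5)  # 128x128 texels per tile

resident_tiles = {}                                          # (tile_x, tile_y) -> tile data

def sample(u, v, tex_width, tex_height):
    """Return the tile holding texel (u, v), 'faulting' it in if it isn't resident."""
    tx, ty = (u % tex_width) // TILE_DIM, (v % tex_height) // TILE_DIM
    if (tx, ty) not in resident_tiles:
        # In hardware, this miss would be serviced by streaming the tile in from storage.
        resident_tiles[(tx, ty)] = bytearray(TILE_BYTES)
        print(f"faulted in tile {(tx, ty)}")
    return resident_tiles[(tx, ty)]

sample(5000, 12000, 65536, 65536)   # touches one 64KB tile of a huge texture
sample(5001, 12001, 65536, 65536)   # same tile, already resident, no fault
```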

AMD’s internal demo team has created a nifty animated demonstration of this feature running on Tahiti. The program implements, in real time, a method previously reserved primarily for offline film rendering. In it, textures are mapped on a per-polygon basis, eliminating the need for an intermediate UV map serving as a two-dimensional facsimile of each 3D object. AMD claims this technique solves one of the long-standing problems with tessellation: the cracking and seams that can appear when textures are mapped onto objects of varying complexity.

The firm is hopeful methods like this one can remove one of the long-standing barriers to the wider use of tessellation in future games. Trouble is, Tahiti’s PRT capability isn’t exposed in any current or near-future version of Microsoft’s DirectX API, so it’s unlikely more than a handful of game developers—those who favor OpenGL—will make use of it anytime soon. We’re also left wondering whether the tessellation hardware in AMD’s prior two generations of DirectX 11 GPUs will ever be used as effectively as we once imagined, since they lack PRT support and are subject to the texture mapping problems Tahiti’s new hardware is intended to solve.

ZeroCore power

GPUs continue to become more like CPUs not just in terms of computational capabilities, but also in the way they manage power consumption and heat production. AMD took a nice step forward with Cayman by introducing a power-limiting feature called PowerTune, which is almost the inverse of the Turbo Core capability built into AMD microprocessors. By measuring chip activity, PowerTune estimates likely power consumption and, if needed in specific cases of very high utilization, reduces the GPU’s clock speed and voltage to keep power in check. The cases where PowerTune steps in are relatively rare and are usually caused by synthetic benchmarks or the like, not typical games. Knowing PowerTune is watching, though, allows AMD to set the default clock speeds and voltages for its GPUs higher than it otherwise could. That’s one reason Tahiti is able to operate at a very healthy 925MHz aboard the Radeon HD 7970.

This new chip takes things a step further by introducing a new GPU state somewhat similar to the “C6” or “deep sleep” feature added to CPUs some years ago. AMD has experience with deploying such tech on the graphics front from the development of its “Llano” CPU-GPU hybrid. Now, with Tahiti, AMD calls the feature ZeroCore, in a sort of play on the whole Turbo Core thing, I suppose. The concept is simple. The Tahiti chip has multiple voltage planes. When the host system sits idle long enough to turn off its display (and invoke power-save mode on the monitor), voltage to the majority of the chip is turned off. Power consumption for the whole video card drops precipitously, down to about three watts, and its cooling fan spins to a halt, no longer needed. A small portion of the GPU remains active, ready to wake up the rest of the chip on demand. AMD says waking a Radeon from its ZeroCore state ought to happen in “milliseconds” and be essentially imperceptible. In our experience, that’s correct. As someone who tends to leave his desktop computer turned on at all times, ready to be accessed via a remote connection or the flick of a mouse, I’m a big fan of this feature.

ZeroCore has even more potential to please users of systems with CrossFire multi-GPU configs. Even during active desktop use where the primary video card is busy, the second (and third, and fourth, if present) video card will drop into ZeroCore mode if not needed. Although we haven’t had a chance to try it yet, we expect this capability will make CrossFire-equipped systems into much better citizens of the average home.

Even when the display isn’t turned off, sitting at a static screen, the 7970 should use less power than the 6970—about 15W versus about 20W, respectively—thanks to several provisions, including putting its DRAM into an idle state. We’ll test all of these power draw improvements shortly, so hang tight.

Finally, true video encode acceleration in a desktop GPU

Desktop graphics chips have had video decoding engines embedded in them for ages, growing in functionality over time, and Tahiti participates in that trend. Its Universal Video Decoder block adds hardware decode acceleration for two standards: the MPEG-4 format (used by DivX and the like) and the MVC extension to H.264 for stereoscopic content. Also, the UVD block has the related ability to decode dual HD video streams simultaneously.

More exciting is a first for discrete desktop GPUs: a hardware video encoder. Since UVD refers explicitly to decoding, AMD has cooked up a new acronym for the encoder, VCE or Video Codec Engine. Like the QuickSync feature of Intel’s Sandy Bridge processors (and probably the SoC driving the smart phone in your pocket), VCE can encode videos using H.264 compression with full, custom hardware acceleration. We’re talking about hardware purpose-built to encode H.264, not just an encoder that does its calculations on the chip’s shader array. As usual, the main advantages of custom logic are higher performance and lower power consumption. Tahiti’s encode logic looks to be quite nice, with the ability to encode 1080p videos at 60 frames per second, at least twice the rate of the most widely used formats. The VCE hardware supports multiple compression and quality levels, and it can multiplex inputs from various sources for the audio and video tracks to be encoded. Interestingly, the video card’s frame buffer can act as an input source, allowing for a hardware-accelerated HD video capture of a gaming session.

AMD plans to enable a hybrid mode for situations where raw encoding speed is of the essence. In this mode, the VCE block will take care of entropy encoding and the GPU’s shader array will handle the other computational work. On a high-end chip like Tahiti, this mode should be even faster than the fixed encoding mode, with the penalty of higher power draw.

Unfortunately, software applications that support Tahiti’s VCE block aren’t available yet, so we haven’t been able to test its performance. We fully expect support to be forthcoming, though. AMD had reps on hand from both ArcSoft and Sony Creative Software at its press event for the 7970, in a show of support. We’ll have to revisit VCE once we can get our hands on software that uses it properly.

…and even more stuff

Tahiti is the first GPU to support PCI Express 3.0, which uses a combination of higher signaling rates and more efficient encoding to achieve essentially twice the throughput of second-generation PCIe. Right now, the only host systems capable of PCIe 3.0 transfer rates are based on Intel’s Sandy Bridge-E processors and the X79 Express chipset. We don’t expect many tangible graphics performance benefits from higher PCIe throughput, since current systems don’t appear to be particularly bandwidth-limited, even in dual eight-lane multi-GPU configs. In his presentation about Tahiti, Demers downplayed the possibility of graphics performance gains from PCIe 3.0, but did suggest there may be benefits for GPU computing applications.
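The "essentially twice the throughput" claim falls out of the published PCIe signaling rates and encoding schemes:

```python
# PCI Express per-direction bandwidth for a x16 slot.
def pcie_x16_gbs(gt_per_s, payload_bits, total_bits):
    # transfers/s x encoding efficiency / 8 bits per byte, times 16 lanes
    return gt_per_s * payload_bits / total_bits / 8 * 16

gen2 = pcie_x16_gbs(5, 8, 10)      # PCIe 2.0: 5 GT/s with 8b/10b encoding
gen3 = pcie_x16_gbs(8, 128, 130)   # PCIe 3.0: 8 GT/s with 128b/130b encoding
print(f"PCIe 2.0 x16: {gen2:.1f} GB/s")   # ~8.0 GB/s
print(f"PCIe 3.0 x16: {gen3:.1f} GB/s")   # ~15.8 GB/s, essentially double
```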

AMD claims Tahiti is capable of supporting the upcoming DirectX 11.1 standard, a fairly minor incremental bump whose feature list is fairly esoteric but includes provisions for native support of stereoscopic 3D rendering. A future beta driver for Windows 8 will add hooks for DX11.1 support, according to AMD.

As if all of that weren’t enough, the Radeon HD 7970 is hitting the market alongside a gaggle of software upgrades to AMD’s Eyefinity multi-display graphics technology. Collectively, these modifications have been labeled Eyefinity 2.0. Some of the changes are available for older Radeons in current drivers, including tweaks to enable several new display layouts and multi-monitor stereoscopic gaming. Upcoming releases in the first couple months of 2012 will do even more, including the display-geek-nirvana unification: Eyefinity multi-displays, HD3D stereoscopy, and CrossFire multi-GPU should all work together starting with the Catalyst 12.1 driver rev. You’ll either have a truly mind-blowing gaming experience or get an unprecedentedly massive headache from such a setup, no doubt.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-980X
Motherboard: Gigabyte EX58-UD5
North bridge: X58 IOH
South bridge: ICH10R
Memory size: 12GB (6 DIMMs)
Memory type: Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1333MHz
Memory timings: 9-9-9-24 2T
Chipset drivers: INF update 9.2.0.1030, Rapid Storage Technology 10.8.0.1003
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.6482 drivers
Graphics: Asus Radeon HD 5870 1GB with Catalyst 8.921 drivers
          Asus Matrix Radeon HD 5870 2GB with Catalyst 8.921 drivers
          Radeon HD 6970 2GB with Catalyst 8.921 drivers
          Radeon HD 7970 3GB with Catalyst 8.921 drivers
          XFX GeForce GTX 280 1GB with ForceWare 290.36 beta drivers
          GeForce GTX 480 1.5GB with ForceWare 290.36 beta drivers
          Zotac GeForce GTX 580 1.5GB with ForceWare 290.36 beta drivers
Hard drive: Corsair F240 240GB SATA
Power supply: PC Power & Cooling Silencer 750 Watt
OS: Windows 7 Ultimate x64 Edition with Service Pack 1, DirectX 11 June 2009 Update
Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its Ultra quality settings with FXAA enabled.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Texture filtering

| | Peak bilinear filtering (Gtexels/s) | Peak bilinear FP16 filtering (Gtexels/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- |
| GeForce GTX 280 | 48 | 24 | 142 |
| GeForce GTX 480 | 42 | 21 | 177 |
| GeForce GTX 580 | 49 | 49 | 192 |
| Radeon HD 5870 | 80 | 40 | 154 |
| Radeon HD 6970 | 85 | 42 | 176 |
| Radeon HD 7970 | 118 | 59 | 264 |

Now that we’ve talked about the architecture ad nauseam, it’s nice to get into some test results. On paper, Tahiti has massive texture filtering throughput compared to any other current GPU, and in this quick synthetic test, it delivers on that promise quite nicely. The only saving grace for the competition is the GF110’s full-rate FP16 filtering, which allows the GeForce GTX 580 to avoid being completely embarrassed.

Tessellation and geometry throughput

| | Peak rasterization rate (Mtris/s) | Memory bandwidth (GB/s) |
| --- | --- | --- |
| GeForce GTX 280 | 602 | 142 |
| GeForce GTX 480 | 2800 | 177 |
| GeForce GTX 580 | 3088 | 192 |
| Radeon HD 5870 | 850 | 154 |
| Radeon HD 6970 | 1760 | 176 |
| Radeon HD 7970 | 1850 | 264 |

Given that Tahiti has substantially more buffering for geometry expansion than its predecessors, we’d expected the 7970 to perform better in this test. Instead, it’s not much faster at the “Extreme” tessellation level—and is actually slower at the lower “Normal” setting. Our sense is that this result may be caused by a software quirk, at least in part. TessMark is written in OpenGL, and AMD’s driver support there doesn’t always get the attention the DirectX drivers do.

We do have another option, which is to try a program that can act as a tessellation benchmark via DirectX 11. Unigine Heaven fits the bill by offering gratuitous amounts of tessellation on its “Extreme” setting. The additional polygons don’t really improve image quality in the demo, which is a shame, but they do push the graphics hardware pretty hard, so this demo will serve our need for a synthetic test of geometry throughput.

Now that’s more like it. The 7970 shows major improvement over the past two generations of Radeon graphics hardware, enough to put it at the front of the pack. Now, I’m not convinced Tahiti is outright faster than the GF110 in tessellation throughput. The Heaven demo includes things other than ridiculous numbers of polygons, including lots of pixel shader effects. My sense is that the tessellation hardware on the top few GPUs is simply fast enough that something else, like pixel shader performance, becomes the primary performance limiter. When that happens, Tahiti’s massive shader array kicks in, and the contest is over. The relevant point is that Tahiti’s geometry throughput is sufficiently improved that it’s not an issue, even with an extremely complex tessellation workload like this one.

Putting those new shaders to work

| | Peak shader arithmetic (TFLOPS) | Memory bandwidth (GB/s) |
| --- | --- | --- |
| GeForce GTX 280 | 0.6 | 142 |
| GeForce GTX 480 | 1.3 | 177 |
| GeForce GTX 580 | 1.6 | 192 |
| Radeon HD 5870 | 2.7 | 154 |
| Radeon HD 6970 | 2.7 | 176 |
| Radeon HD 7970 | 3.8 | 264 |

The first couple of tests above, the cloth and particles simulations, primarily use vertex and geometry shaders to do their work. In those tests, the 7970 easily outperforms the 6970, but it’s not quite as fast as the two Fermi-based GeForces. As we’ve noted, vertex processing remains a strength of Nvidia’s architecture.

Boy, things turn around in a hurry once we move into the last three tests, which rely on pixel shader throughput. True to form, AMD’s older GPUs tend to outrun the GeForces in these tests, since they’re quite efficient with pixel-centric workloads. Even so, Tahiti is substantially faster. In a couple of cases, the 7970 delivers on its potential to crank out over twice the FLOPS of the GeForce GTX 580.

GPU computing performance

These results are instructive. When we move from pixel shaders into DirectCompute performance, the Fermi-based GeForces recapture the lead from the Cypress- and Cayman-based Radeons. The Radeons have much higher theoretical FLOPS peaks, but the GeForces tend to be more efficient here. Tahiti, though, changes the dynamic. The Radeon HD 7970 outruns the GTX 580 and is nearly 50% faster than the Cypress-based Radeon HD 5870.

LuxMark is a ray-traced rendering test that uses OpenCL to harness any compatible processor to do its work. As you can see, we’ve even included the Core i7-980X CPU in our test system as a point of comparison. Obviously, though, the 7970 is the star of this show. The newest Radeon nearly doubles the throughput of its elder siblings—and nearly triples the performance of the Fermi-based GeForces. We’ve only run a couple of GPU computing tests, so our results aren’t the last word on the matter, but Tahiti may be the best GPU computing engine out there. AMD appears to have combined two very desirable traits in this chip’s shader array: much higher utilization (and thus efficiency) than previous DX11-class Radeons, and gobs of FLOPS in the given chip area.

The Elder Scrolls V: Skyrim

Our test run for Skyrim was a lap around the town of Whiterun, starting up high at the castle entrance, descending down the stairs into the main part of town, and then doing a figure-eight around the main drag.

Since these are pretty capable graphics cards, we set the game to its “Ultra” presets, which turn on 8X multisampled antialiasing. We then layered on FXAA post-process anti-aliasing, as well, for the best possible image quality without editing an .ini file.

The plots above show the time required to render the individual frames produced during our 90-second test run. If you’re unfamiliar with our fancy new testing methods, let me direct you to this article, which explains what we’re doing. In a nutshell, our goal is to measure graphics performance in a way that more fully quantifies the quality of the gaming experience—the smoothness of the animation and the ability of the graphics card to avoid momentary pauses or periods of poor performance.

Because Skyrim is a DirectX 9 game, it’s one of the few places where our representative of older GPU generations, the GeForce GTX 280, is able to participate fully. However, as you can see, the GTX 280 is slow enough to have earned its own plot, separate from the other GeForces. Our decision to test at 2560×1600 with 8X AA and 16X aniso has laid low this geezer of a GeForce; its 1GB of RAM isn’t sufficient for this task, which is why it’s churning out frame times as high as 100 ms. We had the same video memory problem with our Radeon HD 5870 1GB card, so we swapped in a 2GB card from Asus to work around it.

You can tell just by looking at the plots that the Radeon HD 7970 performs well here; it produces more frames than anything else, and not a single frame time stretches over the 40 ms mark.

The fact that the 7970 produces the most frames in the plots should be a dead giveaway that it would have the highest average frame rate. The newest Radeon reigns supreme in this most traditional measure of performance.

This number is about frame latencies, so it’s a little different than the FPS average. This result simply says “99% of all frames produced were created in less than x milliseconds.” We’re ruling out the last one percent of outliers in order to get a general sense of frame times, which will determine how smoothly the game plays.

I’ll admit, I had to stare at the frame time plots above for a little while in order to understand why those two GeForces would have a lower 99th percentile frame latency than the Radeon HD 7970, which looks so good. The culprit, I think, is those first 150 or so frames where all of the cards are slowest. That section of the test run comprises more than 1% of the frames for each card, and in it, the GeForces deliver somewhat lower frame latencies.

Now, a difference of two milliseconds is nearly nothing, but those opening moments are the only place where the fastest cards struggle, and the GeForces are ever so slightly quicker there. I do think some focus on the pain points for gaming performance is appropriate. What we seem to be finding over time is that viewing graphics as a latency-sensitive subsystem is a great equalizer. To give you a sense of what this result means, note that a score between 33 and 37 milliseconds translates to momentary frame rates between 27 and 30 FPS. For the vast majority of the time, then, all of these cards are churning out frames quickly enough to maintain relatively smooth motion, especially for an RPG like this one that doesn’t rely on quick-twitch reactions.

Our next goal is to find out about worst-case scenarios—places where the GPU’s performance limitations may be contributing to less-than-fluid animation, occasional stuttering, or worse. For that, we add up all of the time each GPU spends working on really long frame times, those above 50 milliseconds or (put another way) below about 20 FPS. We’ve explained our rationale behind this one in more detail right here, if you’re curious or just confused.
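If you’d like to compute these metrics from your own Fraps logs, both numbers boil down to a few lines of arithmetic. The sketch below shows one reasonable implementation; whether the "badness" tally counts whole long frames or only the portion beyond the 50-ms threshold is a judgment call, and here we count the portion beyond it:

```python
# The latency-focused metrics used in this review, computed from a list of
# per-frame render times in milliseconds (e.g., exported from Fraps).

def percentile_99(frame_times_ms):
    """Frame time that 99% of frames come in under."""
    s = sorted(frame_times_ms)
    return s[int(0.99 * len(s)) - 1]

def time_beyond_ms(frame_times_ms, threshold_ms=50):
    """Total time spent on the portion of long frames beyond the threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16.7] * 985 + [45.0] * 10 + [80.0] * 5     # a made-up 1000-frame run
print(f"Average FPS:       {1000 / (sum(frames) / len(frames)):.0f}")   # ~58 FPS
print(f"99th percentile:   {percentile_99(frames):.0f} ms")             # 45 ms
print(f"Time beyond 50 ms: {time_beyond_ms(frames):.0f} ms")            # 150 ms
```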

In this case, our results are crystal clear. Only the GeForce GTX 280, which doesn’t have enough onboard video RAM to handle the game at these settings, struggles at all with avoiding major slowdowns in Skyrim. We’ve noted in the past that Skyrim performance appears to be more CPU limited than anything else. Don’t worry, though. We’ll be putting these GPUs through the wringer shortly.

Batman: Arkham City

We did a little Batman-style free running through the rooftops of Gotham for this one.

Several factors converged to make us choose these settings. One of our goals in preparing this article was to avoid the crazy scenario we had in our GeForce GTX 560 Ti 448 review, where every card tested could run nearly every game adequately. The Radeon HD 7970 is a pretty pricey bit of hardware, and we wanted to push it to its limits, not watch it tie a bunch of other cards for adequacy. So we cranked up the resolution and image quality and, yes, even enabled DirectX 11. We had previously avoided using DX11 with this game because the initial release had serious performance problems on pretty much any video card. A patch has since eliminated the worst problems, and the game is now playable in DX11, so we enabled it.

This choice made sense for benchmarking ultra high-end graphics cards, I think. I have to say, though, that the increase in image quality with DX11 tessellation, soft shadows, and ambient occlusion isn’t really worth the performance penalty you’ll pay. The image quality differences are hard to see; the performance differences are abundantly obvious. This game looks great and runs very smoothly at 2560×1600 in DX9 mode, even on a $250 graphics card.

As you can see, all of the cards produce some long frame times; the frame time plots are more jagged than in Skyrim. This will make for an interesting comparison. Also, it’s pretty clear the Radeon HD 5870 is overmatched here, even with 2GB of video RAM onboard.

We’ve found that average FPS and 99th percentile frame times don’t always track together, especially when there are wide swings in frame times involved, like we have here. However, in this case, they mirror each other pretty closely. All of the cards seem to have some long frame times in relatively proportional measure. Thus, in both FPS and 99th percentile latency, the Radeon HD 7970 manages to outperform the GeForce GTX 580 by a small margin.

The 7970’s slight edge holds when we turn our attention toward longer-latency frames. The new Radeon is the only card of the bunch to spend less than half a second working on rendering frames beyond 50 ms. The GTX 580 isn’t far behind, though.

Battlefield 3

We tested Battlefield 3 with all of its DX11 goodness cranked up, including the “Ultra” quality settings with both 4X MSAA and the high-quality version of the post-process FXAA. We tested in the “Operation Guillotine” level, for 60 seconds starting at the third checkpoint.

Yes, at these settings, we’re pushing these cards very hard. We very much wanted to avoid a situation where the GPUs weren’t really challenged. I think we succeeded there, although we may have overshot.

Nevertheless, the Radeon HD 7970 comes out of this contest looking very good indeed, with a clear lead over the GeForce GTX 580 in both average FPS and 99th percentile frame times. That’s true even though we didn’t encounter any of the big frame time spikes that we have on other levels of this game with GeForce cards.

In fact, even with the relatively low average frame rates we saw, this stretch of BF3 runs quite well, with pretty even frame times throughout, especially on the three fastest cards. As a result, even the GeForce GTX 480, which averaged 27 FPS, avoids long frame times very effectively—and is thus quite playable.

The 7970 is easily the best solution here, though, both subjectively and in every way we’ve measured performance.

Crysis 2

Our cavalcade of punishing but pretty DirectX 11 games continues with Crysis 2, which we patched with both the DX11 and high-res texture updates.

Notice that we left object image quality at “extreme” rather than “ultra,” in order to avoid the insane over-tessellation of flat surfaces that somehow found its way into the DX11 patch. We tested 90 seconds of gameplay in which we tracked down an alien, killed him, and harvested his DNA. Cruel, yes, but satisfying.

You can tell the Radeon HD 7970 is relatively fast here from the frame time plots. The 7970 generates more frames than any other card and generally has lower frame latencies. However, the 7970’s frame time plot has quite a few spikes in it compared to the GeForces. That results in a dead heat between the 7970 and the GTX 580 in 99th percentile frame times.

Those frame time spikes cause the 7970 to spend more time processing frames beyond 50 ms than the GTX 580 does, as well. However, a total of 51 ms in this category isn’t bad.

Civilization V

We’ll round out our punishment of these GPUs with one more DX11-capable game. Rather than get all freaky with the FRAPS captures and frame times, we simply used the scripted benchmark that comes with Civilization V.

GeForces have long been at the top of the performance charts in this game, and we’ve suspected that the reason was geometry throughput. The terrain is tessellated in this game, and there are zillions of tiny, animated units all over the screen. Even so, the 7970 grabs the top spot with room to spare. That’s solid progress for the Radeon camp.

Power consumption

The first two graphs above give us a look at the 7970’s ZeroCore feature at work. The 7970 system’s total power draw drops by 17W when the display goes into power save, almost entirely courtesy of the 7970’s new low-power state for long idle. (The display’s power consumption is not part of our measurement.)

Overall, the 7970’s power consumption picture is quite nice. Even when idling at the Windows desktop, the newest Radeon shaves off about 5W of system-wide power consumption—more than that compared to the GeForces. When running Skyrim, the 7970 system draws a gobsmacking 80W less than the otherwise-identical GTX 580 rig. It seems the 28-nm process at TSMC is coming along quite nicely, doesn’t it?

Noise levels and GPU temperatures

When ZeroCore kicks in and the 7970’s fan stops spinning, we hit the noise floor for the rest of the components in our test system, mainly the PSU and the CPU cooler. By itself, without its fan spinning, the 7970 is pretty much silent.

I had hoped the bigger blower and larger exhaust venting area would make the 7970 quieter than the competition, especially since its power draw is relatively low overall. The 7970 is fairly quiet during active idle, but its fan ramps up quite a bit when running a game. Judging by the temperatures we measured, it appears AMD has biased its cooling policy toward restraining GPU temperatures rather than noise levels. I’d prefer a somewhat quieter card that runs a little hotter, personally. Still, nothing about the 7970’s acoustic profile is terribly offensive; it’s just not quite as nice as the GTX 580’s, which is a surprise given the gap in power consumption between the two. It’s a shame AMD didn’t capitalize on the chance to win solidly in this category. Perhaps the various Radeon board makers can remedy the situation.

Conclusions

For several generations now, whenever a new Radeon GPU was making its debut, I have bugged AMD Graphics CTO Eric Demers about whatever features were missing compared to the competition. There have always been feature deficits, whether it be graphics-oriented capabilities like coverage sampled antialiasing and faster geometry processing or compute-focused capabilities like better scheduling, caching, and ECC protection. Each time, Demers has answered my questions about what’s missing with quiet confidence, preaching the gospel of making the correct tradeoffs in each successive generation of products without compromising on architectural efficiency or time-to-market.

That confidence has seemed increasingly well founded as the years have progressed, in part because we often seem to be comparing what AMD is doing right now to what Nvidia will presumably be doing later. After all, AMD has been first to market with revamped GPU architectures based on new process tech for quite a few generations in a row. It hasn’t hurt that, since the introduction of the Radeon HD 4800 series, AMD has been at least competitive with Nvidia’s flagship chips in contemporary games, if not outright faster, while building substantially smaller, more efficient GPUs. Meanwhile, the firm has steadily ratcheted up the graphics- and compute-focused features in its new chips, gaining ground on Nvidia seemingly every step of the way.

With Tahiti and the Radeon HD 7970, AMD appears to have reached a very nice destination. In graphics terms, the Radeon HD 6970 and Cayman had very nearly achieved feature parity with the GF110. Tahiti moves AMD a few steps ahead on that front, though the changes aren’t major. The biggest news there may be the improvements to tessellation performance. Tahiti may not have caught up to Nvidia entirely in geometry throughput, but it’s fast enough now that no one is likely to notice the difference in any way that matters.

The more consequential changes in this GPU are primarily compute-related features, including caching, C++ support, ECC protection, and the revamped shader array. AMD has dedicated substantial space on this chip to things like SRAM and ECC support, and Tahiti looks poised to take on Nvidia in the nascent market for GPUs in the data center as a result. Nvidia has one heckuva head start in many ways, but AMD can make its case on several fronts, including comparable feature sets, superior power efficiency, and more delivered FLOPS.

TR Editor’s Choice: Radeon HD 7970, January 2012

Gamers looking for the fastest graphics card in the world can rest assured that, at least for a while, the Radeon HD 7970 is it. Our testing has shown that when you move beyond FPS averages and get picky about consistently smooth action via low frame latencies, the differences between the most expensive cards in the market, including the 7970 and the GeForce GTX 580, tend to shrink. What that means is you’re not always likely to feel the difference between one card and the other while playing a game, even if the average frame rates look to be fairly far apart. One can’t help but wonder, as a little green birdie pointed out to us, whether AMD’s choice to dedicate lots of die space to compute-focused features while, say, staying with eight ROP partitions hasn’t hampered the 7970’s ability to take a larger lead in gaming performance. Still, the 7970 draws substantially less power than the GTX 580, and heck, we had to choose carefully and crank up the image quality in order to make sure our suite of current games would be clearly performance limited on these cards. The Radeon HD 7970 ticks all of the right boxes, too, from PCIe 3.0 to video encoding to DirectX 11.1 and so on. For the time being, it’s clearly the finest single-GPU video card in the world, and as such, it’s earned a TR Editor’s Choice award.

Comments closed
    • focusedthirdeye
    • 8 years ago

    Thanks for posting this article and having this website available on the Internet. – Doug Rochford

    • CampinCarl
    • 8 years ago

    Hey, so, this may have already been answered, but I’m too lazy to dig through all the comments.

    Is there a reason the data for the 560 TI 448 was left out of this review? Not that it’s too difficult to pull up the 448 review and compare it to the 6970 and then the 7970 to the 6970..but still. The test beds are the same…soooooo…? Was it too busy taking an arrow in the knee?

    Edit: Though actually, the 6970 data doesn’t match up because they are on different drivers. And different resolutions for some games. Womp womp woooomp.

    • Bo_Fox
    • 8 years ago

    Wish there were more games covered like in the past.. a couple of games with detailed graphs are good, but hey, lots of games would make it a far more well-rounded review. Just a few months ago, Techreport covered like 8-9 games, right?

    • albundy
    • 8 years ago

    considering that the tests were run on ultra high rez WQXGA monitors that cost 3 times this card, i can only imagine what kind of triple digit fps any normal consumer screens will get with the eye candy maxed out.

    • michael_d
    • 8 years ago

    I just ordered Sapphire HD 7970 from newegg. I will make a thread in Graphics section about its performance. Currently I have a single HD 5870.

    • marvelous
    • 8 years ago

    Why didn’t AMD engineers raise the ROP count?

      • flip-mode
      • 8 years ago

      from the article
      [quote<]Rather than increasing the hardware counts here, AMD decided on a reorganization. In previous designs, two ROP partitions (or render back-ends) were associated with each memory controller, but AMD claims the memory controllers were "oversubscribed" in that setup, leaving the ROPs twiddling their thumbs at times. Tahiti's ROPs are no longer associated with a specific memory controller. Instead, the chip has a crossbar allowing direct, switched communication between each ROP partition and each memory controller. (The ROPs are not L2 cache clients, incidentally.) With this increased flexibility and the addition of two more memory controllers, AMD claims Tahiti's ROPs should achieve up to 50% higher utilization and thus efficiency.[/quote<]

      Anand's discussion of it is also worth a read: [url<]http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/4[/url<]

    • can-a-tuna
    • 8 years ago

    Just wanted to add that Metro 2111 (or whatever the number was) should have been included in the tests because it’s the “heaviest” game around. Also, Deus Ex: Human Revolution would be a good addition to the test suite, being a modern DX11 game.

    • amythompson172
    • 8 years ago

    is this better? [url<]http://www.amazon.com/Diamond-4870X2PE52G-Radeon-4870X2-GDDR5/dp/B001E1BP80?tag=emjay2d-20[/url<]

      • khands
      • 8 years ago

      The 7970 beats a 4870×2 by a landslide. The 5870 was approximately equal to the 4870×2, and this thing beats it by between 50% and 200%, depending on game, settings, and resolution.

    • HisDivineOrder
    • 8 years ago

    “That confidence has seemed increasingly well founded as the years have progressed, in part because we often seem to be comparing what AMD is doing right now to what Nvidia will presumably be doing later. After all, AMD has been first to market with revamped GPU architectures based on new process tech for quite a few generations in a row. It hasn’t hurt that, since the introduction of the Radeon HD 4800 series, AMD has been at least competitive with Nvidia’s flagship chips in contemporary games, if not outright faster, while building substantially smaller, more efficient GPUs. Meanwhile, the firm has steadily ratcheted up the graphics- and compute-focused features in its new chips, gaining ground on Nvidia seemingly every step of the way.”

    True. That said, you seem to have forgotten to consider the now abandoned strategy that AMD had been pushing up until now, the strategy of building smaller, more efficient GPU’s that drive down costs for the high end for all consumers, forcing nVidia to compete. As it is, AMD just looked at the market as it is and slapped $50 onto it, deriving a MSRP for their new high end.

    This is a step in the wrong direction if you’re a fan of the old direction they took with the 48xx series and beyond. In your summary, you also glossed over the fact that if AMD is merely meeting all of nVidia’s standards (besides some AA-based ones for DX11), then when Kepler shows up in 1Q12 it’s going to be behind the curve again. It’s brought very little new to the table besides low idle power and more performance per watt, which the die shrink alone should have achieved (40nm to 28nm).

    Such a low performance improvement suggests nVidia has a very low bar to hit to go from a 580, which is already pretty damn close to the 7970 in practical terms of actual gameplay, to something that walks all over the 7970. Of course, this assumes nVidia hasn’t made some grievous error in designing the follow-up GPU. Imagine if nVidia ALSO takes their strategy from their competitor and “prices accordingly?”

    Slap $100 onto the 7970’s $549 and boom, we’re in the $649 territory. But the performance warrants it… right? When’s too much too much? Where do you draw the line? Are you going to say it in the summary when it’s the $549 video card, the $599 video card or the $649 video card? AMD just took their high end card’s price from $379 to $549.

    That probably deserved a shout out in the end of review summary.

      • wierdo
      • 8 years ago

      [quote<]True. That said, you seem to have forgotten to consider the now abandoned strategy that AMD had been pushing up until now, the strategy of building smaller, more efficient GPU's that drive down costs for the high end for all consumers, forcing nVidia to compete. As it is, AMD just looked at the market as it is and slapped $50 onto it, deriving a MSRP for their new high end.[/quote<]

      How so? The 7970 die size is even a bit smaller than the 6970's. I think the reason it's priced high right now is that there's no competing product from nVidia that would force AMD to be the price/performance alternative, so it's priced as a pure performance product. That may change in six months, but for now AMD is basically saying "you want the best, get the 7970; you want the best deal, get the last-gen cards" while they use their lead time to ramp up supplies for the new year's lineup and sell what they have for a price the Cadillac crowd would easily pay.

      I think the bar for nVidia is low exactly because AMD is still following the smaller-dies strategy. Will nVidia do big dies again? I'm guessing they will, and if so then we'll again be looking at a market similar to last year's, I'd guess.

      • flip-mode
      • 8 years ago

      Answer this question: if AMD (and Nvidia) can sell all they can produce at [$550] [$650] [$toomuchmoney], then why should they price it any lower?

      This is the third time I’ve asked this. The first guy I asked gave a non-answer reply the first time and didn’t bother to reply the second time. Maybe you’ve got a direct answer for that.

        • khands
        • 8 years ago

        ^This is the issue: supply and demand. Right now there’s a fair bit of demand and no supply.

        • yogibbear
        • 8 years ago

        It’s really not that expensive. I am fine with the price. In 3 months it’ll be $500 or a little bit less and I’m okay with that.

          • khands
          • 8 years ago

          And once a competing Kepler product appears we’ll probably see big price drops, unless TSMC can’t improve yields enough to meet demand.

      • Silus
      • 8 years ago

      LOL good one! AMD was congratulated soundly (and deserved it) when their new single-GPU cards were priced as low as we’ve seen with the HD 4000s and HD 5000s, reserving high prices only for the dual-GPU cards. Now they are pricing their flagship single GPUs almost as high as their last generation’s high-end dual-GPU card, and the argument to accept it is that “it’s priced where AMD needs to price it.” Too bad other companies don’t get that courtesy, because they also have yield and/or other problems… which is the ONLY reason AMD is pricing this card this high.

      But don’t worry, if a GTX 680 (or whatever it’s called) is faster than the HD 7970 and costs more than the HD 7970, there will be a huge uproar by the usual suspects, because it’s too expensive 🙂

      • clone
      • 8 years ago

      AMD’s card comes with 3 GB of RAM; up until last night it was $50 less than Nvidia’s 3 GB 580, and now they are tied.

      AMD did not raise prices; they actually forced Nvidia to lower them, and given that the old high end not so long ago was $830 (the 8800 GTX Ultra, 2006), it seems unfair to criticize AMD for offering a better part for less than the competition.

      This situation has been a win all round for the consumer, as AMD forced Nvidia to adjust prices downward and redefined the high end at $50 lower than it was just a month ago.

      • shank15217
      • 8 years ago

      This card has a smaller die area than their last flagship, your entire rant makes no sense. Sorry but if you read the review, AMD set the standard for gfx yet again.

      • ish718
      • 8 years ago

      It would have been more interesting to see AMD allocate more transistors towards gaming performance but it’s absolutely unnecessary atm.

      There are no games the HD7970 has a problem running @ 2560×1600 maxed out.

      Don’t forget that the high demand and low supply of 28nm chips, not to mention higher development costs compared to 40nm, kinda warrant AMD pricing the HD7970 so high.

      • ptsant
      • 8 years ago

      “True. That said, you seem to have forgotten to consider the now abandoned strategy that AMD had been pushing up until now, the strategy of building smaller, more efficient GPU’s that drive down costs for the high end for all consumers, forcing nVidia to compete. As it is, AMD just looked at the market as it is and slapped $50 onto it, deriving a MSRP for their new high end.”

      You oversimplify. The new GPU is more efficient in all metrics that count. It is smaller in terms of die area than its top-end siblings and its nVidia rivals. It is more power efficient, especially at idle, which is where a computer spends most of its lifetime, and in terms of FPS per watt. It offers higher performance *per shader* than the 69xx cards (something like 10-30%, if I am not mistaken). As for Kepler, we’ll have to wait until it appears.

      The price is high but not shocking for the fastest card on the planet, and the ~10% lead over the GeForce 580 doesn’t just give you bragging rights but also meaningful features. It has been like that for some time. If they don’t manage to sell at $550, they’ll quickly lower the price. All you have to do is… not buy.

    • NeelyCam
    • 8 years ago

    Careful with the 7000 series – some are old, rebadged sh*t:

    [url<]http://www.anandtech.com/show/5291/amd-quietly-releases-the-oemonly-radeon-hd-7670-turks-rides-again[/url<]

      • khands
      • 8 years ago

      This has been mentioned a couple of times, but the 7900 series are likely the only cards this gen that will be using GCN, with the 78xx (and possibly 77xx) cards using VLIW4 and the rest using the old VLIW5 standby. I expect the 7800 series cards to look and perform much like today’s 6900 cards at lower power draw, and if the 77xx cards use VLIW4 they’ll probably be downclocked versions of the same (which may make them rather good purchases if you plan to OC).

        • flip-mode
        • 8 years ago

        Not from what I’ve heard. GCN: 79xx, 78xx, 77xx. Rebadge: 76xx and below.

        And I think that’s perfectly fine, really. GCN features and performance aren’t really useful for the entry level crowd. Let the entry level crowd get what they pay for – a basic video card that will be able to provide a minimal gaming capability. No need for the compute features on these entry level systems either – they won’t be using it.

          • khands
          • 8 years ago

          That would be fantastic if it turns out to be true; my rumors are rather old at this point, so it’s possible.

      • Arclight
      • 8 years ago

      [quote<]Update: AMD Quietly Releases The OEM-Only Radeon HD 7000 Series: VLIW5 Rides Again[/quote<] I wouldn't buy OEM anyhow but thanks for the heads up.

    • anotherengineer
    • 8 years ago

    Wow over 350 posts!!

    Nice review as always, however I want more and hope you can extend the benching later on.

    I know this monster was designed for gaming; however, its VCE and HDMI (1.4a 3D, multichannel sound, etc.) performance would also have been nice to see, since the card does have the capabilities. I know I read that you hope to get to try it at some point. Video transcoding and folding also would have been interesting to put into the review, along with some of those signal test patterns.

    “More exciting is a first for discrete desktop GPUs: a hardware video encoder. Since UVD refers explicitly to decoding, AMD has cooked up a new acronym for the encoder, VCE or Video Codec Engine. Like the QuickSync feature of Intel’s Sandy Bridge processors (and probably the SoC driving the smart phone in your pocket), VCE can encode videos using H.264 compression with full, custom hardware acceleration”
    and
    “Unfortunately, software applications that support Tahiti’s VCE block aren’t available yet, so we haven’t been able to test its performance. We fully expect support to be forthcoming, though. AMD had reps on hand from both ArcSoft and Sony Creative Software at its press event for the 7970, in a show of support. We’ll have to revisit VCE once we can get our hands on software that uses it properly.”

    Time to get back to the grindstone Scott 😉

    • clone
    • 8 years ago

    I find it pretty funny that ppl are complaining about the price tag on this puppy when back in 2006 the 8800 GTX Ultra was listing for $830.00.

    Now here we are in 2012, after inflation, and AMD is asking $550 for the best single GPU, and so many are talking about how terrible AMD is.

    P.S. For $549.00 you get 3 GB of RAM, while Nvidia’s 580 costs just shy of $600, so really no one has anything to cry about.

      • Bo_Fox
      • 8 years ago

      The 8800 Ultra was a wholly different beast of its own. This was after the 8800GTX was miles ahead of its competition. It was nearly twice as fast as anything AMD had to offer. Plus the 8800 Ultra came out before the delayed 2900XT finally came out. If the 2900XT were more competitive, NV would of course have lowered the prices.

      Furthermore, the 8800GTX/Ultra had the most staying power of all cards for the past several years (it was the leadership GPU for about 18 months). The only card with greater staying power was the 9700 Pro released in 2002, which remained competitive at the top for 2 years.

      The 7970 just should not ever be compared to the 8800 Ultra of its time, period. 20% better than the competition is nothing like 100%.

    • ptsant
    • 8 years ago

    I think Eyefinity is much more important for game immersion than tessellation, “ultraextreme” settings, and 32x anisotropic filtering. You absolutely need to try Eyefinity gaming… Next buy is 3×24″ glorious monitors and this card or its smaller brother.

      • khands
      • 8 years ago

      As soon as we figure out how to make bezelless screens I’ll be on board.

        • thermistor
        • 8 years ago

        Um, they have a 1×3 wide bezel-less monitor right now. I won’t bother linking, but it is out there.

          • wierdo
          • 8 years ago

          Yeah something like this could work, but it’s pricey:

          [url<]http://www.forumopolis.com/showpost.php?p=2990001&postcount=1297[/url<]

        • wierdo
        • 8 years ago

        Same here, I can’t handle bezels; they ruin it for me. I’m waiting for something like OLEDs with detachable bezels or anything of that sort. It’ll be annoying enough for me to see the lines between the screens even without bezels, but I think I could handle that.

        Alternatively, I might just wait for 4K TVs and hook a giant 50″ one to the PC, but I was hoping for something like an affordable curved wide screen in my future. One can dream.

        • yogibbear
        • 8 years ago

        You can just rip the bezels off current screens. Admittedly, it can be a PITA, and they look ugly afterwards if viewed from behind or the side, but it still works.

        • Farting Bob
        • 8 years ago

        I’ve got a friend who recently hooked up 3×24″ IPS screens for some gaming. I have a single 27″ display, one of the first cheap TN ones they made at that size. I prefer my single screen. The bezels on his displays aren’t that thick, but they do distract from the immersion. When I can get affordable pencil-thin-bezel displays, I might reconsider, I guess. Also, by then GPUs should be even more powerful for driving 3 HD displays.

      • PrincipalSkinner
      • 8 years ago

      I can’t say I’ve tried gaming with more than one monitor, but I absolutely hate seeing edges of polygons on nicely textured and modeled objects. They ruin any illusion of photorealism. Or any kind of realism.

        • ptsant
        • 8 years ago

        I won’t disagree. My vision is maybe not as good as yours, though. Then again, it depends on what games you play: fast paced FPSs do not really let you admire such tiny details. Skyrim, on the other hand…

        What I’m trying to say, is that for a given game/engine technology the visual difference between some settings can be small for a big performance drop. I especially liked the 3×24″ experience and I would be willing to trade-off some of these settings for a glorious 7k-pixel visual space. I never said one should run a 3×24″ space on “low” settings, it would be a pity…

      • JLW777
      • 8 years ago

      I got a question regarding multi-screen setups vs. one single big monitor, as the bezels are preventing quite a few people from going multi-screen. What’s the difference between, e.g., 3 x 24″ (1920×1200) vs. 1 x 40″ (2560×1600)? Is it cost related, or does the 3-monitor setup scale to a higher resolution? (In my mind, 3 x 1920×1200 is still 1920×1200, just on a larger scale, right?)

        • Firestarter
        • 8 years ago

        [quote<](in my mind that 3 x 1920 x 1200 is still 1920 x 1200 but on a larger scale right?)[/quote<] On a larger scale? I don't know what you're thinking, but 3 monitors of 1920 x 1200 each is about 7 megapixels, versus the 4 megapixels of a 2560 x 1600 monitor. On an Eyefinity-capable video card, that's just about the only difference, aside from the obvious aspect ratio and bezel issues. And that said, where's this 40" 2560x1600 monitor you speak of?

        • ptsant
        • 8 years ago

        3×24″ is 5760×1200.

      • Bonzo
      • 8 years ago

      Unless you have just one monitor.

        • The Jedi
        • 8 years ago

        …or a projector. Just crank up the anti-aliasing.

    • WaltC
    • 8 years ago

    I want two of them! (But I will likely buy just one, eventually.) Actually, what I want to see out of AMD is one of these bad boys pared down to ~$150 specs while [i<]maintaining a 256-bit bus![/i<] Paring the high-end down to $150 specs with a 128-bit bus is what everyone expects ATi to do. It would be so nice to see them do the unexpected. Demand for such a 256-bit bused product would be so strong that I imagine AMD would have difficulty keeping up with demand.

      • Krogoth
      • 8 years ago

      A 256-bit memory bus is too expensive for that market segment.

        • WaltC
        • 8 years ago

        I know that it is… but I’d still like to have seen it, anyway… ;) What’s different about this gen of ATi product is that now there are three bus choices, when before there had always been only two–256 or 128 (I don’t think anyone is working with 64 bits anymore). In an interview (part II) over on Rage 3D, Eric Demers makes the point that he’s not that much in favor of doing a 128-bit product at all–but he didn’t know how the “marketing guys” would come down on the issue. It wouldn’t surprise me at all to see this ~$150 product @ 128 bits. It was just a thought that low-end 256 bits would be nice–someday we will certainly get there, though.

        What I have trouble with, though, are the people who find the initial asking price of $549 to be somehow so alien to the 3D-card MSRP scene that it is baffling, and a very negative development. First, the $549 is just the *starting point*; second, nVidia’s had no trouble maintaining a $500-$600 MSRP range on a lesser product, the GTX 580, for over a year now. Knowing that, it doesn’t seem much of a stretch to understand that demand for the product @ $549 will come in where AMD wants it. Failing that, of course, the MSRP will fall–but not *ever* to below that of the 580 series, imo.

        • flip-mode
        • 8 years ago

        I don’t know why this claim is made. You can find plenty of 256 bit cards for sale at $150 and below – HD 6850 for example – so we’re not talking just discontinued cards on clearance. So obviously 256 bit cards can still be sold profitably at $150 and below.

          • Krogoth
          • 8 years ago

          AMD is already clearing out its Barts parts.

          It will not be long until its replacements come.

          Besides, the HD 6850 started at $199 at its launch.

        • Bo_Fox
        • 8 years ago

        No, it is not.

      • jensend
      • 8 years ago

      What I want to see out of AMD is one of these bad boys pared down to ~$150 specs while [i<]giving every consumer a free cow![/i<] Paring the high-end down to $150 specs and not giving anybody a free cow is what everyone expects AMD to do. It would be so nice to see them do the unexpected. Demand for a product with a free cow included would be so strong that I imagine AMD would have difficulty keeping up with demand.

        • dpaus
        • 8 years ago

        What’s with you and the dairy theme this morning?

          • axeman
          • 8 years ago

          Maybe he’s in a particularly good mooed.

          • jensend
          • 8 years ago

          Putting a 256-bit GDDR5 bus on the replacement for Juniper would be like putting a 6L V8 in a Volkswagen Beetle, putting a defibrillator in packages of Captain Crunch as the free toy, or giving away free livestock with computer component purchases. Each of these will increase the cost without much of a real benefit to most of the normal target consumers. The 5770/6770 isn’t normally bandwidth-limited, and the much better on-chip caches in GCN will reduce the pressure on the bus. Further, while bandwidth has always been vital to graphics chips, the continuing march towards doing more computation with the data and the relative stagnation of monitor resolutions have meant that the importance of bandwidth relative to compute resources has been steadily declining over the past decade. I can maybe kinda understand some people clamoring for a 192-bit bus, but a 256-bit bus would be just silly.

          Besides, it’s already been leaked that Cape Verde has a 128-bit memory interface as everyone expected. Too late for wishful thinking.

          I suppose calling for AMD to give out free ponies would have been more in line with the “hey, while we’re talking wish-fulfillment fantasies, here’s another” theme, but the previous post’s Milkman Conspiracy theme did have me thinking of a different beast.

          In case you don’t know about the Milkman, I’ll just remind you that [url=http://www.doublefine.com/Psycho-pedia/Milkman_Conspiracy/<]I am not the Milkman. Where's the Milkman? I bet he's sleeping on the job. His milk is delicious; everybody wants it. He'll be here soon-- [i<]then the lies will end.[/i<][/url<]

            • dpaus
            • 8 years ago

            [quote<]or giving away free livestock with computer component purchases[/quote<] How is that outlandish?? Many of my component purchases seem to come with some real turkeys.....

      • Arclight
      • 8 years ago

      I remember a best-selling card from a few years ago, the 8800 GT. It had, if I remember right, a ~$250-300 debut price, and it was what, the 4th-fastest card from nvidia? Years later people expect AMD to [b<]launch[/b<] cards at around the same rank in performance but for $150… it's not gonna happen, guys. I mean, I am a consumer also; I also want them to offer us great products for a few bucks, but we've got to be realistic.

      That said, I kinda hate how people say "oh, if AMD gets me "x" performance level for X% less than nvidia then I might switch." But if it was nvidia launching overpriced products, they'd be all up in your face saying "Well, that's the price of getting the best, you broke mofos." Oh how I loathe fanboys…

      Prices will come down in time, as the tech gets old or when it gets competition or both. Meanwhile, yes, we can admit it's overpriced and advise others to sit it out, but that's about all we can do. Stop pinning this down to "greedy" manufacturers, 'cause truth be told all of them are in it for the money. /rant.

      • no51
      • 8 years ago

      ts;dr

        • Palek
        • 8 years ago

        I LOLed.

          • no51
          • 8 years ago

          glad somebody did.

    • Arclight
    • 8 years ago

    Have you guys seen what the marketing guys came up with for the 7970 on AMD’s website?

    I quote
    [quote<]NEVER SETTLE The world's first 28 nm GPU architecture. Engineered to humiliate the competition.[/quote<] End quote Ouch, i believe that is called a burn.

      • no51
      • 8 years ago

      [quote<]i believe that is called a burn.[/quote<] I thought it was called the GTX480?

        • dpaus
        • 8 years ago

        [quote<]I thought it was called the GTX480[/quote<] Which, running at load, can indeed cause a burn. Oh, snap!

    • ShadowEyez
    • 8 years ago

    Very nice review. I’m sure if you have more time, you’ll do a CrossFire follow-up piece. Even many of the large sites did not do the level of testing you guys did, and the microstuttering material is unique.
    I have to wonder, though (as do many of the people on this site), being a person that does not want to spend more than $200-$250 on a graphics card every couple of years, does this card really make sense? I’m really looking forward to the introduction/review of the 7950 or 7940, the “sweet spot” of price/performance (though in this day and age both ATI and Nvidia engineer these cards so that performance scales in a linear relation to price).

    • ronch
    • 8 years ago

    How one would wish AMD’s CPU division were as proficient as their GPU division. The ATI division has been on a roll for a while now and puts the CPU division to shame. It’s ironic, even, that it was AMD, a CPU design house, that swallowed up ATI, and not the other way around. Of course, their GPU division only has to compete with Nvidia while their CPU division has to compete with a much bigger and more resourceful competitor, and of course, buying a company with strong graphics technology to augment your weak CPU lineup only makes sense, but regardless, their CPU division is looking too embattled. I think AMD has to really work on nurturing its CPU engineering teams apart from further pushing their GPU engineers. Get your CPU act together but don’t drop the ball in the GPU arena, AMD!

    • BrannigansLaw
    • 8 years ago

    This review was awesome. By far one of the best reviews I have seen esp. w/ the framerate latency/smoothness metric thrown in.

    Would love to see OC numbers as well by that metric.

    • jamsbong
    • 8 years ago

    ATI has finally done it! Moving to an Nvidia Fermi-type architecture. I have no doubt that ATI was able to move in this direction, but the question was when. 40nm was simply not practical for manufacturing such a complex GPU. ATI chose to be patient, and the end result is simply amazing: a chip that is faster, more complex, more efficient, and smaller than GF110.

    A company tends to stick to a proven business model that works, so I won’t be surprised if Nvidia has history repeat itself. I suspect more CUs for Kepler, maybe double the current amount? That would lead to 1024 CUs and possibly 6 billion transistors!!! Kepler could end up being nearly as large as the GF110. It will be a super hot chip and filled with problems sustaining proper clock speeds.

    • Krogoth
    • 8 years ago

    I can’t be the only one who thinks that Silus is trying too hard to fill PRIME1’s shoes.

      • Silus
      • 8 years ago

      I don’t fill anyone’s shoes. I’m my own self and coherent about it. The thumbs-down legion is just a pathetic group that considers diverging opinions as something to point the finger at. Fortunately, here at TR, the thumbs up/down feature is just… there, with no practical use whatsoever, because if it did have some use, then TR would be an AMD-fanboy-exclusive site.

        • yogibbear
        • 8 years ago

        I gave you a thumbs up but only cause I wanted to do it ironically cause I’m a hipster.

          • no51
          • 8 years ago

          I’ve been giving Silus thumbs up before it was cool.

        • clone
        • 8 years ago

        Having read your posts, which typically come across as very weak and painfully biased, a thumbs down cuts short a worthless discussion.

        Pathetic applies, but to whom depends.

        When your posts lose the bias and become informed, the thumbs down will go away.

      • flip-mode
      • 8 years ago

      The sooner people (you too Krogy) learn to ignore Silus, the better. But we here at TR have been struggling to learn that lesson for over a decade. Something tells me we’re not ever going to learn it.

        • poulpy
        • 8 years ago

        I still keep a special place in my heart for both Shintai and Proesterchen, may they both RIP (but make it far from teh Internet please).

      • yogibbear
      • 8 years ago

      I’m can’t work out how you came up with I’m can’t. It’s awesome. New meme.

      I’m can’t be the only…. arrow in places.

        • flip-mode
        • 8 years ago

        Sweet! Krogoth is not impressed was running out of favor. I’m can’t wait to use the new meme!

          • no51
          • 8 years ago

          flip-mode is on point.

        • derFunkenstein
        • 8 years ago

        Wasn’t I’m Can’t a famous philosopher?

          • axeman
          • 8 years ago

          buh dumb-cha!

      • Meadows
      • 8 years ago

      I’m can’t wwebsite as in the knee.

      Oh Krogoth, that man may be a troll, and successful at that – if you observe the length of the thread he spawned – but you really could do the least and spell his name properly. His name is not, in fact, “Slius”.

      • yogibbear
      • 8 years ago

      I’m can’t believe you edited that :/

        • Meadows
        • 8 years ago

        Oh come on, it’s Krogoth, he ninja-edits his own crap [i<]all the time[/i<].

        • flip-mode
        • 8 years ago

        It’s not like correcting typos is radical behavior.

          • yogibbear
          • 8 years ago

          9 hours later?

      • Krogoth
      • 8 years ago

      Successful troll is successful.

        • Meadows
        • 8 years ago

        [url<]http://bit.ly/tIKXT2[/url<]

          • Krogoth
          • 8 years ago

          [url<]http://t0.gstatic.com/images?q=tbn:ANd9GcTw1I0m5WtT9lKP3_DIq4pDSYJveP10gNs8YmsV8Jr9dE3M5TVP[/url<]

            • yogibbear
            • 8 years ago

            Enough with the ponies.

            • Meadows
            • 8 years ago

            Never enough.

      • Buzzard44
      • 8 years ago

      Meh, Krogoth not impressed.

    • pogsnet1
    • 8 years ago

    Faster than GTX 580 yet consumes less power than counterpart, there is nothing more I can say except the price, a little lower please. 😀

    • JdL
    • 8 years ago

    What were the graphics settings used on Battlefield 3? I didn’t see those listed. Resolution, AA, etc.

      • Meadows
      • 8 years ago

      It’s right there on the screenshot, blind man.

        • poulpy
        • 8 years ago

        What kind of person points a blind man to screenshot?

          • Meadows
          • 8 years ago

          My oh my, good one!

    • sweatshopking
    • 8 years ago

    Cool it boys, and hug it out

      • khands
      • 8 years ago

      Sweatshopking is on point?

        • dpaus
        • 8 years ago

        No, he just wants a hug.

          • sweatshopking
          • 8 years ago

          BOTH!!!!!!!

            • kamikaziechameleon
            • 8 years ago

            lol

    • Abdulahad
    • 8 years ago

    AT LAST….. I DON’T SEE RED WITH AN AMD OFFERING!!!

    • deb0
    • 8 years ago

    Another yawner from ATI; 15-18% improvement in BF3! Expect Nvidia to own ATI once again.

      • Krogoth
      • 8 years ago

      Look at the resolution and level of AA and AF and come again.

      Besides, BF3 is CPU-limited in MP mode.

        • wingless
        • 8 years ago

        Krogoth….impressed?!

    • Bensam123
    • 8 years ago

    I’m not really finding the removal of one of the two DVI ports all that attractive. Some of us still use monitors with VGA or YPbPr hookups. I’m not talking necessarily about CRTs, but LCDs with VGA hookups, or home entertainment equipment with VGA or YPbPr. HDMI and DP can’t handle changing the signal to analog.

    I still also don’t understand why AMD is asking the card manufacturers to package them with active DP adapters. Active DP adapters are like $30-40. It would be one thing to ask them to package a passive one, but an active one is just asking to be neglected. It makes you question why they would still require you to have an active adapter if they’re taking into account hacking off one of the DVI ports to begin with.

      • Farting Bob
      • 8 years ago

      I expect AMD or one of the resellers will make a version with all the ports you will need. It might take a month or two after release, but it’ll happen; there’s no technical reason why they can’t. They just think that the market for 4+ monitors on a single card is smaller than the need for a better cooling system, which improves temps and noise for everyone.

        • Bensam123
        • 8 years ago

        I looked for those when I bought my 6970, but they don’t have it, or they have the same hardware restrictions. Both of my DVI ports can do analog, but only one can do analog while running a digital display on the other; neither can run analog on both at the same time. They can’t just tack on an extra port anymore and have it work; there is something happening in hardware/software that is stopping it.

        I had to buy an active DP adapter to hook my TV up to my computer, or else I had to always disable one of my monitors to enable my TV… 🙁

      • Krogoth
      • 8 years ago

      DP is slowly replacing HDMI, VGA and DVI.

      The DP ports are meant to make Eyefinity (which requires DP) more straightforward.

      DP is becoming more and more commonplace on new generations of LCD monitors.

      The reason for the need for an active adapter is simple. DP is a different animal from DVI and HDMI.

        • Bensam123
        • 8 years ago

        “The reason for the need for an active adapter is simple. DP is a different animal from DVI and HDMI.”

        Amazingly sound logic, Krogoth.

        I don’t think there is anything wrong with pushing port types… but video cards being made now are for devices that are available now. People aren’t going to run out and buy new devices for their video card. That said, this offers fewer ports than the 6970 in that regard.

      • NeelyCam
      • 8 years ago

      If you are able to put down $500 for a gfx card upgrade, you should have enough money to buy a $15 adapter (check Monoprice.com) or, you know, [i<]upgrade to a monitor with digital inputs[/i<]

        • Bensam123
        • 8 years ago

        An active adapter isn’t the same as a passive adapter. Go google a bit more.

        It shouldn’t matter how much the video card costs to begin with. I bought a 6970 for around $275, it had more ports that coincided better with my needs or could use adapters to fit those needs. You can’t adapt from digital to analog without a converter ($70ish). DVI has a built in converter in the spec.

        I’m not going to dispose of a perfectly good LCD or 60″ plasma TV because you think I should have digital ports, I’m sure that goes for a lot of people.

          • nexxcat
          • 8 years ago

          While I agree with you in principle, my ~ 5 year old 40″ LCD TV has more HDMI ports than I know what to do with. I’m a little surprised that your 60″ Plasma, which is probably a higher-rent item, doesn’t have enough digital input.

          Having said all that, it [i<]is[/i<] annoying to have to shell out even $10 when purchasing a $500 item. It's immaterial that I can afford it; I can, but it's that they're making me shell out extra cash. The flip side of this argument is, of course, that you're not paying for what you won't use. TANSTAAFL, of course, so I suppose I should be grateful, but I will definitely wait until there are models that will allow a pair of DVI-connected 27" LCD monitors to be used before [i<]I[/i<] plunk down the money.

            • ptsant
            • 8 years ago

            If all you need is to drive 2x 27″ DVI only monitors, an HDMI to DVI cable ($10) should suffice, and may be included by many manufacturers. Unfortunately, the card manufacturer cannot anticipate all possible variations and pack DP to DVI, DVI to VGA, HDMI to DVI, HDMI to DP, miniDP to DP, HDMI to miniHDMI and whatnot.

            Instead of complaining about the card having 3 different output types maybe you should ask why a 27″ monitor (which even today is a premium item…) only has DVI!

            • Bensam123
            • 8 years ago

            I’m not complaining about it not having every type of port I need, rather not being able to convert to the port I want.

            The logic of ‘it should have it’ or ‘you should buy it’ does not make up for people NOT having it. This is like first-level industrial design: you make a product for your consumers’ needs. DP, HDMI, and DVI-D can convert to each other with a $7 adapter; they can’t be converted to any kind of analog port with a cheap passive adapter, though.

            This still doesn’t make up for the mess with active and passive DP adapters though. Requiring manufacturers to box a $40 adapter with the card just isn’t going to happen.

          • Bauxite
          • 8 years ago

          60″ LCDs or plasmas have HDMI, so what’s the actual gripe again? Wait, $40 so a 6+ year old TV can hook up to a card that’s overkill for its resolution (at most it’s 1080p, which an LCD TV I bought in 2006 has, and which incidentally is still the norm)… yeah, that’s real harsh /sarcasm

          Multi-monitor is the only real issue. For those who can blow $500 on a card and then 3+ monitors (or televisions) on top: select the proper monitors in the first place or get adapters, since clearly that’s in the expected price bracket.

          Otherwise stuff the whining about a better spec (displayport) taking over.

            • Bensam123
            • 8 years ago

            Oh noes… I should have everything that you deem I should. Telling people how to spend their money is complete BS and you know it.

            That aside, keeping the DVI-I port doesn’t prevent you from running more displays, it allows people with older displays to keep running them. I don’t know why you would be against that.

    • indeego
    • 8 years ago

    Nice card.

    Few people have the resolution(s) tested. I wonder how many of TR’s audience has 30″ screens hooked up.
    Ain’t worth $550 except to people throwing major cash at those monitors/games/cards.

    Nice technical achievement for sure, but a niche application, as usual. So yeah, I question awarding an Editor’s Choice at this price point. It’s like awarding a Porsche an editor’s choice. No sh*t it’s a great car, but only ~5% of your audience can afford it.

      • jensend
      • 8 years ago

      Not sure why you were thumbed down for an obvious truth. Maybe some folks were working on deceiving themselves into thinking they can/should/must spend this kind of cash on this extravagance and your post reminded them of cold hard realities.

      Nice card, obviously, and we do get valuable information about GCN that should carry over to other cards, but the Pitcairn and Cape Verde reviews will be a lot more relevant for the vast majority of us. The ultra-high-end is mostly for show/bragging rights anyways.

      On a related note, I wonder how much of a perceptual difference 2560×1600 and ULTR4 EXTR3M3!!! graphics settings really make when compared to 1080p medium-high graphics settings in real gameplay. I’d be interested to see double-blind testing showing whether people can even distinguish some of these things when actually playing a game.

        • Anonymous Coward
        • 8 years ago

        [quote<]I'd be interested to see double-blind testing showing whether people can even distinguish some of these things when actually playing a game.[/quote<] I think the aspect ratio would give it away! Anyway I do expect that the higher resolution matters since the user is sitting fairly close to the screen.

          • jensend
          • 8 years ago

          Well, yes, of course the test would have to use the same aspect ratios.

          [url=http://webvision.med.utah.edu/book/part-viii-gabac-receptors/visual-acuity/<]Here[/url<] is a concept and lit overview saying research suggests the "hardware" of the human eye has a resolving power of 28 arcseconds (based on the spacing of cones in the fovea), while the smallest features people with 20/20 vision actually succeed in distinguishing in eye exams etc. are basically a full arcminute.

          Let's assume a 30" 16:10 monitor and a 1m viewing distance. (Much closer than that will cause visual strain, especially with a monitor that size- pages I saw all referred to tests done by somebody named Jaschinski-Kruza in a 1998 "Ergonomics" article, "Visual strain during VDU work: the effect of viewing distance and dark focus." Maybe some gamers are willing to deal with visual strain and sit closer, but I think this is a decent estimate.) Then a pixel at 2560x1600 is 52.1 arcseconds and a pixel at 1920x1200 is 69.4 arcseconds.

          We're close to the limits of ideal perception here, and I don't expect that people can really distinguish that well in realtime action gaming. Again, we'd have to have blind testing to be sure. But I really think that the resolution bump doesn't make much of a difference, and many of the graphics effects will have considerably less of a distinguishable effect than that.
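For anyone who wants to double-check those angular sizes, here's a minimal sketch of the geometry under the same assumptions (a 30" 16:10 panel viewed from 1 m); the pixel_arcsec helper is just something made up for illustration.

```python
# Quick check of the pixel angular-size figures above, assuming a 30" 16:10
# panel viewed from 1 m. Pure geometry, nothing measured.
import math

def pixel_arcsec(diag_in, horiz_pixels, aspect=(16, 10), distance_m=1.0):
    """Angular size of one pixel, in arcseconds, at the given viewing distance."""
    w, h = aspect
    width_in = diag_in * w / math.hypot(w, h)     # panel width from the diagonal
    pitch_m = (width_in / horiz_pixels) * 0.0254  # pixel pitch in metres
    return math.degrees(math.atan2(pitch_m, distance_m)) * 3600

print(f"2560 pixels wide: {pixel_arcsec(30, 2560):.1f} arcsec")  # ~52.1
print(f"1920 pixels wide: {pixel_arcsec(30, 1920):.1f} arcsec")  # ~69.4
```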

            • jensend
            • 8 years ago

            Thank you, thumbdown brigade! Does anybody have an intelligent reply, or are there just a couple people out there with a vendetta against me?

        • can-a-tuna
        • 8 years ago

        Perhaps you two should read “jihadjoe”‘s response and think why it got so many thumbs up.

          • jensend
          • 8 years ago

          jihadjoe’s response says only “why not award the best product in each segment an EC, even if it is the only product in its segment and few will buy it? The 5% will benefit from the information.” It doesn’t do anything to contest indeego’s main point, which was that this card is irrelevant to the vast majority of us; it just counters his off-hand comment about the Editor’s Choice. It certainly doesn’t counter my argument that extra-high resolutions and graphics settings may not make much of a gameplay difference or the scientific data about pixel pitch vs human perception.

          (Note that neither of us were saying the [i<]review[/i<] is irrelevant to most of us; as I said in my first reply, this provides valuable information about the performance of GCN and thus provides a little information about how Pitcairn and Cape Verde, which really will be relevant to many more of us, will perform.)

      • jihadjoe
      • 8 years ago

      I’ve always thought of an “Editor’s Choice” as the product having the best value and performance for its particular market segment.

      Just because only 5% of the audience can afford a $550 card (or a $100k car) doesn’t mean that an EC award has to be withheld for the particular market. In this case, TR’s EC is just them saying that if you have $550 to blow on a GPU at this time, then you should definitely be getting the 7970.

      • yogibbear
      • 8 years ago

      I have 3 monitors. Pretty sure this card interests me. All 27″ 1080pers.

      • sparkman
      • 8 years ago

      > I wonder how many of TR’s audience has 30″ screens hooked up.

      I have a 26″, and I know people with 30″, but you are probably right most people have closer to 20″.

      Which is irrelevant because most of us don’t want to read very much about the sub-$200 cards that make responsible fiscal sense in that resolution range. We want to read about the giants slugging it out.

      Plus these technologies will be sub-$200 in a few years so everyone has that to look forward to.

      Adding my thumbs down for your “waaaaa I can’t afford this card waaaaa Tech Report should stop reviewing it waaaaa” post.

        • JustAnEngineer
        • 8 years ago

        Here’s where we were at 10 months ago:
        [url<]https://techreport.com/discussions.x/20555[/url<]

      • JustAnEngineer
      • 8 years ago

      I’ve been gaming at 2560×1600 for 5+ years. If you’re seriously considering plunking down $550 for a new graphics card, you shouldn’t still be using a dinky 1680×1050 monitor.

    • Farting Bob
    • 8 years ago

    Would have liked to see how it handled Rage with its hardware support of megatextures. I would imagine that the frame latencies would be very good as its purpose seems to be to provide smooth performance in even stressful scenes.

    Great review. The frame-latency method makes the latest TR reviews a cut above the rest; even very good sites like Anand seem to have fallen behind in GPU reviews. It may have been late, but what a great review, and what a great card! Can’t wait for the 7950 reviews, as it’s likely that this card will be out of my price range right now (especially when you factor in the “screw Britain tax” we see on computer hardware). I was very tempted to get a 6950 when it was released, but I held off, and it looks like I made the right call!

      • Bensam123
      • 8 years ago

      Why would you want to see it run a game that is highly optimized for graphical hardware that is an order of magnitude lower? XD

    • SubSeven
    • 8 years ago

    Wow… a very impressive feat of engineering from the Red team. Now if they could only pull off something like this (or even sorta close to this) on their processor front, life would really get interesting…

      • Sahrin
      • 8 years ago

      AMD is the green team in CPUs.

        • axeman
        • 8 years ago

        There’s no red team for CPUs, so I’m pretty sure no one would get confused 😀

    • ish718
    • 8 years ago

    I’m guessing this card would look more impressive performance-wise if we had games that make extensive use of DirectCompute.
    I believe BF3 uses DirectCompute for AA and Civ5 uses it for texture compression.

    Most of the gains seem to come from the new 384-bit bus, since TR tests all the games @ 2560×1600.

    • ew
    • 8 years ago

    [quote<]I don't want to dwell on it, but this new Radeon is nearly an order of magnitude [/quote<] more [quote<]powerful than an Xbox 360 in nearly every respect that matters. [/quote<]

    • chuckula
    • 8 years ago

    What you see in the 7970 is AMD biting the bullet to get better GPGPU performance with a (relative) hit to improvements in games. Nvidia did the same thing with Fermi. There’s no magic bullet here, but AMD seems to have done a much better job with the power consumption of the 7970 vs. what Nvidia did with the first generation Fermis.

    I’d get used to having relatively smaller improvements in gaming performance from the upcoming GPUs from both Nvidia and AMD due to the need to devote more resources to non-gaming applications. There are no free lunches in this business.

      • sschaem
      • 8 years ago

      BF3: 6970 23fps, 7970 37fps.
      Crysis2 : 6970 25fps, 7970 36fps
      Civ5: 6970 32fps, 7970 48fps

      A 50-60% jump in performance is not a small improvement.

      Cayman 32 96/48 1536 2 256 2640 389 40 nm
      Tahiti 32 128/64 2048 2 384 4310 365 28 nm

      To have Cayman run BF3 60% faster you would need to pump the spec beyond Tahiti, so something here went right for gaming.

      Now, if your point is that it could be faster if all the transistors were dedicated only to the DX9 API… for sure.
      But AMD/nVidia can’t live on just the gaming market… they need a new source of revenue, so they can’t ignore ‘compute’.

      • Theolendras
      • 8 years ago

      At some point games will probably use GPGPU themselves, for things like physics, texture decompression à la Id Software or Civ 5, etc… We’re still probably a few years from that, I think. It would be cool, though; an APU might even do that stuff, leaving the rendering to the discrete card as it’s used to, or to a dedicated one through SLI/Crossfire.

    • flip-mode
    • 8 years ago

    I’m really curious about Kepler now. :sigh: it’s always a wonder about what’s next.

      • ish718
      • 8 years ago

      Don’t expect Kepler to be much more powerful than SI; it will be more about power efficiency and compute.

        • khands
        • 8 years ago

        There are rumors floating around that full-blown Kepler won’t be around till Sept./Oct. of this year but will be up to 2.5x as powerful as a 580. I don’t put much stock in them, though; if it were true, we might see the 8000-series AMD cards coming out against them.

          • Firestarter
          • 8 years ago

          2.5x as fast as a 580 would also mean probably about 2x as much power consumption as the 7970, unless they pull a reverse Fermi and go for a no-compromise all-out gaming card.

            • flip-mode
            • 8 years ago

            No, that’s forgetting about the 40nm-to-28nm process shrink.

            • Firestarter
            • 8 years ago

            How so? I’m assuming that Kepler won’t be drastically faster per Watt than Tahiti, because they use a similar manufacturing process. Everything else is an unknown for me. So if Kepler ends up 2.5x as fast as a 580, it would be 2x as fast as the HD7970 on the same process with what I assume is a similar approach to pushing pixels. Assuming it would use roughly 2x the power is not far-fetched then, me thinks.

            • flip-mode
            • 8 years ago

            Hmm… I don’t have a good reply to that; I think that means you win.

            • Firestarter
            • 8 years ago

            Who knows. I hope and fear that Kepler will be a lot faster and more efficient and cheaper and all that, because that would mean than my soon to be bought Tahiti is outdated already. But as long as that means more competition, I guess we can only win, right?

            • flip-mode
            • 8 years ago

            I’d love to see Kepler trounce SI and initiate a pricing landslide. Actually it would be amusing to see Kepler beat the 7970 by exactly 20% and then see all these Nvidia fanboys exclaim how wonderful and virtuous it is and how it is crazy faster than the 7970 after hearing the 7970’s 20% advantage being called a “yawner” and “marginal”.

            • PixelArmy
            • 8 years ago

            20% over past gen != 20% over current gen (which would be 44% faster than past gen).

            • flip-mode
            • 8 years ago

            I don’t know what point you are trying to make. All I’m saying is I’d like to see gtx 680 be exactly 20% faster than *hd 7970* and see the Nvidia fanboys turn around and exclaim over the 20% advantage after calling it a “marginal” lead here. Not that it will happen that way, but it would be amusing if it did.

            *edit: meant to say 7970. changed.

            • PixelArmy
            • 8 years ago

            The point I’m making is that you seem to think it’s the “20%” figure without any context that matters. What is also important is what that +20% is based off of.

            Beating the old guy (580) with a new architecture on a new process by 20% may or may not be marginal. Contrast that with your hypothetical 680 beating the new guy (7970) by 20% (note your reply to my reply changed this to a [b<]6[/b<]970). You realize that in an equal comparison to the old gen, that hypothetical 20% on top of a 7970's 20% would equate to 44% over the 580, right? (1.2 * 1.2 = 1.44) So you're saying 20% vs 44% isn't anything that [i<]might[/i<] justify a different reaction? Anyways, I hope you can see the difference.

            I'll admit I'm an nVidia fanboy, but personally I think the 7970's 20% is pretty good, though probably more or less expected. (Though I guess you could argue that it's more than 20% when compared to AMD's last gen.)

            • flip-mode
            • 8 years ago

            [quote<]You realize that in an equal comparison to the old gen, that hypothetical 20% on top of a 7970's 20% would equate to 44% over the 580, right? (1.2 * 1.2 = 1.44) So you're saying 20% vs 44% isn't anything that might justify a different reaction?[/quote<]

            Eh, performance of 7970 over 6970, i.e. "equal comparison to the old gen", is not the point I was making, but in regards to that, the 7970 is average 1.41x faster than the 6970 and at roughly the same die size.

            But to stay with the point I was making - the 20% figure does have context, but perhaps I'm not communicating it well enough. The context is the performance advantage over the competitor's product juxtaposed against a lack of consistent appraisal of a given performance advantage; this lack of consistency comes from both fanboy teams whenever either company releases something. In this case that advantage is the 20% performance advantage the hd 7970 has over the gtx 580. Thus, what I'm getting at here would be the possible amusement that would ensue from a gtx 680 that ends up 20% faster than the hd 7970. Then we'd get to see the turnabout of fanboys - AMD'ers saying "20% is nothing much" and Nvidians saying "oh my god 20% that's mind blowing".

            Honestly, I don't expect that to happen. While keeping the same die size, AMD boosted performance by 1.41 over the previous generation and also had to add in all the GPGPU functionality. Nvidia already added the GPGPU to their silicon, so if Nvidia keeps gtx 680 at the same die size as the gtx 580 it seems to me that most of those additional transistors would be going to straight-out improved performance rather than adding GPGPU functionality. So in other words I'd expect that at the same die size and power consumption the gtx 680 would probably be a good bit more than 1.40x the performance of gtx 580 and therefore a good bit more than 1.20x the hd 7970.

            edit: I massively edited this post to make it more clear and less, er, adversarial.

            • PixelArmy
            • 8 years ago

            HD 7970 and hypothetical GTX 680 vs GTX 580: 20 vs 44%
            HD 7970 and hypothetical GTX 680 vs HD 6970: 41 vs 69%
            Still a big enough difference between the two cards from a past gen POV to warrant a different response.

            Not good enough? Irrelevant? Ok, let's go on. You keep harping on the "competitor" aspect while ignoring the "generation" aspect. How about acknowledging both…? It's not just beating your competitor's product, but which product. Dethroning or not, beating your opponent's last-gen flagship is simply not the same as beating your opponent's current-gen flagship, especially by the same %. (Actually the standard would probably be lower within the same generation…) Conundrum… none there… Beating your opponent's last gen is pretty much a given unless you're an AMD CPU.

            (Note, in this thread you think it’s probably “pretty easy” for Kepler to beat this, yet we should be in awe of this?)

            As for the fanboy stuff, maybe you should be honest with yourself. (I own 3 ATI/AMD cards, so what?)

            • flip-mode
            • 8 years ago

            [quote<]Not good enough? Irrelevant? Ok, let's go on. You keep harping on the "competitor" aspect while ignoring the "generation" aspect. How about acknowledge both...?[/quote<]If you want to make that point then I won't begrudge you for it. It's just not the point I want to make. Your asking me for an entirely different juxtaposition than I'm trying to make. If I really wanted to talk about the generational progress then I would, but it's a different subject than I wanted to talk about. By the way, my previous post came off sounding adversarial and I wasn't intending for it to be that way and have since edited it, FWIW.

          • Goty
          • 8 years ago

          I think the 2.5x numbers are [i<]per watt[/i<].

            • khands
            • 8 years ago

            Could very well be, this is all coming to me third hand.

          • Farting Bob
          • 8 years ago

          I think NV had some slideshows which showed that in their non-disclosed benchmarks, in very specific circumstances unlikely to ever be seen in the real world, it was up to 2.5 times as fast as the 580. But theoretical and synthetic benchmarks mean nothing; what matters is how it performs in real games and real GPGPU software.

        • flip-mode
        • 8 years ago

        Really? I thought it might be pretty easy for Kepler to end up a fair bit faster than SI. Nvidia has a pretty decent track record of nearly doubling performance with each successive generation: G71, G80, GT200, GF100. Nvidia’s already got GPGPU figured out and implemented, and I wouldn’t think power saving features would take up very many transistors – Nvidia already has pretty decent power savings, just not quite as decent as HD 7970 at this point. Heck, if Nvidia just does a dumb doubling of the GTX 580 that would end up blowing away the 7970.

    • esterhasz
    • 8 years ago

    It has been said before, but I really enjoy the new “inside the second” approach to benchmarking, not only because it gives a better impression of performance as perceived by users, but also because the whole process becomes much more nuanced and highlights different [i<]aspects[/i<] that go into what is finally a composite notion. If you want to fuse the time-series graph for frame time and the percentile data into a somewhat more synthetic form, you may want to look into Tukey's box plot (http://en.wikipedia.org/wiki/Box_plot), which allows for the visualization of medians, outliers and percentile cutoffs in a single graph.
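
    For illustration, a minimal matplotlib sketch of that idea might look like the following; the card names and frame-time numbers here are made up, and real data would come from the per-frame timing logs the review is built on.

    import matplotlib.pyplot as plt

    # Hypothetical per-card frame-time samples in milliseconds.
    frame_times_ms = {
        "Card A": [14.2, 15.1, 13.8, 16.0, 14.9, 31.5, 15.3, 14.6],
        "Card B": [17.0, 18.2, 16.5, 19.1, 17.8, 45.0, 18.4, 17.2],
    }

    fig, ax = plt.subplots()
    # One box per card: the box spans the quartiles, the line marks the median,
    # and slow outlier frames show up as individual points beyond the whiskers.
    ax.boxplot(list(frame_times_ms.values()), labels=list(frame_times_ms.keys()), showfliers=True)
    ax.set_ylabel("Frame time (ms)")
    ax.set_title("Frame-time distribution per card (illustrative data)")
    plt.show()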

      • khands
      • 8 years ago

      I do like the box plot idea, that would be a good way to visualize it instead of the mess we have now, though potentially less detailed.

        • esterhasz
        • 8 years ago

        Yes, there is less detail, but it would not have to be either/or, and it would facilitate comparing more than 3-4 cards side by side. And is the temporal aspect particularly significant in any way? Besides the really very interesting finding that the first couple of frames take longer to render, the fact that frame 2300 is a stutter rather than frame 60 is mostly irrelevant. What is important is that there are a number of stutters in the 60-second sequence. Perhaps somebody would like to study the scene and try to detect, by looking at the image, if there is a specific image element that may have caused a stutter, or to see the pattern of variation, but beyond that, the temporal axis is basically not adding anything very important. A “rug” plot on the y axis (the colored bars along the axes here: [url<]http://www.mathworks.com/matlabcentral/fx_files/27582/4/rug.jpeg[/url<]) would be close to equally informative. In any case, it's really cool to even discuss these kinds of questions with PC benchmarks now! EDIT: I forgot about the possible clustering of slow frames. Timeline = good!
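
        As a sketch of the rug idea (again with made-up numbers, and matplotlib as just one possible tool), a few lines can put one small tick on the y axis per frame, so the distribution of frame times stays visible even if the time axis is ignored:

        import matplotlib.pyplot as plt

        frame_ms = [14.2, 15.1, 13.8, 16.0, 14.9, 31.5, 15.3, 14.6]  # hypothetical frame times (ms)

        fig, ax = plt.subplots()
        ax.plot(range(len(frame_ms)), frame_ms, lw=0.8)  # the usual frame-time-over-time line
        # Rug along the y axis: one short tick per frame at the left edge of the plot,
        # so the spread of frame times is visible without reading the time axis.
        ax.plot([0] * len(frame_ms), frame_ms, marker="_", markersize=12, linestyle="none",
                transform=ax.get_yaxis_transform(), clip_on=False)
        ax.set_xlabel("Frame number")
        ax.set_ylabel("Frame time (ms)")
        plt.show()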

      • yogibbear
      • 8 years ago

      Isn’t the problem with a box plot that the Q1-Q3 box will be so small and the max will skew right off the page, making the graphs almost meaningless with a linear scale?

      E.g.

      |—————————[]———————————————————————-|

      is the result I imagine you getting for pretty much everything.

        • esterhasz
        • 8 years ago

        Good point. Great ASCIIvis.

        One could have a cutoff for the highest and lowest values?

          • khands
          • 8 years ago

          I was thinking something like the 25-75% range, and then you can talk about the outliers in your 50ms+ graph, which would help differentiate the two I think.

      • Bensam123
      • 8 years ago

      Yeah, I agree. I didn’t like how they initially tried to introduce frame time (it felt more like a marketing or buzzword thing), but I really do like frame times being measured in milliseconds.

      • radializ3r
      • 8 years ago

      One other potential way to maintain detail but give us a “clearer” picture of overall behavior would be to plot the frame times on a log scale (frame times tend to be roughly lognormal, and log time works very well for a multitude of purposes), split by one’s variable of choice (in this case, the gfx cards in question).

      A full probability plot or distribution plot has the advantage of showing the percentage points in the tail as well as the pseudosigma of the majority of points – and a log transform on the data will reduce the issues that yogibbear spoke of.

      * edit * … i meant “potential way” and not “potential”
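
      As a rough sketch of what that could look like (the frame-time numbers below are made up, and scipy/matplotlib are just one way to draw it):

      import numpy as np
      from scipy import stats
      import matplotlib.pyplot as plt

      # Hypothetical frame times in milliseconds; real data would come from a frame-time log.
      frame_ms = np.array([14.2, 15.1, 13.8, 16.0, 14.9, 31.5, 15.3, 14.4, 50.2, 15.0,
                           14.7, 15.6, 13.9, 16.3, 14.1, 15.8, 14.5, 15.2, 14.8, 15.4])

      fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

      # Normal probability plot of log(frame time): if frame times are roughly lognormal,
      # the points hug a straight line, and the tail of slow frames stands out at the top.
      stats.probplot(np.log(frame_ms), dist="norm", plot=ax1)
      ax1.set_title("Probability plot of log frame times")

      # Histogram on a log x axis compresses the long tail that skews a linear-scale box plot.
      ax2.hist(frame_ms, bins=np.logspace(np.log10(10), np.log10(100), 20))
      ax2.set_xscale("log")
      ax2.set_xlabel("Frame time (ms, log scale)")
      ax2.set_title("Frame-time distribution")

      plt.tight_layout()
      plt.show()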

    • Firestarter
    • 8 years ago

    I can’t frikking wait for the HD7950 numbers!

      • khands
      • 8 years ago

      I’m hoping it ties the 580 or falls just slightly short of it, the same way the 7970 beats it by just a little, and follows that trend at around $399 or less, though AMD might try to squeeze some more money out of it if Nvidia doesn’t respond to the 7970 before the 7950 gets out the door.

    • flip-mode
    • 8 years ago

    Thanks for the review, Scott.

    GCN is pretty exciting. Of course the launch is proceeding from the high end and working down. 7970 is an impressive monster with loads of performance. Power efficiency at idle is just spectacular and is the one real surprise here; more performance is a given and the need for AMD to jump into compute with both feet was written on the wall.

    The pricing of this card is almost irrelevant. It would be no more relevant to me at $350 than at $550. What is important is that AMD at least price the card appropriately given its performance – i.e. not charge more than a gtx 580 for a product that is slower than a gtx 580 – and AMD has done that.

    Much more relevant and interesting to me are the products at the $200-and-below mark and at resolutions that I can even comprehend – 30″ monitors and 2560×1600 do not even compute for me. And, I’ll be honest, I’m really more interested in what happens at $150. $200 is just a bit much for me to spend on a video card. And the real question there is what is the value proposition going to be for those cards? Will AMD bring an appreciable performance increase to those price points, or will it just be 6870 performance on a new architecture but at the same price as before? If so, that would really dampen the enthusiasm I would have for the new product. Having said that, it would make me quite happy about the $150 I just dropped on an HD 5870 just a few days ago.

    About this review in particular – I’m glad you included the 5870, and even the 280; I think you should have just used a 1GB 5870 model, as that’s the one with any appreciable level of ownership. It might have made more sense to just use a lower resolution that the 1GB cards could cope with, and then the results for the GTX 280 would have made more sense too. Either way, I understand the reason you did it the way you did.

    • derFunkenstein
    • 8 years ago

    [quote<]The Radeon HD 7970 outruns the GTX 580 and is nearly a third faster than the Cypress-based Radeon HD 5870.[/quote<] From page 8, the GPU Computing Performance Civ V graph. It's actually a little over 50% faster. 205 --> 313

      • Damage
      • 8 years ago

      True enough. Fixed. Thanks.

      • Deanjo
      • 8 years ago

      Nm

        • khands
        • 8 years ago

        It’s about 150% the speed, which is 50% [i<]faster[/i<]. Cypress is about 1/3 [i<]slower[/i<] than Tahiti. Darn percentages being relative!
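
        Worked out with the Civ V numbers quoted above (205 and 313):

        \[
        \tfrac{313}{205} \approx 1.53 \;\; (\text{about } 53\% \text{ faster}), \qquad
        \tfrac{205}{313} \approx 0.65 \;\; (\text{about } 35\%, \text{ i.e. roughly a third, slower}).
        \]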

    • dpaus
    • 8 years ago

    [quote<]folks hoping to drive multiple 30" monitors may have to seek another solution {p.2}[/quote<] Say what?? The card has two mini-DisplayPorts and a dual-link DVI. My 30" HP ZR30w has DisplayPort and dual-link DVI inputs. I could drive 3 of them from this puppy (and I can hardly wait to try). I keep asking myself how AMD can be so far ahead of the curve with their GPU architectures and implementations and struggle so with their CPUs. Here's to hoping they can pull off a little internal tech transfer... Great review, and well worth the wait. I [i<]was[/i<] slightly disappointed that there wasn't at least a token alien-abduction sidebar, though 🙂 EDIT: nm, Scott clarified the text. Still no 'the dog ate my original review' story, though...

      • axeman
      • 8 years ago

      I’m guessing he means people who have 30″ monitors with only DVI input, which shouldn’t be that many.

        • dpaus
        • 8 years ago

        I think even the current models of the Dell 30″ monitors have DisplayPort, although I’m not 100% sure; we switched from Dell to HP for 30″ displays about a year ago.

          • JustAnEngineer
          • 8 years ago

          My Dell UltraSharp 3007WFP has only one dual-link DVI input. More recent UltraSharp 30″ 2560×1600 monitors include more inputs.

    • kamikaziechameleon
    • 8 years ago

    While I agree with the Tech Report review and the recommendation of the card, I’m really unhappy with the debut price of this product. Simply put, it makes no one sweat. It’s not competing with any product on the market; at least if they put it at $500 then the 580 would need to get knocked down a notch. If we always charge more for new tech without discounting old stuff then we’ll have thousand-dollar-plus GPU cards before long.

    what did the 6970 debut for again?

    EDIT:

    I think people think I’m down on AMD. I’m an AMD fan for sure, but this price, relative to the pricing trends for GPUs and more specifically AMD GPUs, raises a cause for concern to me. It’s not so much a bang-for-buck complaint; the numbers speak for themselves, but then again so does every next-gen launch, doesn’t it?

      • ImSpartacus
      • 8 years ago

      It has no competition and it’s expensive to fab. It’s a pretty big chip on an immature process; this isn’t another 40nm revision.

        • kamikaziechameleon
        • 8 years ago

        There is always overhead to every leap. Yes, this one has more than others, but honestly they’ve been trending prices this way for 3 gens and basically making excuses each time. See my other responses.

        At the end of the day, 3 gens of similar “reasons” for prices and the reason becomes an excuse. This is business, not the finger-pointing game. The price jumps have not honestly been related to any of the above. If they are seeing issues with adoption rates and market penetration at the same time as supply constraints are a problem, you’d think they’d fix that after 3 years of issues.

      • Arclight
      • 8 years ago

      This article published by kitguru.net might provide a clue as to why the 7970 has such a debut price:
      [url<]http://www.kitguru.net/components/harrison/can-amd-get-enough-wafer-slots-at-tsmc/[/url<]

        • kamikaziechameleon
        • 8 years ago

        That makes sense; this has supposedly been a driving force behind the price climb for the last several gens. I guess it’s just becoming an old story.

        AMD-“hey guys we suck on the business end, didn’t mean to charge you so much.”

        If it were only this gen it wouldn’t bother me, but we’ve been getting fed this line for the last two gens now and it’s honestly lame.

        They mean to charge us that much; otherwise it wouldn’t cost that much, simple as that.

        Fool me once, shame on you…

      • esterhasz
      • 8 years ago

      It’s of course understandable that you want better products at cheaper prices.

      But let’s face it, AMD is not swimming in money, their stock is not doing well, they just fired 10% of their staff, and manufacturing problems had bogged them down even before the Thailand flood happened. If they can sell all of the chips that TSMC can manufacture at this price for a couple of months, it would go far in keeping the company from crumbling once the European recession kicks into full (reverse) speed. If somebody can justify spending $500 on a video card that is probably used for leisure activities, $550 should be OK as well.

      I’m rather happy that AMD has the opportunity to get a decent margin for once.

        • kamikaziechameleon
        • 8 years ago

        Do they want margin on the product or a bigger share of the market??? I agree with your notion that they might not be doing so well, but if you look at my other replies, they’ve kinda been their own worst enemy for the last 3 gens. Supply constraints have severely limited the availability of what was then the current gen of products for 3-6 months, inflating prices and forcing old products to compete in the vaunted 200-300 dollar price point. This has severely hurt their market penetration. Their products have more features and are leading the charge with technology. I do believe a large part of the driver stability that Nvidia products enjoy is related to the fact that they are 12-24 months behind AMD on software integration for DX and such. Not to mention how late they were to the GDDR5 party, lol.

      • flip-mode
      • 8 years ago

      I think it’s priced fair enough. What do you mean it’s not competing with any card on the market? There are multiple cards that this card competes with – gtx 580, gtx 580 3gb, 6990, gtx 590.

      Its price is well justified enough, I think. If I were able to spend $450 or $500 on a graphics card, I’d be just as able to spend $550.

      AMD is supposed to be a for-profit company, and they’ve priced this card appropriately in regards to its competitiveness and AMD’s profit motive.

      What will be much, much more important is the performance, features, and pricing of the cards in the $200-and-below price brackets.

        • kamikaziechameleon
        • 8 years ago

        By not competing I mean it is the single fastest GPU and there is no single GPU of note at its price.

        I’m not saying its price is unjustified, just that AMD has been trending its prices up ever since the giant drop in the 4xxx gen of Radeons. Each gen since has debuted a new card at a higher price, climbing 50 to 100 dollars every gen, rather than treating the new gens as replacements, as is the tradition in any tech industry or industry with annual designed obsolescence. Imagine if each year GM intro’d new cars that cost 20 percent more than the year before and didn’t retire last gen’s cars in a prompt fashion. It bloats the market and distorts prices eventually. Sure, it’s $550 this year, but it will probably be a $600 or $650 intro next year for the 8xxx premier flagship. Just saying that it’s a costly trend AMD has been following, all while GPU upgrades have been more and more unnecessary.

        I would happily upgrade every 12 months if we saw a meaningful improvement at the 200 dollar price point, simply so I could stay current, but instead we see less and less competition at that price point and are instead seeing it creep higher and higher again. AMD fell victim to this before; that was why the big price/design adjustment happened with the 4xxx series.

          • redwood36
          • 8 years ago

          I get what you are saying, kamikaziechameleon. They have a few months to win over some loyal customers by being first on the market. Why not cut even deeper by being competitively priced? Then the decision becomes obvious for buyers. 50 bucks would make me think… well, I’ve been loyal to Nvidia in the past, I’ll stick with them (even though only a fool buys any of these until Nvidia releases a 680, just to give it a fair comparison).

          In addition, honestly, I was expecting more out of the 7970.

            • kamikaziechameleon
            • 8 years ago

            I agree. Though it’s a compelling offer now, I expect the new Nvidia offering to come in swinging. This price/performance is not so impressive when you consider the gen. It’s beating the old king by 10-20 percent, which is not terribly impressive.

            If Nvidia priced their flagship along the same lines as AMD, it could cost us 650-750, lol. I predict they won’t, however, and when Nvidia comes in with their fixed pricing it will crush the AMD offering at this MSRP.

            • no51
            • 8 years ago

            *cough*
            8800 Ultra

            • kamikaziechameleon
            • 8 years ago

            oh gosh I’d forgotten. :O

            • flip-mode
            • 8 years ago

            Man, AMD will drop prices if they need to. Stop sweating. Be patient.

            • kamikaziechameleon
            • 8 years ago

            I was patient before; then I got a 460 😛 I don’t know if I’m so much complaining as observing.

            I guess I am complaining, though, since I was waiting for this gen; then I saw the price tag and decided not to buy. I wanted to spend money but now I don’t. Darn you, AMD 😛

          • Farting Bob
          • 8 years ago

          The 3GB 580 is roughly the same price or even a bit more; it’s the closest single GPU to the 7970 and priced around the same. Sure, we would all love for this top-of-the-line card to have debuted $100 cheaper, but it looks like we may have to wait until NV comes out with a new card before we see this performance level in sub-$500 cards.

            • kamikaziechameleon
            • 8 years ago

            sad but true

          • shank15217
          • 8 years ago

          So when AMD kicks butt they ‘don’t’ compete, and when they don’t do so well they ‘can’t’ compete. Half the people here just like to talk out of their a**

      • khands
      • 8 years ago

      It’s priced where AMD needs to price it. Until they can pull a 4000 series and jump to faster RAM, and TSMC can get their act together and get new nodes out on time in volume, and Nvidia decides to compete both with performance [i<]and[/i<] time to market, we're going to see increases in cost.

        • kamikaziechameleon
        • 8 years ago

        This isn’t really addressing any of my points.

          • flip-mode
          • 8 years ago

          The trouble is that your points are based on an emotional response to the price and there’s really no way to make that better. You already know all the good news:

          * better image quality than ever
          * lower power consumption than ever
          * more RAM than ever
          * faster RAM than ever
          * higher performance than ever
          * beats any other card at the same price point (that’s the gtx 580 3GB) by an appreciable margin
          * highly overclockable
          * overall fastest GPGPU in the world
          * first-ever built-in dedicated video encoding engine
          * nifty display adapters in the box
          * it really is the most impressive video card there is – more performance and features and flexibility than anything else now or before but at idle power levels that are on par with the lowest level cards on the market.

          You already know all of that but emotionally you can’t get past the fact that AMD isn’t giving it all to you for $399 and you’re emotionally attached to that 5870-6970 $399.99 pricing structure.

          At this point, all we can do is wish you the best 😆 (not trying to be a jerk by saying that!) Most others look at the performance and features and don’t seem to be too heartbroken for the price.

          So, to sum up, no one has been able to help you feel better, but you’re not having much luck getting others to feel the same disappointment that you do either.

            • kamikaziechameleon
            • 8 years ago

            My point is the upward trend in AMD flagship products for the last 3 gens, going up almost 100 dollars per gen. If prices go up every time tech improves, that is not good for consumers no matter what tech is in there. It’s not this launch, it’s the trend.

            AMD has made a lot of excuses every gen since the 4870 as to why it had to charge more, but at the end of the day it’s been the same excuses every gen: supply constraints, new tech, blah blah blah. At the end of the day, if they can’t control their process or product price after 3 flippin’ years they shouldn’t be in business; if they are gonna let prices fluctuate by 50-75 percent each gen by “accident”, and if they are gonna let the supply chain choke out their ability to provide product to a market and then complain about market penetration… BLAH. That, my friend, is bad business, but that is not even the point of my concern.

            Tech is always improving, so the argument that prices should always go up is a complete fallacy. I thought some might notice that with the exponential improvements in tech we could see a dangerous trend in prices. I’m not saying that this will necessarily happen, but more that it’s a fallacy to say “it’s better, so pay more” when talking about tech. If that were true we’d all be screwed.

          • khands
          • 8 years ago

          Just trying to explain why, not that it helps bring the prices down, just that the reason we had the dip in the first place isn’t there.

            • kamikaziechameleon
            • 8 years ago

            I’m not really convinced there is legitimacy to these reasons that AMD keeps spouting. Even if there is, they’ve been flying that flag for the last 3 years; time to realize they are doing something wrong.

            • khands
            • 8 years ago

            High demand + constrained supply + CPU division falling off a cliff = they need all the money they can get out of this thing.

            • flip-mode
            • 8 years ago

            The CPU division is an extraneous variable. All that I imagine is driving pricing at the moment is pure supply and demand. AMD is probably using every last bit of 28nm fabrication allotment that TSMC has given them. Tahiti is likely not the only chip in production. There are also Pitcairn and Cape Verde and those are probably using more fab space – they’re going to be higher volume products and will need quite a bit of supply built up ahead of launch.

            • khands
            • 8 years ago

            It’s an important one for the company, though with all the sell outs I’m guessing it was much more supply/demand than anything else.

      • clone
      • 8 years ago

      Remember when AMD came out with the HD 4xxx priced $200 less than anything Nvidia had while offering comparable performance….. aahh, those were good times. But then let’s look back: AMD grabbed 5% of Nvidia’s market share at the expense of 20% of potential profit.

      AMD did this at the time because they had been losing market share for quite a while and hadn’t had a hit since the X1900 series…. they were desperate. Now AMD has released on time and on performance 3 times in a row, every time beating Nvidia to market and arguably offering the right performance each time…. 3 generations of success.. even the HD 5xxx series, which was a resounding success, and what did it get them after Nvidia was a solid 12 months late with an answer…. another 5% market share.. maybe?

      So here we are, 3 times in a row AMD grabs the lead and leads the way, 3 times in a row, and still you have ppl going “well, wait for Nvidia’s response”… it’s barely on the horizon but many are saying “wait and see”…. This card offers a 20% bump over its closest competition while offering a dramatic drop in power consumption, and still so many complaints… there are always complaints and excuses why not to buy a card. “It’s too expensive”…. yeah, AMD is desperate for coin and priced the card where it fits into the market performance-wise; they haven’t done anything wrong while also not doing any huge favors. “Wait for Nvidia”… yeah, AMD has absolutely no control over that, and those willing to wait have shown that they will wait forever, so the only option for AMD to sway them would be to sell the card at a loss, which isn’t reasonable.

      Here we are, 3 generations where AMD is on time while they clobber Nvidia on value, on features and on performance, if not at first then during the entire run of each generation, and still there will always be those unwilling to change.

      Can’t blame AMD for not taking a loss to satisfy the fickle, the skeptics or the haters.

      p.s. not once during these last 3 generations has Nvidia done anything price-wise even remotely close to what AMD did with the HD 4xxx, so really, is it AMD’s fault the market won’t reward them for it?

        • Silus
        • 8 years ago

        Wake up and stop living in AMD’s RDF. It’s pathetic. Nvidia did it alright and it was called the 8800 GT: 90% of the performance of an 8800 GTX for half the price. And they recently did it with the GTX 460 too, but well, I don’t expect you to remember that far back…

          • kamikaziechameleon
          • 8 years ago

          Yeah, the 460 is amazing; that is my current card. 200 dollars and it plays everything well.

          • clone
          • 8 years ago

          Learn how to read. I said ATI hadn’t had a hit since the X19xx series; the 8800 GT competed with the X29xx series…. and I own a GTX 460, and while it is a nice card it was also a year late, which I also said…. nothing incorrect at all in my comments. AMD’s HD 4xxx series undercut everything Nvidia had, AMD’s HD 5xxx beat Nvidia to the punch by 12 months, and AMD’s 6xxx came out on time ahead of Nvidia and competes head to head with Nvidia. Sure, Nvidia figured out a response many months after AMD released their product…. many months where they could have taken market share, but the market is what it is.

            • kamikaziechameleon
            • 8 years ago

            AMD always exploits the first-to-market nature of their launches. They price their product relative to the last gen’s price/performance trends, and that kills the momentum. Look at this card: the price of the 7970 relative to its performance fits right into that market; they don’t even price it to directly compete, lol.

            I mean, they can complain about not getting enough market share, but they fail to go head to head with LAST GEN offerings when the chance presents itself. Why not price this at 500??? Why not price this to chase the 580 down? Supposedly Nvidia is far out, so they could make this really hard for Nvidia and gather a bunch of the market. Just a thought. I know the margin is lower but the volume would be bigger.

            Oh wait, supply constraints, yeah…

            • flip-mode
            • 8 years ago

            Answer a question: if AMD can sell all it produces at $550 then why should it price them lower than that?

            • kamikaziechameleon
            • 8 years ago

            Fair point, it must be their strategy then! AWESOME! They don’t seem to promote it as that, though, and fanboys argue it isn’t. “AMD doesn’t mean to charge us this much, it’s just… “-fanboy

            I would say it’s a waste of a first-to-market advantage. I would point out what good is first to market if you can’t RULE the market in that time. Hey, if they have supply constraints and they are making the best of it, cool; too bad that has been the case the last 3 times they were first to market. If they have supply constraints and the price helps mitigate that, it makes sense. The thing is, after 3 gens of supply concerns they should really look into fixing that. If they make money and stay in business that’s cool, but there is a lot of noise about market share not being what it should. I’m saying if market share were an honest concern they could EASILY turn that around.

            Manufacturing demand by shorting supply doesn’t pan out for most products unless they are Apple or the Wii.

            Selling their whole supply at 550, nothing wrong with that. Just saying, though, that while the price for the situation might be smart, the fact of the matter is the situation itself might not be. It’s not the first supply issue they’ve had, only their third in a row.

            This is a pretty circular argument. I say the price/gen is a bad pair, you say the supply is bad, I say OK, how many launches in a row have they had supply issues, well it doesn’t matter, they can make it up in the margin, then I ask about market share and it starts all over again. Hey, does Microsoft charging for Live make it a good business move that is right for the consumer just because they can? How might that affect long-term consumer goodwill, or affect migration to the platform or away from it?

            • flip-mode
            • 8 years ago

            Fail: question not answered.

            Repeat: if AMD can sell all it produces at $550 then why should it be priced any lower than that?

            It’s a simple, clear and concise question.

            • kamikaziechameleon
            • 8 years ago

            “I would point out what good is first to market if you can’t RULE the market in that time. Hey, if they have supply constraints and they are making the best of it, cool; too bad that has been the case the last 3 times they were first to market.”

            Translation: yeah, it’s smart to sell at a premium with a “supply constraint”. It’s also flippin’ stupid that AMD has basically been claiming they can’t get market share, all while continually suffering supply constraints up until the point where they have… competition.

            Their product is good; their marketing and sales are horrible. If margin were their goal, like Nvidia, then just name an MSRP and don’t complain about supply issues. If they want market share… better address those supply issues. My question to you is, WTF is AMD trying to do, what is their goal/strategy here?

            • flip-mode
            • 8 years ago

            Strike two. I’m not throwing curve balls or even fast balls. This is easy. Why even bother responding if you’re not going to answer the simple question? Third try: if the product sells all units that can be produced at price A, why should the price be lowered at all?

            Don’t quote yourself with some non-answer you already gave before. Don’t use “translations”. Don’t talk about marketing. Don’t answer the question with a “what is AMD trying to do” question. Don’t put “supply constraints” in quotes like it’s some big lie and AMD has fifty thousand fully functional 7970 video cards in the broom closet.

            Just give a straight answer, man, or don’t even bother.

            • kamikaziechameleon
            • 8 years ago

            Don’t know what you missed; it’s strike 3 for you, lol

            “sell all it produces”-flip-mode

            “Translation: yeah, it’s smart to sell at a premium with a “supply constraint”” -kamikaziechameleon

            I was specific for a reason and you missed the point. You have to assume a volume and a sales intent. So if they produce 10 and sell ten at the premium, they don’t recoup any R&D costs; great for them, the tech is clearly new and clearly expensive, so why not sell more… oh wait, price influences that, doesn’t it. I think we’ve established market demand sits between 200 and 300 dollars (I think AMD said that themselves). If they want to make a splash, no matter how good it is they have to get closer to 300 to sell in volume, or, gasp, intro other GPUs from this gen at that price.

            Selling all they produce at any price isn’t necessarily good for them in any situation, whether at 200 or 800 dollars; it’s about volume relative to cost and overhead. The last time they discussed any of this they declared a target of the 300 dollar core consumer with their flagship card (oh, that would be the 7970). Heck, even now last gen’s flagship sits at around 350 (the 6970). I said yes regarding constraints because without an artificial constraint, regardless of volume produced, the demand will be tapered by price. Trying to stack new tech on top of old tech just doesn’t work, EVER, for any company, and here is why: OLD technology costs more to make, and if it doesn’t then the company is doing something horribly wrong. As such, the margin at a similar price point between new and old tech would be dramatically different. Meaning that while you gain a premium flagship, you are not making the money you could at your core 300 dollar price point.

            Simple answer to your question is NO; your question disregards so many different elements. You might make money today but close the doors tomorrow if charging as much as you can and constraining supply to eliminate production waste are the only two variables considered. It’s funny, because I’m sure you thought my answer would be yes, but I think your question is a basic fallacy in that it ignores too much.

            EXOTIC FISH BREEDER I AM. I can sell the fish I breed for as much money as I can fetch to move all the fry, but if that doesn’t balance with my overhead I’m losing money. Often it’s more affordable to move either fewer fish for more money or more fish for less money (or a mix) and ramp up breeding depending on market demand, overhead costs, and what is fixed vs what is variable.

            • flip-mode
            • 8 years ago

            When you don’t have a good answer, ramble at length and use terms like volume, overhead, splash, core consumer, new tech vs old tech, production waste, the question is a fallacy, breeding fish, fixed and variable costs.

            • kamikaziechameleon
            • 8 years ago

            “Simple answer to your question is NO”
            I’m confused. I was trying to explain where I draw my connection: I produce my own product and sell it; the R&D, if you will, is the importing or acquisition of new and rare fishes. Most businesses with a production aspect follow the same fiscal model, or at least understand the different ones available.

            I presented an answer that says NO and explains why, because selling at a premium without a notion of volume or overhead makes no sense. A business has costs, and you determine those based on volume and overhead, both fixed and variable… Margin means nothing without these.

            So if something like this were a basic model for profitability:

            M = P x V - O
            M: margin of profit
            P: price
            V: volume
            O: overhead

            You asked me M = 550 x V - O, and I would say I have no flipping idea; without a fixed volume, or at least some sort of production limitation, it becomes irrelevant, hence my initial remark about supply constraints. (Assuming no volume limitations, constraints or associated costs is unlikely, though they could be extremely small or marginal in a favorable situation.) This is just a made-up equation that I wanted to use to show that you gave me one variable in a 4-variable equation (and it’s really more than this, since P is not fixed; it’s determined by the requirements of overhead and by volume limitations) and asked, well, what is the answer. Discounting volume and overhead, you’re saying that P being the greatest it can be, regardless of volume, is best; yeah, sure it is, assuming price doesn’t dictate volume based on demand.
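
            To make that concrete with entirely made-up numbers (these are not AMD’s actual volumes or costs), a lower price can produce a higher margin if volume responds strongly enough:

            \[
            M_{550} = 550 \times 50{,}000 - 10{,}000{,}000 = 17.5\text{M}, \qquad
            M_{450} = 450 \times 80{,}000 - 12{,}000{,}000 = 24\text{M}.
            \]

            Whether volume actually responds that strongly depends on demand and production capacity, which is the crux of the disagreement here.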

            • flip-mode
            • 8 years ago

            You answered a “why” question with “no”. WTF. No what? No the price should not be dropped? Presenting completely formed thoughts would probably be helpful.

            Then you go into 4 variables. There are not four variables to the question “if a product sells all units that can be produced at a given price, why should the price be dropped”. Production is not a variable because it is assumed to be maxed out – cannot be increased. There’s no need to bring margin and profit and overhead into the question. The only variable is price. Sales are 100% – all you can make. Production is 100% – you can’t produce anymore. Overhead, margin, and profit are completely outside the domain of the question.

            Look, here are some example answers to the question “if a product sells all units that can be produced why should the price be dropped”:

            A. gain market share
            B. pressure competition
            C. gain customer loyalty
            D. increase production volume

            Those examples all embody motives to drop prices and they’re all problematic as follows:

            A. there will be no market share increase if you’re already selling everything you can

            B. you’re already pressuring the competition as much as you can if you’re selling everything you can

            C. lowering prices so 10x more people want one will surely result in SKU depletion. this means that not only will the cheapskates not be able to get one and still be disappointed, but those individuals that would have bought at the higher price won’t be able to get any and they’ll also be frustrated. Frustrated people eventually get mad, and mad people don’t stay loyal.

            D. if you’re already making as many as you can then you can’t increase production. Also – and this is radically important – AMD has more 28nm chips in production RIGHT NOW and those are the chips that are destined for the mainstream market segments where high volumes are absolutely necessary. You can’t call your office assistant and tell him to run down to Walmart and buy some more capacity. TSMC is ramping 28nm and taking orders from AMD and Nvidia. AMD is probably using up every last bit that has been dedicated to them. Nvidia is also likely in production of some kind of 28nm chip right now and they have definitely contracted with TSMC for fab capacity. From what I have heard, TSMC favors Nvidia because the two companies have a long history and Nvidia has always been the larger order (not only are Nvidia’s chips waaay bigger, but Nvidia still has a higher share of the market than AMD, so Nvidia probably gets a significantly larger portion of TSMC’s capacity).

            • kamikaziechameleon
            • 8 years ago

            Ha, lol, the first response is legit; then for some reason between then and now I thought I was addressing this in my head:

            “if AMD can sell all it produces at $550 then should it price them lower than that?”

            yeah retardation on my part.

            I’ve been missing that Why in my inner dialogue, lol. I have failed in all subsequent questions. lol.

            I was hung up on “sell all it produces”. Please apply:

            “if AMD can sell all it produces at $550 then should it price them lower than that?”-my imagination,

            to all of the subsequent ridiculous points. I was having something of an inner monologue I guess. I’ve been successfully arguing with myself.

            • kamikaziechameleon
            • 8 years ago

            I need to go outside :^/

            • flip-mode
            • 8 years ago

            LOL. I always need to go outside.

            • kamikaziechameleon
            • 8 years ago

            I agree with what you actually said and I would like to withdraw my irrelevant BS please.

            • flip-mode
            • 8 years ago

            You’ve got my respect for the ability to abruptly mea culpa once you realize it. +1.

            • kamikaziechameleon
            • 8 years ago

            Thanks. When you’re wrong you gotta own up to it. The #1 problem with lots of these comment arguments is that people never admit it. I was not just wrong, I was irrationally railing against a phantom, lol.

            • kamikaziechameleon
            • 8 years ago

            “D. if you’re already making as many as you can then you can’t increase production. Also – and this is radically important – AMD has more 28nm chips in production RIGHT NOW and those are the chips that are destined for the mainstream market segments where high volumes are absolutely necessary. You can’t call your office assistant and tell him to run down to Walmart and buy some more capacity. TSMC is ramping 28nm and taking orders from AMD and Nvidia. AMD is probably using up every last bit that has been dedicated to them. Nvidia is also likely in production of some kind of 28nm chip right now and they have definitely contracted with TSMC for fab capacity. From what I have heard, TSMC favors Nvidia because the two companies have a long history and Nvidia has always been the larger order (not only are Nvidia’s chips waaay bigger, but Nvidia still has a higher share of the market than AMD, so Nvidia probably gets a significantly larger portion of TSMC’s capacity).”

            Yes… so TSMC is the only kid in town to fab at? I mean, AMD used to have its own fabbing facility, but they kinda got rid of it. Vertical integration is EXTREMELY powerful when done right; look at companies like Amway, it’s mostly vertically integrated. Vertical integration allows for a special process that can’t be shared or easily copied by competitors. I remember how many people proposed that separating out the businesses would be a good move; I was thinking it kinda destroyed a large part of what made AMD special.

            • flip-mode
            • 8 years ago

            [quote<]so TSMC is the only kid in town to fab at?[/quote<] The short answer to that question is, yes. The longer answer: First off, I'm not sure if there is any alternative at all to TSMC for 28nm circuits right now, much less 28nm circuits that meet the right characteristics of power use and clock speed. Secondly, you need to adapt the circuitry to the fabrication process - so two manufacturers would require two variations of the same circuit design. You cannot just take the exact same photo mask and FedEx it from TSMC to Global Foundries. You need a completely different photo mask for Global Foundries, customized for the peculiarities of Glofo's manufacturing process. I hear those photo masks are insanely expensive. Thirdly, there would need to be enough spare capacity at Glofo to make it worth it for AMD to source production from two different manufacturers. Glofo doesn't have 28nm at all right now, to my knowledge. I think Glofo is working on 22nm or whatever the next half node is. But even if Glofo has 28nm, do they have enough to spare to make it worth the cost of doing the mask, bug hunting the Glofo 28nm process, and sourcing from two MFRs? AMD's not going to do that if Glofo can only provide an additional 15% of the supply. I'd imagine it would have to approach some threshold, something like 25-30 percent of total production, for it to be worth it. And there may be even more to it than that, but that's all I've got.

            • kamikaziechameleon
            • 8 years ago

            You’re right, it won’t happen anytime soon. I get that there is no resolution in the short term, but it has been an issue long enough for them to look at resolving it. Perhaps entering into a partnership with a foundry to expand and broaden capacity, or something to that effect. I know it won’t happen today or tomorrow, but maybe in a gen or two they could have their supply-side problems under control.

            • clone
            • 8 years ago

            1st off, this card comes with 3GB of RAM and sells for less than the GTX 580 3GB, so AMD did indeed offer up a cheaper high end…. your whole position is worthless.

            2ndly, you overstate the value of volume in regards to these cards and this price bracket; these cards don’t sell in volume unless that volume is counted in the tens, and if AMD owned all of the high end you still wouldn’t see it on a spreadsheet, save one done exclusively for the high end.

            AMD’s latest card is without a doubt a better card; why should they price it identically to an inferior solution?…… While on the CPU side one could argue AMD is not-for-profit, the reality is they wish they were more like the GPU side. Why are you so focused on complaining that AMD is the company that absolutely has to lead the way in price reductions despite them having the better product?

            That said, like it or not, it was only a few years ago that Nvidia was selling $1800+ graphics solutions for gamers (2x 8800 Ultras with an SLI-exclusive Nvidia mobo), and it took AMD’s launch of the HD 4xxx series and a year-long delay with Fermi before Nvidia gave up the practice. But hey, why not crap on AMD for not doing that and instead selling the fastest single GPU for $549.00, because that is so out of reach at $50 above Nvidia’s notably inferior card.

            By the way, just for the sake of discussion, the high-end single GPUs back in 2006-2007 used to sell for up to $830.00 (Nvidia 8800 Ultra)… plus tax,… think about that for a moment. Accounting for inflation, that $830 is now, after 6 years, more like $950-$1000, and AMD is pimping the high end at $550.00 and here you are complaining.

            • kamikaziechameleon
            • 8 years ago

            “1st off, this card comes with 3GB of RAM and sells for less than the GTX 580 3GB, so AMD did indeed offer up a cheaper high end…. your whole position is worthless.”

            If RAM influenced performance and price in a linear fashion, this might matter. The truth is the 580 3GB is not a “flagship” or a real offering, just a RAM gimmick for most users. At that, it’s not cheaper; the MSRP is the same between them: 550! There are some fancier versions that go for more, but you can get the Gigabyte version for 550 or less, along with a couple of others.

            “That said, like it or not, it was only a few years ago that Nvidia was selling $1800+ graphics solutions for gamers (2x 8800 Ultras with an SLI-exclusive Nvidia mobo), and it took AMD’s launch of the HD 4xxx series and a year-long delay with Fermi before Nvidia gave up the practice”

            Yes, I believe I’ve stated such info in other posts myself. We are talking about the last 3 gens and everything after the 4xxx series.

            “2ndly, you overstate the value of volume in regards to these cards and this price bracket; these cards don’t sell in volume unless that volume is counted in the tens, and if AMD owned all of the high end you still wouldn’t see it on a spreadsheet, save one done exclusively for the high end.”

            Yeah, I know. That is why I pointed out that if they want market share, a price cut might be a good move. Remember, they’ve never officially veered from the target 300 dollar flagship strategy they announced with the launch of the 4xxx line. They just keep making excuses like it’s an accident when prices vary dramatically, etc.

            “AMD’s latest card is without a doubt a better card; why should they price it identically to an inferior solution?…… While on the CPU side one could argue AMD is not-for-profit, the reality is they wish they were more like the GPU side. Why are you so focused on complaining that AMD is the company that absolutely has to lead the way in price reductions despite them having the better product?”

            Do you buy technology on a regular basis? By virtue of being next gen, the card should not fit into the last gen’s price/performance scheme. There has been extensive discussion both here and on the forums about such merits, and they don’t really apply to healthy companies in any annually updated industry. Stuff always gets better; your basic observation is a complete fallacy.

            “By the way, just for the sake of discussion, the high-end single GPUs back in 2006-2007 used to sell for up to $830.00 (Nvidia 8800 Ultra)… plus tax,… think about that for a moment. Accounting for inflation, that $830 is now, after 6 years, more like $950-$1000, and AMD is pimping the high end at $550.00 and here you are complaining.”

            I don’t think it’s so much complaining as asking why AMD acts like a grade schooler. What other company declares one strategy and then plays the professional blame game all the time? I mean, a card should cost what they mean for it to cost, not have its price dictated by factors that they generally control. Their marketing and sales departments don’t really elevate the brand with, “Well, we wanted to price the whole line cheaper, we really did, remember when we said that 4 years ago, but you know how technology is, it’s always new and new costs money. On top of that, our mean supply chain guys refuse to give us enough NEW product, probably because they are so busy dragging out our old designs that we don’t like rolling back in a timely fashion. What can we say, it’s all out of our control.”

            550 for a card with this performance is fine. I won’t buy it, but honestly there is nothing fundamentally illegal or immoral about it; it just doesn’t make any business sense if they want market share, as their realignment announcement from almost 4 years back was all about focusing their engineering on their core consumer at the 300 dollar price point. If they can rule that territory, then hey, they’ll be in the best position if their products rule that price point. It just makes sense. Who cares about 500-dollar-plus cards? Not that they won’t offer premium products, rather that they’ll only fill that space with multi-GPU designs.

            As you cited, they won’t sell or have to make many cards at that price point. It’s not a sincere offering. If they wanted first to market to matter, this wouldn’t be a soft launch, and it would mean a card hitting that owns the 200-300 dollar arena.

            Ain’t nothing wrong with this card, but if you want AMD to go anywhere, they don’t make any sense as a long-term viable company with such nonsensical maneuvers over the last 3 years.

            • clone
            • 8 years ago

            We’re done. You have no business case, your points are outright terrible and flawed, and they most definitely don’t offer up a valid long-term strategy.

            You started by crying foul that AMD did not price the card under the GTX 580, instead claiming they were raising the bar; you then went on to claim the good business model is to sell the card for less….. like something you can afford.

            The long-term strategy is as simple as it is effective: you take the lead, then run with the positive PR, selling high with high margin until that goes soft or you run out of time, then you introduce lower versions of the card to extract more value from the architecture.

            You don’t come out in the middle and cut your high-end sales, and you don’t come out in the middle unable to showcase the absolute potential of the design; both are disasters in waiting.

            As an example, Seagate and likely also Western Digital made record money this year during a massive industry-wide hard drive shortage…. because they maximized their margins to compensate for the lack of product.

            • kamikaziechameleon
            • 8 years ago

            “You started by crying foul that AMD did not price the card under the GTX 580, instead claiming they were raising the bar; you then went on to claim the good business model is to sell the card for less….. like something you can afford.”

            I think that selling products people buy is a viable strategy 😛 But otherwise I think there is a gross generalization/lumping of the disparate opinions and rebukes on here. Some are wishes/opinions/facts/observations/ideas, etc.

            “As an example, Seagate and likely also Western Digital made record money this year during a massive industry-wide hard drive shortage…. because they maximized their margins to compensate for the lack of product.”

            You point out a short-term business maneuver vs what I perceive as a long-term trend from AMD. Not really comparable. Money now doesn’t mean money later. While the shortage for the HDD makers was clearly temporary, that’s not so much the case with AMD or Nvidia these days; they’ve gotten pretty set in their ways. I guess I’m not trying to say I don’t like AMD, as I feel they are more or less missing out on an opportunity because of mismanagement of supply chain and production issues. Fine, leave the 7970 at this price and intro a 7950 at a more mainstream price point; probably not happening. As cited, supply chain problems aren’t typically as much an AMD engineering shortfall as a management problem, and we know they have plenty of that.

        • kamikaziechameleon
        • 8 years ago

        There is much correct in your observations. The issue is that, following up the 4870, every gen since has launched with major supply issues and “artificially inflated” prices, with AMD themselves always citing these supply constraints (similar to this launch). That has caused a lot of apprehensive consumers to hold out for price drops that never really came until after the Nvidia answer or the end of that gen. I know I was one of them. I had a 4870 and wanted to go to another AMD card after that awesome experience, but AMD basically unofficially promises us price cuts that either never come or only happen after Nvidia makes them honest. They lose that first-to-market advantage by basically saying, hey, wait 1-2 months and it will drop 100 dollars or more. The 6870 launched at almost 300 dollars, remember; that is a bit much for that card after seeing where it fit into their overall lineup. While AMD cards see fluctuations of 100 dollars or more over a lifespan, Nvidia products hit a price and kinda chill there, give or take 50 dollars.

          • flip-mode
          • 8 years ago

          Ugh, now you’re getting ridiculous. The HD 6870 launched at $240 – that’s just $10 more than the launch price of the GTX 460 that you seem to have no problem extolling the virtues of, and yet the HD 6870 easily gives an extra $10 worth of performance.

            • kamikaziechameleon
            • 8 years ago

            I recall the supply constraints causing higher prices, as I built a friend a computer with a 6850 in it that was bought at launch for 240. There were a couple of weeks there where the 6870 was retailing for almost 300, seriously.

            • flip-mode
            • 8 years ago

            So you are saying that AMD somehow financially benefits from supply limitations and point-of-sale price hikes? LOL. You realize that right now I can go find a bunch of cards from both Nvidia and AMD that are selling for over msrp?

            • kamikaziechameleon
            • 8 years ago

            Yeah, but it’s a matter of magnitude. We’ve had varying levels of price inflation and manipulation or whatever for 3 gens now from AMD, and it has been twice as much as, say, Nvidia’s (and Nvidia products over MSRP are usually overclocks, pretty substantial ones). MSRP on these things is a very amorphous thing; the reason AMD stands out is because they’ve had products soar 100 dollars or more past their “official” MSRP.

            This time the MSRP is set high intentionally, and this time AMD and their zealots say “well, it’s fast and there are supply constraints and it’s new tech”. Well, by virtue of the fact that this is a NEXT GEN CARD, 2 of those had better be true; at that, does it justify the price? As cited before, WHO GIVES A FLIP IF IT’S FASTER, IT’S NEXT GEN, IT BETTER FLIPPIN’ BE FASTER! It ought to, by virtue of its existence, do all the stuff it does; if it didn’t, we wouldn’t buy it. The nature of tech is that the new product replaces the old; the new HTC phones cost the same as the old ones used to, and if the old ones want to stay around, they discount.

            For some reason AMD thinks they are immune to this. Fastest don’t mean crap; they specifically said 4 gens ago something to the effect of, “from now on they will target all single-GPU designs at the 300 dollar and below price point and rule the upper echelons of GPU performance with dual-GPU designs.” 3 gens of creeping prices later, and they just fib their way there by making it sound like it’s not the way it’s supposed to be, but guess what, it is.

            If they can’t control their product price, they have issues. It’s been 3 years of this crap!

            HOLY CRAP, THIS THING HAS A LIFE OF ITS OWN, THE PRICE JUST KEEPS CLIMBING UNCONTROLLABLY!

            If they set it high and changed their whole strategy, fine; I think that is a bad strategy not just for their competition with Nvidia but for the survival of discrete GPUs as a whole. That isn’t our argument or disagreement, though. The original discussion is price trends by AMD, and then people keep saying either that they deserve it (something I’m not terribly convinced of) or that they can’t help it (something unacceptable after the last 3 fumbles). But at this point in time it is said to be the result of a long series of things that make AMD look like an incompetent company. Would you go to a company that can’t control its process in another field? I’m not attacking AMD with my observations, just calling them out, or more specifically the fanboy rationale behind the acceptance of this.

            I can’t think of a single tech product that has trended up by as much as $50 per year. Honestly, their premium GPU was supposed to be 300 as of the last time they declared product strategy to the public. The 550 premium MSRP is a fair bit beyond that. Tech hasn’t trended up that much in a long time. I just can’t discern a GOOD reason for this.

            Is the GPU a decent price/performance offering? Yeah, by last gen’s standards. Each gen is supposed to change those, so any judgment of this product does seem premature, even relative to AMD’s own future offerings, when it’s setting itself up to be judged by old standards. That’s like me saying I’m pretty smart compared to a 6th grader; well, congratulations, I’m an adult, I should hope so.

            First to market means nothing if the product doesn’t act like it belongs there.

            • kamikaziechameleon
            • 8 years ago

            Wow, that is a lot longer than anticipated.

            • flip-mode
            • 8 years ago

            If, in a free market, AMD is selling these products for less than the market will bear for the level of production that they can provide then that’s stupidity on AMD’s part. If there’s no price drop until Nvidia launches a product then that means AMD either priced it right or maybe even could have priced it higher.

            • no51
            • 8 years ago

            I remember the 285 staying around launch MSRP for a long time, even after the launch of the 4890 and the 5870 and eventually the 480.

            • kamikaziechameleon
            • 8 years ago

            yeah first gen in a long time that they priced right. That was the beginning of their better pricing practices.

            • no51
            • 8 years ago

            [quote<]yeah first gen in a long time that they priced right.[/quote<] to you maybe. but i wanted one, but couldn't justify the price over the marginal gain over my G92GTS sli rig.

            • kamikaziechameleon
            • 8 years ago

            It's not so much about the price, as in whether I could buy it, as about the upkeep of the MSRP. It really helps with product identity.

            • kamikaziechameleon
            • 8 years ago

            Hey, if it's their strategy… fine. Not a good one IMHO, but still better than seemingly falling victim to their own inadequacies as a company 3-plus years in a row (as their marketing and fanboys would have you believe). There is no law being broken, no fundamental principle of economics being ignored. I was just articulating my notions on their marketing and pricing trends and what it means to me as a potential consumer of their products.

            • clone
            • 8 years ago

            You aren't a potential consumer. The high end back in 2006 was the 8800 GTX Ultra listing at $830; the comparable card from AMD in 2012 is $550, and you are complaining that AMD is making this huge mistake while ignoring that the HD 7970 comes with 3GB of RAM and sells for less than the GTX 580 3GB, which kills your whole position.

            It's a double standard, although I'm sure you complained about pricing back then just like today. The simple truth is you aren't in the market for an AMD high-end card.

            that doesn’t mean AMD is making a mistake, it just means you were never going to buy a high end video card, if you can’t afford to justify $549 you weren’t going to go $499.

            • kamikaziechameleon
            • 8 years ago

            WTF has this broken down to? You make no sense… you don't see or address a SINGLE notion I make; you make and remake the same arguments that have been presented like 10 times, and you don't realize what I'm even discussing.

            Yeah, sure, AMD is great, they are perfect, they can sell cards for what they want, no problem. If car companies or other tech companies did this they wouldn't be in business. No tech company intros new tech without adjusting old tech offerings on the price structure. Their price structure makes it so they almost never directly compete with anything Nvidia offers above the 250-dollar price point. This should, but doesn't, change that trend. If it's so flipping apparent that they beat out Nvidia, why don't they ignore Nvidia for the next 2 months and start setting up the next-gen price… oh yeah, right, supply constraints. My bad.

            Truth is, they sure aren't gaining any ground here. Not losing, but that isn't the point, is it?

            I don't even know what the discussion is anymore. I was originally just putting out thoughts. Few people seem to acknowledge my thoughts; I clearly hear and see your perspective. It's kinda covered to a lesser extent in the review and echoed by all the original sub-posters under my original comment. Yeah, they make few 7970s and will sell them all for a fair premium. Probably good for short-term business. I'm starting up a fish-breeding business on the side while working full time, so I see the divergence between short-term business and long-term business. It's easy to let short-term gains get in the way of a very strong long-term strategy. Just a thought that there might be another way to approach this OR that they might have some fundamental underlying issues with regard to supply chain, fabbing, or upper management that have been holding them back for a long time.

            • flip-mode
            • 8 years ago

            You’ve posted 80 times – trust me, people have seen and addressed your posts. Your posts are all complaining about the $550 price. You want it to be $400 just like the 5870 and 6970 were. But you can’t answer the question why AMD should price it lower than $550 if they’re selling every chip they can make for $550. You want the price lower because that’s what *you* can afford; you want AMD to serve *you* rather than serve the market. It would be absolute stupidity for AMD to sell these for less money just to make *you* happy even though they could sell every unit they can make for another 33%. Hey, great idea – AMD, the for-profit, free market company should just arbitrarily stop pursuing profits in order to make *some guy* happy by letting him have a card he can’t really afford. Meanwhile, with the lower price, ten times more people will try to buy one and ten times more people will be pissed off because they’re out of stock. Gee whiz, that’s brilliant! And hey, so what if Nvidia keeps pricing their cards at $500 and higher – we don’t have any problem with that because things work different for Nvidia, we’ll take whatever they give us.

            • kamikaziechameleon
            • 8 years ago

            “Your posts are all complaining about the $550 price.”

            I think they are all about analyzing the business, or addressing people's limited comprehension of what is good business. AMD has a better product for all practical purposes. Given what we know, they have a problem! They are not getting a fair share of the market while offering what I consider to generally be better-engineered products.

            I don't think it's an outright engineering problem; I'd point more directly at a marketing/sales/management hybrid. They can't secure the slots to produce their products; they are first to market (in most industries this is a flipping huge deal) yet they don't really shake things up… oh, the list of goofy issues is long. They could sell a 1,000-dollar card and it wouldn't resolve any of the points I've put forward, and really neither would a 300-dollar card if they do have the supply issues they talk about every 6-12 months. Though I started this by just pondering the price trend, the more we bicker and poke at what might be going on, the more it seems there is more than price happening here. There are some recurring logical fallacies put forth; "it's better so charge more" has never worked in such simplistic terms for tech.

            Yeah, AMD, charge 550; if they sell out their production capacity at that price, kudos. Does that meet their requirements to balance things? Does that cover overhead costs? I mean, the supply issues have been dogging them for a while; just saying it might behoove them to try and resolve that, as it has really hindered their early market arrival every time. Is my observation a fallacy?

            • clone
            • 8 years ago

            I think the break in his logic is that he doesn't understand that market share is fluid, but not that fluid.

            If AMD sold the cards for $100 and everyone bought one, both companies would die: Nvidia due to lack of revenue, AMD because they'd lose money on every card, and lastly because everyone would buy now and not in the immediate future.

            Servicing the market is always tricky. Harley-Davidson rarely sells more than 100,000 motorcycles despite the demand; they have the luxury of adjusting prices to suit, extracting more coin from each bike when they can, while at least never selling at a loss, and then maximizing profit off accessories.

            Seagate and Western Digital jacked the margins so high on small volumes because their manufacturing went for a swim and an extended mud bath, they had no other choice.

            It works for now because there is no one else in the market, but long term, if margins remain too high, competitors will fill in the gap, so as production increases they will drop prices again to remain the only game in town.

            • kamikaziechameleon
            • 8 years ago

            I understand that; I never proposed 100-dollar GPUs anywhere in this thread, lol. I agree with all you're saying, I just don't think it's something that works so much for tech products anymore. There are marketing reasons; there are lots of issues. I feel like I read what you wrote and it's what I wanted to write, but with different strategies/conclusions at the end. 😛 The price trend you talk about has been done in a more successful fashion when they don't have so many management-related supply shortfalls.

            What you're talking about is new tech debuting at a standard price and old tech getting discounted (think phones, think CPUs, think about most tech products). Nvidia does this but doesn't seem too fond of having different gens on the shelves at the same time, so they relabel a 460 a 560 and update the drivers, lol. AMD has not really followed any organized launch schedule or price restructuring of any kind besides trending their GPUs up each launch. I always know that Nvidia will have a 560-esque offering at the 200-250 price point; AMD juggles lots of products in that price range. The 6870 started there, now the 6950 sits there; that is a big change in offerings in one gen prior to the launch of next gen, and now that the 7970 is out I have no idea what they are thinking of doing. It seems supply constraints and competition currently dictate price. I don't see why me saying "fine, the price is solid then, but they really should look at supply problems" is such a contested point. A company should set their product's price, not have variables they SHOULD control set it for them, especially when first to market. They're doing well when they could be owning the market.

            If the 7970 was indeed designed to target the core-consumer 300-350 price point, starting it at 450 would let them continue the tradition of dropping 100 or so inside a generation prior to the next-gen launch and still steal the market. Too bad they have supply constraints; they'd be throwing away money to hit that price point with such a fixed inventory.

            I probably wouldn't get it at 450, but you know what, I can think of a lot more people who would. God forbid they intro the whole line at the target price design points and steal the market. Saying there is no gain in finally having a handle on supply and actually controlling their price makes no sense to me. Heck, if they could get out a 7950 at 350 it's game over; I'd buy that, and so would most people except the most diehard Nvidia people.

            • clone
            • 8 years ago

            We're done. You have no business case, your points are outright terrible and flawed, and they most definitely don't offer up a valid long-term strategy.

            you started by claiming foul that AMD did not price the card under the GTX 580 instead claiming they were raising the bar, you then went on to claim the good business model is to sell the card for less….. like something you can afford.

            The long-term strategy is as simple as it is effective: you take the lead, then run with the positive PR, selling high with high margins until that goes soft or you run out of time, then you introduce lower versions of the card to extract more value from the architecture as required.

            You don't come out in the middle and cut your high-end sales out from under yourself, and you certainly don't come out in the middle unable to showcase the absolute potential of the design; both are disasters in waiting.

            what AMD is doing is the reason why companies race to be first to market.

            • kamikaziechameleon
            • 8 years ago

            “you started by claiming foul that AMD did not price the card under the GTX 580 instead claiming they were raising the bar, you then went on to claim the good business model is to sell the card for less….. like something you can afford.”

            I'm not aiming for claims, more observations. This was originally a bit of a sour observation about pricing trends (correct me if that is wrong); then we got into the goofy discussion of "it is better so it should cost more." If that is the case, every other tech company is doing it wrong (please tell me why we should stack tech, and how this helps AMD compete or exploit its market-timing advantage). From there it has kinda meandered, as there are several different sub-threads here, though it's more about "is this a viable long-term strategy for AMD?" I think the engineering is relatively peerless. Nvidia has not focused much on making efficient chips beyond the occasional hit like the 8800 GT and the 460 GTX. When talking about these different elements, my stances are not really related. While my notion of pricing trends from AMD is pretty much based in frugal fanboy disappointment, the discussion of stacking tech and prices is just a matter of the market and history; then it came down to business and what we know, and from there I think we are seeing a disturbing trend in marketing/sales/management.

            Please attack my disparate perspectives, but don't bunch them together and use them as one stance, for that is not what they are. Opinion/observation/fact/idea are all very different, and making them one is not productive to this conversation. Sorry for getting heated earlier; I was just a little put off by stuff over here in the real world, goofy morning.

            • clone
            • 8 years ago

            It's better, and at $550 it forced Nvidia to lower the selling prices of the GTX 580 3GB.

            np I am solid with discussions, it’s the web, I usually don’t get heated and shouldn’t either, not worth it, good discussion and I agree to disagree.

            NOTE UPDATE: edited the post to reflect recent changes. Apparently prices on GTX 580s have dropped and now they are priced very close; this is a win-win for the consumer, as AMD has forced Nvidia to lower its prices and redefined the high end in the process… competition at its best.

            • kamikaziechameleon
            • 8 years ago

            I wonder how Nvidia will maneuver their line since there will be such a gap prior to their launch. Knowing Nvidia they haven’t of late liked leaving last gen and current gen in the market simultaneously. If they drop prices to move inventory on the lower cards we might see a more even launch of high end and bargain products, I know the buzz is different but it would be a good way for them to counter AMD’s momentum in a huge way.

            I guess the thing I expect myself to think in this situation is "oh, no need to consider 2/3 of the market, it's just outdated and boring, I'll grab the new cat's pajamas"… With no news on whether the 7950 will make an appearance in a similar window with a more approachable price around the 300-dollar mark, it doesn't really make the new AMD product relevant for most consumers; rather, if Nvidia keeps moving products down to counter, it could end up helping them clear out old inventory and get ready for their stuff. I stand by the notion that, given the supply constraints, the product price is fine; I'm just annoyed they let themselves get trapped. With supply holding them back, they can't really move in and capture the market no matter what the price is.

            I feel like there is some misunderstanding taking place, because I would agree with 2/3 of what you say; we just draw differing conclusions or prioritize things differently, lol.

            • clone
            • 8 years ago

            ugh I had this response made and clicked submit and for whatever reason I was no longer logged in.

            It's pretty common for the bigger debates to be over the finer points while overall there is agreement, and as they go on, opinions become entrenched and less tractable.

            So long as AMD is guiding Nvidia, there isn't much they can do. Nvidia is in a real pickle of a situation: they are focusing on enterprise at the expense of desktop, while AMD is releasing product on time and focused on desktop.

            Nvidia has done pretty well so far for many reasons, but Kepler is an important piece for them, while Tahiti, though notable for AMD, isn't what I'd call an end-all-be-all product.

            • kamikaziechameleon
            • 8 years ago

            Yeah, Nvidia has to maintain their foothold in the desktop space, as it's their main revenue stream. I'm honestly more surprised they aren't being more proactive and aggressive with their product pricing. Beyond the 560, the rest of their products are just a bit more than I'd like to pay for their respective performance.

            For a good laugh, look up at the misguided conversation between me and flip-mode. Look at the conclusion, lol, it's great.

          • clone
          • 8 years ago

          I covered this: people who don't want to buy won't; they will look for any excuse not to buy… "I'll wait for price drops", "I'll wait for Nvidia", "I hate AMD"…. Now you mention that it's AMD's fault for offering up reasons why the cards are priced the way they are….. uncertainty in the market and all. You were looking for reasons not to buy, admit it.

          I've owned more than 50 gfx cards over the years, from the old 3dFX to ATI to Nvidia to Kyro; I have no loyalties. I currently have a 460 GTX that, while I'm happy with it, I will likely be replacing.

          The card in question is 20% faster than Nvidia's best, consumes considerably less power in the process of clobbering it, and generates far less heat while running quietly… it's a superior card in every way, and armed with this, AMD prices it $50 higher than Nvidia's best and people are complaining, whining, and crying foul.

          BTW, SemiAccurate is claiming that Nvidia may not have a new part until 2013, and if true, I'm certain there will be those who wait a year to save $10-$20 on an MIR price drop: the fickle, the skeptics, and the haters amongst them.

            • kamikaziechameleon
            • 8 years ago

            I was talking about the consumer thought process. If a company says it's dropping prices soon, are you gonna buy the product now or wait for that 50-100 dollar drop in a month or two? Nvidia does away with this by maintaining MSRP much, much better than AMD. It's just a fact. Not saying I love Nvidia cards, blah blah blah. I have a 460 but would really like a 7950; but if AMD launches a product and tells me to hold onto my money because the price is going down any day now, I'll probably get the Nvidia product if I'm not into waiting. Yeah, I might not get the greatest deal down the road, but the uncertainty that AMD creates around their products is NOT GOOD in terms of marketing and sales. Conversely, this is an issue Nvidia had with the 9XXX series and prior.

            • sparkman
            • 8 years ago

            If the 7970 is supply-constrained, available only in limited quantities from the fab, then AMD might as well price it just below the competition. Because the other option you suggest is NOT POSSIBLE: to drop the price, grab market share, rub it in nVidia’s face, and make the “lost” money back in volume. Because there is no volume from the fab to support that move.

            And guess what? AMD has priced their card $50 below the competition, just enough to look like a nice little deal on a faster card, probably allowing them to sell out their short-term fab capacity. What does that tell us? The 7970 probably IS supply-constrained.

            You try to blame this lack of fab capacity on poor AMD planning, but this is the new reality in the 2010’s with process sizes rapidly approaching the size of a single electron. Semiconductors barely work at modern scales, and the fabs seem to be struggling to keep Moore’s Law in motion. There is nothing AMD can do about this. Just hope we avoid the 5000-series pain where all product instantly sold out everywhere.

            • kamikaziechameleon
            • 8 years ago

            Checked prices: the 7970 (not available, but MSRP announced) will be 550… 580 1.5GB is 500… 580 3GB is 550… cheaper than what? EDIT: looking at other comments, it seems the changes are from the last 24 hrs.

            please see other posts with regards to pricing/volume/overhead.

            • clone
            • 8 years ago

            I don’t disagree entirely but out of 15 generations of GFX cards you are speaking of the HD 5xxx generation …. 1 gen where AMD launched a little high but mentioned that prices would come down as volumes rose… at the time I’m sure they were thinking with an eye on Nvidia not knowing ahead of time that Nvidia would be a year late with an initially uncompetitive product.

            no one at AMD said wait to buy, nor did they say $50 to $100 off, what AMD did say was in response to the price rise above the HD 4xxx series little more.

            you do realize that the HD 7970 comes with 3gb’s of ram while selling for less than the Nvidia 580 3gb….. this kills your position as AMD is doing exactly what you wanted them to do.

            The fact that you waited more than a year, even after Nvidia finally released Fermi and it initially turned out to be trash, causing you to wait another 4 months for the 460 to launch, shows that you were likely going to wait until you could choose between the 2 companies using your own personal criteria for what is and isn't "fair," which, it could be argued, doesn't happen until Nvidia offers something worthy, at which point you blame AMD and buy Nvidia….. lol.

            Given the double standard, there was nothing AMD could have done to sway your mind in this regard. As an opposite extreme, I personally bought 2 HD 5xxx cards while waiting for Nvidia to release Fermi, was happy with both, and it just happened that I sold the 2nd HD 5xxx when the 460 came on the market, so I scooped that up instead and used it until this Xmas, at which point I gave it to my nephews along with my comp. I'm currently using a 6600 GT until I get something new, which, given how late Nvidia is going to be, will most likely be AMD-based considering how good the latest card looks.

            • kamikaziechameleon
            • 8 years ago

            “you do realize that the HD 7970 comes with 3gb’s of ram while selling for less than the Nvidia 580 3gb….. this kills your position as AMD is doing exactly what you wanted them to do.”

            Same price, 550 for each. Not sure if this is a new development, but I price-checked it after all the citations of this.

            • clone
            • 8 years ago

            I price-checked last night at 2 AM; the AMD was cheaper than all of the Nvidia 3GB cards…. looks like adjustments are happening.

            • kamikaziechameleon
            • 8 years ago

            I believe it, I honestly had no idea there were two standard offerings of the 580, lol.

            • kamikaziechameleon
            • 8 years ago

            The 68XX cards launched with a major markup beyond the MSRP (almost 300 for the 6870 for a bit; I built a friend a machine with a 6850 in it at the time) because of some supply issues. I think the 6970 launched at 450, didn't it? It's now settled to its intended 350 and has been there. The 5XXX gen was a mess because, beyond cost, they were nowhere to be seen for so long, lol. They've always made a lot of noise about supply issues with regard to most moves they make leading up to and following launch; not certain if this is because it's true or not. I'm not an industry person, so it's all I have to go on.

    • bcronce
    • 8 years ago

    As of right now, nothing earth-shattering. Performance is a toss-up. While it can be dramatically faster in some areas, it is still beaten by a last-gen card in other areas. The drivers are not mature, so we may still see some large gains once they mature.

    Power draw is MUCH better. Quite impressive. There are other compute and async threading related tweaks that we may not see for a bit, and probably won’t help at all for games but will be great for their respective areas.

    As a whole, this card is a decent upgrade from my modded 6950, but as a game card it's only a "meh" upgrade. I would love to see better tessellation.

    I’ll probably skip this gen of GPUs unless nVidia pulls some magic out of their…. back end.

    Here's hoping for a working Piledriver from AMD. I really need a CPU upgrade.

      • Arclight
      • 8 years ago

      Idk dude, the HD 7970 seems pretty impressive compared to the 6950. I mean, how many people think that a GTX 580 is a meh upgrade from the 6950? And we are talking about a card that beats the GTX 580……

      I don’t have percentages but here you go (don’t forget the % up for the 7970 from easy OCing).
      [url<]http://www.techpowerup.com/reviews/AMD/HD_7970/28.html[/url<]

      Tech Report used the highest possible settings, bringing both the 7970 and 580 under 60 fps at 1080p, but there are still very high settings configurable at which the 7970 can stay over 60 fps. Settings at which, mind you, the 6950 would hover around 40 fps, dipping under 30 fps in hard spots.

      But still, if you can wait for nvidia's products it would be better. I also believe their flagship GPU can easily surpass AMD's; it will just come down to how much better for how much more $$$. Unless AMD has an ace up their sleeve in the form of a rumored 2384-SP card... that could even things out or tip it towards AMD. In your case time is not pressing, but we don't know when nvidia will actually launch their high-end cards. Worst-case scenario it will take another 5-6 months, time that others aren't willing to wait.

        • BestJinjo
        • 8 years ago

        “Idk dude, the HD 7970 seems pretty impressive compared to the 6950.”

        Aren’t you comparing a $550 card to a $250 card? The current competition for the 7970 is GTX580.
        So you have a 28nm card that's barely faster than a 40nm 580. He meant it in the global context, comparing to his 6950 overclocked/unlocked. 40% faster performance is not a lot in a global context when the asking price is 120% higher. That's where he is coming from.
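
        To put rough numbers on those percentages, here is a minimal back-of-the-envelope sketch in Python, using the prices and the claimed 40% speedup quoted in this thread as assumptions rather than measured data:

          msrp_7970 = 550.0                                # launch MSRP cited in the thread
          street_6950 = 250.0                              # rough 6950 street price cited above
          perf_gain = 0.40                                 # claimed speedup over the 6950 (assumption)
          price_ratio = msrp_7970 / street_6950            # 2.2x the money
          print(f"price premium: {price_ratio - 1:.0%}")                                 # -> 120% higher price
          print(f"perf per dollar vs. 6950: {(1 + perf_gain) / price_ratio:.2f}")        # -> ~0.64, i.e. worse value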

          • dpaus
          • 8 years ago

          [quote<]40% faster performance is not a lot in a global context when the asking price is 120% higher[/quote<]

          Try telling that to a guy cross-shopping Corvettes and Ferraris (or Porsches, whatever your favourite fantasy ride is....). Currently, the 7970 is the video card King of the Hill, and by default, is a 'halo' product. Such products always command a price premium, usually a significant one.

          • Arclight
          • 8 years ago

          [quote<]As a whole, this card is a decent upgrade from my modded 6950, but as a game card is only a "meh" upgrade.[/quote<]
          [quote<]Aren't you comparing a $550 card to a $250 card? The current competition for the 7970 is GTX580.[...] in a global context when the asking price is 120% higher. That's where he is coming from.[/quote<]

          He obviously is looking for an UPGRADE, while you imply he isn't. At the same time, he thinks the 7970 is a "meh" upgrade from a 6950; imo it's totally worth it if he can afford to upgrade. But I'll say it again: if he can wait for nvidia's next-gen high-end cards, he should do that. It's very probable that AMD will drop prices due to competition, and we don't know who will have the better price/performance or the performance crown in 6 months....

        • bcronce
        • 8 years ago

        A 50% boost for GPU is “normal”. So “meh”. I was hoping 40nm to 28nm would bring something great that we have not seen before. I myself have never seen such a large jump in transistor size since I got into computers almost 2 decades ago.

        Instead of much greater performance, we got less power draw and just “better” performance.

        I’m not complaining, less power is awesome, but I was hoping for a “wow” factor.

          • ptsant
          • 8 years ago

          Have you seen the massive overclocking in some other sites? Although in that case we are talking 5760×1080 performance levels, so I really think you wouldn’t need to do that with current titles, unless you plan on doing Eyefinity or some crazy gpgpu stuff. I think the “wow” effect is really hard to get today (compared to the Geforce 2 era), but we are still seeing much more progress in the GPU front than in the CPU front.

      • dpaus
      • 8 years ago

      [quote<]nothing earth shattering. Performance is a toss up...[/quote<]

      Tough audience! The 7970 is a huge jump up from its predecessor, and somewhere around 10% faster than its closest competitor [i<]and[/i<] at lower power. If Piledriver comes out 10% faster than a Core i7-2700K and similarly lower in power usage, would that still be 'nothing earth shattering, performance is a toss up.....' ???

        • Deanjo
        • 8 years ago

        Depends on the competitors. If PD came in as such a capable product, that would be huge, but only because they are so far behind currently. The graphics side hasn't seen that much of a gap, however, and is usually a tight race.

        • BestJinjo
        • 8 years ago

        The comparison hardly makes sense. i7-2700k will be replaced by i7-3770k at the same price point. IVB will be at least 6% more efficient in IPC:

        [url<]http://www.nordichardware.com/news/69-cpu-chipset/44973-intel-to-release-ivy-bridge-on-april-8th.html[/url<]

        Also, because it will be on 22nm, it will probably overclock to 5.3-5.4GHz. With 6% IPC and overclocking, Intel will give us 17-20% more CPU performance at the SAME price. AMD is asking 83% more over the 6970 for just 40% more performance. GTX580 vs. GTX570 is even worse in this regard!

        But the key differences you failed to mention are: 1) HD7970 will become obsolete in just 3 years; a 2500k/2600k/2700k or 3770k will not (they'll actually survive 2-3 GPU upgrades); 2) HD7970 stands to plummet in price tremendously the minute a 7980 or GTX670/680 comes out, while the 2500k-3770k will not plummet in price once Piledriver comes out, because Intel will still have a good CPU for games, etc.; 3) You can resell a 2700k/3770k in 12-15 months and barely lose money, while in 15 months the HD7970 will lose at least $200.

        Buying fast graphics cards is nothing like buying a good $220-$350 CPU. While on paper the $500 graphics card adds more value in games, it has very high obsolescence and depreciation rates, making it one of the worst "investments" in a gaming rig. The $220-$350 CPU is actually a "better" buy for a gamer since it will last longer, depreciate less, and survive multiple GPU upgrades. A gamer on a reasonable budget is always better off buying 3x $200 GPUs every 2 years than buying a $550 HD7970 and keeping it for 5-6. The only time it makes sense to buy $500-550+ GPUs is if financially you don't care at all about losing value on them (i.e., a hardcore gamer for whom gaming is his/her main hobby, or a high-income individual).

          • dpaus
          • 8 years ago

          Your logic is flawless, but as I pointed out above, the pricing of the 7970 has virtually nothing to do with logic and almost everything to do with marketing and market positioning.

            • BestJinjo
            • 8 years ago

            Ya, you are right on that. Compared to current competition, HD7970 appears to be priced appropriately. Like you said $500+ cards are more about buying the best and not about value. We even saw the same thing with the GTX580 (+43% higher MSRP for 15% more performance over the GTX570). We’ll see how 2012 plays out because right now it is shaping up to be the year of Blizzard (Diablo III and SC2 expansion) + PC console ports.

            The desire to upgrade to a $500 card for 40% more performance is greatly diminished for console-ported games or games that don't require a lot of GPU power (Blizzard), where last generation is fast enough. I will probably skip this generation and wait for another 50% speed boost over the 7970 and at least 5-6 next-generation games before making that $500 upgrade.

          • Sunburn74
          • 8 years ago

          I have no problem with the price. People who pay for these extreme products know it's diminishing returns up front. Heck, on some level we all know that. If we all were in the bang-for-the-buck market exclusively, we'd all be running 6650s.

          Also note, I’m not sure for whom a 7970 is an essential product to own for productivity, but if it can somehow save you time and money, 550 is nothing to spend for it in the long run.

        • flip-mode
        • 8 years ago

        Yes, that’s certainly not a “toss up”! LOL. “Decisive” would be the more appropriate term.

      • Silus
      • 8 years ago

      It’s mostly meh because of the price, which is insanely high.
      The performance increase is not impressive at all, but it’s what AMD typically delivers with new architectures. The biggest performance increase was seen from HD 3870 to HD 4870, but that was not a new architecture at all. And the price was much more attractive.
      With the HD 7970, AMD is charging an arm and a leg for a new GPU, which is something they haven't done for a while now. 28 nm yields must really suck at this point.

        • Krogoth
        • 8 years ago

        The high-end always had a hefty premium.

        Nvidia does the same thing as well.

        IIRC, the last several times they released a new ultra-high card, the MSRP was at ~$500-599.

        ITT: People complaining about prices on a luxury item.

        • clone
        • 8 years ago

        you cry about thumbs down and then post this uninformed ill conceived trash.

        AMD releases a new gpu on time that outperforms Nvidia’s absolute best by better than 20% while generating less heat, running quieter, running cooler and while supporting a new feature set, they sell it for less than Nvidia’s 3gb 580 and you see this as unfair….. that’s quite a double standard you have.

        you make these trivial comments about architecture, whether it was new or not…. who cares so long as it’s better, faster, cheaper, do you have any idea how worthless your comment is…. sniff, sob it’s not a new architecture (regarding HD 4xxx)…. so? it was better, faster, cheaper, used less power and supported more gaming features….. which matters more to gamers, the results or the white paper explaining how they got the results?…. absurd by any measure your position is, in sum total amounting to whining.

        and then you make a poorly conceived worthless comment about pricing in relation to 28nm yields….. regarding a part that is selling for less than the comparable part from Nvidia because in your silly view all AMD parts no matter how much better should cost less and the onus is on AMD to push pricing lower…. that AMD actually did push pricing lower escapes you but who cares AMD isn’t impressive in your hypocritical uninformed eyes.

        ignoring of course with complete hypocrisy to guide your foolish view that Nvidia was the company that sold $830 high end cards up until AMD handed them their ass on a platter 4 generations in a row which redefined the high end pricing at $550.00….. which to you is “insanely high” not because it is but because you personally can’t afford it and obviously if you can’t afford it it must be insane when others can…… and they must be insane as well.

        don’t complain about getting thumbs down when you compose such weak empty posts, be more objective and check your info and the thumbs down will likely go away.

    • Joerdgs
    • 8 years ago

    Happily awaiting the HD7950 review. Need a nice upgrade for my HD5850. Haven't spent money on my rig for nearly a year 🙂

    • Xenolith
    • 8 years ago

    Any chance for an OpenGL benchmark?

      • axeman
      • 8 years ago

      Hrm, doesn’t Rage dynamically adjust detail to try and maintain 60fps or something? So that’s out, we need something else that is a modern title using OpenGL, which is… nothing? I would be curious to see as ATi’s OpenGL support has historically been poo.

        • Xenolith
        • 8 years ago

        The Heaven 2.0 synthetic benchmark has an OpenGL mode.

        Rage can be benched, just have to tweak some settings. Video drivers are still unstable, so probably not ready for standardized testing.

          • khands
          • 8 years ago

          I’ll never really trust RAGE benches, but Heaven 2.0 would be interesting.

            • Theolendras
            • 8 years ago

            Yeah, this seems about right. Am I the only one who thinks this kind of dynamic engine is really the next step in PC gaming? I mean, I don't like to begin a game and then hit an outdoor level that suddenly drops fps, or the other way around, get halfway through a game and then realize it could look better on my setup. I can fiddle with the settings at the very beginning, but I'm trying not to overdo it since it kind of destroys the immersion.

            • khands
            • 8 years ago

            It's a very difficult problem to solve well. We'll see if other games based on an updated version of this engine do it better. What I would like to see, though, is whether the 7970's streaming texture enhancements help alleviate the texture pop-in issues or not.

    • yogibbear
    • 8 years ago

    Dayum. My GTX 260 core 216 is feeling a little bit slow now 🙁

    Can I wait till Ivy bridge? Hmm… hopefully by then the 7870 etc. are available. Would be funny as AMD would be inadvertently helping Intel…. and vice-versa.

      • khands
      • 8 years ago

      That’s been happening for a while now. Every time a new part is out (especially near the same time frame) enthusiasts think about upgrading their whole systems.

        • flip-mode
        • 8 years ago

        Not me! X4 955 on DDR2 baby!

          • khands
          • 8 years ago

          Well, not all enthusiasts 😉

    • TravelMug
    • 8 years ago

    Great review. I like the new details, like having the theoretical specs repeated at the top of the page where the tests are run (MP/s, geometry, etc.). The graphs with the frame times are shaping up nicely as well, with the split between different categories of cards and the reviewed card present in all of the graphs. Good job!

    • yogibbear
    • 8 years ago

    In a couple more years ATI could have purchased AMD instead 🙂

    I joke, I joke.

    But the GPU side of their business has been hitting the back of the net for quite a few years in a row now.

      • lilbuddhaman
      • 8 years ago

      I see nvidia and ati as dead even right now. I'm wondering when/which will falter a la AMD.

    • drfish
    • 8 years ago

    Nevermind…

    • sschaem
    • 8 years ago

    AMD did good. 28nm, PCIe 3.0, PRT (THE future of game engines), a complete video codec, near-perfect power management…. All of those, so far, are AMD firsts. Big kudos for releasing this marvel amid all the internal turmoil. I hope the GPU division (ATI) is proud to start 2012 with a bang!

      • Theolendras
      • 8 years ago

      Not quite sure it will be the future. Depends mostly on influential game engine vendors like Epic or Crytek. I really hope this will make it in sooner rather than later, though.

    • Pantsu
    • 8 years ago

    Nice review, though I would've liked to see some VCE testing, but it seems to be impossible atm? Also, overclocking should be included too, even though we all know already that this thing OCs like a beast.

    A note about Skyrim: the test seems to be CPU-limited when looking at those worst frame times. For some reason Nvidia cards seem to handle CPU-limited scenarios slightly better than AMD cards do. It's not a big difference, but it's there. In any case, you should use the Skyrim boost patch to increase the CPU efficiency in this game so GPUs can stretch their legs more.

    What I really hope for from Tech Report is a CrossFire review of the 7970 to see how much micro-stuttering is going on with the new generation, compared to 6970 CF and GTX 580 SLI. This is what AMD/Nvidia really need to fix most badly.

      • Zoomastigophora
      • 8 years ago

      Microstuttering isn’t a problem that can be “fixed” per-se as long as AFR is the workload distribution of choice for multi-GPU solutions. Any solution that evens out the rate of frame displays would necessitate the buffering of frames in advance and consequently increases input lag, which may or may not be a problem depending on the game. Even without multi-GPU solutions, frame times aren’t even between individual frames so microstuttering can never truly be eliminated (ok well, if the GPU is outputting all frames faster than the display refresh rate then you’ll never have stuttering of any kind, but that follows by definition).

      Looking forward, the most feasible solution for multi-GPU stuttering I see happening is the return to distributed tile based rendering so that both GPUs are working on the same frame at the same time. Tile based rendering is starting to make a comeback as evidenced by Frostbite 2 (which tangentially Larrabee would have been well suited for had it not been canceled), but it’ll require some magic on AMD’s part to properly composite the results of each tile from the separate GPU ROP partitions and frame buffers. IIRC, ATi tried this back with the crossfire on the X1k series, but it was never used by anyone due to the difficulty of programming tiled renderers at the time as well as compatibility issues on ATi’s end in various parts of the display chain when using crossfire in tiled mode. Even now, it would probably require Direct3D to expose more hardware level control of the workload distribution in multi-GPU situations than currently available, and devs would have to use driver specific hacks in the mean time, which is never pleasant for anyone involved.
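
      As a toy illustration of the AFR pacing problem described above (this is not AMD's or Nvidia's actual scheduling; the render and stagger times below are made-up numbers), a few lines of Python show how a healthy average frame rate can still hide alternating short/long gaps between displayed frames:

        render_ms = 33.0   # per-frame render time on each GPU (assumed)
        stagger_ms = 8.0   # uneven offset between the two GPUs' frame starts (assumed)

        present = []
        for i in range(10):
            gpu = i % 2                                   # frames alternate between the two GPUs
            start = (i // 2) * render_ms + gpu * stagger_ms
            present.append(start + render_ms)

        gaps = [b - a for a, b in zip(present, present[1:])]
        print(gaps)                    # alternates ~8 ms and ~25 ms between displayed frames
        print(sum(gaps) / len(gaps))   # yet the average gap is still ~16 ms

      Evening out those gaps means delaying the early frame of each pair, which is exactly the added-input-lag trade-off mentioned in the comment above.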

    • Silus
    • 8 years ago

    AMD goes Fermi! And obviously I’m not seeing the complaints about die space being wasted on GPGPU stuff, instead of graphics…same old double standards!

    Anyway, AMD is finally catching up and acknowledging that their past architectures were not forward-looking, especially when thinking about entering new markets (those that NVIDIA has already been in for a few years now).
    Obviously NVIDIA also has some catching up to do with Kepler, in terms of their 3D Surround being available with just one card, like the Radeons do with Eyefinity. As for performance, given how Tahiti fares against GF110, Kepler will surely beat it soundly; otherwise it won't be that much faster than GF110.

      • Kaleid
      • 8 years ago

      Fermi was inefficient (as was x2900xt which of course didn’t have any GPGPU stuff), this clearly isn’t.

        • Silus
        • 8 years ago

        Inefficient in what? The only thing Fermi was inefficient in was power consumption. Performance and features were there. And comparing Fermi with the HD 2900 XT is laughable. Fermi, even with the release problems, was at least faster than Cypress. R600 didn't even manage to beat NVIDIA's second best at the time.

          • Kaleid
          • 8 years ago

          High power usage = inefficient. Fermi did indeed have performance, but it was still inefficient. The X2900 XT was both inefficient and slow.

            • Deanjo
            • 8 years ago

            High power usage <> inefficient. Performance has to be taken into consideration as well for that conclusion to be made. One could easily say something is inefficient based on various criteria; for example, one could say that Tahiti is inefficient considering it takes 30% more transistors to achieve roughly 15% more performance compared to Fermi.

            Efficiency is a measurement of input vs. output and can be applied to more things than just power.

            You can have high power consumption and be extremely efficient at the same time.
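
            Put as a ratio, the point reads something like the sketch below; the transistor and performance figures are the ones quoted in this sub-thread, and the power figure is only an illustrative assumption:

              def relative_efficiency(perf_ratio, resource_ratio):
                  # >1.0 means more output per unit of the chosen resource (watts, transistors, dollars...)
                  return perf_ratio / resource_ratio

              # ~15% more performance for ~30% more transistors, as claimed above:
              print(round(relative_efficiency(1.15, 1.30), 2))   # ~0.88 -> less perf per transistor
              # the same performance gain with an assumed 10% lower board power:
              print(round(relative_efficiency(1.15, 0.90), 2))   # ~1.28 -> more perf per watt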

            • Kaleid
            • 8 years ago

            It can be fast yet still be inefficient. A Lamborghini is fast but thirsty…

            • NeelyCam
            • 8 years ago

            A Corvette is both fast and efficient – what’s your point?

      • can-a-tuna
      • 8 years ago

      Bitter speech from nvidia-boy. Tahiti beats Thermi in all aspects. Suck it.

        • Silus
        • 8 years ago

        In all? You need to actually read reviews… As for it beating Fermi in some aspects, well, it took AMD long enough to do so. Not that your small brain can cope with the fact that Tahiti is AMD's Fermi. Took them long enough to realize how crappy their architectures were for anything other than gaming, such as the very profitable HPC market.

          • esterhasz
          • 8 years ago

          Your prose reads like an 8th grader’s diary, but you make a valid point. The professional market (both HPC and graphics) has been highly profitable for Nvidia. ATI’s decision, probably made about four years ago, to not make the necessary architectural adjustments at 40nm, but to wait for 28nm may have been a mistake in retrospect. On the other side, given the fact that they came very close to bankruptcy in 2008/9 the additional investments needed to make that transition earlier (particularly concerning drivers and support, but also the larger dies) could have broken their neck. These are very difficult decisions to make, which are made in a context of high uncertainty, and people in different companies that are equally smart and dedicated come to different conclusions.

            • Silus
            • 8 years ago

            I couldn’t care less what you think about my prose. Those sort of attacks, especially when immediately after you say I have a valid point, just shows how little argument exists around here…

            And of course I have a valid point, simply because it’s supported by facts. Facts that have been a reality for several years now, except for AMD of course that ignored both HPC and mobile handheld markets completely. That’s not being smart, that’s being quite ignorant and incompetent in management terms. But that’s their problem. Let’s see if they will be able to even have an appreciable market share in HPC, since in the mobile market they are nowhere to be found.

            • khands
            • 8 years ago

            They were doing quite well in HPC back when DX9 was king; I’m sure they’ll find their way back in.

            On another topic entirely, esterhasz’s issue with your prose is valid from an argumentative standpoint. If you’re trying to sway an opinion, it is best to be courteous.

            • Silus
            • 8 years ago

            They were doing quite well ? Do tell (with facts please), since that seems to be a load of BS.

            From “quite well” to “almost nothing” isn’t exactly a show of impressive management or product placement, if what you said is true (which I highly doubt it is).

            • khands
            • 8 years ago

            Ever hear of their Brook+ initiative? Stanford based Folding@home around it.

            • Silus
            • 8 years ago

            LOL since when was that “doing quite well” in the HPC market ?
            It seems you don’t even know what the HPC market is. It certainly isn’t folding@home, but it seems you didn’t know that. HPC market is about gas & oil prospecting, medical research, weather analysis, etc.

            • khands
            • 8 years ago

            High Performance Computing isn’t just supercomputers, high performance distributed computing is an important aspect of that as well. Look at how many 5770’s they sold to bitcoin farmers. And Brook+ wasn’t just used for folding@home it’s just the best known implementation. I’m going to stop feeding the troll now though.

            Edited for clarity.

            • Silus
            • 8 years ago

            Yes, please go away. I’m tired of explaining to AMD fanboys simple stuff such as HPC market = high profits in key areas such as oil & gas prospecting, medical research, etc. Folding@Home = no profits where a bunch of geeks boast over who made more points with their setup, instead of the actual more broader goal of helping out in finding some sort of benefit for the society.

            • Theolendras
            • 8 years ago

            They actually did make some findings on Alzheimer's proteins through this tool. I understand your point, but there's no need to downplay Folding@home completely; it has potential and has already shown it.

      • Krogoth
      • 8 years ago

      RTFA, enough said.

        • ImSpartacus
        • 8 years ago

        No kidding. It was an impressive article, no?

      • Arclight
      • 8 years ago

      Feel free to think otherwise, but imo people were upset about Fermi's first implementation due to high power consumption and heat output (remember the GTX 480?). Look at the power consumption and thermals of the HD 7970: they are outstanding, not only lower than the previous generation's but also a lot lower than the competition's.

      Why should we blame AMD if the gaming performance is higher than the competition's and it consumes less power, gives off less heat, and overclocks exceptionally? I wouldn't recommend that anyone buy a true high-end card with the stock cooler (since it's loud), but once custom dual- or triple-fan designs start being offered to the public, the HD 7970 will be the ultimate single-GPU video card on the planet for gaming.

      The original Fermi wasn’t all that. Why do you think people called it Thermy?

        • Arclight
        • 8 years ago

        @custom coolers
        [url<]http://www.overclock.net/t/1194445/xfx-7970-double-d-black-edition[/url<]

      • flip-mode
      • 8 years ago

      So get a gtx 680 for your 15th birthday, then.

        • cynan
        • 8 years ago

        Which will undoubtedly be what Nvidia will start calling the GTX 580 when Kepler drops as 7-series parts

      • wierdo
      • 8 years ago

      It’s a question of time to market. Some would argue that at 40nm nVidia’s timing with Fermi sacrificed too much die-size/efficiency for its GPU computing features. At 28nm transistor budgets double, and the tradeoffs are less severe, you can see AMD’s 28nm product is not “dramatically” faster than nVidia’s product, but it still performs better at a much smaller footprint (365 mm2 vs 520 (!) mm2).

      In fact, I think that it’s impressive that even with all these major “GPU computing” heavy updates to its core, AMD’s product STILL seems to be leaning strongly in the direction of “small cores” that the company’s been pushing since the 4800s, that’s not bad for all the GPU computing baggage it carries from a gaming perspective.

        • Silus
        • 8 years ago

        How can you compare die sizes when the fab process is different ? Do you have a 28 nm GF110 to know that ? Or a 40 nm Tahiti for that matter ? When Kepler hits @ 28nm you can talk about die sizes.

        “small cores” but loads of them. 2048 to be precise. That’s 4x more than GF110, yet it isn’t that much faster on average. And this comparison is valid now, since the architectural differences for the first time in many, many years, aren’t as pronounced as they were when AMD used VLIW4/5. There are many similarities in GCN and Fermi and GCN takes 4 times as many ALUs to beat Fermi and not by much.

          • wierdo
          • 8 years ago

          Because die size is one of the big factors when considering the cost of making such a product, it can have an effect on yields on top of that. If Tahiti was on a 40nm process it may not have been a good design; or did you already forget the question you posted?

          You wanted to know why people think Tahiti is a good design now when it’s basically doing what the 580 did and people thought that wasn’t efficient use of silicon – except Tahiti’s doing it on a 28nm process instead of 40nm, which is that tiny little detail that makes a very important difference.

          Well that was the answer to that. Short answer: At 40nm transistor budget is more expensive, at 28nm there’s more transistor budget for designers to play with, yes it’s pointing out the obvious. Check out the power consumption/temperature/die size tradeoffs between the 7970 and the 40nm products on the market, the tradeoffs made by different products are obvious.

            • Silus
            • 8 years ago

            I made no such question…stop making things up…All I said was that I see no huge uproar over Tahiti’s “wasted” die space with GPGPU stuff, when Fermi got a lot of flak for the very same reason.

            And again, it makes no sense to compare die sizes of two chips using different fab processes. Not to mention that your whole argument is that Tahiti @ 28nm is doing what Fermi already did (as in, in the past) @ 40 nm, as if that was a good thing…
            Another fallacy that really needs to die is that die size is that important. If it was, then AMD’s graphics division would be swimming in money when compared to NVIDIA. The facts quite clearly contradict that, since AMD has smaller dies for a couple of years, yet their GPU division’s financials suck when compared to NVIDIA’s.

            • wierdo
            • 8 years ago

            [quote<]I made no such question...stop making things up...All I said was that I see no huge uproar over Tahiti's "wasted" die space with GPGPU stuff, when Fermi got a lot of flak for the very same reason.[/quote<]

            You just did it again: you're wondering why it's OK to "waste" die space on GPGPU at 40nm vs 28nm. It's not a question of whether it's a "waste" or not, but of how much die space to dedicate to a function that's secondary to the product's intended purpose, gaming, and that question can depend on transistor budget, which obviously changes with die shrinks. At 40nm, GPGPU functions obviously take more die space than at 28nm; that's all. Whether or not it was worth it at 40nm is another story. Personally I think it was, considering nVidia's market situation, but I understand when people question that vs. Tahiti's relatively "late" timing with it.

            • Silus
            • 8 years ago

            Is your understanding of english so poor that an “affirmation” (which is what I made) is actually a question or one’s “wondering” ?

            Yeah…definitely whatever…you can go ahead and discuss that one with someone else…

        • BestJinjo
        • 8 years ago

        “Some would argue that at 40nm nVidia’s timing with Fermi sacrificed too much die-size/efficiency for its GPU computing features.”

        That makes no sense. Fermi had a larger die, but NV's profit margins were nearly ~50%. The firm has done extremely well. Apple would call Fermi's 2-year lead on a scalar architecture innovative, not following the leader. All AMD did was wait until 28nm and create a scalar architecture that's barely 20-25% faster than Fermi was on 40nm!!! It might be impressive to beat a GTX580, but not when we are discussing 28nm vs. 40nm cards.

        Secondly, if you look at the power consumption of HD6970 vs. GTX580, the 580 (still Fermi architecture) was about 20% faster for only ~35W more power consumption or so. So compared to HD6970, the GTX580 offered vastly superior GPGPU compute and faster gaming performance. And people called “Fermi inefficient”? How hot a card runs is also a function of its cooler and the cooler on the 480/470 cards was far inferior to the 570/580 cards. Also, as 40nm process matured, NV was able to clock the GF110 chips higher at the same voltage.

        People tend to underestimate how important the move to 28nm really is (or any node shrink). The primary reason HD7970 looks so amazing is because of the node shrink, first and foremost. It’s very easy to destroy the HD6970 / GTX580 when you can fit 60% more transistors into the same space and achieve lower power consumption to boot.

        If we had the HD6970 and GTX580 on 28nm, they would also be very impressive, since the HD6970 would probably have 2560 SPs, etc. I don't think AMD did enough. The HD7970 is a great stop-gap card, but once NV shifts to 28nm, that 20% performance advantage will evaporate. There is no way the GTX680 is going to be just 20% faster than the GTX580. NV could literally just shrink the GTX580 to 28nm, add more units, make zero changes to the architecture, and smoke the 7970. It's like AMD knew this and launched the HD7970 conservatively clocked at 925 to get good yields, and is waiting to release a faster version when the time comes… It feels like this card was held back purposely.
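
        For a sense of why the node shrink itself does so much of the heavy lifting in that argument, the ideal-scaling arithmetic is easy to sketch; real processes fall short of the ideal, so the ~60% figure quoted above is a practical estimate, not something derived from the math below:

          old_nm, new_nm = 40.0, 28.0
          ideal_density_gain = (old_nm / new_nm) ** 2                # area scales with the square of feature size
          print(f"ideal density gain: {ideal_density_gain:.2f}x")    # ~2.04x
          print(f"as an increase: {ideal_density_gain - 1:.0%}")     # ~104% more transistors per mm^2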

          • designerfx
          • 8 years ago

          Uh, TLDR version:

          there were problems with 28nm, they are fixed, thus improvements come with it (plus a new generation of cards).

          as for your whole “oh, nvidia’s going to do better” thing: we have no freakin’ idea until the card comes out.

          • wierdo
          • 8 years ago

          I wasn’t claiming NV’s decision was right/wrong, just replying to Silus’s confusion about why some people thought 40nm may have been too soon for GPGPU emphasis.

          NV got a head start in that market by doing so, it may have been a smart business move to jump early, even if it cost them some gaming performance and die size growth. It’s too early to tell if the head start in this fledgling new GPU computing market was worth it or not, but personally speaking I can understand why they did it.

          Unlike Intel and AMD, nVidia doesn’t have a CPU to add an integrated GPU on top of, so they may have serious and valid concerns about the future direction of the PC market, a big potential market for GPUs may be GPU computing, and nVidia needs to grab it more than anyone else it’s competing with in order to grow its business in a few years from now.

          And like I said, at ~350mm2 I think AMD’s product is still going for “smaller and cheaper to make” philosophy, so IF nVidia again comes out with a large GPU on 28nm, then we’ll be going through a similar performance vs price/performance competition scenario that’s been going on since the 4800s were released imho.

            • Silus
            • 8 years ago

            LOL, my confusion…it was no confusion. You’re in denial, but you can fix that by reading the GTX 480 launch thread right here in TR. That should dissipate your confusion about who thought the emphasis on GPGPU was not justified with Fermi, and why.

            Besides, NVIDIA’s emphasis on GPGPU already started with GT200, just FYI to clear your confusion.

            • wierdo
            • 8 years ago

            And it was criticized for the same reason: Coming out too early.

            Now I made clear that I’m with nVidia’s management on this, but apparently your agenda is to find reassurance of your position rather than discussing pros and cons, I’ll leave you to that.

            • Silus
            • 8 years ago

            Coming out too early !? Is that the stupidest argument ever or what ?
            Did you criticize the HD 5870 as the first DX11 card, for coming out too early (since DX11 games were a thing of illusion at that point) ? Or the 8800 GTX for being the first DX10 card ?
            Of course not, because that makes no sense. New technology is always welcome. That argument is really pathetic, especially when you actually look at the fact that NVIDIA had lots of reasons to invest die space in GPGPU improvements in their GPU architecture. Very profitable reasons, for that matter. Claiming that it came out too early is reserved for the basement geek who does nothing more than play games and has no understanding of business and/or which markets a product needs to get into.

            • wierdo
            • 8 years ago

            Maybe you should tell Sony/LG they’re dumb for not dropping their LCD production and going for OLEDs last year then. I mean, come on: $5000 25″ panels should fly off the shelves vs the inferior $400 40″ LCDs they’d compete against.

            This is not really a new or difficult concept to understand; I don’t see why you have difficulty following it. It’s not rocket science.

            Tech companies have to deal with making these judgment calls all the time, adding/dropping features based on design constraints they have to work under, such as transistor budget and time to market etc.

            • bthylafh
            • 8 years ago

            Get over yourself, aspie.

            • Silus
            • 8 years ago

            Where is esterhasz to grade your prose as an 8th grader’s diary ?

            • BestJinjo
            • 8 years ago

            Well, the HD4800 and 5800 series were an anomaly in terms of spoiling us gamers with an insanely aggressive pricing strategy from AMD. But fast forward to the HD6800 and 6900 series: AMD cards were not really cheaper than NV’s cards, despite NV’s larger die sizes.

            HD6850 ~ GTX460 (until the 460 went EOL)
            HD6870 ~ GTX560 (perhaps $20-30 advantage to AMD).
            HD6950 ~ GTX560 Ti
            HD6970 ~ GTX570

            In all of these cases, the price/performance is very similar. AMD didn’t really dominate HD6800/6900 series on the desktop. They did extremely well on the mobile side since their chips were more power efficient. However, on the desktop, NV still commanded nearly 60% market share.

            And now with the 7800/7900 series, we are seeing very high pricing from AMD (not necessarily overpriced), but much in line with NV’s. You can’t say at all anymore that AMD is the price/performance leader, not when they are launching the HD7870 for $299, HD7950 for $449 and HD7970 at $549. They are asking for a premium for high performance (fair enough). If AMD launched top-to-bottom immediately across the $99-549 price levels, then NV would need to be worried. AMD was unable to do so, still using the older HD6800/6900 series to compete at the sub-$300 level. For now, as long as NV is able to launch fast cards in 4-5 months, they’ll have no trouble competing with their older generation. This is because AMD won’t even have large volumes of new 28nm cards until March.

            You also can’t assume that wafer costs are the same for both firms. AMD is buying 28nm wafers right now, competing with Apple for 28nm iPad 3 chips, etc. The supply is constrained for 28nm (and that’s probably why HD7950 is getting delayed into February and the entire 7800 series lineup is shifting from February to March). Perhaps a quarter from now, as yields improve, the cost of a 28nm wafer might fall. At this point, 40nm wafer costs are very low, so NV can drop the prices on GTX560/560Ti/570/580 and still remain competitive.

            AMD got a great head start with the HD7970 series, but having the best card at the $550 price level doesn’t really matter for market share when the majority of consumers buy cards at $300 and below (in fact, the desktop discrete graphics card market above $199 is just 14-15% of the entire GPU market). Until AMD delivers better cards at $200 or below vs. NV’s, HD7970 is going to do little for them in the global context.

            I believe the best-selling card on Amazon this holiday season was the GTX550Ti. This isn’t me saying that HD7970 is not a great card, but it’s just not important for sales. Enthusiasts always automatically assume that whoever has the fastest $500 card will dominate that generation, completely forgetting that the majority of gamers do not buy $500 graphics cards. The real market share fight on the desktop happens at the $99-249 price levels. Obviously, if more and more gamers start buying more expensive graphics cards in volume, then perhaps the share of more expensive graphics cards will matter more.

            Another way of looking at it: NV had the faster 8800GTX, GTX280/285, and GTX480/580, and AMD didn’t go out of business. That’s because top cards do not determine anything but bragging rights.

      • Krogoth
      • 8 years ago

      I see you still haven’t read the article.

      It makes it painfully obvious from an architectural standpoint that Tahiti has as much in common with GF100, GF104, and GF110 as Bulldozer has in common with Sandy Bridge.

      You also seem to have forgotten about the GF100. GF100 was unrefined garbage. Nvidia had to clock the bejesus out of GF100 on an immature 40nm process in order to beat Cypress by a tiny margin at gaming. I do admit it did great at GPGPU and tessellation, but its power inefficiency drew away potential buyers. It probably didn’t help that Nvidia marketing drones decided to hype the crap out of the 480 with a campaign that was eerily similar to another product launch: the first-generation FX line.

      It wasn’t until Nvidia’s architects retooled and tweaked the GF100 design into GF104 and GF110, along with a more mature 40nm process, that you got to see considerable improvements in power efficiency and gaming performance.

      The 580 is essentially what the 480 should have been at launch. The GF104 was a winner in the $200-240 segment at its launch. It pretty much repeated the success of the 8800GT series: effortless 2-megapixel gaming at a reasonable price point that ended up having decent longevity. (The 460 is still quite viable.) AMD’s answer came late in the form of the Barts family (6850 and 6870). The 5770 and 5750 were just architectural upgrades of the 4870 and 4850 with slightly inferior performance at a “higher” price point.

        • Silus
        • 8 years ago

        What the hell are you talking about ? The review has many points where the comparisons with Fermi are more than obvious. Just because you choose to ignore them, doesn’t make you right. As I mentioned in another post, for the first time in many years, it’s possible to actually compare both architectures, given how similar they are in many aspects. Obviously it’s not the same architecture, but it’s definitely not night and day as AMD and NVIDIA’s architectures have been, especially since G80 and R600.

        And what a load of BS. GF104 wasn’t “re-tooled”. Stop spouting Charlie’s idiocy. GF104 was an entirely different chip, with lots of changes from GF100’s basic architecture, because it wasn’t meant for markets other than desktop and laptop gaming.
        As for GF110, did you really see considerable improvements in gaming performance ? Must be another RDF you’re into, because it really didn’t have that much of a performance increase. Power efficiency yes, gaming performance…about 15-20% more than the GTX 480. Oh but I’m sorry, just like many here consider a 20-30% increase from Tahiti over GF110 a “phenomenal increase”, I guess I should also consider 15-20% from GF100 to GF110 as the “greatest thing ever”…

        As for the 580 being what the 480 should have been, no argument there. Don’t even know why you’re bringing that up. I did not say otherwise in any of my replies.

          • Krogoth
          • 8 years ago

          Please read the article again, not just skim through the details.

          There are a number of differences between the architectures. The only thing similar is that they have a greater focus on GPGPU performance/functionality than their predecessors.

          GF104 is a re-tooled GF100, as in Nvidia axed the GPGPU-related components and trimmed down the blocks to make the silicon smaller, which made it easier to fabricate. It’s like how Barts is a retooled Cypress chip: AMD just reorganized the blocks and trimmed them down without compromising too much performance. You end up with a “Cypress Light” that is much easier to make while delivering 90-95% of the performance.

          GF110 is much better than GF100 at gaming performance, again thanks to retooling the blocks and adding more resources without increasing power consumption.

            • Silus
            • 8 years ago

            NVIDIA trimmed nothing down…it actually upped the ante in some aspects. GF104 isn’t exactly the same architecture as GF100. For starters, each SM in GF100 is composed of 32 Stream Processors, while GF104/GF114 SMs are composed of 48 Stream Processors. Not to mention the other glaring differences, like the TMUs being able to filter FP16 formats at full speed, something missing from GF100 and included in GF110 as well:

            [url<]https://techreport.com/articles.x/19242[/url<] Again, I never argued that GF110 wasn't better than GF100. That much is obvious. Anyway, this is beside the point. This isn't about NVIDIA, but rather AMD's product, whose major fault isn't its performance (even if it's not that good when compared to the other offerings in the market), but rather the price. It's way too high. If they finally have a GPU that can handle itself in GPGPU-heavy situations and actually get into the HPC market, then high prices should be in that market, not the desktop one. AMD itself changed that playing field with lower-priced GPUs, but now they are going back to charging way too much for a single GPU. This is the expected price of a dual-GPU card...

            • Farting Bob
            • 8 years ago

            The 7970 is priced competitively with the 580, 590 and 6990, which are the only cards in the same league right now. I also suspect that NV being slower to market again means that AMD will wait until they release their competitor; then we might see it drop $50-75. It’s not a crazy low-priced offering, but it’s smart business. They’ve left wiggle room to drop the price of the reference card, and from the overclocking comments, once they do that there is room in this chip to release a faster version (base clock over 1GHz) which would happily sit at the current 7970 price point.

            Once both sides have their complete offerings out and available, prices will start creeping back down like they always do. Even more so with the new node: once yields get really good and competition is there, this card won’t stay high-priced for long.

            • khands
            • 8 years ago

            ^this, they didn’t want another 5000 series debacle, with high-demand + low price + low yields, they can’t afford to leave more money on the table so they aren’t. They’re also betting on Kepler being late to market again which rumors currently point to.

            • Silus
            • 8 years ago

            How can Kepler be late again ? Was it released once already ?

            Plus what signs point to Kepler being late ? TR itself posted an article about NVIDIA itself confirming that Kepler would be out in Q1 2012. What are these trustworthy signs ?

            • khands
            • 8 years ago

            Rumor has it we won’t see the top-end cards till around September and they’ll release lower-end cards in Q1, which would put competing cards quite late. Though I didn’t say they were trustworthy, just that AMD appears to be betting on them.

            • Silus
            • 8 years ago

            You didn’t say they were trustworthy, yet said that all signs pointed to Kepler being late:

            “They’re also betting on Kepler being late to market again which all signs currently point to”

            Care to revise your position on this once more ?

            • khands
            • 8 years ago

            Changed to rumors per request, it’s still going to be a few months and that’s a little late in this industry.

            • Silus
            • 8 years ago

            And you had this to say in the GTX 480 launch:

            “Thanks to rediculous prices here in the UK (470 = 5870 prices) nvidia will be lucky to sell more than a dozen cards until they can work out how to make a profit and still remain competitive at the mid, midhigh price ranges.”

            [url<]https://techreport.com/discussions.x/18682?post=474520#474520[/url<] Even though the GTX 470 was competitive with the HD 5870, you considered the price ridiculous. And neither the GTX 590 nor the HD 6990 are in the same league as the HD 7970, since they're faster, yet the 7970 is priced almost at the same level as them. That's way too expensive for this card, much like the GTX 580 is also way too expensive at $500. It's been a while since high end single GPU cards cost this much and it was applauded by everyone that prices for high-end single GPU cards were finally as low as they should be. Now everyone (i.e. the usual suspects) is understanding why AMD is doing it, when in the same situation in the past, they didn't. Now that is ridiculous, especially when everyone knows the real reason for the high prices: poor yields.

            • Krogoth
            • 8 years ago

            Supply/Demand, enough said.

    • Zoomastigophora
    • 8 years ago

    In the future, would it be possible for GPU reviews to include GPU and VRAM frequency when more than one monitor is attached to a single card? And power draw numbers for that case? It’s annoyed me for a while now that Radeons ramp up to near high performance speeds when serving more than one monitor, which seems like a horrible oversight given how much AMD is pushing Eyefinity. I think Anandtech’s article mentioned that this generation still ramps up to near high performance speeds as well.

      • Ryu Connor
      • 8 years ago

      [url<]http://forums.nvidia.com/index.php?showtopic=211130&view=findpost&p=1299999[/url<] Kepler is supposed to have hardware to address said issue.

    • odizzido
    • 8 years ago

    Nice review 🙂 What I like about this card, minus its amazing power efficiency, is that they seem to have plugged the weaker holes that existed on the older gen cards.

    I would have loved it if you had tested Metro 2033, as for me that has been a long-standing stain on AMD. It is old though, so I understand why you wouldn’t.

    That video encode sounds really nice too.

    • lilbuddhaman
    • 8 years ago

    Those frametime graphs are starting to look good; you’ve got it down now, readable.

    p.s. what about overclocking !

      • stupido
      • 8 years ago

      just search a little bit 😉 you’ll see that this thing goes nicely above 1GHz without voltage increase…

        • lilbuddhaman
        • 8 years ago

        Oh I have, I even made a comment on one of the shortbreads… a volt-modded one hit 1.7GHz

    • Bauxite
    • 8 years ago

    “Interestingly, the video card’s frame buffer can act as an input source, allowing for a hardware-accelerated HD video capture of a gaming session.”

    Me wants!

    Right now, for an ongoing “project”, I have a hack of a config running to an HDMI capture card on another system, because Fraps etc. do NOT do 100% proper captures of various things.

      • Jambe
      • 8 years ago

      I, too, am interested in this. Presumably software would have to be modified to take advantage of the frame buffer source?

        • Firestarter
        • 8 years ago

        If AMD’s marketing has any talent left, they will release and support a tool that quite simply dumps a well-encoded video of anything going on with just a press of a hotkey. That would instantly transform this into the goto-card for Starcraft casters and the like. To slam-dunk it, they should help live streaming sites like justin.tv and ustream to take advantage of the GPU, which will make live-streaming your pro (or amateur) games as easy as buying AMD’s newest.

          • sschaem
          • 8 years ago

          No reason for AMD to not do this, unless they have some shackles on in the form of DRM restrictions?
          We have that option already for screen capture at the OS level… so it indeed would be nice if the drivers or the OS offered this for not just images but video. (It would be better than Fraps having to hack the OS layer.)

            • khands
            • 8 years ago

            DRM is where this could get all stuck up, especially for HDMI content.

    • swampfox
    • 8 years ago

    What kind of monitor are you using? (Looked at “testing methods,” but it isn’t particularly relevant. I’m just curious.)

      • DancinJack
      • 8 years ago

      A Dell 30″ I’m pretty sure. 3008 if I remember correctly.

        • swampfox
        • 8 years ago

        That’s what I thought. Thanks.

    • ClickClick5
    • 8 years ago

    I’m a long-time Radeon user and I normally get the even series of cards.
    This makes me feel a little sad about my 6970. But nonetheless, it makes me smile for the 8xxx.

    Nice write up Scott!
    Nice card too.

    • Left_SHifted
    • 8 years ago

    finally, thanks Scott

    • StuG
    • 8 years ago

    Thanks! Well done and worth the wait, though I got a bit antsy towards the end. I may well have found my new card. 🙂

    • MadManOriginal
    • 8 years ago

    Scott, I have a suggestion:

    In the page 9 Skyrim tests, the final chart is for ‘time spent beyond 50ms’, but all cards except the GTX 280 have zero frames beyond 50ms. In other games this isn’t the case, and so the same chart is actually useful, but for Skyrim that chart is useless because it shows no differentiation between the cards. (The same is true, to a lesser extent, for the same chart for BF3 on page 11.)

    I understand why you’d want to use the same time of 50ms for all games and want to avoid possible shenanigans by picking a time that favors one card disproportionately, but maybe consider changing the time from 50ms to a lower number in a case like Skyrim to at least show some difference between the cards.

      • Meadows
      • 8 years ago

      Bad suggestion. The 50 ms barrier was chosen perfectly soundly to represent a noticeable hitch (or, if maintained, crappy gameplay that’s hard to enjoy).

      Do you seriously want to cherry-pick the graphs and compromise the quality of the review in order to show [i<]one single game[/i<] in a different light? The chart shows what it's supposed to show: that it [i<]doesn't matter which[/i<] high-end card you pick for Skyrim when upgrading a PC, none of them will jerk during gameplay.

        • yogibbear
        • 8 years ago

        If I could give you 100 thumbs up I would.

          • NeelyCam
          • 8 years ago

          You can buy TR thumbs at NeelyCam’s ThumbUpp-store, $9.99 for a hundred:

          [url<]http://www.thumbupp.com[/url<]

            • no51
            • 8 years ago

            Damn you, I actually clicked that.

        • thermistor
        • 8 years ago

        I’m not gonna thumb ya down, but the OP has a valid point. The 50 ms barrier is chosen perfectly soundly and arbitrarily. Why 50 and not, say, 57 or 46.5 ms? I can always tell it’s arbitrary because the person selecting generally chooses a nice round number.

        The inverse of 50 ms is 20 FPS, so why not 40 ms, which is 25 FPS? I don’t think anyone would say that 20 FPS is quick enough to enjoy fluid gameplay, nor would 25 FPS be. I just flipped the numbers to put it into traditional gameplay FPS measurement terms, not to throw cold water on the new methodology, which is better.

        It would be interesting to see the plot at 30, 40, 50, and 60 ms just to see the separation between solutions. And different (read: pickier) people will have different thresholds for acceptability.

        But to claim it’s not arbitrary is mistaken. One way to correct the arbitrariness is to do a survey of an acceptable sample size of gamers (preferably 30 minimum, with multiple repeats of identical tests), and determine the threshold of the sample population. Then either take the mean of the population, or some % confidence interval for the acceptable threshold in ms at which gamers’ experiences will not be compromised.

        This post is merely to demonstrate Meadows’ error, not in any way to pour cold water on the excellent TR review.

      • jensend
      • 8 years ago

      They are still working on figuring out the best metrics here, but sticking to what's relevant to human perception is a must. They can't just raise the bar because too many cards make it over the bar; any performance differences which don't make a difference to the human-perceived smoothness of real gameplay are simply [i<]irrelevant[/i<]. So if you want them to shift their metrics down you'll just have to get busy breeding a race of superhumans who can tell the difference between 20ms frames and 25ms frames. In the meantime, the rest of us will content ourselves with what actually makes a difference to us.

      A better idea would be finding a better weighting function to avoid problems with the cutoff. 25ms frames are quite clearly OK and 65ms frames are quite clearly a problem; there's nothing that tells us that 49ms frames are OK but 50ms frames are a problem. They need a weighting function from frame time to "badness" that better reflects our perception. When they simply report the number of frames above 50ms, that's equivalent to assigning all frames below that a "badness" of 0 and those above it a "badness" of 1. The "time spent above 50ms" is only slightly better: it assigns everything below 50ms a 0, and for x > 50 the badness is x. These are very simplistic and can easily be improved upon.
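      To make that concrete, here is a minimal sketch (in Python; not anything TR has published) of the weightings being compared: the two hard-cutoff schemes described above next to one arbitrary smoother, superlinear alternative. The frame times and the cubic penalty are made-up examples, not measured data.

      ```python
      # Sketch of frame-time "badness" weightings; frame times (in ms) are made up.
      frame_times = [16, 18, 22, 49, 51, 70, 33, 95]

      def frames_over(times, cutoff=50.0):
          """Hard cutoff: badness 0 below the cutoff, 1 above it."""
          return sum(1 for t in times if t > cutoff)

      def time_over(times, cutoff=50.0):
          """The 'badness is x above 50ms' weighting described above."""
          return sum(t for t in times if t > cutoff)

      def smooth_badness(times, scale=25.0):
          """An arbitrary smoother, superlinear penalty with no hard cutoff:
          long frames hurt far more than proportionally, short ones barely count."""
          return sum((t / scale) ** 3 for t in times)

      print(frames_over(frame_times))     # 3
      print(time_over(frame_times))       # 216 (51 + 70 + 95)
      print(smooth_badness(frame_times))  # dominated by the 70ms and 95ms frames
      ```

      Whether the exponent should be 2, 3, or something fitted to perception data is exactly the open question being raised here.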

        • Arag0n
        • 8 years ago

        I love this 50ms metric because it underscores what I have ALWAYS said to a friend who usually takes these websites as a rule book. It’s not that you need 60fps to have smooth gameplay; you need a smooth frame rate of at least 30fps. But traditionally, in order to get something that doesn’t have significant drops, you need to be well over that 30fps, so random hiccups don’t happen or are meaningless.

          • Meadows
          • 8 years ago

          You do need 60 fps, because 30 fps can appear quite jerky regardless of how consistent it otherwise is.

            • jensend
            • 8 years ago

            Online claims about perception which aren’t backed by double-blind testing are worth less than the electrons used to display them.

            I think I’ve seen results from a double-blind test which showed that almost everyone can’t distinguish between 30fps and much higher framerates if the 30fps frames had motion blur; I can’t find the test right now but I do see plenty of people making that claim.

            I’m quite certain that the rate required for perceptual transparency is considerably less than 60fps if some motion blur is used, and I suspect that even without motion blur 60fps isn’t required. But without better data this is a pretty pointless argument to get into.

            • Meadows
            • 8 years ago

            Computers can’t generate “photorealistic” motion blur, and the cheap hack of a “motion blur” that they DO generate only gives the illusion of smoothness at the cost of latency.

            Don’t get me wrong, it’s one of my favourite graphics effects, but it’s not nearly there yet.

            • jensend
            • 8 years ago

            Computers quite definitely can do photorealistic motion blur. Not in realtime on standard hardware, of course. But that’s completely beside the point.

            The reason I mentioned motion blur had nothing to do with current game engines and their hack effects and everything to do with tests of human visual perception. Temporal aliasing is a big part of why lower framerates look choppy; if you go to a theater and watch a movie (almost universally 24fps) it doesn’t seem nearly as choppy as a 24fps game. The reason is motion blur- both natural (because cameras don’t have infinitely fast shutters) and artificial (esp on stop-motion and CG stuff).

            As I said, the test I’m remembering w.r.t. 30fps being perceived as fully continuous was with motion-blurred frames (I think it was film frames and thus the blur was naturally achieved by the camera, but that makes no difference). The FPS threshold where the illusion of motion is fully achieved will of course be higher for non-blurred frames than for blurred ones, but I still think it’s under 60fps. Again, we need double-blind tests to get any real accuracy here.

            • derFunkenstein
            • 8 years ago

            [quote<]But that's completely beside the point.[/quote<] That's actually *entirely* the point when it comes to games. If it can't be done realistically in real-time, it doesn't help the illusion of motion.

            • jensend
            • 8 years ago

            *facepalm*

            Look, I’m not talking about using motion blur in games. I never was talking about using motion blur in games. I tried saying this twice already; I guess I’m not doing a good job of making myself clear. My points are a) finding the framerate required for perceptual transparency in games requires double-blind testing rather than anecdotal evidence and b) my personal hypothesis is that perceptual transparency probably happens below 60fps.

            The only reason I brought motion blur up is because the only scientific test on the subject which I remember reading about happened to use motion-blurred frames. Its conclusion that people can’t tell the difference between 30fps and much higher framerates doesn’t provide [i<]evidence[/i<] for my hypothesis, since I'm talking about non-blurred frames and motion blur does help with perceived smoothness. I mentioned it because I do think it provides some [i<]support[/i<] for my opinion, since I don't think motion blur makes a tremendously huge difference. I'd think that if motion-blurred frames reach perceptual transparency by 30fps then static frames may normally reach it around 40fps.

            • Firestarter
            • 8 years ago

            A double-blind test to say that 30fps is not enough? To suggest to an audience like this that we can’t tell the difference between 30 and 60 frames per second is quite frankly ludicrous. No amount of motion blur is going to help you with that.

            If you want choppy and blurry video to look at, go buy a ticket to the cinema. I’ll be over there checking out the 120hz monitors.

            • jensend
            • 8 years ago

            Go ahead and check out those 120Hz monitors, along with the [url=http://www.noiseaddicts.com/2008/11/most-expensive-speaker-cable-world-audioquest-audiophile/<]$21,000 audio component cables[/url<] ("to suggest to an audience like this that we can't tell the difference between those and $5 cables is quite frankly ludicrous"), [url=http://penny-arcade.com/comic/2002/11/25<]supernatural power equipment[/url<], etc. Whatever makes you feel better about yourself.

            • Firestarter
            • 8 years ago

            [url<]http://www.anandtech.com/show/3842/asus-vg236h-review-our-first-look-at-120hz[/url<] [quote<]The ASUS VG236H was my first exposure to 120Hz refresh displays that aren’t CRTs, and the difference is about as subtle as a dump truck driving through your living room. I spent the first half hour seriously just dragging windows back and forth across the desktop - from a 120Hz display to a 60Hz, stunned at how smooth and different 120Hz was. Yeah, it’s that different.[/quote<] I guess Brian Klug of Anandtech was high as a kite, totally off his knocker, smoking crystals and stuffing poppers up his ass then, right? Unlike those cables, of which one could measure the properties and prove that they don't do jack, you can SEE the difference quite clearly with 120hz monitors. "As subtle as a dump truck", apparently.

            • jensend
            • 8 years ago

            Just because he claims he can see it means [i<]absolutely nothing[/i<] without blind testing. People feel just as certain that their sound is much much better with the $21K cables, and they write glowing reviews of how stunned they were at how marvelous and different their new cables are. It's all the placebo effect and other well-known psychological biases. If you see a high-end monitor being sold by a normally reputable company for thousands of dollars, the natural presumption is that you must be getting something for all that cash. If you actually spend the money to buy it, you'd be all the more certain to claim there was a difference, because it's human to rationalize your past behaviors. "I'm not the kind of person who frivolously wastes money, therefore this 120Hz display makes all the difference" is a very convincing argument to your subconscious but it's very scientifically unconvincing. As I said above, perceptual claims without double-blind testing are worth less than the electrons required to display them.

            • Firestarter
            • 8 years ago

            I guess “about as subtle as a dump truck driving through your living room” is just too subtle to detect without double-blind testing then. I agree with you that there are many phenomena that people claim to detect that are justly eliminated using that methodology, and that it’s a good idea to apply it to contentious claims, especially to high-end items where post-purchase rationalization is sure to come into play.

            However, I take offense at your claim that what is obvious to many of us is a worthless, baseless opinion without it being scientifically proven using your preferred testing methodology. The difference, as Brian pointed out, is not as subtle as you think. You’d know that if you had owned a 100+Hz CRT monitor and used it back in the day to play Quake, CS or a similar twitch FPS.

            In fact, I’m confident that a test that focuses purely on the effect of refresh rate on player skill will show a statistically significant correlation between refresh rate and skill for players of all skill levels. Of course, without actually conducting the test that opinion is worth less than the bits used to store it in the database, but consider it an indicator of how convinced I am of the merits of this particular brand of kool-aid.

            • jensend
            • 8 years ago

            It’s not just [i<]my[/i<] preferred testing methodology, it's [i<]the only scientifically valid[/i<] methodology when it comes to things like this. And no offense is meant to you or Brian Klug or anybody else in saying you quite possibly might be fooled about what you can perceive; it's just part of being human, and all of us, including the best and brightest among us, are subject to cognitive biases we're not aware of. I can respect your claim that tests would bear you out, and since I can't find the citation for the study I'm remembering, I'm not on the most solid ground here either. We need real testing to find these things out. (One thing to keep in mind is that I'm claiming people probably can't distinguish [i<]consistent[/i<] 30fps w/motion blur or ~40fps w/o blur from vastly higher frame rates; since games' frame times vary so much, their frame rates may need to be [i<]averaging[/i<] well above that to avoid hitting perceptible slowdowns.)

            You bring up two interesting points. The first is your post below saying that the perceptual transparency threshold might be higher for interactive media. Not only is this possible in itself, but there also may be [url=http://www.anandtech.com/show/2803<]particularities about the many steps in the input/display chain, incl. the way many game engines process input, that make a higher frame rate more detectable than otherwise[/url<].

            The second interesting point is your proposed double-blind test, focusing on player performance instead of what people can consciously distinguish. I suppose it is possible that small effects on performance could persist beyond the perceptual transparency threshold, in which case frame rates above that threshold could still matter a little to players of twitch multiplayer games and matter a lot to competitive/professional twitch gamers. I don't think it would impact the enjoyability of single-player or non-twitch multiplayer games at that point though.

            • Firestarter
            • 8 years ago

            You’re not catching my drift: it is NOT a subtle effect, not at all. People like me can instantly see and feel the difference between playing a twitch FPS like CS or Quake at 60Hz and playing it at 120Hz. And I’m a very average FPS player who probably gets his ass kicked by 50% of FPS-playing gerbils; people with high skill are even more likely to score 100% on a double blind of any length.

            As for the whole input/display chain affecting the threshold beyond which we can’t tell the difference, I absolutely agree with you. That is one of the reasons that I hate the Unreal3 engine, its processing adds considerable lag which makes low frame rates unplayable for me, even when a different game with similar frame rates works fine. Games with that kind of lag benefit the most from high frame and refresh rates. Games that don’t suffer from it still benefit though, more so when the skill plateau of the game is very high.

            About that test of skill, I’m not arguing that performance would be affected even when the refresh and frame rate is beyond that transparency threshold. I’m arguing that the effect is so pronounced that it definitely has an impact on skill, even when you consider the multitude of other factors that influence player skill.

            As for citing studies, here’s one that found a 7-fold increase in skill when going from a slideshow to 60fps: [url<]http://web.cs.wpi.edu/~claypool/papers/fr-rez/paper.pdf[/url<] If you feel like finding a paper that completely invalidates my point, knock yourself out: [url<]http://scholar.google.com/scholar?q=frame+rate+player+skill&hl=en[/url<]

            • jensend
            • 8 years ago

            Good find! That paper does show a tremendous difference between 7fps and 60fps, but it shows only a minimal difference between 30fps and 60fps. User perception of quality (p.8) barely budged at all between 30fps and 60fps. If they had an additional data point at 45fps I’d be willing to bet they wouldn’t have seen any significant difference between that and 60fps.

            Since you mentioned 100Hz+ CRTs in your previous post and said you’re interested in checking out 120Hz LCDs, I’m guessing that your experience instantly detecting the difference between 60Hz and 120Hz with CS and Quake probably came with CRTs. You may already know this, but the difference between a 60Hz refresh rate and a higher refresh rate on a CRT is much much greater than the difference between refresh rates on an LCD- for a CRT the screen is pretty much black the majority of the time, and the refresh rate determines how many times per second its phosphors are lit up; the eye’s inability to very quickly adjust to the darkness provides the illusion of continuity. See [url=http://en.wikipedia.org/wiki/Persistence_of_vision<]Wikipedia's article about persistence of vision[/url<]. I was quite reliably able to tell when I was sitting at a 60Hz CRT screen because the flickering was rather painful to my eyes; adjusting the refresh to 85Hz would make a huge difference to me, though some people around me couldn't tell the difference. On an LCD the image is lit the entire time; eliminating all flickering by switching to LCDs helped me get rid of eye strain and headaches. But that has nothing to do with frame rate etc. On an LCD, extra high refresh rates are often (even with the most expensive displays) made irrelevant by ghosting. See [url=http://www.displaymate.com/LCD_Response_Time_ShootOut.htm<]this shootout which concluded that there was no perceptible difference in displayed motion between 120Hz LCDs and 60Hz LCDs[/url<]. (Note that what they call "motion blur" is what I would call ghosting and has little connection with the kind of motion blur I was discussing earlier.)

            • Firestarter
            • 8 years ago

            And an additional data point at 120Hz would, based on that study, probably not be significantly different either. I assume that’s where you’d have to eliminate a lot of other factors to tell a difference in skill, more than I expected.

            You nailed the CRTs vs LCDs argument, but you are preaching to the choir here. I am very much aware of the impact of higher refresh rates on CRTs, and I’m telling you that is not what I saw back then. At 85Hz, the image of my CRT monitor was stable to my eyes, yet I could easily tell the difference between 85Hz and 120Hz just by looking and walking around a bit in Quake III Arena. Between 85Hz and 120Hz, phosphor decay played a negligible role. However, the combined effect of reduced input lag and better fluidity was easily detectable in-game. Granted, the difference between 85Hz and 120Hz was not huge, but going from 85Hz to 60Hz is another jump of the same factor.

            As for the link you quoted, that’s a test comparing HDTVs with video input at 60fps and no interactivity, and a whole can of worms of post-processing and frame interpolation. TVs that interpolate between frames to put a ‘120Hz’ or even ‘240Hz’ sticker on the demo model have little to do with computer monitors that accept 120 progressive scan frames per second, or even with 3D TVs that work with shutter glasses.

            Anyway, I’m going to stop posting here now, you have my honest to goodness word that you too can tell the difference in an interactive setting. You could go to a store that has one on display and see for yourself.

        • jensend
        • 8 years ago

        Note that as long as the “badness” function consistently has a more than linear growth rate — i.e. f(a+b) > f(a) + f(b) for all a,b >0 — this takes care of the problems involved in comparing cards which output a different number of frames during the test period.

        One example of how this has been a problem with the current simplistic weighting functions: one time they had a graph showing the number of frames over 30ms, which showed the better cards as being worse. Everybody was over 30ms almost all the time on that test, and the better cards produced more frames overall and therefore looked worse. In that case, switching to "time spent over 30ms" wouldn't have really helped either, since that would have just shown that all cards spent basically the entire test period producing >30ms frames.
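        As a toy check of that condition, with f(t) = t^2 standing in as an arbitrary superlinear badness function (a made-up choice for illustration, not TR's actual metric): the same 60 ms of wall time scores worse as one long frame than as two 30 ms frames, so a card that renders more, shorter frames during the test period can never look worse merely for producing more of them.

        ```python
        # Toy check of the superadditivity condition f(a+b) > f(a) + f(b) for a, b > 0,
        # using f(t) = t**2 as an arbitrary superlinear "badness" weighting.
        def badness(frame_time_ms):
            return frame_time_ms ** 2

        one_long_frame = badness(60)        # 3600
        two_short_frames = 2 * badness(30)  # 1800

        # One 60 ms hitch is penalized more than two 30 ms frames covering
        # the same stretch of time, so extra frames never inflate the score.
        assert one_long_frame > two_short_frames
        print(one_long_frame, two_short_frames)
        ```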

      • BobbinThreadbare
      • 8 years ago

      I think the point is that Skyrim doesn’t push modern cards, so you won’t notice a difference.

    • Krogoth
    • 8 years ago

    Scott’s review cements my conclusion on HD 7970. It isn’t that much faster than the 580 that it knocks out (40% at best, 20-30% on average), but its power efficiency is remarkable (consumes a little more juice than 5870, but it is twice as fast) and the architecture itself is a massive improvement over Cayman (Tessellation and GPGPU performance is at GF110 levels).

    I hope that the derivatives of Tahiti will inherit the same benefits. The true hardware video encoder is more of a benefit for lower-end GPUs, and I suspect AMD will throw it onto their next-generation Fusion chips.

    I hope Kepler and its incoming siblings yield similar improvements, because competition is good for the customer.

      • MadManOriginal
      • 8 years ago

      So your New Year’s resolution was to actually be..[i<]impressed[/i<] when warranted?!?

      • Deanjo
      • 8 years ago

      [quote<]It isn't that much faster than the 580 that it knocks out (40% at best, 20-30% on average)[/quote<] Another guy that can't do math.

      Skyrim: 9% faster
      Batman: 9% faster
      BF3: 17% faster
      Crysis 2: 20% faster
      Civilization: 13% faster

      Average: 13.6%
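      As a quick sanity check, assuming a plain unweighted mean of the five per-game figures above:

      ```python
      # Unweighted mean of the five per-game deltas listed above.
      deltas = {"Skyrim": 9, "Batman": 9, "BF3": 17, "Crysis 2": 20, "Civilization": 13}
      print(sum(deltas.values()) / len(deltas))  # 13.6 (Python 3 division)
      ```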

        • Krogoth
        • 8 years ago

        It depends on the resolution, levels of AA/AF.

        The differences are smaller at 2560×1600, 8xMSAA, 8xAF. None of the cards can handle the aforementioned at a buttery-smooth framerate. You have to go for a CF/SLI setup of some kind.

          • Meadows
          • 8 years ago

          “Sorry, my bad” would actually suffice.

        • Meadows
        • 8 years ago

        I admire the jab, but sadly, you’re only reinforcing his first point!

          • Deanjo
          • 8 years ago

          His conclusion is correct; his estimations of how much faster are not. That being said, a process die shrink alone on a GF110, allowing higher clock speeds, should easily make up the difference, let alone the architectural improvements coming with Kepler.

        • Arclight
        • 8 years ago

        I see a flaw in your logic; I didn’t check the math. You’re using only 5 games benched at a single resolution to determine the difference between 2 desktop video cards that can play, idk, thousands of games? All we can say is that a stock HD 7970 is usually better than an overclocked GTX 580.

        When you take into consideration the fact that Nvidia has had time to mature the drivers since they launched the GTX 480 while AMD just started with theirs…….one fanboy or two might call it a clear win for AMD. Certainly though, Nvidia will retaliate with their Kepler cards; the only downside is that it will take time…some people just don’t want to wait…

          • Deanjo
          • 8 years ago

          There is no flaw in the logic. It is pure math. Yes, I only used the five games here, which represent the current hot titles. This is a key point, as older titles don’t receive the same attention for optimization as the newer current titles. Concerning drivers, there very well can be older titles that run much better on Fermi because it was out when those titles were “hot”. So if anyone’s logic is flawed it is yours, as older games do not always get the same driver optimizations, which is important especially when a new architecture is involved.

            • Arclight
            • 8 years ago

            [quote<]Skyrim 9% faster Batman 9% faster BF3 17% faster Crysis 2 20% faster Civilization 13% faster[/quote<] I don't know if Crysis 2 and Civilization fit that "hot titles" description of yours, and again, they were measured at only one resolution. I can only agree to disagree.

            • flip-mode
            • 8 years ago

            [url=http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/49646-amd-radeon-hd-7970-3gb-review-25.html<]Arclight is on point.[/url<]

            • BestJinjo
            • 8 years ago

            But someone who is buying a $500-550 graphics card likely wants to play older games maxed out (i.e., perhaps with mods and super-sampling AA), or has a 2560×1440/1536 monitor. In that case, testing all these games fully maxed out shows exactly how much faster HD7970 is. In the above review, in the latest “hot” games and demanding DX11 games (even if you don’t think Civ5 and Crysis 2 are “hot”), HD7970 was barely 20% faster. From that Hardware Canucks review, it’s about 24% faster. That’s not going to be enough at all to combat Kepler, probably not even the GTX670 (2nd fastest). Although, it’s more likely than not that AMD will release a faster version of the card and AIBs will add factory pre-overclocked editions.

            • Arclight
            • 8 years ago

            I agree with all you said. But Kepler is not here yet, and Deanjo was discussing the GTX 580 vs the HD 7970.

            Edited.

            • flip-mode
            • 8 years ago

            Am I the only one that is fully confident that AMD will drop prices on this thing as needed when Kepler comes?

            Also, the Kepler flagship is rumored to come toward the end of the year, is it not? The rumor mill has been claiming that Nvidia will start with the smaller chips. If that is the case, a lot can happen in the next 9 months.

            AMD has always fought Nvidia’s flagship product with a dual-GPU card. I expect that to be AMD’s strategy again. Tahiti is looking like a continuation of AMD’s lean-die strategy – it’s actually a smaller die than Cayman. If Nvidia’s GF200 is going to be a continuation of the big-die strategy then Nvidia is going to run into the same big-die problems – i.e. power constraints – and so will have to clock the card lower or use some other tactic in order to keep thermals under control, which means that 680 vs 7970 could end up much closer than you think.

            The big difference with this launch is that AMD is actually charging for its performance leadership this time around rather than initiating a price war that Nvidia can’t fight anyway until it launches its 28nm chips.

            • khands
            • 8 years ago

            Agreed, AMD is just taking the cash while they can. It’s sad that they have to, and this thing would probably be a lot cheaper if they didn’t need the cash and got better yields. They just don’t want another 5870 issue.

            • flip-mode
            • 8 years ago

            Yep. Also there’s the issue that this card is really only worthwhile if you have a 30″ monitor or Eyefinity array. Anyone with less than 2560×1600 has no need for such a powerful card.

            • cegras
            • 8 years ago

            I don’t know about Crysis 2, but Civ V consistently ranks in the top 5 games being played on Steam, with live stats hovering around 10,000. So yes, Civ V is a lot more popular than people think. Since Civ V is tied into Steamworks I suppose the player number is fairly accurate.

        • wierdo
        • 8 years ago

        That’s true if comparing Tech’s set of benches, which are quite valid. HardOCP’s test methods are another way to look at it, and they stress their systems in interesting ways that may somewhat validate the 20-30% avg performance claims.

        [url<]http://hardocp.com/article/2011/12/22/amd_radeon_hd_7970_video_card_review/[/url<] Seems to me a good chunk of the big gains are noticeable in Eyefinity related scenarios though, this card may potentially be sufficient for single card Eyefinity gaming in some cases where SLI/Crossfire may have been needed last year. Important note: They're using an overclocked 580 model, so this may be of interest to some that are curious about what some mhz bump on the 580 may yield.

        • ptsant
        • 8 years ago

        It’s not only about being faster, but also about being *consistently* faster in most titles, resolutions, and workloads. Most importantly, weak areas have been patched (tessellation, for example), the compute/GPGPU power has been vastly improved, power consumption is very reasonable, the overclocking headroom is *massive*, and all desirable features are there: excellent multimonitor support, DirectX 11.1, a video *encoder* (not decoder, which is trivial), PCIe 3.0.

        My conclusion is that this card is not *just* a 580 with a 10-15% better framerate, which would be easy to do, but a well-rounded card that pleases everyone. This is much harder. Now, each individual feature may be worth little to some of us (I wouldn’t care about tessellation with current titles…) but the whole is greater than the sum of the parts.

        For those who absolutely care about percentages, overclocked versions could give an important boost to actual performance compared with the factory edition, but with a price to pay in power consumption and noise. You take your pick…

      • ptsant
      • 8 years ago

      Just to nit-pick: all modern cards have a video *decoder*. This one has a video *encoder*, which is not the same thing. Real-time HD encoding can really stress your new 2600K, for example, not to mention filtering and effects…

        • Krogoth
        • 8 years ago

        My bad. Video encoding is still very useful to the HTPC and content-creation crowd once it gets distilled down to lesser GPUs, especially the IGPs found on modern CPUs. 😉

    • Arclight
    • 8 years ago

    Finally the gerbils can shut up.

    • MadManOriginal
    • 8 years ago

    It’s aliiiive!

    • BoBzeBuilder
    • 8 years ago

    flip-mode is on point.

      • derFunkenstein
      • 8 years ago

      Forget Krogoth and his unimpressed-ness; this was a meme with balls.

      • flip-mode
      • 8 years ago

      LOL.

        • yogibbear
        • 8 years ago

        Trust BoBzeBuilder to pull a Nostradamus on us.

    • JustAnEngineer
    • 8 years ago

    Outstanding. I’ve been looking forward to this.

    • ub3r
    • 8 years ago

    shotgun!
