AMD’s Radeon HD 7790 graphics card reviewed

Some of us were expecting AMD to unleash next-generation Radeons this spring, but it was not to be. We learned last month that the company intends to keep its Radeon HD 7000 series around through much of 2013. A completely new product series is in the works, but it’s not due out until very late this year—likely just before Christmas.

As we hung our heads listening to the news, we learned that AMD’s plans didn’t preclude new releases long before the holidays. In fact, we were told that the Radeon HD 7000 series would soon be expanded with fresh cards featuring new silicon. We’d soon have some previously unseen hardware to sink our teeth into.

True to its word, AMD has now introduced the Radeon HD 7790, a $149 graphics card powered by a new GPU called Bonaire. This addition offers an interesting middle ground between the Radeon HD 7770 and the Radeon HD 7850, not to mention a potentially compelling alternative to Nvidia’s GeForce GTX 650 Ti.

The new Bonaire GPU

Buckle up, folks, because AMD’s code names get a little bumpy here. Bonaire is officially part of the Sea Islands product family. Sea Islands no longer implies a next-gen graphics architecture as it once did, however; in AMD’s words, the name now encompasses “all products we’re producing in 2013.” Bonaire, despite being a completely new ASIC, is actually based on the exact same Graphics Core Next graphics architecture as the Radeon HD 7000 series (which was itself code-named Southern Islands).

Bonaire also happens to be the name of an island in the planet’s northern hemisphere, and the Northern Islands code name refers to the Radeon HD 6000 series. But I digress.

All this code-name mumbo jumbo aside, Bonaire is an exciting addition to AMD’s GPU lineup. While it features the same 128-bit memory interface and ROP arrangement as Cape Verde, the chip that powers the Radeon HD 7770, it has four more compute units and one additional geometry engine. That means the ALU count has gone up from 640 to 896, the number of textures filtered per clock has increased from 40 to 56, and the number of triangles rasterized per clock cycle has risen from one to two.

| GPU | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader ALUs | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fabrication process node |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cape Verde | 16 | 40/20 | 640 | 1 | 128 | 1500 | 123 | 28 nm |
| Bonaire | 16 | 56/28 | 896 | 2 | 128 | 2080 | 160 | 28 nm |
| Pitcairn | 32 | 80/40 | 1280 | 2 | 256 | 2800 | 212 | 28 nm |
| GF114 | 32 | 64/64 | 384 | 2 | 256 | 1950 | 360 | 40 nm |
| GK104 | 32 | 128/128 | 1536 | 4 | 256 | 3500 | 294 | 28 nm |
| GK106 | 24 | 80/80 | 960 | 3 | 192 | 2540 | 214 | 28 nm |

Translation: Bonaire is rigged to offer higher floating-point math performance, more texturing capability, and better tessellation performance than Cape Verde. Also, as you’ll see on the next page, AMD equips Bonaire with substantially faster GDDR5 RAM, which gives it a bandwidth advantage despite its identical memory controller setup.
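Those peak figures fall straight out of the unit counts and clock speeds. Here's a minimal back-of-the-envelope sketch, using the reference 7790's 1GHz core and 6 GT/s memory; the two-flops-per-ALU-per-clock figure is the usual assumption for fused multiply-add hardware rather than anything AMD spells out in its briefing materials.

```python
# Back-of-the-envelope peak rates for Bonaire at the reference 1GHz core
# clock and 6 GT/s GDDR5, using the unit counts from the table above.
# Each GCN ALU can retire one fused multiply-add (two flops) per clock.

core_clock_ghz = 1.0    # reference Radeon HD 7790 core clock
mem_rate_gtps  = 6.0    # GDDR5 transfer rate, GT/s
alus           = 896
texels_per_clk = 56     # int8 texels filtered per clock
rops           = 16
bus_width_bits = 128

shader_tflops  = alus * 2 * core_clock_ghz / 1000        # ~1.8 tflops
filtering_gtex = texels_per_clk * core_clock_ghz         # 56 Gtexels/s
rop_rate_gpix  = rops * core_clock_ghz                   # 16 Gpixels/s
bandwidth_gbs  = bus_width_bits / 8 * mem_rate_gtps      # 96 GB/s

print(shader_tflops, filtering_gtex, rop_rate_gpix, bandwidth_gbs)
```

These match the reference 7790 entries in the peak-rates table on the next page; scale the core clock to 1075MHz and the memory to 6.4 GT/s to get the Sapphire card's numbers.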

In addition to the different unit mix, Bonaire has learned a trick from Trinity and Richland, AMD’s mainstream APUs. That trick takes the form of a new Dynamic Power Management (DPM) microcontroller, which enables Bonaire to switch between voltage levels much quicker than Cape Verde or other members of the Southern Islands family. Behold:


The diagram above shows the different DPM states available to Bonaire. You can click the buttons under the image to switch between the first diagram, which shows Bonaire’s capabilities, and the second diagram, which shows the states available to a “Boost”-equipped version of Tahiti, as found in the Radeon HD 7970 GHz Edition.

In Tahiti, there are four discrete DPM states, each with its own voltage and clock speed. The GPU can switch between clock speeds very rapidly—in as little as 5-10 ms—but voltage changes require “several hundred milliseconds.” In order to stay within its power and thermal limits at the High and Boost states, the chip attempts to reduce its clock speed without lowering the voltage level. AMD calls these reductions “inferred” states. They enable the GPU to respond quickly to load increases in order to prevent power consumption from going over the limit. If lowering the clock speed isn’t enough, then the chip falls back to a lower discrete state, which involves a voltage cut—and therefore takes longer than a simple clock-speed adjustment.

That’s not a bad approach. However, it means the GPU may often find itself with more voltage than it needs to operate at a given clock speed. As a result, power consumption may be higher than it should be, while the core clock speed (and thus performance) might be lower than it needs to be.

How does Bonaire improve on this formula? Well, it has a total of eight discrete DPM states, each with a different clock speed and voltage. Bonaire can switch between those states as quickly as every 10 milliseconds, which removes the need for the “inferred” states seen in Tahiti—that is, clock speed reductions without corresponding voltage cuts. This means the GPU can very quickly select the optimal clock speed and voltage combination to offer the best performance at the predefined power envelope.
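To make the difference concrete, here's a conceptual sketch of that selection logic. The clock/voltage pairs and the power model below are invented for illustration (AMD doesn't publish Bonaire's actual tables); the point is just that with eight discrete states and roughly 10 ms switching, the controller can pick the fastest clock/voltage pair that fits its power budget instead of holding voltage and inferring a lower clock.

```python
# Conceptual sketch of Bonaire-style DPM state selection.
# The state table and power model are illustrative, not AMD's real values.

# (clock MHz, voltage V) pairs, ordered slow to fast -- hypothetical numbers
DPM_STATES = [(300, 0.85), (450, 0.90), (550, 0.95), (650, 1.00),
              (750, 1.05), (850, 1.10), (950, 1.15), (1000, 1.20)]

def estimated_power(clock_mhz, voltage, activity):
    """Dynamic power scales roughly with frequency * voltage^2 * activity."""
    return activity * clock_mhz * voltage ** 2 / 10.0   # arbitrary scale factor

def pick_state(power_budget_w, activity):
    """Return the fastest discrete state whose estimated power fits the budget."""
    for clock, volts in reversed(DPM_STATES):
        if estimated_power(clock, volts, activity) <= power_budget_w:
            return clock, volts
    return DPM_STATES[0]   # nothing fits: fall back to the lowest state

# A heavy workload forces a mid-table state; a lighter one allows full clocks.
print(pick_state(power_budget_w=85, activity=0.8))
print(pick_state(power_budget_w=85, activity=0.5))
```

A Tahiti-style controller, by contrast, chooses among only four voltage levels and has to drop the clock within a state (an "inferred" state) whenever the budget is exceeded, since a full voltage change takes hundreds of milliseconds.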

Although it lacks support for the Boost power state, the Cape Verde chip in the Radeon HD 7770 otherwise behaves much like Tahiti, whose DNA it shares. Thus, the additional power states in Bonaire give the Radeon HD 7790 an advantage in power efficiency over the 7770.

The card(s)

The Radeon HD 7790 is slated to be available for purchase on April 2 at a suggested e-tail price of $149. At “participating retailers,” the card will be sold with a free copy of BioShock Infinite. That’s a welcome perk, since the Radeon HD 7750 and 7770 were left out of the Never Settle Reloaded bundle.

AMD sent us a list of some of the Radeon HD 7790 variants its partners have in store. Here it is:

Officially, the Radeon HD 7790 is meant to run at an even 1GHz with 6Gbps memory. As you can tell from the list above, however, retail cards with above-reference clock speeds will be commonplace—possibly more so than standard designs. When briefing us about the 7790, AMD suggested that it gave partners an exceptional amount of leeway in designing their cards. The chipmaker also said we’ll see versions of the 7790 with 2GB of GDDR5 memory, up from the default 1GB. None of those 2GB models are in the list above, though, and we weren’t given a timetable for their arrival.

AMD eschewed sending us a reference board for our review. Instead, the company sent us Asus’ Radeon HD 7790 DirectCU II, which is one of the nicer offerings coming in April. Asus expects to sell it for around $155, but exact pricing hasn’t yet been set.

That big, heatpipe-laden dual-slot cooler almost dwarfs the stubby circuit board, which measures only 6.8″ in length. Cooler included, the card is about 8.5″ long. The DisplayPort, HDMI, and dual DVI outputs can drive up to six displays (provided you use a DisplayPort hub), and the card takes power from a single six-pin PCI Express connector. (Note that the PCIe connector is rotated, so that the clip faces the back of the circuit board. If it were in the usual position, the heatsink fins would be in the way.)

Unfortunately, we didn’t get the DirectCU II until Tuesday, which gave us too little time to benchmark it. By then, we’d already started testing Sapphire’s Radeon HD 7790, which FedEx delivered the day before.

The Sapphire card has the same 1075MHz core speed and 6.4Gbps memory speed as the Asus. It features a similar dual-fan cooler, albeit without conspicuous heat pipes, and it has the same 8.5″ overall length and display output arrangement. The circuit board spans the whole length of the cooler, however, and the PCI Express power connector sits at the top of the card with the clip facing the front—a pretty common arrangement.

A farewell to the Radeon HD 7850 1GB

AMD’s Radeon HD 7850 1GB came out in October. Since its launch, the card has wooed value-conscious gamers by delivering much of the performance of its 2GB namesake at a lower price—often as little as $160. We’ve recommended it in several of our system guides.

Now, sadly, the 7850 1GB is about to disappear from retail listings forever. The Radeon HD 7790 will be its de facto successor.

According to AMD, the 7850 1GB is going away because memory makers have stopped producing the 128MB GDDR5 chips it requires. The card has four 64-bit dual-channel memory controllers that must each be fed by two memory chips; it therefore needs eight 128MB chips to achieve a 1GB capacity. The 7790 doesn’t have that problem. With only two 64-bit memory controllers, it can deliver the same 1GB capacity using larger, 256MB GDDR5 chips, which are still being made.
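The capacity arithmetic is simple enough to show in a couple of lines:

```python
# Memory capacity = memory controllers x chips per controller x chip density.
# Each 64-bit dual-channel controller is fed by two GDDR5 chips.

hd7850_1gb_mb = 4 * 2 * 128   # four controllers, eight 128MB chips
hd7790_mb     = 2 * 2 * 256   # two controllers, four 256MB chips

print(hd7850_1gb_mb, hd7790_mb)   # both work out to 1024MB, i.e. 1GB
```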

This disappearing act gives the 7790 some pretty big shoes to fill. GPUs with 128-bit memory interfaces don’t often match the performance of their 256-bit siblings, especially when they’re based on the same architecture. If the 7790 fails to deliver, folks could be forced to splurge for a Radeon HD 7850 2GB, which would set them back at least $180.

The competition

The Radeon HD 7790 won’t just be trying to live up to the 7850 1GB’s legacy. It will also face competition from higher-clocked versions of Nvidia’s GeForce GTX 650 Ti, which are available in the same price range, as well as some of the old GeForce GTX 560 cards that remain on the market. We’ll look at real-world benchmarks very soon, but before we do, let’s take a quick look at theoretical numbers. The table below includes peak rates for both reference cards and the souped-up variants we’ve got in our labs.

| Card | Base clock (MHz) | Boost clock (MHz) | Peak ROP rate (Gpix/s) | Texture filtering int8/fp16 (Gtex/s) | Polygon throughput (Mtris/s) | Peak shader tflops | Memory transfer rate (GT/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Radeon HD 7770 | 1000 | N/A | 16 | 40/20 | 1000 | 1.3 | 4.5 | 72 |
| Radeon HD 7790 | 1000 | N/A | 16 | 56/28 | 2000 | 1.8 | 6.0 | 96 |
| Sapphire Radeon HD 7790 | 1075 | N/A | 17 | 60/30 | 2150 | 1.9 | 6.4 | 102 |
| Radeon HD 7850 1GB | 860 | N/A | 28 | 55/28 | 1720 | 1.8 | 4.8 | 154 |
| GeForce GTX 650 Ti | 928 | N/A | 15 | 59/59 | 1856 | 1.4 | 5.4 | 86 |
| Zotac GeForce GTX 650 Ti 2GB AMP! | 1033 | N/A | 17 | 66/66 | 2066 | 1.6 | 6.2 | 99 |
| GeForce GTX 560 | 810 | N/A | 26 | 45/45 | 1620 | 1.1 | 4.0 | 128 |
| MSI GeForce GTX 560 Twin Frozr II | 870 | N/A | 28 | 49/49 | 1760 | 1.2 | 4.2 | 134 |

Compared to the Radeon HD 7850 1GB, the 7790 in theory has similar texture filtering and shader performance, and it should offer even higher tessellation throughput. However, the 7790 has only three fifths the ROP rate, which means less resolve power for multisampled anti-aliasing, and two thirds the memory bandwidth. Those limitations may or may not affect real-world gaming performance, depending on the nature of the graphics workload.

The 7790 is more comparable to the GTX 650 Ti. On paper, these two cards have roughly equivalent ROP rates, polygon throughput, and memory bandwidth. The 7790 enjoys an advantage in shader throughput, while the 650 Ti promises better texture filtering performance, especially for fp16 texture formats. This contest is probably too close to call at this stage.

As for the old GTX 560, that card has the same advantages as the Radeon HD 7850 1GB—higher memory bandwidth and ROP rates—but it trails the 7790 in key rates like texture filtering, shader arithmetic, and polygon rasterization. The 7790 may come out ahead more often than not in newer games, especially those that use shader-based antialiasing techniques instead of MSAA.

A quick word about our guinea pigs

We had an unusually short time window to review the Radeon HD 7790, and AMD didn’t reveal the 7790’s pricing until Tuesday evening. We did our best to estimate the card’s positioning and obtain a comparable GeForce GTX 650 Ti from Nvidia’s graphics card partners, but we were unable to get one in time.

The card you’ll see tested alongside the 7790 over the next few pages is a Zotac AMP! Edition offering, which has 2GB of onboard memory and somewhat higher clock speeds than most other GTX 650 Ti variants. It currently retails for $181 at Newegg, or about $20 more than what Sapphire expects to charge for its Radeon HD 7790 at launch.

Now, there’s nothing particularly wrong with comparing these two cards. They’re both genuine retail offerings, and the performance comparison should be enlightening. That said, we’d ask that you please keep the price difference in mind as you peruse our benchmarks. GTX 650 Ti variants priced around the $160 mark are likely to be a little slower than our sample. Also, please stay tuned. Very soon, we’ll have another article with more benchmarks that include another version of the GTX 650 Ti.

We were, however, able to get a new model of the Radeon HD 7770 GHz Edition in time for the review: Diamond’s version of the card, which is a good representative of vanilla offerings available out there. It runs at the reference 1000MHz core and 4500MT/s memory speeds, and it has a stubby dual-slot cooler with a large, quiet fan. This seems to be a stubbier version of the model selling at Newegg for $135.99 (before a $20 mail-in rebate) right now.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median results. Our test systems were configured like so:

Processor: Intel Core i7-3770K
Motherboard: Gigabyte Z77X-UD3H
North bridge: Intel Z77 Express
Memory size: 4GB (2 DIMMs)
Memory type: AMD Memory DDR3 SDRAM at 1600MHz
Memory timings: 9-9-9-28
Chipset drivers: INF update 9.3.0.1021, Rapid Storage Technology 11.6
Audio: Integrated Via audio with 6.0.01.10800 drivers
Hard drive: Crucial m4 256GB
Power supply: Corsair HX750W 750W
OS: Windows 8 Professional x64 Edition

 

| Card | Driver revision | GPU base clock (MHz) | Memory clock (MHz) | Memory size |
| --- | --- | --- | --- | --- |
| Diamond Radeon HD 7770 | Catalyst 12.101.2.1000 beta | 1000 | 4500 | 1GB |
| Sapphire Radeon HD 7790 | Catalyst 12.101.2.1000 beta | 1075 | 6000 | 1GB |
| XFX Radeon HD 7850 1GB Core Edition | Catalyst 12.101.2.1000 beta | 860 | 1200 | 1GB |
| MSI GeForce GTX 560 Twin Frozr II | GeForce 314.21 beta | 880 | 1050 | 1GB |
| Zotac GeForce GTX 650 Ti AMP! | GeForce 314.21 beta | 1033 | 1550 | 2GB |

Thanks to AMD, Corsair, and Crucial for helping to outfit our test rig. Asus, Diamond, MSI, Sapphire, XFX, and Zotac have our gratitude, as well, for supplying the various graphics cards we tested.

Image quality settings for the graphics cards were left at the control panel defaults, except on the Radeon cards, where surface format optimizations were disabled and the tessellation mode was set to “use application settings.” Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.

  • We measured total system power consumption at the wall socket using a P3 Kill A Watt digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its High quality preset.

  • We measured noise levels on our test system, sitting on an open test bench, using a TES-52 digital sound level meter. The meter was held approximately 8″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Synthetic testing

Yes, yes, I know you’re dying to get into game benchmarks. However, synthetic benchmarks set the stage for everything else, helping to demonstrate how well the theoretical peak throughput numbers we’ve discussed translate into delivered performance.

Texture filtering

| Card | Peak bilinear filtering (Gtexels/s) | Peak bilinear FP16 filtering (Gtexels/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- |
| Radeon HD 7770 | 40 | 20 | 72 |
| Sapphire Radeon HD 7790 | 56 | 28 | 102 |
| Radeon HD 7850 1GB | 55 | 28 | 154 |
| MSI GeForce GTX 560 Twin Frozr II | 49 | 49 | 134 |
| Zotac GeForce GTX 650 Ti 2GB AMP! | 66 | 66 | 99 |

In the real world, memory bandwidth plays a part in texturing performance. That probably explains why the Radeon HD 7790 falls behind the 7850 1GB here.

Tessellation

| Card | Peak rasterization rate (Mtris/s) | Memory bandwidth (GB/s) |
| --- | --- | --- |
| Radeon HD 7770 | 1000 | 72 |
| Sapphire Radeon HD 7790 | 2150 | 102 |
| Radeon HD 7850 1GB | 1720 | 154 |
| MSI GeForce GTX 560 Twin Frozr II | 1760 | 134 |
| Zotac GeForce GTX 650 Ti 2GB AMP! | 2066 | 99 |

Wow. The 7790’s two geometry engines do wonders for tessellation performance, especially when paired with a 1075MHz core clock speed, as on the Sapphire card.

Shader performance

| Card | Peak shader arithmetic (TFLOPS) | Memory bandwidth (GB/s) |
| --- | --- | --- |
| Radeon HD 7770 | 1.3 | 72 |
| Sapphire Radeon HD 7790 | 1.9 | 102 |
| Radeon HD 7850 1GB | 1.8 | 154 |
| MSI GeForce GTX 560 Twin Frozr II | 1.2 | 134 |
| Zotac GeForce GTX 650 Ti 2GB AMP! | 1.6 | 99 |

The 7790 comes out on top here.

We’ll have to stop our theoretical explorations here, unfortunately. I normally include LuxMark, an OpenCL-accelerated ray-tracing benchmark, in this set, but it refused to run using the drivers AMD provided for the 7790. Oh well; moving on…

Tomb Raider

Developed by Crystal Dynamics, this reboot of the famous franchise features a more believable Lara Croft who, as the game progresses, sheds her fear and vulnerability to become a formidable killing machine. I tested Tomb Raider by running around a small mountain area, which is roughly 10% of the way into the single-player campaign.

This is a rather impressive-looking game that’s clearly designed to take full advantage of high-end gaming PCs. The Ultra and Ultimate detail presets were too hard on these cards, so I had to settle for the High preset and leave the game’s TressFX hair physics disabled. Testing was done at 1080p.

| Frame time (ms) | FPS rate |
| --- | --- |
| 8.3 | 120 |
| 16.7 | 60 |
| 20 | 50 |
| 25 | 40 |
| 33.3 | 30 |
| 50 | 20 |

Let’s preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

To get a sense of how frame times correspond to FPS rates, check the table on the right.

We’re going to start by charting frame times over the totality of a representative run for each system. (That run is usually the middle one out of the five we ran for each card.) These plots should give us an at-a-glance impression of overall playability, warts and all. You can click the buttons below the graph to compare our protagonist to its different competitors.


Right away, it’s clear that the Radeon HD 7790 is much closer to the 7850 1GB than to the 7770, whose plot shows frequent spikes above 30 ms. However, the 7790’s plot is still a little higher than that of the 7850 and the more expensive GTX 650 Ti 2GB AMP! Edition, which suggests that it’s not quite as fast.

We can slice and dice our raw frame-time data in several ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. Average FPS is widely used, but it has some serious limitations. Another way to summarize performance is to consider the threshold below which 99% of frames are rendered, which offers a sense of overall frame latency, excluding fringe cases. (The lower the threshold, the more fluid the game.)

The average FPS and 99th-percentile results confirm our appraisal of the frame time plots. However, the performance difference between the 7790 and its faster rivals isn’t that big, especially in the 99th-percentile metric, which gives us a better indication of seat-of-the-pants smoothness and playability than average FPS.

Now, the 99th percentile result only captures a single point along the latency curve, but we can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph—and especially the last 5% or so—is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

Finally, we can rank the cards based on how long they spent working on frames that took longer than a certain number of milliseconds to render. Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario. Here, you can click the buttons below the graph to switch between different millisecond thresholds.
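For readers who want to reproduce these summaries from their own Fraps logs, here's a rough sketch of how such metrics can be computed from a list of per-frame render times. This isn't our actual analysis script, just the same ideas in a few lines of Python.

```python
# Rough sketch: summarizing a Fraps-style list of frame times (ms) into
# average FPS, a 99th-percentile frame time, and "time spent beyond" each
# badness threshold. Illustrative only, not the analysis script we use.

def summarize(frame_times_ms, thresholds=(50.0, 33.3, 16.7)):
    total_ms = sum(frame_times_ms)
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms

    # Crude nearest-rank 99th percentile: 99% of frames finished this fast or faster.
    ordered = sorted(frame_times_ms)
    pct99_ms = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]

    # Time spent beyond each threshold: only the portion above the cutoff counts.
    beyond_s = {t: sum(ft - t for ft in frame_times_ms if ft > t) / 1000.0
                for t in thresholds}
    return avg_fps, pct99_ms, beyond_s

# Example: a mostly smooth 16 ms run with a couple of spikes thrown in.
times = [16.0] * 500 + [40.0, 70.0, 25.0]
print(summarize(times))
```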


None of the cards spend much time beyond our most important threshold of “badness” at 50 milliseconds—that means none of them dip below the relatively slow frame production rate of 20 FPS for long. In fact, except for the Radeon HD 7770, none of our cards spend a significant amount of time working on frames that take longer than 33.3 ms to render. That should mean pretty fluid gameplay from each of them.

Crysis 3

Yep. This is the new Crysis game. There’s not much else to say, except that this title has truly spectacular graphics. To test it, I ran from weapon cache to weapon cache at the beginning of the Welcome to the Jungle level for 60 seconds per run.

I tested at 1080p using the medium detail preset with high textures and medium SMAA antialiasing.


We’re seeing a bit more variability in frame times here than we did in Tomb Raider. Variability by itself isn’t necessarily bad; it’s frame time spikes that truly impair gameplay. Except for the 7770, which struggles at these settings, the cards we tested have surprisingly similar plots.

The 7790 may not have the highest average frame rate, but its 99th-percentile frame times are lower than those of all the other cards. Given the choice, we’d pick the 7790 over the cards with higher FPS averages.

Frame times for the GeForces and the 7850 1GB start spiking around the 95th percentile, but the 7790 holds largely steady up until the 97th or 98th percentile. In other words, it stays smoother throughout a larger chunk of the run. (The 7770 shows a similar progression to the 7790, but its frame times are far higher on average. In practice, it feels very sluggish and choppy in this game.)


If the percentile line graph above didn’t make it clear, this will. The Radeon HD 7790 spends a negligible amount of time working on frames that take longer than 50 ms to render, and it also spends less time beyond 33.3 ms than the other cards. Fewer spikes, smoother gameplay.

The 16.7-ms graph doesn’t show the 7790 in as positive a light, but none of these cards are quick enough for that metric to matter very much. Even the 7850 1GB spends almost nine full seconds, or about one seventh of the run, above that threshold. (For reference, a 16.7 ms frame time works out to a 60 FPS frame rate.)

Borderlands 2

For this test, I shamelessly stole Scott’s Borderlands 2 character and aped the gameplay session he used to benchmark the Radeon HD 7950 and GeForce GTX 660 Ti. The session takes place at the start of the “Opportunity” level. As Scott noted, this section isn’t precisely repeatable, because enemies don’t always spawn in the same spots or attack in the same way. We tested five times per GPU and tried to keep to the same path through the level, however, which should help compensate for variability.

I tested at 1920×1080. All other graphics settings were maxed out except for hardware-accelerated PhysX, which isn’t supported on the Radeons.


The Radeon HD 7790 does a much better job of keeping frame times steady than the other Radeons in this game. In fact, the other Radeons don’t look like they’re benefiting from the Borderlands 2 latency optimizations AMD first rolled out in the Catalyst 13.2 beta. Perhaps the beta driver AMD sent us with the 7790 doesn’t include those optimizations for other cards, somehow, or maybe AMD’s optimizations somehow don’t apply to the 7850 and 7770. We’ve asked AMD to clarify and are awaiting a response.


In any event, the 7790 looks to be about neck-and-neck with the pricier GeForce GTX 650 Ti AMP! Edition here. Not a bad showing at all.

Sleeping Dogs

I haven’t had a chance to get very far into Sleeping Dogs myself, but TR’s Geoff Gasior did, and he got hooked. From the small glimpse I’ve received of the game’s open-world environment and martial-arts-style combat, I think I can see why.

The game’s version of Hong Kong seems to be its most demanding area from a performance standpoint, so that’s what I benchmarked. I took Wei Shen on a motorcycle joyride through the city, trying my best to remember I was supposed to ride on the left side of the street.

I benchmarked Sleeping Dogs at 1920×1080 using a tweaked version of the “High” quality preset, with vsync disabled and SSAO bumped down to “Normal.” The high-resolution texture pack was installed, too.


Again, we have a nice, smooth plot for the Radeon HD 7790, and spiky plots for the other Radeons. Hmm. Whatever AMD’s doing, the 7790 performs very well, displaying even fewer spikes than the GTX 650 Ti 2GB AMP! and the GTX 560.


Yep. The 7790 hits a home run here.

The Elder Scrolls V: Skyrim

Here, too, I borrowed Scott’s test run, which involves a walk through the moor not far from the town of Whiterun—and perilously close to a camp of Giants.

The game was run at 1920×1080 using the “Ultra” detail preset. The high-resolution texture pack was installed, as well.


The 7770 and 7850 1GB fare poorly here, too, even though AMD addressed frame latency spikes in Skyrim in recent Catalyst beta drivers. By contrast, the 7790 appears to perform better; its plot has fewer, smaller frame time spikes than its fellow Radeons’ plots. Odd.


Although it has a higher FPS average, the 7790 generally trails the GTX 650 Ti AMP! in Skyrim. It fares worse in the 99th percentile frame time, and it spends more time beyond our 50- and 33-ms thresholds.

Battlefield 3

I tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

I kept things simple, using the game’s “High” detail preset at 1080p.


The 7790 shadows both the 7850 1GB and GTX 650 Ti 2GB AMP! Edition in these frame-by-frame plots.

Our average FPS and percentile results confirm our initial observation. The 7790, 7850 1GB, and GTX 650 Ti 2GB AMP! are all neck-and-neck.


The 7790 pulls ahead ever so slightly in the “time spent beyond 33.3 ms” graph, but not by much. These three top contenders are about equally playable in Battlefield 3.

Power consumption

The Radeon HD 7790 actually draws a touch more power than the 7850 1GB at idle. However, it’s substantially more power-efficient under load, where it doesn’t consume much more than our reference-clocked Radeon HD 7770.

Noise levels and GPU temperatures

Sapphire’s dual-fan cooler keeps the 7790 both quiet and very cool.

A note about our noise levels: I live on the eighth floor of a tall building, and it was unusually windy both times I tried to take noise readings for this review. I attempted to alleviate the problem by taking the lowest reading from a one-minute recording for each card at each setting, so occasional wind gusts shouldn’t have impacted the numbers substantially. These cards really are all very quiet—except for the 7850, whose cooler whines a little more than the others under load. (I also heard a faint mechanical chirping from the 7850 and the GTX 650 that wasn’t present on the 7790.)

Conclusions

Let’s wrap things up with a couple of our trademark value scatter plots. In both plots, the performance numbers are geometric means of data points from all games tested. (They exclude the synthetic tests at the beginning of the article.) The first plot shows 99th-percentile frame times converted into FPS for easier reading; the second plot shows simple FPS averages.
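For the curious, here's a rough sketch of how overall scores like these can be assembled: a geometric mean across the games tested, with 99th-percentile frame times converted to FPS for easier reading. The per-game numbers in the snippet are placeholders, not our actual results.

```python
# Sketch of the overall score behind a value scatter plot: geometric mean of
# per-game results, with 99th-percentile frame times converted to FPS.
# The per-game figures below are placeholders, not measured data.
import math

def geomean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical 99th-percentile frame times (ms) for one card, one per game.
pct99_ms = {"Tomb Raider": 31.0, "Crysis 3": 48.0, "Borderlands 2": 26.0,
            "Sleeping Dogs": 33.0, "Skyrim": 22.0, "Battlefield 3": 29.0}

overall_fps = geomean([1000.0 / ms for ms in pct99_ms.values()])
print(round(overall_fps, 1), "FPS (99th percentile, converted)")
```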

Prices for the GTX 650 Ti, 7850 1GB, and 7770 were taken from the Newegg listings for the cards we tested. The GTX 560’s price was taken from a Newegg listing for a comparable offering that’s still available, while the 7790’s price was taken from Sapphire.

The best deals should reside near the top left of each plot, where performance is high and pricing is low. Conversely, the least desirable offerings should be near the bottom right.


Well, well. Despite being thrown into the ring with a more expensive GeForce GTX 650 Ti card with twice as much memory, the Radeon HD 7790 more than holds its own overall. In fact, it’s quicker on average according to our 99th-percentile plot, which we think offers the best summation of real-world performance. The 7790 is negligibly slower in the average FPS plot—but it’s still a better deal considering the lower price.

Based on these numbers, I’d expect the 7790 to perform even better compared to a lower-clocked, like-priced version of the GeForce GTX 650 Ti. We’ll have to run the numbers to be sure, but this is hardly an outlandish extrapolation to make.

The 7790 also manages to outdo the 7850 1GB overall, proving its worth as a successor to that product. Sure, as the tests on the previous pages show, the 7790 doesn’t always outmatch its predecessor. Nevertheless, the fact that it does so overall should certainly be of some comfort to those saddened by the 7850 1GB’s departure.

Add to that the Radeon HD 7790’s power efficiency, its low noise levels, and the free copy of BioShock Infinite in the box, and it looks like we have a winning recipe from AMD.

Of course, Nvidia isn’t sitting still, and the firm may not be willing to take this new onslaught lying down. We’ve been hearing rumors that Nvidia will soon unleash a new card that could land in this exact same price range. Things may be about to get even more interesting.

Comments closed
    • Cyril
    • 8 years ago

    Correction: The tables in the first two pages of this review originally quoted peak theoretical shader throughput of 1.8 teraflops for the GeForce GTX 650 Ti and 2.0 teraflops for the GeForce GTX 650 Ti AMP! Edition. The correct figures are 1.4 and 1.6 teraflops, respectively. The article has been updated to reflect this.

    • ub3r
    • 8 years ago

    Has anyone tried this for mining bitcoin??

      • Farting Bob
      • 8 years ago

      It’s only just been released a few days ago, so i expect not really. Anyway, bitcoin mining needs bigger GPU’s than this to be worthwhile. The 7950 is still the best bang for your buck to get if you fancy bitcoining all day every day.

    • alienstorexxx
    • 8 years ago

    why don’t just stick with stock versions to be more accurate on final and partial results.
    amp! brand are high clocked versions, and 1 vs 1 on performance can be misunderstood.
    on the other hand, on price-performance the zotac is also highly priced so, on a general balance, i think it’s not directly comparable nor gives the precise information.

      • ronch
      • 8 years ago

      I don’t like tests that use hot-clocked versions either. Tests should give the consumer the [i<]minimum[/i<] level of performance they can expect. If the product that ends up in their hands performs 10% better, then it's better than seeing a hot-clocked version's results and being dismayed when you see slower clocks being sold at retail... or worse... in your hands.

    • WaltC
    • 8 years ago

    Good review! I’ll probably buy this card next to replace my aging 5770–which has actually been such a decent performer @1900×1200 that I have retained the card far longer than I’d planned…! One somewhat minor bone to pick: I think it’s nigh insane to run 8xFSAA in Skyrim with a budget card like the 7790. The Skyrim auto-configuration routine is botched, I’ve always thought, at least for my card. 8xFSAA + 1920×1200 is what the game auto-configures me for–problem is, the game is pretty much unplayable for me with 8xFSAA. During action sequences the frame rate drops like a rock into the single digits. Ugh. I prefer FXAA or no AA (I vacillate back and forth with the game) as @ 1920×1200 I have no problems with Skyrim and ~30 some-odd mods, many of them texture-related mods. I was curious as to why you benched here with 8xfsaa…of course 1920×1080 is not as tough on the gpu as 1920×1200, but there isn’t a major amount of difference there. Also, did AMD specify 1920×1080 p as “the” resolution to be used here? I’d love to see 2560×1600 with a pair of 7790’s in Crossfire…

      • JustAnEngineer
      • 8 years ago

      AMD’s 2X anti-aliasing is still a very noticeable improvement over no AA, so I try to enable that if I can.

    • adam1378
    • 8 years ago

    So is the card that is comparable to the new PS4 SoC gpu?

      • tipoo
      • 8 years ago

      I thought that card was like a 7850? They’re saying 1.84Tflops for the PS4 GPU, and the 7850 is rated at 1.76, closer than this 7790 would be. Perhaps this would be closer to the slightly weaker rumored Durango GPU at 1.4.

      • sschaem
      • 8 years ago

      Close on the compute side, but with half the bandwidth and 8 time less the video memory.
      So the PS4 is closer to 7850 then a 7790.

      But remember. try to run Battlefield 3 on a nvidia 7900GT equipped PC, compare to a PS3.
      Even so the GPU are comparable the PS3 smokes a comparable PC.

      This will be even more evident on the Ps4.

      So dont buy a 7850 if you plan to play next gen games on your PC at console quality.

        • tipoo
        • 8 years ago

        Agreed about consoles making more efficient use of hardware than PCs, but saying it has 8 times less the video memory is a bit unfair as the PS4 uses that 8GB for both system and video. If a game has very little system memory footprint, sure it could use most of that for the framebuffer, but more realistically it would have maybe 4-6x more. Some (likely 512mb) will also be reserved for the OS.

    • Bensam123
    • 8 years ago

    Are these the last stable drivers from December? 12.101.2.1000

    That number doesn’t look familiar… Googling it doesn’t bring up anything either. Why wouldn’t you guys be using the new 13.3b3 drivers with all the newest latency updates and fixes in them?

      • Cyril
      • 8 years ago

      Because we’re trying to make AMD look bad, of course.

      Or, you know, it could be that AMD sent us those drivers for the 7790 review and specifically asked us to test with them. 🙂

        • l33t-g4m3r
        • 8 years ago

        I approve this post.

        But seriously, you guys have been over-exaggerating the importance of frametimes *, and aren’t properly acknowledging the existence of external variables like drivers and game bugs **, which can contribute greatly to frametime results.

        *[quote<]folks in this thread saying "frame rates are just as important" are simply wrong[/quote<] **[quote<]folks who say high frame latencies are always simply the result of driver issues are incorrect[/quote<] Buggy games like Rage and Tomb Raider aren't credible benchmarks until the bugs are worked out. It's quite obvious Nvidia would have done poorly in this test if you were using older drivers and an unpatched game. These variables shouldn't be ignored. I'm not saying frametimes are illegitimate, but they need to be kept in context with frame rates and other variables. For example, if Card A with a 256 bit bus got 50 fps more than Card B (128 bit) in Crysis but had poorer frametimes, Card A would be the more powerful card, but it would be experiencing some unknown issue that could be fixed down the road. I don't believe in endorsing Card B in that scenario, just because it is temporarily more efficient. Frametimes need to be looked at with greater perspective, as it is only a piece of the puzzle, not the entire picture itself. The GPU isn't the only variable, it's one variable among many. Frame times are useful statistics, but they needs to be kept in context with everything else to actually make any sense.

        • chuckula
        • 8 years ago

        Cyril… you pro-Nvidia schill!! How dare you use the software and methods that AMD’s own engineers recommend to get the best results! We all know that AMD’s employees are a bunch of Nvidia fanboys and that you can’t trust a word they say! You have a duty to scour the forums for unscientific advice about driver hacks and magical registry keys that deliver 100% placebo performance increases!

        • Bensam123
        • 8 years ago

        Not entirely sure why you got rated up for this and me down. How are we supposed to know that’s what those drivers are when you don’t mention it?

          • superjawes
          • 8 years ago

          There, I cleared your downvote….

          • Cyril
          • 8 years ago

          Page 7, paragraph 3.

          [quote<]The Radeon HD 7790 does a much better job of keeping frame times steady than the other Radeons in this game. [b<]In fact, the other Radeons don't look like they're benefiting from the Borderlands 2 latency optimizations AMD first rolled out in the Catalyst 13.2 beta. Perhaps the beta driver AMD sent us with the 7790 doesn't include those optimizations for other cards[/b<], somehow, or maybe AMD's optimizations somehow don't apply to the 7850 and 7770. We've asked AMD to clarify and are awaiting a response.[/quote<]

            • Bensam123
            • 8 years ago

            Ah, sorry I looked for it on the methodology page after reading the review and didn’t see any sort of note there.

            Did you guys consider testing the card with the 13.3b3 drivers for the sake of curiosity?

          • flip-mode
          • 8 years ago

          Way too much paranoia, bro, though sometimes it pays to be paranoid.

            • MadManOriginal
            • 8 years ago

            Just because you’re paranoid doesn’t mean the drivers aren’t bad.

            • Bensam123
            • 8 years ago

            Not sure how this is paranoia, it wasn’t listed on the methodology page in any shape or form and the numbering itself doesn’t correlate with AMDs normal driver numbering scheme. That’s why my first sentence has a ? after it.

            I’m not entirely sure people read posts before trying to jump on a bandwagon. This doesn’t have anything to do with AMD fanboism either (as some people are making it out to be), it’s simply asking about the drivers the card was tested with.

            Sschaem even went on to ask a similar question I did off of the same topic matter.

            • flip-mode
            • 8 years ago

            I /always/ jump first before looking. It’s more of a thrill that way.

        • sschaem
        • 8 years ago

        So NOW you are doing what AMD tells you to do ? 🙂

        Joke aside… I would be curious to know why AMD ask you to stay away from the 13. drivers…
        Are they broken with the 7790 ? Slower ?

        Weird that AMD release new drivers that they dont want you to test on their newest cards.

    • ronch
    • 8 years ago

    I was quite dismayed when I heard that AMD was sticking with its GCN architecture for 2013, but after seeing this review, I guess it doesn’t really matter as long as they can give the consumer a solid card for the money. Honestly, I don’t give a crap whether it’s VLIW5, VLIW4, GCN, etc. as long as performance/watt/dollar is good. As for general compute, well, coming from someone who’s running one of these GCN things, I can’t say I can’t live without the GPGPU capability because I don’t use it either.

      • jihadjoe
      • 8 years ago

      The end justifies the means!

    • Deanjo
    • 8 years ago

    How dare you don’t have a Geforce Titan in the comparison! How else am I supposed to get some reference to the speed of this thing? 😛

      • HisDivineOrder
      • 8 years ago

      The long shadow of the Titan will soon fall upon all cards. The Day of the Titan approaches.

        • dpaus
        • 8 years ago

        And the night of the iceberg follows….

      • MadManOriginal
      • 8 years ago

      Here, I’ll summarize for you: the Titan is faster.

    • jensend
    • 8 years ago

    My speculative take: Though AMD’s PR spin is that they [url=http://www.anandtech.com/show/6837/amd-radeon-7790-review-feat-sapphire-the-first-desktop-sea-islands<]"intend to keep the HD 7000 brand in retail this year [i<]due to the success of the brand[/i<]"[/url<] I think AMD would normally have released this year's chips as 8xxx. Until last month the rumors were that this chip would be released as the 8770, and the performance bears those rumors out; I'd bet that was the plan until the stats from the holiday shopping season started rolling in. Due to bad PR the 7xxx series has sold considerably fewer units than their price/performance competitiveness would merit. (You can get a sense of this from comparing the 7xxx vs the GF 6xx on the Steam survey.) nV has pulled a PR coup this generation; somehow most people are convinced that nV's current-gen cards are markedly superior. When people are confronted with evidence to the contrary- AMD & nV are in a dead heat- they're [url=https://techreport.com/discussion/22890/nvidia-geforce-gtx-690-graphics-card?post=635854<]shocked[/url<] and [url=https://techreport.com/discussion/24381/nvidia-geforce-gtx-titan-reviewed?post=710545<]shocked again[/url<]. Then they forget what they saw and go buy nV anyways. With a pile of unsold current product and immediate cash balance concerns, AMD is very worried about [url=http://en.wikipedia.org/wiki/Osborne_effect<]Osborning[/url<] themselves. So they've been playing down the expectations about their Sea Islands chips and playing up the successes of [s<]Southern Islands[/s<] 7xxx. This shift in emphasis and the late change in branding plans are responsible for the [url=https://techreport.com/review/24368/fate-of-amd-sea-islands-obscured-in-the-fog<]mixed messages etc.[/url<]

      • brute
      • 8 years ago

      Nvidea always sells more. that just how it is! it is gamers card! GEFORCE vs RADEON

      GEFORCE just sound more gamerly

        • HisDivineOrder
        • 8 years ago

        You discount the very real possibility that years of screwing around on the driver front (not recently, back in the day) might have an effect on brand acceptance later in a product’s life.

        What is it everyone says about Radeons to this day? Their drivers suck. What did ATI do for years and years? Have drivers that suck.

        I’m sorry, but AMD inherited the mess that ATI made years ago before Fusion was even a glimmer in AMD’s eye. And then in the pre-7xxx world, when AMD was trying to get GCN drivers up to snuff before it launched or was even announced, AMD dropped the ball big time on the 6xxx series drivers. Witness the Rage Incident and shudder.

        This reawakened the old, lingering, festering memories of long-distant horrors. That’s why Radeon suffers. Because Radeons were synonymous with bad drivers for years. And ATI was synonymous with that for even longer (ie., Rage 128, Rage MAXX, Rage anything).

          • orangecat
          • 8 years ago

          I’m a user of AMD Radeon 6770 and an Asus N43SL-VX211D laptop with an i7-2670QM and a Geforce GT-540M with optimus, and I like the way AMD Driver updated than the GeForce that never updated. Every update make it runs smoother and better, with the original driver from the manufacturer the full screen AA make the text in games unreadable but the latest update make it better and still very readable, and more fps in every games. The update also brings more options and tools in the catalyst configuration screen that make it easier to use.

            • A_Pickle
            • 8 years ago

            Agreed.

            There’s an extraordinary amount of whining about AMD’s drivers that I do not understand. I’ve got a Radeon HD 4850 in one of my machines that’s going great, chews through any game we throw at it, and a Radeon HD 6850 in my main machine that is also going great, and chews through anything I throw at it.

            Remember how downright terrible Nvidia drivers were after the Vista launch, though? Remember how nobody bitched then? Yeah, those were pretty cool days.

            • HisDivineOrder
            • 8 years ago

            Trace back to the very first Radeons and you’ll find a long, storied history of driver horror stories that make the Vista launch look like a cakewalk. Not that it was any more acceptable.

            It’s just if you had owned the first Radeon, the one without a number, or even just the first time they used the 8xxx numbering, you’d know what a PITA those cards were due to their horrible drivers, horrible support for drivers afterward, and why to this day a lot of users remain intensely focused on drivers as the single most important aspect to a card’s value.

            Or if you had owned a Rage MAXX and tried to switch from the Windows 95/98/ME OS’s to the NT-based kernel OS’s of Windows 2k or XP. ATI’s promises have consequences and their reputation was trashed at the time for a reason.

            If it helps you understand, I also don’t forgive OCZ for all the memory crap they did way back in the day and when they began repeating similar crap with their SSD’s, I was not surprised.

            Forget the past and watch it come back. AMD improved the driver reliability of Radeons mostly, except for 2011 where I think AMD lost the plot trying to focus all their driver teams on GCN drivers they had to build from scratch.

            That’s probably why–by their own admission–they screwed up the memory management on the GCN cards so badly, too.

            • Krogoth
            • 8 years ago

            Those who cling on the “ATI has bad drivers” meme are living in the past. This isn’t 1997-1999 anymore. Even back in those days, Nvidia had their own bout of stupid, stupid issues.

            • MadManOriginal
            • 8 years ago

            You don’t have to go back to 1997-1999 to find bad things about AMD drivers.

            NV isn’t perfect either. Drivers had problems when Vista came out (but so did drivers for practically everyone…that’s not an excuse but it makes it harder to point to just one company.) i can think of one other [i<]really[/i<] bad driver problem and that was overheating in...Starcraft 2 I think? Any other stuff was temporary and very short-lived. Short-lived is the key, because driver problems will happen but when they get patched up quick it;s easier to overlook them. AMD has had some really stupid and basic driver problems, like the mis-sized mouse pointer and that took many months to fix.

            • A_Pickle
            • 8 years ago

            I remember when Vista came out. AMD had it’s ducks lined up for Radeon drivers [i<]much[/i<] more quickly than Nvidia did. Remember that pie chart, showing that nearly 30% of Vista BSOD's were caused by Nvidia drivers? Yeah, [i<]everyone[/i<] had to rewrite their graphics drivers with Vista, it's just that some companies did better than others. Nvidia, especially, considering their size, market share, and revenue had very little excuse. I remember forum posts, from here, where people would post screenshots of their Nvidia cards missing polygons and textures [i<]entirely[/i<]. I guess that's the way it's meant to be played, though, because I keep hearing about "AMD driver issues" yet haven't experienced them firsthand at all. Not saying they're not there, but I am saying that they are probably overstated, because it's hip to bash AMD for not being as good as Intel and Nvidia in every respect (even though they make half as much money as the both of those companies).

            • A_Pickle
            • 8 years ago

            I suppose you could call me an AMD fanboy, because I’ve typically used ATI/AMD cards in my rigs. From my first ATI graphics card, the Radeon 9600 XT, to a Radeon X800 XL, to a Radeon Mobility X1900, to a Radeon HD 4850, to now a Radeon HD 6850…

            …it’s not that I dislike Nvidia, it’s that in that span of Radeon ownership, I’ve never had to do anything more than “install graphics card” and “install drivers” to have [i<]years[/i<] of successful gamership with the card. Of the cards I listed above, only the X800 XL that I bought no longer works (even my old 9600 XT, whose cooler is fastened to the GPU with tightly-wound wire, still works). I just run the game, and it works. I crank up some settings, and they work. You're reaching back into the 1990's to find reasons not to use Radeons in the 2010's.

      • cynan
      • 8 years ago

      The whole GeForce is >>>> Radeon this generation has left me scratching my head. AMD just can’t win, even when it has a winning (or at least competitive product). It seems that all the news about their financial woes and stigma of poor CPU performance relative to Intel (which is largely warranted) has definitely infected the Radeon brand. Then you keep hearing the same old “AMD drivers are terrible” being parroted every place you turn. Sure, there has definitely been evidence that Nvidia’s drivers have been more reliable and better optimized in some cases compared with Kepler, but nowhere near to the degree that everyone seems to think. And the fact remains, that even if AMD has subpar drivers, they are still able to keep up, by and large, with the Keplers. So I’m not sure how this logic plays out. If anything, this would mean that current Radeons have more room for improvement in performance than their competing Kepler cards… I wonder how much difference it would have made if AMD had more competative drivers for GCN by the time Kepler showed up. Probably not a whole lot.

      In addition to their unsold stock, it could also just be that Sea Islands was delayed due to recent restructuring and financial issues. Plus, it would be good if AMD can get that new GCN memory management code updated and working properly before this impedes the next line of Radeons (If the whole GCN memory management rewrite thing wasn’t just a cover up for a few poorly optimized driver releases that AMD got caught with their pants down on a few occassions (as far as frame latency in certain games, anyway) .

        • HisDivineOrder
        • 8 years ago

        Radeons have a HORRIBLE reputation for likelihood to have coil whine. Check the user reviews for any 7970 or 7950 and you’ll find lots of people saying things like, “I got coil whine, but it’s ‘not that bad.’ Really. It’s not. It’s… barely noticeable. I mean, sure if the sound is off… or it’s low or it’s well not blaring, I notice it, but it’s not that bad. Really. …It’s not. I swear. I just… I might be ebaying this card soon.”

        That right there was enough to compel me to accept the compromise of a 670 over a Radeon 7970GHZ that has less memory. I don’t need MORE noise from devices that shouldn’t be making noise in that way from my Silence is Golden PC gaming machine. Sorry.

        What’s really telling though is that Geforce user reviews seem to mention coil whine very rarely and because they run cooler than Radeons on the average and use less power, they use the same third party coolers more quietly than the Radeon cards, too, while gaming. At idle, the Radeon turns its fans off and Geforce has the fans going so low they’re essentially lower than the noise level of the system.

        So you can say “superior hardware and failing to get people to notice” all you like, but for my purposes, nVidia made the right compromises to get me almost all the performance and a lot less of the hassle of a Radeon.

        And just think, I didn’t even have to bring up frame latency and that whole clusterf*** because that right there is another solid paragraph of reasons to avoid AMD right now…

          • cynan
          • 8 years ago

          [quote<]And just think, I didn't even have to bring up frame latency and that whole clusterf*** because that right there is another solid paragraph of reasons to avoid AMD right now...[/quote<] If you're referring to multi-gpu setups, then yes, AMD needs to get its act together if it wants to continue marketing crossfire or products like the 7990... Then again, Nvidia isn't perfect in the muli-gpu/microstuttering aspect either. But admittedly, probably better than AMD currently. But most people don't go in for multi-GPU. I suppose it's possible, though, that many enthusiasts in the market for HD 7900 cards might at least entertain the prospects of going multi gpu at a later date (even if most don't). This is made all the more tragic as the extra memory and bandwidth of the HD 7900 cards should translate to a better experience in situations where multi GPU is needed. For single cards, however, the whole latency thing was blown out of proportion - and like I said, AMD got caught with their pants down with a couple of poorly optimized driver releases for certain games. As far as coil whine goes, it apparently [url=https://www.google.com/search?q=HD+7870+and+coil+whine&aq=f&oq=HD+7870+and+coil+whine&aqs=chrome.0.57j62.8151&sourceid=chrome&ie=UTF-8#hl=en&sclient=psy-ab&q=coil+whine+and+gtx+6**&oq=coil+whine+and+gtx+6**&gs_l=serp.4...6182.10519.0.10787.6.5.1.0.0.0.118.566.0j5.5.0...0.0...1c.1.7.psy-ab.1WYRTrqnVSU&pbx=1&bav=on.2,or.r_qf.&fp=6c5ef208963cd8b2&biw=1656&bih=712<]happens on higher end Keplers too[/url<]. But I have no idea whether this is more common on Radeons. For all I know, it could be. I get a bit of coil whine with my radeon, but it's generally no louder than the next loudest component (probably the water pump for me) AND only occurs when the clocks ramp up at load during gaming - when PC users generally don't care about having a supper quiet PC b/c, well, most games have sound. I suppose if you game without headphones or at super low volumes, it might be and issue, or for computing applications, but in my experience, it's simply hasn't been. That said, I agree that AMD should be able to engineer their cards to not do this at all. But I'm curious as to what proportion of people it actually irritates in real use. As we've seen with the driver and latency issues, these things tend to get blown out of proportion. Edit: Here's the [url=https://www.google.com/search?q=HD+7870+and+coil+whine&aq=f&oq=HD+7870+and+coil+whine&aqs=chrome.0.57j62.8151&sourceid=chrome&ie=UTF-8#hl=en&sclient=psy-ab&q=coil+whine+and+hd+7***&oq=coil+whine+and+hd+7***&gs_l=serp.3...167243.173768.0.174726.10.10.0.0.0.0.116.1059.0j10.10.0...0.0...1c.1.7.psy-ab.DtshIoA4mkc&pbx=1&bav=on.2,or.r_qf.&fp=6c5ef208963cd8b2&biw=1656&bih=712<]comparative Google search result for Radeons[/url<]. Only a third the number of hits as for the Keplers. Hmmm ;-P But yeah, I don't know if you can go by this.

            • Krogoth
            • 8 years ago

            Micro-shuttering is an inherent problem with AFR. No matter of software and hardware tricky can overcome it, it only can mitigate it. The other multi-GPU/card rendering strategies have their own set of issues (artifacting, memory efficiency, requiring enough more software optimizations etc.)

          • Krogoth
          • 8 years ago

          Nice red herring.

          The “whining” issue has absolutely nothing to due with Nvidia/ATI. It is an issue with vendors who use bargain-basement inductors on the power circuitry for their cards. This is a common problem for el-cheapo PSUs. The only reason you even hear it is because modern performance GPUs are power-hogs when loaded and aggressive firmware fan profile avoid making the cooler fan rev fast that sounds like a airplane unless the GPU temperature is near its safety limit (90-110C).

        • Bensam123
        • 8 years ago

        It’s been like that for a few generations, and the AMD CPUs aren’t faring any better in terms of publicity, even though they offer very competitive performance for the price.

        Barring *nix, AMD drivers haven’t been bad since they started the Catalyst initiative over a decade ago.

    • wingless
    • 8 years ago

    I just tried to read a few other reviews on different sites and realized this:

    I CAN ONLY TRUST Tech Report to tell the real story (frame time!) on a new GPU! Tech Report is the best GPU reviewer on the planet right now.

    PS: Thanks for ruining my ability to time-waste on the internets at work. I used to be able to kill at least 2.5 hours reading multiple reviews to get through my day. I simply cannot go to any other website for a GPU review now.

    • Shinare
    • 8 years ago

    I’m looking for a modern upgrade for my old 8800GTX. Is this it? I’ve tried looking for old 8800GTX reviews to find a table that has textures and shaders and vertex per whatever and am not finding anything.

    Is there a good and cheap (around $150) upgrade to the 8800GTX yet?

      • Ryhadar
      • 8 years ago

      This would blow your 8800GTX away.

      The 7790 isn’t far off from the 6870 in performance, which isn’t far off from a 5850 which is at least twice as fast as your 8800GTX: [url<]http://www.hwcompare.com/1121/geforce-8800-gtx-vs-radeon-hd-5850/[/url<]

        • Shinare
        • 8 years ago

        Seriously? This card looks like a midget compared to my 8800GTX, heh. Probably sucks less power as well. Nice.

          • CampinCarl
          • 8 years ago

          Yes, it will blow your 8800GTX away. Look at it this way:

          [url<]https://techreport.com/review/14990/amd-radeon-hd-4870-graphics-processor/9[/url<]

          That's the review for the 512MB HD 4870. The 4870 delivers solidly better performance than your 8800GTX. We can use Anandtech's Bench tool to compare the 1GiB 4870 (which is usually better than the 512MB version) and a 560 Ti, which we see in this article:

          [url<]http://anandtech.com/bench/Product/304?vs=330[/url<]

          The 560 Ti is 'twice' as good as a 1GiB 4870, which is already quite a bit better than your 8800GTX. The 7790 shows up the 560 Ti pretty much everywhere. Therefore, the 7790 would crush your 8800GTX, especially in games like BF3.

          • CaptTomato
          • 8 years ago

          Definitely, and don’t go higher than the 7790 on an old system, as there could well be a bottleneck somewhere.
          Even a 7770 should come close to producing playable FPS in Crysis 1 at 1080p, maxed, no AA.

            • willmore
            • 8 years ago

            I don’t know who thumbed you down, but I can agree with what you’re saying. I first put my 7850 in my old Q6600/3.2GHz system, and I had a large number of laggy stretches in games when there was a lot of action. Once I swapped the motherboard out for one with an i5-3570K/4.4GHz, all that lag went away. CPU matters, too.

            • CaptTomato
            • 8 years ago

            Yep, it’s basically component matching, as you’re only as fast as your slowest piece of hardware.
            When you tell the truth, you can be both popular and unpopular, especially here, as IT/HW geeks are the worst type of fanboys, LOLOL.

      • willmore
      • 8 years ago

      Heck, a sub $100 HD7770 would do that. I moved from a GF9800GTX+ to an HD7850/2GB and the improvement has been huge. I want to say that benchmarks generally came out at 3x the performance.

      It’s made a huge improvement in my gaming. I now get 60 fps consistently with all the eye candy on. Before, I often got frame rate drops at busy times even with most of the eye candy off.

      • Shinare
      • 8 years ago

      Not that it matters much, or even that anyone will ever see this days after the review, but I went with the XFX 7850 2GB that Newegg had for $159, which included BioShock Infinite and Tomb Raider. Such a deal!

    • willmore
    • 8 years ago

    [quote<]MSI GeForce GTX 560 Twin Frozr II GeForce 12.101.2.1000 beta[/quote<] How did you manage that?

      • Cyril
      • 8 years ago

      Whoops. Typo. Fixed, thanks.

        • willmore
        • 8 years ago

        No worries! Thanks for the novel!

        Edited to say: I was originally going to say something snarky like “Well, that would explain the performance problems!”, but it’s Friday and I thought I’d start the weekend early.

    • RtFusion
    • 8 years ago

    That is a smooth PCB right there . . . if you know what I mean . . .

    [url<]http://images.wikia.com/fallout/images/e/e6/If_you_know_what_I_mean..png[/url<]

    • anotherengineer
    • 8 years ago

    Can we expect a stubby version of this card, like Zotac’s?

      • Chrispy_
      • 8 years ago

      The smaller XFX one looks stubby like the Zotac.
      I’m assuming the smaller one is the GHz Edition and the larger one is the factory overclocked model.

      [i<]edit - [url<]http://images.anandtech.com/galleries/2692/_DSC0935_575px.png[/url<][/i<]

    • chuckula
    • 8 years ago

    It’s a freakin’ shame: the hardware on this card looks really really nice. Something tells me that AMD’s Linux driver support for the card won’t be as nice. Sigh…

    Edit: It’s really ironic how, as a Linux user, I see AMD fanboys seemingly gleeful that their products don’t work well with Linux, as if it shows how superior they are. I use either Intel or Nvidia graphics solutions on Linux not because I think they make the best hardware out there (Intel obviously doesn’t, and Nvidia is no shoo-in either). Instead, I use them because they take the time to make quality drivers for Linux.

    So basically, I *want* to be able to use AMD graphics parts, since I really do think they are high quality and I’m by no means a huge fan of Nvidia. However, if AMD won’t give a high level of support to its own hardware, it’s AMD telegraphing a very clear message that I’m not valuable to them as a customer and that they would prefer I spend my money elsewhere. I wish there were a different message coming from AMD that could change my mind.

      • Goty
      • 8 years ago

      Funny, with the Catalyst 13.3 Beta 2 driver installed on my laptop, it’s the [i<]Intel[/i<] driver that keeps crashing.

        • chuckula
        • 8 years ago

        1. What Intel driver do you have when you are using Catalyst?
        2. If you are trying to use an Intel IGP + Catalyst for a discrete GPU, then you just said that the Catalyst driver isn’t behaving and is causing the (completely open-source) Intel driver to crash… that’s not a big endorsement.

      • Deanjo
      • 8 years ago

      I can only assume you are being downvoted by people that have only used AMD cards in Linux.

        • chuckula
        • 8 years ago

        s/only/never/2 in your original post and I agree with you….

          • Deanjo
          • 8 years ago

          I’ll give them the benefit of the doubt although I fully realize that fanboys will down vote too.

        • Bensam123
        • 8 years ago

        Why only used AMD cards in *nix? Couldn’t they have experience using AMD cards in *nix and other OSes?

      • WaltC
      • 8 years ago

      What matters most in gaming–like any other computing endeavor–is the software available. The hardware matters less, since you can use the same hardware to run multiple OSes these days. But of all the OSes, like it or not, Windows is the best place to be if 3D gaming is your cup of tea. More game developers support Windows than any other OS by far, and I suppose the reason is that they can make money in the Windows market and don’t make much of it elsewhere, in, say, Linux or OS X releases. (In a lot of the Linux community, “it should be free!” is a mantra often heard about software. That does not engender participation by corporate entities who must pay salaries and light bills and otherwise *make money* selling their software and hardware.)

      The next layer of negativity for Linux gaming is its fragmentation. As a developer, which distribution of Linux do you release for?

      So if gaming is your thing, or one of your main things, then quite simply it is *you* who is in the wrong place, not AMD or the game developers. If you like gaming and want to buy lots of games and get the most out of your hardware then you can get all of that by booting Windows. Everything you want exists, it is just that your desire to see all of this in Linux gaming is like the square-peg-in-round-hole saga–it’s not ever going to fit.

        • chuckula
        • 8 years ago

        I game under Linux and I’ve had pretty good success with Nvidia drivers, but gaming is not my #1 use for a GPU. I won’t pretend to say that Linux is as gaming friendly as Windows, but people who claim that Linux can’t play games are also wrong, and Valve’s latest moves are just putting an official stamp on what many people have known for a long time: If game vendors put in a modicum of effort to making good code, Linux can run games just fine too.

        I don’t play games exclusively or even predominantly with my PC. Outside of games, I want a GPU that is reliable and supports 2D/3D graphics in desktop compositing, video playback, Blender, the occasional GPU compute task, etc. etc. Nvidia works very well with performance that is about on-par with Windows, full CUDA support (heck, any real CUDA deployment is going to be 100% Linux based anyway), and excellent stability. That’s not to say that Nvidia never has a driver bug, but they tend to be quick to jump on support for new X.org servers and to correct issues when they arise.

        On the AMD side we see support pages like this: [url<]https://wiki.archlinux.org/index.php/AMD_Catalyst[/url<]

        Another comparison: Nvidia launches Titan, and by the time the delivery man has dropped one off for Deanjo, there are already official Linux drivers ready to drive the card, and he's posting his own benchmarks within about 10 days of the official announcement. When the 7000 series first launched, it was a wait of several months before you could get one to even partially work under Linux. It's just a matter of caring about your customers.

        I'd like to see a lot of improvement to AMD's open-source driver, and I actually am using an old AMD card in a server box, since the open-source drivers are plenty fine for giving me a kernel-mode terminal screen when I need one. Unfortunately, the open-source drivers are really not very good with recent hardware, and by the time they mature, the hardware is obsolete. I like open-source legacy GPU support, but if AMD wants me to buy a new card, they had better support it right out of the box.

          • smilingcrow
          • 8 years ago

          “(heck, any real CUDA deployment is going to be 100% Linux based anyway)”

          Try telling that to Adobe CS users. Things are better in the latest version for OpenCL but if you want the maximum you need Windows + nVidia.

    • Risme
    • 8 years ago

    Not bad, a significant improvement over the Cape Verde and Pitcairn silicon. Personally, though, I’m more interested in computation nowadays, so I would suggest Tech Report look at [url<]http://fahbench.com/[/url<] as a possible addition to their benchmarking suite. As it says on the website, FAHBench is the official Folding@Home GPU benchmark; it measures the compute performance of GPUs for Folding@Home.

      • dpaus
      • 8 years ago

      I second this…

      • flip-mode
      • 8 years ago

      Thirded. Here’s a GPU renderer: [url<]http://www.indigorenderer.com/[/url<]

      And here's a real-time renderer / design environment that I haven't tested yet: [url<]http://www.twinmotion.com/twinmotion2/overview.html[/url<]

    • Ryhadar
    • 8 years ago

    Cyril should have started the review like this:

    Now this is the story, all about how
    My life got flipped, turned upside-down
    Now I’d like to take a minute, just sit right there
    I’ll tell you how I reviewed this graphics card, a new GPU called Bonaire!

      • superjawes
      • 8 years ago

      This week’s winrar of one internetz goes to you!

      Well done, sir.

      • ClickClick5
      • 8 years ago

      I miss that show.

      • derFunkenstein
      • 8 years ago

      That hi-top fade really suits you. +1

      • danny e.
      • 8 years ago

      nice

      • LukeCWM
      • 8 years ago

      This must obviously be a reference, but I don’t know to what. Can someone please explain? I’m being completely sincere.

        • superjawes
        • 8 years ago

        [url=http://youtube.com/watch?v=hBe0VCso0qs<]Here you go.[/url<]

          • Ryhadar
          • 8 years ago

          The version I always saw on the show was shorter. I liked this extended version though.

            • superjawes
            • 8 years ago

            They only showed the full intro (and song) on the first episode. After that they used the abridged version.

            I know there’s a full version of the opening credits, but I had a heck of a time just getting that link from my phone since YT is blocked by the admin here.

            EDIT: and of course now that I’m home and can see it, it obviously was the episode 1 intro and not just the song…

    • dpaus
    • 8 years ago

    From p. 2:
    [quote<]can drive up to six displays (provided you use a DisplayPort hub)[/quote<]

    Will someone tell me where you can buy a DisplayPort hub?? Anyone? Anyone? Bueller?

      • JustAnEngineer
      • 8 years ago

      Staples, CDW, Wal-mart…

        • dpaus
        • 8 years ago

        Seriously?? Because even a Google search doesn’t turn up any. And to be clear, I mean a device that will take one DisplayPort input and drive two or more DisplayPort monitors, preferably at 2560×1600 (not a DisplayPort to multiple DVI; yes, there’s lots of those…)

        EDIT: OK, a little more-creative Googling found [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16815106021<]this Matrox adaptor[/url<], which [i<]will[/i<] drive two DisplayPort monitors, but only at 1920x1200. Sigh.... And it looks to have been discontinued anyway.

        EDIT2: Found [url=http://www.atlona.com/ATLONA-1x4-MINI-DISPLAYPORT-SPLITTER<]this one from Atlona[/url<] too, but it's not clear if it can actually drive all the displays - or even two of them - at 2560x1600 (assuming the host GPU can produce the output, of course). Oh, and it's marked "Limited Quantity", suggesting that it's about to be discontinued too.

        So, I take it back; there [i<]are[/i<] DisplayPort hubs out there, but they seem to be like mushrooms, appearing overnight and then disappearing just as quickly. They're still a bit away from being a 'concurrent production' device, at any rate.

    • CaptTomato
    • 8 years ago

    Good starter card, easy power requirements and effectively a replacement for 6870.
    But, any serious gamer must start at 7870 2gig IMO.

      • brute
      • 8 years ago

      serious gamer? what does that even mean? do you have to have all the proper, ridiculously named, obnoxiously bright LED’d, and grotesquely overpriced accessories?

        • CaptTomato
        • 8 years ago

        LOL, I have a tricolour 120mm fan in the front of my case.
        I’ve got a 7950 Vapour-X, and without any changes to volts, it overclocks to 1050/1400 {the memory might do 1475}.

        But what I really wanted was a Titan 4gig for $700aud{7950 was $332}, but Titan is $1250 back here.

      • derFunkenstein
      • 8 years ago

      I dunno, man. My GTX 460 is getting on towards being even with or slower than this card, and I have played plenty of serious solitaire.

        • CaptTomato
        • 8 years ago

        Shouldn’t the aim of a PC gamer be good image quality and FPS on at least a 1080p display?
        Otherwise may as well go console or laptop.

          • jensend
          • 8 years ago

          The tests in this review were done at very-high-quality 1080p and except for the 7770 all the cards were plenty fast. (Knocking the IQ settings down one notch would bring the 7770 up to speed and would probably make little perceptible difference.) So that doesn’t really fit with your “any serious gamer” requirement.

          • derFunkenstein
          • 8 years ago

          I have both. Why does it have to be one or the other? Hard to play MLB The Show on a PC.

            • CaptTomato
            • 8 years ago

            I don’t own or respect this gen of weak assed consoles.

            • derFunkenstein
            • 8 years ago

            I don’t think they care. I bought in 2008, so 5 years on I’m pretty happy with my investment, even though at around September of last year (around 4.5 years) my 40GB PS3 bit it. I had no problem replacing it and it’ll continue to work even when the PS4 is out.

      • Farting Bob
      • 8 years ago

      I have a reference 7850; it’s a shame to hear that I am now not worthy of serious gaming. I guess I’ll just stop playing all my games maxed out at 1080p, because I’m clearly just a casual gamer and should only be playing FarmVille.

        • CaptTomato
        • 8 years ago

        I just stated an opinion based on general GPU performance and my own experience with my 7950.

        • My Johnson
        • 8 years ago

        Well, you can’t play Crysis 3 at max settings. And in this review they stuck to medium settings.

    • USAFTW
    • 8 years ago

    Why is there so much wasted overhang on that PCB? They could’ve easily made it about 2 inches shorter… weird.

      • Arclight
      • 8 years ago

      I think PCB size nowadays is related more to the size of the cooler it needs to support than to the amount of circuitry it needs to house. It’s probably down to marketing as well; I mean, if you pay that amount of money for a video card, you’d expect it to have a certain size/weight - at least, a layman would.

        • USAFTW
        • 8 years ago

        Not always true. GTX 660s have reference coolers longer than the PCB, and no one seems to care.
        But I get the argument that longer or bigger things are seen as better than small ones, especially in the computer industry. If I remember correctly, GTX 460s sold very well partly because of the shape of the GPU. The world has gone mad…
        [url<]http://www.techpowerup.com/gpudb/273/NVIDIA_GeForce_GTX_560_Ti.html[/url<]

    • albundy
    • 8 years ago

    Good review on the card. The question is, are you willing to shell out around the same amount of cash for a card that performs well below the 7850? I mean, even an MSI GeForce 650 Ti is going for a Benjamin these days on Newegg. So my question is, what were they thinking, pricing this so high?

    • Arclight
    • 8 years ago

    Things might get interesting, but it's not in the segment of the market [s<]we are[/s<] I am most interested in; actually, it's just a bit under it. If the GTX 660 Ti and the HD 7870 GHz Edition got new refreshes, possibly bringing a little price war, that would have interested me. Alas, that won't happen, and new cards are 9 months away, and even further out for mid-range cards. It will be a slow year for sure.

      • dpaus
      • 8 years ago

      With this silicon out and AMD saying there’ll be additional refreshes, why do you think the 7870 won’t be replaced with a spec-bumped version of this?

        • raghu78
        • 8 years ago

        I expect a Pitcairn refresh with 1792 stream processors at 1.1 GHz and a 256-bit memory interface at 6 GHz (192 GB/s; see the quick arithmetic below), with a TDP of around 170W.

        [url<]http://videocardz.com/39041/meet-aruba-curacao-hainan-and-bonaire-the-codenames-radeon-hd-8000-series[/url<]
        [url<]http://videocardz.com/34981/amd-radeon-hd-8870-and-hd-8850-specifiation-leaked[/url<]

        Now that would be an impressive chip at a price of USD 280. If it can match HD 7970 (925 MHz) performance at that price, AMD would hit the sweet spot of price, performance, and perf/watt. That would be the ideal 1080p gaming card.
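        For what it's worth, the 192 GB/s figure above follows from simple arithmetic on the speculated specs (the 256-bit width and 6 GT/s data rate are the commenter's guesses, not confirmed numbers). A minimal sketch in Python:

          # Back-of-the-envelope check of the quoted bandwidth (speculative specs, not confirmed)
          bus_width_bits = 256        # guessed memory interface width
          data_rate_gtps = 6          # guessed effective GDDR5 rate per pin, in GT/s
          bandwidth_gbs = bus_width_bits / 8 * data_rate_gtps   # bytes per transfer times transfer rate
          print(bandwidth_gbs)        # prints 192.0, i.e. 192 GB/s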

          • dpaus
          • 8 years ago

          Interesting. If it appears, it’ll make one hell of a companion to an 8-core APU.

          Just sayin’….

          EDIT: You know, that got me thinking: why wouldn’t Sony (or Microsoft, for that matter) offer an ‘expansion module’ for their consoles that consists of a plug-in GPU card/cartridge like the above? The way game code is developed these days, support for it should be almost transparent to the code, and would yield a huge performance boost.

            • cynan
            • 8 years ago

            A few reasons:

            Consoles are designed (theoretically) to have an optimal balance of system hardware. A GPU that offered significantly higher performance than the HD 7850-ish GPU in the PS4 would probably be CPU-limited (particularly by Jaguar).

            The initial cost of the console would be higher because, currently, components are soldered directly to the motherboard, which is cheaper than building durable slots and release mechanisms that would be reliable enough for people with no PC-building experience to use.

            Games are currently pre-optimized for a single hardware spec. This helps consoles squeeze more performance out of their relatively meager hardware spec. Having multiple GPUs would require developers to do more work, as they would essentially have to come up with an optimized game for each GPU config.

            Consoles are generally marketed toward people who don’t care much about hardware. If it works, it works. The ability to turn the visual quality up a notch or two probably wouldn’t appeal to most console buyers enough to warrant the relatively high price (the upgrade would probably have to cost at least half as much as the console itself, if not more, to be a significant step up).

            That said, as a tech enthusiast, I think the idea of having GPU upgrades for consoles is appealing. It’s just not practical.

            • dpaus
            • 8 years ago

            All good points – thank you!

          • l33t-g4m3r
          • 8 years ago

          The “Mars Pro” cards are already out under the 7870 (Tahiti LE) name.

          [url<]http://wccftech.com/amd-radeon-hd-8000-series-sea-islands-gpu-specifications-leaked/[/url<]
          [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814131484[/url<]

          Yeah, the 1792 XT looks like the card to wait for. The 7790 seems to have some minor tweaks, and hopefully those transfer over to the other cards too. Another point worth noting is that the GCN architecture is quite competitive with Kepler given the recent driver improvements, and is still better in compute. Nvidia apparently can't optimize its drivers enough to fix that.

      • flip-mode
      • 8 years ago

      I don’t think you should say what “we” are interested in. I don’t think you and I – “we” – are necessarily interested in the same things. “I” am not interested in spending $200+ on a video card.

        • Arclight
        • 8 years ago

        Duly noted. Have a marvellous day.

          • flip-mode
          • 8 years ago

          Yeah, it sounded a lot more prickish than intended. Still, there’s a market for the card and it’s also good for AMD in that it should be a lot cheaper to produce than an HD 7850 1GB. Hopefully this card will quickly drop in price and also get a 2GB variant.

      • Phartindust
      • 8 years ago

      Oh I don’t know, seems like there’s room for a 7830, and AMD said it does have more on the way for the 7000 series this year. Perhaps that will include a 7830, 7890, 7930, and 7990?

        • flip-mode
        • 8 years ago

        I don’t think there’s enough room between the 7850 and the 7790 to squeeze in a 7830, but who knows…

    • sschaem
    • 8 years ago

    Weird review… I read a lot of excuses about why a $180+ 2GB 650 Ti model was used, among other rushed choices.
    Yet TR made a huge stand about delaying its review of the AMD APU because they couldn’t do the testing fully…

    Makes no sense. It’s okay to rush a review one time, and delay another?

    So this review takes a 650 Ti as a reference card that costs more than a 2GB 7850? Because of the rush?

    And I do my own scaling, but I wish there were more overclocking results…

      • Cyril
      • 8 years ago

      There’s a huge difference between a company dictating what can and can’t be tested in a review, and an independent reviewer deciding, entirely of his own volition, to make a small compromise in order to get a review out on time. I’m surprised that you think the two are somehow equivalent.

        • sweatshopking
        • 8 years ago

        Why are you surprised? Haven’t you seen any of his posts before?

    • Stickmansam
    • 8 years ago

    Hmm, interesting results with the 7790 beating the 7850 in the 99th-percentile metric. The only changes in GCN 2.0 were minor power tweaks AFAIK, so is this due to drivers or what?
    And where is the increased shader and tessellation performance coming from?

    I am thinking the 7790 is a bit too strong for what it is. They should have released the original 7770 with 768 shaders and avoided any cannibalization of sales (the 7770 and 7850 may see lower sales, though the 2GB + 256-bit setup of the 7850 will still be a selling point).

    I am a bit confused about the 99th percentile. Does it mean that if a card has better frame times for most of the benchmark but does quite a bit worse in a few specific parts, the results can be skewed?

    Also, is it possible to have the final frame-time chart done in the line format as well? It seems a bit easier to wrap my head around.

    FYI, I loved your novella, though I didn’t like the open ending.

      • cynan
      • 8 years ago

      [quote<]I am a bit confused about the 99th percentile. Does it mean that if a card has better frame times for most of the benchmark but does quite a bit worse in a few specific parts, the results can be skewed?[/quote<]

      You take all the time intervals between frames rendered over the test run and rank them (order them from shortest to longest). The interval length dividing the longest 1% of frame intervals from the rest of the frame intervals (which are shorter) is the 99th-percentile number.

      IMHO, this metric doesn't necessarily tell you much on its own, but it is nice to see in combination with the other frame-time data. The reason it may not be too informative in and of itself is that if the test run had a particularly bad hiccup spanning more than 1% of the frames rendered, you would get a horrible 99th-percentile frame time, even if the rest of the run was silky smooth with fast frames. We've seen game/GPU/driver combinations that seem to induce spikes of reduced performance that are out of whack with the rest of the run.

      On the other hand, you could have a high-ish (though not as high as in the spike scenario above) 99th-percentile frame time but still fairly smooth performance if, say, that 1% of slower frames were evenly spaced, one at a time, over the test run.

      That said, in general, a lower 99th-percentile frame time should correlate with smoother performance, and together with the other frame-time metrics, it provides valuable information.
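      To make that concrete, here is a minimal sketch (mine, not TR's actual tooling) of how a 99th-percentile frame time could be computed from a log of per-frame render times. The sample values and the percentile_frame_time helper are made up for illustration:

        # Sketch: 99th-percentile frame time from a list of per-frame intervals (ms).
        # The sample values are invented; real data would come from a frame-time log.
        frame_times_ms = [16.7, 16.5, 17.1, 16.9, 45.0, 16.6, 16.8, 17.0, 16.4, 16.7]

        def percentile_frame_time(frame_times, pct=99):
            """Return the frame time that pct% of frames come in at or under."""
            ordered = sorted(frame_times)              # shortest to longest
            cutoff = int(len(ordered) * pct / 100)     # index separating the slowest (100-pct)%
            return ordered[min(cutoff, len(ordered) - 1)]

        print(percentile_frame_time(frame_times_ms))   # here: 45.0, dominated by the one hiccup

      With only ten samples, the single 45 ms spike sets the 99th-percentile value, which is exactly the "one bad hiccup can dominate the number" effect described above.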

        • Stickmansam
        • 8 years ago

        Ha ha, guess there's no excuse for me to just skim to the conclusion then 😛

        So I guess the 99th percentile is a good measure as long as the hiccup is not too drastic or long.

      • jihadjoe
      • 8 years ago

      7790 is GCN 1.1.

    • tbone8ty
    • 8 years ago

    Glad Crysis 3 is now included.

    The 7790 does much better in the frame-latency department.

    • DrCR
    • 8 years ago

    At first I considered upgrading my 8800GT after reading this article, but I have not yet come across a game I actually want to play to justify it. Maybe the upcoming Splinter Cell will prove the catalyst. Maybe someone will second the motion.

      • Mr. Eco
      • 8 years ago

      I recommend DayZ.

        • Arclight
        • 8 years ago

        Wasn’t the mod CPU-bound? Idk about the standalone version, though.

    • MadManOriginal
    • 8 years ago

    The first ‘Sea Islands’ card looks pretty nice. I expect that when the GTX 650 Ti ‘mk 2’ launches soon, it will create some good competition in this price range. And it’s still pretty amazing how much performance you can get for $150 these days; even if it’s not *great* in the latest titles, cards in this price range are great values.

      • Krogoth
      • 8 years ago

      They can all handle two-megapixel gaming with current titles, provided you can live without AA/AF. You can’t complain at the $149 price point, and the cards themselves are relatively “small” and don’t sound like an airplane when fully loaded.

        • MadManOriginal
        • 8 years ago

        Whoa. Color Krogoth not unimpressed.

          • derFunkenstein
          • 8 years ago

          What color would that be? Gray?

            • chuckula
            • 8 years ago

            Raw umber?
            Burnt Sienna?
            Cadet Blue?
            Sepia?

            • derFunkenstein
            • 8 years ago

            Sepia it is. Well-done.
