Review: Nvidia’s GeForce GTX 650 Ti graphics card

Nvidia has been filling out its Kepler lineup these past few months. The first members of the GeForce 600 series were all high-end graphics cards with formidable price tags, but recently, we’ve seen the company dabble in both the very low end and the sweet spot around $200.

Screaming-fast flagship cards are interesting, of course. But not everybody can afford them. Not everybody plays the kinds of games at the kinds of resolutions that truly justify a $300 or $400 GPU, either. For many enthusiasts, getting solid performance at a reasonable price is more important than making friends green with envy.

Today’s launch addresses the last great gap in Nvidia’s 600-series lineup. The GeForce GTX 650 Ti brings us full-fledged Kepler goodness at prices ranging from $149 to $180 or so, bridging the gap between the GeForce GTX 650 and GeForce GTX 660. Nvidia tells us this is the last card it plans to introduce this year. We’re not surprised, since the company now has pretty much all its bases covered.

We’re going to be testing one of the fastest variants of the GTX 650 Ti today: a Zotac card with 2GB of memory and clock speeds substantially above reference. This bad boy will have to spar with the latest sub-$200 offerings from AMD, and we’ll be comparing it to Nvidia’s old GeForce GTX 560, for good measure. The results should be interesting, to say the least.

Introducing the GeForce GTX 650 Ti

Before we look at the amped-up Zotac card, we should probably introduce the vanilla GeForce GTX 650 Ti. Here’s a picture of the reference card in all its bland, black-clad glory:

Source: Nvidia.

Based on the name and the card’s stubby circuit board (which measures just 5.65″), one might think this is merely a higher-clocked version of the GeForce GTX 650. Not so fast, folks! Nvidia’s naming scheme has gotten rather confusing with this generation, so let’s clarify.

The vanilla GeForce GTX 650 is based on the same GK107 graphics processor as the $90 GeForce GT 640. The GK107 is a pretty hobbled chip that’s substantially smaller and cheaper to produce than the rest of the Kepler family. Nvidia’s new GeForce GTX 650 Ti, on the other hand, features a larger GPU: the GK106, which you can also find inside the $230 GeForce GTX 660. Incidentally, those are the only two products to feature that chip. Nvidia’s more upscale GeForce GTX 660 Ti graphics card is based on a different piece of silicon, the GK104, which is even larger and powers all other high-end Kepler offerings up to the GeForce GTX 690.

Confused? Not too much, I hope. Here’s a quick overview of how the GK104, GK106, and GK107 stack up:

       ROP      Texels          Shader  Rasterized  Memory     Estimated    Die    Fab
       pixels/  filtered/clock  ALUs    triangles/  interface  transistors  size   process
       clock    (int/fp16)              clock       width      (millions)   (mm²)
                                                    (bits)
GK104  32       128/128         1536    4           256        3,500        294    28 nm
GK106  24       80/80           960     3           192        2,540        214    28 nm
GK107  16       32/32           384     1           128        1,300        118    28 nm

The GK106 is very much the middle child of the Kepler family. It’s more fleshed-out than its smaller sibling, but it lacks some of the trappings that the eldest enjoys—like more ALUs, more texture units, a wider path to memory… and, we expect, not having to wear hand-me-downs.

Functional block diagram of the GK106 chip. Source: Nvidia.

Where the GeForce GTX 660 uses the full GK106, the new GeForce GTX 650 Ti uses a slightly scaled-back version of the same chip. Nvidia has disabled one of the five SMX engines, leaving 768 ALUs and 64 texels/clock of texture filtering capability. One ROP cluster and one memory controller were also excised, so the card can churn out only 16 pixels per clock, and its path to memory is just 128 bits wide.
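As a sanity check, those cut-down figures follow directly from Kepler's per-SMX resources (192 ALUs and 16 texture units per SMX, per Nvidia's published specs). A quick sketch:

```python
# Back-of-the-envelope check on the GTX 650 Ti's cut-down GK106 figures.
# Per-SMX resources for Kepler (192 ALUs, 16 texture units) come from
# Nvidia's published specs.
ALUS_PER_SMX = 192
TEX_UNITS_PER_SMX = 16

full_gk106_smx = 5                 # the full chip, as used in the GTX 660
gtx650ti_smx = full_gk106_smx - 1  # one SMX disabled

print(gtx650ti_smx * ALUS_PER_SMX)       # 768 ALUs
print(gtx650ti_smx * TEX_UNITS_PER_SMX)  # 64 texels/clock of filtering
```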

Interestingly, Nvidia has two ways of retrofitting a GK106 chip for the GTX 650 Ti. It can disable one of the SMX engines from the two full-sized GPCs, or it can disable the third GPC altogether. Since that third GPC is half-sized and contains only one SMX engine, the end result is pretty much the same. Nvidia tells us there are no performance discrepancies stemming from the two different approaches.

Obviously, having this flexibility means Nvidia can repurpose GK106 chips that didn’t make the cut for the GeForce GTX 660. Flawed chips can be adapted, so long as only one of their SMX engines, ROP clusters, and/or memory controllers is faulty. The same goes for chips that are fully functional but can’t quite hit high enough clock speeds. As you can see below, the GTX 650 Ti has lower base and memory speeds than the GTX 660, and it also lacks GPU Boost, so the card doesn’t venture beyond the base clock regardless of the available thermal headroom. (SLI multi-GPU capabilities aren’t on the menu, either.)

            Base   Boost  Peak      Texture    Peak    Raster.    Memory    Memory     Price
            clock  clock  ROP rate  filtering  shader  rate       transfer  bandwidth
            (MHz)  (MHz)  (Gpix/s)  int8/fp16  tflops  (Gtris/s)  rate      (GB/s)
                                    (Gtex/s)
GTX 650     1058   N/A    8         34/34      0.8     1.1        5.0 GT/s  80         $109.99
GTX 650 Ti  925    N/A    15        59/59      1.4     2.1        5.4 GT/s  86         $149.99
GTX 660     980    1033   25        83/83      2.0     3.1        6.0 GT/s  144        $229.99
GTX 660 Ti  915    980    24        110/110    2.6     3.9        6.0 GT/s  144        $299.99

In short, we invite you to think of this latest arrival as a cut-down GTX 660, because that’s essentially what it is.
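For the curious, the peak rates in the table above are simple products of unit counts and clock speed. Here's a rough sketch for the reference GTX 650 Ti (the two-flops-per-ALU factor assumes fused multiply-add, as is standard for this kind of spec math):

```python
# Peak rates as products of unit counts and clock speed. Reference GTX
# 650 Ti: 16 ROPs, 64 texture units, 768 ALUs at 925MHz; 5.4 GT/s memory
# on a 128-bit bus.
clock_ghz = 0.925

rop_rate = 16 * clock_ghz            # ~14.8 Gpix/s (rounded to 15 in the table)
tex_rate = 64 * clock_ghz            # ~59.2 Gtex/s
tflops = 768 * 2 * clock_ghz / 1000  # ~1.4 tflops (2 flops per ALU per clock, FMA)
bandwidth = 5.4 * 128 / 8            # ~86.4 GB/s of memory bandwidth
```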

The GeForce GTX 650 Ti is going to be available in several flavors. Offerings based on Nvidia’s reference design are supposed to be priced at $149 with one gigabyte of GDDR5 memory. Reference cards have a 110W power envelope, a single six-pin PCIe power connector, and a not-quite-single-slot design with a slightly protruding cooler. (See the image above.) Nvidia’s partners are also rolling out variants with 2GB frame buffers. Those will cost a little more, and they may have an edge over their 1GB counterparts when handling higher resolutions, larger textures, and higher levels of antialiasing. However, Nvidia points out there probably won’t be much of a difference between 1GB and 2GB variants at the GTX 650 Ti’s target gaming resolution of 1920×1080.

Of course, there will be cards with higher-than-reference clock speeds and larger frame buffers—like the one we’re going to be benchmarking today.

One last thing. Some of the GTX 650 Ti cards you’ll see out there will come with a free license key for Ubisoft’s Assassin’s Creed III. The game isn’t coming out until Halloween, but when it does, it’s no doubt going to carry the same $59.99 price tag as any self-respecting triple-A title from a big publisher. Getting it for free with a $149 card sounds like a pretty sweet deal. Not all of Nvidia’s partners are participating, however, so you’ll want to double-check before making your purchase.

The star of our show

Our guinea pig for today is Zotac’s GeForce GTX 650 Ti 2GB AMP! Edition—the fastest GTX 650 Ti variant the company offers, and quite possibly the high-water mark for GTX 650 Ti cards everywhere. It features twice as much memory as the reference design, and it pushes the GPU and the memory to a blistering 1033MHz and 6200 MT/s, respectively, quite a ways up from the reference 925MHz and 5400 MT/s. On top of that, Zotac has slapped on a meatier, dual-slot cooler and beefed up the display output configuration:

This card trades the reference offering’s Mini HDMI port for two full-sized HDMI connectors. Nvidia says all of its Kepler GPUs support up to four displays in tandem, but only custom versions of the GTX 650 Ti like this one have enough outputs to take advantage of that capability.

As you might expect, all these extras come at a price. Zotac charges a whopping $179.99 for the GTX 650 Ti 2GB AMP! Edition. That represents a $25 premium over the company’s vanilla 1GB card, and it’s also $10 above the price of Zotac’s reference-clocked 2GB card. It ain’t cheap, but then again, hot-clocked cards with extra memory rarely are.

The competition

Because of its price premium, the Zotac 2GB AMP! card is up against some pretty serious competition. AMD’s Radeon HD 7850 has come down in price since the GeForce GTX 660’s arrival last month, and 1GB versions can be had for well under $200. The XFX Core Edition variant (pictured below and featured in our testing) sells for $179.99 at Newegg right now, and that’s before a $20 mail-in rebate. It has a longer circuit board than the GTX 650 Ti, at 7.8″, but it still requires just one PCI Express power connector.

The 7850 1GB is at somewhat of a disadvantage because of its 1GB frame buffer. However, this puppy has a wider, 256-bit path to memory, which gives it roughly 55% more bandwidth than the Zotac card. That’s not the only place where the two cards’ priorities diverge, either. Take a look:

                                      Base   Boost  Peak      Texture    Peak    Memory    Memory     Price
                                      clock  clock  ROP rate  filtering  shader  transfer  bandwidth
                                      (MHz)  (MHz)  (Gpix/s)  int8/fp16  tflops  rate      (GB/s)
                                                              (Gtex/s)
MSI GeForce GTX 560 Twin Frozr II     870    N/A    32        49/49      1.2     4.1 GT/s  131        $169
Zotac GeForce GTX 650 Ti 2GB AMP!     1033   N/A    16        66/66      1.6     6.2 GT/s  99         $179
XFX Radeon HD 7770 Black Edition      1120   N/A    18        45/22      1.3     5.2 GT/s  83         $154
XFX Radeon HD 7850 1GB Core Edition   860    N/A    28        55/28      1.8     4.8 GT/s  154        $179
XFX Radeon HD 7850 2GB Black Edition  975    N/A    31        62/31      2.0     5.0 GT/s  160        $229

The 7850 1GB can talk to its memory faster and churn out more pixels than the GTX 650 Ti 2GB AMP!, and its shaders are a little faster, but its texture throughput is weaker. This is going to be a hard race to call without plenty of testing.

Interestingly, the same can be said about the GeForce GTX 560. Although that card is nearly 18 months old now, it’s still available in roughly the same price range as our two current-gen contenders. The hot-clocked MSI GTX 560 variant we selected for our testing has specifications not dissimilar from those of the 7850 1GB.

Let’s get on to the numbers—as soon as we’ve outlined our testing methods, that is.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median results. Our test systems were configured like so:

Processor        Intel Core i7-2600K
Motherboard      Asus P8Z77-V LE Plus
Chipset          Intel Z77 Express
Memory size      4GB (2 DIMMs)
Memory type      Kingston HyperX KHX2133C9AD3X2K2/4GX DDR3 SDRAM at 1333MHz
Memory timings   9-9-9-24 1T
Chipset drivers  INF update 9.3.0.1019, Rapid Storage Technology 11.1.0.1006
Audio            Integrated Realtek audio with 6.0.1.6657 drivers
Hard drive       Crucial m4 256GB
Power supply     Corsair HX750W 750W
OS               Windows 7 Ultimate x64 Edition, Service Pack 1

 

                                      Driver revision      GPU base     Memory       Memory
                                                           clock (MHz)  clock (MHz)  size
MSI GeForce GTX 560 Twin Frozr II     GeForce 306.38 beta  870          1020         1GB
Zotac GeForce GTX 650 Ti AMP!         GeForce 306.38 beta  1033         1550         2GB
XFX Radeon HD 7770 Black Edition     Catalyst 12.9 beta   1120         1300         1GB
XFX Radeon HD 7850 1GB Core Edition   Catalyst 12.9 beta   860          1200         1GB
XFX Radeon HD 7850 2GB Black Edition  Catalyst 12.9 beta   975          1250         2GB

Thanks to Asus, Corsair, Crucial, Kingston, and Intel for helping to outfit our test rigs with some of the finest hardware available. Our thanks to AMD, Nvidia, and the makers of the various graphics cards we used for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.

  • We measured total system power consumption at the wall socket using a P3 Kill A Watt digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its High quality preset.

  • We measured noise levels on our test system, sitting on an open test bench, using a TES-52 digital sound level meter. The meter was held approximately 8″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Battlefield 3

We tested Battlefield 3 by playing through the start of the Kaffarov mission, right after the player lands. Our 90-second runs involved walking through the woods and getting into a firefight with a group of hostiles, who fired and lobbed grenades at us.

We tested at 1920×1080 using the game’s “High” detail preset, which offered the best compromise between image quality and smoothness on the GTX 650 Ti.

Frame time (ms)   FPS rate
8.3               120
16.7              60
20                50
25                40
33.3              30
50                20

We should preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.

To get a sense of how frame times correspond to FPS rates, check the table on the right.
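The conversion behind that table is just a reciprocal; a trivial sketch:

```python
# Frame time and frame rate are reciprocals: fps = 1000 / frame_time_ms.
def to_fps(frame_time_ms):
    """Convert a per-frame render time in milliseconds to an FPS rate."""
    return 1000.0 / frame_time_ms

for ms in (8.3, 16.7, 20, 25, 33.3, 50):
    print(f"{ms:>5} ms -> {to_fps(ms):.0f} FPS")
```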

We’re going to start by charting frame times over the totality of a representative run for each system. (That run is usually the middle one out of the five we ran for each card.) These plots should give us an at-a-glance impression of overall playability, warts and all. You can click the buttons below the graph to compare our protagonist to its different competitors.


The GTX 650 Ti AMP! is off to a somewhat rocky start in Battlefield 3. While it suffers from fewer latency spikes than the GeForce GTX 560, it still doesn’t achieve terribly great consistency. Our two Radeon HD 7850 variants both produced thinner plots, with fewer high-latency frame times and thus greater perceived smoothness in-game. Only the Radeon HD 7770 Black Edition seems to be markedly worse than the GTX 650 Ti AMP!.

We can slice and dice our raw frame-time data in other ways to show different facets of the performance picture. Let’s start with something we’re all familiar with: average frames per second. While this metric doesn’t account for irregularities in frame latencies, it does give us some sense of typical performance. We can also demarcate the threshold below which 99% of frames are rendered, which offers a sense of overall frame latency, excluding fringe cases. (The lower the threshold, the more fluid the game.)
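To make those two metrics concrete, here is a minimal sketch of how they can be computed from a list of per-frame render times. The sample data is invented for illustration, and this mirrors the general approach rather than our exact tooling:

```python
# Minimal sketch: average FPS and a nearest-rank 99th-percentile frame
# time from a list of per-frame render times in milliseconds.
# The sample data is made up for illustration.
frame_times_ms = [16.7] * 95 + [30.0] * 4 + [45.0]  # mostly smooth, a few spikes

# Average FPS: total frames divided by total elapsed time in seconds.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

# 99th percentile (nearest rank): the time 99% of frames come in under.
rank = int(len(frame_times_ms) * 0.99)
threshold_99 = sorted(frame_times_ms)[rank - 1]

# The spikes barely dent the average, but the percentile exposes them.
print(round(avg_fps, 1), threshold_99)
```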

Looking at average FPS alone, you might think the GTX 650 Ti AMP! and the Radeon HD 7850 1GB are about neck and neck. As the 99th-percentile results demonstrate, however, that’s not quite the case. 99% of the 7850 1GB’s frames are rendered in less than 22.8 ms, which works out to a threshold of 44 FPS—very reasonable, in other words. On the Zotac card, the threshold is 31.5 ms, equivalent to only 32 FPS.

Now, the 99th-percentile result only captures a single point along the latency curve. We can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph, and especially the last 5% or so, is where you’ll want to look. That section tends to be where the best and worst solutions diverge.

Graphing percentile data makes it pretty obvious that the two 7850 variants maintain the lowest and most consistent latencies of the bunch. The GTX 560 starts to spike around the 95% mark, and the GTX 650 Ti AMP! keeps steady until about 97%, but the 7850s manage consistently low frame times until they get right up to the 99% mark.

Finally, we can rank solutions based on how long they spent working on frames that took longer than a certain number of milliseconds to render. Simply put, this metric is a measure of “badness.” It tells us about the scope of delays in frame delivery during the test scenario. Here, you can click the buttons below the graph to switch between different millisecond thresholds.
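One straightforward reading of this metric, sketched below, sums the time each frame runs past the threshold. The function name and sample run are ours, for illustration only:

```python
# One reading of the "badness" metric: total time accumulated past a
# per-frame threshold over the whole run.
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Sum the portion of each frame time that exceeds the threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

sample_run = [16.0, 18.0, 40.0, 17.0, 60.0]  # made-up frame times in ms
# (40 - 33.3) + (60 - 33.3) = 33.4 ms spent beyond the 33.3 ms threshold
print(round(time_spent_beyond(sample_run, 33.3), 1))
```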


The GTX 650 Ti AMP! doesn’t spend much time beyond 50 ms, which means the latency spikes from which it suffers aren’t too dramatic. It does, however, spend more time above 33 ms than we’d like, especially compared to the two 7850 cards.

DiRT Showdown

Codemasters’ DiRT: Showdown encourages you to drive other cars off the road, which is a lot more fun than typical racing games. Just as importantly, it has gorgeous graphics that can stress even high-end GPUs. We tested this game on the Miami track. We pushed our way ahead of the pack and spent the second half of our 90-second run maintaining our lead.

We tested at 1920×1080 with 4X multisampled antialiasing using a customized version of the “Ultra” preset, which had advanced lighting effects disabled. The performance cost of those effects was too high for our taste on these cards.


The GTX 650 Ti AMP! pretty much mirrors the old GTX 560 in our latency plot. Unfortunately, that means it again suffers from more long frame times than the Radeons—even the 7770 Black Edition.

Our average FPS chart suggests parity between the GTX 650 Ti and the 7770 Black, but again, our percentile data highlights the latency problems of the Nvidia cards.


The same goes for our measure of “badness.” The 7850s barely spend any time above 16.7 ms, but the other cards aren’t so lucky.

Max Payne 3

The latest chapter in the Max Payne series wasn’t made by the same developers as the first two games, but it follows the formula to a tee. Max still takes on hordes of heavily armed baddies with akimbo pistols and copious amounts of slow-motion dodges. There’s still plenty of cheesy, film noir-style narration, too.

We tested Max Payne 3 at the end of the second chapter, where Max guns down masked kidnappers on the roof of a São Paulo skyscraper. We used the same sequence of bullet-time dodges and surgical headshots each time in order to minimize variability between runs.

Testing was conducted at 1920×1080. Detail settings were maxed out everywhere except for multisampled antialiasing and tessellation. As Scott has pointed out in the past, enabling MSAA disables FXAA in Max Payne 3, and FXAA produces better results with fewer jagged edges here.


Yet again, the GTX 650 Ti AMP! roughly mirrors the GTX 560—except it doesn’t display the same see-saw pattern between short and long frame times toward the beginning and end of the run.

Compared to the Radeons, the GTX 650 Ti AMP! card fares much better this time. It’s quicker than the Radeon HD 7770, and while its frame latencies are clearly a little higher than those of the 7850 1GB and 7850 2GB Black Edition, the difference is small, and frame-time consistency is excellent.

That 99th-percentile frame time of 18.5 ms works out to 54 FPS, which is pretty close to the 60 Hz refresh rate of most LCD monitors. Sure, the 7850 cards are even faster, but exceeding the display’s refresh rate doesn’t normally yield palpable benefits.

Our percentile graph shows the GTX 650 Ti AMP! avoids longer frame times in the last 5% of frames, where latencies for the GTX 560 begin to rise. The GTX 650 Ti card’s frame times are higher than the Radeons’, but they’re still quite low throughout.


As for our measure of badness, it shows none of the cards exhibit substantial latency spikes. Our first-hand impressions corroborate this. Max Payne 3 feels silky smooth overall, even on the Radeon HD 7770 Black Edition.

Sleeping Dogs

I haven’t had a chance to get very far into Sleeping Dogs myself, but TR’s Geoff Gasior did, and he got hooked. From the small glimpse I’ve received of the game’s open-world environment and martial-arts-style combat, I think I can see why.

The game’s version of Hong Kong seems to be its most demanding area from a performance standpoint, so that’s what we benchmarked. We took Wei Shen on a motorcycle joyride through the city, trying our best to remember we were supposed to ride on the left side of the street.

We benchmarked Sleeping Dogs at 1920×1080 using a tweaked version of the “High” quality preset, where we disabled vsync and knocked SSAO down to “Normal.” We had the high-resolution texture pack installed, too.


None of our cards display consistently low frame times here, probably because they must constantly stream data from the huge open-world map. The patterns of inconsistency differ, though. The Radeons exhibit more of a continuous see-saw pattern, while the GeForces offer generally more consistent frame times punctuated by taller spikes at random intervals.

In any case, the GTX 650 Ti AMP! seems to fare no worse than the 7850s here, and it’s clearly ahead of the 7770 Black Edition.

The 99th-percentile figures suggest that, with the exception of the old GeForce GTX 560, there isn’t a huge difference in smoothness between the various contenders here. We’ll have to look at fringe cases to see if there’s a clear winner.

As we noticed in the latency plot, the GeForce GTX 650 Ti AMP! keeps frame times consistent throughout a greater percentage of the run, but it suffers from higher spikes than the Radeons. Graphing percentiles confirms this fact…


…and so does tallying up the amount of time spent above 50 ms. 168 milliseconds may not amount to much out of a 90-second run, but it’s clear the Radeons do a better job of avoiding huge spikes. The 7850 1GB stays ahead of the GTX 650 Ti AMP! even when we lower the threshold to 33.3 ms and 16.7 ms, too.

By the way, note that larger frame buffers don’t seem to have much of an impact here, even though we’re running through a huge open world covered with high-resolution textures. The Radeon HD 7850 1GB shadows the 7850 2GB Black Edition despite the latter’s higher clock speeds and extra memory, and the GTX 650 Ti AMP! doesn’t seem to gain an edge over the 7850 1GB. Is a gig of memory really all you need at 1080p?

The Elder Scrolls V: Skyrim

Our Skyrim test involved running around the town of Whiterun, starting from the city gates, all the way up to Dragonsreach, and then trotting back down again.

The game was run at 1920×1080 using the “Ultra” detail preset. The high-resolution texture pack was installed, as well.

We noticed something strange during our testing. Two of the Radeons, the 7770 Black Edition and 7850 1GB, suffered from hitching and general sluggishness during the first couple of test runs, but they performed smoothly during the other three runs. We normally base our latency plots on the third run from each card, but in the interest of highlighting the phenomenon, the plots below all show the first runs:


Ooh. We may have finally found a situation where 1GB frame buffers hinder performance at 1080p.

Perhaps that’s a hasty connection to make, though. The GeForce GTX 560 has the exact same amount of RAM as the misbehaving Radeons—and lower memory bandwidth than the 7850 1GB—yet it doesn’t suffer nearly to the same extent. The GTX 560 only exhibits a single big spike, toward the beginning of the run, and maintains largely consistent latencies the rest of the time. By contrast, the 7850 1GB and the 7770 both see multiple spikes and a general degradation of performance.

The average FPS rankings don’t really reflect the problem, but the 99th-percentile numbers do. That said, we should note that these charts are based on data collected from all runs, not just the initial ones. Since hitching wasn’t a problem in later runs, the differences here don’t appear as stark as in the plots above.


Not even the slower Radeons spend much time above 50 ms. Lowering the “badness” threshold to 33 ms puts the Radeon HD 7770 Black Edition and 7850 1GB at a clear disadvantage, though. The GeForce GTX 650 Ti 2GB AMP! and Radeon HD 7850 2GB Black Edition are undoubtedly the better performers here.

Power consumption

The Radeons switch to an ultra-low-power state when the display goes to sleep, which explains the first round of numbers. Under load, though, the GeForce GTX 650 Ti 2GB AMP! demonstrates excellent power efficiency, drawing only 10W more than the 7770 Black Edition.

Noise levels and GPU temperatures

The GTX 650 Ti AMP! is a smidgen louder than the rest of the pack at idle, but it’s quiet under load—certainly quieter than the Radeon HD 7850 1GB.

Despite its low noise levels under load, the Zotac card’s cooler works very well, keeping the GPU temperature at just 60°C. The power-hungrier Radeons run hotter, which is no surprise.

Conclusions

It may be fair to say Zotac is overcharging a little for the GeForce GTX 650 Ti 2GB AMP! Edition. The card trails the Radeon HD 7850 1GB more often than not, and while our 7850 1GB carries the same $179.99 price tag, other 7850 1GB models priced as low as $164.99 are available right now.

Recommending the Radeon 7850 1GB over the GTX 650 Ti 2GB would be pretty sensible… if it weren’t for the issues we encountered in Skyrim. Considering the 7850 2GB exhibited no problems, it’s likely the 7850 1GB’s smaller frame buffer is proving to be a handicap in that game. And the severity of the hitching we detected (even when re-testing) makes it hard to shrug off this particular problem.

Ultimately, I don’t think the GTX 650 Ti 2GB AMP! Edition is worth the $180 price tag, and I don’t think the 7850 1GB is a good substitute for it, either. If you’re looking for the best deal in this price range, my advice would be to set aside a little extra cash and spring for either a GeForce GTX 660 or a Radeon HD 7850 2GB. You’ll get guaranteed higher performance without memory bottlenecks at 1080p, and you’ll be able to drive a larger monitor with a higher resolution if you need.

Now, what if your budget is pulling you closer to the $150 mark? There will surely be one-gig GTX 650 Ti variants with similar or slightly lower clocks than the Zotac card we tested, and they may be available for well under $180. Odds are they’ll perform similarly in most situations—Skyrim at “Ultra” settings with high-res textures excepted. When considering such cards, then, your choice is going to be between them, cheaper 7850 1GB offerings from AMD, and hot-clocked versions of the Radeon HD 7770, like the $155 Black Edition model we tested.

We can disqualify the 7770 right off the bat, because we know it’s the slowest of the bunch. The 7850 1GB can deliver better overall performance than even a GTX 650 Ti 1GB with higher-than-reference clock speeds, but that performance edge will come at the cost of higher power consumption—and potentially higher noise levels, as well. The GTX 650 Ti should be slightly slower, but cooler-running, quieter, and easier to squeeze into a cramped build.

Then there’s the fact that some GTX 650 Ti cards, including the Zotac model we tested, ship with a free copy of Assassin’s Creed III. That can upset the value equation quite a bit, provided you’re planning on purchasing the game anyway. A $165 Radeon plus a copy of ACIII will set you back around $225, after all, which is quite a bit more than even the Zotac card’s $180 asking price. That said, AMD has a bundled game deal of its own. Some Radeon HD 7850 1GB models come with a free copy of Sleeping Dogs, which is only a couple of months old and sells for $49.99 on Steam right now. In the end, gamers with particularly tight budgets may care less about performance and more about which game they can get for free.

Comments closed
    • CppThis
    • 7 years ago

    I ordered one of these recently because my new 7770 doesn’t like my system for some reason. The only complaint I’ve heard from anyone about it is that it’s priced too high, which is kind of a weak argument given that new cards are always priced up so retailers can discount them later. Heck, the 7770 started life as a $160 card–ten bucks more than this one. The good ones are still in the $130-$150 range if you take rebate roulette out of the picture.

    • Ustauk
    • 7 years ago

    For anyone in Canada interested in the [URL=http://www.memoryexpress.com/Products/MX42111]ZOTAC 1 GB version of this card, Memory Express has it on sale for Cyber Monday for $115[/URL], down from $150, for today only (Monday, November 26, 2012). At the sale price, the card is much more attractive. I’m going to pull the trigger later today as a replacement for my aging Galaxy 8800 GT 512 mb. It may drop some more come Boxing Day (December 26th, our primary Black Friday-ish sale day here in Canada), but as it stands now, that is a good price.

    • rogue426
    • 7 years ago

    Hmm, there seems to be a lot of angst in the comments section between staffers and posters regarding this article.

    • ronch
    • 7 years ago

    Is it just me or do AMD cards offer more performance/price? I mean, even at Tom’s I see AMD cards making the “Best Graphics Cards for the Money” list more often than not, particularly in the ~$100+ segment. I guess that’s why I end up buying AMD even if I wanna go back to Nvidia.

      • willmore
      • 7 years ago

      The current generation of them do seem to be better values in performance/$$, but they do seem to be a bit behind the nVidia cards in performance/Watt. Since hardly anyone outside of HTPC or HPC (two fields vastly more dissimilar than the one-letter difference would suggest) cares about active power use, the AMD cards have gotten the nods for ‘best card at price point’. It does help a bit that AMD has better idle power consumption than nVidia. And most people’s cards spend far more time idle than they do under load.

        • sschaem
        • 7 years ago

        Depends on the game. AMD seems to focus more on shader compute performance, Nvidia on texture.

        The 7850 and the 650 ti seem to be in the exact same price range ~$155

        Crysis2 : 70 fps vs 52 fps
        system power use : 270w vs 235w

        3.8w per fps for the 7850 based PC
        vs
        4.5w per fps for the 650 ti based PC

        So the 650 Ti is slower and less power-efficient (especially in games like Crysis 2).

        And if you think GPU compute will matter in the next few years.
        Luxmar : 10,000+ on the 7850, 1,800 on the 650 ti

        As a consumer the 650 ti just doesn’t appeal to me.

          • willmore
          • 7 years ago

          Yes, some games perform better with some card architectures than others. Pick the card that plays your game best. Good analysis, sschaem.

    • CaptTomato
    • 7 years ago

    This card is too weak anyway.
    If I was to upgrade from 6850, it would be 7950 minimum.

    • xeridea
    • 7 years ago

    It seems the review has hints of bias.
    For Dirt Showdown:
    The 7850 and even the 7770 clearly win here. Even with global illumination off, the 7850 should be able to handle that fine; the 650 Ti AMP would obviously struggle further due to its low compute performance, but that wasn’t shown either… oh well.

    For Max Payne:
    [quote<]Compared to the Radeons, the GTX 650 Ti AMP! card fares much better this time. It's quicker than the Radeon HD 7770, and while its frame latencies are clearly a little higher than those of the 7850 1GB and 7850 2GB Black Edition, the difference is small, and frame-time consistency is excellent.[/quote<]
    It is 18.5 ms to 13.9 ms. The 650 Ti has 33% higher frame times; that's not "small". Also, it's only 4% faster than the 7770.

    For Sleeping Dogs:
    [quote<]In any case, the GTX 650 Ti AMP! seems to fare no worse than the 7850s here, and it's clearly ahead of the 7770 Black Edition.[/quote<]
    The 650 Ti is only 1% ahead of the 7770 in the 99th percentile; that's not "clearly ahead". It's 7% behind the 7850, which, while not huge, is notable, and I wouldn't call that "no worse".

    For the conclusion:
    [quote<]Recommending the Radeon 7850 1GB over the GTX 650 Ti 2GB would be pretty sensible... if it weren't for the issues we encountered in Skyrim. Considering the 7850 2GB exhibited no problems, it's likely the 7850 1GB's smaller frame buffer is proving to be a handicap in that game. And the severity of the hitching we detected (even when re-testing) makes it hard to shrug off this particular problem.[/quote<]
    The 7850 clearly wins every other test, sometimes substantially. It is a bit more spiky in Skyrim, but still smooth; even looking at the time spent beyond 16.7 ms, it fares pretty well, with less than one second below 60 FPS. I wouldn't really call that bad, and I wouldn't shun the card for this smaller issue when, in the big picture, it's a clear winner. It's cheaper and performs better overall. If you're only playing Skyrim, the 2GB version would be the better option, but the 1GB version is still plenty fast. Also, the 2GB version is closer to the 650 Ti AMP's price.

    For people's info, in 99th-percentile frame times:
    7850 - 32% = 650 Ti AMP - 0.1% = 7770
    Without Dirt Showdown, which has a huge performance difference:
    7850 - 15.9% = 650 Ti AMP - 8.1% = 7770

    Is this retribution for AMD doing a staged release to highlight the graphics power of Trinity?
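The frame-time percentages being argued over here are straightforward to reproduce. A small sketch using the Max Payne 3 numbers cited in this thread (18.5 ms and 13.9 ms 99th-percentile frame times):

```python
# Relative gap in 99th-percentile frame times (Max Payne 3 figures
# quoted in this thread), plus the frame-time-to-FPS conversion.
gtx_650_ti_ms = 18.5  # GTX 650 Ti AMP!
hd_7850_ms = 13.9     # Radeon HD 7850

gap_pct = (gtx_650_ti_ms / hd_7850_ms - 1) * 100
print(f"650 Ti frame times are {gap_pct:.0f}% higher")  # ~33%

# A 99th-percentile frame time in milliseconds converts to an
# equivalent frame rate as 1000 / frame_time.
fps = 1000 / gtx_650_ti_ms
print(f"{gtx_650_ti_ms} ms per frame = {fps:.0f} FPS")  # ~54 FPS
```

The 54 FPS figure is the same conversion Cyril uses in his reply below the comment.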

      • Cyril
      • 7 years ago

      [quote<]For Dirt Showdown: 7850 and even 7770 clearly wins here, even with global illumination off, the 7850 should be able to handle that fine, 650 Ti AMP would obviously struggle further due to low compute performance, that wasn't shown either... oh well.[/quote<]
      You've ignored the part in that page's testing commentary where I said, "The GTX 650 Ti AMP! pretty much mirrors the old GTX 560 in our latency plot. Unfortunately, that means it again suffers from more long frame times than the Radeons—even the 7770 Black Edition."

      [quote<]For Max Payne: . . . It is 18.5 to 13.9. The 650 Ti has 33% higher frame times, that's not "small".[/quote<]
      Again, you seem to be reading selectively. I stated, "That 99th-percentile frame time of 18.5 ms works out to 54 FPS, which is pretty close to the 60 Hz refresh rate of most LCD monitors. Sure, the 7850 cards are even faster, but exceeding the display's refresh rate doesn't normally yield palpable benefits."

      [quote<]For Sleeping Dogs: . . . 650 Ti is only 1% ahead of 7770 in 99th percentile, that's not "clearly ahead".[/quote<]
      The "clearly ahead" comment is in reference to the frame latency plot. I'm still seeing a pretty stark difference between the GTX 650 Ti AMP! and the 7770 Black Edition there. The same goes for the "no worse" comment. If you continue reading that page, however, you'll see I slam the Nvidia cards for having higher frame latency spikes than the Radeons. Context is important here.

      [quote<]For the conclusion: . . . The 7850 clearly wins in every other test, sometimes substantially.[/quote<]
      Yet again, you're focusing on the part that is (sort of?) forgiving of the GTX 650 Ti's failings, and ignoring the rest, which includes statements like this: "Ultimately, I don't think the GTX 650 Ti 2GB AMP! Edition is worth the $180 price tag."

      Or this: "The 7850 1GB can deliver better overall performance than even a GTX 650 Ti 1GB with higher-than-reference clock speeds, but that performance edge will come at the cost of higher power consumption—and potentially higher noise levels, as well. The GTX 650 Ti should be slightly slower, but cooler-running, quieter, and easier to squeeze into a cramped build."

      Is your comment retribution for our response to the staged Trinity release? 😉

        • xeridea
        • 7 years ago

        I realize there are mentions of some of the clear disadvantages of the 650 Ti AMP, so I am not discrediting the review; generally, your reviews tend to be of top quality. I just noticed that the wording in some areas doesn’t really tell the story as it is. Yes, I was selectively picking out points that could be considered misleading, to bring them up for debate. I could have mentioned that, but I was just pointing out what I found debatable.

        For Dirt Showdown: I know you are pointing out the 650’s shortcomings. I am wondering why global illumination was disabled, since the 7850 should be able to handle it (it is nearly flawless at 16.7 ms), though the 650 would obviously choke. It might have a disastrous outcome; I can’t be sure, but I feel it would be fine.

        For Max Payne:
        Even if both are very playable, I don’t see the reason for saying there is only a small difference when there is a substantial one. One could increase settings or resolution, run multiple monitors where it may matter, or underclock just because you can, for lower power and noise. This may not be the majority case, but I would say there is a significant difference in performance, though both play very smoothly at near-max settings.

        My comment is about what I noticed, not about your response. I don’t really see how it is a huge issue, since the key point of Trinity is graphics and a balanced chip that is good enough in most areas, except for enthusiasts or those with a specific need for maximum x86 throughput, though I can see your point about it possibly being misleading.

        • cegras
        • 7 years ago

        I’m not very sold on the argument about heat and noise. There are plenty of 7870s out there that are inaudible even under full load.

        • kalelovil
        • 7 years ago

        “The GTX 650 Ti should be slightly slower, but cooler-running, quieter, and easier to squeeze into a cramped build.”

        Since you aren’t testing reference design against reference design, aren’t any assumptions about card noise levels a bit meaningless? There are some quite quiet aftermarket HD 7850 cards.

        Also, most results show the GTX 650 Ti being more than ‘slightly slower’ than the HD 7850. I realize this is somewhat subjective, but you instantly disqualify factory-overclocked HD 7770s in your conclusion because of their lower performance, implying that the gap between the GTX 650 Ti and the HD 7850 is much smaller. It isn’t.

        I would have liked to see some overclocked results for the cards. I suspect the factory-overclocked GTX 650 Ti and HD 7770 wouldn’t have much more headroom, while most HD 7850s are known to yield good performance improvements when overclocked.

        Otherwise, interesting review.

    • jdaven
    • 7 years ago

    So it took a whole year, but finally both AMD and Nvidia have released their full lineups of HD 7000 and GTX 600 series products (if you count the one Sapphire dual 7970).

    I remember when ATI (at the time) and Nvidia would release a whole lineup in about 1-2 months and then do a complete refresh six months later. IMO, that was unsustainable, but I don’t know if I like the current method any better. By the time you wait for the whole lineup to get the cheapest price at every performance point, it’s time for the next series. I guess this is the strategy to extract the most money.

    Le Sigh.

      • ZGradt
      • 7 years ago

      Yeah, I miss the good old days. I remember way back when the Radeon 8500 came out and it ruled the roost. A year or so later, their flagship card became their mainstream card, and I was able to pick it up for less than $200. Now they don’t mind releasing cards that are low-performing to begin with. I’ll probably never be able to afford another flagship card 🙁

      • Ringofett
      • 7 years ago

      It’s still not quite the full line-up, is it?

      Isn’t there still a GK110 or something in the works, with better double-precision floating-point performance for compute apps?

        • willmore
        • 7 years ago

        Not if the rumblings from nVidia are true. That chip seems to be destined for HPC use only.

          • jihadjoe
          • 7 years ago

          Nvidia has very little reason to release GK110 as a $700 GTX 780 when they can sell every part they make as a $5,000+ Tesla or Quadro.

            • Chrispy_
            • 7 years ago

            Do people actually buy those? I guess I mean [i<]who[/i<] actually buys those?

            The market must be tiny, and I have never experienced software that runs better (per dollar) on a Tesla or Quadro. We use 3ds Max / Maya / Rhino / X-Flow / Maxwell / VRay / GC / Revit / GIS. Some of the guys are using 32GB workstations and running out of memory because they're modelling [i<]cities[/i<] down to the level of detail where you can see kerbs and doorways...

            ...and still we don't see any tangible benefits to a Quadro or FireGL. We do simple in-house testing. Does a $500 Quadro beat a $500 GeForce? No. It's not even close, because a $500 Quadro is based on the same silicon as the $99 GeForce from the previous generation. Sure, it may be three times faster than that same $99 GeForce, but the current-gen $500 GeForce is [i<]a whole order of magnitude[/i<] more powerful.

            I guess things like alpha-blended wireframe antialiasing on the Quadros are nice, but they're not worth giving up 90% of your performance to get.

    • Meadows
    • 7 years ago

    The performance is definitely where it should be, but the price is too high. If I had a say in it, I’d slash the price of every GTX 650 variant by the equivalent of $20.

    (Also, by the time it crosses the pond, it isn’t even *near* 150-180 dollars anymore, which only makes it worse.)

      • LocalCitizen
      • 7 years ago

      Buy the card that comes with the game license key, and either 1) sell the game, thus reducing the price of the card, or 2) enjoy the game and count $30 of the card’s price toward it.

      But yeah, like Cyril said, cash is better.

    • kc77
    • 7 years ago

    Is there a reason why introductions/reviews of new models use factory-overclocked cards? I’m not saying there’s anything wrong with the review, since it pits factory-overclocked against factory-overclocked. But since this is an introduction to a new model, shouldn’t we know what a regular 650 Ti will do, for comparison purposes?

      • JustAnEngineer
      • 7 years ago

      Amen!

        • kc77
        • 7 years ago

        I don’t know who or when, but it looks like someone has changed the way we review new video card SKUs. 🙂

      • Damage
      • 7 years ago

      We have tight time windows for these reviews, and in this case, Nvidia shipped a reference card–to the wrong person, in the wrong country, days before the launch, without asking who would be doing the review.

      Even if he’d had the card, I doubt Cyril would have had time to test it alongside everything else anyhow.

      I’ll add something: old-school Radeon fanboys probably need to work on getting over the notion that “higher than reference” clocks are a problem or somehow illegitimate. Nvidia has long taken the approach of giving board partners wide leeway to offer higher clock speeds at slightly higher prices within a product range. AMD resisted for a while but, with this generation, has fully embraced the same approach.

      So that ship has now sailed.

      The reference cards and clocks are nothing special: just a speed defined as a common baseline for a range of products, from which actual products vary upward to different degrees. The board makers choose the clock speeds they are willing to ship and support, and in practical terms, those speeds are arguably more noteworthy than the chipmakers’ baselines. In fact, that’s an easy argument to make in terms of consumer relevance.

      The cards we tested are real, packaged products that one might buy, while in some cases–especially in lower price ranges–cards based on the reference design may never see a retail shelf.

      Our preference has always been to test actual, end-user products when practical, so we tend to focus on those when given the chance. And, as you might imagine, both AMD and Nvidia, as well as the board makers, like to put their best feet forward with fast cards when they can. So yeah, you’ll see higher-clocked cards tested, although we do try to match prices on different models from AMD vs. Nvidia when we can. And, of course, regardless of what we’re comparing, we work hard to keep price and performance in tension with one another, usually with a scatter plot.

      So… we’re testing real products and comparing them at actual sale prices. Consider yourself free to get over it now that AMD plays the exact same game as Nvidia.

        • kc77
        • 7 years ago

        Um, wow. Have you gone off the deep end? Damage, that seems like a large response for very little text. You’re addressing things and points that weren’t even made.

        The first question I asked was sincere. The second post was a joke.

        However, I think you are trying to address previous comments so I’ll end with this….

        [quote<]So... we're testing real products and comparing them at actual sale prices. Consider yourself free to get over it now that AMD plays the exact same game as Nvidia.[/quote<]

        If cognitive dissonance weren't an issue, you wouldn't be telling me what I have to get over by using an example that says so much. Look, if you can't take criticism, then why do you have comments? Is everyone just supposed to say, "oh, this is just flawless"? When you do good work, I say so. If you do work that leaves something to be desired, I say that as well. Getting all sensitive every time someone makes a critique that's not glowingly positive says far more about the work than I ever could.

          • Damage
          • 7 years ago

          It’s interesting how all of you guys go straight for the argument *about* the argument, rather than addressing the substance of my post. You asked a question, and I answered it. Took some time to do it, even. You’ve expressed a problem with us testing actual retail products many, many times in the past, and here we are again. So I took that on, too.

          Now we get this?

          Look, I explained why we’ve done what we’ve done, so everybody knows our position. If there is a cogent reason to object to it, by all means, explain it. Use facts and reasoning. Persuade us. Otherwise, you’re just trolling.

            • kc77
            • 7 years ago

            OK, then let me persuade you.

            [quote<] It's interesting how all of you guys go straight for the argument *about* the argument, rather than addressing the substance of my post. [/quote<]

            I'll start off by saying that I addressed it, and I addressed it that way because it was far more professional and fair than wasting space (and time) completely destroying the argument that I'm some "Radeon" fanboy or a troll. It was not personal and was quite general. It was short and said everything I had to say. But if you want something longer and more direct, then here you go. Not all arguments can be won by calling someone a troll.

            Had you truly been viewing ALL of my comments fairly (instead of getting all sensitive because someone didn't give you glowing praise), you would have known that I don't own Radeon cards. I run Linux, and this is widely known. It's in your own forums (you know, the place where I stick around to help people and not just troll), and my rig is in my signature. [url=http://i826.photobucket.com/albums/zz188/kaczu_bucket/DSC01243.jpg<]Here is how it looked 6 months ago.[/url<] Do you notice a problem? If you can't find an AMD logo in that rig, that's because it's not there. Now, to be fair to you, I went dual socket 6 months ago and went with Opterons, but the video card is the same, and the GPU manufacturer will always be the same for as long as Linux is my OS... or until AMD sees fit to make a decent Linux driver, or Intel finds the ability to make a decent graphics card.

            Now, why would I go to such lengths to prove this point? Because, Damage, you should know that not everyone who gives TR criticism is a fanboy or a troll. Sometimes they are loyal readers who have been reading your site since it opened and are merely giving you their opinion. They know when you make a statement that doesn't quite smell right, and if they care enough, they just might voice their opinion on the Internet.

            I can't speak for anyone else, but I come here for tech reviews on hardware and generally to be somewhat helpful in the forums where possible. I don't come here to make sure everything I say is filtered for your personal enjoyment. Not everyone is here to make you feel good about yourself. You have shown over and over again that you just can't take criticism, much less a question. If you don't get your way, then there's a conspiracy afoot. If someone disagrees with your point of view, then all the trolls, real or imagined, are out to get you. If someone asks a question, then it's really a nefarious plot to destroy you. The cognitive dissonance in all of this is that while you're claiming that everyone needs to get over it, you yourself haven't gotten over it. That condition makes appearances quite a lot, actually. When you responded to JAE for taking an improper tone, the only one seemingly out of line was you. He was just asking a question. The real issue here is that you've got your own demons to deal with, and oftentimes you post them, or they come through in a review. Then you get angry and retaliate when everyone recoils because they don't agree with your point of view. [b<]Just saying you're persuadable or objective isn't enough; you actually need to be the thing you say you are.[/b<]

            While there have been many times when I've not been in full agreement, there have been many times when I have. Not that I need the gesture of kindness, but it would make me happy if you remembered that as well.

            • Damage
            • 7 years ago

            Wow, this is so not about you. Or me. It’s supposed to be about the substance of the issue: why we tested the cards we did, and whether there’s good reason to object to how we did it.

            I have yet to see you address the substance of that issue. You just keep attempting to make things more and more personal.

            Do you have anything useful to contribute to the subject at hand, or do you intend to keep saying “I’m not a troll!” and baiting me with personal attacks?

            • kc77
            • 7 years ago

            Wow, so now it’s victim time? I believe you called me a troll; it seemed kind of personal. Anyway, I guess now that the evidence doesn’t support the claim, it’s time to change strategies. I’ll let you double back.

            [quote<] It's supposed to be about the substance of the issue: why we tested the cards we did, and whether there's good reason to object to how we did it. [/quote<]

            Huh? The first question was why, on a new release, we do not see what a reference card scores, and whether there was a reason for that. That's it. You answered the question and then continued with personal nonsense. I'll accept the two paragraphs that contained answers and dismiss the rest. I'm sorry, I didn't pack my bags; I don't do guilt trips. Most of this crap wouldn't have been necessary had you just answered the question like an adult.

            • Damage
            • 7 years ago

            What I said was:
            [quote<]If there is a cogent reason to object to it, by all means, explain it. Use facts and reasoning. Persuade us. Otherwise, you're just trolling.[/quote<]

            So if I called you a troll, it was only by your own inference. I think, given your track record of complaining bitterly in multiple GPU reviews about the inclusion of higher-than-stock-clocked cards, my decision to take the time and explain our position was entirely warranted and sensible. Your apparent decision to dismiss my explanation of our position on the subject seems less than reasonable.

            If you have objections, explain them clearly, so we understand going forward what the issues might be with testing retail, boxed products rather than reference cards.

            • kc77
            • 7 years ago

            [quote<]So if I called you a troll, it was only by your own inference.[/quote<]
            So says the victim.

            [quote<] I think, given your track record of complaining bitterly in multiple GPU reviews about the inclusion of higher-than-stock-clocked cards, my decision to take the time and explain our position was entirely warranted and sensible. [/quote<]
            And I think you have a track record of not being able to take critique without making the argument personal. You can infer what you like.

            [quote<] Your apparent decision to dismiss my explanation of our position on the subject seems less than reasonable. If you have objections, explain them clearly, so we understand going forward what the issues might be with testing retail, boxed products rather than reference cards. [/quote<]
            There isn't a dismissal; you would just prefer to play the victim here. The problem with not having a reference score on the release of a new model is pretty damn apparent. What you have reviewed is not the 650 Ti but the Zotac 650 Ti AMP, and there's a difference, because we don't know what the average user will get if they DON'T buy that exact model. The question I had was valid, and you answered it, along with a whole lot of baggage you are carrying. That's not anyone's problem but yours.

            • Cyril
            • 7 years ago

            [quote<]What you have reviewed is not the 650 Ti but the Zotac 650 Ti AMP and there's a difference. Because we don't know what the average user will get if they DON'T buy that exact model.[/quote<]

            I disagree with the notion that the "average user" is going to buy a reference-clocked model. Look at current listings at Newegg. The cheapest, reference-clocked GTX 650 Ti is priced at [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814121669<]$154.99[/url<], but for just [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814130839<]$5 more[/url<], you can get a 1GB card with an even higher core clock speed than the model I reviewed. I think anyone with a shred of sense would spend the extra five bucks and get the faster card.

            That's the problem with these launches. Testing the reference model provides valuable information, and I would have done it had I received the reference card. However, the prevalence of affordable hot-clocked cards (which all have different speeds) means [i<]any[/i<] card we test is unlikely to be an exact representative of what users will end up with.

            • Chrispy_
            • 7 years ago

            That explanation works for me.

            A friend of mine was interested in a MiniITX build and wanted an HD7950 with a blower, like many of the reference designs that reviewers get sent.

            Could I find a reference design anywhere? No.
            The closest I found was a retail listing of a card with a reference-design product shot, but when I looked up the actual SKU on the manufacturer’s website, the cooler was a cheaper, open affair.

            Reference cards don’t exist outside of OEMs like HP and Dell. You and I would need to go to great lengths to get a reference board for anything other than the first month after a brand-new architecture is released.

            • flip-mode
            • 7 years ago

            I love this new side of you! Spank me, big boy!

            • derFunkenstein
            • 7 years ago

            your nose has the brown.

          • derFunkenstein
          • 7 years ago

            Why would anybody want to test a card that you can’t buy at retail when you could instead test an actual shipping product at an actual shipping price point? This argument is stupid. If it exists, and it’s not just OC results that TR did themselves, what’s the problem?

            • kc77
            • 7 years ago

            There isn’t a problem. I asked a question. Then Damage went postal, so now there are more threads than necessary.

            • derFunkenstein
            • 7 years ago

            It’s a loaded question that implies plenty, given the amount of whining in the past. He addressed all potential future questions and just put that whole thing to bed.

            • kc77
            • 7 years ago

            And your response is loaded. You can’t address future questions that haven’t even been asked, when they aren’t related to the topic at hand. You can kneel down if you want to, but I have my own mind and stopped kissing people on the derrière years ago.

        • cynan
        • 7 years ago

        I don’t think a reviewer’s test methodology should be dictated by the current marketing arrangements between chip makers and graphics card vendors. It should be established to give the reader the most useful information (as long as this doesn’t cost the reviewer excessive and undue work). When I look at a video card review, how a specific factory-overclocked model performs is less meaningful, largely because if I end up buying, price and local availability will heavily influence which particular model I get.

        To me, a graphics card review should always involve benchmarks at the reference core and memory speeds, if possible. This has nothing to do with fanboys or whether it is fair to the competing brand, but solely because it provides the reader with the best point of reference for comparison with other models and past generations of video cards. As it is, for example, we now have factory-overclocked 660 Tis that perform almost on par with “reference” 670s, and factory-overclocked 670s that perform almost on par with “reference” 680s. To give the reader the most reliable performance reference across competing models and generations, and to keep testing as standardized and manufacturer-agnostic as possible, a reference performance level should be used for each graphics chip model.

        Nvidia and AMD are constantly trying to find new ways to stay competitive and market their products creatively. This ranges from tactics as banal as binning and shipping factory-overclocked graphics cards to the sort of antics AMD toyed with in their recent Trinity review previews... I would prefer it if tech review sites could remain as agnostic to these maneuvers as possible.

        However, if it is not always feasible to obtain reference review samples, and not possible to downclock the review models received to reference speeds, then I suppose this all goes out the window. (Is it possible to downclock these cards using utilities such as MSI Afterburner, etc.?)

        • puppetworx
        • 7 years ago

        I’d love to know how the reference design performs too. In fact, that’s why I just came here. TR’s GTX 660 Ti review included a reference-design card from PNY (clearly a real card that real people buy). Well, PNY also makes (and sells) a reference-based GTX 650 Ti card. I’m therefore disappointed that TR hasn’t reviewed one, TR being my ‘go to’ source for tech reviews. I understand the card didn’t arrive in time, but if you had explained that at the start of the review instead of glossing over it, I wouldn’t be here posting about how unimpressed I am now. I understand it’s out of your control and you are under a lot of stress, but a simple disclosure stating that the card didn’t arrive and that time constraints mean you won’t be coming back to review a reference card isn’t entirely out of the question. It would save a hell of a lot of arguing in the comments, too.

        Lastly, I’ll say that seeing how a reference-based card performs IS a valuable metric. Comparing reference designs to overclocked cards lets you interpolate, for one, but more importantly, it helps you choose the best performance per dollar (or pound). At the moment, here in the UK, the reference GTX 650 Ti from PNY is 20% cheaper than any of the overclocked editions; it would be nice to know whether that’s money wasted or not. As I’ve said, I understand that you weren’t able to perform these tests, and that’s fine; I’ll begrudgingly go somewhere else for a review. But what’s not fine is pretending that the results of those tests are valueless when they’re clearly not.

      • Deo Domuique
      • 7 years ago

      Good point, kc77. This technique matches, in my eyes, the false advertising about “enjoy the latest games in full 1080p HD with this next-generation NVIDIA Kepler architecture-based GPU”.

      • ryko
      • 7 years ago

      Not all cards in this review are factory oc models…

      650ti = oc
      7770 = oc
      7850 1gb = stock
      7850 2gb = oc

      The card closest in price to the 650ti is a stock clocked 7850 1gb.

      Not that it really matters, as the testing reveals that the 7850 1GB is better at everything except Skyrim. It’s just that the 650 Ti is shown hanging in there with the competition, maybe due to the OC. So it paints a rosier picture for Nvidia than a stock-clocked version would have, and that’s what we’re concerned about here.

        • Damage
        • 7 years ago

        And, as I said, we kept price and performance in tension in our analysis–even though the 7850 1GB card AMD supplied is priced like the Zotac card. Cyril’s conclusion opens with: [quote<]It may be fair to say Zotac is overcharging a little for the GeForce GTX 650 Ti 2GB AMP! Edition. The card trails the Radeon HD 7850 1GB more often than not, and while our 7850 1GB carries the same $179.99 price tag, other 7850 1GB models priced as low as $164.99 are available right now.[/quote<]

          • jdaven
          • 7 years ago

            I like the TechPowerUp.com model best. Each review is named after the retail card, and each retail card receives its own review. In each review, we get numbers for the reference stock clocks and for the retail card being reviewed. That method is perfect in my view. If you combined the awesomeness of TR’s GPU benchmarking procedure with TechPowerUp’s review model, TR would be the best on the Internet.

            • kc77
            • 7 years ago

            This is precisely what sparked the question. Some sites tested with all-reference cards, so the scores were wildly different. So I posed what I think was a valid question regarding the worth of seeing reference cards for comparison on a brand-new model.

        • Essence
        • 7 years ago

        I agree, and below is what I took away from this review; I think it was done on the sly, e.g.:

        “The GTX 650 Ti should be slightly slower, but cooler-running, quieter, and easier to squeeze into a cramped build.”

        Looking at the review, the 650 Ti is running hotter, not cooler, as was said in the conclusion, though with less noise. But what do you expect from an aftermarket 650 Ti AMP vs. a reference XFX HD 7850?

        “Not all cards in this review are factory oc models…

        650ti = oc
        7770 = oc
        7850 1gb = stock
        7850 2gb = oc

        The card closest in price to the 650ti is a stock clocked 7850 1gb.”

        Overclocked vs basic card… WOW… whatever happened to all the shouting about AMD being deceptive?

          • Cyril
          • 7 years ago

          A few things.

            1) The 650 Ti AMP! does run cooler than the Radeons. See the last graph on [url=https://techreport.com/review/23690/review-nvidia-geforce-gtx-650-ti-graphics-card/9<]page 9[/url<].

            2) The XFX 7850 1GB doesn't use AMD's stock cooler. It has an XFX cooler with the fan in the middle. And considering it runs at reference speeds while the Zotac card is much faster than stock, it's not a given that the Zotac would have lower temps.

            3) The XFX 7850 1GB has the same list price as the GTX 650 Ti AMP! ($179.99), so it's a pretty natural competitor. The XFX card does come with a $20 mail-in rebate, which I pointed out in the writeup, but mail-in rebates tend to come and go and aren't a guarantee. I also pointed out in the conclusion that similarly specced 7850 1GB cards are available for as little as $165 before rebates.

            4) The Zotac GTX 650 Ti AMP! and XFX 7850 1GB were the only two new cards made available to me for this review. The former was provided by Zotac, the latter directly by AMD. I had only a few days to test them. I worked 14-15-hour days and had to sacrifice my weekend in order to test as many games as I could and include the handful of other cards I had on hand.

            It's cute that some of you think I hatched a dastardly plan to make AMD look bad by picking out deliberately mismatched cards from my limitless video-card bag of holding, all to satisfy an imaginary grudge or fulfill a nefarious editorial agenda. But as always, reality is a lot less exciting than fiction.

            • kc77
            • 7 years ago

            I don’t think anyone is accusing you of deliberately mismatching cards. However, I think what people are looking for is consistency between the reviews. It is irrelevant whether it’s AMD or Nvidia, because as you’ve stated they all pull shenanigans. [b]That's why we look to you to create a standard.[/b]

            When new models arrived, we used to see reference tested against reference. If factory-overclocked cards were released, a shootout would be done with all of the factory-overclocked cards that were available. This seems logical; I don't think anyone has a problem with that. Then, on the next review, on the debut of a new reference model, all of a sudden new overclocked models appeared alongside the new reference model being released. This changed what some would consider a standard. Then on the next review you went back to the old way of reference with reference and overclocked with overclocked when the next new model family arrived (I actually commended TR on this, as I agree with this approach).

            This brings us to today, where we have a new model which isn't reference (as some probably expected) but instead a factory-overclocked model, and that's supposedly representing what the average 650 Ti will do. This card was matched with two factory-overclocked cards and one reference card. However, that Zotac is arguably average, as we have core clocks ranging from 928 to 1071 MHz across the lineup. This is why testing one specific factory-overclocked model is problematic: not only do we not have the top end, we don't have anything from this model on down, and we don't have a floor/baseline. So someone looking at the review doesn't have a clue what the reference card will do or what the top-clocked cards will do; all they have is what the Zotac 650 Ti AMP will do. All of the other cards could be cooler, hotter, quieter, louder, slower, or faster, and none of those possibilities are represented here.

            This is why a firewall between reference and factory-overclocked works well: no matter what some fanboy says, they can't dispute that as a standard. I personally think it should be reference with reference on new releases, and factory overclocked with factory overclocked, with a shootout book-ending all of it. This is so not about fanboyism; it's mostly about (at least for me) expecting a standard, receiving the standard, and having that standard applied consistently, whether it's 2008 or 2012.

            Edit: I understand that you received the wrong card for the review, and I don't blame you for that. However, I don't think anyone should be let off the hook for it. This would be like Intel, on the debut of the Core i7-920, accidentally sending you a Core i7-950 and saying, "sorry, my bad, but test with that."

            • Chrispy_
            • 7 years ago

            So much butthurt in this thread!

            I think your original point was a valid one, but due to past history you and Scott are both reading more into each other’s comments than is really there. Scott is being overly defensive, you are being overly sensitive, and what we have is a massive tangent about testing methodology.

            In practice, the cards were tested at their actual clock speeds and prices, and Cyril’s statements in the article noted that the prices and clocks of these exact products were higher than those of other products on the market.

            In an ideal world, all reviews would be done at reference clocks and would include overclocking results, allowing us to draw our own conclusions from a baseline when assessing the actual retail products available to us. The fact that geographical/availability/time constraints interfere with this is unfortunate, but it is also unrealistic to expect any reviewer to always find a reference board.

            As a reader, it isn’t hard to extrapolate the performance of a slightly lower-clocked card, and it would equally be possible for reviewers to underclock retail cards back to reference clocks and voltages. That, however, isn’t the whole picture; a large part of the equation in GPU reviews these days is heat, noise, and power testing. Without an actual reference board, underclocked results are much less meaningful. For a flagship product like a GTX 690, people expect high power and noise, but for a lesser card like this one, which will go into cramped cases and quiet environments, I can completely understand the many factors that lead to reference-clocked results being omitted from a review.

            To sum it all up: yes, we’d all like to see reference cards reviewed, but at the same time we know it’s not always possible or convenient to do so. You’ve been following TR long enough that you already knew the answer to your original post, and I (perhaps incorrectly) infer a level of aggression and negativity from it.

    • alienstorexxx
    • 7 years ago

    i just came from guru3d review. damn, each review is worse than the last. so, thank you for doing reviews for users, not for companies.

    7850 1gb is what i wanted to see against 650ti. sadly, here, in argentina, 7850 1gb is priced equal to 2gb edition. moth#rf@ckers everywhere.

    it would have been nice to see a price-performance comparison chart. also, you should fix the prices, as @sschaem says.

    • Myrmecophagavir
    • 7 years ago

    “… bridging the gap between the GeForce GT 650 and GeForce GTX 660.”

    Should that be GTX 650? It’s getting a little hard to tell these days, though you’ve used GTX 650 elsewhere in the article. (I’d be interested in a review of that just for the numbers. Moar data!)

      • Cyril
      • 7 years ago

      Ah, yes, my bad.

    • sschaem
    • 7 years ago

    I see different prices on Newegg:
    7770 – $110, not $154
    7850 – $165, not $180
    7870 – $225 (TR used the 7850 at $230)

    Was the review done before AMD’s price drop?

    Seeing how the 650 Ti is a match for the 560, would the 560 Ti at the same price be a better value (especially considering the GPU compute power the 560 Ti packs)?
    560 Ti – $180

    It seems the 650 Ti is a product for systems that want to use a lower-rated PSU?

      • albundy
      • 7 years ago

      the xfx 7850 is now $159. It’s not worth downgrading to the GTX 650. Now, things can change: first, make it a single-slot card and add SLI; then it would be worth considering.

      • flip-mode
      • 7 years ago

      The XFX double D black is a silly choice for the article to make.

      • Cyril
      • 7 years ago

      I think you’re looking at prices for the cheapest available version of each card, not the models we tested. The prices are still accurate for those, as far as I can tell:

      XFX Radeon HD 7770 Black Edition – [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814150599]$154.99[/url]
      XFX Radeon HD 7850 1GB Core Edition – [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814150617]$179.99[/url] (before the $20 MIR, which I mentioned on page 2)
      XFX Radeon HD 7850 2GB Black Edition – [url=http://www.amazon.com/XFX-Radeon-Black-Edition-Video/dp/B007MJURJQ]$229.96[/url]

      The two Black Edition models have higher-than-reference clock speeds, which is why they're more expensive than stock variants. The Nvidia cards we tested are also hot-clocked models that cost more than the reference offerings.

        • BestJinjo
        • 7 years ago

        You don’t need to test a factory pre-overclocked 7850 at all, as even a stock 7850 is 20-25% faster than an aftermarket 650 Ti.

        You can now buy an HD 7850 1GB without any rebates for $165:
        [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814161426[/url]

        This is $15 more expensive than a stock GTX 650 Ti and offers 30% more performance at 1080p.

        The HD 7850 2GB is $185 without rebates:
        [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814129230[/url]

        This is about $15 more expensive than the cheapest GTX 650 Ti 2GB on Newegg and again offers about 25% more performance.

        Sure, you can say that the GTX 650 Ti uses 30W less power, but if power consumption of such low significance matters that much, why even play PC games? Get a console. The Assassin's Creed bundle is about the only positive thing this card has going for it. I definitely think someone who is gaming on a budget will care more about giving up 25-30% performance than about saving 30W of power. That's why they are budget PC gamers, not budget console gamers.

        You didn't even mention that the HD 7850 has amazing overclocking headroom that allows it to reach HD 7870/660 speeds (you did test overclocking of the 7850/7870 XFX cards earlier in the year). Stock vs. stock or OC vs. OC, the 7850 mops the floor with the GTX 650 Ti for $15-20 more, 1GB vs. 1GB and 2GB vs. 2GB. This is the first review where I think you were too nice to an overpriced NV card that's eight months late. Power consumption alone is not a factor when you are losing 25-30% performance to save less than 30W. The GTX 670/680 are excellent cards, but the 650/650 Ti are both inferior in price/performance to their HD 7770/7850 competitors.

          • Cyril
          • 7 years ago

          [quote]You don't need to test factory pre-overclocked 7850 at all as even a stock 7850 is 20-25% faster than an after-market 650Ti.

          You can now buy HD7850 1GB without any rebates for $165.
          [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814161426[/url]

          This is $15 more expensive than a stock GTX650Ti and offers 30% more performance at 1080P.[/quote]

          I'm confused. Why are you folks repeating things I pointed out in the review? Namely:

          [quote]The [GeForce GTX 650 Ti 2GB AMP! Edition] trails the Radeon HD 7850 1GB more often than not, and while our 7850 1GB carries the same $179.99 price tag, other 7850 1GB models priced as low as $164.99 are available right now.[/quote]

          I even included a link to the Newegg listing for the $164.99 version of the 7850 1GB. I then went on to say:

          [quote]The 7850 1GB can deliver better overall performance than even a GTX 650 Ti 1GB with higher-than-reference clock speeds, but that performance edge will come at the cost of higher power consumption—and potentially higher noise levels, as well. The GTX 650 Ti should be slightly slower, but cooler-running, quieter, and easier to squeeze into a cramped build.[/quote]

          As for those lambasting me for testing the XFX Core Edition version of the 7850 1GB (which, again, runs at the reference specifications): this is literally the card AMD sent me last week, after word about the GTX 650 Ti presumably got to them.

            • JustAnEngineer
            • 7 years ago

            Considering that you can get a Radeon HD 7850 [b]2GB[/b] card for $5 more than the 1GB card, why would those of us who have to pay for our graphics cards select one with only 1 GiB of memory?

            • Damage
            • 7 years ago

            Two responses to your post, JAE:

            1) There is no reason to be uncivil in tone to Cyril. Watch yourself, please.

            2) Concerning the 1GB vs. 2GB issue, this is what Cyril wrote in his conclusions:

            [quote]Ultimately, I don't think the GTX 650 Ti 2GB AMP! Edition is worth the $180 price tag, and I don't think the 7850 1GB is a good substitute for it, either. If you're looking for the best deal in this price range, my advice would be to set aside a little extra cash and spring for either a GeForce GTX 660 or a Radeon HD 7850 2GB. You'll get guaranteed higher performance without memory bottlenecks at 1080p, and you'll be able to drive a larger monitor with a higher resolution if you need.[/quote]

            So I'm stymied about why you're disputing him at all. The fact that we tested the 1GB card at all was a kindness to AMD, who asked and offered the card. He tested both variants and offered an opinion that would seem to agree with yours. Get over it.

            • willmore
            • 7 years ago

            [quote]Two responses to your post, JAE:

            1) There is no reason to be uncivil in tone to Cyril. Watch yourself, please.[/quote]

            I must have misread JustAnEngineer's posts, because I see nothing like this. We're all here having a very polite conversation. If you can, please quote where anyone was speaking in even a remotely disrespectful tone to Mr. Kowaliski.

            Bringing it back on topic: JustAnEngineer's point about the 2GB card being $5 more than the 1GB card is that, if Mr. Kowaliski thinks the 1GB card was hamstrung in the one benchmark where it performed significantly differently from the 2GB 7850, then it is meaningful to test a 2GB card with identical clocks. Mr. Kowaliski's theory could readily be tested in such a way. If the 2GB card performed closer to expectations, then we could say that Mr. Kowaliski had discovered a wonderful example of why 1GB cards should not be considered by any but the lower-end gamer. Being able to make such a statement would have a lot of value.

            I appreciate that you tested the 1GB card as a kindness to AMD; I don't remember reading that in the article. Even so, finding a test case where that card performed poorly (compared to a similar 2GB card) would be a useful thing to push back to AMD: "And this is why we thought your 1GB card was a poor comparison."

            I recently purchased a 2GB 7850 with a good factory overclock, pretty much equivalent to your tested 7850 2GB card. A good chunk of my decision to go with a 2GB card over a 1GB card was for situations just like this. My old 512MB card was hitting a number of situations where it had enough shader power and memory bandwidth but lacked sufficient memory to keep textures loaded. I wanted to ensure that I would not still be in that situation after I upgraded my card. I say this just to underscore how important examples of the benefit of 2GB cards vs. 1GB cards are to us readers of these reviews, who use what we read here to drive our purchasing decisions.

            So, if someone like JustAnEngineer or myself seems to be belaboring some trivial-seeming point, there may be a very good reason for it. If we didn't respect Mr. Kowaliski's work and opinion on these matters, we simply would not bother to ask.

            Edit: spelling errors, holy cow, the spelling errors!

            • Meadows
            • 7 years ago

            JAE said “those of us who have to pay for our graphics cards”, which implies Cyril would be some sort of a spoiled brat who complains about the stuff “he gets for free”.

            • willmore
            • 7 years ago

            That is a very strange way to take that comment. Mr. Kowaliski is a professional reviewer. It’s common knowledge that his work will provide the materials he needs to do his job. Sometimes the vendor will provide them, sometimes not. I don’t see how that casts aspersions on Mr. Kowaliski’s character.

            Since we, the customers of these devices, do have to pay for them, the comment seems factual and lacking the connotations that you imply.

            • derFunkenstein
            • 7 years ago

            I read it the way that you read it. Not sure who minused you but I’m happy to undo it.

            • Meadows
            • 7 years ago

            I’ll send you a “friends forever” sticker, stick it wherever you like.

      • joselillo_25
      • 7 years ago

      Since the end of MHz as a way to identify chips, I have never been able to compare two products without reading several pages on the Internet, and I am an enthusiast with a decent knowledge of these things. The confusion of the average customer must be epic these days.

    • Arclight
    • 7 years ago

    Thank you, Mr. Kowaliski; I was just wondering today when the review would be posted.

    • Chrispy_
    • 7 years ago

    Nvidia has thoroughly confused me this generation. The naming schemes seem to be more confusing than ever before:

    GK104 Full = GTX680
    GK104 missing an SMX = GTX670
    GK104 missing an SMX and a ROP = GTX660Ti
    GK106 Full = GTX660
    GK106 Missing an SMX = GTX650Ti
    GK107 Full = GTX650
    GK107 with crappy DDR3 = GT640

    My particular gripe is that, at one end, identical products are given wholly separate model numbers just for changing from DDR3 to GDDR5; at the other end, model numbers don’t even indicate what base silicon you are getting: a 660 could be either GK106 or GK104, and a 650 could be either GK107 or GK106.

    Arguably, the aim is to deceive the partially educated in order to make more money. In reality, the informed look up the actual specs anyway, and the uninformed had no idea in the first place.

    Before the naming went completely belly-up, there was only one missing Kepler chip (the GK106) and one corresponding gap in the product numbers (650). Whoever thought it necessary to avoid making that matchup needs to have their head checked (or amputated).
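    Chrispy_'s table amounts to a reverse lookup from card name to silicon. As a minimal sketch (the card list is taken verbatim from the table above; the dictionary and helper names are hypothetical), the ambiguity being complained about falls out directly:

```python
# Mapping from the comment's table: retail card name -> (chip, configuration).
CARD_TO_SILICON = {
    "GTX 680":    ("GK104", "full"),
    "GTX 670":    ("GK104", "missing an SMX"),
    "GTX 660 Ti": ("GK104", "missing an SMX and a ROP"),
    "GTX 660":    ("GK106", "full"),
    "GTX 650 Ti": ("GK106", "missing an SMX"),
    "GTX 650":    ("GK107", "full"),
    "GT 640":     ("GK107", "DDR3"),
}

def chips_for_model_number(number):
    """Every distinct chip hiding behind card names containing `number`."""
    return sorted({chip for card, (chip, _) in CARD_TO_SILICON.items()
                   if number in card})

print(chips_for_model_number("660"))  # ['GK104', 'GK106'] -- two chips, one number
print(chips_for_model_number("650"))  # ['GK106', 'GK107'] -- likewise
```

    This is the gripe in code form: the "660" and "650" model numbers each map to two different pieces of silicon, so the number alone cannot tell a buyer which GPU they are getting.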

      • Waco
      • 7 years ago

      This. So very this.

      • derFunkenstein
      • 7 years ago

      +1. Bump. Everybody clap your hands.

      • kroker
      • 7 years ago

      You think this is confusing? Just wait until they rebrand these chips for the next two series of video cards.

      • Medallish
      • 7 years ago

      Imo the table you’re presenting isn’t the worst, it’s not super either, but eh. What really annoys me is the ocean of rebrands taking the same names as the Kepler-based cards, like the GT 630: there are three, two of which are based on GF108.

      Want a 640? Which one? There are four! Five if you include the 645. Three are based on Kepler, and for some weird reason one of those Kepler-based 640s has GDDR5 (essentially making it a GT 650 at lower clocks), and it’s the OEM version. Then there’s the GF116-based GT 640, and the GT 645 is based on GF114, before the lineup finally gets to what you’re showing. I know a lot of these cards are OEM-only, but it’s still quite a clusterf**k, and it makes recommending lower-end cards a bit of a chore if you actually want people to get a Kepler card at the lower end.

        • Chrispy_
        • 7 years ago

        Honestly, I didn’t want to make the post too much like a rant, but it’s [b]EVEN WORSE[/b] in the laptop space. With the mobile 600 series you really are playing silicon roulette.

          • FormCode
          • 7 years ago

          Ah, yes. Like the GT 630M, which can be based on THREE different chips with different specifications and performance: GF108 (96 shaders, GT 540M), GF106/116 (144 shaders, GT 550M), and GF117 (96 shaders, Fermi).

            • Chrispy_
            • 7 years ago

            My point, exactly – you are spot on.

            It wouldn’t matter [i]quite as much[/i] if the performance of those three different GT 630M products was close (or even in the same ballpark, for that matter), but it isn’t; the performance deltas are anything from 50% to 600%, depending on the application or game engine.

      • phez
      • 7 years ago

      [quote]Arguably, the aim is to deceive the partially-educated with the aim of making more money. In reality the informed look up the actual specs anyway and the uniformed had no idea in the first place.[/quote]

      The 670 is faster than a 660 Ti, which is faster than a 660, which is faster than a 650 Ti, which is faster than a 650? Wow! How perfectly ordered and logical. I didn't need to take quantum mechanics 101 to figure that out.

      This is quite possibly the most stupid rant I've ever read here at TR.

        • Ryhadar
        • 7 years ago

        You could have just as easily made your point without an attitude.

        • Chrispy_
        • 7 years ago

        The fact that you’ve looked superficially at the naming scheme and seen reason shows just how well their crazy marketing ploy works.

        The aim is clearly to confuse people over the lesser cards. People will see the 650 Ti on a shelf for $150 and then a 650 for $110. They’ll perhaps know from previous generations that the non-Ti versions are a few percent slower than the Ti versions, but similar cards. Maybe they’ve even heard about the 650 Ti because it’s the sub-$200 card everyone’s been waiting for. Maybe they’ve been told by a more tech-savvy friend to pick up a 650 Ti but don’t realise the significance of the "Ti" when it’s buried in the title of an [i]Asus GeForce GTX 650 Ti DirectCU II[/i]. Save forty bucks and pick up the "almost the same" GTX 650? Sure, it looks like a bargain!

        WRONG. The Nvidia naming scheme claims another punter.

          • phez
          • 7 years ago

          Is the 650 Ti faster than the 650?
          Yes.

          Thus, is the 650 Ti more expensive than the 650?
          Yes.

          What is the difficulty in understanding this? Yet you come up with some arbitrary situation where some person (as you clearly state) knowingly buys the slower card, and somehow it’s Nvidia’s fault that they screwed themselves over.

          I don’t even.

            • Chrispy_
            • 7 years ago

            You seem to be arguing a point that nobody is contesting; The models do indeed increase in price and performance in the order you’d expect.

            What people don’t agree with is the undeniable intent to mislead:
            the 680, 670, and 660 Ti are all very similar products, with the distinctions blurred even further by factory overclocks.

            The 650 and the 650 Ti are completely different products with vast differences in specifications, power envelope, compute, and gaming performance. In fact, the only thing they share is the architecture. Nvidia has no good reason for naming them like this, and the only reason we can see is "to mislead people."

            Whether you think the badly named product is the 660 Ti or the 650 Ti doesn’t matter. What they’ve done is massively compress most of their Kepler product range into two model numbers while spreading the GK104 out across a large number of model numbers. For what exact purpose, nobody is sure, but one thing is clear: the model numbers bear no relation to the actual product you are getting this generation.

    • tbone8ty
    • 7 years ago

    whoever bins these chips at nvidia should get a raise. they have been hard at work all year

      • derFunkenstein
      • 7 years ago

      I get the sarcasm. At least, that’s what I hope it is.

    • tbone8ty
    • 7 years ago

    the nvidia 650 Ti is so short it looks like a $50 card

      • My Johnson
      • 7 years ago

      I saw half-height.

      I think the most powerful half-height card so far is the Sapphire 7750.

    • flip-mode
    • 7 years ago

    It doesn’t feel right to read a TR article that doesn’t conclude with an overall value graphic.

    28nm chips are really delivering value at this point. The GTX 650 Ti offers Radeon HD 5870 performance for about $150, and 9-12 months from now this card will probably be spotted for as low as $100.

      • Tamale
      • 7 years ago

      Agreed. I really miss the value scatter plots when they’re not included! They really help put both the landscape and the specific product into perspective.

      • derFunkenstein
      • 7 years ago

      Here, I’ll do it for you: the value is shit (though to be fair the same is true of the 7770 and anything else that’s performing worse than GTX 660 right now).

        • xeridea
        • 7 years ago

        The 7770 can be had for $100 on sale and performs, on average, about as well as the 650. It will play Crysis 2 at 2048×1152 in DX11.

          • willmore
          • 7 years ago

          Agreed. I was just shopping for a new video card (ended up with an OC’ed HD7850 2GB) and I had trouble finding a 7770 card for over $119.

          • derFunkenstein
          • 7 years ago

          And it’s basically as fast as a 3-year-old Radeon 5770. How is a $100 video card worth buying unless you’re coming from no card at all?

            • rrr
            • 7 years ago

            Fail.

            7770 is quite a bit faster than 5770:

            [url]http://www.anandtech.com/bench/Product/538?vs=536[/url]

      • codedivine
      • 7 years ago

      I think you meant 28nm and not 22nm?

        • flip-mode
        • 7 years ago

        Indeed, I did, thanks.
