Nvidia’s GeForce GTX Titan reviewed

You may have noticed that, in 2013, things have gotten distinctly weird in the world of the traditional personal computer. The PC is besieged on all sides by sliding sales, gloom-and-doom prophecies, and people making ridiculous claims about how folks would rather play shooters with a gamepad than a mouse and keyboard. There’s no accounting for taste, I suppose, which perhaps explains that last one. Fortunately, there is accounting for other things, and the fact remains that PC gaming is a very big business. Yet the PC is in a weird place, with lots of uncertainty clouding its future, which probably helps explain why Nvidia has taken so long—nearly a year after the debut of the GeForce GTX 680—to wrap a graphics card around the biggest chip based on its Kepler architecture, the GK110.

The GK110 has been available for a while aboard Tesla cards aimed at the GPU computing market. This chip’s most prominent mission to date has been powering the Titan supercomputer at Oak Ridge National Labs. The Titan facility alone soaked up 18,688 GK110 chips in Tesla trim, which can’t have been cheap. Maybe that’s why Nvidia has decided to name the consumer graphics card in honor of the supercomputer. Behold, the GeForce GTX Titan:

Of course, it doesn’t hurt that “Titan” is conveniently outside of the usual numeric naming scheme for GeForce cards. Nvidia is free to replace any of the cards beneath this puppy in its product lineup without those pesky digits suggesting obsolescence. And let’s be clear, practically anything else in Nvidia’s lineup is certain to be below the Titan. This card is priced at $999.99, one deeply meaningful penny shy of a grand. As the biggest, baddest single-GPU solution on the block, the Titan commands a hefty premium. As with the GeForce GTX 690 before it, though, Nvidia has gone to some lengths to make the Titan look and feel worthy of its asking price. The result is something a little different from what we’ve seen in the past, and it makes us think perhaps this weird new era isn’t so bad—provided you can afford to play in it.

GK110: one big chip


The Titan’s GK110 laid bare. Source: Nvidia

I won’t spend too much time talking about the GK110 GPU, since we published an overview of the chip and architecture last year. Most of the basics of the chip are there, although Nvidia wasn’t talking too specifically about the graphics resources onboard at that time. Fortunately, virtually all of the guesses we made back then about the chip’s unit counts and such were correct. Here’s how the GK110’s basic specs match up to other recent GPUs:

GPU | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader ALUs | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fabrication process node
GF114 | 32 | 64/64 | 384 | 2 | 256 | 1950 | 360 | 40 nm
GF110 | 48 | 64/64 | 512 | 4 | 384 | 3000 | 520 | 40 nm
GK104 | 32 | 128/128 | 1536 | 4 | 256 | 3500 | 294 | 28 nm
GK110 | 48 | 240/240 | 2880 | 5 | 384 | 7100 | 551 | 28 nm
Cypress | 32 | 80/40 | 1600 | 1 | 256 | 2150 | 334 | 40 nm
Cayman | 32 | 96/48 | 1536 | 2 | 256 | 2640 | 389 | 40 nm
Pitcairn | 32 | 80/40 | 1280 | 2 | 256 | 2800 | 212 | 28 nm
Tahiti | 32 | 128/64 | 2048 | 2 | 384 | 4310 | 365 | 28 nm

Suffice to say that the GK110 is the largest, most capable graphics processor on the planet. At 551 mm², the chip’s die size eclipses anything else we’ve seen in the past couple of years. You’d have to reach back to the GT200 chip in the GeForce GTX 280 in order to find its equal in terms of sheer die area. Of course, as a 28-nm chip, the GK110 packs in many more transistors than any GPU that has come before.

Here’s a quick logical block diagram of the GK110, which I’ve strategically shrunk beyond the point of readability. Your optometrist will thank me later. Zoom in a little closer on one of those GPC clusters, and you’ll see the chiclets that represent real functional units a little bit more clearly.

In most respects, this chip is just a scaled up version of the GK104 GPU that powers the middle of the GeForce GTX 600 lineup. The differences include the fact that each “GPC,” or graphics processing cluster, includes three SMX engines rather than two. Also, there are five GPCs in total on the GK110, one more than on the GK104. Practically speaking, that means general shader processing power has been scaled up a little more aggressively than rasterization rates have been. We think that’s easily the right choice, since performance in today’s games tends to be bound by things other than triangle throughput.

Versus the GK104 silicon driving the GeForce GTX 680, the GK110 chip beneath the Titan’s cooler has considerably more power on a clock-for-clock basis: 50% more pixel-pushing power, anti-aliasing grunt, and memory bandwidth, along with about 75% more texture filtering capacity and shader processing power. The GK104 has proven to be incredibly potent in today’s games, but the GK110 brings more of just about everything that matters to the party.

The GK110 also brings something that has no real use for gaming: considerable support for double-precision floating-point math. Each SMX engine has 64 DP-capable ALUs, alongside 192 single-precision ALUs, so DP math happens at one-third the rate of SP. This feature is intended solely for the GPU computing market. Virtually nothing in real-time graphics or even consumer GPU computing really requires that kind of mathematical precision, so Nvidia’s choice to leave this functionality intact on the Titan is an interesting one. It may also explain, in part, the Titan’s formidable price, since Nvidia wouldn’t wish to undercut its Tesla cards bearing the same silicon. Nevertheless, the Titan may prove attractive to some would-be GPU computing developers who like to play a little Battlefield 3 on the weekends.

Double-precision support on the Titan is a bit funky. One must enable it via the Nvidia control panel, and once it’s turned on, the card operates at a somewhat lower clock frequency. Ours ran a graphics demo at about 15MHz below the Titan’s base clock speed after we enabled double precision.

Oh, before we go on, I should mention that the GK110 chips aboard Titan cards will have one of their 15 SMX units disabled. On a big chip like this, disabling an area in order to improve yields is a very familiar practice. Let’s put that into perspective using my favorite point of reference. The loss of the SMX adds up to about two Xbox 360s worth of processing power—192 ALUs and 16 texture units at nearly twice the clock speed of an Xbox. But don’t worry; the GK110 has 14 more SMX units on hand.
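
If you’re curious how those unit counts turn into the peak rates quoted later in this review, here’s a quick back-of-the-envelope sketch in Python. It uses only figures from the specs discussed above: 14 active SMX units with 192 ALUs and 16 texture units each, 48 ROPs, the 876MHz Boost clock, and 6 GT/s memory on a 384-bit bus.

```python
# Back-of-the-envelope peak rates for the Titan's GK110 with 14 of 15 SMX units active.
ACTIVE_SMX   = 14
ALUS_PER_SMX = 192        # single-precision ALUs per SMX
TEX_PER_SMX  = 16         # texture units per SMX
ROPS         = 48
BOOST_CLOCK  = 876e6      # 876MHz Boost clock, in Hz
MEM_RATE     = 6e9        # 6 GT/s GDDR5
BUS_WIDTH    = 384        # bits

alus      = ACTIVE_SMX * ALUS_PER_SMX                 # 2,688 shader ALUs
flops     = alus * 2 * BOOST_CLOCK                    # one FMA counts as two flops
texels    = ACTIVE_SMX * TEX_PER_SMX * BOOST_CLOCK
pixels    = ROPS * BOOST_CLOCK
bandwidth = MEM_RATE * BUS_WIDTH / 8                  # bits per transfer -> bytes per second

print(f"{alus} ALUs, {flops/1e12:.1f} TFLOPS, {texels/1e9:.0f} Gtexels/s, "
      f"{pixels/1e9:.0f} Gpixels/s, {bandwidth/1e9:.0f} GB/s")
# -> 2688 ALUs, 4.7 TFLOPS, 196 Gtexels/s, 42 Gpixels/s, 288 GB/s
```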

The card: GeForce GTX Titan

You can tell at first glance that the Titan shares its DNA with the GeForce GTX 690. The two cards have the same sort of metal cooling shroud, with a window showing the heatsink fins beneath, and they share a silver-and-black color scheme. Nvidia seems to know one grand is a big ask for a video card, and it has delivered a card with the look and feel of a premium product.

Card | GPU base clock (MHz) | GPU boost clock (MHz) | Shader ALUs | Textures filtered/clock | ROP pixels/clock | Memory transfer rate | Memory interface width (bits) | Peak power draw
GeForce GTX 680 | 1006 | 1058 | 1536 | 128 | 32 | 6 GT/s | 256 | 195W
GeForce GTX Titan | 837 | 876 | 2688 | 224 | 48 | 6 GT/s | 384 | 250W
GeForce GTX 690 | 915 | 1019 | 3072 | 256 | 64 | 6 GT/s | 2 x 256 | 300W

One of our big questions about the Titan has been where the GK110 would land in terms of clock speeds and power consumption. Big chips like this can be difficult to tame. As you can see in the table above, Nvidia has elected to go with relatively conservative clock frequencies, with an 837MHz base clock and an 876MHz “Boost” clock, courtesy of its GPU Boost dynamic voltage and frequency scaling technology. That’s a bit shy of the gigahertz-range speeds that its siblings have reached. However, the Titan may run as fast as ~970-990 MHz, in the right situation, thanks to its new iteration of GPU Boost. (More on this topic shortly.)

Those clock speeds contribute to the Titan’s relatively tame 250W max power number. That’s low enough to be a bit of a surprise for a card in this price range, but then the Kepler architecture has proven to be fairly power efficient. The card requires one eight-pin aux power connector and another six-pin one, not dual eight-pins like the GTX 690. The Titan’s power draw and connector payload fit well with its potential to be the building block of a multi-GPU setup involving two or three cards. Yes, the dual connectors are there to allow for three-way SLI—you know, for the hedge fund manager who loves him some Borderlands.

One other help to multi-GPU schemes, and to future-proofing in general, is the Titan’s inclusion of a massive 6GB of GDDR5 memory. Nvidia had the choice of 3GB or 6GB to mate with that 384-bit memory interface, and it went large. Perhaps the extra RAM could help in certain configs—say in a surround gaming setup involving a trio of four-megapixel displays. Otherwise, well, at least the extra memory can’t hurt.

The Titan measures 10.5″ from stem to stern, putting it smack-dab in between the GTX 680 and 690. That makes it nearly half an inch shorter than a Radeon HD 7970 reference card.


Revealed: the blower and the heat sink fins atop the vapor chamber. Source: Nvidia


A bare Titan card. Source: Nvidia

Nvidia tells us Titan cards should be widely available starting next Monday, February 25th, although some may start showing up at places like Newegg over the weekend. Nvidia controls the manufacture of Titan cards, and it sells those cards to its partners. As a result, we’re not likely to see custom coolers or circuit boards on offer. Only select Nvidia partners will have access to the Titan in certain regions. For the U.S., those partners are Asus and EVGA.

Although those restrictions would seem to indicate the Titan will be a limited-volume product, Nvidia tells us it expects a plentiful supply of cards in the market. The firm points out that the GK110 has been in production for a while, so we shouldn’t see the sort of supply issues that happened after the GTX 680’s introduction, when GK104 production was just ramping up. Of course, the Titan’s $999.99 price tag may have something to do with the supply-demand equation at the end of the day, so we’re probably not talking about massive sales volumes. Our hope is that folks who wish to buy a Titan won’t find them out of stock everywhere. I guess time will tell about that.

Meanwhile, the Titan will coexist uneasily with the GeForce GTX 690 for now—you know, talking trash about micro-stuttering versus raw performance, throwing elbows when Jen-Hsun isn’t looking, that sort of thing. The two cards are priced the same, sharing the title of “most expensive consumer graphics card” and catering to slightly different sets of preferences.

GPU Boost enters its second generation

I said earlier that the PC market has gone to a weird place recently. Truth is, very high-end graphics cards have been in a strange predicament for a while now, thanks to the tradeoffs required to achieve the very highest performance—and due to conflicting ideas about how a best-of-breed video card should behave. The last couple generations of dual-GPU cards from AMD have stretched the limits of the PCIe power envelope, giving birth to innovations like the dual-BIOS “AUSUM” switch on the Radeon HD 6990. Those Radeons have also been approximately as quiet as an Airbus, a fact that doesn’t fit terribly well with the growing emphasis on near-silent computing.

With the Titan, Nvidia decided not to pursue the absolute best possible performance at all costs, instead choosing to focus on two other goals: good acoustics and extensive tweakability. The tech that makes these things possible is revision 2.0 of Nvidia’s GPU Boost, which is exclusive to Titan cards. As with version 1.0 introduced with the GTX 680, GPU Boost 2.0 dynamically adjusts GPU speeds and voltages in response to workloads in order to get the best mix of performance and power efficiency. Boost behavior is controlled by a complicated algorithm with lots of inputs.

What makes 2.0 different is the fact that GPU temperature, rather than power draw, is now the primary metric against which Boost’s decisions about frequency and voltage scaling are made. Using temperature as the main reference has several advantages. Because fan speeds ramp up and down with temperatures, a Boost algorithm that regulates temperatures tends to produce a constant fan speed while the GPU is loaded. In fact, that’s what happens with Titan, and the lack of fan variance means you won’t notice the sound coming from it as much. Furthermore, Nvidia claims Boost 2.0 can wring 3-7% more headroom out of a chip than the first-gen Boost. One reason for the extra headroom is the fact that cooler chips tend to leak less, so they draw less power. Boost 2.0 can supply higher voltages to the GPU when temperatures are relatively low, allowing for faster clock speeds and better performance.
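
To make the temperature-target idea concrete, here’s a purely illustrative sketch of such a control loop. This is not Nvidia’s actual algorithm (Boost 2.0 also weighs power, voltage, and a pile of other inputs), and the clock ceiling below is just a placeholder, but it shows why steering toward a temperature target naturally settles into a steady clock and fan speed under load.

```python
# Toy temperature-targeted boost loop. Illustrative only, not Nvidia's algorithm.
TEMP_TARGET_C = 80.0    # the Titan's default GPU temperature target
BASE_CLOCK    = 837     # MHz
BOOST_CEILING = 993     # MHz, a placeholder ceiling for this example
BIN_MHZ       = 13      # Kepler adjusts clocks in roughly 13MHz steps

def next_clock(clock_mhz: int, gpu_temp_c: float) -> int:
    """Step the clock up while the GPU runs cool; back off once it hits the target."""
    if gpu_temp_c < TEMP_TARGET_C - 2.0:
        return min(clock_mhz + BIN_MHZ, BOOST_CEILING)
    if gpu_temp_c > TEMP_TARGET_C:
        return max(clock_mhz - BIN_MHZ, BASE_CLOCK)
    return clock_mhz    # sitting at the target: clock and fan speed hold steady
```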

Via Boost 2.0, Nvidia has tuned the Titan for noise levels lower than those produced by a GeForce GTX 680 reference card, at a peak GPU temperature of just 80° C. That’s somewhat unusual for a top-of-the-line solution, if you consider the heritage of cards like the GeForce GTX 480 and the aforementioned Radeon HD 6990.

I’m happy with this tuning choice, because I really prefer a quiet video card, even while gaming. I think a best-of-breed solution should be quiet. Some folks obviously won’t agree. They’ll want the fastest possible solution, noise or not.


EVGA’s Precision utility

Fortunately, Boost 2.0 incorporates a host of tuning options, which will be exposed via tweaking applications from board makers. For instance, EVGA’s Precision app, pictured above, offers control over a host of Boost 2.0 parameters, including temperature and power targets, fan speed curves, and voltage. Yep, I said voltage. With Boost 2.0, user control of GPU voltage has returned, complete with the ability to make your GPU break down early if you push it too hard.

As you can see in the shot above, our bone-stock Titan card is running at 966MHz. We had the “rthdribl” graphics demo going in the background, and our board was happy to exceed its 876MHz Boost clock for a good, long time. As you might imagine given the fairly conservative default tuning, there’s apparently quite a bit of headroom in these cards. You can take advantage of it, if you’re willing to tolerate a little more noise, higher GPU temperatures, or both. Nvidia tells us it’s found that most Titans will run at about 1.2GHz without drama.

Further tweaking possibilities include control over the green LED lights that illuminate the “GEFORCE GTX” lettering across the top of the Titan card. EVGA has a utility for that. And Nvidia says it will expose another possibility—somewhat oddly, under the umbrella of GPU Boost 2.0—via an API for use by third-party utilities: display overclocking. Folks have already been toying with this kind of thing, pushing their LCD monitors to higher refresh rates than their official specs allow. Nvidia will be enabling its partners to include display overclocking options in tools like Precision. We haven’t seen an example of that feature in action yet, but we stand ready and willing to sacrifice our 27″ Korean monitor for the sake of science.

Falcon Northwest’s Tiki

Our GeForce GTX Titan review card came to us wrapped inside of the system you see pictured below.

This is the Tiki, from boutique PC builder Falcon Northwest. The Tiki is one of a new breed of compact gaming systems that seems to be taking the custom PC builders by storm. Nvidia asked Falcon to send us this PC in order to demonstrate how a GTX Titan card might be used. Apparently, boxes from custom shops like Falcon have accounted for about half of GeForce GTX 690 sales, and Nvidia expects that trend to continue with the Titan.

Also, as I’ve said, things are getting a little weird in the PC market, and many folks seem to be expecting compact systems like this one to become the new norm. If that happens, well, cards like the Titan could still have a nice future ahead of them after the iPad-wielding reaper comes for our full-towers. I’ve included the gamepad in the shots above to offer some sense of scale, but I’m not sure the pictures do the Tiki justice. The enclosure, which is Falcon’s own design, is just 4″ wide and roughly 15″ deep and 15″ tall. The base is granite, if you can’t tell from the images, and the whole system feels like it’s chiseled from one block of stone. It’s heavy and dense, but it must pack more computing power per cubic inch than anything else I could name.

Pop open the side, and you’ll begin to get a sense of the power lurking within, which pretty much amounts to most of the best stuff you can cram into a PC these days. The motherboard is the Asus P8Z77-I Deluxe Mini-ITX board we recently reviewed, and it houses the top-end Ivy Bridge processor, the Core i7-3770K. Storage includes a Crucial m4 SSD, WD Green 2TB hard drive, and an optical drive. The CPU is cooled by an Asetek water cooler, and the compact PSU comes from Silverstone.

The Tiki is impressively quiet when idle, about as good as any well-built full-sized system of recent vintage, and its acoustic footprint doesn’t grow by much when gaming. With Titan installed, it’s a stupid-fast gaming PC that emits only a low hiss.

Working inside the Tiki isn’t easy, though. You can kind of see in the picture that the video card plugs into a spacer plugged into a PCIe header that comes up out of the motherboard and does a right turn. One must remove the plate holding the system’s storage in order to extract the video card.

I guess that’s my builder mentality talking. When we talked to Falcon Northwest’s Kelt Reeves, who designed the Tiki, about this issue, he likened working with the Tiki to building a laptop. That’s no big deal to Falcon, since, as Reeves put it, “We’re the people you call when you want someone else to scrape their knuckles and handle the tech support.”

We considered using the Tiki system as the basis for all of our testing, but unfortunately, that wouldn’t fly. You see, while the GTX Titan card will fit into this box, many other cards will not, including the GeForce GTX 690 and any sort of multi-card config. Instead, the Tiki became our go-to system for quick sessions of Borderlands 2 when taking a break from work.

The Titan’s competition

Our time with the Titan and proper drivers hasn’t been long, so we’ve tried to focus our testing on a narrow group of competing solutions. From Nvidia, the Titan’s closest siblings are the GeForce GTX 680 and 690, which we’ve talked about already. The Titan’s rivals from the AMD camp include the Radeon HD 7970 GHz Edition and, well, two of the same. Turns out two of AMD’s fastest cards in a CrossFire team will set you back less than a single Titan card.

Here’s a look at our 7970 GHz Edition CrossFire team mounted in our testbed system. These are reference cards that came directly from AMD. Heck, for the first time in ages, I believe every card we’re testing is a stock-clocked reference model. Purists with OCD, rejoice!

I should mention that there are other options with similar performance. For instance, although AMD never did launch its promised dual-GPU Radeon HD 7990, several board makers have offered dual-7970 cards for sale. Unfortunately, we don’t have one on hand, so we decided to focus on two discrete cards in CrossFire for testing. We’d expect the performance of the dual-GPU cards to be very similar to our dual-card team. Similarly, we had the option of testing dual GeForce GTX 680s in SLI, but our past testing has proven that they perform almost identically to the GTX 690. Just keep in mind those other options exist when it comes time to sum everything up.

Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (Gtexels/s) | Peak bilinear fp16 filtering (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s)
GeForce GTX 680 | 34 | 135 | 135 | 3.3 | 4.2 | 192
GeForce GTX Titan | 42 | 196 | 196 | 4.7 | 4.4 | 288
GeForce GTX 690 | 65 | 261 | 261 | 6.5 | 8.2 | 385
Radeon HD 7970 GHz | 34 | 134 | 67 | 4.3 | 2.1 | 288
Radeon HD 7970 GHz CrossFire | 67 | 269 | 134 | 8.6 | 4.2 | 576

Here’s a brief overview of how the contenders compare in key theoretical peak graphics rates. One interesting feature of the numbers is the fact that the Radeon HD 7970 GHz straddles the space occupied by the GTX 680 and the Titan. The 7970 matches the Titan for memory bandwidth and nearly does for peak shader throughput, but it’s substantially slower in terms of ROP rates and texture filtering capacity.

Meanwhile, the multi-GPU solutions look very nice in these compilations of theoretical peak performance, but these numbers assume perfect scaling from multiple GPUs. As you’ll see soon, that’s a very rosy assumption to make, for lots of reasons. Yes, those reasons include problems with multi-GPU micro-stuttering, which our latency-focused game benchmarks are at least somewhat capable of detecting. We have expressed some reservations about the limits of the tool we use to capture frame rendering times, especially for multi-GPU solutions, but absent a better option, we still think Fraps is vastly preferable to a simple FPS average. We’ll have more to say on this front in the coming weeks, but for today, our usual tool set will have to suffice.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor | Core i7-3820
Motherboard | Gigabyte X79-UD3
Chipset | Intel X79 Express
Memory size | 16GB (4 DIMMs)
Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings | 9-9-11-24 1T
Chipset drivers | INF update 9.3.0.1021, Rapid Storage Technology Enterprise 3.5.0.1101
Audio | Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers
Hard drive | Corsair F240 240GB SATA
Power supply | Corsair AX850
OS | Windows 8

Card | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB)
GeForce GTX 680 | GeForce 313.96 beta | 1006 | 1059 | 1502 | 2048
GeForce GTX 690 | GeForce 313.96 beta | 915 | 1020 | 1502 | 2 x 2048
GeForce GTX Titan | GeForce 314.09 beta | 837 | 876 | 1502 | 6144
Radeon HD 7970 GHz | Catalyst 13.2 beta 5 | 1000 | 1050 | 1500 | 3072
Dual Radeon HD 7970 GHz | Catalyst 13.2 beta 5 | 1000 | 1050 | 1500 | 2 x 3072

Thanks to Intel, Corsair, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing either a 60- or 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at 2560×1600 with the Ultra quality presets.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Texture filtering

We’ll begin with a series of synthetic tests aimed at exposing the true, delivered throughput of the GPUs. In each instance, we’ve included a table with the relevant theoretical rates for each solution, for reference.

Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (Gtexels/s) | Peak bilinear FP16 filtering (Gtexels/s) | Memory bandwidth (GB/s)
GeForce GTX 680 | 34 | 135 | 135 | 192
GeForce GTX Titan | 42 | 196 | 196 | 288
GeForce GTX 690 | 65 | 261 | 261 | 385
Radeon HD 7970 GHz | 34 | 134 | 67 | 288
Radeon HD 7970 GHz CrossFire | 67 | 269 | 134 | 576

In the past, performance in this color fill rate test has been almost entirely determined by memory bandwidth limitations. Today, I’m not so sure. None of the solutions achieve anything like their peak theoretical rates, but I think that has to do with how many color layers this test writes. The Titan’s certainly not 50% faster than the GTX 680, despite having literally 50% higher memory bandwidth.

The multi-GPU solutions, meanwhile, scale up nicely in these synthetic tests.

Among the single-GPU solutions, the Titan is very much in a class by itself here.

Tessellation and geometry throughput

Card | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s)
GeForce GTX 680 | 4.2 | 192
GeForce GTX Titan | 4.4 | 288
GeForce GTX 690 | 8.2 | 385
Radeon HD 7970 GHz | 2.1 | 288
Radeon HD 7970 GHz CrossFire | 4.2 | 576

The last couple of generations of Nvidia GPU architectures have been able to sustain much higher rates of polygon rasterization and geometry throughput than the competing Radeons. The big Kepler continues that tradition, easily leading the GeForce GTX 680, even though its peak theoretical rasterization rate isn’t much higher. The GK110’s larger L2 cache and higher memory bandwidth most likely deserve the credit for that outcome.

Shader performance

Card | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s)
GeForce GTX 680 | 3.3 | 192
GeForce GTX Titan | 4.7 | 288
GeForce GTX 690 | 6.5 | 385
Radeon HD 7970 GHz | 4.3 | 288
Radeon HD 7970 GHz CrossFire | 8.6 | 576

These tests of shader performance are all over the map, because the different workloads have different requirements. In three of them—ShaderToyMark, POM, and Perlin noise—the Titan and the Radeon HD 7970 GHz are closely matched, with the 7970 taking the lead outright in ShaderToyMark. That’s a nice reminder that the 7970 shares the same peak memory bandwidth and is within shouting distance in terms of peak shader throughput.

I had hoped to test OpenCL performance using at least one familiar test, LuxMark, but it crashed when I tried it on the Titan due to an apparent driver error. Anyhow, I think the GPU computing performance of these chips deserves a separate article. We’ll try to make that happen soon. For now, we’re going to focus on the Titan’s primary mission: gaming.

Guild Wars 2
Guild Wars 2 has a snazzy new game engine that will stress even the latest graphics cards, and I think we can get reasonably reliable results if we’re careful, even if it is an MMO. My test run consisted of a simple stroll through the countryside, which is reasonably repeatable. I didn’t join any parties, fight any bandits, or try anything elaborate like that, as you can see in the video below. Also, all of my testing was conducted in daytime, since this game’s day/night cycle seems to have an effect on performance.

Yes, we are testing with a single large monitor, and we are “only” testing a single-card Titan config today. Inexplicably, I ran out of Red Bull and wasn’t extreme enough to test triple-Titan SLI across three displays. I have several kegs of energy drink on order, though, and am hoping to test Titan SLI in the near future. In the meantime, it turns out you can stress these high-end GPU configs with just a single monitor using the latest games at their highest image quality settings. Even an MMO like this one.


Frame time (ms) | FPS rate
8.3 | 120
16.7 | 60
20 | 50
25 | 40
33.3 | 30
50 | 20

We’ll start with plots of the frame rendering times from one of our five test runs. These plots visually represent the time needed to render each frame of animation during the gameplay session. Because these are rendering times, lower numbers on the plot are better—and higher numbers can be very bad indeed.

You can see some really intriguing things in these plots. Generally, the GTX Titan performs quite well, rendering the vast majority of frames in easily less than 20 ms. The conversion table above will tell you that works out to over 50 FPS, on average. There are occasional spikes to around 40 ms or so, but that’s not so bad—except for the cluster of high frame times near the end of the test run on each GeForce. Subjectively, there is a very noticeable slowdown in this same spot each time, where we’re rounding a corner and taking in a new view. The Radeon HD 7970 single- and dual-card configs produce fewer frames at generally higher latencies than the Titan, but they don’t have the same problem when rounding that corner. (You’ll recall that AMD recently updated its drivers to reduce frame latencies in GW2.) You don’t feel the slowdown at all on the Radeons.

Interestingly, both the traditional FPS average and our proposed replacement for it, the 99th percentile frame time, tend to agree that the Titan and GTX 690 perform about the same here and that both are generally faster than the 7970 GHz configs. That one iffy spot on the GeForces is very much an outlier.

The 99th percentile numbers also say something good about how all of these solutions perform. They all render 99% of the animation frames in under 33 milliseconds, so they spend the vast majority of the time during the test run cranking out frames at a rate better than 30 FPS.

We can show the frame latency picture as a curve, to get a broader sense of things. The Titan tracks very closely with the GeForce GTX 690 throughout. Generally, these two cards are the best performers here, offering the smoothest animation. The shape of the curve offers no evidence that multi-GPU microstuttering is an issue on either the dual 7970s or the GTX 690.


However, we know about that one trouble spot at the end of the test run on the GeForces, and it shows up in our measure of “badness,” which considers the amount of time spent working on truly long latency frames. (For instance, if our threshold is 50 ms, then a frame that takes 80 ms to render contributes 30 ms to the total time beyond the threshold.) As you can see, that slowdown at the end pushes the three GeForce configs over 50 milliseconds for a little while.
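
For the curious, here’s roughly how a list of Fraps frame times becomes a 99th-percentile figure and a “time spent beyond X” total. Consider it a simplified illustration of the idea rather than our exact analysis script, and note that the frame times in it are made up.

```python
# Simplified sketch of our frame-time metrics (not the exact analysis script).
def percentile_99(frame_times_ms):
    """Return the frame time that 99% of frames come in under (nearest-rank style)."""
    s = sorted(frame_times_ms)
    return s[int(0.99 * (len(s) - 1))]

def time_beyond(frame_times_ms, threshold_ms=50.0):
    """Total milliseconds spent working on frames past the threshold.
    An 80 ms frame against a 50 ms threshold contributes 30 ms."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16.2, 17.0, 15.8, 80.0, 18.1, 52.5]       # made-up frame times in milliseconds
print(percentile_99(frames), time_beyond(frames))   # -> 52.5 32.5
```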

As I’ve said, you’ll notice the problem while playing. Trouble is, you’ll also notice that animation is generally smoother on the GeForce cards otherwise, as our other metrics indicate. Lower the “badness” threshold to 16.7 ms—equivalent to 60 FPS—and the Titan and GTX 690 spend the least time above that mark.

So, I’m not sure how one would pick between the performance of the various cards in this one case. It’s pretty unusual. I’m happy to be able to show you more precisely how they perform, though. Totally geeking out over that. Let’s move on to something hopefully more definitive.

Borderlands 2

Next up is one of my favorites, Borderlands 2. The shoot-n-loot formula of this FPS-RPG mash-up is ridiculously addictive, and the second installment in the series has some of the best writing and voice acting around. Below is a look at our 90-second path through the “Opportunity” level.

As you’ll note, this session involves lots of fighting, so it’s not exactly repeatable from one test run to the next. However, we took the same path and fought the same basic contingent of foes each time through. The results were pretty consistent from one run to the next, and the final numbers we’ve reported are the medians from five test runs.



Oh, man. The truth is that all of the configs ace this test. With 99th percentile frame times at 20 milliseconds or less, all of these cards are almost constantly spitting out frames at a rate of 50 FPS or better. And none of them spend any substantial time on frames that take longer than 50 ms to render. We’re talking smooth animation all around, which matches my subjective impressions.

Sleeping Dogs

Our Sleeping Dogs test scenario consisted of me driving around the game’s amazingly detailed replica of Hong Kong at night, exhibiting my terrifying thumbstick driving skills.



Oooh. Look at the 7970 CrossFire plot, which looks like a cloud rather than a line. That’s potential evidence of the dreaded multi-GPU micro-stuttering. Here’s how a small chunk of it looks up close:


Multi-GPU solutions tend to split up the workload by handing one frame to the first GPU and the next to the second GPU in alternating fashion. When the two GPUs go out of sync, bad things happen. Now, I should note that the 7970 CrossFire solution still seems to perform well here—even the longer frame times in the pattern are fairly low.
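
To see why out-of-sync GPUs produce that cloud-like plot, consider a toy model with entirely made-up numbers: two GPUs that each need 30 ms per frame, where the second one kicks off only 5 ms after the first instead of the ideal 15 ms. Frames then arrive in alternating short and long gaps even though the FPS average looks perfectly healthy.

```python
# Toy model of alternate-frame rendering with two out-of-phase GPUs (illustrative only).
RENDER_MS = 30.0   # each GPU needs 30 ms per frame
OFFSET_MS = 5.0    # GPU 1 starts 5 ms after GPU 0 instead of the ideal 15 ms

completions = []
for frame in range(8):
    gpu = frame % 2                                     # even frames on GPU 0, odd on GPU 1
    start = (frame // 2) * RENDER_MS + gpu * OFFSET_MS
    completions.append(start + RENDER_MS)

gaps = [b - a for a, b in zip(completions, completions[1:])]
print(gaps)   # alternating 5 ms and 25 ms gaps, yet the average interval is a healthy 15 ms
```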

However, the FPS average doesn’t reflect the true performance of the CrossFire solution, which isn’t quite that far ahead of the pack, as the latency-focused 99th percentile metric tells us. The GeForces don’t do as well in the 99th percentile frame time metric as their FPS averages would suggest, either.

All of the GeForces struggle to some degree with the toughest 4-5% of the frames rendered, and we know from the plots that those frames represent spikes scattered throughout our test sequence.


Obviously the GeForce GTX 680 is the big loser here, no matter how you slice it. Its performance is just brutal. One of the surprise winners, in my book, is the GTX 690. When playing the game, we found that the addition of a second GPU just like the GTX 680’s translated into vastly improved smoothness. The difference between the 680 and 690 in our “badness” measurement at 50 ms tells the story there. Multi-GPU solutions don’t always improve fluidity that effectively.

The Radeons are even bigger winners, though, by being straightforwardly quicker overall.

Assassin’s Creed III

This game appears to be a thought experiment centered around what would happen if the Quaker Oats guy had invented parkour in 18th-century Boston.

Since the AC3 menu doesn’t lend itself to screenshots, I’ll just tell you that we tested at 2560×1600 with environment, anti-aliasing, and shadow quality set to “very high” and texture quality set to “high.” I understand that the “very high” AA setting combines 4X multisampling with FXAA HQ post-process smoothing. This game also supports Nvidia’s TXAA, but Nvidia has gated off access to that mode from owners of Radeons and pre-Kepler GeForces, so we couldn’t use it for comparative testing.



Well, the strange news here is how Radeon HD 7970 performance drops when you add a second video card. That ain’t right. We’ve asked AMD what the problem is, but haven’t yet gotten an answer. A little googling around suggests we’re not alone in seeing this problem, though. This sort of thing happens sometimes with multi-GPU solutions, especially with newer games, but AC3 has been out long enough that this situation seems odd.

In other news, the Titan almost exactly mirrors the performance of the GTX 690 here. I sense a developing trend.

Hitman: Absolution

We’ve finally found a good use for DX11 tessellation: bald guys’ noggins.


We noticed latency problems with this game last year when we tested it on the Radeon HD 7950, and they’ve carried over here to the 7970 GHz Edition at higher resolutions. Adding a second 7970 card doesn’t banish the spiky frame time pattern, but the CrossFire rig renders twice as many frames at roughly half the effective latency, which gets the job done.


Our “badness” check provides a useful reminder that the stakes are pretty low here. None of the cards waste much time on truly high-latency frames. That comports with the subjective experience of playing on the single 7970 card. The animation is a little jumpy and unsettled, just like our Fraps data suggests, but it’s not terrible.

Far Cry 3



Here’s a nice example of the trouble with FPS averages. Have a look at the average FPS for the Radeon HD 7970 CrossFire rig and then at the frame time plot for that setup. Yikes, right? Although the CrossFire rig slings out loads of frames, the animation it produces here is literally horrible and strangely jumpy. I seriously wondered if frames weren’t being delivered out of order. Even the 99th percentile frame time, which places the dual 7970 rig roughly on par with a single card, doesn’t capture the extent of it.

There’s some funkiness going on with all of the cards, though. Notice the big spikes to over 100 ms in the GeForce GTX 680 and 690 plots. In almost every run, we hit a place where the game would just choke and then recover.

When I asked AMD about the CrossFire problems, they told me they were investigating multiple angles. When I asked Nvidia about this game, they said they believe there may be some problems with the application itself. So… yeah. In my experience, you can get Far Cry 3 to run fairly smoothly on these video cards, but you’ll have to back way down on the image quality settings.

The only non-loser here is the GeForce Titan, which didn’t encounter the big pauses we saw with the two GK104-based cards, although that could be a benefit of the Titan’s slightly newer driver revision. I think this one is still a work in progress all around.

The Elder Scrolls V: Skyrim

No, Skyrim isn’t the newest game at this point, but it’s still one of the better looking PC games and remains very popular. It’s also been a particular point of focus in driver optimizations, so we figured it would be a good fit to include here.

We did, however, decide to mix things up by moving to a new test area. Instead of running around in a town, we took to the open field, taking a walk across the countryside. This change of venue provides a more taxing workload than our older tests in Whiterun.

I should mention that Skyrim has some problems with extremely high frame rates, like most of these systems produced in this test. You wouldn’t want to play the way we’ve tested, with the frame rate cap removed, because the game’s AI kind of goes haywire, causing animals and NPCs to move about erratically. We may consider testing with the game capped in the future. We can’t show differences in FPS averages that way, but we still could measure any frame latency spikes.



All of these graphics solutions are extremely fast in this test. The Radeon HD 7970 CrossFire config, which is the fastest in terms of FPS averages, does encounter a few quick latency spikes along the way. Those knock it back to third in the 99th-percentile standings. Still, we’re gonna need even higher resolutions or some mods for Skyrim to really stress any of these cards.

Power consumption

AMD’s nifty ZeroCore Power feature confers an advantage to the Radeons when the display drops into power-save mode. Their fans stop spinning and their GPUs drop into a low-power state. Even when idle at the desktop with the display turned on, the second card in the 7970 CrossFire team stays in ZeroCore mode, keeping idle system power lower than the GTX 690’s. Meanwhile, the Titan system draws less power when idling at the desktop than any other setup, which is quite the feat for such a large chip.

We tested power under load while running Skyrim, and we can do a rough estimate of power efficiency by correlating power draw and performance in this game, as our scatter plot does. The Titan-based test rig comes out looking pretty good overall.

Noise levels and GPU temperatures

At its default tuning, the Titan is true to its billing as quieter than a GeForce GTX 680. That’s a very nice accomplishment, since we know it’s drawing more power than the GTX 680 or the Radeon HD 7970, yet it dissipates that heat with less noise.

Speaking of the 7970, AMD’s reference cards are much louder than is necessary. We’ve seen better acoustics from 7970 cards with similar performance and non-reference coolers, like the XFX Black Edition we tested here.

Going scatter-brained

This page is a continuation of a little experiment with scatter plots that we started with our GTX 680 review. We’re looking at the correlations between various architectural features and delivered performance across our test suite. This time around, we’ve switched to our 99th-percentile frame time as the proper summary of overall performance, but we’ve converted it into FPS terms so that a higher score is better, to make the plots intuitively readable.

Yeah, the multi-GPU configs on the die size plot are a little iffy, since we’re just doubling the die area for dual-chip solutions. Anyhow, you can see that the GeForces tend to outdo the Radeon HD 7970 in terms of performance per die area.

All of the theoretical peak rates we’ve included here look to be somewhat correlated to overall delivered performance. The weakest correlations look to be triangle rasterization rates and, oddly enough, memory bandwidth.

The correlations grow a little stronger when we match the results of directed tests against delivered game performance. I’d like to add more GPUs to these plots in the near future, though, before drawing any big conclusions about GPU architecture features and gaming performance.

Conclusions

As with many reviews of products this complex, we come to the end of a pretty extensive exercise with a long list of things we wish we’d had time to test, including overclocking and tweaking via GPU Boost 2.0, a broader selection of older cards, a nice range of GPU computing applications, and even more games, including the just-released Crysis 3. Oh, yes, and Titan multi-GPU configs. We’ll try to get to as many of those things as possible in the coming days.

For now, we can summarize our current results in one of our famous price-performance scatter plots. As usual, our prices come from cards selling at Newegg, and the performance results are a geometric mean from the seven games we tested. We’ve converted our 99th-percentile frame times into FPS so that both plots read the same: those solutions closest to the top-left corner of the plot area offer the best combo of price and performance.
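
For reference, the scoring works roughly like this: convert each game’s 99th-percentile frame time into FPS terms, then take the geometric mean across the seven games. The frame times in this sketch are placeholders, not our measured results.

```python
# Sketch of the scatter-plot performance score (placeholder values, not our results).
from math import prod

t99_ms = [14.2, 18.9, 21.4, 16.0, 25.3, 19.7, 12.8]    # one made-up entry per game
fps_equiv = [1000.0 / t for t in t99_ms]                # frame time (ms) -> FPS terms
score = prod(fps_equiv) ** (1.0 / len(fps_equiv))       # geometric mean across games
print(f"99th-percentile performance score: {score:.1f} FPS")
```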


If you were expecting the Titan to be a slam-dunk value winner, well, no card with a $1K price tag is gonna nestle up into the top-left position in one of these plots. And, if you look at raw FPS per dollar, the Radeon HD 7970 GHz CrossFire config looks to be the clear winner. However, we believe our latency-focused 99th-percentile frame time metric is the best measure of overall gaming performance, and the Titan acquits itself fairly well on that front, coming in juuuust below the GeForce GTX 690 at the same price.

You already know that the Titan is physically shorter, requires less power, and generates less heat and noise than the GTX 690. And it can serve as a basic building block in a two- or three-way SLI config, promising higher peak performance than even dual GTX 690s could achieve, especially with that 6GB of GDDR5 onboard. The Titan definitely has a place in the market, high atop Nvidia’s product stack, and we’re happy to see little extras like double-precision math support, a gorgeous industrial design, and excellent acoustic and power tuning as part of the mix. If you’re laying down a grand for a graphics card, you’re going to want the best of everything, and the Titan doesn’t skimp.

With that said, I can’t help but think most PC gamers would give up the double-precision support and accept a plastic cooling shroud and “only” 3GB of onboard memory in exchange for a price that’s several hundred bucks lower. That was essentially the deal when the GeForce GTX 580 debuted for 500 bucks—or, if you want to point to an even larger chip than the GK110, when the GTX 280 started life at $650. A premium product like the Titan is no bad thing, but we’re kind of hoping Nvidia follows up with something slightly slower and a little more affordable, as well. At present, your best value-for-dollar proposition in this space from Nvidia likely comes from dual GTX 680s in SLI, which should perform very much like a GTX 690 at a lower overall price.

As for the competition, our dual Radeon HD 7970 GHz CrossFire team put up a very good fight, taking the top spot in multiple games. Looks to me like AMD has the GPU hardware needed to give Nvidia a truly formidable challenge—more so than it currently does. We saw CrossFire scaling issues in a couple of games, but those surely can be resolved with driver updates. And we’re still waiting on the rewritten memory manager for GCN-based chips that promises lower frame latencies in DX10/11 games. If AMD can deliver on the driver updates, a pair of 7970s could offer a compelling alternative to a single Titan, for less money—and with a very nice game bundle. Just know that waiting for driver updates to fix problems has become a time-honored tradition for owners of CrossFire rigs. And you’ll want to search out a board with a CrossFire-friendly custom cooler. The reference cards we tested are too darned loud.

Maybe we should get some quieter examples in the labs for some Crysis 3 testing, alongside two or three Titans, eh? We’ll get right on that.

People sometimes like to yell at me on Twitter.

Comments closed
    • kamikaziechameleon
    • 6 years ago

    farcry 3 is a flipping mess of an engine.

    • John Doe
    • 7 years ago

    This thing is so ludicrously overpriced it’s absurd. It’s completely obvious that nVidia are RIPPING people off on this card as neither it’s actual build quality nor the pointless magnesium shroud on it nor the fact that it’s nothing more than a 680 with 16 SMX cores rather than 15 makes up for an insane grand of price tag.

    I’d much rather get a Galaxy 680 White over this with my eyes closed. Wait, what? One? I can get TWO for the price of this… LOL. And I did, in fact. Costs $520 over TigerDirect. 🙂

    If that’s too expensive, then get a reference 7970 for $400 and OC it to Mars. It’ll be almost as fast, and you’ll have 600 bucks left in the pocket…

    • beck2448
    • 7 years ago

    “our dual Radeon HD 7970 GHz CrossFire team put up a very good fight,””

    I take issue with this comment in view of recent developments. Tech Report themselves have written yesterday that due to the inclusion of a multitude of RUNT frames in Crossfire(frames that are just a sliver of pixels, not a viewable full frame) the fps numbers posted by ALL Crossfire solutions maybe inflated, in some cases as much as 100%. Thus they are no faster than a single card. Until this matter is resolved, when it comes to Crossfire, buyer beware.

    • tbone8ty
    • 7 years ago

    No crysis 3???

    Im expecting a full fledge cpu and gpu article devoted to crysis 3!

    It would be a perfect game to revisit the 660ti/7950 latency quest

    • setbit
    • 7 years ago

    7.1 billion transistors. So, each and every GK110 has more transistors than there are human beings on the face of the earth.

    I’m not quite sure how I feel about that.

    • jonjonjon
    • 7 years ago

    screw energy drinks and red bull go to the dr and get some adderall. you will pump out 2x more stories a week guaranteed!

      • auxy
      • 7 years ago

      Or, for a more legitimate and safer-for-your-body solution, try Adrafinil. It’s a central nervous system stimulant, so it gives you clear-headedness and focused attention without the hyperactivity, jittery nerves, and muscle spasms of Adderall or other amphetamine-derivatives. It’s also unscheduled in the US.

    • Airmantharp
    • 7 years ago

    You know, I wonder if the impending release of this card is why Nvidia didn’t push out the GTX690 with 8GB of VRAM. They couldn’t expect to sell many of either, but a GTX690 with 4GB per GPU would likely erase the demand for this card in Triple-SLI.

      • Krogoth
      • 7 years ago

      256-bit memory bus on GK104 doesn’t permit it with the GDDR5 chips that currently exist on the market.

        • Airmantharp
        • 7 years ago

        I’m not sure where you’re coming from- we have GTX670’s and GTX680’s with 4GB, though I understand that a different PCB might be needed. Still, the 2GB/GPU limit on the GTX690 was a bit silly, and limited it’s uses and shelf-life.

        • Waco
        • 7 years ago

        The bus allows it just fine – you just need higher-density chips to do it (or more chips total).

          • Krogoth
          • 7 years ago

          There are no GDDR5 chips with the sufficient density and at given the clockspeed that can make any of G104-based cards into 5GB/6GB.

          Throwing more chips is much easier said then done, especially the speeds that the higher-end GPU platforms call for.

            • Airmantharp
            • 7 years ago

            Posting under the influence? 🙂

            If they can make a 4GB GTX670, they can make an 8GB GTX690.

            • Krogoth
            • 7 years ago

            There’s not enough room on 690’s PCB for the current crop of GDDR5 chips. 😉

            • jihadjoe
            • 7 years ago

            Hynix has several 4Gbit GDDR5 chips in their lineup that fits the requirement. [s<]Samsung shouldn't be too far behind.[/s<] Update: Samsung has some too. Both have parts up to 7Gbps, easily meeting the 6Gbps required speed. Big thanks to Sony, I guess. Hynix's Parts: [url<]http://www.skhynix.com/inc/pdfDownload.jsp?path=/datasheet/Databook/Databook_1Q%272013_GraphicsMemory.pdf[/url<] Samsung's 4Gbit part is available in 7Gbps, 6Gbps and 5Gbps speeds. [url<]http://www.samsung.com/global/business/semiconductor/product/graphic-dram/detail?productId=7824&iaId=759[/url<]

            • Krogoth
            • 7 years ago

            Those chips had just enter production and they will be seen in GPU solution until the next-generation.

            • jihadjoe
            • 7 years ago

            The Samsung chips have been in mass production since Nov 2012, according to their data page.

            I’ll concede they weren’t available at the time of the GTX690 launch, but you did say, and I quote: “current crop of GDDR5 chips.” =P

    • kamikaziechameleon
    • 7 years ago

    I use a 7950 to drive my 2560×1600 monitor. FPS isn’t what I want, better more flexible and stable multi monitor support is.

      • swaaye
      • 7 years ago

      I used to run 1920×1200 on a Radeon 9700. It seems ridiculous that any modern card should have problems with driving any desktop resolution. And I mean multi-monitor resolutions too.

    • HisDivineOrder
    • 7 years ago

    I like this card. It’s like if you took the sheer brute force of the Radeon 79xx series, married it to the advantages of the 6xx series, gave it nVidia drivers and PhysX, and slathered aluminum all over it.

    This is the first card where I looked at it and thought, “Now that’s truly a $700-800 card.” Lest we forget, nVidia tried to raise prices with the Geforce 280 to $600-800. That was after charging an astronomical $700-900 for the Geforce 8800 Ultra. I guess $1k doesn’t really shock me all that much. The only reason those two products didn’t work as well at their (relative to the time period) pricing was that AMD either showed up with a competitor for less or the performance difference was insanely insubstantial.

    Ha! I’m amused that the one time I see a GPU I actually think is worth $700-800, nVidia wants $1k for it.

    But it’s a halo card. Perhaps they don’t want to sell any. Or perhaps they merely want to set the stage for a line of cards at a higher price bracket, so they need the card price point by which all other lower variants can be judged more acceptable.

    After all, what are they going to do with all those Titans that don’t hit the 14 (of 15) SMX’s? It’s a huge part. There have to be a lot of defective ones with 13 instead of 14. Or 12 instead of 14. The card would have significant performance potential even at a few SMX’s missing especially given the reputation they’ve built with the Titan branding…

    HardOCP said that nVidia thought AMD would have competition and made this card in response to that. If that’s true, then you have to think they also had set up product to replace the 600 series. Will they save that now for when AMD shows up to the fight again or will they release it now to give them solidly the advantage across every product line? Right now, the Radeon 7970 GE is the fastest sub-$500 GPU by sometimes an impressive margin.

    So if they’re just releasing what they have because they’ve finished it, thinking AMD would compete when they didn’t, then one imagines nVidia intending to make variants of the 600 series that includes the new temperature-included GPU Boost 2.0 if not a few other improvements.

    I imagine they could also make 4 GB baseline for the 670/680 and 3GB baseline for the 660 Ti. Just those two improvements (GPU Boost 2, more memory) combined with fabrication improvements that offer marginal increases along with a new product launch spoonfed over the next few months would make Geforce relatively high profile compared to the Radeon line that apparently won’t even be given a minor refresh.

    And it would only take very minor improvements to net the performance crown across their entire line. Especially if AMD is relying solely on drivers to keep them in the fight for an entire year. PC enthusiasts who buy GPU’s are a fickle lot who like to buy things often because they’re new.

    Waiting an entire year through nVidia GPU launch after launch, even when those launches are merely refreshes of existing products with minor improvements, would be hard for many of those consumers.

    • Deanjo
    • 7 years ago

    It was an omen… won 2k….. pre-order yah!!!

    • Shobai
    • 7 years ago

    What’s going on with the Titan in the AC3 results? Why does it return such a ‘cloud’ compared to the 690 or 7970? The 680 has limited sections of similar ‘cloudedness’, but otherwise gives a fairly consistent line.

    I probably don’t understand how the graphs work; I would have thought that the width of the frametime results would have been more obvious in the analysis, as with the CrossFire setup in Sleeping Dogs?

    • reever
    • 7 years ago

    nevermind

    • OU812
    • 7 years ago

    [quote<]Double-precision support on the Titan is a bit funky. One must enable it via the Nvidia control panel[/quote<] Isn't Double-precision at 1/24 the SP rate even without enabling it via the Nvidia control panel and 1/3 SP rate when enabled?

    • Nictron
    • 7 years ago

    I must just congratulate you on the best reviews on the net!

    So much detail and effort that goes into these reviews and I am very happy to be a frequent visitor.

    Quality reviewing is always worth the wait!

    • anotherengineer
    • 7 years ago

    Although the TR scatter plot is a thing of beauty, my simple brain does like these graphs on perf/watt and perf/$$ for quite a few resolutions, (including the 1680×1050 I run)

    [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan/28.html[/url<] [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan/29.html[/url<]

      • Suspenders
      • 7 years ago

      Nice charts. The performance per dollar chart really nails home what a horrible value the Titan is, hahaha.

      Apparently the HD 7850 is a stellar value, and I don’t believe they included the game bundle in their metric either (which would make it even better).

    • esterhasz
    • 7 years ago

    I wonder if I could deduct the $1000 from my income tax – this is basically a charity donation to Nvidia’s baseline.

    • anotherengineer
    • 7 years ago

    Two Questions

    1. Damage
    “The Titan facility alone soaked up 18,688 GK110 chips in Tesla trim, which can’t have been cheap.”
    Wouldn’t “couldn’t” be more correct than can’t in this case, or is that my crazy Canuck grammar?

    I will accept Meadows’ verdict as correct 😉

    2. $1000 seems like a lot, and it is when compared to the performance. However, does anyone know how much the enterprise Tesla version of the Titan sells for?

      • DeadOfKnight
      • 7 years ago

      1. Couldn’t is correct; however, can’t is very often heard in conversation. Scott often takes a conversational tone in his writing style, although it is more formal and well thought out than most other tech journalism you will find.

      2. This is actually the lower binned model:

      [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814132008&Tpk=tesla%20k20[/url<]

        • anotherengineer
        • 7 years ago

        Margins look pretty good on the enterprise model!!!!

          • Deanjo
          • 7 years ago

          Ya, they are good, but they also carry ECC.

    • beck2448
    • 7 years ago

    Awesome card! Best single GPU on the planet at the moment. Almost 50% better in frame latencies than the 7970. Crossfire? Don’t make me laugh. Here’s an analysis:
    [url<]http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-TITAN-Performance-Review-and-Frame-Rating-Update/Frame-Rat[/url<] AMD cards have a real rendering problem that Fraps doesn't show.

      • Krogoth
      • 7 years ago

      SLI/Crossfire are the same bloody thing (AFR, scissoring, supertiling). The difference is how much tweaking is done at the software level (application profiles), plus licensing and marketing nonsense.

      Learn how multi-GPU rendering *works* and come back again.

        • beck2448
        • 7 years ago

        Read the article and come back again.
        [url<]http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-TITAN-Performance-Review-and-Frame-Rating-Update/Frame-Rat[/url<]

          • Krogoth
          • 7 years ago

          How ironic that said article proves my point. It perfectly illustrates the drawback of multi-GPU solutions: the platform’s real-world performance hinges on the driver implementation and application profile. The reviewer cherry-picks a “The Way It’s Meant to Be Played” title, while I’m willing to wager you will see the same problem with SLI in games behind the “Gaming Evolved” logo.

          • Fighterpilot
          • 7 years ago

          OMG troll,
          How many times and forums are you going to spam that article?
          Try to stay on topic huh….you know The Titan launch…

    • DeadOfKnight
    • 7 years ago

    TITAN up the graphics

    • Bensam123
    • 7 years ago

    “Yes, those reasons include problems with multi-GPU micro-stuttering, which our latency-focused game benchmarks are at least somewhat capable of detecting. We have expressed some reservations about the limits of the tool we use to capture frame rendering times, especially for multi-GPU solutions, but absent a better option, we still think Fraps is vastly preferable to a simple FPS average. We’ll have more to say on this front in the coming weeks, but for today, our usual tool set will have to suffice.”

    Nice.

    It’s great to see how much AMD’s performance results have improved in a few short months. They’re definitely taking things seriously and it shows, even with CF configurations… well, in the games they’ve tweaked so far.

    I’m sorta surprised; for how much people hyped up this fabled chip over the last year or so, you’d expect better results. But it’s essentially a 690. Nothing too surprising or ‘OMG WORLD SHATTERING!’ here. The power draw numbers are neat, but I suspect this will all be lackluster once AMD gets its framerate consistency problems figured out. The average FPS graph screams ‘potential’ to me.

      • beck2448
      • 7 years ago

      AMD’s fps numbers are overstated. They figured out a trick to make runt frames, frames which are not actually fully rendered, register with the fps monitor as real, fully rendered frames. This is a real problem for AMD, much worse than the latency problem. Crossfire is a disaster, which is why numerous reviewers, including Tech Report, have written that Crossfire produces higher fps but feels less smooth than Nvidia.
      Check this article out. [url<]http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-TITAN-Performance-Review-and-Frame-Rating-Update/Frame-Rat[/url<]
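      To make the “runt frame” idea concrete, here is a rough Python sketch of how a frame-rating approach might separate runts from real frames. It is not PCPer’s actual pipeline; the scanline threshold and the capture trace are made-up values for illustration only:

      [code<]
      # Classify captured frames as runts when they occupy too few scanlines,
      # then report an "observed" FPS that ignores them. Values are illustrative.
      RUNT_SCANLINES = 20   # hypothetical cutoff, not PCPer's published value

      def observed_fps(scanlines_per_frame, capture_seconds):
          full_frames = [s for s in scanlines_per_frame if s >= RUNT_SCANLINES]
          runts = len(scanlines_per_frame) - len(full_frames)
          return len(full_frames) / capture_seconds, runts

      # Example: 60 frames captured over one second, every other one a 5-scanline sliver.
      trace = [540, 5] * 30
      fps, runts = observed_fps(trace, 1.0)
      print(f"Fraps-style FPS: {len(trace)}, observed FPS: {fps:.0f}, runts: {runts}")
      [/code<]

      In a trace like that, a Fraps-style counter would report 60 FPS while only 30 of the frames contributed anything visible.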

        • Bensam123
        • 7 years ago

        I’ll withhold my judgment until the technology they’re using matures. They announced this back in January and I’m unsure they have everything figured out already.

        It seems neat… but a video card renders a fraction of a frame and not a full frame? Something seems completely off about that. I unfortunately don’t know enough about how capture cards capture information to interpret it more than that. From what I understand, a capture card reads every refresh like a monitor, and it may not coordinate properly with the capture FPS or even be in sync.

          • MadManOriginal
          • 7 years ago

          Maybe the driver improvements to address frame times are recent, but Crossfire in general is ancient, and Crossfire is doing something bizarrely different from single cards or from SLI. Either AMD is incompetent or intentionally trying to skew average FPS results. There’s no reason to render that additional frame.

            • Firestarter
            • 7 years ago

            Crossfire really does look very bad in that PCPer article. I’m curious how AMD will respond, as it seems to completely invalidate the 7990 card and Crossfire in general.

            • Deanjo
            • 7 years ago

            [quote<]I'm curious how AMD will respond as it seems to completely invalidate the 7990 card and crossfire in general[/quote<] We will have to wait at least until Monday for the janitor/pr rep to respond.

            • Bensam123
            • 7 years ago

            IF the setup is correctly reading and interpreting the output. This is the first time I’ve ever heard of either Nvidia or AMD cards rendering fractions of frames. Those fraction frames don’t just pertain to SLI or Crossfire.

            • Firestarter
            • 7 years ago

            I’m pretty sure the card is actually rendering the whole frames, even if only a sliver gets sent to the monitor. You can see that the tiny bit of the frame that is actually visible looks like it should, with textures and everything. It makes absolutely zero sense to try to trick benchmarks by doing the whole frame setup and only rendering a tiny bit of it. I bet it’s just a dumb bug that PC Perspective managed to pin down where AMD couldn’t.

            • Deanjo
            • 7 years ago

            [quote<]It makes absolutely zero sense to try to trick benchmarks by doing the whole frame setup and only rendering a tiny bit of it.[/quote<] Except if it makes it look like the performance is better than it really is.

            • Airmantharp
            • 7 years ago

            Thinking about it further, it really looks like Nvidia has the output side of their GPUs much better sorted than AMD, something we’ve suspected heavily at least with this generation, and it appears on the surface to be paying off.

            It also means that there’s probably even more room for AMD to improve on their architecture than most had believed, which is also a pleasant relief, if true. If only they’d focus a little on making quiet, fully exhausting products.

            • Firestarter
            • 7 years ago

            [quote<]Except if it makes it look like the performance is better than it really is.[/quote<] If they wanted to do that, they could have tricked tools like Fraps without rendering a single valid pixel of that frame. But they don't; the slivers that PC Perspective sees in the captured output are tiny parts of valid frames, and if you look at the performance that AMD gets with Crossfire, it's not out of line with what Nvidia gets. Like Airmantharp said, I think it's a problem with the delivery of the frame, probably some timing-sensitive thing that gets all screwed up while trying to do AFR properly.

            • Bensam123
            • 7 years ago

            Yeah, but if this is really happening, the screen output looks like shit. This would be a worse ‘optimization’ than when they messed with mip-map levels.

            I still think something is messed up with what they’re capturing (between refresh and capture), or this is a really weird bug. I don’t even know how they would go about rendering a fraction of a frame when graphics cards aren’t set up to do that. I originally thought these lines were from load balancing between two cards in SLI or Crossfire, but they aren’t.

        • Airmantharp
        • 7 years ago

        That PCPer article is pretty eye-opening; I’m looking forward to further investigation by the community.

          • kc77
          • 7 years ago

          Yes it is. But you really do need both frame time and FPS in a review. They ask completely different questions but both have an impact on what the user sees. That’s why I’m glad TR shows both. Thinking that FPS is more important than frame time and vice-versa is wrong. They are asking different questions with regards to performance.

          Another thing that isn’t usually highlighted is Vsync. I’m not sure if this was off universally for the PCPer review. It makes a world of difference, because Nvidia has what is known as adaptive Vsync. Not a lot of people paid attention to it, but it does wonders for producing clean, consistent frame rates; your average frame time should be better, but your FPS will be lower. In my opinion it would do AMD wonders to have something like it in the future (a rough sketch of the idea follows below).

          That being said I think reviewers in general should do a much better job of explaining what things do, what settings are enabled, and how they affect performance. It’s bad to just throw a graph up there and highlight a number without explaining what the hell is going on.
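          For what it’s worth, here is a simplified Python sketch of the adaptive Vsync idea, as an assumption about the behavior rather than anything from Nvidia’s actual driver: sync to the display only when the GPU can keep up, and tear rather than stall when it can’t:

          [code<]
          # Toy model of the adaptive-vsync decision (illustrative only).
          REFRESH_HZ = 60

          def present(frame_time_ms):
              fps_estimate = 1000.0 / frame_time_ms
              if fps_estimate >= REFRESH_HZ:
                  return "wait for vblank (vsync on)"   # clean, consistent pacing
              return "present immediately (vsync off)"  # avoid dropping to 30 FPS

          for ft in (12.0, 20.0):  # an 83 FPS frame vs. a 50 FPS frame
              print(f"{ft:.0f} ms frame -> {present(ft)}")
          [/code<]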

    • Takeshi7
    • 7 years ago

    So you review a graphics card based on a Tesla GPGPU card, but you don’t even do any GPGPU tests? I feel like potential customers might be interested to know how well this card does folding@home or bitcoin mining etc.

      • JohnC
      • 7 years ago

      Yea, such tests would also be interesting to see… Some other reviews have them.

      • Deanjo
      • 7 years ago

      Bitcoin mining…… What a waste of gpgpu resources that could be better utilized for meaningful research.

      • Diplomacy42
      • 7 years ago

      Big thumbs up here, would love to see F@H/BOINC-type testing on all GPUs going forward. That’s not going to happen, though, so I’d settle for GPGPU. The power savings are such that I could see selling off my 2 580s if the performance were there.

    • Krogoth
    • 7 years ago

    GTX Titan = 690 GTX replacement

    It yields nearly the same gaming performance minus all of the issues associated with multi-GPU solutions. It is cheaper to produce as well, since Nvidia only has to recycle GK110 dies that didn’t make the Tesla/Quadro cut instead of using two “excellent”-yield GK104 dies along with all of the extra engineering involved in making SLI/CF on a stick.

    Titan doesn’t change anything in the marketplace. It is meant for gamers who want a solution that can effortlessly handle four-megapixel gaming with AA/AF on nearly every title on the market. They can even throw in another Titan and handle four-megapixel gaming or beyond with anything in the foreseeable future.

    The most interesting thing about the Titan is that the GK110 design is far better suited for GPGPU-related stuff than its GK104 kin, and depending on the application it can blow away Tahiti. Workstation and number-cruncher types may see the Titan as a “poor man’s” K20.

    We may yet see lesser GK110 dies in the form of 675 and 685 or 770 and 780 refresh that will replace existing 670 and 680 lines.

    For people with more restricted budgets, the current dynamic between the 660 Ti, 670, 7870, and 7950 works for now, and they can all handle two-megapixel gaming without too much trouble.

      • DeadOfKnight
      • 7 years ago

      I think you’ve missed the mark.

        • Krogoth
        • 7 years ago

        Care to explain?

        Going by past history, SLI/CF-on-a-stick cards have never been cheap or profitable. They have always been positioned as halo products geared toward the e-peen and bragging-rights crowd. They usually get discontinued quickly once a single-GPU solution with similar or superior performance comes around. Titan predictably fulfills this role and is no exception to the rule.

        Titan’s GPGPU performance seems to be its shining spot if that is your thing, since it’s just a “failed K20” rebranded as an ultra-high-end gaming solution.

          • DeadOfKnight
          • 7 years ago

          I’m pretty sure this isn’t much cheaper to produce than the 690. If it were we would have seen it a long time ago. It seems to me that this product has come out of demand from customers and AMD tooting their own horn about the single GPU performance crown while the next generation of GPUs may not come out for another 9 months or more.

          This is a strategic move by Nvidia, not just a shuffling of the cards to increase profit margins.

            • Krogoth
            • 7 years ago

            Because Nvidia has been allocating the vast majority of its GK110 yields to the more lucrative HPC and professional markets, where it can charge higher premiums for the same silicon. The GTX Titan uses GK110 silicon that didn’t make the cut.

            Titans are cheaper to make because they use binned GK110 silicon rather than taking away the top-tier GK104 dies the GTX 690 currently uses. SLI/CF-on-a-stick cards require additional traces on the PCB, plus extra VRM and power circuitry to make it all work. That’s why such products have always been positioned as low-volume halo products.

            • DeadOfKnight
            • 7 years ago

            But if you remember correctly, GF114 was also released at about half the price of GF110. They have a lot more chips to pick through with GK104 because it’s their highest-volume product by a wide margin, one they are in no hurry to start phasing out. If I remember correctly, both are produced on the same size wafers, which means the much larger GK110 yields far fewer dies per wafer, so they spend a lot more on those GK110 chips regardless of how they bin them.

            The binning of these chips is likely to be very different for the Tesla and GeForce parts. Tesla chips will be binned to find the ones that run well at lower voltage; GeForce chips are not going to be crammed into racks with passive coolers. I’m sure both products are pretty much top shelf for their usage model. It is likely, however, that folks at Nvidia looked at their heap of chips too power-hungry to be used on Tesla cards and said, “we need to do something with these.”

        • JohnC
        • 7 years ago

        Don’t bother…

          • DeadOfKnight
          • 7 years ago

          I only feed the trolls that don’t realize they are trolls.

      • Meadows
      • 7 years ago

      [url<]http://www.youtube.com/watch?v=ZpR9kvaD1mE[/url<]

        • derFunkenstein
        • 7 years ago

        what is that i don’t even

          • MadManOriginal
          • 7 years ago

          Some things can’t be unheard >.< THANKS MEADOWS

        • eitje
        • 7 years ago

        i found this, thanks to you: [url<]http://www.youtube.com/watch?v=oVfUJkYqMOU[/url<]

      • HisDivineOrder
      • 7 years ago

      I may not agree with your assessment that it’s a 690 replacement, but I think part of what you say is true. I disagree in that I think nVidia aims these products at two different consumers. There is the consumer who wants only one card, has room for the larger card, and wants the performance of SLI. And there is the consumer who has less room, wants all the heat exhausted, and doesn’t mind the sometimes substantially lower performance.

      That said, I do think the Titan is a poor man’s K20. I think that’s obvious from 1) the sheer amount of RAM and 2) the option to run it at 1/3 fp64. Those are things they’ve excluded when they’ve made “consumer” versions of their high end before. I’m thinking of the 480 and 580 when I say that.

      I think you might also be right about lesser versions of this card. The 680 came out ahead of the 670 because the 670, upon arrival, essentially replaced it. The performance was 90% there and the cost was substantially lower. But for that first month or so, nVidia sold all the 680s they could ship.

      I can’t help imagining nVidia doing something like that with their $1K GPU. Just as it’s old news, they release a cut-down yet still substantial GeForce GTX Titan Lite that hits a rather hard $700-$800 price point. By comparison to the Titan, the newer card is praised for “value,” and nVidia is praised overall for acquiescing to consumer demand. The irony is that if they’d just released the Titan at the $700-$800 price everyone thinks it should be, many people would have complained about the price there, too, and called on nVidia to release a cheaper variant…

      It’s funny, though. Does anyone really think it’s a coincidence that nVidia showed up with Titan on the same day AMD’s huge console contract came out in the open? It seems to me like nVidia used Titan to steal the limelight away from AMD’s PS4 contract for all the PC gamers out there. It was the same day…

      If that’s true, then hypothetically one imagines another variant of Titan perhaps showing up on the day Microsoft starts talking about its new console’s innards…

        • Deanjo
        • 7 years ago

        [quote<]It's funny, though. Does anyone really think it's a coincidence that nVidia showed up with Titan on the same day AMD's huge console contract came out in the open?[/quote<] Yup, I do. The two are really aimed at two different crowds, and the Titan release has been pretty much set in stone for quite a few months already, well before any PS4 announcement. If anything, AMD asked Sony to do a bit of PR for them to lessen the blow, as if to say "Hey guys!!! We are still here too!"

    • JohnC
    • 7 years ago

    Thanks for the review, and this looks like a pretty nice card overall… I have about $10K of “free money” lying around (well, actually it’s on a credit card), so I think I might spare some for this card (and maybe for a better CPU), especially after seeing Crysis 3 benchmarks on other sites… Hopefully I’ll be able to order it before EVGA runs out of their allocated units 😉

    Technically I could just buy another GTX 680 for SLI, but I don’t want to deal with the various multi-GPU issues associated with “game profiles” and wait an indefinite amount of time until all of the issues are resolved and Nvidia figures out how to get the best possible scaling for whatever new game I might be playing in the future… Also, the Titan is perhaps not very “overclockable” due to the hard TDP limit (I believe the “power target” slider only goes to 106%), but I don’t really care much about this; I never tried to overclock the GTX 680 and will never do it to the Titan.

    • Convert
    • 7 years ago

    I like the aesthetics of the card.

    That is all I have to add.

    • kamikaziechameleon
    • 7 years ago

    IMHO the price of the product should reflect its dominance at said price point. The fact that it isn’t the best value tells me this price might be more of a stunt than anything else. Rich gamers will buy it up and use SLI immediately for no other reason than bragging rights.

      • Krogoth
      • 7 years ago

      It is meant to replace the 690.

      Similar performance and physical appearance? Not a coincidence at all.

        • DeadOfKnight
        • 7 years ago

        You’re crazy.

        • 0g1
        • 7 years ago

        You should read what nVidia said it’s supposed to do instead of jumping to your own conclusions.

        The 690 is a different market from the Titan market. Titan is more extreme. Titan is like Intel’s $1000 Extreme Edition CPU. The 690 is like the budget version of Extreme for those who can’t afford to SLI two Titans.

          • Krogoth
          • 7 years ago

          They are in the same market. Look at the price chart in the TR review; Titan and the 690 occupy the same spot.

          The 690 is being discontinued like its SLI/CF-on-a-stick predecessors once a single-GPU solution that delivers similar or superior performance comes around.

    • beck2448
    • 7 years ago

    Awesome tech. I’d much rather have a single GPU than any multi-GPU config. It bodes well for Nvidia’s future, as they will learn a lot from this for future advances.

    • killadark
    • 7 years ago

    Hmm seems like bigfoot decided to show himself :p

    • Chrispy_
    • 7 years ago

    A review worth waiting for.
    Shame about the price, but I guess this is aimed at the supercar owners to whom a roomful of high-end, dual-Titan PCs is chump change.

    • south side sammy
    • 7 years ago

    Forgot to mention: you can set the temperature (70/80°C, etc.) at which the card starts to throttle. That’s the best I can explain it; I’m sure you guys will figure it out when you start playing with the card/software some more.

    • alienstorexxx
    • 7 years ago

    Hi Scott, I think the AC3 test isn’t fair to AMD cards. The “Very High” antialiasing setting means MSAA for AMD and TXAA for Nvidia, and that’s not directly comparable. “Normal” means FXAA for both, “High” means TXAA 2x for Nvidia and MSAA 2x for AMD, and you can figure out what “Very High” means 😀

    great review.

      • Damage
      • 7 years ago

      AC3 has three settings on Radeons: Normal, High, and Very High
      On GeForce Kepler cards, it adds a fourth beyond “Very High”: TXAA

      I did not select TXAA on the GeForce cards. Just “Very High” for all cards.

      My understanding is that the settings map like this:

      Normal (FXAA), High (FXAA HQ), Very High (FXAA HQ + MSAA 4x), and TXAA (4x TXAA).

      If that’s not the case, I’d be interested to see where you got your information.

        • alienstorexxx
        • 7 years ago

        Well, I don’t remember where I saw it; I think it could have been an Assassin’s Creed forum.
        Looking into it, maybe I misunderstood, or read it in some bad translation. You’re right, sorry I goofed.

    • nerdrage
    • 7 years ago

    Typo on page 4:
    “You see, while a the GTX Titan card”

    • Alexko
    • 7 years ago

    Thanks for the good review, Scott, a nice read as usual.

    I am, however, concerned about one thing. Damien from Hardware.fr found that when it is “cold”, because of the way Boost 2.0 works, Titan produces measurably higher results than when it is “warm”. [url<]http://www.hardware.fr/articles/887-10/gpu-boost-tests.html[/url<] [French]

    As an extreme example, he got 75 FPS in Anno 2070 on a cold run, but "only" 63 FPS on a warm card. In other words, the low starting temperature added 19% extra performance.

    The problem, of course, is that actual gaming sessions last for dozens of minutes, if not hours, while benchmarks are typically over after a few dozen seconds, so they tend to generate unrealistically high results.

    Did you check for this sort of behavior?

      • south side sammy
      • 7 years ago

      That’s the way it works before opening up the voltage regulation (accepting the warning) to overclock. The cooler the card runs, the faster it WILL be. It has a built-in self-regulating system… kind of like, but not the same as, the current system that retards performance when the cards get hot; I forget what it is currently called.

      will be interesting to see when they start doing benchmarks with liquid nitrogen.
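      As a rough illustration of what a temperature target does, here is a toy Python control loop. The clock bins and the 80°C default are the commonly reported Titan values, but this is only a sketch of the concept, not Nvidia’s GPU Boost 2.0 algorithm:

      [code<]
      # Toy temperature-target loop: boost while under the target, back off above it.
      BASE_MHZ, MAX_BOOST_MHZ, STEP_MHZ = 837, 993, 13   # approximate Titan clock bins
      TEMP_TARGET_C = 80                                 # user-adjustable on Titan

      def next_clock(current_mhz, gpu_temp_c):
          if gpu_temp_c < TEMP_TARGET_C and current_mhz < MAX_BOOST_MHZ:
              return current_mhz + STEP_MHZ   # headroom left: boost higher
          if gpu_temp_c > TEMP_TARGET_C and current_mhz > BASE_MHZ:
              return current_mhz - STEP_MHZ   # over target: throttle back
          return current_mhz

      clock = BASE_MHZ
      for temp in (55, 62, 70, 78, 82, 84, 83, 81):      # card warming up over a run
          clock = next_clock(clock, temp)
          print(f"{temp}°C -> {clock} MHz")
      [/code<]

      That is also why a cold card can post better numbers in short benchmark runs than it sustains once it reaches its target, as discussed above.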

      • JohnC
      • 7 years ago

      Yes, it would be interesting to see something like this tested, as well as “playing” with the new “temp target” slider to see its effects…

      • Damage
      • 7 years ago

      Yeah, I worried about this issue in our tests. We have enough data to watch for any performance drop across five test runs. Some examples:

      Guild Wars 2, FPS avgs:
      83.9 84.2 82.9 81.2 82.7
      Median: 83

      Skyrim:
      109.9 109.5 109.6 111.1 110.9
      Median: 110

      Sleeping Dogs:
      28.2 28.0 27.6 28.0 27.6
      Median: 28

      AC3:
      58.8 59.3 58.9 59.4 59.1
      Median: 59

      I just don’t see a slowing effect there or in our other data.

      Perhaps with a lot more time spent in each game some effect would be measurable. (In full test mode, we’d spend maybe 15 minutes in each, reboot, go to the next game, and so on.) I may try to look into this question with some directed tests, but I did watch for it and just haven’t seen a slowdown yet.

        • Alexko
        • 7 years ago

        Thanks.

        It’s interesting that you didn’t see the same phenomenon. Could the card be cooling down between runs? Apparently they’re only 90 seconds long; that might not be enough to warm it up to 80°C, or maybe your testing environment afforded it better cooling than Damien’s.

        He noticed that his card reached 80°C (and therefore throttled) in every single game, and I believe he tested it in an actual case as opposed to a testing bench, so perhaps that was a factor. Or maybe just ordinary sample variability.

        In any case, thanks for taking the time to respond.

          • Damage
          • 7 years ago

          There really isn’t time between runs of any significance, so I doubt that’s a factor. It’s quite possible the card will heat up more in a case with poor ventilation than on our open test bench, though.

            • Alexko
            • 7 years ago

            That could be it, then, or at least a big factor.

            Damien did say that it took 5 minutes for the card to really heat up in Anno: [url<]http://forum.beyond3d.com/showpost.php?p=1711223&postcount=1240[/url<]

            • JohnC
            • 7 years ago

            Well, you can try emulating their testing environment… For example, you can simply turn up the room thermostat (it may also be useful to know what the temperature inside your room was during your tests) to see if it “triggers” the default 80°C temperature limit sooner and affects performance 😉 Then, for example, set the temp limit to a higher value (I believe it’s 95°C) and see if it helps avoid the performance decrease in such a “warm” environment…

        • willmore
        • 7 years ago

        You need to add a cold air intake!

    • ClickClick5
    • 7 years ago

    At $1000, I’ll pass. If it ran games at 2560×1600 at 120+ fps, sure, I’d buy it. But make this card $699, and we will talk!

    • I.S.T.
    • 7 years ago

    Found a typo in the article. On page five, it’s 2560×16000 when it should be 2650×1600.

      • ClickClick5
      • 7 years ago

      No. This is hyper wide resolution.

        • Farting Bob
        • 7 years ago

        It’s the well known 53:32 aspect ratio!

          • MadManOriginal
          • 7 years ago

          I am waiting for 21:12 ratio myself but it wouldn’t be cheap – sadly you can’t get Something for Nothing.

          • ClickClick5
          • 7 years ago

          Here we go!

          [url<]http://i.imgur.com/whdvEYv.jpg[/url<]

            • Suspenders
            • 7 years ago

            I’d hate to be the postal guy who had to deliver THAT monitor…

          • yogibbear
          • 7 years ago

          Yeah, for someone finding a mistake in an article like a douche, you’d think they’d spell-check their one sentence… sadly, no. Now we get awesome humour like Farting Bob instead 🙂

        • Meadows
        • 7 years ago

        You mean hyper-tall?

          • ClickClick5
          • 7 years ago

            Yeah. I’m not used to using the word “tall” in monitor speak.

            • derFunkenstein
            • 7 years ago

            narrow, then. Super-duper narrow.

            • DeadOfKnight
            • 7 years ago

            But you can rotate it into landscape mode.

        • jihadjoe
        • 7 years ago

        Dont you mean super-tall?

        edit: ah fsck it im late to the party again

    • Wildchild
    • 7 years ago

    I’m still not used to Nvidia beating AMD in temperatures and power usage.

      • Farting Bob
      • 7 years ago

      Temperature is largely irrelevant, as chips are rated to very high temps; it’s more a case of “how quiet do I want it to be?” The fans can go like crazy and sound terrible while keeping the chip cool, or relax a bit and let the chips sit around 80°C (still well below worrying levels) with drastically better acoustics. Temperature only becomes an issue when parts start reaching their rated max temp.

      Power draw though was AMD’s game for a long time until the 600 series came around. Kepler is particularly good under load compared to AMD.

      • MFergus
      • 7 years ago

      Power usage and temperature tend to go hand in hand. A lot of power is going to be a lot of heat.

    • MadManOriginal
    • 7 years ago

    Not related to the Titan itself but….15x15x4 for that Falcon NW system leaves me krogimpressed. My mATX case is ~15x15x7 and has way more room for drives and better general airflow.

    • My Johnson
    • 7 years ago

    Bummer. My $1000 has already been set aside for illegal substances and the ladies.

      • anotherengineer
      • 7 years ago

      My 1k is set aside also, for groceries and diapers for the month 😐

    • Firestarter
    • 7 years ago

    For $1000, it might have served them well to throw in an Asetek closed-loop watercooler; it seems this monster is more limited by thermals than by power.

    • DeadOfKnight
    • 7 years ago

    Krogoth claps half heartedly, clearly unimpressed.

      • chuckula
      • 7 years ago

      What is the sound of one Krogoth clapping?

        • DeadOfKnight
        • 7 years ago

        It is similar to the sound of the world’s smallest violin.

      • Suspenders
      • 7 years ago

      I love our TR memes 😀

    • Arclight
    • 7 years ago

    It’s pretty nice, but I can’t afford it. Even if I could, I’d probably still not buy it… because at that price it’s not top dog performance-wise for gaming. For GPU compute, that’s another story, and I understand the arguments.

    • briskly
    • 7 years ago

    Perhaps a silly question, but is it safe to assume Titan’s clock held around 960-990MHz during testing?

    For humor’s sake, how does it perform at 1920×1080?

    Edit: Maybe not that humorous. Some people run their monitors at rates significantly higher than 60hz.

    • brucethemoose
    • 7 years ago

    $1000 for a premium card is one thing, $1000 for a card you can’t even OC is a whole different story.

      • south side sammy
      • 7 years ago

      It can be overclocked. Somebody missed the boat.

      • Deanjo
      • 7 years ago

      It is overclockable.

      • brucethemoose
      • 7 years ago

      All the reviews I’ve read say overclocking is pitiful.

        • DeadOfKnight
        • 7 years ago

        Are you measuring in MHz or %? The overclocking sounds great to me.

        • Farting Bob
        • 7 years ago

        The automatic OC from its boost feature can take it above 950MHz, from word on the street. To get a really crazy OC you’ll need to modify the power circuitry, though. I’m surprised they didn’t go with two 8-pin plugs as standard, even if it didn’t need all that at stock, since everything else about the card is excessively well made.

          • Suspenders
          • 7 years ago

          This puzzles me as well. After all the effort making it a premium card, I would have expected dual 8-pin connectors for some real juicing up potential. Odd choice not to include that.

            • Meadows
            • 7 years ago

            A 250 W card does not need dual 8-pins.

            • Suspenders
            • 7 years ago

            True, but for anyone out there that wants to really abuse these with overclocking it would have been useful. For an enthusiast oriented card, I would have expected an “Ausum switch” equivalent at the least, especially for $1000 price tag.

            • briskly
            • 7 years ago

            That’s not the issue here. The problem is capping voltage at 1.2V and not enough power phases. As is, the GTX Titan will never quite hit the PCIe limit, unlike the 580 and 7970OC, since your power target is limited as well.

            • Suspenders
            • 7 years ago

            Well, improved power circuitry is sort of implied when adding two 8-pins 😀

            • Deanjo
            • 7 years ago

            [quote<]Well, improved power circuitry is sort of implied when adding two 8-pins :D[/quote<] and racing stripes make cars go faster.

            • Suspenders
            • 7 years ago

            What does beefing up the power circuitry of a card have to do with racing stripes?

            • Deanjo
            • 7 years ago

            Absolutely nothing, just like beefier connectors don’t imply improved power circuitry, or a heavier power supply a better power supply.

            • Suspenders
            • 7 years ago

            *sigh* Clearly my point has escaped you, so I’ll try and be a bit more pedantic.

            My opinion is that Nvidia should have built this card with beefier power circuitry that is capable of supplying more than 265 watts to the card. We can infer that they didn’t because they chose a 6-pin+8-pin power connection instead of dual 8-pin (like on the GTX 690). Based on this inference, I stated that it was a puzzling design choice for a card of this price tag.

            Hopefully that clears up any confusion you’re having with what I’m trying to say.

            Now, why is this puzzling? Because it really limits any extreme overclocking. When you’re building a card that is all about excess (with the price to match), one would hope that part of that excess price would translate into hardware that was a bit more robust, so it can really be put through its paces by those that choose to do so.

            The Anandtech review also states that Nvidia has forbidden its partners from shipping cards that allow higher voltages or that have significantly different designs (like more VRM phases), so they are very intentionally putting a limit on any extreme overclocking. I don’t really understand their point of view on this; for a $1000 “luxury” product, that really should be one of your options if you choose.
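            The arithmetic behind that, using the standard PCIe power budgets (75W from the slot, 75W from a 6-pin, 150W from an 8-pin) and the 250W TDP with the 106% power-target cap mentioned elsewhere in these comments:

            [code<]
            # Connector budget vs. power-target cap (spec values, rounded).
            SLOT, SIX_PIN, EIGHT_PIN, TDP = 75, 75, 150, 250
            board_limit_6_8  = SLOT + SIX_PIN + EIGHT_PIN     # 300 W as shipped
            board_limit_8_8  = SLOT + EIGHT_PIN + EIGHT_PIN   # 375 W with dual 8-pin
            power_target_cap = TDP * 1.06                     # 265 W slider maximum
            print(board_limit_6_8, board_limit_8_8, power_target_cap)  # 300 375 265.0
            [/code<]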

            • Deanjo
            • 7 years ago

            [quote<]I don't really understand their point of view on this; for a $1000 "luxury" product, that really should be one of your options if you choose.[/quote<] I don't see what is so hard to understand: nVidia will not cover the failure of a part that is intentionally run out of spec, especially an expensive one that is in limited supply.

            • Suspenders
            • 7 years ago

            Who said they would? Warranty doesn’t usually cover overclocking, so that’s not really an issue.

            Even so, if warranty was really that much of a concern they could let their partners worry about it, but they aren’t willing to let them. *shrug*

            • Deanjo
            • 7 years ago

            [quote<]Who said they would? Warranty doesn't usually cover overclocking, so that's not really an issue.[/quote<] On previous generations of cards nVidia would cover a GPU failure due to overclocking with their partners. This changed when Kepler was released. [url<]http://www.evga.com/forums/tm.aspx?m=1758604&mpage=1[/url<] [quote<] Regarding overvoltaging above our max spec, we offer AICs two choices: · Ensure the GPU stays within our operating specs and have a full warranty from NVIDIA. · Allow the GPU to be manually operated outside specs in which case NVIDIA provides no warranty. We prefer AICs ensure the GPU stays within spec and encourage this through warranty support, [b<]but it’s ultimately up to the AIC what they want to do.[/b<] Their choice does not affect allocation. And this has no bearing on the end user warranty provided by the AIC. It is simply a warranty between NVIDIA and the AIC. [/quote<]

            • Suspenders
            • 7 years ago

            Interesting, thank you for the link. [url=http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/2<]That contradicts somewhat what Anandtech reported in their review wrt overclocking the Titan series; [/url<] [quote<]Finally, as with the GTX 680 and GTX 690, NVIDIA will be keeping tight control over what Asus, EVGA, and their other partners release. Those partners will have the option to release Titan cards with factory overclocks and Titan cards with different coolers (i.e. water blocks), but they won’t be able to expose direct voltage control or ship parts with higher voltages. Nor for that matter will they be able to create Titan cards with significantly different designs (i.e. more VRM phases); every Titan card will be a variant on the reference design.[/quote<]

            • Deanjo
            • 7 years ago

            It doesn’t really contradict it; Anandtech just doesn’t convey all the terms and stipulations of nVidia’s Green Light program that they now have with their AIC partners.

    • DeadOfKnight
    • 7 years ago

    You mention that 2x GTX 680s would offer a better value in this price range, but I think it is worth noting that 2x GTX 670s are probably the best value for a high-end config as it would cost $200 less and would only be ≤10% slower, or equal performance to your numbers for the GTX 690 if you are running at 2560×1440.

      • DeadOfKnight
      • 7 years ago

      In addition, there are other considerations that can make 2x 670 more appealing. The GTX 670 produces considerably less heat and therefore noise which makes it even more of a competitor to the GTX 690, mitigating some of its advantages over 2x 680. Also, many find that you can be more aggressive overclocking them than with both the 680 and the 690. There are also many non-reference designs that double the VRAM which might even beat a 690 in some instances; these also demand a lot lower premiums than they used to.

    • Meadows
    • 7 years ago

    If we don’t just focus on the performance but include the overall construction quality, noise levels, and power draw, then I believe the thousand dollar asking price is valid and well deserved.

    With that said, the AMD Crossfire option looks to perform very well too, but has sad performance bugs in select titles.

    • Deanjo
    • 7 years ago

    I just hope that the released cards from EVGA and others don’t have those gaudy stickers they have a habit of sticking on the cards. Premium products don’t need robots and laser decals slapped all over them.

      • Farting Bob
      • 7 years ago

      I want a frog firing a laser cannon at an impossibly pale girl with purple hair and the world’s most revealing bikini on the box of my graphics card. How else do we know it’s a good product????

    • Deanjo
    • 7 years ago

    [quote<]This card is priced at $999.99, one deeply meaningful penny shy of a grand.[/quote<] Or a flat even $1000 if purchasing in Canada with cash. We got rid of the penny.

      • BiffStroganoffsky
      • 7 years ago

      Sorry, exact change only. No Titan for you or your fellow Kanucks.

      • superjawes
      • 7 years ago

      Now if only we got rid of the US penny…

      I HATE PENNIES.

      • UberGerbil
      • 7 years ago

      Actually I think it’s about [url=http://www.xe.com/ucc/convert/?Amount=999&From=USD&To=CAD<]$1019.00[/url<] right now in loonies. But that's changing all the time (and of course even the list price might be different in Canada).

        • Deanjo
        • 7 years ago

        Yes, if you ordered it from a US vendor. The price will be the same $999 on NCIX and Newegg.ca, so technically, Yanks will be paying a bit more at today’s exchange rates.

          • anotherengineer
          • 7 years ago

          Well, it depends where you are. Add another 13% HST here in Ontario, which is $130.00 on $1K 🙁

            • Deanjo
            • 7 years ago

            Order it out of province then you will only pay the GST.

            • coldpower27
            • 7 years ago

            That trick doesn’t work in Ontario with the HST now.

            • Deanjo
            • 7 years ago

            In theory it isn’t supposed to, but many out-of-province suppliers still won’t charge anything more than GST.

            • anotherengineer
            • 7 years ago

            It used to work when it was PST & GST, however that all ended the day the HST was implemented.

            • Novuake
            • 7 years ago

            You think that’s bad? Projected pricing in South Africa is 22000 Rand. The exchange rate is R8.90 to 1$.

            So yeah. You think you have it bad, nothing compared to Europe’s pricing on components.

      • derFunkenstein
      • 7 years ago

      Actually in Canada, it’s going to wind up being $1200.

        • Deanjo
        • 7 years ago

        Nope, already got the pricing. Same 999.

    • chuckula
    • 7 years ago

    TR didn’t touch on it too much, but (very surprisingly) this card isn’t actually crippled for doing compute, and even includes some very impressive double-precision performance. At $1000, it’s way overpriced for playing games, but it’s a steal for people that really need to do GPU compute.

      • kcarlile
      • 7 years ago

      Or large scale visualization. I’d love to hook one of these babies up to a 4K projector and spin some fly brains on it.

        • UberGerbil
        • 7 years ago

        “spin some fly brains” — there’s a phrase I had to back up and re-parse a couple of times. And I won’t lie: my first interpretation went [url=http://www.youtube.com/watch?v=jHzZLVyuv98<]here[/url<].

    • GeneS
    • 7 years ago

    Cue classic comment from Krogoth in 3, 2, 1……..

      • Meadows
      • 7 years ago

      The “trading blows yadda yadda price point” template? Don’t worry, I’ll point it out [i<]the very moment[/i<] he clicks on "Submit reply".

        • Arclight
        • 7 years ago

        How much do you hate the guy? Don’t answer, he will not be impressed anyway.

          • Meadows
          • 7 years ago

          I wouldn’t call it “hate” [i<]per se[/i<], but he does make himself very difficult to like.

            • derFunkenstein
            • 7 years ago

            He also makes himself very difficult to NOT mock.

    • kcarlile
    • 7 years ago

    Thanks for including the release date. Haven’t seen that anywhere else, for some godforsaken reason.

    • Silus
    • 7 years ago

    Nice review as always!

    Titan is a halo product targeting a niche market, but it’s more than just a gaming product, since the double-precision capabilities are not crippled. Besides being a formidable gaming card, it is also a formidable and “affordable” (when compared to actual Tesla cards) compute monster. Hence the higher price tag.

    It’s a product that doesn’t interest me, because I’m not the target demographic, but it’s great that it’s here.

      • jensend
      • 7 years ago

      Right. Outside of being a PR stunt and a status symbol, the point of this card is apparently to throw CUDA users a bone.

      The rest of the 600 series is unimpressive for compute purposes. So CUDA users who weren’t ready to shell out the $3500 for a K20 were stuck picking up last-gen parts e.g. 570/580. This is a decent step up across the board and a huge leap forward for strongly DP-limited tasks.

      Titan won’t beat the 7970 at compute by anywhere near the 220% price difference, but despite GCN’s strong compute performance OpenCL hasn’t been gaining traction very fast. By not crippling the DP on this they probably help keep people in the CUDA camp, even though they also probably cannibalize some K20 sales.

        • UberGerbil
        • 7 years ago

        And the flip side of that is that they couldn’t price it any cheaper without badly cannibalizing their Tesla sales. It’s actually kind of surprisingly cheap from that standpoint. Meanwhile, Intel has demonstrated that there is an enthusiast market for thousand dollar components, whether it makes any objective sense or not. Bragging rights have value to certain people, and for those people a high price tag is just the necessary insurance for exclusivity.

        • Silus
        • 7 years ago

        This won’t cannibalize K20 sales, because this isn’t a fully “enabled” Tesla card. It’s missing ECC support and some of the other HPC oriented features that were introduced with the Kepler architecture.
        So in a sense, it is a Tesla card, but for non-professionals or professionals that don’t need everything required in the HPC world.

          • willmore
          • 7 years ago

          Not everyone buying a K20 needs all of that. He didn’t say it would cannibalize *all* of the sales, but you have to admit that there are some who will look at it and say “it has the parts I need. Woo hoo, money saved!”

            • Deanjo
            • 7 years ago

            Na, can’t agree with you. A serious GPGPU guru is going to be looking for ECC and looking toward Tesla (what good are fp64 results if you cannot depend on them to be accurate?). The real niche the Titan fills is the developer who is developing for Tesla but needs a local, non-mission-critical testbed to bang his code against. The Titan fills that prototyping role very nicely. Beforehand they would have been using a comparatively slow GTX 580 to do that work.

            • willmore
            • 7 years ago

            If your code has no way to verify results, then you may need that, but there are plenty of workloads that have verifiable results. Prime95 is a good example: it performs a test on the result of each round of calculation to see if there is a probability that something went wrong.

            Code comes in varying degrees of brittleness. Not everything needs all of that.

            I agree with the prototype/testbed use as well. That seems like a nice, obvious place to save a few bucks by using a titan in place of a K20.

            For a supercomputer which will run code from many sources, you would want the full blown K20 card, but there are plenty of other uses for GPGPU that simply don’t need the full device or are more cost sensitive and less ‘brittle’.

            • Deanjo
            • 7 years ago

            Those less “brittle” applications will more than likely be focused on fp32, and there are plenty of other, less expensive options out there for that.

            • willmore
            • 7 years ago

            Well, to use Prime95 as an example, it’s all DP and it’s pretty hearty.

            • Deanjo
            • 7 years ago

            Do you really seriously see someone buying a monster like this to run Prime 95?

            • jensend
            • 7 years ago

            I don’t think memory errors are anywhere near as common as you seem to think. Soft errors are pretty rare, and vanilla GDDR5 does have EDC (error detection and correction, more limited than the ECC on Tesla boards but it’s something).

            See [url=http://cs.stanford.edu/people/ihaque/papers/gpuser.pdf<]this study done by the F@H team at Stanford[/url<]. A summary: 1.8% of GPUs had hard errors; these can be tested for and returned. As far as soft errors, for GT200 they're talking about >98% of GPUs having something like two errors a week or less. Sure, for some people that could be a problem, but there are countless scientific applications, including ones where SP would cause all kinds of trouble, where that will be exceedingly unlikely to have any significant impact on your results.

            • Krogoth
            • 7 years ago

            When your work involves lives, where even a margin of error of that amount can mean the difference between life and death, or between validating and invalidating your results in an experiment, ECC is a must.

            • jensend
            • 7 years ago

            I never claimed that there were no use cases for ECC, only that there are plenty of researchers, including many who need high precision, who will be fine without it.

            Also, no ECC scheme can completely eliminate errors; they can only make them less likely. nV’s ECC uses a standard single error correct, double error detect scheme. More than half of triple-bit errors would be mistaken for single-bit errors and silently miscorrected. If bit errors were probabilistically independent events such a scheme would make undetected errors too vanishingly unlikely for anyone to care about, but since single-event multiple bit errors are relatively frequent and there are other sources of correlation, the SECDED scheme probably only reduces the chance of undetected errors by a couple orders of magnitude.

            The study I linked above mentioned [url=http://xcr.cenit.latech.edu/resilience2010/docs/Hard%20Data%20on%20Soft%20Errors_%20A%20Global-Scale%20Survey%20of%20GPGPU%20Memory%20Soft%20Error%20Rates.pdf<]in their slides[/url<] that they saw an unrecoverable detected soft error during their relatively brief in-house testing of an ECC Tesla (C2050).

            • jihadjoe
            • 7 years ago

            Considering you can buy four Titans for less than the price of ONE Tesla K20X, I’m sure you can engineer a software-based confidence check into your code and still come out ahead in terms of price/performance.

            It’s even been done before. See MIT paper below for software-mode ECC:
            [url<]http://pdos.csail.mit.edu/papers/softecc:ddopson-meng/softecc_ddopson-meng.pdf[/url<]
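            As one illustrative flavor of a software confidence check (not the scheme from the MIT paper above, just a cheap host-side sanity test for a GPU matrix product), something like Freivalds’ check can be sketched in a few lines of Python:

            [code<]
            import numpy as np

            def looks_correct(a, b, c, trials=3, tol=1e-8):
                """Probabilistically verify c == a @ b in O(n^2) per trial, not O(n^3)."""
                n = b.shape[1]
                for _ in range(trials):
                    x = np.random.rand(n)
                    # A corrupted entry in c almost surely shows up as a large residual.
                    residual = np.linalg.norm(c @ x - a @ (b @ x))
                    if residual > tol * np.linalg.norm(c) * np.linalg.norm(x):
                        return False
                return True

            a, b = np.random.rand(512, 512), np.random.rand(512, 512)
            c = a @ b                # stand-in for a result computed on the GPU
            print(looks_correct(a, b, c))   # True
            c[100, 200] += 1.0       # simulate a silent memory error
            print(looks_correct(a, b, c))   # False (with high probability)
            [/code<]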

    • SHOES
    • 7 years ago

    Tell ya what, Nvidia: make it $800 and it would be tempting; make it $700 and you would sell a shit-ton of them. But I’m guessing you probably don’t have enough lying around to make them that cheap just yet.

      • Anvil
      • 7 years ago

      I’m a price-performance guy, but lowering the price accomplishes nothing for Nvidia. It’s simply not meant for mass production and distribution, end of story.

    • Waco
    • 7 years ago

    I know it makes no sense…but I want one. I must have a few screws loose.

      • Prestige Worldwide
      • 7 years ago

      A few [b<][i<]wood screws[/i<][/b<], maybe.

        • yogibbear
        • 7 years ago

        You know a wood grained shroud would just about sell me on a $1k GPU.

        • Waco
        • 7 years ago

        I was going to work that into my reply…but I figured I’d leave it open for someone else. 🙂

      • Deanjo
      • 7 years ago

      Depends on your uses I guess. For me it is appealing for a few reasons.

      1) My main system runs Linux exclusively; Crossfire/SLI need not apply.
      2) I recently upgraded to a 27″ Asus PB278Q monitor that I got a sweet deal on at Best Buy (after price protection, $550).
      3) Drivers; see reason number 1 for the OS I use.
      4) I write a lot of GPGPU code (CUDA mostly, but also some OpenCL, and I have the developer drivers that fix the OpenCL issues encountered here), and this gives good SP/DP performance for development purposes without having to pony up for a Tesla card.
      5) I’d rather have a single GPU in a system for consistency’s sake across the board, noise, and power consumption.
      6) With more Linux games coming out, it is going to be nice to run them at full res with all the eye candy.

        • sweatshopking
        • 7 years ago

        Great up till six, but you think this thing will be stressed by Source?

          • Deanjo
          • 7 years ago

          Some of us know something you don’t know…..

            • superjawes
            • 7 years ago

            SSK is unimpressed….I’m expecting an all caps response from Krogoth now…

            • Deanjo
            • 7 years ago

            Oh SSK is just showing his insecurities about the future of windows gaming.

        • d0g_p00p
        • 7 years ago

        SLI works with Linux. At least on my Ubuntu box it does.

          • Deanjo
          • 7 years ago

          Not really. In fact, SLI actually hinders most applications in Linux using SFR or AFR compared to just using a single card. The Nvidia drivers carry no proper SLI profiles for Linux.

      • eitje
      • 7 years ago

      David? Is that you?

    • flip-mode
    • 7 years ago

    Most valuable lesson from this review: I did not know that the HD 7970 was so extremely even with the GTX 680!

    Edit: The noise levels and power consumption are extremely impressive.

      • alienstorexxx
      • 7 years ago

      I think the latest drivers put the 7970 GHz even a little step ahead on fps.

        • Farting Bob
        • 7 years ago

        And more promising results are to come from the drivers, by the sound of it. It seems the release drivers for the 7000 series were far from great.

      • Suspenders
      • 7 years ago

      They’re certainly decent cards, especially for the price+bundle. They get a bit too much of a bad rap imo.

      • halbhh2
      • 7 years ago

      It’s a significant fact, but not given headlines I think.

      It’s funny how some significant facts are the opposite of popular opinion. This applies in many other fields I’ve noticed.

        • Chrispy_
        • 7 years ago

        It’s because people are usually biased by history.

        Once you are old enough to have witnessed good and bad products from both sides of a duopoly (be it Intel/AMD or Nvidia/AMD), it’s a little easier to look at something objectively, though I always have a little favouritism for the underdog.

        When Tahiti was struggling to match the GK104, I wasn’t worried about the difference, because Nvidia is the company that gave us the late, slow, hot, expensive, noisy FX 5800, which couldn’t even run DX9 games competitively until they brutalised the image quality via driver ‘cheats’. ATI’s own 8500 was much the reverse of this scenario a few years earlier. So the fact that AMD and Nvidia are so close now is often overlooked, thanks to the binary conclusions in so many reviews that have to pick product A over product B.

          • Krogoth
          • 7 years ago

          Pretty much.

          Nvidia and AMD’s GPU division are pretty much at the point where the brand name is the only difference. Just like Pepsi versus Coca-Cola Classic.

      • Airmantharp
      • 7 years ago

      Power consumption isn’t always the biggest concern; it’s probably the last, though generally because it’s only a real issue at the high-end where it’s already been accounted for in enclosure, PSU, and system cooling selection.

      Noise is a different beast. The best way to reduce noise in a system is to focus on reducing noise from the beginning, and the GPU is one of the best places to start with the highest power draw in the system along with the smallest fans. I really like that Nvidia has focused on producing high-performance parts that are good acoustic citizens, and I hope that the trend both continues down their product line as well as puts pressure on AMD to do the same.

        • flip-mode
        • 7 years ago

        It’s still impressive, though.

          • Airmantharp
          • 7 years ago

          That it is! I wonder if the Titan presents the best W/dB we’ve seen with a higher-end power budget on a retail card, especially one that exhausts heat in only one direction, out of the enclosure.

        • BestJinjo
        • 7 years ago

        You cannot buy a reference based HD7970Ghz. The noise complaints regarding that version are irrelevant since every single HD7970Ghz uses an open-air cooled after-market design. Retail HD7970Ghz cards have lower noise levels than the Titan, 690 or a reference 680.
        [url<]http://www.anandtech.com/show/6774/nvidias-geforce-gtx-titan-part-2-titans-performance-unveiled/15[/url<]

    • chuckula
    • 7 years ago

    OK Guys! Great Review!
    Tell ya what, I’ll just swing on by the labs later to make sure that Nvidia gets that test card back. You just go relax and don’t worry about what happens to that review card once it’s in my [s<]greedy paws[/s<] capable hands.

    • jensend
    • 7 years ago

    Tom’s Hardware, trying very hard not to seem too indebted to TR, has come up with their own absolutely absurd latency metric: the average difference between consecutive frame times. They used it for the first time in their Titan review today. It’s much worse than average absolute deviation, which is worse than standard deviation, which is worse than variance, which TR reasonably declined to use.

    A card with a 1 ms average consecutive-frame deviation can still have horrible consistency. For example, its frame-time graph could be a triangle wave with a 200-frame period, minima at 1 ms, and peaks at 99 ms: roughly a 20 fps average and totally unplayable. A card whose frames alternated between 15 ms and 25 ms would average 50 fps, and its average consecutive frame difference would be “10x worse,” yet it would obviously offer vastly superior gameplay.
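    For what it’s worth, here’s a quick sketch of my own (a toy illustration, not anything from TR’s or Tom’s toolchain) that runs those two hypothetical traces through both the consecutive-difference metric and a TR-style “time spent beyond 33.3 ms” measure. The trace shapes, the helper names (avg_consecutive_diff, time_beyond_ms), and the 33.3 ms threshold are all assumptions for illustration.

[code<]
# Toy comparison of two frame-time metrics on synthetic traces (not benchmark data).
import numpy as np

def avg_consecutive_diff(frame_times_ms):
    # Tom's-style metric: mean absolute difference between consecutive frame times.
    return np.mean(np.abs(np.diff(frame_times_ms)))

def time_beyond_ms(frame_times_ms, threshold=33.3):
    # TR-style metric: total time accumulated past the threshold.
    over = frame_times_ms[frame_times_ms > threshold]
    return np.sum(over - threshold)

# Card A: triangle wave, 1 ms up to 99 ms and back over a 200-frame period.
half = np.linspace(1.0, 99.0, 100)
card_a = np.concatenate([half, half[::-1]])

# Card B: frame times alternating between 15 ms and 25 ms.
card_b = np.tile([15.0, 25.0], 100)

for name, trace in [("triangle 1-99 ms", card_a), ("alternating 15/25 ms", card_b)]:
    print(f"{name}: avg fps = {1000 * len(trace) / trace.sum():.1f}, "
          f"avg consecutive diff = {avg_consecutive_diff(trace):.2f} ms, "
          f"time beyond 33.3 ms = {time_beyond_ms(trace):.0f} ms")
[/code<]

    The triangle-wave card scores around 1 ms on the consecutive-difference metric and the alternating card around 10 ms, even though the latter is obviously the far smoother experience; a beyond-threshold measure ranks them the other way around.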

    The metric heavily penalizes AFR schemes but does very little to penalize other kinds of frame time inconsistencies, which are rarer and/or occur over longer periods than just consecutive frames but may have a larger impact on gameplay. So they’re reduced to saying “we’ve got this EXCITING new metric, but please ignore the results because the GTX 690 really isn’t so bad.”

      • Essence
      • 7 years ago

      TR wasn’t the first to show or use latency benchmarks (a few sites beat TR to that). Nobody can say one result is correct over the others; that’s the nature of these latency benchmarks, so live with it.

      It always felt smoother on my 7970 GHz compared to Nvidia cards… here’s the proof: [url<]http://www.tomshardware.co.uk/geforce-gtx-titan-performance-review,review-32635-3.html[/url<]

      Edit: I should have said BF3, as each game is different for both AMD and Nvidia.

        • jensend
        • 7 years ago

        Others discussed microstuttering before TR, but if you claim others were seriously coming to terms with frame-time distribution metrics before TR’s “inside the second” article, I’m going to slap that with a great big [[u<][color=blue]citation needed[/color][/u<]].

        It’s not at all true that “nobody can say one result is correct over the other results.” Some metrics are better than others at capturing perceived smoothness. Some, like the one Tom’s just made up, don’t even meet the most basic requirements that we can see, a priori, a decent metric would have to fulfill. Blind testing with a sufficient number of subjects could give us fairly conclusive answers about how different metrics correlate with perceived smoothness and help us choose better ones.

        Though I do think AMD’s current-gen cards have been underrated by a lot of people, the Tom’s Hardware result you proudly link to does absolutely nothing to show the 7970 is perceptibly smoother than a 680. The average frame rate is ~3% faster, the minimum is ~3% slower, and the consecutive frame deviation is an absurd metric which tells us practically nothing.

        • cynan
        • 7 years ago

        [quote<]Nobody can say one result is correct over the other results...[/quote<]
        Oh, I don’t know. I think jensend did a pretty good job. The whole point of reporting frame times is to illustrate whether the rendering experience is smooth or choppy.

        As an aside, even TR’s reporting of the number of frames above, say, 33 ms is not very useful on its own. If those longer frames are regularly spaced, occurring only one at a time, they may not be too noticeable; if they cluster, they likely will be. (In practice, TR’s count of frames above a given threshold is still somewhat informative, since more frames over the threshold will tend to correlate with more clustering of them.) Still, to me, the latency graphs TR presents are the most useful for exactly this reason: they show you how badly the longer frames cluster.

        Averaging of frame times, however, is completely useless. To give an overly simplistic example, you could have a card that renders 95% of its frames 1 ms apart and the remaining 5% 100 ms apart. Wherever those 100 ms frames cluster, the experience will almost certainly deviate from smooth gaming. Averaging them together, though, simply tells you that the card produces frames ~6 ms apart. Absolutely useless.
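        To put rough numbers on that toy example (again, a sketch of my own, not TR’s actual methodology; the two orderings and the 33.3 ms threshold are assumptions), the arrangements below share the exact same ~6 ms average, so averaging can’t tell evenly spaced spikes from a nasty cluster. A beyond-threshold measure at least flags the slow frames, but only a frame-by-frame latency plot shows where they bunch up.

[code<]
# Toy example: made-up frame times, 95% at 1 ms and 5% at 100 ms (not real data).
import numpy as np

# Same population of frame times, two different orderings.
spread = np.concatenate([np.append(np.full(19, 1.0), 100.0) for _ in range(50)])
clustered = np.concatenate([np.full(950, 1.0), np.full(50, 100.0)])

for name, trace in [("evenly spaced spikes", spread), ("clustered spikes", clustered)]:
    beyond = np.sum(trace[trace > 33.3] - 33.3)
    print(f"{name}: mean = {trace.mean():.2f} ms, time beyond 33.3 ms = {beyond:.0f} ms")

# Both orderings report the identical ~5.95 ms mean and the identical time beyond
# the threshold; where the 100 ms frames cluster only shows up on a per-frame plot.
[/code<]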

        • shaq_mobile
        • 7 years ago

        Oh no you didn’t! You should know better than to marginalize TR on here. 🙂

        • halbhh2
        • 7 years ago

        You brought out the TR vote posse, which seems to number around a half dozen or so. It’s kind of an auto-vote response you get if you say TR wasn’t the first or missed something, etc.

        They don’t realize we only bother to comment here in the first place due to the quality of TR posts!

      • south side sammy
      • 7 years ago

      I haven’t read their article, but this isn’t the first time they’ve used it. It did start appearing after several copy-and-paste elements from this site started showing up in their threads.

        • jensend
        • 7 years ago

        Looks like you’re right: they introduced it a week ago with a CPU test. This is the first GPU test they’ve used it for, however. A few days ago they did a GPU test without this metric but with per-frame “framerate” graphs (yes, inverting every frame time just to be different, even though that entirely obscures what’s important).

      • MadManOriginal
      • 7 years ago

      The solution to this problem is simple: Don’t read Tom’s Hardware.

        • jensend
        • 7 years ago

        Yep, normally I don’t. I noticed the AnandTech review, which had some good info about compute performance. Then, since TR’s review wasn’t up yet, I did a quick search to see if I could find anybody who’d done frame-time metrics, which led me to that review. After seeing their metric I needed an outlet to express my consternation.

    • phez
    • 7 years ago

    [quote<]Maybe we should get some quieter examples in the labs for some Crysis 3 testing, alongside two or three Titans, eh? We'll get right on that.[/quote<] By all means, please!
