Nvidia’s GeForce GTX Titan X graphics card reviewed

Some things don’t require a ton of introduction. Like a second Avengers movie or a new Corvette. You pretty much know the deal going in, and the rest is details. When Nvidia CEO Jen-Hsun Huang pulled out a Titan X graphics card onstage at GDC, most folks in the room probably knew the deal right away. The Titan X would be the world’s fastest single-GPU graphics card. The rest? Details.

Happily, it’s time to fill in a bunch of those details, since the Titan X is officially making its debut today. You may have already heard that it’s based on a brand-new graphics processor, the plus-sized GM200, which packs pretty much 50% more of everything than the GPU inside the GeForce GTX 980. Perhaps you’ve begun to absorb the fact that the Titan X ships with 12GB of video memory, which is enough to supply the main memory partitions on six GeForce GTX 960 cards—or 3.42857 GeForce GTX 970s. (Nerdiest joke you’ll hear all week, folks. Go ahead and subscribe.)

These details paint a bigger picture whose outlines are already obvious. Let’s fill in a few more and then see exactly how the Titan X performs.

The truly big Maxwell: GM200

The GM200 GPU aboard the Titan X is based on the same Maxwell architecture that we’ve seen in lower-end GeForce cards, and in many ways, it follows the template set by Nvidia in the past. The big version of a new GPU architecture often arrives a little later, but when it does, good things happen on a much larger scale.

The GM200—and by extension, the Titan X—differs from past full-sized Nvidia GPUs in one key respect, though. This chip is made almost purely for gaming and graphics; its support for double-precision floating-point math is severely limited. Double-precision calculations happen at only 1/32nd of the single-precision rate.
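
To put that 1/32nd ratio in perspective, here’s the back-of-the-envelope arithmetic, using the roughly 6.6-tflop single-precision peak quoted later in this review. This is a rough sketch, not an official spec sheet:

```python
# Rough double-precision estimate for the Titan X. The single-precision figure
# is derived the usual way (ALUs x 2 flops per FMA x boost clock); the 1/32
# ratio is the limitation described above.
sp_tflops = 3072 * 2 * 1.076e9 / 1e12
dp_tflops = sp_tflops / 32
print(f"SP: {sp_tflops:.1f} tflops  DP: {dp_tflops:.2f} tflops")  # ~6.6 vs ~0.21
```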

For gamers, that’s welcome news. I’m happy to see Nvidia committing the resources to build a big chip whose primary mission in life is graphics, and this choice means the GM200 can pack in more resources to help pump out the eye candy. For those folks wanting to do GPU computing with the Titan X, though, this news may be rather disappointing. The Titan X will likely still be quite potent for applications that require only single-precision datatypes, but the scope of possible applications will be limited. Folks who need serious double-precision throughput will have to look elsewhere.

Speaking of geeky details about chips, here’s an intimidating table:

       | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process
GK110  | 48      | 240/240 | 2880 | 5 | 384       | 7100 | 551 | 28 nm
GM204  | 64      | 128/128 | 2048 | 4 | 256       | 5200 | 398 | 28 nm
GM200  | 96      | 192/192 | 3072 | 6 | 384       | 8000 | 601 | 28 nm
Tahiti | 32      | 128/64  | 2048 | 2 | 384       | 4310 | 365 | 28 nm
Tonga  | 32 (48) | 128/64  | 2048 | 4 | 256 (384) | 5000 | 359 | 28 nm
Hawaii | 64      | 176/88  | 2816 | 4 | 512       | 6200 | 438 | 28 nm

Like every other chip in the Maxwell family and the ones in the Kepler generation before it, the GM200 is manufactured on a 28-nm fab process. Compared to the GK110 chip that powers older Titans, the GM200 is about 50 square millimeters larger and crams in an additional billion or so transistors.


A simplified block diagram of the GM200. Source: Nvidia.

The block diagram above may be too small to read in any detail, but it will look familiar if you’ve read our past coverage of the Maxwell architecture in our GeForce GTX 750 Ti and GTX 980 reviews. What you see above amounts to 50% more of almost everything compared to the GM204 chip that drives the GTX 980. The GM200 has six graphics processing clusters, or GPCs, which are nearly complete GPUs unto themselves. In total, it has 24 shader multiprocessors, or SMs, each of which is split into four quads of 32 ALU slots. (Nvidia calls those subdivisions quads and calls the individual ALU slots “cores,” but that’s just marketing inflation. Somehow 96 “cores” wasn’t impressive enough.) Across the whole chip, the GM200 has a grand total of 3072 shader ALU slots, which we’ve reluctantly agreed to call shader processors.
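
As a quick sanity check on those unit counts, the arithmetic from the figures above works out like so; this is just back-of-the-envelope math, nothing official:

```python
# Back-of-the-envelope check on the GM200's shader resources as described above.
gpcs = 6
sms_per_gpc = 4            # 24 SMs in total across the six GPCs
quads_per_sm = 4
alus_per_quad = 32         # so 128 ALU slots per SM
sms = gpcs * sms_per_gpc
alus = sms * quads_per_sm * alus_per_quad
print(sms, alus)           # 24 3072
```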

Compared to the GK110 chip before it, the GM200 has a somewhat different mix of resources. The big Maxwell has twice as many ROPs, which should give it substantially higher pixel throughput and more capacity for the blending work needed for multisampled antialiasing (MSAA). The GM200 also has a few more shader ALU slots and can rasterize one additional triangle per clock cycle. Notably, though, the new Maxwell’s texture filtering capacity is a little lower than its predecessor’s.

These changes aren’t anything too shocking given what we’ve seen from other Maxwell-based GPUs. Thing is, the Maxwell architecture includes a bunch of provisions to make sure it takes better advantage of its resources, and that’s where the real magic is. For instance, the chips’ L2 cache sizes aren’t shown in the table above, but they probably should be. The GM200’s cache is 3MB, double the size of the GK110’s. The added caching may help make up for the deficit in raw texture filtering rates. Also, Maxwell-based chips have a simpler SM structure with more predictable instruction scheduling. That revised arrangement can potentially keep the shader ALUs more consistently occupied. And Maxwell chips can better compress frame buffer data, which means the GM200 should extract effectively more bandwidth from its memory interface, even though it has the same 384-bit width as the GK110’s.

In fact, I’m pretty sure there’s at least one significant new feature built into the Maxwell architecture that Nvidia isn’t telling us about. Maxwell-based GPUs are awfully efficient compared to their Kepler forebears, and I don’t think we know entirely why. We’ll have to defer that discussion for another time.

Nvidia Titans the screws

        | GPU base clock (MHz) | GPU boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | GDDR5 transfer rate | Memory size | Peak power draw | Intro price
GTX 960 | 1126 | 1178 | 32 | 64  | 1024 | 128    | 7 GT/s | 2 GB       | 120W | $199
GTX 970 | 1050 | 1178 | 56 | 104 | 1664 | 224+32 | 7 GT/s | 3.5+0.5 GB | 145W | $329
GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256    | 7 GT/s | 4 GB       | 165W | $549
Titan X | 1002 | 1076 | 96 | 192 | 3072 | 384    | 7 GT/s | 12 GB      | 250W | $999

The Titan X is by far the most potent member of Nvidia’s revamped GeForce lineup. The GM200 GPU has a base clock of about 1GHz, a little lower than the speeds you’ll see on the GTX 980. The slower clocks are kind of expected from a bigger chip, but the Titan X more than makes up for it by having more of everything else—including a ridiculous 12GB of GDDR5 memory. I don’t think anybody technically needs that much video RAM just yet, but I’m sure Nvidia is happy to sell it at the Titan’s lofty sticker price.

Heck, to my frugal Midwestern mind, the most exciting thing about the Titan is the fact that it portends the release of a slightly cut-down card based on the GM200, likely with 6GB of VRAM, for less money.

Nvidia has equipped the Titan X with its familiar dual-slot aluminum cooler, but this version has been coated with a spiffy matte-black finish. The result is a look similar to a blacked-out muscle car, and I think it’s absolutely bad-ass. Don’t tell the nerds who read my website that I got so excited about paint colors, though, please. Thanks.

Many of the mid-range cards floating around in Damage Labs these days have larger coolers than the Titan X, so it’s kind of impressive what Nvidia has been able to do in a reasonable form factor. The Titan X requires two aux power inputs, one six-pin and one eight-pin, and it draws a peak of 250W total. Nvidia recommends a 600W PSU in order to drive it.
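
For context, the connector math leaves a bit of headroom, assuming the standard PCIe power limits (75W from the x16 slot, 75W from a six-pin plug, 150W from an eight-pin plug); Nvidia doesn’t spell this out, so treat it as a rough check:

```python
# Rough power-budget check for the Titan X's 250W peak draw,
# assuming standard PCIe power limits per connector.
slot, six_pin, eight_pin = 75, 75, 150
print(slot + six_pin + eight_pin)   # 300W of available headroom vs. 250W peak
```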

We could revel in even more of the Titan X’s details, but I think you’ve got the picture by now. Let’s see how it handles.

Test notes

This review is the perfect opportunity to debut our new GPU test rigs. We have a pair of GPU rigs in Damage Labs, one for Radeons and the other for GeForces, that allow us to test two graphics cards at once. We’ve updated the core components on these systems to the very best new hardware, so they should fit quite nicely with the Titan X. Have a look:

The major components are:

  • Intel Core i7-5960X processor
  • Gigabyte X99-UD5 WiFi motherboard
  • Corsair Vengeance LPX DDR4 memory – 16GB
  • Kingston SSDNow 310 960GB SSD
  • Corsair AX850 modular power supply
  • Thermaltake Frio CPU cooler

The CPU-and-mobo combination offers an ideal platform for multi-GPU testing, with tons of PCIe Gen3 lanes and up to four PCIe x16 slots spaced two slots apart. That’s room for lots of shenanigans.

Speaking of shenanigans, whenever I show a picture of our test systems, people always ask why they include DVD drives. The answer? Mostly so I have a place to plug in the extra SATA and power leads that I need when I plug in external drives for imaging. Also, I can install legacy games I own in a pinch. So deal with it.

One place where I didn’t go for the ultra-high-end components in these builds was the SSDs. My concern here wasn’t raw performance—especially since most SATA drives are limited by the interface as much as anything—but capacity. Games keep growing in size, and the 480GB drives in our old test rigs were getting to be cramped.

Thanks to Intel, Gigabyte, Corsair, and Kingston for providing new hardware for our test systems. We’ve already started putting it to good use.

Our testing methods

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
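
For the curious, the filtering step is nothing exotic. A minimal sketch of a three-frame moving average might look like this; the frame times below are made up for illustration, not measured data:

```python
# Minimal sketch of a three-frame moving average over Fraps-style frame times.
def moving_average(frame_times_ms, window=3):
    out = []
    for i in range(len(frame_times_ms)):
        chunk = frame_times_ms[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

raw = [16.5, 16.8, 41.0, 9.1, 16.7, 16.6]   # a spike followed by a quick frame
print(moving_average(raw))                  # the spike is smoothed but still visible
```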

We didn’t use Fraps with Civ: Beyond Earth or Battlefield 4. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor         Core i7-5960X
Motherboard       Gigabyte X99-UD5 WiFi
Chipset           Intel X99
Memory size       16GB (4 DIMMs)
Memory type       Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings    15-15-15-36 2T
Chipset drivers   INF update 10.0.20.0,
                  Rapid Storage Technology Enterprise 13.1.0.1058
Audio             Integrated X79/ALC898 with Realtek 6.0.1.7246 drivers
Hard drive        Kingston SSDNow 310 960GB SATA
Power supply      Corsair AX850
OS                Windows 8.1 Pro

                         | Driver revision      | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB)
Asus Radeon R9 290X      | Catalyst 14.12 Omega | —    | 1050 | 1350 | 4096
Radeon R9 295 X2         | Catalyst 14.12 Omega | —    | 1018 | 1250 | 8192
GeForce GTX 780 Ti       | GeForce 347.84       | 876  | 928  | 1750 | 3072
Gigabyte GeForce GTX 980 | GeForce 347.84       | 1228 | 1329 | 1753 | 4096
GeForce Titan X          | GeForce 347.84       | 1002 | 1076 | 1753 | 12288

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing ’em up

Do the math involving the clock speeds and per-clock potency of the latest high-end graphics cards, and you’ll end up with a comparative table that looks something like this:

                   | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s)
Asus R9 290X       | 67  | 185/92  | 4.2 | 5.9  | 346
Radeon R9 295 X2   | 130 | 358/179 | 8.1 | 11.3 | 640
GeForce GTX 780 Ti | 45  | 223/223 | 4.6 | 5.3  | 336
Gigabyte GTX 980   | 85  | 170/170 | 5.3 | 5.4  | 224
GeForce Titan X    | 103 | 206/206 | 6.5 | 6.6  | 336
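
Those peak figures fall straight out of the unit counts and boost clocks in the earlier tables. Here’s a quick sketch of the arithmetic for the Titan X; the other cards work the same way, and this is back-of-the-envelope math rather than anything official:

```python
# Back-of-the-envelope peak rates for the Titan X from its published specs.
boost_ghz    = 1.076      # GPU Boost clock, GHz
rops         = 96         # pixels per clock
texels_clk   = 192        # int8 texels filtered per clock
shader_alus  = 3072
tris_per_clk = 6
mem_bits     = 384
gddr5_gtps   = 7.0        # GT/s per pin

print(rops * boost_ghz)                    # ~103 Gpixels/s pixel fill
print(texels_clk * boost_ghz)              # ~206 Gtexels/s filtering (int8)
print(tris_per_clk * boost_ghz)            # ~6.5 Gtris/s rasterization
print(shader_alus * 2 * boost_ghz / 1000)  # ~6.6 tflops (an FMA counts as 2 flops)
print(mem_bits / 8 * gddr5_gtps)           # 336 GB/s memory bandwidth
```

Note that memory bandwidth is the one line that doesn’t depend on the GPU clock; it’s just the 384-bit interface times the 7 GT/s GDDR5 transfer rate.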

We’ve shown you tables like that many times in the past, but frankly, the tools we’ve had for measuring delivered performance in these key rates haven’t been all that spectacular. That ends today, since we have a new revision of the Beyond3D graphics architecture test suite that measures these things much more accurately. These are directed tests aimed at particular GPU features, so their results won’t translate exactly into in-game performance. They can, however, shed some light on the respective strengths and weaknesses of the GPU silicon.

Nvidia has committed a ton of resources to pixel fill and blending in its Maxwell chips, as you can see. Even the smaller, GM204-based GTX 980 surpasses AMD’s high-end Radeon R9 290X on this front, and the Titan X adds even more pixel throughput. This huge capacity for pixel-pushing should make the Titan X ready to thrive in this era of high-PPI displays.

This test cleverly allows us to measure the impact of the frame-buffer compression capabilities built into modern GPUs. The random texture used isn’t compressible, while the black texture should be easily compressed. The results back up Nvidia’s claims: its GPUs have had some form of frame-buffer compression for several generations, but the compression built into Maxwell is substantially more effective. Thus, the Titan X achieves effective transfer rates higher than its theoretical peak memory bandwidth.

The reason we don’t see any compression benefits on AMD’s R9 290X is because we’re hitting the limits of the Hawaii chip’s ROPs in this test. We may have to tweak this test in the future in order to get a sense of the degree of compression happening in recent Radeons.
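
To put rough numbers on that, here’s a back-of-the-envelope model of the frame-buffer write traffic implied by each card’s peak pixel fill rate. This isn’t how the Beyond3D suite works internally; it just illustrates why compression lets the Titan X exceed its raw bandwidth while the 290X runs into its ROPs first:

```python
# Frame-buffer write traffic implied by peak pixel fill, ignoring blending
# reads and Z traffic. Purely illustrative, not the test suite's actual math.
def implied_write_gbps(gpixels_per_s, bytes_per_pixel=4):
    return gpixels_per_s * bytes_per_pixel

print(implied_write_gbps(103))  # Titan X: ~412 GB/s of writes vs. 336 GB/s raw
print(implied_write_gbps(67))   # R9 290X: ~268 GB/s vs. 346 GB/s raw -> ROP-bound
```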

’tis a little jarring to see how closely the measured texture filtering rates from these GPUs match their theoretical peaks. These tools are way better than anything we’ve seen before.

Since the Fermi generation, Nvidia’s GPU architectures have held a consistent and often pronounced lead in geometry throughput. That trend continues with Maxwell, although the GM200’s capabilities on this front haven’t grown as much as in other areas.

I’m not quite sure what’s up with the polygon throughput test. The delivered results are higher than the theoretical peak rasterization rates for the GeForce cards. I have a few speculative thoughts, though. One, the fact that the GTX 980 outperforms the Titan X suggests this test is somehow gated by GPU clock speeds. Two, the fact that we’re exceeding the theoretical rate suggests perhaps the GPU clocks are ranging higher via GPU Boost. The “boost” clock on these GeForce cards, on which we’ve based the numbers in the table above, is more of a typical operating speed, not an absolute peak. Three, I really need to tile my bathroom floor, but we’ll defer that for later.

The first of the three ALU tests above is something we’ve wanted for a while: a solid test of peak arithmetic throughput. As you can see, the GM200 pretty much sticks the landing here, turning in just as many teraflops as one would expect based on its specs. The GTX 980 and R9 290X do the same, while the Kepler-based GTX 780 Ti is somewhat less efficient, even in this straightforward test case.

I have to say that, although the GM200 is putting on a nice show, that big shader array on AMD’s Hawaii chip continues to impress. Seems like no matter how you measure the performance of a GCN shader array, you’ll find strengths rather than weaknesses.

Now, let’s see how all of this voodoo translates into actual gaming performance.

Far Cry 4


Click the buttons above to cycle through the plots. Each card’s frame times are from one of the three test runs we conducted for that card.

Right away, the Titan X’s plot looks pristine, with very few frame time spikes and a relatively steady cadence of new frames.

Click over to the plot for the Radeon R9 295 X2, though. That’s the Titan X’s closest competition from the Radeon camp, a Crossfire-on-a-stick monster with dual Hawaii GPUs and water cooling. In the first part of the run, the frame time plots are much more variable than on the Titan X. That’s true even though we’re measuring with Fraps, early in the frame production pipeline, not at the display using FCAT. (I’d generally prefer to test multi-GPU solutions with both Fraps and FCAT, given that they sometimes have problems with smooth frame dispatch and delivery. I just haven’t been able to get FCAT working at 4K resolutions.) AMD’s frame-pacing for CrossFire could possibly smooth the delivery of frames to the display beyond what we see in Fraps, but big timing disruptions in the frame creation process like we’re seeing above are difficult to mask (especially since we’re using a three-frame moving average to filter the Fraps data).

Then look what happens later in the test session: frame times become even more variable. This is no fluke. It happens in each test run in pretty much the same way.

Note that we used a new Catalyst beta driver supplied directly by AMD for the R9 295 X2 in Far Cry 4. The current Catalyst Omega driver doesn’t support multi-GPU in this game. That said, my notes from this test session pretty much back up what the Fraps results tell us. “New driver is ~50 FPS. Better than before, but seriously doesn’t feel like 50 FPS on a single GPU.”

Sorry to take the spotlight off of the Titan X, but it’s worth noting what the new GeForce has to contend with. The Radeon R9 295 X2 is capable of producing some very high FPS averages, but the gaming experience it delivers doesn’t always track with the traditional benchmark scores.

The gap between the FPS average and the 99th percentile frame time tells the story of the Titan X’s smoothness and the R9 295 X2’s stutter.

We can understand in-game animation fluidity even better by looking at the “tail” of the frame time distribution for each card, which illustrates what happens in the most difficult frames.

For the quickest 50-60% of frames in our test sequence, the 295 X2 is faster than anything else. That changes as the proportion of frames rendered rises, though, and once we reach about 85%, the 295 X2’s frame times cross over and exceed those of the single Hawaii GPU aboard the R9 290X. By contrast, the Titan X’s curve is low and flat.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
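
For reference, one way to compute a “time spent beyond X” figure from a frame-time trace is to sum only the portion of each long frame that exceeds the cutoff. A minimal sketch with made-up frame times:

```python
# Minimal sketch of a "time spent beyond X" metric: sum the portion of each
# frame's render time that exceeds the threshold. Frame times are illustrative.
def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

trace = [16.2, 17.0, 35.5, 16.4, 52.0, 16.8]
for threshold in (50.0, 33.3, 16.7):
    print(threshold, time_beyond(trace, threshold))
```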

One interesting quirk of this test is demonstrated in the 33-ms results. The GeForce GTX 980 produces almost every single frame in less than 33.3 milliseconds, nearly matching the Titan X. While playing, the difference in the smoothness of animation between the two cards isn’t terribly dramatic.

Meanwhile, the GeForce GTX 780 Ti suffers by comparison to the GTX 980 and the R9 290X. I suspect that’s because its 3GB of video memory isn’t quite sufficient for this test scenario.

Alien: Isolation


After all of the drama on the last page, it’s a relief to see all of the cards behaving well here. What’s remarkable is how well each GPU performs given the quality of the visuals produced by this game. My notes for the R9 295 X2 say: “Isolation isn’t bad, maybe one or two tiny hits. But also isn’t bad with one GPU.”


Click the middle button above, and you’ll see that none of these graphics cards spends any time above the 33-ms threshold.

Civilization: Beyond Earth

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.


Check out those smooth frame time plots for the Radeons with Mantle. All of these graphics cards handle this game well in 4K, but the Radeons are just a notch better. As evidence of that fact, notice that the R9 295 X2 trails the Titan X in the FPS average but is quicker at the 99th percentile frame time mark. This outcome is the result of intentional engineering work by AMD and Firaxis. They chose to use split-frame rendering to divvy up the load between the GPUs. Thus, they say:

In this way the game will feel very smooth and responsive, because raw frame-rate scaling was not the goal of this title. Smooth, playable performance was the goal. This is one of the unique approaches to mGPU that AMD has been extolling in the era of Mantle and other similar APIs.

I expect to see game developers and GPU makers making this choice more often in games based on DirectX 12 and Vulkan. Thank goodness.

Note that AMD has taken a bit of a hit by choosing to do the right thing with regard to multi-GPU scaling here. The curve above tells the story. With SFR load balancing, two Hawaii GPUs perform almost identically to a single Titan X across the entire tail of the frame time distribution. SFR doesn’t inflate benchmark scores like AFR does, so it leads to a more honest assessment of a multi-GPU solution’s potential.


Middle Earth: Shadow of Mordor


This whole test is pretty much a testament to the Titan X’s massive memory capacity. I used the “Ultra” texture quality settings in Shadow of Mordor, which the game recommends only if you have 6GB of video memory or more. And I tested at 4K with everything else cranked. As you can see from the frame time plots and the 99th percentile results, the Titan X handled this setup without issue. Nearly everything else suffered.


With 4GB of RAM onboard, the R9 290X and GTX 980 handled this scenario similarly, with occasional frame time spikes but general competence. With only 3GB, the GTX 780 Ti couldn’t quite keep it together.

Meanwhile, although the R9 295 X2 has 8GB of onboard memory, 4GB per GPU, it suffers because AFR load-balancing has some memory overhead. Effectively, the 295 X2 has less total memory on tap than the R9 290X. Thus, the 295 X2 really struggles here. My notes say: “Super-slow ~7 FPS when starting game. Occasional slowdowns during, should show up in Fraps. Slow on enemy kill sequences. Super-slow in menus. Unacceptable.” The fix, of course, is to turn down the texture quality, but that is a compromise required by the 295 X2 that the 290X might be able to avoid. And the Titan X laughs.

Battlefield 4

We tested BF4 on the Radeons using the Mantle API, since it was available.



Here’s another case where a game uses Mantle on the R9 295 X2 and performs nicely in all of our metrics, with a relatively smooth and sensible frame time distribution. Remarkably, this is also another case where the R9 295 X2’s performance matches that of the Titan X almost exactly. Seriously, look at those curves. There’s much to be said about the virtues of a single, big GPU.

Crysis 3



My notes for this game on the R9 295 X2 say: “Seems good generally, but occasional hiccups that take a while.” Take a look at the frame time plots, and you’ll see that I nailed it. By contrast, the Titan avoids those big hiccups while also keeping frame times low overall. That reality is best reflected in the “badness” metrics at 33 and 50 ms.

Borderlands: The Pre-Sequel


This game uses ye olde DirectX 9 to access the GPU, and AMD doesn’t support CrossFire in DX9 on the R9 295 X2. As a result, the 295 X2 is a little slower than the R9 290X here, due to its slightly lower clock speeds and differing thermal constraints. The weird oscillating pattern you’re seeing in the frame time plots for the Radeons is an old issue with Borderlands games that AMD fixed with its Catalyst 13.2 drivers and apparently needs to fix again.


Despite that oscillating pattern, the Radeons spend almost no time above the 33-ms threshold. That’s good. Better is the Titan X, which spends very little time above the 16.7-ms threshold. It’s almost capable of a “perfect” 60 FPS in this game at 4K.

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

The Titan X more or less holds the line on power consumption compared to the GTX 780 Ti. You know from the preceding pages that its performance is substantially higher, though, and it’s supporting 12GB worth of memory on lots of DRAM chips. So… yeah. Wow.

Speaking of “wow,” have a look at the power draw on the R9 295 X2. Good grief.

Noise levels and GPU temperatures

These video card coolers are so good, they’re causing us testing problems. You see, the noise floor in Damage Labs is about 35-36 dBA. It varies depending on things I can’t quite pinpoint, but one notable contributor is the noise produced by the lone cooling fan always spinning on our test rig, the 120-mm fan on the CPU cooler. Anyhow, what you need to know is that any of the noise results that range below 36 dBA are running into the limits of what we can test accurately. Don’t make too much of differences below that level.

Two of the cards above, the GTX 980 and the R9 290X, have custom, non-reference coolers that sport more fans and larger surface areas than Nvidia’s stock offering. As a result, those two cards are quieter under load than the Titan X. The Titan X is no slouch, though. All of these cards are reasonably quiet.

Of course, the biggest cooler here belongs to the Radeon R9 295 X2, but its integrated liquid cooler with separate radiator has an awful lot of heat to dissipate.

Conclusions

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS, so that higher is better, in order to make this work.
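
The conversion itself is trivial: take the 99th-percentile frame time in milliseconds and invert it. A quick sketch with hypothetical numbers:

```python
# Converting a 99th-percentile frame time into an FPS-style number, as in the
# value scatter plots. The frame-time trace here is hypothetical.
def percentile(sorted_vals, pct):
    idx = int(round((pct / 100) * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

trace_ms = sorted([16.2, 16.5, 16.8, 17.1, 18.0, 24.9, 33.4])
p99 = percentile(trace_ms, 99)
print(1000.0 / p99)   # 99th-percentile "FPS"
```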


The Titan X is outright faster than everything we tested, including the Radeon R9 295 X2, in our frame-time-sensitive 99th-percentile results. That tracks with my subjective experiences, as I’ve detailed in the preceding pages. The R9 295 X2 has more total GPU power, as the FPS average indicates, but that power doesn’t cleanly translate into smoother gaming. In fact, the results we saw from Beyond Earth and BF4 suggest that the Radeon R9 295 X2’s true potential for smooth gaming pretty closely matches the Titan X’s. Unfortunately, the situation in most games is worse than that for the Radeon.

Heck, as a gamer, if you gave me a choice of an R9 295 X2 or an R9 290X, free of charge, I’d pick the R9 290X. The 290X is a good product, even if it’s a bit behind the curve right now. The 295 X2 is not in a good state.

Here’s the other part of the picture. The Titan X offers a considerable improvement in performance over the GeForce GTX 780 Ti, yet its system-level power draw is about the same—and it’s lower than the Radeon R9 290X’s. Without the benefit of a new chip fabrication process, Nvidia has produced an honest generational improvement in GPU efficiency from Kepler to Maxwell. The Titan X puts a big, fat exclamation mark on that fact. If you want the ultimate in gaming power, and if you’re willing and able to fork over a grand for the privilege, there’s no denying that the Titan X is the card to choose.

For those of us with slightly more modest means, I expect Nvidia will offer a slimmed-down version of the GM200 with 6GB of GDDR5 in a new card before long. Rumors are calling it the GeForce GTX 980 Ti, but I dunno what the name will be. Value-conscious buyers might want to wait for that product. But what do they know?


Comments closed
    • madgun
    • 5 years ago

    Was able to finally snag a Titan X. Overclocks like a champ 1380 Mhz / 7.8 Ghz. Has crazy amounts of VRam and I must say the heatsink shroud looks mean. It’s defnitely an enthusiast grade part and finally there’s a single GPU that can do 4K at 60 fps.

    • Jigar
    • 5 years ago

    None of the review around match the quality, excellent work Damage. Thank you.

    • willyolio
    • 5 years ago

    Oddly enough, this gives me high hopes for AMD’s 390X. The “leaked” slides with performance numbers that came out last week showed Titan X at a 46% FPS over the 290X (averaged over multiple games).

    The numbers you got were almost right on the money.

    Which makes me think the 390X will, in fact, beat the Titan X slightly, if the leaked benchmarks were accurate.

    [url<]http://www.tweaktown.com/news/44047/leaked-benchmarks-tease-radeon-r9-390x-against-titan-gtx-980-ti/index.html[/url<]

    • anotherengineer
    • 5 years ago

    POST 400!!!!!!!!!!!!!!!!!!!

    I’ve always wanted to do that 🙂

      • geekl33tgamer
      • 5 years ago

      It’s catching up to this article: [url<]https://techreport.com/discussion/21813/amd-fx-8150-bulldozer-processor#metal[/url<] I think that's got the highest comments of anything else on this site.

        • anotherengineer
        • 5 years ago

        This one has is beat 😉

        [url<]https://techreport.com/news/2799/dr-evil-asks-gxp-problems[/url<]

          • geekl33tgamer
          • 5 years ago

          Wow, long way to go then… 😉

    • Chrispy_
    • 5 years ago

    Will there be a 980Ti based on a die-harvested version of GM200?

      • Prestige Worldwide
      • 5 years ago

      Yes, according to a recent benchmark leak, there is a cut-down GM200 GeForce on the way.

      Edit: Alleged leaked bench image added to post.

      [url<]http://i.imgur.com/gXG73vW.jpg[/url<] Note that it also shows the 390x at the top of the single-gpu heap.

    • ronch
    • 5 years ago

    AMD should take a page from Nvidia’s book and market a super pricey card and call it Colossus.

      • NeelyCam
      • 5 years ago

      [quote<]call it Colossus.[/quote<] [url=https://www.youtube.com/watch?v=fTYXbFsWg-M<]Sounds good to me[/url<]

      • Krogoth
      • 5 years ago

      [url<]https://www.youtube.com/watch?v=DX6bzq3M4M4[/url<]

      • the
      • 5 years ago

      Zeus would be a better marketing name as he’s the one who defeated the Titans.

        • Meadows
        • 5 years ago

        But can they defeat them?

          • the
          • 5 years ago

          That is a very good question. There is hope that with HBM, it’d be possible. I just hope that AMD is able to balance the design to utilize the massive bandwidth increase.

    • GrimDanfango
    • 5 years ago

    I really thought a straight-up 50% extra everything over the GTX 980 would lead to a near-enough 50% higher framerate, along with perhaps a hint of improved performance on top for 4k resolutions given the extra memory.

    It looks an awful lot closer to a 25% increase, which seems like a rather minor boost for double the price and a 50% increase in TDP.

    One thing I wonder about is the DSR performance. I’ve found on a 2560×1440 display, running 5120×2880 DSR is an absolute killer on the GTX 980, to the point where it actually causes quite a few games to drop high res textures all over the place as it runs out of video memory. I wonder if the added memory and bandwidth of the Titan X might put it well ahead in that scenario.

    Any inclination to test such things Scott?

      • auxy
      • 5 years ago

      Actually, a lot of games I tested exhibited the same behavior on my GTX TITAN, which has 6GB of RAM. I don’t think the issue is running out of video memory (hardware), but the game running out of allocated or cached memory. I most notably had this problem in Neverwinter and Champions Online (same developer and engine), but talking to the developers got a “nobody plays in 4K so we don’t care” response. Sigh…

        • GrimDanfango
        • 5 years ago

        Interesting, I wonder what the problem is then. I had taken a cursory look at GPU-Z when running 4x DSR in a couple of games, and it appeared that with it on, I was scraping the ceiling of 4GB vram usage…
        I wouldn’t have thought developers would put in a maximum ram cap. Maybe it’s some sort of 32-bit memory addressing issue that causes the maximum allocation to cap at 4GB regardless.
        I’ve certainly had issues like that with OpenCL recently.

          • Meadows
          • 5 years ago

          There are tools available that can “patch” 64-bit memory usage flags into game executables, but I don’t know if that would actually positively impact the videomemory side of things and I don’t know if the game in question isn’t 64-bit already.

      • cobalt
      • 5 years ago

      [quote<]I really thought a straight-up 50% extra everything over the GTX 980 would lead to a near-enough 50% higher framerate[/quote<] But the clock on the 980 is almost 25% higher, so that's not really a fair expectation. If you take that into account the extra 50% of everything DOES result in a roughly 50% improvement clock-for-clock

        • GrimDanfango
        • 5 years ago

        I hadn’t really considered how much lower clock it is than the 980.

        I can’t really see that excuses anything on a $1000 card though. It really *should* be clocked up with the 980 for that money. It doesn’t say much positive about the TDP either, if they’re hitting a 50% power requirement increase even when downclocking so much.

        The only real justification for the Titan cards existing at the price point they demand is that they are simply no-question all-powerful beasts. This just isn’t enough of a margin to even begin to justify the price as I see it.

          • Melvar
          • 5 years ago

          It’s the RAM. Yes, it’s stupidly expensive, but it’s the only game in town if you want ALL THE RAM. The performance is frankly the minimum they could get away with if they really want to charge $1000 for 12GB of GDDR5.

            • Meadows
            • 5 years ago

            This kind of future-proofing makes sense for 4K resolutions and games over the next 2-3 years, except it’s going a bit too far into the future because the chip’s performance will have been considered inadequate long before games would generally use all of that memory.

            Better buy one of the 6-8 GiB cards 6-12 months from now from whichever vendor, or 4 GiB today.

    • Westbrook348
    • 5 years ago

    Major respect for the Titan X’s capabilities, and I’m sure plenty of people will buy with no regrets, but the pricetag is too high for me personally for something that will be mid-range in 1-2 year’s time. When I build my new rig this summer, I’ll probably go 970 SLI and save myself a few hundred bucks (I’m targeting 1440p 120Hz for 3D Vision, not 4K). If it were $700-800 things might be totally different. Either way, I’m hoping for some overdue price drops on GM204 in the next few months.

    I know you guys at TechReport like to focus on single GPU, but I wish you’d write an article about the state of SLI and CF in 2015 using your frametime metrics. Other sites show 970 and 980 SLI beating Titan X, but I don’t care about their simple FPS comparisons; that’s why I’m here. I am curious how the frame times for 970 SLI compare to Titan X at 1440p. Do I need to worry about bad microstuttering, or do 2015 drivers limit it for the most part? I don’t think anyone could answer this better than Scott.

    • Hyp3rTech
    • 5 years ago

    295×2, while less elegant, is cheaper, colder and quieter. Not mitx ready, but this is not what to put into a mitx rig anyways. Now it sucks when crossfire is not supported, but I would go 295×2 over Titan X 9 out of 10 times.

      • Meadows
      • 5 years ago

      Cooler? I do hope you jest. That thing takes so much power that it literally heats the room it’s in, not just the PC case.

        • auxy
        • 5 years ago

        It still runs cooler. (‘ω’)

          • Meadows
          • 5 years ago

          Not for long it doesn’t, if you actually keep playing instead of just doing a test run.

            • Hyp3rTech
            • 5 years ago

            Not true. It doesn’t top 70.

            • Airmantharp
            • 5 years ago

            If that’s it’s limiter (I’m not looking it up), then it’ll throttle and shut down. For semiconductors, more energy usage = more heat, every time.

            • Hyp3rTech
            • 5 years ago

            It doesn’t throttle at 70.

            • Airmantharp
            • 5 years ago

            It will either throttle or die.

      • Damage
      • 5 years ago

      Hyp3rTech has been banned for shilling

        • chuckula
        • 5 years ago

        [url=https://youtu.be/5oSz8Xip_ho<] Hammer to fall!![/url<]

        • anotherengineer
        • 5 years ago

        Really???

        He did only say 9/10, not 10/10 😀

        So is that shilling or fanboy’in?

          • Kougar
          • 5 years ago

          I’d suggest you search for his most recent forum thread, the evidence is fairly straightforward.

            • Klimax
            • 5 years ago

            Can’t be found at all.

            • Meadows
            • 5 years ago

            Now I’m curious.

            • Klimax
            • 5 years ago

            Me too. It appears as if all his posts were purged. (Post count on profile reads 0)

            • Kougar
            • 5 years ago

            Someone googled and figured out that he created the same threads & posts across a large number of tech/game forums. Every post was bashing a specific GPU or pretending to have a major problem with said GPU. Yet he never responded to posts from people helping troubleshoot, he’d just make more posts bashing or complaining about the product instead.

        • Prestige Worldwide
        • 5 years ago

        NUKED FROM SPACE!

    • NeelyCam
    • 5 years ago

    I’m disappointed by the lack of fanboi nerdrage in the comments

      • Klimax
      • 5 years ago

      I guess you missed my small exchange I see… 😀

      • chuckula
      • 5 years ago

      What, only one person getting the banhammer isn’t enough for you?

        • NeelyCam
        • 5 years ago

        Oh – I didn’t notice the ban… I guess I should read the comments more carefully before commenting

          • chuckula
          • 5 years ago

          [quote<] I guess I should read the comments more carefully before commenting[/quote<] Neely, you gotta stick with what got you here, so don't start reading them now!

      • Prestige Worldwide
      • 5 years ago

      This isn’t WCCFtech.

        • NeelyCam
        • 5 years ago

        Lol.. just checked, and they have over 1200 comments on their TitanX article:

        [url<]http://wccftech.com/nvidia-geforce-gtx-titan-x-official/[/url<]

    • Damage
    • 5 years ago

    User sschaem has been banned for corporate shilling. Looking at some other suspect accounts now. Make your time.

      • chuckula
      • 5 years ago

      OH CRAP! HE KNOWS I’M SHILLING FOR I CAN’T BELIEVE IT’S NOT BUTTER!

      [stares nervously at autographed Fabbio poster]

      • Airmantharp
      • 5 years ago

      YOU HAVE NO CHANCE TO SURVIVE!

      /ssk

      • Prestige Worldwide
      • 5 years ago

      User sschaem has been nuked from space.*

        • chuckula
        • 5 years ago

        It was the only way to be sure.

        • SoM
        • 5 years ago

        only way to be sure…

        dammit, missed chucks post

        but the message stands

      • auxy
      • 5 years ago

      You banned sschaem…? (ノД`)・゜・。

        • chuckula
        • 5 years ago

        OK, I have absolutely know idea if that emoticon represents joy, sorrow, or mild indigestion.

          • auxy
          • 5 years ago

          Impassioned wailing! (ノД`)・゜・。

            • chuckula
            • 5 years ago

            I thought whaling was outlawed!

            Oh, you mean [i<]wailing[/i<]. Nevermind.

            • auxy
            • 5 years ago

            No, [i<]whaling[/i<] is what I call it when I'm hunting my wife. ( `ー´)ノシ [s<][super<][i<][edit]Sometimes when I make a comment reply, the comment system does this weird thing where it says "Loading..." forever with the little swirly, and when I reload the page, my comment is there, but only shows as a '...' before you expand it ... I wonder why? Trying to fix it now by editing and re-posting... try #3? ... maybe it will work in IE11?[/i<][/super<][/s<]

            • auxy
            • 5 years ago

            I dunno, it seems like sometimes my comments get ‘corrupted’ and can no longer be posted correctly. Kinda weird.

            • Melvar
            • 5 years ago

            [s<]I looks like having that emoji after a particular length of line causes the issue.[/s<] It appears that the problem is having a multi-byte character as the 63rd or 64th character in a post.

            • Melvar
            • 5 years ago

            <———————- 63 characters ————————>シ

            This has to be a record for edits to a single post.

            • Melvar
            • 5 years ago

            <———————– 64 characters ————————>シ

            • sweatshopking
            • 5 years ago

            mY RECORD WAS 22

            • NeelyCam
            • 5 years ago

            What happened to your ‘m’…?

            • auxy
            • 5 years ago

            Shift key! (‘ω’)

            • NeelyCam
            • 5 years ago

            SSK uses the shift key…?

            • Melvar
            • 5 years ago

            hE’S BEEN FAKING THIS WHOLE TIME

            • ronch
            • 5 years ago

            And all this time I thought you’re a lady.

            Oh wait, never mind.

            • auxy
            • 5 years ago

            Always a woman, never a lady! (*´∀`*)

            • derFunkenstein
            • 5 years ago

            no, [url=https://www.youtube.com/watch?v=z_lwocmL9dQ<]this[/url<] is impassioned wailing.

      • Meadows
      • 5 years ago

      I’m actually surprised a little. He’s been around for a little while so I figured he must be below threshold or something.

        • NeelyCam
        • 5 years ago

        Wow, you’re right:

        [quote<]Joined: Tue Oct 02, 2007 9:05 am Last visited: Wed Mar 18, 2015 12:00 pm[/quote<] Almost eight years...

          • JustAnEngineer
          • 5 years ago

          …and an active participant in the forums, with nearly 300 posts.

            • NeelyCam
            • 5 years ago

            It would be cool to see how many comments posts I’ve made in the last five years.

            Over 8000?

            • auxy
            • 5 years ago

            He’s also one of my favorite posters! ( ;_;) Or was…

            • NeelyCam
            • 5 years ago

            Somebody’s downthumbing all your posts. Weird.

            • auxy
            • 5 years ago

            Yah, I have a lot of haters. I say unpopular things. Hehehe. ( ;∀;)

            [i<][super<]I had to edit this post to fix the ... problem again! It only happens if the 63rd character is a multibyte character! How the hell do I keep doing that so precisely all the time?! Isn't that weird?![/i<][/super<]

            • ronch
            • 5 years ago

            Same here. Sometimes I have multiple posts on an article and I see they all get -1’d from the last time I visited. Someone thinks it’s fun downthumbing my posts just for the heck of it, or at least it looks that way.

            • GrimDanfango
            • 5 years ago

            [quote<]How the hell do I keep doing that so precisely all the time?![/quote<] Well, you typically have between 5 and 10 characters at the end of every post that are likely to trip it off. It's not exactly tiny odds 😛

            • auxy
            • 5 years ago

            No, typically only one is a multibyte character.

            • Melvar
            • 5 years ago

            There were several characters that caused it in the one I tested with (this one: ( `ー´)ノシ ).

            Also there are two spots, the 63rd & 64th where it seems to happen.

            • auxy
            • 5 years ago

            W-well, that one’s an exception. (・_・;)

            • GrimDanfango
            • 5 years ago

            Incidentally, in case it seems I’m ragging on your crazy emoji-things, I wouldn’t advocate you stop using them… they often make me smile 🙂

            • auxy
            • 5 years ago

            Hehe. I get that a lot. I’m not very good at expressing myself, so I have to rely on emoji to help emphasize myself. (‘ω’)

            • Melvar
            • 5 years ago

            I have a hard time reading emotion through those weird Japanese face butts.

            I know what you mean though. I personally hate using emoticons myself, but it’s hard to get intent across without it. Like the first line in this post. It could be mean, or just joking around. A smiley would have helped.

            • green
            • 5 years ago

            it would be related to the comment summary, before a comment is expanded, being 64 characters (single byte) long
            so when you happen to get a multibyte charcter on the limit, and the code in the back chops up the string, it’s leaving a non-printable charcter
            this then seems to trigger some kind of comment injection attack prevention and rejects the summary before it’s saved
            which as demonstrated by melvar’s testing higher up in separate child thread above, only occurs on the 63rd character
            where the 64 charcter test displays the summary just fine

        • sweatshopking
        • 5 years ago

        HEY GUIZE!! WHAT’S GOING ON!?

      • anotherengineer
      • 5 years ago

      Can one get banned for anti-shilling………………..everything?

        • ImSpartacus
        • 5 years ago

        Are you implicating Krogoth?

      • ronch
      • 5 years ago

      Whoa. I kinda find his posts interesting. I’ve always wondered how he gets his ‘insider info’.

      • ptsant
      • 5 years ago

      I really hope you have some additional information (IP adress or whatever) because what I see publicly is a long-time member who writes reasonably intelligent and polite comments, whether you agree with them or not. I’m all for active administration and I enjoy the quality of your work and the ensuing discussion, but I wouldn’t want to see accounts deleted simply because they are on the wrong camp.

        • Tirk
        • 5 years ago

        Indeed, its clear sschaem had a bias but if bias was the mark of a shill then most users should probably be banned for shilling. I’ll admit that I have a bias and to work towards objectivity a person has to admit that. Anyone who thinks they are immune to bias are probably the least objective.

        Took a quick scan through his posts and saw pretty much the same as you did. I hope others read your post.

      • Chrispy_
      • 5 years ago

      +1 for more info!

      I’m not disputing the ban, I’m just curious how a longstanding member of the community hid so well under the guise of a fanboy which (whilst provoking amusing arguments) never really does and harm – if anything it provokes further discussion with people trying (and often succeeding) to prove the fanboy wrong with links to evidence.

      It’s your site and if he was simply rude and argumentative I’m pretty sure you’d have stated that as the reason for the ban – I highly doubt anyone would have raised an eyebrow if that was the case.

        • w76
        • 5 years ago

        [quote<] never really does and harm[/quote<] Well, I agree it's a shame, but subtle propaganda is still propaganda, and that lowers the quality of discourse when one isn't actually coming from an honest position.

      • ClickClick5
      • 5 years ago

      Now I feel bummed, I don’t remember their comments. Probably something based on the 295 x2 is the lord of all, etc?

      • maxxcool
      • 5 years ago

      wow … did he sell his account ? I mean I know he was team ‘x’.. makes me sad I missed his post to see the straw that did him in..

        • Damage
        • 5 years ago

        I sacrificed my weekend and worked 12 hour days from Thursday (when we got the Titan X driver) up to Tuesday mid-day when we posted. Wrote the entire review between 8:30AM and 2:00PM on Tuesday. This happens too often, but it never gets easy.

        Really worked hard to produce a review with an honest assessment for the sake of PC gamers. Would be easier to slap in some FPS numbers and pretend that’s sufficient info, but it’s not.

        Posted the review and not long after, in drops sschaem with his usual M.O., not too subtly implying bias or worse on my part over some stupid little thing. Again.

        Straw, meet camel’s back.

        Yeah, I don’t owe him anything, particularly not a forum on my website.

        We are among the last, best independent media in this game, and we are NOT crafting reviews in order to favor one manufacturer over another. I will ban him, his kids, his dogs, his IP block. I don’t care. It’s over. Bensam123 can come at me if he wants. So can JAE or anybody else. This kind of crap needs to stop. Now.

          • maxxcool
          • 5 years ago

          Thanks for the quick feedback! With that I can easily picture the response and lack of respect in it. We’ve all seen it before.

          As to the work, you guys always do a stellar job so keep it up!

          • f0d
          • 5 years ago

          totally agree
          constantly doubting your results and slyly inferring that you favor one team over the other was getting old in every nvidia review

          one that sticks in my head is him doubting your power usage results in the 970 review but i know there was many more

          • Meadows
          • 5 years ago

          Not the dogs! Please!

          • thecoldanddarkone
          • 5 years ago

          Thank you for removing the chaff and thank you for your hard work.

          I’ve been coming here since highschool and I’m now in my 30’s. I will continue to come here.

          • ptsant
          • 5 years ago

          Hey, it was a great review, as always. It’s very hard to expose your work publicly. There are always people that complain, either because they enjoy complaining (such personalities do exist :-)) or simply because you can’t please everyone at the same time. In the end, actions speak louder than words: if people do come at the site, it’s not because they want to complain, but because they enjoy it. People who don’t like a site, simply don’t visit. In that sense, a regular critic is probably a bigger fan of your work than someone who only visits once or twice and never posts anything.

          For your own sanity, don’t feel compelled to follow the rythm of big corporate competitors. I don’t mind waiting 1-2 days for a review. It’s the quality that counts.

            • f0d
            • 5 years ago

            i dont think it was the complaining
            it was the sly suggestions that TR preferred one manufacturer over another and fudged the results to make one manufacturer look better than the other

            well thats what i think anyways

          • geekl33tgamer
          • 5 years ago

          Scott, they are a vocal minority and all sites have them. The majority of your readership and subscriber base are very grateful for the work that goes into this site.

          Do not let them get you down. 🙂

          • ImSpartacus
          • 5 years ago

          Don’t let yourself be affected by that kind of thing. Be an example of how to deal with trolling & similar behavior.

          I know any time a TR writer puts down a passionate comment, it invariably receives a few dozen upvotes and it ends up getting a lot of visibility. Maybe it’s not fair to always have your comments in the spotlight, but that’s the reality that you unfortunately have to live with.

          And the bottom line is that I want to see conversations about the Titan X or the incoming 390X and other awesome stuff. That’s the stuff that I come to this website to read & discuss. TR writers put too much work into articles like these to inadvertently cause the comments section to be about off-topic discussions.

          • ClickClick5
          • 5 years ago

          And this is why I have no problem subscribing. Not in the least.

          “We dont doctor the charts. The game/application was tested on card A and card B. These are the results.”

          Never lose this Scott. Never.

          • auxy
          • 5 years ago

          I think you’re getting a little touchy in your old age, Damage. (´・ω・`)

            • Damage
            • 5 years ago

            Nah, just slow to anger. sschaem mistook my past tolerance for weakness.

          • Westbrook348
          • 5 years ago

          Wow I think a lot of people didn’t realize just how much work you put in, Scott. I was requesting info on 970/980 SLI configs in comparison to TitanX and 295X2, but now I feel guilty. Your work is groundbreaking. Other sites treat Nvidia’s dual GPU setups favorably (high FPS/performance), but nothing compares to your analysis here at TechReport.

          • credible
          • 4 years ago

          I can certainly understand your disdain for what he used to do, kind of annoyed me as well.

          I am as pissed as anyone at AMD’s failures, now otoh having my i5-2500k, still and now a gtx 970 I could not possibly be happier, except for when I need to pay the monopolies when its time to upgrade.

      • Tirk
      • 5 years ago

      On a lighter note I’d like to state for the record that I am not a dingo.

      [url<]https://www.youtube.com/watch?v=hkjkQ-wCZ5A[/url<]

    • bfar
    • 5 years ago

    An amazing card, but like the original Titan, it’s simply atrocious value for money, even if you had money to burn. The upcoming Radeon 390x and so-called GTX 980Ti are massive elephants in the room. By summer we’ll have much cheaper products closing in on this level of performance (perhaps with a little overclocking), and everyone knows it!

    I can’t believe we’re seeing positive recommendations for this. Is the press so afraid of Nvidia now? Show some courage on behalf of consumers and call a spade a spade!

    Edit: just to clarify, I’m not referring to Scott’s review (which comes with a sensible qualification), I’m thinking of the tech press in general. I don’t believe Nvidia is duely challenged on the price of this product.

      • Damage
      • 5 years ago

      Hey, Scott here. Wrote the review, own the site, created the testing methods, got the t-shirt. From my conclusion:

      [quote<]If you want the ultimate in gaming power, and if you're willing and able to fork over a grand for the privilege, there's no denying that the Titan X is the card to choose. For those of us with slightly more modest means, I expect Nvidia will offer a slimmed-down version of the GM200 with 6GB of GDDR5 in a new card before long. Rumors are calling it the GeForce GTX 980 Ti, but I dunno what the name will be. Value-conscious buyers might want to wait for that product. [/quote<] Since you're accusing me of something pretty awful--being biased and/or afraid of the company whose product I've just worked very hard to review on our readership's behalf--I'd like to know what the heck you're talking about. Did you read?

        • bfar
        • 5 years ago

        Apologies Scott, I wasn’t actually referring directly to your review, it was the general consensus across the press I had in mind, and I accept that your conclusion is reasoned and qualified. I did not accuse any individual of being afraid, and I made no accusation of bias whatsoever. Again, I sincerely apologise if my post came across as an attack on your good self.

        Since you ask, I did indeed read and enjoy your work (as I regularly do), and thank you for it. With regard to the two paragraphs you’ve quoted, I wholeheartedly agree with the second. On the first, I personally wouldn’t have made the qualified recommendation on the basis of price/performance and upcoming products. Given the sheer scale of the price, I would love to see your conclusion challenge Nvidia a little harder, but that’s only a suggestion from an honest reader.

        • bfar
        • 5 years ago

        [url<]http://wccftech.com/nvidia-ethical-pricing-conundrum/#ixzz3Uw7XyGyW[/url<] Would you consider running a piece like this?

        • beck2448
        • 5 years ago

        Look at the 290X’s frame-time glitches in Crysis. What a joke!
        Titan X blows ALL of AMD’s space heater lawnmower cards out of the water on performance, power efficiency, and noise. AMD can’t even solve its glitch problems on a single GPU that’s been out for more than a year.

      • ermo
      • 5 years ago

      Are you referring to TR or the tech press in general with your “I can’t believe we’re seeing positive recommendations for this” line?

      Because if you’re pointing fingers at TR, you’re definitely barking up the wrong tree (cf. Scott’s reply).

      • Airmantharp
      • 5 years ago

      Expensive things are bad because they’re expensive…?

      • chuckula
      • 5 years ago

      YEAH WELL, I CAN’T BELIEVE IT’S NOT BUTTER!

      • f0d
      • 5 years ago

      there will always be expensive GPUs
      just like there will always be expensive CPUs

      if it’s too much for you then obviously you are not the target market; there are some people out there for whom money is no object and simply having the best is all they want

      of course there will be cheap/equal parts in the future - but that’s the future, not now. the titan is for people who want the level of performance it provides NOW, not later

      will i ever buy one??? hell no, too much for me, but that doesn’t mean there aren’t people willing to pay those prices for it

      the original titan was sold out in many shops in my area. if there wasn’t a market for them then they wouldn’t sell, simple as that, and as long as people are willing to spend $1k on a gpu there will be products out there at that price

      • Meadows
      • 5 years ago

      They’re called Veblen goods. People want them not despite how expensive they are, but *because* of it. Same thing with the Apple Watch.

      Just wait for your regular old goods at regular old prices if you’re so bothered.

        • Melvar
        • 5 years ago

        There may be some of that, but I assure you if I get a Titan X it will be because it’s the cheapest fast gaming card with what I consider a reasonable amount of RAM for 4K.

          • Meadows
          • 5 years ago

          For you maybe. I personally wouldn’t get one unless I literally swam in money.

            • Melvar
            • 5 years ago

            You would get one then? If you were literally swimming in money?

            • Meadows
            • 5 years ago

            Sure, why not.

            • derFunkenstein
            • 5 years ago

            If you’re [url=http://lh5.ggpht.com/__7IJpoKX9tk/SYMdk4D2K4I/AAAAAAAAAJc/C1GCoqtREB0/ScroogeMcDuck.png?imgmax=800<]Scrooge McDuck[/url<] and want a killer gaming system, then yeah, I think you *have* to get a Titan X.

            • JustAnEngineer
            • 5 years ago

            Surely you need three of them for Tri-SLI.

            • derFunkenstein
            • 5 years ago

            I wouldn’t because I know 3 way doesn’t scale as well as 2, but some would. I guess I’d need to get a pair of them.

    • anotherengineer
    • 5 years ago

    What I find pretty amazing is how close the GTX 980 and the 290X are overall.

    And up here in Canada the GTX 980 is typically around the $700 range w/o tax, while the R9 290X is around the $445 range, and down to $360 on sale. The additional tax one pays on the GTX 980 puts it at close to double the price of an R9 290X!

    • Deanjo
    • 5 years ago

    No upgrade to Titan X for me. I’ll have to wait for the next gen but in the meantime I’ll probably pick up some original Titans on the cheap.

      • DrDominodog51
      • 5 years ago

      Where can you find the cheap Titans, and can you define cheap?

      • l33t-g4m3r
      • 5 years ago

      That’s probably the only upside here. The original Titans will get a price drop.

        • auxy
        • 5 years ago

        Lack of double precision support means they won’t… (´;ω;`)

    • Ninjitsu
    • 5 years ago

    I’ve been expecting 8 billion transistors and a larger die for a long time, but I really expected more shader ALUs – ~5700 was my estimate. Strange, 900 million more transistors but only ~300 more ALUs…

    • mad_one
    • 5 years ago

    For a bit I was surprised that the difference between the 980 and the Titan X is much smaller in this review than in others, until I remembered that you are testing an overclocked retail 980, while most sites use a reference-clocked card.

    This makes perfect sense, as this is what most 980 buyers will get and the Titan X will only be sold as a reference card. Pointing this out in the text would be nice though. It also means the already overclocked 980 will probably have a bit less OC headroom than the Titan (which seems to OC surprisingly well). Yet out of the box, your test is as realistic as it gets.

      • chuckula
      • 5 years ago

      The Titan-X does appear to overclock relatively well too. Over time we’ll see how the OEMs tweak the power & cooling for OC’d variants.

        • mad_one
        • 5 years ago

        I don’t think NVidia allows custom Titan variants, but we might see them for the 980 Ti, if such a card is released.

    • l33t-g4m3r
    • 5 years ago

    [quote<]The GM200—and by extension, the Titan X—differs from past full-sized Nvidia GPUs in one key respect, though. This chip is made almost purely for gaming and graphics; its support for double-precision floating-point math is severely limited. Double-precision calculations happen at only 1/32nd of the single-precision rate.[/quote<]

    Oh, look. Nvidia pulled a 970 on the Titan, whose price rationalization was originally that it was a fully compute-enabled chip. I’m sure there will be a few suckers who don't realize this, and waste money on it.

    Adaptive sync? Nope. Double precision? Nope. Why should anyone spend $1K on this again? Oh, right. It’s [i<]Nvidia's[/i<] fastest gaming card. People who can’t see reason will buy this.

    Hey-o. Why did this double post?

      • Klimax
      • 5 years ago

      Why? Maybe something to do with massive performance and memory? Adaptive sync? Check: G-Sync… The only downside is the lack of DP.

        • sweatshopking
        • 5 years ago

        DP?
        THIS IS A FAMILY SITE. COME ON NOW

          • geekl33tgamer
          • 5 years ago

          *PASSES SSK SOME BRAIN BLEACH*

        • l33t-g4m3r
        • 5 years ago

        G-Sync… Overpriced, vendor-locked TN garbage. Are there even 4K G-Sync monitors available?

        [quote<]only downside[/quote<]

        RABID FANBOY DETECTED. I guess price isn’t a downside... That, and the fact it’s called Titan, which is misleading. Feel free to drop 2K on NV. I’ll wait for AMD’s 390 and a decent adaptive-sync IPS monitor, which combined will likely cost less than the Titan. Also, tx for replying to my double post. Par for the course here, I guess.

        FWIW, TR even insinuates the Titan is a screw job, and that’s saying a lot.

        [quote<]I expect Nvidia will offer a slimmed-down version of the GM200 with 6GB of GDDR5 in a new card before long. Rumors are calling it the GeForce GTX 980 Ti, but I dunno what the name will be. Value-conscious buyers might want to wait for that product.[/quote<]

        But what do they know? I bet the 980 Ti will be NV’s answer to the upcoming 390, and I highly doubt the Titan will drop in price, even though it’s not a true Titan. But people will buy and defend this card, because BEHOLD the power of FANBOYISM.

          • Klimax
          • 5 years ago

          First, ad hominem is a very bad thing.

          I am very sure the 390X will be somewhere between the 980 and the Titan X cost-wise. It costs a lot to produce, and if AMD gets the performance win some are predicting, then you can definitely forget any price war; AMD needs money and they will not play nice. Your “likely” is very, very unlikely.

          As for A-Sync, we have yet to see how it performs.

          You shouldn’t be talking about rabid fanboys at all. Your entire post reeks of it! Hypocrisy is an ugly thing and your post is a perfect showcase.

          ETA: And so far all announced A-Sync monitors are not really cheaper than G-Sync, so much for the price advantage…

            • l33t-g4m3r
            • 5 years ago

            [quote<]First, ad hominem is a very bad thing.[/quote<]

            Are you denying the fact that you’re defending the Titan?

            [quote<]You shouldn't be talking about rabid fanboys at all. Your entire post reeks of it! Hypocrisy is an ugly thing and your post is a perfect showcase.[/quote<]

            Nope. I’m anti-fanboy and quite impartial. The reality is that I’m calling you out, and you don’t like it, so you’re saying, "NO U". Right.

            [quote<]And so far all announced A-Sync monitors are not really cheaper than G-Sync[/quote<]

            Which monitors? The IPS ones? LOL. G-Sync TN monitors have a $200+ markup just for having G-Sync. Adaptive-sync monitors are expensive because they actually are higher-quality monitors. I’m sure a small part of it is from having adaptive sync, but it’s nothing like the G-Sync tax. The proof is in the first G-Sync panel that came out, which was a DIY kit for a $200 1080p Asus monitor. The official G-Sync version of that monitor cost well over double what the monitor originally retailed for. Manufacturers might even be marking up adaptive-sync monitors to match NV, but the reality is that as soon as competition heats up, adaptive-sync monitors will drop in price, because they’re not paying NV $200 for a G-Sync chip.

            [quote<]As for A-Sync, we have yet to see how it performs.[/quote<]

            We do know how they perform. They’ve been demoed already. They’re just not available yet. Which is another problem with your price comment: they can’t be the same price when they’re not even being sold.

            Reality is, even if you’re an NV fanboy, the Titan is a BAD DEAL. It’s not a real Titan, and you’ll be getting a more affordable 980 Ti down the road. This Titan is LITERALLY just a product for people with more money than brains. I dunno if it’s even fully DX12 compliant. AMD’s 390 is the real next-gen card here, but it’s also not out yet, so I’ll be withholding final judgement until it’s reviewed. Even if the leaked benchmarks were wrong, or AMD charges $600 for it, it’ll still be a better deal than the Titan or 980.

            Maxwell is just an improvement on Kepler, not a next-gen chip. You don’t need to upgrade your Kepler cards at all, unless you’re buying a 4K monitor. That’s really the only reason I can see for it, and since I don’t want to buy into G-Sync, the 390 is looking to be the best way to go. That’s not me being a fanboy, that’s just the reality of how bad vendor lock-in sucks. G-Sync needs to go the way of Mantle and go away, but it won’t as long as the fanboys keep eating it up.

            LOL. It’s too funny that you’re calling me a fanboy for preferring a new standard, while you’re defending vendor lock-in. That’s the VERY DEFINITION of a fanboy. Oh, the irony.

            • Klimax
            • 5 years ago

            Defending? Just pointing out that your view is very narrow. And even if I were to “defend” the Titan, it in no way gives you any right to use ad hominems. Extremely bad form, even if it might not be against the rules. Also, way to dishonestly assess those who disagree with you. Your arguments were already weak; this was just a nice cherry on top.

            So far you keep digging that hole of hypocrisy…

            You and impartial? Your post was everything but impartial. From calling the opposition rabid fanboys to playing up any random rumor about AMD’s new card and making invalid comparisons. (And ignoring any inconvenient rumor contrary to your favourite fantasy.)

            Your next block of funny assertions is invalid. The Asus ROG Swift PG27AQ is IPS and has G-Sync. And we still don’t know how they perform, nor can anybody even use it. (AMD has no support yet in its drivers.)

            So far, loads of unsubstantiated things. Talk about being impartial; you are absolutely failing at that.

            • l33t-g4m3r
            • 5 years ago

            You’re not impartial while defending G-Sync. That’s the very definition of being a fanboy, because it’s vendor lock-in. G-Sync also adds $150-$200 to the cost of a monitor, and that’s a FACT. They can’t possibly compete with Adaptive-Sync on price, and Adaptive-Sync is a FREAKING STANDARD. It’s NOT an AMD thing; Nvidia just wants the vendor lock-in profits. Now STFU.

            • Klimax
            • 5 years ago

            I just said that adaptive-sync-like technology has been available for longer than A-Sync itself. And it got tested. And you still shouldn’t be talking about impartiality at all. You are as far from it as possible. (Your other post perfectly demonstrates it – you are talking about an unreleased card as if it had already been independently benchmarked and already won the race.)

            Talk about competing when we know how it works, not before. You are getting way ahead of the real world.

            And STFU is used by those who are getting cornered and running out of arguments… (and hate seeing somebody disagree with them.)

            BTW: G-Sync is only a module in the monitor. As far as DisplayPort goes, Nvidia is using V-blank for signaling, which has been part of DisplayPort for longer than A-Sync. (In DP 1.2a it is an optional part of the standard, not mandatory.) Standalone monitors just didn’t support it, unlike the panels in notebooks.

            ETA: Small replacement in the “impartiality” section.

          • Klimax
          • 5 years ago

          Since you appear to give massive credence to any random rumor, then how about this one:
          [url<]http://www.kitguru.net/components/graphic-cards/anton-shilov/amd-radeon-r9-390x-to-cost-more-than-700-report/[/url<]

          Allegedly, the new Radeon is to cost 700 USD...

            • l33t-g4m3r
            • 5 years ago

            So? It’s still $300 cheaper than the Titan, and supports Adaptive-Sync. Not to mention full DP and dx12. If anything, AMD deserves to call the 390 a “Titan” and charge $1K for it, but they’re not. Keep it up, fanboy; your bias is showing.

            Also, I could probably buy an adaptive sync monitor with that $300, which would accomplish what I was wanting in the first place. A competitive video card, AND a monitor for the same or less price of a Titan. The Titan is a JOKE.

            Either way, I’m not going to spend over 1k on this stuff. I’ll wait for lower prices if necessary.

            • Voldenuit
            • 5 years ago

            [quote<]If anything, AMD deserves to call the 390 a "Titan" and charge 1k for it, but they're not.[/quote<] They should call it "Zeus", since Zeus defeated the Titans and imprisoned them in Tartar sauce. Sounds fishy.

            • chuckula
            • 5 years ago

            “Not to mention full DP and dx12.”

            Proof of that please? “full DP” is a pretty vague term and I’m pretty sure the Titan-X will run DX12 titles just fine.

            • l33t-g4m3r
            • 5 years ago

            Fermi and GCN 1.0 can run DX12. That doesn’t mean they’re fully DX12 compliant.

            Anyways, it looks like NV has discontinued its consumer compute cards. The only way to get that now is with AMD, and no, you can’t run CUDA on it, so if you need CUDA you’re stuck buying a Quadro.

            • Klimax
            • 5 years ago

            The last cards had DP disabled. What’s that point about DX12? Maxwell does too. (HW features; SW-wise since Fermi, unlike AMD…)

            Nobody saw A-sync work. (All we got some stupid non-gaming demos) And lastly, card unreleased is card not benchmarked and thus its performance unknown. And it appears AMD needs far more resources to compete. (With rivers still not as good as NVidia’s regarding performance.)

            Trying to proclaim an unreleased card the winner is awfully premature, don’t you think? Only fanboys do that…

            • l33t-g4m3r
            • 5 years ago

            [quote<]unreleased is card not benchmarked[/quote<]

            And sorry I could not travel both
            And be one traveler, long I stood
            And looked down one as far as I could

            [quote<]With rivers still[/quote<]

            A star glazed river lies forever still,
            While distant passengers flow in search for journeys end,
            A beckon of light comes from beyond the mill,
            Perhaps a peaceful place for their time to spend.

            • Klimax
            • 5 years ago

            At least you entertained us using my funny typo…

          • Klimax
          • 5 years ago

          And frankly, I don’t talk about price because I treat the Titan like any other ultra-high-end product, say Intel’s HEDT range of CPUs: either you consider it worth the price or you don’t, and you settle for regular high-end instead. It’s not as if we’ve never seen things like this before.

          I consider talking about price a waste of time; we are no longer in the price-sensitive range by a long shot. I’d be more curious about the technical stuff itself, like how much bandwidth headroom it has, or how empty that VRAM stays.

          • JumpingJack
          • 5 years ago

          [quote<]G-Sync… Overpriced, vendor-locked TN garbage. Are there even 4K G-Sync monitors available?[/quote<]

          [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16824009658&cm_re=acer_xb280hk-_-24-009-658-_-Product[/url<]

      • smilingcrow
      • 5 years ago

      “I’m sure there will be a few suckers who don’t realize this, and waste money on it.”

      Suckers bought older Intel Extreme Editions at $1K which were only slightly better than their $500 CPUs, but at least this is significantly faster than the tier below it and actually uses a different design.
      So this is more like a Xeon, which is a much lower-volume design, which is one reason why the price is so high.
      I don’t think you understand the basics of capitalism.

        • l33t-g4m3r
        • 5 years ago

        *Whoosh* You’re not getting it. I was specifically talking about people who buy Titans for COMPUTE there. There’s NO WAY they’re getting a good deal if they’re buying a Titan with that purpose in mind. I also said suckers, to discount people who read reviews and know that it doesn’t have compute. People who make blind purchases of this card for compute, because that’s the Titan’s typical product segment, are getting screwed.

        My second comment then addressed what you’re going on about, which is the Halo Product segment. Those people can’t be helped, and I’m not bothering to try.

          • sweatshopking
          • 5 years ago

          HI L33T!

          • smilingcrow
          • 5 years ago

          Idiots who buy $1K products without researching their features are not worthy of discussion.
          They aren’t getting screwed; they are screwing themselves with their stupidity.
          Do you get the difference?

            • l33t-g4m3r
            • 5 years ago

            I can tell you failed reading comprehension from your first reply, but I get it. Stupid people are stupid. That doesn’t make the Titan any less of a screw job. The only purpose it now serves is to be a Halo product, like the P4 EE.

            How many people here support Halo products, btw? Was the P4 EE a big seller? Will you all go out and buy a Titan now? I mean, it does give you acceptable framerates at 4K, just at the price of a fully compute-enabled card, and you guys can’t possibly be bothered to wait for the 980 Ti, or the 390.

            Really, what was I thinking? People commonly using common sense? It really isn’t that common, I guess. Kinda like Klimax’s mad grammar skillz. I know I’m not an English teacher, but JEEZ, I could barely read some of that drivel. It was poetically bad.

            • smilingcrow
            • 5 years ago

            As I implied earlier, the Intel EE CPUs (P4 and C2D) were a joke because they were the same silicon but cherry-picked, which only gave a small performance gain.
            This is different silicon and offers a significant gain over ANY other single-die GPU.
            It’s not a halo product but a low-volume, high-performance, high-price top-end product; welcome to capitalism.
            You can HATE on it as much as you like, but it’s just a silicon chip, so lord knows why you need to vent so much spleen over a GPU; did mummy not breast feed you?

            • l33t-g4m3r
            • 5 years ago

            Wow, you’re retarded. All I’m saying is that the Titan series was originally a prosumer card that supported compute features. Now that it doesn’t, there is no reason to call this card a Titan, and leaving the prosumer pricing is a cash grab.

            This card is essentially the 780Ti of the last generation. Don’t be such a fanatical acolyte. You don’t need to excuse NV for what they did here. Really. Stop drinking the Kool-Aid, drooling fanboy dude. I’m not talking smack on your HOLY HALO Products, I’m just saying that NV SHOULDN’T BE SELLING IT AS A TITAN. It’s NOT. This card does NOT compute.

            WHAT’S NEXT? NV SELLING QUADRO’S THAT CAN’T BE USED FOR WORK? BECAUSE THAT’S EXACTLY WHAT THEY DID HERE.

            This mislabeling of product lines started with the 970, WHICH WAS A 960, and now the 980 Ti is a Titan. NV is abusing its product naming to upsell lesser cards. That’s what I’m saying here. The WHOLE 9x lineup was a cash grab. The 980 was NV’s mid-range card from the get-go, but they sold it as the update to the 780 Ti. IT WASN’T. The Titan is the updated 780 Ti, and the REAL Titan DOESN’T EXIST.

            • smilingcrow
            • 5 years ago

            It’s a $1,000 card with a performance of x; what the marketing people call it is of zero interest to me, and who the manufacturer is holds little interest for me.
            I don’t own any nVidia or AMD products but have done in the past.
            I try to look at a product as objectively as I can, and that means allowing for any bias I may be aware of.
            I have zero interest in buying this card or something similar, but for now it stands as the king of the crop in its sector and is priced accordingly.
            You can hate on it all you like for all kinds of spurious reasons and I couldn’t care one iota, as I have nothing invested in it, whereas you are clearly heavily invested in hating it. What a waste of time and energy.

            • l33t-g4m3r
            • 5 years ago

            I have nothing against the card. It’s the NAMING. It’s NOT a Titan. I don’t care that the 980 Ti gets X FPS or what it costs. What I’m pissed about is that NV renamed the 980 Ti as a Titan, and you obviously are invested in it, or you wouldn’t be defending that. You’re either invested in the brand or trolling, but you obviously ARE invested in it somehow, because you wouldn’t be arguing for it otherwise.

            The whole 9x lineup was an upsell.
            960 = 950
            970 = 960
            980 = 970
            Titan = 980Ti
            Real Titan = DOESN’T EXIST / QUADRO

            Look buddy, the Ti series was NV’s Halo line, and the Titan was their prosumer card. NV renamed a Halo product as their prosumer card, and KEPT THE PROSUMER PRICING. It’s NOT A PROSUMER CARD. It’s a HALO CARD.

            Every single card in NV’s 9x lineup was upsold as something it wasn’t. I’m now wondering if they’ll call their single-card SLI (dual-GPU) part a Titan Ti, and try to sell it under a non-SLI name. It’s ludicrous.

            Hell, next year they’ll try selling their gimped x60 card as a Titan for $1K. Seriously. I wouldn’t put it past NV.

            • f0d
            • 5 years ago

            no need to resort to name-calling, no matter your argument

            • l33t-g4m3r
            • 5 years ago

            If you can’t think rationally, and are deliberately being unreasonable, why not?

            The reality is, if you don’t want to be called an idiot, THEN DON’T BE ONE.

            • smilingcrow
            • 5 years ago

            Why do you care what it’s called?
            When I choose a film to watch, I don’t care what it’s called.
            When I listen to music, I don’t care if the band has a silly name.
            It’s all about the product; the naming is something for the marketing people to fret over and of no relevance to me.
            I buy products, not marketing code names.

            Shouting moron just reflects on you, as does your attachment to marketing names.

            • smilingcrow
            • 5 years ago

            There’s been one generation of Titans which used Kepler cores so there’s hardly a long history of Titans being DP monsters. If there were 3 generations or so then dropping the 1/3 DP support would be much more notable. But this is only the second generation so it’s not exactly breaking a long standing tradition.

    • Zizy
    • 5 years ago

    Very disappointed. Wanted a new CUDA card, but this one sucks. Ah, you say gaming? $1,000? Hahahaha.

    • south side sammy
    • 5 years ago

    anybody add this to this thread yet? time to start benching professional apps with these cards (all of them)

      • Krogoth
      • 5 years ago

      Titan X is sub-par for anything that requires DP, since Nvidia crippled DP on it.

      (IMO, it should have been called the 980 Ti or 985, not the Titan X.)

        • Ninjitsu
        • 5 years ago

        True, this is pretty much a 780 Ti successor.

        • BryanC
        • 5 years ago

        Just curious, what GPU applications do you know of that require DP?

        Nvidia sees Titan as a compute oriented part for machine learning workloads, and they don’t need DP.

    • Melvar
    • 5 years ago

    How easy is it to blow the dust out of a Titan cooler? It doesn’t look very accessible.

      • BIF
      • 5 years ago

      I wonder the same thing. I can’t get all of the dust out of my old AMD; it sure seems stuck in there quite firmly. Yuck.

    • Ph.D
    • 5 years ago

    Not sure a GTX 980 Ti or even a regular 980 is something to consider for most “value-conscious buyers”.

    • willmore
    • 5 years ago

    [quote<]I used the "Ultra" texture quality settings in Shadow of Mordor, which the game recommends only if you have 6GB of video memory or more.[/quote<]

    So, why did you run the game at a setting only the Titan X could meet?

      • Melvar
      • 5 years ago

      To show how much of a difference it actually makes?

      Are you suggesting that review sites shouldn’t be testing high memory requirements just to protect lower memory cards from looking as bad as they really are in those workloads?

        • willmore
        • 5 years ago

        I’m saying that these settings show unrealistic differences in performance between the various cards.

          • Melvar
          • 5 years ago

          It’s only unrealistic until at least one card can run it.

          • Chrispy_
          • 5 years ago

          You’re not running a Titan X to run at 1080p medium settings, so why test it like that?

            • willmore
            • 5 years ago

            That’s not the case.

            • Chrispy_
            • 5 years ago

            Err, you’ve just dropped a grand on a gaming-specific card that claims to be best of the best.

            If you can’t whack all the settings sliders to the right, what’s the point in any of this?

            • Melvar
            • 5 years ago

            Some people with 144Hz 1080p monitors might, actually.

            Edit: I’m not saying a lot of people would do this, but for some people framerate trumps all else.

    • Krogoth
    • 5 years ago

    This guy isn’t a Titan.

    This guy should have been called a 980Ti or 990Ti. It has kinda underwhelming performance for what it can do on paper. It isn’t really worth the $1,000 unless you want to game at 4K with no compromises.

    The problem resides with being stuck on the 28-nm process, though.

      • Srsly_Bro
      • 5 years ago

      Thanks. I look forward to your opinion on things. Many of the posters on here are living in a fairy tale land and you come across as the sensible one. 🙂 cheers.

      • auxy
      • 5 years ago

      I actually agree that it isn’t worthy of the TITAN name. However, I think it’s way too different to be called “980 Ti”. I think GTX 990 might work, except x90 has always been dual-GPU. I don’t know what they would have called it; I guess TITAN is better than 980 Ti.

      REALISTICALLY, the GTX 970 and 980 should have been 970 and 970 Ti. Or 960 and 970. Or something. Gxx00 should always be the “x80” of their generation! (e.g. GF100, GK100, GM200, etc) Other people’s remarks on this comment page have adequately covered why that wouldn’t happen, though.

      I think the new graphics APIs will do a lot for these ultra-massive GPUs (including AMD Hawaii in this); I think a lot of their potential is being wasted by low-level bottlenecks. The Mantle results here show that.

      • BIF
      • 5 years ago

      Well, if it can fold and run CUDA renders anywhere close to 50-70% of the time it takes to do it with one or even two GTX 980’s, then I’d call it a Titan, at least for non-gamers.

      For some, money is not the constraint here. Not really. If we do the real math here, we should realize that it’s about the number of available slots in a motherboard AFTER you have included all the non-GPU cards that require slots. The system I’ve built needs other parts, so I cannot put 4 dual-width GPUs into it.

      Once I fill up the PCI slots, then to get more folding and rendering performance would require that I spend the money to build a WHOLE ‘NOTHER SYSTEM. When you add up the cost of a second case, PSU, memory, SSD, motherboard, CPU, AND the GPU you want anyway (and don’t forget a needed uninterruptible power supply!), that’s going to be a lot more money than a single $1,000 GPU.

      Assuming that the rest of the system is still reasonably viable, it’s much cheaper to replace a 1 or 2 year old $550 GTX 980 with a $1,000 Titan X and get another 30% performance (I’m just guessing that based on the bigger chip size, 30% more processors, and 50% more transistors).

      Buying a component, even an expensive one such as a Titan X, is STILL more cost effective than building a second system. For non-gamers.

      Eventually, I will have to build that second system, sure. But I’m holding off for as long as possible, and cards such as this will really help! When 2TB SSDs come out and I can eliminate a SATA I/O card and other stuff, I’ll be able to free up a couple more PCI slots in my system and possibly add that third CUDA GPU card.

      One of these would be amazing, and would possibly carry me for another 3 years, getting me to my next planned upgrade.

        • Krogoth
        • 5 years ago

        For folding, the Titan X is a poor value.

        You are better off getting 970s en masse for the same cost as a single Titan X.

          • BIF
          • 5 years ago

          Hmmm, first off, I said “folding AND RENDERING”. I didn’t say folding only.

          For some of us (sure, maybe not you, I get that), it’s not about how powerful “a card” is, or even “how powerful per dollar”. It’s how much compute can be performed by a [b<]populated PCIe slot[/b<]. You can pretty much increase parallel compute power in one of these two ways: by adding “populated slots”, or by increasing the compute power of any given “already populated slot”.

          The real constraint is that PCIe slots are limited in number, [i<]and limited again by the double-width requirement of most CUDA cards[/i<]. So when you run out of PCIe slots to populate, the only way to add compute cores is to provide more available PCIe slots by building an [b<]additional machine[/b<]. That requires added infrastructure which raises cost exponentially. I want to delay having to build a second machine.

          And I must take issue with one other point you made. By "en masse", you must mean "two". Because that’s what most people can put into their main rendering workstation, especially if they have other needed PCIe devices, such as DSP cards, audio/MIDI interfaces, SATA I/O cards, or other devices which require this type of slot.

          So let’s be clear here. Are you trying to tell me that two GTX 970s are all I really need in my workstation which is used for folding and rendering? I’m not impressed right now, but maybe it’s just a misunderstanding. 😉

            • Krogoth
            • 5 years ago

            The problem with that plan is that the Titan X doesn’t have prosumer-tier features like its predecessors did. You are forced to get a Quadro/Tesla version of Maxwell.

      • Milo Burke
      • 5 years ago

      The trouble is that at 4k there still are compromises. According to the benchmarks in this article, with the Titan X, you play at ~45 FPS or you cut out the AA and get jaggies and shimmering effects, or you play non-recent games.

      I’m hoping my next purchase will play recent games at 1440p at 80-90 fps with some AA, but now it’s looking like that will simply be outside of my budget. I’d need one notch lower than the Titan X for that unless I drop something.

      Maxwell brought generational improvements, but not enough to account for the fact that 4K has more than twice as many pixels as 1440p, and recent games are exceedingly demanding. All this at the same time that we’ve had a welcome renaissance in high-refresh-rate monitors and monitor tech.

      • bfar
      • 5 years ago

      Looking at the numbers, even a single Titan-x isn’t enough for 4k.

        • swaaye
        • 5 years ago

        Sure it is. It’s just not enough for the latest AAA eyecandy productions at max details, max resolution and the modern day expectation of 60+ fps. It’s all about perception…. Aside from the fact that their low-end segment is gone, it must be pretty good to be NVidia with the expectations of gamers and pricing of cards today.

          • bfar
          • 5 years ago

          But make no mistake, Nvidia and AMD will market their products as 4k ready on the latest titles.

    • TardOnPC
    • 5 years ago

    Impressive, but as other reviewers have stated, even the Titan X can’t achieve 60 FPS at 4K with the highest graphics settings enabled, even with AA disabled.

    If the 390X really does perform 1.5-1.6X faster than the 290X, as some unconfirmed presentation slides indicate, it might be a better deal.

    I’m still going to play the waiting game. I haven’t seen anything on the cut down version of the Titan X. Exciting times for 4K gaming. 🙂

    • Chrispy_
    • 5 years ago

    It’s interesting that Nvidia chose to launch GM200 now, rather than once AMD’s Fiji is out.

    They’re getting great sales on 970/980 cards and Nvidia’s usual strategy is to ruin AMD’s day whenever they launch a new product.

    Maybe 970/980 sales have slowed after the 3.5GB fiasco and they need a new halo product to divert the attention….

      • willmore
      • 5 years ago

      I think they left some price buffer between the 980 and the X, so they’re probably safe. 😉

        • Visigoth
        • 5 years ago

        I agree. I bet they’ll knock the price down pretty quickly, depending on how good AMD’s answer to the GM200 is.

          • bfar
          • 5 years ago

          Yep, and that’s why they’re releasing the Titan X today. They won’t get another chance to charge a grand for this and actually sell it in decent volumes, not if the Radeon 390X performs in the same ballpark.

          My own feeling is that late summer/early autumn will be a good time to upgrade. We should have the full product families by then, and the nights will be getting longer 🙂

        • JustAnEngineer
        • 5 years ago

        NVidia seems to command a significant amount of fanatical brand loyalty. They may want to continue milking that edge for price premiums that go directly to the company’s bottom line rather than competing on price as happens in a commodity market.

          • derFunkenstein
          • 5 years ago

          Plus, the GPU is ready. R9 300 cards are not.

      • jihadjoe
      • 5 years ago

      Titan came out before 290/290X as well!

      Nvidia usually does that strategy you mentioned with their regular line of cards, not the Titan. I imagine it might be hard to launch a $1000 card when your competition has a $550 part that’s competitive with it.

      Edit: It’s actually interesting that we only got Titan X now, and not the 980Ti. Maybe Nvidia is waiting to do just what you said and will drop the 980Ti at a price point just below what AMD launches Fiji at.

        • swaaye
        • 5 years ago

        What’s interesting is AMD undercuts NV’s high end so aggressively. They are always grasping at gaining marketshare over there. And it has gone oh so super well for them too.

      • homerdog
      • 5 years ago

      The 980Ti (or whatever they call the 6GB GM200) will compete with 390X. Titan is kind of its own thing, and has never been price/perf competitive.

        • JustAnEngineer
        • 5 years ago

        The previous-generation Titan cards appealed to some consumers as a poor man’s Tesla K20 because of their significant GPU compute capacity, but the new GM200 GPU in the GeForce GTX Titan X is lacking in 64-bit GPU computing performance. It is just a larger version of the GM204 gaming GPU in the GeForce GTX 980.

        The old GeForce GTX Titan Black (GK110) could do about 1,700 GFLOPS of double-precision math, while the new GeForce GTX Titan X (GM200) can do only 192 GFLOPS.
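
        Those figures line up with the ALU counts and the 1/3 vs. 1/32 DP rates. A rough sanity check, assuming approximate reference clocks of 889 MHz and 1,000 MHz (an assumption for illustration, not figures taken from this review):

        # Back-of-the-envelope check of the DP numbers quoted above.
        # Clocks are assumed reference values, roughly 889 MHz and 1,000 MHz.
        def dp_gflops(alus, clock_ghz, dp_ratio):
            fp32 = alus * 2 * clock_ghz   # single-precision GFLOPS: 2 FLOPS per ALU per clock (FMA)
            return fp32 * dp_ratio        # double-precision GFLOPS at the given DP rate

        titan_black = dp_gflops(2880, 0.889, 1 / 3)   # ~1,707 GFLOPS
        titan_x     = dp_gflops(3072, 1.000, 1 / 32)  # ~192 GFLOPS
        print(round(titan_black), round(titan_x))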

      • beck2448
      • 5 years ago

      To those who pay attention to actually running a business: Nvidia’s market share has been growing at AMD’s expense for the last year, according to Jon Peddie Research, and is above 80% in the pro, supercomputer, and consumer segments.

    • Tatsh
    • 5 years ago

    I will buy this for the next computer build, which will probably be sometime later this year.

    Currently happy with a rig using a GTX 980.

      • HisDivineOrder
      • 5 years ago

      If it were me, I’d wait to buy a high end card around the time you’re actually building the build. But hey, that’s just me.

        • nanoflower
        • 5 years ago

        Yeah, next year Pascal will be out and based on what’s been said so far I think even a ‘1080’ Pascal will be good competition for this Titan X.

          • HisDivineOrder
          • 5 years ago

          This may be the year when everything old and new gets tested with DX12 to see what we get (like a long overdue Christmas present), but next year’s when the real toys start coming out that are built in a world where DX12 has already been out a bit.

          Between Skylake’s maturing, AMD Zen hopefully, and nVidia Pascal, next year could be the most interesting in terms of PC hardware in a long time. Especially with the promise of DX11.3/DX12 still having hardware built to suit it.

        • Tatsh
        • 5 years ago

        I am not buying it *today*. I am buying it when I start the build.

    • south side sammy
    • 5 years ago

    The thing about the DVD drive… I still use them too. Can’t see myself without one. The new generations won’t/don’t depend on them the way we did/do.

    The card: how much VRAM did it actually use throughout the tests?

    Also, can’t wait until there’s actually something to run against it.

    • mad_one
    • 5 years ago

    Nice article as always!

    Minor nit: It has 100% more ROPs than GK110, not 50% more.

      • Damage
      • 5 years ago

      Of course. Fixed.

    • estheme
    • 5 years ago

    I wish you would test against the 970 too. With its weird memory issue, I’d love to see how it compares to the 980 and Titan under 4K loads. It’s so much cheaper too; I’d be surprised if it wasn’t a more popular choice for most users. Test that sucker in SLI too.

    • the
    • 5 years ago

    I don’t think there is going to be much of a disagreement in saying that the Titan X is the fastest single-GPU card money can buy right now. Outside of raw performance, it doesn’t get you that much over a GTX 980. The only halo this product really has is the 12 GB of memory over the GTX 980’s allocation. It’d be nice to get a bit more to help distinguish it from the GeForce line, given the loss of DP performance. It could have been something as simple as changing the output configuration (say 4 miniDP v1.2 and two HDMI 2.0) and the ability to drive all of them simultaneously, plus opening up Surround Vision to any monitor combination. The Titan X can justify a price premium, but this one is a bit too high.

    The price premium may also be short-lived, as AMD certainly has another generation of cards waiting for release. The big change will be a one-two punch of HBM and the new compression scheme for a major bandwidth jump. The older R9 290X can hold its own against the GTX 980 at 4K, so it’ll be interesting to see how AMD’s next product performs against the Titan X at the same settings.

    Oddly, the Titan X also makes me wonder if/when we’ll see a dual GM204 or GM200 card for consumers. (Quadro is another matter.) The power efficiency is certainly there to do it, but I’m just not seeing where it’d actually fit into their lineup.

    And lastly, all these cards are [i<]still[/i<] using a 28 nm manufacturing process that was first used by the Radeon 7970. Most of what we see happening right now with the Titan X and AMD's response will likely be leap frogged by this time next year as both graphics card manufacturers move to a new process node.

      • HisDivineOrder
      • 5 years ago

      I think nVidia doesn’t even need to release a cut down version of the Titan X to do well right now. They just need to release 8GB versions of it (EDIT: “it” meaning the Geforce 980, which I assumed people knew via telepathy) and mark them up a bit. People would (re)buy a ton of them. And turns out, future games will in fact need the extra VRAM if you’re going above 1080p.

      Surprise, surprise.

        • the
        • 5 years ago

        nVidia is doing well but ultimately not all of those 600 mm^2 dies are going to be fully functional to pass as Titan X chips. Cut down versions are going to be inevitable.

        8 GB on a 384-bit-wide bus would mean something is configured oddly. nVidia has done something similar before with the GTX 650/660, so it isn’t impossible. However, with the bruising they’ve received over the GTX 970’s memory configuration, I figure they’d want to play it safe.

        In fact, there is room in the lineup for a few more cards on the high end. They’d just have to drop the GTX 980’s price slightly and they could slot in an 8 GB GTX 980 at the old mark, plus two different cut-down versions of the GM200 chip. I figure one relatively close to GTX 980 specs with 6 GB of memory but a GTX 970-like memory configuration; nVidia could do the devilish thing and price it between the 4 GB and 8 GB GTX 980 cards. Then have a GM200 chip closer to the Titan X’s spec, with 6 GB of memory on a full 384-bit width, that sits between the 8 GB GTX 980 and the Titan X.
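
        On the 8 GB point, a small sketch of why 6 GB and 12 GB are the natural fits for a 384-bit bus, assuming one standard 4 Gb GDDR5 chip (or a clamshell pair) per 32-bit channel:

        # Why 8 GB is an awkward fit on a 384-bit bus: each 32-bit channel gets
        # the same amount of memory, so capacity scales with the channel count.
        channels = 384 // 32                  # 12 channels on GM200
        per_channel_gb = 0.5                  # one 4 Gb GDDR5 chip per channel (assumed)
        print(channels * per_channel_gb)      # 6.0 GB  (the rumored 980 Ti config)
        print(channels * per_channel_gb * 2)  # 12.0 GB (Titan X: two chips per channel)
        # 8 GB would need mixed densities or an unevenly loaded bus, as on the GTX 660.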

    • Takeshi7
    • 5 years ago

    They removed the full speed FP64 performance. That was the whole selling point of Titan cards and they took it away. Lame.

    Edit: Also TR should do more GPGPU tests when they review graphics cards.

      • HisDivineOrder
      • 5 years ago

      When they couldn’t get a die shrink this year, they had to cut something. Since something had to go, they cut the thing most people don’t use.

      The reason it matters is not really for the Titan series, but for the follow-up consumer-focused product that will be the way most of these chips wind up being sold. I suspect nVidia sold a lot more 780 Tis than they did Titans.

        • Krogoth
        • 5 years ago

        Titans were geared toward prosumers who didn’t need ECC support or vendor certifications. They were Teslas/Quadros that ate too much power to be used in the HPC arena.

        Titan X is just a “980 Ti/985” that got pumped up in brand name, in hopes that gamers who have more $$$$ than sense will spend $1K for a 30% increase over a standard 980.

          • HisDivineOrder
          • 5 years ago

          Titan X is for the 4K gamer who doesn’t want to go SLI/CF. Nothing more, nothing less.

          That said, I think if they’d let an 8GB version of the 980 be released, I’m not convinced we need the Titan X.

            • Krogoth
            • 5 years ago

            Holy fanboy distortion field batman.

            I haven’t seen this much spinning since the 2008 presidential election.

      • ptsant
      • 5 years ago

      Actually, I think Nvidia has taken a different direction with their recent designs. The 980 and the Titan don’t even have the uncapped theoretical DP potential that GCN provides. In order to achieve good perf/W and chip size, some compromises had to be made in the core design. On the other hand, AMD has built GCN with the explicit goal of developing HSA and penetrating the pro market, and has given much more weight to DP performance, probably to the detriment of gaming performance, chip size, and heat output.

      It appears that 390X will be a DP monster and I expect a FirePro version to sell very well at staggering price points. Meanwhile, the days of buying a Titan instead of a Quadro/Tesla are probably over.

      • lycium
      • 5 years ago

      +1 for more GPGPU tests, please!

        • BIF
        • 5 years ago

        I quite agree.

    • wingless
    • 5 years ago

    I just realized the 390X will drop soon and push all the R9 200 series down. Can you imagine $250 290Xs and sub $200 290s?! Heck, we’re already close to those prices on the 290. I have Nvidia GPUs in SLI, but cheap AMD GPUs will look very desirable soon because of all this great competition. DX12 and Vulkan will extend the life of them quite a bit I imagine.

    We live in good times ladies and gentlemen!

    • Damage
    • 5 years ago

    Added a note to the review that explains the R9 290X is ROP limited in the compression test. Thanks to Andrew Lauritzen for making me less dumb on this point.

      • Andrew Lauritzen
      • 5 years ago

      Not dumb, just a subtlety of how that particular test works :). Would need wider render target formats (64bpp) to get bandwidth limited on that chip.

      • jra101
      • 5 years ago

      I assume blending is disabled (4 bytes/pixel * 66 Gpixel/s = 264 GB/s)? Enabling blending would double the bytes/pixel (4 bytes read + 4 bytes written) and make the test memory limited.
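
      A quick back-of-the-envelope version of that math, taking the 66 Gpixel/s figure quoted above and assuming the R9 290X’s rated 320 GB/s of memory bandwidth:

      # ROP-limited vs. memory-limited: bandwidth demand in each scenario.
      fill_rate_gpix = 66                    # measured pixel throughput, Gpixels/s (quoted above)
      bandwidth_gbs = 320                    # R9 290X rated memory bandwidth, GB/s (assumed spec)

      no_blend = fill_rate_gpix * 4          # 4 bytes written per RGBA8 pixel -> 264 GB/s
      with_blend = fill_rate_gpix * (4 + 4)  # 4 bytes read + 4 bytes written  -> 528 GB/s

      print(no_blend < bandwidth_gbs)    # True: fill rate, not bandwidth, is the limit
      print(with_blend < bandwidth_gbs)  # False: with blending the test would be memory-limited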

        • Andrew Lauritzen
        • 5 years ago

        Right, although many GPUs blend at half rate unfortunately (or slower even depending on the format). I believe the R290 does blend at full rate for 8888 though, so perhaps that’s an option as long as the test is large enough to thrash the ROP cache.

          • jra101
          • 5 years ago

          I’ve only ever seen half rate blending for higher precision formats (FP16/FP32). Every GPU I know of blends RGBA8 at full speed.

            • Andrew Lauritzen
            • 5 years ago

            IIRC modern AMD and NVIDIA GPUs do blend RGBA8 at full speed (but not Haswell/Broadwell), but even 32bpp stuff like 10_10_10_2 will often be half speed. It’s all format and architecture specific so technically there’s an entire table of throughputs.

            For this purpose I agree that RGBA8 blend with suitably large framebuffers should work okay on current immediate mode architectures (obviously ROP tests are sort of meaningless on tilers).

            • Ryszard
            • 5 years ago

            I’ll get different blending tests surfaced in the new suite as soon as I can.

      • ermo
      • 5 years ago

      Does the R9 290X really compress textures? I was under the impression that that particular feature was new to the Tonga series of GPUs (R9 285) only?

      Also, have you done any recent benchmarks of the games using Mantle at 4K in this review with a pair of R9 280X cards and a pair of 770 cards? I vaguely seem to remember that you did?

      If not, a small-ish Mantle+CrossFireX scaling article which previews the DX12/Vulkan potential at 4K might not go amiss? =)

        • Damage
        • 5 years ago

        Tonga improves frame buffer compression, but I suspect older GCN GPUs have some FB compression, too. Heck, maybe pre-GCN chips, too. (There was even some color compression in the Radeon 9700.) Nvidia told us they’d had it since Fermi when they revealed the improved compression in Maxwell.

        No recent 4K/Mantle benchmarks with mid-range multi-GPU, no, sorry.

          • ermo
          • 5 years ago

          Thanks for engaging with me.

          And thumbs up for the present review — thoroughly enjoyable and informative as always.

            • Damage
            • 5 years ago

            Thanks, man! Glad you enjoyed it.

            • ermo
            • 5 years ago

            Do you have any informed speculation to offer on whether we will see AMD update the R9 280X (GCN 1.0) and R9 290X (GCN 1.1) to Tonga (GCN 1.2) equivalent SKUs with 384 bit/3GB and 512 bit/4GB framebuffers in the upcoming R9 3xx refresh?

            Which sort of performance improvements would you think we would be likely to see relative to the existing parts based on this speculation and extrapolating from your tests of the R9 285 2GB part?

            • Damage
            • 5 years ago

            My contention all along has been that Tonga has a 384-bit memory interface and more ROPs than we’ve seen in the R9 285. See the chip table in this review, and look at the die size and transistor count vs. Tahiti. Only makes sense.

            Man, I wish we had a die shot to confirm it.

            I think they’ve held back on the R9 285X or whatever due to existing inventory of Tahiti and Hawaii-based parts or chips. I think a full-fledged Tonga would offer performance well above Tahiti and not far from the R9 290.

            The obvious logical move for the 300 series is Fiji at the top with 390 and 390X, Tonga in the middle with 380/X, and lower end stuff below that.

            With the right pricing, AMD could be competitive on price and performance, but I don’t think anything Tonga-based will match the GM204/GTX 980 on power-efficient performance. Even accounting for binning, the R9 285 draws too much power for that.

            If Fiji is really a 20-nm chip, then it could match the GM200 on power efficiency, one would think, but only if it operates at conservative clocks. 20nm probably won’t buy you much at higher voltage. The water cooling rumors around the 390X aren’t making me optimistic.

            Everything I hear about HBM is that it’s expensive. That’s my other big worry about 390X. If AMD can only match or slightly beat Nvidia with an expensive and exotic memory subsystem, that could make life difficult for them.

            Still, I’ve been in the same room with a working 390X-based system, so they clearly have working silicon. That’s a start.

            • ermo
            • 5 years ago

            Thanks for the insight. I’d like to clarify one part of my question, though:

            Do you think it possible that AMD could aim for a 28nm lineup [u<]below Fiji[/u<] which looks something like this (and thus leverages existing designs and just updates the cores slightly):

            R9 280/280X -> Tonga 256b/2GB/2048 GCN 1.2 cores
            R9 280X/290 -> Tonga 384b/3GB/2432(?) GCN 1.2 cores
            R9 290X -> Tonga 512b/4GB/2816 GCN 1.2 cores

            The above would basically be the same design, but with parts of the cores (and the associated paths to RAM) unceremoniously chopped off to fit a certain price and/or TDP segment?

            Or are you suggesting that with improved compression tech, the 512b/4GB 290X SKU would be supplanted entirely by Fiji, leaving the 256b/2GB and 384b/3GB versions of Tonga to fill the shoes of the R9 280/280X and R9 290/290X models?

            The above all assumes that the fully realized Fiji will sport around 4096 GCN 1.3 cores, by the way.

            • Damage
            • 5 years ago

            I think once Tonga is fully enabled, there probably won’t be much room for or point to Hawaii existing between Tonga and Fiji. I could be wrong about that, though. I suppose we’ll have to see what AMD does.

            • Eggrenade
            • 5 years ago

            Scott, you could make a very good rumor writer.

            I’m happy to have you writing reviews instead, though. I value your great reviews higher than even the best speculation.

            • Damage
            • 5 years ago

            Thanks, but why not both? 🙂

            I just posted this: [url<]https://techreport.com/news/27994/let-handicap-the-2015-gpu-race-for-a-moment[/url<]

    • tipoo
    • 5 years ago

    So they cut the double-precision rate relative to the last Titan… There goes the Titan being a “cheap” professional card, at least for some of its uses, I guess. Guess they didn’t want it cutting into Quadro anymore – why charge $1,000 when you can charge $2,499?

    OTOH, the people who bought it for compute were probably a much smaller fraction than gamers, and most pro uses that need double-precision performance probably also need ECC memory, which this does not have.

    But still, 1/32 DP… $999… Ehh, ah well, I guess I’m not the market, by about 600 dollars.

    • Vergil
    • 5 years ago

    So, if you’re planning on getting a monster over $600; then the R9 295×2 is your champ until R9 390x is out in the summer.
    If you want a beast under $400 then the R9 290x is the best choice.

    Overall, an underwhelming card for $999.99; it should be more like $600 or less. Nvidia needs to adjust their overinflated prices.

    PS: That BF4 performance on Radeons is pretty unusual…

    • anotherengineer
    • 5 years ago

    As for 3D power consumption, I think Nvidia’s variable 3D clocks and voltages help.

    [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/34.html[/url<]

    Compared to AMD’s clocks and voltages:

    [url<]http://www.techpowerup.com/reviews/MSI/R9_290X_Gaming/28.html[/url<]

    For quick and dirty estimates:
    AMD R9 290X ~ 1 GHz @ 1.2 V
    Nvidia Titan X ~ 1 GHz @ 1.0 V

    About 20% power savings in 3D right there.

      • derFunkenstein
      • 5 years ago

      I didn’t think power consumption was directly linear with voltage – or maybe I’m thinking about overclocking? …hmm

        • anotherengineer
        • 5 years ago

        It is linear to a certain extent; once you reach the upper end, it scales up exponentially around that frequency wall.

        • Meadows
        • 5 years ago

        No, you’re right.

        Assuming capacitance is equal between the two chips and the frequency stays constant (neither of which is going to be true), you’re actually saving around 30-31%.
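
        For anyone wondering where the ~20% and 30-31% figures come from: dynamic power scales roughly with C·V²·f, so at equal clocks and (assumed) equal capacitance only the voltage term differs. A tiny sketch of both estimates, using the approximate load voltages quoted earlier in the thread:

        # Linear-in-V estimate vs. the quadratic (C * V^2 * f) estimate.
        v_amd, v_nv = 1.2, 1.0                # approximate load voltages quoted above

        linear = 1 - v_nv / v_amd             # ~0.17 -> the "about 20%" figure
        quadratic = 1 - (v_nv / v_amd) ** 2   # ~0.31 -> the 30-31% figure

        print(f"{linear:.0%} vs {quadratic:.0%}")   # 17% vs 31%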

        • anotherengineer
        • 5 years ago

        A little GPU voltage scaling graph.

        [url<]http://www.techpowerup.com/reviews/AMD/R9_290X/31.html[/url<]

        Not perfectly linear, but fairly close for most practical purposes.

        Edit - I also like how it shows an extra 200 W of system power over that range. If AMD had used a 975 MHz GPU clock, lowered the voltage as much as possible, and used fast 7 GHz RAM like Nvidia, overall performance probably would have been about the same, but power consumption would have been quite a bit lower.

          • Meadows
          • 5 years ago

          Yeah, good luck reading whether that graph is linear with that 10px thick line and the scrunched up Y-axis.

          Edit: don’t strain yourself, it’s not linear.

            • anotherengineer
            • 5 years ago

            Well not logarithmic or flat out exponential either, except at the end.

            It’s called interpolation Meadows 😉

            • Meadows
            • 5 years ago

            It’s called physics. What you’re seeing is only muddied because they measured the entire PC as a whole.

            • anotherengineer
            • 5 years ago

            Of course, I agree, but at least it’s some data, which is better than no data.

            • Meadows
            • 5 years ago

            Funny thing is, some data is not always better than no data.

      • Laykun
      • 5 years ago

      I’ve not seen something that’s more apples to oranges in my life. Power is a result of both voltage AND ampage, saying lower voltage = more power savings is completely misleading.

        • anotherengineer
        • 5 years ago

        Power is amperage × voltage (in DC anyway). When working with DC, I = E/R; if your resistance is constant, then increasing the voltage increases the amperage and therefore increases the power. How is that misleading?

        However, we are talking about a semiconductor here, and it’s the same silicon, made at the same fab, on the same 28nm node; the actual process type I am not certain about.

        So in that regard it is close to an apples-to-apples comparison. If you want something a bit closer, you can compare another Nvidia card:

        [url<]https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_780/30.html[/url<]

        As you can see, it’s at approx 1GHz ~ 1.16V in 3D.

        Edit - And if one compares idle power between AMD and Nvidia, they are pretty close to even; where the power draw is noticeably different is in gaming/3D loads. So I think it’s reasonable to say that my hypothesis can hold water, even if it’s not the only thing contributing to power savings. I think it would be interesting to run a 290X at 1GHz at 1.00V (if it doesn’t crash) and see what power reduction there is, if any.

          • Laykun
          • 5 years ago

          [quote<]However, we are talking about a semiconductor here, and it’s the same silicon, made at the same fab, on the same 28nm node; the actual process type I am not certain about.[/quote<]

          With completely different numbers of transistors and chip sizes, resulting in completely different current ratings. My assumption here is that more transistors in use means more current.

            • anotherengineer
            • 5 years ago

            Well you didn’t mention that in your first post. Secondly if you compared like that you wouldn’t be able to compare anything.

            Let’s take a car analogy for example – they are usually compared in categories, compact, sub-compact, etc. etc. now all the cars are totally different, different transmissions, engines, manufacturers, tires, weights etc. So they compare them in generalized categories.

            By that comparison, two silicon gpu chips both made from the same silicon, at TSMC, at 28nm, is a pretty close comparison.

            • Laykun
            • 5 years ago

            [quote<]Well, you didn’t mention that in your first post. Secondly, if you compared like that, you wouldn’t be able to compare anything.[/quote<]

            I believe that’s why I used the term apples to oranges.

    • sschaem
    • 5 years ago

    What, the R9-290X is 4 dB quieter than the Titan X? Is the test correct? How can this be?
    Nvidia uses coolers made by the gods, and AMD uses a POS, right?

    So can a $330 card, with the same TDP as a $1000 card, be quieter and run cooler to boot?

      • geekl33tgamer
      • 5 years ago

      Based on the temps, noise, and make (Asus) of the 290X used, I would assume it’s a custom cooler – like almost all 290Xs today seem to be sold with*.

      *(because AMD’s one is/was useless!)

      • Meadows
      • 5 years ago

      I’d eat those words about “same TDP” if I were you, according to TR’s load results.

      Not to mention the card is supposed to compete with the GTX 980, compared to which it fails even more. …And after all is said and done, it still ends up being *slower* than that “mid-range” card.

      • HisDivineOrder
      • 5 years ago

      If AMD would make a reference cooler that didn’t push the card to the edge of overheating and/or use water cooling to overcome bad thermal design, you might have a point.

      Alas…

    • ish718
    • 5 years ago

    LMFAO?! $1000?

    So you are paying an extra $450 for a 15%-25% increase in performance from the GTX 980

    What a terrible value. Maybe not to the audience they’re targeting…

      • Airmantharp
      • 5 years ago

      It’s smooth at 4K, with the memory to handle anything within it’s useful lifetime. Still not worth $1,000 to most (or me), but it definitely delivers.

      • anotherengineer
      • 5 years ago

      Get a Titan Z instead then 😉

      [url<]http://www.ncix.com/detail/evga-geforce-gtx-titan-z-54-96879.htm[/url<]

        • Melvar
        • 5 years ago

        Effectively half the RAM.

    • sschaem
    • 5 years ago

    My takeaway, though it’s hard to see in the conclusion:

    The Titan X, for about the same power consumption and heat generation of a r9-290x, is 25% to 65% faster depending on the game you play at 4K.

    (But I guess the R9 is a hot potato that requires a nuclear plant and is causing the ice caps to melt, while the Titan X is all cool in the eyes of many, even though it generates the same heat at idle or while gaming.)

    Another note: I see the same game tested at PCPer giving different results.
    Shadow of Mordor is 10% faster on an R9-290X than a GTX 980 in PCPer’s test.
    (4K ultra or 4K very high)
    But in this review the R9-290X is 8% slower?

    CPU / driver difference / ultra preset diff / game build?
    But it’s interesting how the same game tested can show a 10% lead in one review vs. an 8% deficit in another.

    While I’m on the subject : Suggestion, the CPU you guy used is very costly and is not the best for gaming. (*3ghz* with a max turbo at 3.5ghz) Why not use a 4790K ?
    its 25% faster and will better represent the gaming community, but most importantly better isolate the GPU by reducing CPU bottleneck.

      • MadManOriginal
      • 5 years ago

      I concur regarding the CPU. I don’t think anyone building a gaming PC would consider that CPU a good choice, and even TR doesn’t in their system guides.

      • f0d
      • 5 years ago

      I’m guessing they did it this way so all their testing with video cards will be on the same CPU
      (as in, a test done a few months ago with a video card can be compared with this test today with a different video card).

      That said, it is a tad slow. Instead of a completely different CPU, why not overclock the 5960X? I have seen those 5960Xs hit 4.4-4.8 fairly easily.

        • sschaem
        • 5 years ago

        First: for the downvotes, I know I’m overly critical of TR. But I think I have a point, even if it’s minor.

        And this did cross my mind: “What would be the reference CPU I want to use for the next few years for cross-reference?”
        The 5960X is a beast, and might be the top option for future DX12 games…
        But I think it will be a long, long while before this CPU beats an i7-4790K in gaming (even with DX12).

        I also agree, overclocking to 4GHz would mitigate the ‘single core’ concerns.

        We know that DX11 is a pig of an API, and will devour any single-core performance.
        So is there any chance even a 5960X is causing a driver / DX11-based bottleneck, and the GPU game benchmarks are not truly reflective?

        If not, is anyone down-voting willing to post data showing it’s a non-issue?

      • JustAnEngineer
      • 5 years ago

      [quote=”sschaem”<] The Titan X, for about the same power consumption and heat generation of a r9-290x, is 25% to 65% faster depending on the game you play at 4K. [/quote<]

      Titan X is 330% of the cost of a Radeon R9-290X. Isn't that a more important metric than the similar power consumption?

        • geekl33tgamer
        • 5 years ago

        Exactly. I have never understood why people will pay so much more for something because it’s going to use 50-100W less power.

        Unless your graphics card is running flat out 24/7×365, you’ll likely never recoup the higher cost of the card through any power saved at the wall during its useful life.

    • Meadows
    • 5 years ago

    As for this new VRAM rage, Mr Wasson, I do have a recommendation:

    When testing the games, momentarily check how much VRAM is actually in use. This should make it crystal clear whether a particular game (at a particular resolution) chokes because of memory or whether it’s because of some other resource.

    A number of tools, most notably MSI Afterburner, will let you graph it with the update tick period of your choosing.
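
    (One low-tech way to capture that data alongside a benchmark run, assuming an Nvidia card and that the stock nvidia-smi utility is on the PATH, is to poll the driver from a script; a minimal sketch in Python:)

    import subprocess, time

    # Log VRAM usage once per second via nvidia-smi; stop with Ctrl+C.
    while True:
        used_total = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ]).decode().strip()
        print(f"{time.strftime('%H:%M:%S')}  {used_total} MiB")
        time.sleep(1)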

      • Flapdrol
      • 5 years ago

      But many games use a dynamic system for textures; an open-world game could easily fill 10GB if it only purges when needed.

        • Meadows
        • 5 years ago

        That would pretty much defeat the purpose of the “texture quality” setting in basically every game except Rage, wouldn’t it?

        In addition, there are variables such as AA and the memory overhead of some other effects, on top of the resolution itself.

        If all else fails, the graphing feature in such diagnostics software allows you to check whether VRAM is managed according to a specific target or whether usage just piles up forever.

          • Flapdrol
          • 5 years ago

          Yes, in some games the setting is indeed somewhat redundant.
          [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/33.html[/url<]

      • anotherengineer
      • 5 years ago

      Is there not a period after Mr.??

      😛

      Edit – Can GPU-Z be used to log memory usage also?

        • Ninjitsu
        • 5 years ago

        GPU-Z can, but I don’t really like its log format (when I tried last). As I mentioned below, HWiNFO is pretty good for this job.

      • Ninjitsu
      • 5 years ago

      I was about to suggest VRAM usage with something like HWiNFO (it lets you log FRAPS data too).

    • the
    • 5 years ago

    601 mm² die size. Hrm, reminds me of a discussion where someone was convinced that it’d be below that value and where I disagreed.

    That die size really points toward the need to go to 20 nm or 16 nm FinFET. What we’ve seen here really isn’t ‘Big Maxwell’ but rather a medium Maxwell that didn’t make it to its intended process. A year from now I’d expect to see the real big Maxwell with more impressive DP numbers. The real question for the future big Maxwell is how much fixed-function GPU logic (ROPs, TMUs, etc.) will be on the chip. With the push for 4K gameplay, a lot of die space has been allocated to them, where they would be just taking up space in a compute product like Tesla. Also, with nVidia indicating that stacked memory is for the next Pascal generation, it wouldn’t surprise me if the real big Maxwell came with a 512-bit wide bus.

    [quote<]Maxwell-based GPUs are awfully efficient compared to their Kepler forebears, and I don't think we know entirely why. We'll have to defer that discussion for another time.[/quote<] I see what you did there. 🙂

      • chuckula
      • 5 years ago

      [quote<]Maxwell-based GPUs are awfully efficient compared to their Kepler forebears, and I don't think we know entirely why. We'll have to defer that discussion for another time.

      I see what you did there. :)[/quote<]

      Your statement renders me confused.

      • Pwnstar
      • 5 years ago

      High end GPUs are jumping straight to 16/14nm. But since those are basically 20nm anyway, you’re right.

    • chuckula
    • 5 years ago

    The good: Performance/watt.
    The bad: Absolute performance will probably trail the R9-390X (although not by leaps & bounds).
    The ugly: The price, especially since this is a gamer-only card and it’s not like the original Titan where the strong dual-precision performance actually made the price very reasonable.

    Good news for Nvidia: cutting the price, while abhorrent to Jen-Hsun, is still an option to keep parity with the R9-390X when it shows up. Additionally, while the Titan X is probably a little slower at stock speeds, the reasonable power consumption + OC headroom will make up some of the difference, since the R9-390X is likely going to be closer to its maximum thermal envelope at default speeds.

      • HisDivineOrder
      • 5 years ago

      I’ve got this feeling in my bones that if any of the rumors circulating around the R9 390X are true, it won’t be that much cheaper than the price they’ve got for the Titan X. More than that, if the rumors that they increased the VRAM to 8GB and that’s all at the full speed they’re promising to hit, then I REALLY don’t think they’ll price it much lower, if at all lower.

      And nVidia’s pretty good about NOT doing a price drop when it comes to their “Titan” series. I suspect right when that might be about to happen, that’s when we might get an almost identically equipped “980 Ti” to replace it in the eyes of gamers everywhere.

      Though I doubt they’ll call it a 980 Ti and miss the chance to sell a whole bunch of cards with a rebrand a la when the Titan was cut down and released as the 780 Ti, triggering a whole bunch of rebrands of the 680 (as the 770) and even more cut down versions of the 670 (as the 760).

        • chuckula
        • 5 years ago

        Yeah, the AMD fanboys expecting it to go for $599 are in for a disappointment (unless AMD decides to go out of business selling these cards). AMD will still try to be cheaper than Nvidia, but it’s going to be a much smaller margin than what they’ve tried in the past.

        I’m expecting $799 for the 4GB models and either $899 or $999 for the 8GB models if they actually show up at the initial launch.

          • geekl33tgamer
          • 5 years ago

          I’m expecting it to be about £700 or so, as they will (hopefully) try and sell it cheaper than the Titan X (rumoured to be £850, but nowhere is listing it yet).

    • Meadows
    • 5 years ago

    Allow me to let out a sigh as the GTX 980 is apparently suddenly “mid-range” now.

      • USAFTW
      • 5 years ago

      Everyone knew that GM204 was midrange-performance. Exhibit: GF104, GF114, GK104, and finally GM204. Gx110 and Gx100 are where the highest-performance Nvidia chips on a particular node slot in, and they’re always 500+ mm².

        • Meadows
        • 5 years ago

        500 bucks is not mid-range.

          • swaaye
          • 5 years ago

          How much did GTX 680 sell for initially? 😉

          • USAFTW
          • 5 years ago

          In an ideal world, where a stronger AMD would put pressure on Nvidia in the midrange segment, the 980 would cost $250-300. But when performance justifies the price, who would complain? That’s a sure sign of what sort of prices we could expect from Nvidia if it weren’t for AMD undercutting them. Suddenly low-end is mid-range and mid-range is the new high-end.
          And high-end costs as much as a Smart city car.

            • swaaye
            • 5 years ago

            I think if AMD could better compete, the magical duopoly in this industry would likely have them matching each other’s desire for higher pricing. That’s how it worked back in the days before NV started building 500mm² GPUs, at least.

            However AMD’s market share is so low it’s hard to tell what they might try.

            • USAFTW
            • 5 years ago

            True. Maybe the $999 tag is justified considering the premium build and 12GB framebuffer, and a 601mm² die at TSMC shouldn’t come cheap. Getting good yields on the GM200 should be a nightmare.

            • Ninjitsu
            • 5 years ago

            While I agree with the reason you’ve given for GM204’s price, I’m afraid AMD would do the same if they had such a lead.

          • derFunkenstein
          • 5 years ago

          Neither is the performance of the GTX 980 compared to AMD’s stuff. It’s probably overpriced, but it’s not as bad as it first seems – the scatter plots bear that out. It moves to the “bigger” end on both the X and Y axes. It costs a lot, but it performs a lot, too.

          • Prestige Worldwide
          • 5 years ago

          Correct. An Nvidia mid-tier chip at a high-end price is what we call “Milk Range”.

          • MadManOriginal
          • 5 years ago

          The GTX 980 was never the value card anyway; the GTX 970 blew everything away in performance/dollar when it came out.

          • l33t-g4m3r
          • 5 years ago

          Nvidia thinks it is, lol.

            • Meadows
            • 5 years ago

            No, TR does. Read page 3 of this review.

            • l33t-g4m3r
            • 5 years ago

            What? You’re on crack. Everyone knows the 980 is mid-range. I was just agreeing with your price analysis. $500 is NOT acceptable pricing for a mid-range card, while Nvidia obviously DOES think it is acceptable. I also think it’s even less acceptable to charge $1K for a normal high end GPU.

            This change in product line means anyone who wants a compute enabled NV card will have to shell out for a Quadro, because the Titan is now compromised. We still get the benefit of paying NV’s Titan pricing though. *gag*

          • Prestige Worldwide
          • 5 years ago

          But the architecture inherently is. They just pushed all of their cards up a price tier starting with the 600 generation.

          • anotherengineer
          • 5 years ago

          $700 bucks is not mid-range either, closer to $800 after tax.

          [url<]http://www.newegg.ca/Product/Product.aspx?Item=N82E16814487079&cm_re=gtx_980-_-14-487-079-_-Product[/url<]

          • Tirk
          • 5 years ago

          Indeed the performance of the 980 is nice but its price is not. Even at release it seemed much more like a $299-$349 product.

          Titan X seems to be the $500 product (maybe that will happen with a 980 Ti?)

          Although obviously I’m not in the market for a Titan X or 295 X2, so for those going for the $800-$1200 product they might see value where I do not.

          If a 980 Ti product arrives, or when the 390X shows up, there might be a more engaging proposition for the price.

          • Pwnstar
          • 5 years ago

          That’s his point. nVidia sold you a mid-range GPU for high end prices.

          lmao

        • Krogoth
        • 5 years ago

        GM204 may be classified under the Maxwell family as a mid-tier chip, but it is actually closer to a high-end part, since the silicon itself is nearly as massive as a Hawaii or GK110 chip.

      • Billstevens
      • 5 years ago

      The 980 is still high end… but yeah, you can’t go more than about a year as number 1 if you’re lucky…

      Now, with Nvidia creating the $1000 class of single GPU, it’s even harder.

        • HisDivineOrder
        • 5 years ago

        Actually, it’s the people who BUY the $1k GPU that make a $1000 “class” viable. If people didn’t buy them, nVidia wouldn’t price them that way. They’d just sell their high end parts at the $550-650 they once did long, long ago.

        But if you tell Jen he can get $1k for a GPU instead of $550-650, well, what do you think he’s going to do? Leave money on the table?

        Lack of competition feeds this instinct in him. It’s the reason he thought he could get away with a $1k Titan in the first place.

      • HisDivineOrder
      • 5 years ago

      We all knew it was mid-range. nVidia thinks of $550 as the new $200 mainstream price point. That’s why they figured that “low-end” $330 card’s real specs didn’t matter so much.

        • Krogoth
        • 5 years ago

        Wrong, the 980 has always been pitched as a high-end part at launch.

        The mid-tier parts are the entire 960 line-up.

        What really happened is that GPUs outpaced the fab tech, and Nvidia no longer releases the big version of their architecture to the market first because of yield issues. They got burnt by Fermi and saw how well the mid-range version sold versus big Fermi. They decided to release mid-Kepler first and big Kepler later. This proved to be far more successful, so Nvidia decided to repeat it with Maxwell.

          • Prestige Worldwide
          • 5 years ago

          Your argument:
          – 980 is high end
          – 960 is mid range
          – Big Fermi GF100 was difficult (yields, power consumption, heat I assume?)
          – 460 sold like hotcakes

          Your conclusion:
          It is OK to sell this generation’s GTX 460 at GTX 480 prices and sell this generation’s GTX 480 for $1000

          Wat

            • Krogoth
            • 5 years ago

            Reading comprehension, my friend.

            You are confusing hardware design with how it is marketed.

            The 980 is technically a mid-range design from a hardware standpoint, but it is marketed as a high-end part because Nvidia has abandoned the route of launching the flagship big design first and selling lesser designs later.

            They got burned so badly by the GF100 fiasco, and saw how GF104 outsold GF100 by a good margin, that they decided to delay the GK110 launch and release GK104 first in the next round. This proved to be so successful that they played the same card with the Maxwell round. So far, the Maxwell round is working out the same way.

            • HisDivineOrder
            • 5 years ago

            Pretty sure they released the Geforce 680 (GK104) first because AMD released the 7970 as a card that gave 10% performance improvement over the product generation it was replacing at 10% more cost.

            As a result, nVidia–who had planned around offering a LOT more performance than that–had an opportunity to release the GK104 (which had been planned to be a replacement for the x60 line) and call it a high end. Tweaking the clockspeeds a bit, they could offer the same performance as what AMD had convinced people was “high end performance” for that generation. As a bonus, they could knock $50 off the MSRP of the 7970.

            This was obvious when nVidia was asked to comment on the new AMD line and their response was, “We were expecting more.”

            So much more in fact, they could delay Big Kepler, focus it on Tesla and other compute-related industries, and put them all toward the data farm from which Titan eventually got its new monicker.

            All thanks to AMD releasing the 7970 at $550 with incredibly shoddy drivers that couldn’t even do CF reliably. Later, the 7970 got its drivers up to a point where they had a decent advantage over Kepler, but by then word of mouth and the legend of nVidia’s trumping AMD with a mid-range part had taken hold.

            Ever since then, the Gxx04 part has been “good enough” to stand in for the high end when AMD sits out a generation.

            • Meadows
            • 5 years ago

            Model numbers ending in 80 or 800 have always been the standard “high-end” for NVidia cards going as far back as the 5000 and 6000 series.

            Your backwards explanation doesn’t make that much sense in this context.

            • Prestige Worldwide
            • 5 years ago

            So because their marketing department used the high-end branding, it must be a high-end card?

            I seem to remember that there was once a time when 9 out of 10 doctors agreed smoking was good for you.

            Flex those critical thinking muscles for a moment and look at the specs.

            GF104 was the mid range GTX 460.
            GF100 was the high end GTX 480.
            GF114 was the mid range GTX 560.
            GF110 was the high end GTX 580.
            GK104 was the “high end” GTX 680 (hmmm)
            GK110 was the real high end chip, and was rebranded the GTX Titan.

            The fact of the matter is that Nvidia saw that their midrange chip could hold its own against AMD’s high end full Tahiti chip that launched as the HD 7970 so they decided to rebrand their midrange chip as the GTX 680 and create a new “ultra enthusiast” GTX Titan with their actual high-end chip.

            The 7970 underdelivered and Nvidia capitalized on it.

            The same trend continues in the GTX 980 / Titan X generation. It is very obvious to just about anybody who has been following GPU tech for a few years.

          • Melvar
          • 5 years ago

          I was under the impression that they release GPUs in whatever order gives them as many “fastest Nvidia product available” cycles as possible.

          If they had released the Titan X and the 980 at the same time, all the people with more money than sense wouldn’t even get the chance to buy both; they’d just get the TitX and skip the 980.

          When is this thing supposed to hit the stores? My 980 is feeling awfully slow.

      • Andrew Lauritzen
      • 5 years ago

      It’s midrange in the same way that a 4770K is “midrange” (compared to a HSW-E) really. i.e. yes professional-level stuff is going to be faster, but the price/performance starts to get a bit questionable for most users vs. upgrading more frequently.

      • paulWTAMU
      • 5 years ago

      let me just hug my 260x and try to reassure it that it’s adequate

    • rogthewookiee
    • 5 years ago

    Will the cut-down version be the Titan V or the Titan ^?

      • Thrashdog
      • 5 years ago

      In all seriousness, NVidia missed an opportunity to fix the product-naming nightmare they created with the first Titan here. To my thinking, they should have called this one the Titan IX, and then when they come out with 10-series cards, *that* Titan would be the Titan X, and so on.

        • HisDivineOrder
        • 5 years ago

        They should have just called them all Titan. Then the fine print would be a model number or something.

        What could possibly follow Titan X? Titan XL? Titan X2 (implying SLI and a new Titan Z)? I think they’re painting themselves into a corner.

        I’d prefer they just come up with a new monolithic awesome name of awesome for each generational change, but… Titan’s pretty awesome in terms of naming.

          • the
          • 5 years ago

          My guess for the name of a dual GM200 card would be Titan ZX.

          Personally I would have just called this the Titan M, m as in murder… I mean Maxwell.

            • MrJP
            • 5 years ago

            Then a version with extended colour depth called the Titan ZX Spectrum?

            • HisDivineOrder
            • 5 years ago

            Riva humor, haha.

          • Milo Burke
          • 5 years ago

          The Titan XL has like way four more cylinders than the standard Titan.

            • JustAnEngineer
            • 5 years ago

            The [url=http://historicspacecraft.com/Rockets_Titan.html#Titan-4B<]Titan IV[/url<] could put 6½ tons into GEO.

    • USAFTW
    • 5 years ago

    Just had a nerdgasm glancing over the die shot and the die size figure, I mean holy cow!
    16nm FinFET HP can’t arrive soon enough.

    • guardianl
    • 5 years ago

    The almost perfect scaling (50% more ALUs, Memory BW etc.) versus a modest drop in clock speed is pretty interesting. Although most people don’t care, getting perfect linear scaling in the same generation of GPUs is pretty neat (at lower clock speeds!), and not usually achieved.

    Other than that… 82% more $$$ for 33% more performance compared to a 980? No DP compute basically? The first Titan was a raw deal for gamers, but at least the DP meant something. Now it’s just a halo gaming product at a new inflated price.
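
    (The back-of-the-envelope math behind those figures, assuming the GTX 980’s $549 launch price and taking ~33% as the average performance delta:)

    # Price premium vs. performance premium, Titan X relative to GTX 980
    titan_x_price, gtx_980_price = 999.0, 549.0  # USD launch prices
    perf_ratio = 1.33                            # assumed ~33% average performance advantage

    price_ratio = titan_x_price / gtx_980_price
    print(f"Price premium: {price_ratio - 1:.0%}")                    # ~82%
    print(f"Perf per dollar vs. 980: {perf_ratio / price_ratio:.2f}")  # ~0.73x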

      • willmore
      • 5 years ago

      Hey, it could cost $1499.

        • guardianl
        • 5 years ago

        Don’t give them any ideas! 🙂

      • Krogoth
      • 5 years ago

      On paper, but not in practice.

      The Titan X yields only 20-40% more performance than the 980. It is almost like the jump from 680 to 780, except for the price.

    • albundy
    • 5 years ago

    “The Titan X is outright faster than everything we tested, including the Radeon R9 295 X2, in our frame-time-sensitive 99th-percentile results.”

    The 295 X2 is ahead of the FPS pack on most of the games you tested, some by a large margin. 99th-percentile results won’t do you much good if your card can’t push higher rez at decent FPS in the near future. It’s also half the price of the Titan X.

      • Damage
      • 5 years ago

      [url<]http://i.lvme.me/v8ccqht.jpg[/url<]

        • albundy
        • 5 years ago

        lol, nice! I stand corrected. I was thinking of the time spent beyond X.

        • Meadows
        • 5 years ago

        Admit it, you’ve been waiting for the opportunity to link that.

      • geekl33tgamer
      • 5 years ago

      I think Scott was saying it’s to do with the better consistency of frame delivery that the Titan offers. I don’t have a 295, but I do have a pair of R9 290Xs in CrossFire and can attest that despite getting really high frame rates, it appears jittery right across the board.

      Not as bad as when the R9 was new thanks to better drivers, but it’s still not perfect.

      Edit: My spelling… 😉

        • sschaem
        • 5 years ago

          If you ever plan to get a VR headset, all this will go away and you will get 100% perfect scaling across both GPUs.

          The Titan X is not going to age well at all… (Unless people are ready to buy another one to put them in SLI mode.)

          • Melvar
          • 5 years ago

          [quote<]The Titan X is not going to age well at all...[/quote<] Unlike every other video card ever.

            • sschaem
            • 5 years ago

            Well, in a sense things are shifting if VR takes off this year.

            VR is finally the killer app for SLI/CrossFire configurations.
            It offers simple and perfect dual-GPU scaling.

            What I’m hinting at: in that environment the tables will be turned.
            Solutions like the R9 295 X2 will consistently be 30% faster and have lower frametimes across the board.
            You will get a better experience by keeping a GTX 970 and adding one in SLI than by upgrading to a Titan X.

            The only time you will beat those setups is if you are ready to plow in another $1000.

            Kind of sux to spend $1000, then in 6 months, instead of being at the top of the chart, you are near the bottom (not second place, but the bottom).

            Assuming VR takes off 🙂 (if VR is not of interest, then I would go with a Titan X vs two GTX 980 anyday)

            • Voldenuit
            • 5 years ago

            [quote<]VR is finally the killer app for SLI/CrossFire configurations.
            It offers simple and perfect dual-GPU scaling.[/quote<]

            Are you saying scaling will be perfect because each GPU can render to each eye? If you think frame stutter is bad, just wait till you see how bad nausea gets with individual eye lag.

            VR will have overhead requirements for syncing display output; it probably won't be too different from AFR or split-screen SLI/XF.

            • sschaem
            • 5 years ago

            AFR is a huge problem because the game logic needs to sync with the render time of each frame, but it can’t.

            With VR, it’s the same frame being rendered. Same time. 100% the same as single-GPU rendering.
            The same frame data is used, the same frame time is used; the only real difference is the camera origin. (Meaning all the draw calls, everything, is the same.)

            With AFR, you also need to share time-based data across GPUs. You do not with ‘VR’.
            If the GPU needs the previously rendered frame for motion blur or another effect, that frame exists on the other GPU.

            That’s why it’s a nightmare for the driver to make it work… but it’s a non-issue with VR.

            Also, because both frames are from the EXACT same game time, only with a slight shift in view origin, the render times will be equivalent.
            Split screen renders very different views. Please check any and all VR screens showing both eyes; they are near identical.
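
            (A toy sketch of that idea in Python: the same per-frame camera state yields two view matrices that differ only by a half-IPD offset along the camera's right vector, so each GPU can render one eye from identical frame data. The numbers and the look_at helper are illustrative, not any engine's actual code:)

            import numpy as np

            def look_at(eye, target, up):
                """Build a simple right-handed view matrix."""
                f = target - eye
                f = f / np.linalg.norm(f)
                r = np.cross(f, up)
                r = r / np.linalg.norm(r)
                u = np.cross(r, f)
                m = np.eye(4)
                m[0, :3], m[1, :3], m[2, :3] = r, u, -f
                m[:3, 3] = -m[:3, :3] @ eye
                return m

            # One camera per game frame; only the eye offset differs between the two renders.
            ipd = 0.064  # assumed interpupillary distance in meters
            cam_pos = np.array([0.0, 1.7, 5.0])
            cam_target = np.array([0.0, 1.7, 0.0])
            up = np.array([0.0, 1.0, 0.0])

            forward = cam_target - cam_pos
            forward = forward / np.linalg.norm(forward)
            right = np.cross(forward, up)
            right = right / np.linalg.norm(right)

            view_left = look_at(cam_pos - right * ipd / 2, cam_target - right * ipd / 2, up)
            view_right = look_at(cam_pos + right * ipd / 2, cam_target + right * ipd / 2, up)
            # Same draw calls, same frame data: GPU 0 renders with view_left, GPU 1 with view_right.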

            • the
            • 5 years ago

            Render times between both eyes are very similar, as you are correct that each frame will be very similar. That does not mean identical, though. There are lots of subtle situations where one eye would be able to get a view of a long hallway, or an area with an extra light entering the scene, for example. Thankfully it isn't too difficult to sync things by moving on to the next frame after the laggy eye completes rendering, [i<]if necessary.[/i<] Syncing like this is great for a system that isn't fast enough to drive each eye independently at full frame rate.

            Interestingly enough, keeping the GPUs perfectly in sync is only needed if the VR headset has a single display that spans both eyes. To compose a single frame for the display, half of the image needs to be sent from one GPU to the other for composing the final frame buffer. There are a few tricks here, as the entire buffer doesn't need to be moved before starting to send the frame to the headset. Streaming half the frame from one GPU to another is possible and should take a fixed amount of time. Thus the operations to sync to a single frame buffer and to send that frame buffer to the headset can overlap slightly.

            DisplayPort here is also interesting, as refreshing a screen doesn't have to occur in the traditional scan-line method that CRTs used ages ago (and which DVI/HDMI inherited). A single panel can be seen as two tiles and treated as two displays for VR. The bane of 4K displays on the desktop can be an advantage here for VR.

            If the headset uses two panels, each eye can refresh at different times; think of it as dual G-Sync or adaptive sync. If the rendering time for each eye is similar, it would be simpler to continue syncing after each pair of frames, but it isn't necessary. Unlike AFR, the overhead for frame pacing isn't needed either, since the frames from each GPU are not going to the same display. Ironically, AMD's older stutter-prone CrossFire drivers from years past would be ideal here.

            • Melvar
            • 5 years ago

            [quote<]Kind of sux to spend $1000, then in 6 months, instead of being at the top of the chart, you are near the bottom (not second place, but the bottom).[/quote<]

            The bottom of what? It'll still be faster than all the cards slower than it.

            Anyhow, VR may be a killer feature for me down the road, but currently I'm trying to game at 4K and I'm hitting the memory wall hard. I would definitely not get a better experience with 2x 980s than 1 Titan X, because a 980 is already too fast for 4GB. 1080p requires 4GB now. Call it lazy developers if you want, but some games (e.g. Dying Light) will stutter on a 4GB card unless you turn settings down or turn the resolution down to 1080p.

            Performance being similar, 12GB for $1000 right now is looking a lot better to me than 8GB for $700-800 in June.

            • thecoldanddarkone
            • 5 years ago

            Seriously, Dying Light? The game that was SO HORRIBLY CODED IT ATE OVER 7 gigs of RAM before 1.4. Look, at least use a good example…

            • Melvar
            • 5 years ago

            That [i<]is[/i<] a good example. It really does have whatever system requirements it has, regardless of if it has a good justification for them.

            • thecoldanddarkone
            • 5 years ago

            No it’s not, it’s a good example of beta apps that shouldn’t even be launched.

            Just to be clear, I don’t buy games that should still be in beta. When your game goes up 15-20 percent on performance and uses 2-3 gigs of less ram, thats garbage.

            • Melvar
            • 5 years ago

            Please explain how complaining about the quality of developers is going to make current games from current developers run faster. I’ve been thinking it was faster hardware that did that.

            • JustAnEngineer
            • 5 years ago

            A few graphics cards have stood out as good choices.

            GeForce256 was a game changer.
            Radeon 9700 Pro provided high performance against [b<]two[/b<] successive generations of GPUs from the competition.
            Radeon R9-290 is still the best value available, a year and a half after its launch...

            This overpriced $1000 Titan is going to look very lame when mid-range GPUs based on the next manufacturing process finally arrive.

            • Terra_Nocuus
            • 5 years ago

            Oh man, I loved my 9700 Pro. That card was amazing for its time.

          • slowriot
          • 5 years ago

          I don’t think people buying $1000 graphics cards are all that concerned about how well they age. Something tells me they’ll just buy the next $1000 graphics card.

          • geekl33tgamer
          • 5 years ago

          No video card has ever aged well?

            • Laykun
            • 5 years ago

            The ATi Radeon 9700 Pro absolutely aged well.

            • geekl33tgamer
            • 5 years ago

            Oh, good point. I had a GeForce 4 during its period of existence, and thinking about it, I held onto that for a while too because of the dust buster debacle that replaced it. I skipped Nvidia FX entirely. Ended up with an ATI X800XT when I finally upgraded.

            Think it had a good 4 year run. That would be unheard of today!

            • auxy
            • 5 years ago

            Lots of people still running Fermi GPUs. 2010-now.

            • the
            • 5 years ago

            The 8800GTX also aged very well. It has a remarkably long life span.

            More recently I’d cite the Radeon 7970/GTX 680 as a pair of cards which are still very respectable for gaming at 1920 x 1080 or 2560 x 1440 today. That’s three years of usefulness. While the faster Hawaii and GK110 chips have appeared since then, outside of 4K gaming there wasn’t much pressure for owners to upgrade. It took the launch of the GTX 980 to deflate prices of the Hawaii and GK110 based cards to make purchasing them worthwhile form a price/performance stand point for owners of the Radeon 7970/GTX 680.

            However, I don’t think that the Titan X and coming R9 390X will age very well. They are 28 nm based chips but 16 nm FinFET is right around the corner. HBM memory is also a very big deal and while the R9 390X is rumored to use it, the next generation of HBM has already been announced which gives another massive bandwidth boost. (I suspect that nVidia’s Pascal architecture will start off using the second generation of HBM.) Essentially a year from now we’ll be seeing a rather large performance again.

            • Vaughn
            • 5 years ago

            My 7970 GHz card is aging well also.

            It’s now only 10% behind the performance of the 780.

            • Essence
            • 5 years ago

            The 7970/280X says hello; even the 290X rips the 780 Ti and Titan/780 a new one. The problem was it took over 6 months to do it. At 4K the 290X is only 25% slower than the Titan X, and the 290X is the same or better than the 980 at 4K.

            I would think the 290X aged pretty darn well, wouldn’t you? And it’s the same story with the 280X at the “middle” end (aka the 7970).

        • Chrispy_
        • 5 years ago

        Multi-GPU is, and will probably always be, a kludge.

        I ditched it for gaming years ago, but you and a few others bravely soldier on with it, reminding us with your forum threads how horrific, broken, and disappointing it generally is.

        Unlike you brave few, I don’t game on multiple monitors at 144Hz.

          • geekl33tgamer
          • 5 years ago

          I hear SLI is better (if you need it), but yes – I won’t be using Crossfire again when the R9 290Xs are replaced in this system. 😉

            • Zizy
            • 5 years ago

            SLI was better, but AMD fixed CF by eliminating the bridge and it has been better since.
            Neither solution is good, though. AFR is just a poor way to do it, but anything else is much harder.

            • geekl33tgamer
            • 5 years ago

            CF is not fixed, and removing the bridge connectors created new problems with bandwidth.

            The 990FX chipset is dated at any rate, but in an all-AMD system like I have, the PCI-E 2.0 16/8/8 setup chokes the HT link back to the CPU. The last thing AMD needed to do was add extra bandwidth usage to the PCI-E bus, which can already be pretty stretched in CF.

            I accept it is better than it used to be, but yes – neither solutions are great, and they are far from fixed.

            • geekl33tgamer
            • 5 years ago

            Down voted for an all AMD system? Got it…

            • Airmantharp
            • 5 years ago

            You said, ‘CF is not fixed’, and got downvoted by people that apparently haven’t read the article that you are commenting on…

      • USAFTW
      • 5 years ago

      Frame delivery and overall smoothness is where it matters. FPS is nothing if those pesky frames won’t even show on the screen or there are huge frametime spikes. Honestly, I applaud the effort TR puts into their GPU reviews and there are still so many so-called professional review sites (Anandtech, TPU) that are stubbornly sticking to raw FPS figures when it comes to evaluating GPUs.
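
      (For anyone new to the metric, a quick sketch of how a 99th-percentile frame time falls out of per-frame render times; the numbers below are made up purely for illustration:)

      import numpy as np

      # Hypothetical per-frame render times (ms) from a benchmark run, including two spikes.
      frame_times_ms = np.array([16.7, 17.1, 16.9, 18.0, 16.8, 45.0, 17.2, 16.6, 17.0, 33.0])

      avg_fps = 1000.0 / frame_times_ms.mean()
      p99 = np.percentile(frame_times_ms, 99)  # 99% of frames complete faster than this

      print(f"Average FPS: {avg_fps:.1f}")
      print(f"99th-percentile frame time: {p99:.1f} ms ({1000.0 / p99:.1f} FPS equivalent)")
      # A healthy average FPS can hide a poor 99th-percentile figure when a few frames spike.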

        • l33t-g4m3r
        • 5 years ago

        That only holds true for old monitors. This won’t matter as much when adaptive sync becomes the norm.

          • Damage
          • 5 years ago

          Don’t make me post that meme again.

          Fixing display output timing doesn’t fix GPU frame render timing.

            • l33t-g4m3r
            • 5 years ago

            No, it doesn’t. BUT, your display isn’t dropping unsynchronized frames either. So, is 60 “stuttery” frames per second on a adaptive sync display as bad as a normal display? I think I’d need to see slow motion video footage of that. FOR SCIENCE.

      • SHOES
      • 5 years ago

      99th percentile is ALL that matters when you can’t stand micro-stutter.

      • Hyp3rTech
      • 5 years ago

      What’s with all the downvotes? He’s right. I mean, 99th percentile is nice but overall fps is more important for most people. Ignoring the fact that the 295×2 is not only butt-ugly, but takes over 500 watts of power. It also runs cooler (by 20 degrees C!) and is a little quieter.

        • auxy
        • 5 years ago

        You don’t understand what the 99th percentile measurement means, do you? (;´・ω・)

          • Hyp3rTech
          • 5 years ago

          Glad you mentioned it. I DO. The worst the 295×2 did in this test was about 30 (maybe less) milliseconds slower than the Titan X, and it did better in most games than the 980 and 780 Ti. I don’t see you complaining about them.

    • Prestige Worldwide
    • 5 years ago

    4K benchmarks for BF4… come on. Ain’t nobody got time fo’ playing an online first person shooter at 48fps.

      • geekl33tgamer
      • 5 years ago

      Dropping the 4 x MSAA at 4K will probably help a lot. 😉

        • HisDivineOrder
        • 5 years ago

        I see these benchmarks that inflate the difficulty to drive the game at a given resolution, but AA is absolutely the last thing they drop.

        And I ask myself are there really people who will suffer lower framerates and reduced effects just to have 4X or higher AA? Because it’s the first thing I turn off.

        Note: I game at 1600p.

          • auxy
          • 5 years ago

          Nobody cares what you do. Anti-aliasing is non-negotiable for some of us.

            • geekl33tgamer
            • 5 years ago

            You really don’t need AA at 4K with a 28″ or smaller screen.

            • Melvar
            • 5 years ago

            If you want your in-game geometry like stairs and barred gates to not flicker when you move you still need AA.

            • l33t-g4m3r
            • 5 years ago

            You know, FXAA would be [i<]perfect[/i<] for 4k. It's not like you'd actually be losing any noticeable detail at that resolution.

            • Melvar
            • 5 years ago

            4K is not such a high resolution that you can’t see all the detail.

            I can tell you from personal experience that FXAA looks just as bad at 4K as it does everywhere else. It’s due to the way it works (AFAIK, it detects contrast, not aliased edges); even with DSR compensating for the reduction in detail, FXAA degrades the image quality more than SMAA does without DSR.

            • auxy
            • 5 years ago

            I have experience with 4K at 24″ and 28″.

            You still need AA with 4K on a 24″ screen. End of discussion. Shimmer is still a problem.

            • Melvar
            • 5 years ago

            Yeah, it seems like a lot of people are under the impression that AA is just for smoothing edges, but real rendering based AA (e.g. SSAA, DSR/VSR, etc.) greatly increases the amount of detail that’s encoded in the same number of pixels. It improves nearly every aspect of image quality.

            And 4K is not the crazy out-there resolution people think it is. It’s hard for current GPUs, but at 28 inches without AA it looks just like 1440p, only smaller. You can still see the aliasing just fine. If you add good AA and sit back a bit it’s sorta kinda retina. No one with decent vision will have a problem telling the difference between 4K and 8K on a large desktop monitor.
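
            (A toy illustration of that point: supersampling renders at a higher resolution and box-filters down, so each output pixel blends several real samples instead of snapping to one. A minimal sketch with an assumed 2x factor, not any vendor's actual resolve:)

            import numpy as np

            def ssaa_resolve(hires, factor=2):
                """Box-filter a (H*factor, W*factor, 3) render down to (H, W, 3)."""
                h, w, c = hires.shape
                return hires.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

            # Pretend 8x8 render of a hard diagonal edge, resolved down to 4x4.
            hires = np.fromfunction(lambda y, x, c: (x > y).astype(float), (8, 8, 3))
            lowres = ssaa_resolve(hires, factor=2)
            print(lowres[..., 0])  # edge pixels land between 0 and 1 instead of snapping to either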

            • Airmantharp
            • 5 years ago

            I agree with both of you - I can see aliasing clear as day at 1600p; I don’t think 4K (well less than twice the linear resolution) would make it go away.

            • Airmantharp
            • 5 years ago

            What did you do before 3Dfx first implemented it?

            • auxy
            • 5 years ago

            3dfx didn’t “first implement” anti-aliasing! What are you smoking?! (´・ω・`)

            • Airmantharp
            • 5 years ago

            For 3D gaming (they didn’t do much else)? Yeah, they did.

            • auxy
            • 5 years ago

            No they didn’t. Even the venerable Nintendo 64 used anti-aliasing before 3dfx did.

            3dfx didn’t have anti-aliasing in their graphics cards until the Voodoo5. PowerVR, ATI, and Matrox were all doing it before that.

            • l33t-g4m3r
            • 5 years ago

            LOL. Both the voodoo2 and TNT2 had AA.

            • Melvar
            • 5 years ago

            They weren’t even last, if you’re talking about proper AA on Voodoo Graphics. They actually rendered the AA separately as textured outlines outside the AA’d objects.

            I remember this distinctly because I was a teenage boy at the time and the Voodoo Graphics AA made Lara Croft’s butt look bigger. It really bummed me out that my CPU wasn’t fast enough to keep the AA turned on.

            • auxy
            • 5 years ago

            Err, well, Voodoo Graphics didn’t have anti-aliasing…

            That was probably a feature of the game.

            • Melvar
            • 5 years ago

            IIRC it was a feature of Glide that had to be specifically enabled by the developer.

            • auxy
            • 5 years ago

            Ohh. Given the limited horsepower of Voodoo Graphics and the way you described it, it might be Xiaolin Wu-style, like the N64 used.

            • Melvar
            • 5 years ago

            I don’t know what that is, but 3DFX basically rendered a single pixel wide texture of the antialiased edge (probably with the CPU) and then rendered it outside the object that it was trying to antialias. I imagine it was somewhat fiddly for developers at the time; it wasn’t something they could just turn on, they had to make it work.

            • auxy
            • 5 years ago

            That doesn’t make any sense. Do you have any sources that talk about the method in detail? (‘ω’) I’m curious!

            • Melvar
            • 5 years ago

            No, this is from memory from when I was a rabid teenage 3DFX fanboy. I’m sure I picked it up from posts on their newsgroups. Gary Tarolli used to talk about the tech in there quite a bit.

            I could have misremembered this, or I could have misunderstood it in the first place, but that’s how I remember it working.

            I distinctly remember that it was the CPU that seemed to be the performance bottleneck. I had a 133MHz Pentium and it couldn’t maintain a constant framerate in Tomb Raider with AA turned on, but a Pentium Pro supposedly could.

            • Melvar
            • 5 years ago

            Okay, I think this may be it. From [url=http://www.anandtech.com/show/580/18<]an old Anandtech article[/url<]:

            [quote<]Many cards claim support for anti-aliasing by implementing "edge" anti-aliasing or anti-aliasing through "oversampling." Edge anti-aliasing is accomplished by tagging which polygons are an edge and then going back and letting the CPU perform anti-aliasing on these edges after the scene is rendered. In order for a game to support this, it has to be designed with this in mind as the edges have to be tagged. The extra steps cause serious latency issues and sucks up all the CPU power.[/quote<]

            That sounds pretty close to what I was thinking. I may have incorrectly inferred the whole "outside the AA'd object" thing from the fact that it made Lara's legs thicker, without realizing it could be doing the same thing on the inside of the edge and it just wouldn't be visible.

            Edit: and the "render it as a texture" part was probably just because that was the only way to get something into the Voodoo Graphics framebuffer quickly.

            • Airmantharp
            • 5 years ago

            That’d be the Voodoo 4/5 (same chip in different numbers), before Nvidia went and came up with MSAA/Quincunx.

            • auxy
            • 5 years ago

            NVIDIA did not invent MSAA nor “Quincunx”, just the name… (つー`;)

            • Airmantharp
            • 5 years ago

            And the question remains- would you just stop gaming if you couldn’t use AA?

            • auxy
            • 5 years ago

            [quote="myself, in another post,"<]I won't play a game if I can't have a look of which I approve and also a decent framerate. I'll just save it for when I have hardware that CAN handle it. Luckily with a 4790K and a 290X there's not really any game on which I can't have my cake and eat it too. ('ω')[/quote<]

            It's not about being able to use AA, it's about getting visuals with which I am satisfied. With 4K at 24", I'm generally satisfied by SMAA.

            • Prestige Worldwide
            • 5 years ago

            So is a proper frame rate.

            • auxy
            • 5 years ago

            I don’t really disagree! I won’t play a game if I can’t have a look of which I approve and also a decent framerate. I’ll just save it for when I have hardware that CAN handle it.

            Luckily with a 4790K and a 290X there’s not really any game on which I can’t have my cake and eat it too. (‘ω’)

            • Prestige Worldwide
            • 5 years ago

            Ergo 4K is not an appropriate resolution to test this card for this game, as it will not perform at a reasonable level with the settings selected.

            Most gamers still play at 1080/1200p. Some play at 1440/1600p. Barely anybody plays at 4K so these benches don’t inform us of how this GPU will perform in our rigs.

            • auxy
            • 5 years ago

            They do — are you incapable of inference?

            Also, 48fps by itself (especially given the stable frametimes) is quite a reasonable framerate.

            • Prestige Worldwide
            • 5 years ago

            Not for a competitive multiplayer fps.

            You need a high, smooth, and consistent frame rate to ensure that you can play at your highest level without being held back by poor performance.

            48fps is unacceptable for multiplayer fps.

            • auxy
            • 5 years ago

            You play a lot of competitive games on Ultra settings, do you? What a stupid argument.

            • Prestige Worldwide
            • 5 years ago

            Somebody who spent a thousand dollars on a graphics card might want to.

            And tone down the attitude, thanks.

            • HisDivineOrder
            • 5 years ago

            And my point is there is a decent number of people who weigh AA as just another feature to be turned on and off, done in another manner than MSAA or FXAA, or in general tweaked.

            Because that’s what PC gaming does best. Tweak different settings so one user gets more framerate while another gets more AA.

            I honestly wonder if you see the irony that you’re complaining about nobody caring what I do while telling me why I should care about what you do.

        • Prestige Worldwide
        • 5 years ago

        With a Maxwell card, 2xMFAA to get 4xMSAA quality at 2xMSAA performance hit would also be a viable option.

          • auxy
          • 5 years ago

          Not that MFAA, like, works, or is a thing anyone wants. Have you USED it? It looks crap. Not as bad as TXAA, but …

            • Prestige Worldwide
            • 5 years ago

            MFAA is flawless for me on GTX 970 and BF4.

            • Melvar
            • 5 years ago

            What are you saying MFAA does to the image quality? Blur it?

            • auxy
            • 5 years ago

            MFAA is similar to old TXAA — it’s a temporal anti-aliasing method, although it’s simpler than AMD’s old TAA and NVIDIA’s newer TXAA. Basically, what MFAA does is do 2x MSAA while using the AA’d color data from the previous frame. It looks pretty nice — not as good as 4x MSAA IMO, but not bad — as long as we’re looking at a still image, but as soon as you move it basically goes away completely and it looks butt.

            The worst part is that when you’re moving, it’s still causing a performance hit, even though it’s not working!

            It’s really stupid. If you can’t handle proper MSAA or SSAA, just use SMAA. It looks better and it works for more things than MFAA.

            • Prestige Worldwide
            • 5 years ago

            I agree that SMAA is great, especially when your rig can’t handle MSAA. Performance benefits aside, I think SMAA actually looks better than MSAA in Far Cry 4 and SMAA T2x is pretty good in Alien: Isolation.

            But BF4, the game I was posting about when I started this thread, does not support SMAA. Only MSAA (and by extension, MFAA) and FXAA.

            Just curious though, I seem to remember you saying that you have a 290x. Please correct me if I’m wrong. Have you ever used a Maxwell card to use MFAA or are you just saying it looks like butt based on second hand impressions online?

            • auxy
            • 5 years ago

            Of course. I’ve sold two machines with GTX 970s, and my friend has GTX 980 SLI.

            My wife’s machine has a 750 Ti, but it doesn’t support MFAA of course. (・へ・)

            • Prestige Worldwide
            • 5 years ago

            Cool beans.

            • Melvar
            • 5 years ago

            MFAA makes 2x or 4x AA look better (while holding still) at very little performance cost, and when you’re moving they are still real 2x or 4x. It makes things look better sometimes for almost free.

            The question is does it ever make things look worse? If not, there’s nothing (that I know of) to not like about MFAA.

      • Airmantharp
      • 5 years ago

      It’d be pretty, but like geek above, I’d drop settings until I hit a useful median framerate.

        • JustAnEngineer
        • 5 years ago

        4X AA is my preferred quality setting. However, for games that tax my graphics card at 2560×1600, I will drop down to 2X AA. This is still much better than none. On my old NVidia cards, their 2X sampling pattern didn’t work as well as AMD’s, but the 4X appearance was fine. I haven’t tried it with a current-generation NVidia GPU.

      • jihadjoe
      • 5 years ago

      Winning > Looking Good

        • Chrispy_
        • 5 years ago

        I look good because I’m winning, thanks.

          • auxy
          • 5 years ago

          [url=https://www.youtube.com/watch?v=9QS0q3mGPGg]WINNING![/url]

    • geekl33tgamer
    • 5 years ago

    Your move, AMD…

      • albundy
      • 5 years ago

      R9 390×2 should do it!

        • geekl33tgamer
        • 5 years ago

        Titan X has impressive performance. I hope the 390X keeps it on its silicon toes frame for frame, but comes in under $1k.

          • albundy
          • 5 years ago

          I hope so too. If that competition dies, then we are doomed!

            • geekl33tgamer
            • 5 years ago

            No, you just sell body parts to buy GPUs. #geeklogic

        • sschaem
        • 5 years ago

        From the leaks, the Titan X is a little slower than a single 390X.

        Those are rumors, but I haven’t seen one that claims the 390X is slower than an R9 290X…

          • geekl33tgamer
          • 5 years ago

          AMD may be cash-strapped, but releasing a 390X that’s slower than the 290X would be going some, even by their standards.

          We’ll see a decent performance jump over the 290X. Rewind 18 months: when the 290X launched, it was competing with the first Titan, and it was faster overall at half the price.

          Funny how quickly people forget just how great the 290X’s performance was when it launched, and just focus on that card’s heat output pretty much all the time.

            • Chrispy_
            • 5 years ago

            The 290X isn’t even that hot.

            AMD were clearly pushing Hawaii hard at launch, probably harder than was good for publicity, but all the recent samples I’ve come across can stay above 1000MHz in Furmark even with PowerTune dropped to -20%.

            That’s a TDP reduction from 290W to 230W, which puts it favourably in the same ballpark as other big-die players of its age like the original Titan and 780 Ti.

            What’s more, if you drop PowerTune to -30% you’re getting an average clock of something like 975MHz while the card draws less than 200W, and at -50% (145W) it’s still running at around 850MHz. IMO the design isn’t good for high clock speeds; it’s actually very efficient at low clock speeds. AMD just needed more performance for the launch reviews, so they overvolted and overclocked the snot out of it. The lack of clock and voltage headroom on Hawaii cards kind of backs this up.

            A 145W card with a completely silent cooler giving 85% the performance of a launch 290X would have had very different reviews, even if it didn’t manage to steal the performance crown.
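
            For what it’s worth, those wattage figures line up with simply scaling the 290X’s roughly 290W power limit by the PowerTune offset. Here’s a quick back-of-the-envelope check in Python, assuming the offset maps linearly to watts (my assumption, not anything AMD documents):

            # Rough sanity check of the PowerTune numbers above, assuming the offset
            # scales the 290X's ~290 W power limit linearly (an assumption, not AMD's spec).
            BOARD_POWER_W = 290

            for offset_pct in (-20, -30, -50):
                limit_w = BOARD_POWER_W * (1 + offset_pct / 100)
                print(f"PowerTune {offset_pct:>3}%: ~{limit_w:.0f} W limit")

            # Prints roughly 232 W, 203 W, and 145 W, matching the ~230 W, "less than
            # 200 W," and 145 W figures quoted above.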

            • anotherengineer
            • 5 years ago

            “A 145W card with a completely silent cooler giving 85% the performance of a launch 290X would have had very different reviews, even if it didn’t manage to steal the performance crown.”

            If they had used GDDR5 at 7GHz like Nvidia, it might have gotten even closer than 85%.

            • sschaem
            • 5 years ago

            I see the same thing. The 290X doesn’t really output any more heat than a 780 Ti or the brand-new Titan X… Yet when the 780 Ti was out it was a non-issue, but as soon as the 980 Ti came out it was a first-world problem… and now, with the Titan X, this level of power/heat is again a non-issue.

            • Melvar
            • 5 years ago

            If there hadn’t been the issue (real or perceived) of 290Xs with stock coolers throttling loudly, people probably wouldn’t have made nearly as big a deal about it. That gave the green team a lot of ammunition.

            First impressions are a bitch. I think AMD lost a huge chunk of potential sales by not waiting for their OEMs to have the good coolers ready, much more than they gained by launching a month or so early.

            • Chrispy_
            • 5 years ago

            Yeah, AMD really needs to work on their reference coolers.

      • Prestige Worldwide
      • 5 years ago

      The 390X is coming in June, if you give the latest rumours any weight. We could be waiting a while.

        • geekl33tgamer
        • 5 years ago

        Won’t lie, the rumoured specs look good, if they’re true. The R9 290 series turns 2 in October, so they’d better get a move on.

        That’s practically antique in the technology world!

      • USAFTW
      • 5 years ago

      Recent leaks point to the 390X being a strong contender, and hopefully it’ll come with water cooling as standard or a good air cooler. The 285 reference cooler is supposed to have a better fan and improved acoustics (per the Roy Taylor interview on Ep. 229 of the MaximumPC podcast), and if they build on that with a nice shroud, I’ll be a happy man.
