Nvidia’s GeForce GTX 780 graphics card reviewed

When Nvidia unleashed the GeForce GTX Titan a few months ago, that card’s combination of world-beating performance and an eye-popping $1K price tag pretty much immediately ignited speculation about what would come next. Surely a somewhat trimmed-down version of this same GPU would be used to power a less expensive product before long, right?

Well, yep. We’re gathered here today to say hello to the GeForce GTX 780, which is a GeForce Titan that’s gotten an incredibly close haircut. The result is a card that closely resembles the Titan, only with a few hundred bucks off the top. The 780’s performance is a little lower, but only by a couple of Xbox 360s’ worth of shader power—in other words, not enough that anybody is likely to notice, given the sheer scale of the remaining graphics processing power.

The GTX 780: Not quite Titanic

Like the Titan, the GeForce GTX 780 is based on the GK110 graphics chip, the big daddy of Nvidia’s Kepler lineup. To understand the 780’s relationship to the Titan, let’s pull up a functional block diagram of the GK110 GPU. This diagram is grossly oversimplified, yet we’ve had to shrink it down to nearly unreadable size in order to fit it on the page. These things happen when you’re dealing with, you know, the most complex consumer semiconductor product in history. With 7.1 billion transistors, the GK110 is beefier than a Five Layer Burrito.

The squint-inducing diagram above shows the GK110’s five graphics processing clusters, or GPCs. Each GPC is virtually a GPU unto itself, with its own rasterizer engine and three separate shader multiprocessing engines, or SMX units. Each SMX then has 192 shader ALUs—often called “shader cores” by marketing types who may or may not know better—and 16 texture management units. Scale all of these resources up across five GPCs, and you have a massive pool of graphics processing resources.

Trouble is, you also have a really huge chip where a single flaw or weakness could scuttle the whole thing. To manage that problem, chipmakers will disable portions of a chip that aren’t quite perfect. Aboard the Titan, the GK110 has one of its SMX units disabled. In the GTX 780, three of the SMX units have been shut down.
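Put numbers on those unit counts and the resource budgets fall out directly. Here’s a quick back-of-the-envelope tally in Python of what the per-SMX figures above imply for a pristine GK110 and for the two shipping configurations:

```python
# Back-of-the-envelope tally of GK110's execution resources, from the per-unit
# figures above: 5 GPCs x 3 SMX units, 192 ALUs and 16 texture units per SMX.
GPCS, SMX_PER_GPC = 5, 3
ALUS_PER_SMX, TMUS_PER_SMX = 192, 16

full_smx = GPCS * SMX_PER_GPC                        # 15 SMX on a pristine GK110
print("Full GK110:", full_smx * ALUS_PER_SMX, "ALUs,",
      full_smx * TMUS_PER_SMX, "texture units")      # 2880 ALUs, 240 TMUs

# The shipping cards fuse off SMX units: one on the Titan, three on the GTX 780.
for name, disabled in [("GTX Titan", 1), ("GTX 780", 3)]:
    active = full_smx - disabled
    print(f"{name}: {active * ALUS_PER_SMX} ALUs, "
          f"{active * TMUS_PER_SMX} texture units")
    # -> Titan: 2688 ALUs / 224 TMUs; GTX 780: 2304 ALUs / 192 TMUs
```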

Interestingly enough, that change means different things in different cases. Some GTX 780 cards will have all three SMX units in a single GPC disabled, so the entire GPC goes dark. In that case, the card will have four raster engines, so its peak rasterization rate will be four triangles per clock. Other 780 cards may have their disabled SMX units spread around, so all five GPCs and raster engines remain active. Which configuration you get is presumably the luck of the draw. I’d get spun up about the potential disparity, but I don’t think the 780’s rasterization rates are likely to limit its gaming performance any time soon.

Speaking of things that don’t matter much, Nvidia has decided to scale back the GTX 780’s capacity for double-precision floating-point math. Double-precision support is built into the GK110 GPU because of the chip’s compute-focused role aboard Nvidia’s Tesla products. Real-time graphics basically don’t require that level of precision. The Titan offers the GK110’s full DP performance, so it can be used for scientific computing and other non-graphics compute applications. On the GTX 780, DP math executes at 1/24th the rate of single-precision math, just enough to maintain compatibility without truly being useful.
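For a sense of scale, the arithmetic below works out what those rate limits imply. The single-precision figures come from the peak-rates table later in this article; the 1/3-rate double-precision figure is GK110’s full-speed DP design, which is what the Titan exposes.

```python
# What the double-precision rate limits imply, roughly. Single-precision
# figures are from the peak-rates table later in this article; GK110's full
# DP rate (as exposed on the Titan) is 1/3 of single precision.
sp_780, sp_titan = 4.2, 4.7    # tflops, single precision
dp_780 = sp_780 / 24           # ~0.18 tflops: enough for compatibility, little more
dp_titan = sp_titan / 3        # ~1.6 tflops: why the Titan doubles as a compute card
print(f"GTX 780 DP: {dp_780:.2f} tflops   Titan DP: {dp_titan:.2f} tflops")
```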

|                   | GPU base clock (MHz) | GPU boost clock (MHz) | Shader ALUs | Textures filtered/clock | ROP pixels/clock | Memory transfer rate | Memory interface width (bits) | Peak power draw |
|-------------------|----------------------|-----------------------|-------------|-------------------------|------------------|----------------------|-------------------------------|-----------------|
| GeForce GTX 580   | 772                  | n/a                   | 512         | 64                      | 48               | 4 GT/s               | 384                           | 244W            |
| GeForce GTX 680   | 1006                 | 1058                  | 1536        | 128                     | 32               | 6 GT/s               | 256                           | 195W            |
| GeForce GTX 780   | 863                  | 900                   | 2304        | 192                     | 48               | 6 GT/s               | 384                           | 250W            |
| GeForce GTX Titan | 836                  | 876                   | 2688        | 224                     | 48               | 6 GT/s               | 384                           | 250W            |
| GeForce GTX 690   | 915                  | 1019                  | 3072        | 256                     | 64               | 6 GT/s               | 2 x 256                       | 300W            |

Outside of the GPCs, the GK110 chip on the GTX 780 isn’t hobbled at all. All six of its memory controllers and ROP partitions are active, as is its full 1536KB of L2 cache. The GTX 780 has a 384-bit aggregate path to memory and 48 pixels per clock of ROP throughput, just like the Titan. Even the 6Gbps memory transfer rate is the same, although the GTX 780 has 3GB of GDDR5 memory, not the outsized 6GB memory capacity of the Titan.

In fact, the 780’s clock frequencies are a little more aggressive than the Titan’s, with an 863MHz base and a 900MHz Boost clock. (Nvidia says the Boost clock should be the typical operating speed while gaming.) By contrast, the Titan’s base and boost clocks are 836 and 876MHz, respectively.

|                    | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (Gtexels/s) | Peak bilinear fp16 filtering (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
|--------------------|----------------------------------|-------------------------------------|------------------------------------------|--------------------------------------|-----------------------------------|-------------------------|
| GeForce GTX 580    | 37                               | 49                                  | 49                                       | 1.6                                  | 3.1                               | 192                     |
| GeForce GTX 680    | 34                               | 135                                 | 135                                      | 3.3                                  | 4.2                               | 192                     |
| GeForce GTX 780    | 43                               | 173                                 | 173                                      | 4.2                                  | 4.5                               | 288                     |
| GeForce GTX Titan  | 42                               | 196                                 | 196                                      | 4.7                                  | 4.4                               | 288                     |
| GeForce GTX 690    | 65                               | 261                                 | 261                                      | 6.5                                  | 8.2                               | 385                     |
| Radeon HD 7970 GHz | 34                               | 134                                 | 67                                       | 4.3                                  | 2.1                               | 288                     |
| Radeon HD 7990     | 64                               | 256                                 | 128                                      | 8.2                                  | 4.0                               | 576                     |

The result of all of this fine-grained tuning is evident in the table above. The GTX 780 has lower peak texture filtering and shader arithmetic rates than the Titan, but its ROP and rasterization rates are potentially higher than the Titan’s by just a smidgen. The two cards’ memory bandwidth specs are equivalent.

The 780’s status as a not-quite-Titan puts it a notch or two above the GeForce GTX 680 in almost every key rate. This new card has AMD’s fastest single-GPU card, the Radeon HD 7970, outgunned in every category except two: memory bandwidth and shader arithmetic, where the two are neck and neck. If you step back another generation and compare to the GeForce GTX 580, the contrasts are much starker. The GTX 780 has about three and a half times the texture filtering capacity of the GTX 580 and offers smaller-but-still-noteworthy gains in every other category.
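If you want to check the table’s math yourself, each entry is simply a unit count multiplied by the boost clock, and memory bandwidth is the transfer rate times the bus width. A minimal sketch, using the unit counts and clocks from the spec table above:

```python
# Each entry in the peak-rates table is a unit count times the boost clock;
# memory bandwidth is transfer rate times bus width. Values land within
# rounding of the table above.
def peak_rates(alus, tmus, rops, boost_mhz, mem_gtps, bus_bits):
    ghz = boost_mhz / 1000.0
    return {
        "fill (Gpixels/s)":      rops * ghz,               # ROP pixels/clock x clock
        "filtering (Gtexels/s)": tmus * ghz,               # texels/clock x clock
        "shader (tflops)":       alus * 2 * ghz / 1000.0,  # 2 flops (FMA) per ALU per clock
        "bandwidth (GB/s)":      mem_gtps * bus_bits / 8,  # GT/s x bits / 8 bits per byte
    }

print("GTX 780:  ", peak_rates(2304, 192, 48, 900, 6, 384))
# -> 43.2 Gpixels/s, 172.8 Gtexels/s, ~4.15 tflops, 288 GB/s
print("GTX Titan:", peak_rates(2688, 224, 48, 876, 6, 384))
# -> 42.0 Gpixels/s, 196.2 Gtexels/s, ~4.71 tflops, 288 GB/s
```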

The card

The not-quite-Titan theme carries over into the physical appearance of the GTX 780. The two cards are practically identical, save for the extra 3GB worth of memory chips on the back of the Titan and the different names etched into their aluminum-and-magnesium cooling shrouds. That’s a good thing, since we are big, er, fans of the Titan’s cooler. Not only does it perform well, but the premium materials also lend it a touch of class that the usual shiny plastic shrouds can’t match.

Like the Titan, the GTX 780 is 10.5″ long—same length as a Radeon HD 7970—and requires 6-pin and 8-pin aux power inputs. The output complement is the same, as well, with two dual-link DVI ports, an HDMI output, and a full-sized DisplayPort 1.2 connector.

Source: Nvidia

One new wrinkle Nvidia has added to the GTX 780 is a revised fan speed control algorithm. This new routine attempts to limit the amount of fluctuation in fan speeds over time. That should reduce the number of pitch changes coming from the card’s blower, making the noise it produces less noticeable.
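Nvidia hasn’t published the routine, but the basic idea—smooth the fan’s response so it doesn’t chase every temperature wobble—can be illustrated with a simple slew-limited controller. The sketch below is purely illustrative and is not Nvidia’s actual algorithm.

```python
# Purely illustrative, NOT Nvidia's actual routine: one simple way to damp fan
# fluctuation is to slew-limit the output, so the blower ramps gradually
# instead of chasing every temperature wobble (fewer audible pitch changes).
def next_fan_speed(target_pct, current_pct, max_step_pct=2.0):
    """Move toward the thermally derived target by at most max_step_pct per tick."""
    delta = target_pct - current_pct
    step = max(-max_step_pct, min(max_step_pct, delta))
    return current_pct + step

speed = 40.0
for target in (40, 55, 55, 48, 60, 60):   # hypothetical per-tick fan targets
    speed = next_fan_speed(target, speed)
    print(f"target {target}% -> fan {speed:.0f}%")
```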

Now for the scandalous bit. The GeForce GTX 780 should be available at online retailers starting today for $649.99. That’s 350 bones less than the Titan, for a card that’s just a slightly de-tuned variant with 3GB of memory. If you just recently paid a grand for a Titan, well, my condolences. From here out, I suspect the Titan’s appeal will be very much limited by the appearance of the GTX 780.

Several software updates

Nvidia has just recently rolled out a new release of its GeForce graphics drivers, R320, ahead of the GTX 780’s introduction. Those drivers include the usual sort of performance increases one might expect, including optimizations for Tomb Raider and Metro: Last Light. Along with those updates, the R320 drivers have some deep voodoo intended to reduce the sort of performance problems we now track with regularity: frame time variations. These drivers should help even out frame delivery on GeForce cards, although Nvidia won’t say exactly why or how they achieve that goal. Hmmm.

An even bigger change is happening today. After being downloaded over 2.5 million times during testing, Nvidia’s GeForce Experience software is leaving the beta stage and going official. Immediately, it becomes Nvidia’s recommended software option for GeForce owners. For the uninitiated, GeForce Experience does a couple of things for gamers automatically. The program helps manage the downloading and installation of graphics driver updates, replacing old-school manual driver downloads with an automated tool.

Also, GeForce Experience will scan your system for installed games, read in their current image quality settings, and recommend optimal settings for your GPU. You can see an example above from our test system where it’s recommending a switch to TXAA anti-aliasing. If you click the “Optimize” button, the GFE software will write its optimized settings to disk, so the game will start up with those options the next time it runs. Nvidia has taken the time to profile a host of games on its GPUs in order to make this sort of automation possible. For the average gamer who’s probably befuddled by the choice between SSAO, HBAO, and HDAO, this sort of thing could be incredibly helpful. Even for folks who consider themselves experts, I’d consider these profiles a worthwhile resource. Folks are, of course, free to deviate from Nvidia’s recommended settings if they so choose.
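As a rough illustration of the flow described above—match the detected game and GPU against a pre-built profile, then write the settings back to disk—here’s a toy sketch. The profile contents, keys, and file format are invented for illustration; the real profiles come from Nvidia’s own per-game testing.

```python
# Toy illustration of the GeForce Experience flow described above: look up a
# pre-profiled settings bundle for (game, GPU) and write it to the game's
# config file. The profile data and file format here are invented.
import json

PROFILES = {
    ("Tomb Raider", "GeForce GTX 780"): {
        "resolution": "1920x1080",
        "antialiasing": "TXAA",
        "ambient_occlusion": "HBAO",
    },
}

def optimize(game, gpu, config_path):
    settings = PROFILES.get((game, gpu))
    if settings is None:
        return False                       # no profile for this combination
    with open(config_path, "w") as f:      # persist so the game launches with it
        json.dump(settings, f, indent=2)
    return True

optimize("Tomb Raider", "GeForce GTX 780", "tombraider_settings.json")
```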

A couple of other nifty features are on the horizon for GFE in the near future. First, when Nvidia’s Shield handheld gaming system arrives next month, GFE will act as the server software for remote gaming sessions. The idea here is that, in a sort of “local cloud” config, the Shield handheld can control a game running on your home PC. The visuals will be streamed to Shield over the network in real time, after being compressed via the GPU’s H.264 video encoding hardware. This I want.

The other addition will probably prove to be even more popular. Nvidia’s calling it ShadowPlay, and the concept is simple. Folks who record their gaming sessions for others to watch will know that recording via Fraps or streaming a session via other tools can involve lots of performance overhead. ShadowPlay will use the H.264 video encoding block built into any Kepler GPU to enable in-game recording with minimal performance impact. In fact, the overhead is low enough that Nvidia touts ShadowPlay’s potential as an “always on” recording feature. Imagine being able to allocate disk space so that the last 20 minutes of your latest gaming session are always available. That’s the idea here.
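To make the “always on” idea concrete, here’s a minimal sketch of a rolling segment buffer, assuming the encoder hands back fixed-length video segments. The segment length and this interface are assumptions; Nvidia hasn’t detailed how ShadowPlay manages its buffer.

```python
# Minimal sketch of the "always on" idea: keep hardware-encoded video segments
# in a rolling window so the last N minutes are always on disk. Segment length
# and this interface are assumptions; ShadowPlay's internals aren't public.
import collections
import os

class RollingRecorder:
    def __init__(self, keep_minutes=20, segment_seconds=60):
        self.max_segments = keep_minutes * 60 // segment_seconds
        self.segments = collections.deque()

    def add_segment(self, path):
        """Called whenever the GPU's H.264 encoder finishes another segment file."""
        self.segments.append(path)
        while len(self.segments) > self.max_segments:
            os.remove(self.segments.popleft())     # discard the oldest footage

    def buffered(self):
        """Paths covering roughly the last keep_minutes of gameplay."""
        return list(self.segments)
```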

I’d like to give you more details about exactly how ShadowPlay will work, but I haven’t been able to try it yet. Nvidia tells us ShadowPlay is “coming this summer,” so it’s still in development right now. I think there’s some possibility that game streaming services might be supported eventually, in addition to local recording. If that’s something you’d like to have, you might want to post something in the comments about it.

That’s it for the GeForce Experience, but Nvidia has made one other software change worth mentioning. Both the GTX Titan and the GTX 780 have version 2.0 of Nvidia’s Boost dynamic clocking routine. Boost allows Nvidia’s Kepler-based graphics cards to operate at higher clock speeds by monitoring a range of inputs including GPU utilization, power draw, and temperature. Version 2.0 of Boost debuted with the GTX Titan, and it introduced a new algorithm that keys on GPU temperatures, rather than power draw, when making its decisions about what clock speeds to choose. This algorithm opened up more frequency headroom for Titan.
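As a rough illustration of temperature-targeted boosting, the toy controller below nudges the clock up while the GPU is under its 80°C target and backs off above it. The step size and ceiling here are placeholders; the real Boost 2.0 algorithm weighs many more inputs and isn’t public.

```python
# Illustrative only: Boost 2.0 is described as steering clocks toward a GPU
# temperature target (80 deg C on these cards) rather than a power target.
# A trivial step controller captures the flavor; step size and ceiling are
# placeholders, and the real algorithm considers many more inputs.
def boost_step(temp_c, clock_mhz, base_mhz=863, max_mhz=1000, target_c=80, step_mhz=13):
    if temp_c < target_c and clock_mhz + step_mhz <= max_mhz:
        return clock_mhz + step_mhz        # thermal headroom: step up a bin
    if temp_c > target_c and clock_mhz - step_mhz >= base_mhz:
        return clock_mhz - step_mhz        # over the target: step back down
    return clock_mhz

clock = 863
for temp in (62, 70, 75, 79, 81, 80):      # hypothetical temperature samples
    clock = boost_step(temp, clock)
    print(f"{temp} C -> {clock} MHz")
```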

Nvidia initially exposed a host of tweaking options for end users, accessible via tools like EVGA’s Precision, so folks could overclock their Titans with Boost 2.0 active. Trouble is, the Boost 2.0 algorithm is very complex, with lots of inputs determining whether it’s safe for GPU clock speeds to rise. Users could have the off-putting experience of cranking up the GPU clock speed slider, expecting to get MOAR POWER, only to see little or nothing happening because of some other limitation.


The “reasons” feature identifies the current frequency constraint

To rectify this situation, Nvidia has now exposed some additional variables so users can understand which factor is currently constraining GPU clock speeds. The screenshot above from EVGA Precision shows the relevant limits on our GTX 780 card while running a simple graphics workload. If one of these numbers flips from 0 to 1, that factor has become the limiting one for GPU frequencies. As you can see, in our example, the GPU voltage limit is currently keeping clock speeds in check. If we want higher clocks, we’ll need to overvolt our GPU a bit. This addition should take some of the mystery out of attempting to overclock a Boost 2.0-enabled GPU like the GTX 780 or Titan.
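Conceptually, the “reasons” readout is just a set of 0/1 flags, one per potential limiter. A tiny helper like the one below turns that into something readable; the flag names are illustrative stand-ins, not Nvidia’s exact labels.

```python
# The "reasons" readout as a set of 0/1 flags, one per potential limiter.
# Flag names here are illustrative stand-ins, not Nvidia's exact labels.
def active_limits(flags):
    hits = [name for name, hit in flags.items() if hit]
    return hits or ["none"]

sample = {"voltage_limit": 1, "power_limit": 0, "thermal_limit": 0, "utilization_limit": 0}
print("Clocks currently held back by:", ", ".join(active_limits(sample)))
# -> voltage_limit, matching the situation in the screenshot above
```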

Test notes

The results you’ll see on the following pages are not like those from old-school video card reviews, as our regular readers will know. Instead, we analyze the time needed to render each and every frame of animation. For an intro to this approach, you should probably start with this article. You’ll also notice that we’re capturing frame times with two different tools, Fraps and FCAT. These tools sample at different points in the frame production pipeline—Fraps near the beginning and FCAT at the end point, the display. The results from both are potentially important, especially when they disagree. For an explanation of these tools, see here.
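For reference, here’s roughly how the metrics on the following pages are computed from a list of per-frame times, whichever tool they come from. The percentile-index convention and the 50-ms “badness” threshold follow the article’s usage; treat this as a sketch rather than our exact analysis scripts.

```python
# Roughly how the metrics on the following pages are computed from a list of
# per-frame times in milliseconds (from either Fraps or FCAT).
def frame_metrics(frame_times_ms, badness_threshold_ms=50.0):
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)              # traditional FPS average
    ordered = sorted(frame_times_ms)
    p99 = ordered[min(n - 1, int(0.99 * n))]                # 99th-percentile frame time
    badness = sum(t - badness_threshold_ms                  # time spent beyond the threshold
                  for t in frame_times_ms if t > badness_threshold_ms)
    return avg_fps, p99, badness

times = [16.7, 17.1, 16.9, 33.5, 16.8, 55.0, 17.0]          # toy sample, not real data
fps, p99, bad = frame_metrics(times)
print(f"{fps:.1f} FPS average, 99th percentile {p99:.1f} ms, {bad:.1f} ms beyond 50 ms")
```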

Fortunately, our task for today is relatively straightforward, as these things go. After diving deep into the issue of multi-GPU microstuttering in my Radeon HD 7990 review, I’ve elected to concentrate today on single-GPU solutions. I think that’s the right call for now, since dual-GPU Radeon configs aren’t likely to pose much of a challenge to the GTX 780—not unless and until AMD releases a public driver with the frame-pacing capability we explored in the 7990 review.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Component       | Details                                                                |
|-----------------|------------------------------------------------------------------------|
| Processor       | Core i7-3820                                                           |
| Motherboard     | Gigabyte X79-UD3                                                       |
| Chipset         | Intel X79 Express                                                      |
| Memory size     | 16GB (4 DIMMs)                                                         |
| Memory type     | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz              |
| Memory timings  | 9-9-9-24 1T                                                            |
| Chipset drivers | INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.5.1.1009  |
| Audio           | Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers                  |
| Hard drive      | OCZ Deneva 2 240GB SATA                                                |
| Power supply    | Corsair AX850                                                          |
| OS              | Windows 7 Service Pack 1                                               |
|                    | Driver revision      | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|--------------------|----------------------|---------------------------|-----------------------|--------------------|------------------|
| GeForce GTX 680    | GeForce 320.18 beta  | 1006                      | 1059                  | 1502               | 2048             |
| GeForce GTX 780    | GeForce 320.18 beta  | 863                       | 902                   | 1502               | 3072             |
| GeForce GTX Titan  | GeForce 320.14 beta  | 837                       | 876                   | 1502               | 6144             |
| Radeon HD 7970 GHz | Catalyst 13.5 beta 2 | 1000                      | 1050                  | 1500               | 3072             |

Thanks to Intel, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Crysis 3

You can click through the buttons below to see the frame time distributions from the different cards captured with both Fraps and FCAT. Although the results from these tools tend to correlate pretty closely most of the time, you will see some variance because, in most games, we can’t use both tools simultaneously. So what you’re seeing in the plots is the result from a single test run from each tool. Naturally, those test sessions will vary a bit, since we’re playing the game manually. The other graphs below are based on three full test runs from each tool, with the median result shown.


I tried to pick some nicely playable settings for this round of Crysis 3 gameplay testing. The frame times are generally quite low for each of the cards, although there are occasional spikes in both the Fraps and FCAT plots. You’ll feel these slowdowns while playing. They happen as the test run is first starting and then a little later when I’m shootin’ dudes with exploding arrows. As you can see, the spikes tend to be a bit larger on the GeForces than on the Radeon HD 7970.

Interesting. The FPS average sorts the four cards pretty straightforwardly, with the GTX 780 just trailing the Titan while the 7970 and GTX 680 are deadlocked. The latency-focused 99th percentile frame time tells a similar story, but the FCAT frame times are consistently lower than in Fraps. That’s because some of the variance in frame dispatch, near where Fraps measures, is smoothed out by buffering by the time the frames reach the display, where FCAT measures. Even with the difference between Fraps and FCAT, they still agree on the basic sorting of cards.


The 99th percentile frame time is just one point on the overall latency curve. I’ve shown a larger section of that curve above, focusing on the last 10% of frames rendered. You can see where the 99th percentile point is and how very close the different cards tend to be overall.


This last metric measures “badness”—that is, time spent working on really high-latency frames. Here, the Radeon HD 7970’s smaller spikes at those trouble points in the test run give it the edge. AMD’s drivers and GPU combine to produce a smoother gaming experience in this case.

Far Cry 3: Blood Dragon


Wow. Many of our test sessions involve quite a bit of variance through the course of the run. Not so with our Blood Dragon scenario, where I’m sniping dudes from behind cover. Frame times are remarkably stable overall.



Not too many surprises here. The 99th percentile results are all under 33 ms, which means except for the last 1% of frames, all of the GPUs are slinging frames at better than 30 FPS. Even the longest frame times aren’t too large: no frame takes more than 50 milliseconds to render, so there’s very little “badness” going on.

Even so, there’s a clear hierarchy of performance, and the GTX 780 nestles in just below the Titan, ahead of the 7970 and GTX 680.

Tomb Raider




These results are boring, but in a good way. The FPS average and 99th percentile frame times tend to agree, which means we aren’t looking at any major problems with high frame latencies. A look at the distributions from each card will confirm that assessment. What’s left is a very straightforward outcome: the GTX 780 is somewhat slower than the Titan but faster than the Radeon HD 7970. See a pattern developing yet?

Guild Wars 2


This is the one game in our test suite where the Fraps and FCAT results come from the same exact test run. You can see how closely the results correlate and how Fraps data tends to be a little more variable. All of the cards show some small latency spikes as we move through the landscape during our test session.



Wow, so this outcome is more complicated than I thought. What’s happening here is evident in the plots above, if you look closely. Although there are some frame time spikes for all of the cards, only the Titan and GTX 780 show substantial spikes in their FCAT results as well as in Fraps. The GTX 680 and 7970 have very smooth lines in FCAT; the hiccups in Fraps are buffered out. As you sort through the various results, you’ll see the different cards change position depending on whether we’re looking at Fraps or FCAT data and what’s being emphasized.

We could get off into the weeds discussing which latency spikes matter more, but we need to be careful. We have powerful tools now that can measure performance very precisely, but we don’t want to overemphasize minor differences. In this case, all of the cards perform quite well, and the differences between them are imperceptible, as a look at the videos we captured with FCAT confirms.

Sleeping Dogs




The Radeon HD 7970 pulls off the upset here in our latency-focused performance metrics, followed closely by the Titan and the GTX 780.

Power consumption

I’ve pulled these numbers for most of these cards from my GTX Titan review. They should suffice for a quick comparison.

Although the GTX 780 has the same peak power rating as the Titan, our review card tends to draw a little less power than our Titan, overall. That’s not a bad place to be, in the grand scheme of things.

Noise levels and GPU temperatures

That Titan-style cooler is even quieter on the GTX 780, in part because of the 780’s lower power draw. As a result, the 780 is the least noisy card of the bunch under load. (They’re all whisper quiet at idle.) That cooler keeps the GK110 chip right at 80°C, which is the temperature target for Boost 2.0.

Conclusions

The results of our performance tests aren’t terribly complicated. We can summarize them using one of our famous price-performance scatter plots, like so:


You can flip between average FPS and our latency-focused 99th percentile frame time metric with the buttons above. I think they’re both helpful here. The 99th percentile plot illustrates how little difference there is between these four cards when it comes to real-world performance—in part because other limitations, like CPU speed and other system bottlenecks, contribute to slowdowns. The FPS average, meanwhile, offers a sense of each card’s potential. Compare the two plots, and you’ll see that the GeForces and the Radeons appear to convert their potential into consistent frame delivery in roughly equal measure. I’ll admit we could have teased out even larger differences between the cards by testing at unusually punishing image quality settings, but I wanted to focus on typical usage scenarios. That meant using settings that produce consistently playable frame rates.

Ultimately, the value picture for the GTX 780 isn’t tough to sort. The 780 is just a whisker slower than the GeForce Titan, which is the fastest single-GPU graphics card on the planet—and the GTX 780 costs 650 bucks, while the Titan will set you back a grand. The two cards are almost identical physically, and that fantastic all-metal cooler is even more effective aboard the GTX 780. For the vast majority of folks, the GTX 780 will be the obvious choice between the two. It’s our new favorite implement for the mass production of eye candy. Heck, if you want extreme performance, match up a pair of GTX 780s in SLI for $1300 instead of buying a single Titan for $1k. This isn’t exactly advanced math.

That said, the GTX 780 is still a $649 graphics card; it ain’t gonna be the value leader. For now, the value title among the cards we’ve tested looks to be owned by the Radeon HD 7970 GHz Edition. The 7970 GHz can’t quite keep pace with the GTX 780, but it cleanly defeats the GeForce GTX 680 while starting at about 10 bucks less online. AMD appears to have made considerable progress with its graphics drivers since late last year, when we found that Radeons had frame latency issues in a number of games. And I haven’t even compared the game bundles yet.

Then again, the GTX 780 is just the first member of the GeForce 700 series. Surely there are more changes in store soon, right?

Comments closed
    • k1zit
    • 6 years ago

    To every user who keeps boasting about how the Radeon 7970 is a much better value card…THE GTX 780 IS NOT MARKETED OR INTENDED FOR VALUE BASED CUSTOMERS LOOKING FOR THE BEST PERFORMANCE PER DOLLAR. I’m sure the 7970 is a much better “value” card because I’m sure you still get great performance for a great price. Well, this card destroys the 7970 in every benchmark and yes it’s priced significantly higher, but just because it is and you can’t afford it, why do you have to bash it? Does it make you feel better about your inferior card?

    If you have to sit there and dissect a top tier single gpu video card every which way about how many free games you’re getting and its price point in order to justify its cost, then you can’t afford it, or at least shouldn’t be buying it. This is the second fastest single GPU card on the market, has destroyed the 7970 in every benchmark by a large margin, and is priced as such. This is intended for the higher end enthusiast that wants the best and is willing to pay for the best and has the money to do so without having to sit there and justify costs and worry about his bank account. He/she doesn’t care about what free games, or bundles, or free products come with the card, they simply want the best and have the means to do so without having to think twice about it.

      • D@ Br@b($)!
      • 6 years ago

      Dmn, you don’t have to shout in my ear.
      Anyway, someone who wants the best buys the Titan, actually she or he would buy four of them 🙂

    • ub3r
    • 6 years ago

    My Leadtek Winfast Nvidia FX5800x2 ultra pro edition is still good enough for my needs.

    • cynan
    • 6 years ago

    If AMD maintains their progress in the latency department, then this is a bad sign of what the real HD 8970 will cost when it comes out in ~8 months (and future high end single GPU video cards from both companies).

    AMD had a 25-30% gain in performance over the GTX 580 upon release, charged ~$50 more than the GTX 580 and got crapped all over for it. Nvidia has a similar performance lead (and that’s being generous given the results in this review) over the GTX 680/HD 7970, charges $150-$250 more. Not a good trend indeed.

    • ub3r
    • 6 years ago

    Cant mine bitcoin…. More lost sales for Nvidia.
    Check out this thread: [url<]https://bitcointalk.org/index.php?topic=7216.2300[/url<] ..and admire the hardware.

      • tipoo
      • 6 years ago

      That’s nothing new, Nvidias architectures are simply doodoo for what Bitcoin mining takes.

      • Dashak
      • 6 years ago

      Bitcoin operations have long moved on to ASICs.

      • chuckula
      • 6 years ago

      If I ran Nvidia marketing, I would loudly and proudly brag that my hardware is intended for real work and [b<]not[/b<] for bitcoin mining.

      • indeego
      • 6 years ago

      Curious what bitcoiners will do when the currency is broken/supplanted? That’s a ton of trust in something with not much healthy history.

        • peartart
        • 6 years ago

        Pan for gold?

        • Deanjo
        • 6 years ago

        Melt down their 7970’s to try to extract the gold and sell it on a real market.

    • Klimax
    • 6 years ago

    7970 all over again, this time green side.

    • Bensam123
    • 6 years ago

    Most disappointing new GPU generation launch ever. This is like getting hand-me-downs to replace your current clothes. Of course they’re nicer than what you’re wearing, but they lose their luster when other people have already been wearing them. While AMD held back, I hope their newest bread and butter video card isn’t also something like this.

    On the plus side I like the idea of Shadowplay as a streamer. Ideally it’d be great if they just made a x264 compilation with the necessary ability to take advantage of their GPU encoding scheme, NOT an entirely new program which you’ll have to use and stream with. In my experience there are actually quite a few streaming programs out there and most of them just plain suck. They don’t do what you want them to do, they lack support, and they’re generally a pita to use.

    I’m guessing Nvidia is going to throw this out there as an all-in-one program which you’ll have to use if you want to use the hardware encoder, which is something I don’t like… at all. Especially if AMD and Intel decide to follow suit. You’ll have a bunch of different splintered programs all running around in different directions which ideally could be done with one simple thing, which is a custom x264 encoder.

    As Cyril reviewed a few years ago, x264 encoding has been built into the last few generations of video cards. I also hope this isn’t limited to the latest and greatest video card that simply isn’t worth buying for its performance alone, such as the 780. But I’m guessing since Nvidia is going to whore this out to the max (like everything else they do), you’ll be required to buy a 780, which is something begrudgingly a lot of streamers would do. I’d personally rather spend an extra $300 and get a hex-core over spending an extra $350 for a slightly faster video card; that in itself is a no-brainer. There is also the downside of hardware based encoders not being adjustable at all. You take what you can get, you can’t change any part of it, so they can look worse than a well-optimized encode.

    Currently there aren’t any streaming programs that implement hardware based encoding, as that is something that deals more with the encoder than the program itself, and they really need a little bit of experience with both in order to implement it properly, so it simply hasn’t been done. It has been talked about a lot, but nothing has happened even though it’s been around for years. Not just hardware encoding, but also OpenCL encoding. All that’s come of OpenCL encoding is a couple experimental x264 builds that offload lookahead, which only changes CPU usage of the encoder by a couple %.

    (For those of you that have been living under a rock and don’t know what video game streaming is, visit Twitch.tv)

      • chuckula
      • 6 years ago

      Before you complain that you were downmodded with no responses, here’s your response: WTF are all your posts about all of the sudden? What is up with “streaming” and why are you so obsessed with it? Are you trying to rip movies while playing games and complaining that the GPU isn’t doing the rip process while playing the game????

        • Bensam123
        • 6 years ago

        Twitch.tv?

        I’m unsure if you’re trying to be obtuse or you truly don’t know. For someone who seems to pride himself on being very knowledgeable, I find it hard to believe you haven’t caught both ‘streaming’ and ‘games’ in any of my posts. It was even mentioned in the article when Scott was talking about Shadowplay, to which I was also talking about.

        There was even a Friday night topic about it:

        [url<]https://techreport.com/news/24512/friday-night-topic-glued-to-the-youtube[/url<] And of course my very own channel: [url<]http://www.twitch.tv/bensam123[/url<]

          • chuckula
          • 6 years ago

          OK, so I visit this site and I am under the impression that it is for watching other people play video games? What little free time I have for games tends to get put into playing them instead of watching them, but anyway.. LEROY JENKINS!

          So, from your long and drawn out posts it looks like you are freaked out about making videos of your games and putting them on this website. Well, if you know anything about modern GPUs, I’d highly recommend avoiding the GPU for anything related to making the video unless you really don’t care about the performance of your game. GPUs are really really great at perfectly parallelized high-latency number crunching. That basically means they totally suck at making context switches between your game and whatever video encoder you would use with the GPU. Basically: do it on the CPU, use a dedicated piece of hardware like Quick Sync or, if you are obsessed with using a GPU, get another one that is dedicated to the video encoding.

            • Bensam123
            • 6 years ago

            So you find out you’re wrong and you immediately look down on it because you were wrong? “Watching video games?!?! God, who would ever do that?!?!”

            As I said there was a whole Friday night topic discussing this, check it out.

            Second paragraph isn’t worth commenting on as you’re trying to piece together a solution like Deanjo for something you just recently found out about and I’m not going to take the time to explain why it’s wrong as even you aren’t interested in it and are just looking for a way to belittle someone you’re asking questions.

            It’s actually sorta interesting. I’d say about 98% of the people who comment don’t have experience with game streaming so it’s actually pretty easy to draw conclusions on personality based on how they talk about the topic for the first time without really knowing anything about it. You sir didn’t do your google homework before responding to a topic.

            • chuckula
            • 6 years ago

            1. Your first paragraph makes a huge deal about how I’m “wrong” acting like everybody knows exactly what you are talking about, and then the last paragraph is “98% of the people who comment don’t have experience with game streaming,” which is probably an accurate statement (maybe even low-balling a bit).

            2. You accuse Deanjo & I of not having any good solutions, but you don’t seem to have anything other than throwing around buzzwords about different x264 implementations. I think Deanjo’s idea for a dedicated encoding card is dead-on right and you should seriously consider it or at least give a [b<]legitimate[/b<] reason for why it can never work instead of just calling us idiots. A card that is dedicated to encoding video data is going to be a whole lot better than trying to screw with your GPU that is purportedly being used to play a game. Do you have any substantive reasons *why* a dedicated encoder card won't work?

            • Bensam123
            • 6 years ago

            The point of my first paragraph was that you’re acting like a douche without knowing at all what you’re talking about. You can’t talk down to people like they don’t even know what they’re talking about, when you don’t even know what you’re commenting on. Taking this a bit further, both you and Deanjo going through all the posts and marking each other up and me down. Such childish behavior.

            Streaming definitely happens; that’s what Twitch.tv is. People stream to it using applications like OBS, Xsplit, and FFSplit. They don’t use hardware encoding. The only piece of hardware that I know of that’s currently capable of live streaming is the Avermedia Live Gamer HD (which has a lot of issues).

            They aren’t solutions if they don’t actually solve a problem. Neither you or Deanjo offered a solution, you were simply making off the cuff statements about something you have no experience in and then making it seem like I don’t know what I’m talking about.

            Because there isn’t a dedicated card that also allows for on the fly video editing and has an entirely built in streaming software package! That’s why this is a big deal and why Shadowplay is a big deal. Neither you or Deanjo offer such a solution. Deanjo’s idea of a solution to this is a VCR. They don’t run the Superbowl off a VCR.

            A capture PC is and always has been an option, but that’s an entirely other computer and that’s the equivalent of the ‘dedicated encoder’. Because you need to edit scenes in real time off the actual stream. Adding a webcam is the easiest and most prevalent example.

            Many people now have integrated graphics in their computer that isn’t being used alongside the dedicated GPU. That could be used exclusively for hardware encoding, but hasn’t been yet (OBS just released an experimental build three days ago with support for IB and SB, though).

            Adding a $20 video card into a system for hardware encoding is a very small price. That’s why this is great for me and I wrote a few paragraphs on it, which you immediately put down, without even knowing what I’m talking about.

          • Deanjo
          • 6 years ago

          Get one of the many Hauppauge Colossus/HD PVR products that are able to do exactly what you are wanting to do. Hell they even have made a dedicated piece of software dedicated for what you are wanting to do.

            • auxy
            • 6 years ago

            Those products are by and large terrible, and come with a lot of caveats and restrictions.

            That said, GPU-based encoding is also terrible, and isn’t really a solution either.

            • Deanjo
            • 6 years ago

            I have a couple of colossus and hd pvr and they do EXACTLY what he wants to do. With the new software you don’t even have to use band aids like xsplit and flash media encoder. Sorry, my real life use of the products trumps your assumptions.

            • Bensam123
            • 6 years ago

            I’m guessing you’ve never streamed gaming before and are just suggesting an off-the-cuff answer you think will suffice. There is a thread on this in the builders forum and plenty of other resources to look into before you think you have a prime answer. It’s not as clear cut as you’re suggesting.

            A capture card isn’t hardware offloading; it STILL needs to encode the video, and as such you either need to encode it on the primary computer (which makes the capture card pointless in the first place) or build a second computer to do it.

            • Deanjo
            • 6 years ago

            The colossus and hd-pvr are hardware encoders. You do not need another computer either. Most video cards today have an HDMI out in addition to DVI / DP outs which can be looped back into the colossus/hdpvr on the same system. You have to put your system in mirror mode. I have done this tonnes of times; all the software does is encapsulate and segment the resulting stream to your twitch.tv servers. It all runs in the background and uses less than 1% CPU usage on the average computer system nowadays.

            Dude I have done this a ton of times and a second system is not needed at all.

            The hardware even takes care of reducing the frame rate and scaling the resolution of the encoded stream, making it suitable for streaming on slower home connections.

            • Bensam123
            • 6 years ago

            Looking at the product page, it’s a recorder. Recording to a local disk isn’t the same as live streaming. You can pretend it is and say how much they’re alike, but they aren’t the same thing.

            I only know of one hardware encoder for live streaming and that’s the Avermedia Live Gamer HD, which has quite a few problems of its own (including needing to use Xsplit to use it).

            • Deanjo
            • 6 years ago

            Do you know what the difference between recording and streaming is? It’s the output destination of the stream: you dump to a file, or you dump to a cache file where it is segmented and encapsulated. Both are easily done with the Hauppauge products. The Avermedia Live Gamer HD does the exact same thing (albeit at a worse quality).

            • Bensam123
            • 6 years ago

            No, see, a DVR is missing the entire software backend. Just because the hardware is capable of doing a task doesn’t mean that it does the task.

            Intel, AMD, and Nvidia have been capable of doing hardware based encoding for years, but it doesn’t happen because there isn’t software available for it.

            Imagine them streaming the Superbowl off a DVR. You’d have to institute scene changes on the fly, which you can’t do. You can’t insert video clips in the middle of a live stream. You can’t add a webcam to the output. You can’t play video files on it because it records EXACTLY what your monitor displays. The software back end is completely missing.

            A DVR takes into account that you have time to edit the footage after it’s made. Live streaming is done on the fly, there is an entirely different dimension in there to deal with… time.

            • Deanjo
            • 6 years ago

            [quote<]No, see the a DVR is missing the entire software backend. [/quote<] No it isn't that is what their StreamEez is for. You are even able to live encode a logo if you want. [quote<]Imagine them streaming the Superbowl off a DVR. You'd have to institute scene changes on the fly, which you can't do. You can't insert video clips in the middle of a live stream. You can't add a webcam to the output. You can't play video files on it because it records EXACTLY what your monitor displays. The software back end is completely missing.[/quote<] Again no it is not. It is right there with StreamEez and even allows you to mix audio and overlay graphics if you wish realtime. It is mixed into the hardware transcoding of the Dolphin/ViXS/etc hardware encoders.

            • Bensam123
            • 6 years ago

            I looked at StreemEez. It simply fires off your encode to Twitch. That doesn’t allow for real time editing of the stream.

            [url<]http://www.youtube.com/watch?v=_8ng8_Ls22I[/url<] That is nothing like Xsplit or OBS. You can't add multiple sources, you can't add video on top of it unless you actually play the video on your computer. You can't change scene. You can't mux audio. You can't add transitions. You can't add a green screen. You can't stream to multiple services at the same time. You can't mix multiple scenes (chat, webcam, and video). You can't use more then one source. It simply fires off what's on your screen. This is not the same thing. You're trying to compare something like Adobe Premiere to a VCR. I can't run the software as I downloaded it, so I'm just going off the screenshots and the reviews I've seen (Putting aside the ridiculous amount of bug reports and usability problems that popped up). They all say the same thing. You're more then welcome to try either Xsplit or OBS.

            • auxy
            • 6 years ago

            They
            don’t.

            ┐(‘~`;)┌

            Hardware is always more restrictive than software, and these devices aren’t as versatile as a software solution. Thanks for the suggestion, but it’s just not that useful.

            • Deanjo
            • 6 years ago

            They are more than capable of handling the quality levels used for live streaming to twitch.tv.

            • auxy
            • 6 years ago

            That’s great! It also has nothing to do with me. They can’t record 1080p60 (or higher), and that’s a minimum for me. (⊙﹏⊙✿)

            I also don’t like the formats they encode to. High profile yuv420? Hi444PP, please!

            • Deanjo
            • 6 years ago

            Guess what: twitch.tv doesn’t take streams higher than 1080p30. Moot point. They also don’t accept yuv420 or Hi444PP streams. Straight 4:2:0 and that is it.

            • Bensam123
            • 6 years ago

            This is wrong. They accept higher resolution streams than 1080p. People have streamed Eyefinity setups to Twitch. They also accept 1080p@60. I’ve personally streamed Simcity to Twitch in 1080p@60.

            Point me to where they don’t accept yuv420 or Hi444p streams. I’ve seen nothing posted of the such and I suspect you’re just pulling this out of your bum. If the Flash streaming protocol supports it, Twitch supports it.

            • Deanjo
            • 6 years ago

            The streams they are feeding are captured via the likes of xsplit but the transcoded stream going out to their servers is 1080p. yuv420 and Hi444p streams are not accepted as well, I have tried you will get nothing streaming at all. Their servers will not accept it you will get an error on every attempt. Go ahead and try it.

            • Bensam123
            • 6 years ago

            They aren’t. Transcoded resolutions, possibly (the FPS), but if you’re using the original resolution no transcoding is being done and it’s just being relayed. Transcoded resolutions also go higher then 1080p. There was a website that used to pull video statistics from streams, but I don’t remember the URL. It was posted on OBS awhile back.

            What’s your Twitch channel?

            • Deanjo
            • 6 years ago

            Bull**** it is not being transcoded, your overlay is being encoded to a h264/AAC rtmp stream. Sending raw untouched video would be well beyond even FIOS connections.

            • Bensam123
            • 6 years ago

            You missed the point and instead decided to look for an easy way to counter.

            I mentioned in terms of transcoding, after an initial encode has been done on the users end. Twitch doesn’t transcode the stream for the original resolution, they simply relay what you’ve already encoded.

            Transcoding is not the same as simply encoding. Transcoding is going from an already encoded format to another.

            • Bensam123
            • 6 years ago

            GPU-accelerated encoding definitely has its pitfalls, but being able to free up CPU cycles by offloading to an idle built-in GPU isn’t anything to look down upon. Or, if you’re able to, simply buy a super duper cheap video card and offload it to that instead of the primary card.

            Cyril did a article on the video quality a few years ago, it’s buried somewhere in the reviews if you’re interested. That’s on just encoding, not real time encoding for streaming.

            • auxy
            • 6 years ago

            Yah, using something like QuickSync would be pretty neat.

            • Bensam123
            • 6 years ago

            That’s called a capture PC and you still need to stream it. So you have to build a dedicated PC in order to do what you’re suggesting. We had a giant thread about this in the builder forums. For guys that don’t understand what you’re talking about, you’re being awfully rude while asking questions in a ‘matter of fact’ tone.

            • Deanjo
            • 6 years ago

            Because it is fact. You do not need a separate PC at all. You loop back your HDMI out to the colossus or hdpvr. Since it is a hardware encoder, the resource usage is next to nothing. I am easily able to stream 3 HD streams with less than 4% CPU usage (which comes from the only software operation: encapsulating and segmenting the stream into a suitable format for live streaming to Twitch’s / YouTube’s live streaming servers).

            • Bensam123
            • 6 years ago

            It’s a recorder, read my other post.

            Live streaming isn’t the same as recording locally. Don’t try to muddle the two together.

            • Deanjo
            • 6 years ago

            Sigh, the capture process is the same for both. Both have to be encoded into a codec suitable for streaming.

            [url<]http://www.youtube.com/watch?v=6dBjicbHcqg&hd=1[/url<] You are not recording locally other then for the small buffer needed for any type of video streaming. Here is the exact process step by step. Plug HDPVR into gaming machine. Hook up HDMI out from your video card to the HDMI of the HD PVR. Set your video card to 1080p and setup the system for mirrored display mode. Open streaming software (either Hauppauges StreamEez or use their new capture suite and use StreamEez there). Log into your twitch tv account. Click stream. Open up game and start playing. ALL ON ONE MACHINE, ALL LIVE STREAMING, ALL HARDWARE ENCODED!

            • Bensam123
            • 6 years ago

            Read my other post.

            • Deanjo
            • 6 years ago

            I did and you clearly do not understand what streaming is.

            • Bensam123
            • 6 years ago

            Helpful and informative… good job.

      • Bensam123
      • 6 years ago

      I know this is like beating a dead horse, but where did all these negatives come from and why? XD

        • auxy
        • 6 years ago

        Your posts are too long. ┐(‘~`;)┌

        I get downvoted for the same thing.

          • rxc6
          • 6 years ago

          Length has nothing to do with it. chukula was very clear in the first response.

            • auxy
            • 6 years ago

            So people are downthumbing him for asking about streaming? What a bunch of jerks!

            • Bensam123
            • 6 years ago

            They aren’t downthumbing me for asking about streaming; rather, it’s for talking about the streaming features of the new Nvidia series when they don’t even know what live streaming is, and they decisively decide I’m wrong for mentioning such a thing.

            I haven’t touched any of the thumb ratings on any of the posts who have responded here to make a point.

        • Krogoth
        • 6 years ago

        Haters are going to hate.

        Mostly die-hard fanboys and cheerleaders who will discredit anything that puts a negative light on their favorite team.

    • superjawes
    • 6 years ago

    So the bad news for the GeForce crew is that there is no new silicon this time around. The good news is that the value is going to take a jump.

    Not bad. It would be nice to get a “real” launch, but this will at least keep the current silicon market interesting.

    • Lordhawkwind
    • 6 years ago

    At £500+ it’s a waste of money. If it was priced at around £350 I’d be all over it. Obviously Nvidia are eking out the biggest margin they can for this high-end card. I’m sure AMD would do the same.

    I’ll just wait until the next gen comes along and maybe replace my 7950 with a 8950 or whatever if the price point is right. Nothing really taxes my 7950 at 1080 so why waste my money.

    For the money Nvidia are charging you will be able to buy a PS4 or Xbox One and still have change to buy a couple of good new game releases.

    At these silly price points PC gaming may be on its last legs.

      • travbrad
      • 6 years ago

      [quote<]At these silly price points PC gaming may be on its last legs.[/quote<] PC gaming has been declared dead every year for the past 10 years at least. Meanwhile I've been enjoying great PC games the whole time (on $200ish video cards) [quote<]If it was priced at around £350 I'd be all over it. ... Nothing really taxes my 7950 at 1080 so why waste my money.[/quote<] Seems to me you'd be wasting your money no matter what price this card is, if your current GPU runs everything great already.

    • thegtproject
    • 6 years ago

    Very nice article! I love techreport’s article depth. This really has given me thought of “why not just buy a titan then” Just a couple hundred extra bucks, i know it’s expensive but it is in my budget. Anyone have any thoughts on why one with the means should choose the 780 over the Titan or just get the Titan? My first thought is driver support, is nVidia going to redhead stepchild the Titan in driver support?

      • beck2448
      • 6 years ago

      The Titan is still the King. Get it if you can, you won’t be disappointed.

      • Meadows
      • 6 years ago

      If you use a GPU for work, get the Titan.

    • USAFTW
    • 6 years ago

    Those greedy bastards @NV! How dare they charge 1000 bucks for Titan and not give it a back plate? It has 12 memory chips on the back. Why make the front look so nice to see and touch, and forget the part that’s mostly in my eyesight? The reference 5870 and 6970 had these back plates, so what’s the matter with a couple of dollars extra to spend on a flagship part? WTH.

    • puppetworx
    • 6 years ago

    Classic sales technique. Release an extortionately priced one-off product (Titan) in a limited run and grab lots of headlines, then release an ever so slightly lower specced product (GTX 780) for a fraction of the cost. Relatively speaking, the new product then looks like a bargain because the public has been ‘primed’ to expect a shockingly high price tag for such a product.

      • bfar
      • 6 years ago

      It’s still pretty expensive, but I suppose it’s not far off the launch prices of GTX 280 and 8800GTX.

      I’m just sore because I can’t afford one for another couple of months!

    • Damage
    • 6 years ago

    Quick correction: Nvidia originally told us it would be bundling Metro: Last Light with the GTX 780, and an earlier revision of this article reflected that. Nvidia has just informed us that this game won’t be bundled with the GTX 780 after all, so I’ve corrected the review text. Sorry for any confusion.

      • derFunkenstein
      • 6 years ago

      Jerks.

        • ULYXX
        • 6 years ago

        You’re a classy gentleman because that’s a nice way of saying it.

          • derFunkenstein
          • 6 years ago

          First time someone’s called me classy in an unironic, non-sarcastic fashion. 😆

      • tipoo
      • 6 years ago

      Bundlegate!

      • drfish
      • 6 years ago

      As one of the 3 people who bought a 780 after reading the original article and already promised the copy of Metro to friend I am disappoint. Oh well.

      • ssidbroadcast
      • 6 years ago

      Pre-order cancelled!! :O

      • ClickClick5
      • 6 years ago

      Nvidia: “WAIT!!!! That would cost US money….lets not do that.”

      • chuckula
      • 6 years ago

      KAAHN!!!!

      Oops.. I hope nobody reads this post who hasn’t already seen Star Trek [s<]II[/s<] [u<]Into Darkness[/u<].

    • chuckula
    • 6 years ago

    I’m building a new machine next month and at first I was going to stick with my GTX-560 (non-TI), but after this review I’m leaning strongly in favor of…. a GTX 770 (not the 780). For my 1920×1200 display, the 780 is overkill.

    Now I am well aware that the 770 is just an overclocked version of the 680 BUT.. apparently there will be models available that include the same cooler & fan setup as the Titan. That is clutch since those coolers don’t sound like jet engines. Does anyone know if there are already 680’s on the market that use the Titan’s cooler? If so, I might grab one of those if it is cheaper.

      • jthh
      • 6 years ago

      Me too. I want that sweetspot for 1900×1200 gaming for my build next month. Keep me posted!

    • Voldenuit
    • 6 years ago

    I’d be interested to see how the 780 and Titan fare in Tomb Raider + TressFX compared to the ‘lite’ Keplers and AMD’s GCN GPUs.

    • Takeshi7
    • 6 years ago

    I’m annoyed that Nvidia gimped the double precision performance of this card in the drivers. They took away the greatest thing about the GK110. I don’t like when companies make a great product, and afterwards spend more engineering dollars to purposely make it worse.

      • HisDivineOrder
      • 6 years ago

      They always do this with their gaming-focused high end card, no matter what tech it’s based on. This is the 580’s successor FINALLY coming to market and the 580 (and 480) also were gimped in this respect.

    • odizzido
    • 6 years ago

    The 7970 comes out looking amazing.

      • briskly
      • 6 years ago

      Frankly, they all seem a little clustered up. It makes me wonder if there is a bottleneck somewhere, maybe CPU side.

        • Farting Bob
        • 6 years ago

        I doubt the 3820 is bottlenecking these games. Maybe if a particular game was very CPU intensive or poorly multithreaded it might, but I haven’t heard of the games tested being CPU-limited on a 3.6GHz i7.

      • CreoleLakerFan
      • 6 years ago

      Agreed. On performance per $, the 7970 wins by a small but consistent and significant margin over the 680. If you factor the gaming bundle the AMD becomes the clear winner from a value perspective. Of course, none of that matters if you aren’t interested in playing any of the titles included in the bundle.

        • HisDivineOrder
        • 6 years ago

        Unless you sell them.

        • JustAnEngineer
        • 6 years ago

        It would tickle me to extend the value chart just a bit more and add the $270 -20MIR Radeon HD7950 3GB, the $370 -10MIR GeForce GTX670 and the $280 -20MIR GeForce GTX660Ti to the same chart.

        • rechicero
        • 6 years ago

        It matters, you can always sell the keys. I’d say a 50% value would be fair.

          • Airmantharp
          • 6 years ago

          I’ve bought most of these keys for less than 50%… These AMD bundles have been more of a boon for people looking to pick up new game license keys on the cheap more than anything- not that I’m complaining!

      • Pettytheft
      • 6 years ago

      I’ve seen them go for cheaper than what’s on the plot even. With the bundle it boosts the value even more.

      • tipoo
      • 6 years ago

      Yeah, it’s unfortunate that they didn’t already have all these driver performance improvements ready when GCN launched. The performance impression is already in people’s heads, no matter if newer drivers push it ahead of the competition.

        • Airmantharp
        • 6 years ago

        …and they still haven’t fixed their multi-GPU drivers.

    • PopcornMachine
    • 6 years ago

    “AMD’s drivers and GPU combine to produce a smoother gaming experience in this case.”

    Whoa. Never expected to read that! Something must have changed recently.

      • wierdo
      • 6 years ago

      Could also be improved measurement tools. The difference between the 7970’s Fraps and FCAT numbers (16.7 ms tab) under Guild Wars 2, for example, is kinda dramatic.

    • WaltC
    • 6 years ago

    Good review, and more proof positive that the game engine contributes as much if not more to frame latency than IHV drivers. Every game engine is going to be different in this regard, even when running the same drivers. How to explain the same driver sets giving both high and low latency readouts in different games? Differences in the [i]game engines[/i]. This used to be the province of 3D gaming 101, but somehow was temporarily lost along the way. Glad to see this bit of knowledge being "rediscovered"…;)

      • shalmon
      • 6 years ago

      Could be good marketing material for engine developers…

      If you’re a game developer, why wouldn’t you want your game to use an engine that has lower latency than competitors’? A higher framerate and lower latency resulting in a better gameplay experience for the audience in general should be a valid incentive, especially on lower-end hardware.

      This in turn could put pressure on other developers to put a little more effort into the frame-delivery aspects of their game engines.

      • HisDivineOrder
      • 6 years ago

      I never read anyone saying that engine had nothing at all to do with frame latency. I mean, look at the universal scorn for Far Cry 3 months back in this regard. What I see is people saying that given the same game engine, you should expect a LOT less frame latency than the 7xxx series was giving you when compared to how well the Geforce was doing at the time.

    • clone
    • 6 years ago

    HD 7970 launched, redefined high-end single-GPU performance for $550, & got criticized on its pricing, with many considering it a “crime”; GTX 780 does the same but demands an extra $100 ($650), and ppl consider it a bargain?

    lol.

    • Disco
    • 6 years ago

    Wow, this review just makes me even happier about my 7970 I bought last OCTOBER for $370 CDN from NCIX. And it included the original game bundle (Far Cry 3, Hitman, Sleeping Dogs). No wonder AMD has not bothered to bring out the 8xxx Radeons, if this is the best price/performance that Nvidia can bring to the table. AMD have a good thing going right now.

      • Stickmansam
      • 6 years ago

      NCIX even had some drop to $300 CDN after MIR for Black Friday + Boxing Day 😉 Had a friend pick up 2 for less than $650 and two game bundles.

        • Disco
        • 6 years ago

        Yes. Very happy with no regrets at all. How often can you say that 6+ months after a hardware purchase?

          • NeelyCam
          • 6 years ago

          Doesn’t happen often…

          (Written on a 5-year-old laptop that still works pretty damn well)

          • JustAnEngineer
          • 6 years ago

          I purchased a Sandy Bridge processor on launch day. That’s held up pretty well.

          I bought a Radeon 9700 Pro the month that it launched. That card was better than the competition for a long time.

          The best example though, is that I bought a 2560×1600 IPS LCD monitor more than six years ago that is still better than the displays that 98½% of users have today.

            • willmore
            • 6 years ago

            Yeah, I used an SGI LCD monitor (1600×1024) for a decade. Completely worth it. Some investments pay off long term. Not all do.

    • brucethemoose
    • 6 years ago

    $650 is 7950 CF / 660 TI SLI territory.

    I know single cards are better than multi card setups, but even a single OC’d 7950 can nip at the heels of an OC’d 780… This card should be $550 at most, which is still way more than the 570/275 were.

      • briskly
      • 6 years ago

      Define “nip at the heels.” Not that I think this card is well priced at all, but the OC’ed 780 can hit ~1160MHz for core clock.

      • Cataclysm_ZA
      • 6 years ago

      No it can’t. Even through overclocking, the HD7950 would still trail by 10% at best. Plus, even with minor overclocks the GTX780 overtakes Titan quite easily.

        • auxy
        • 6 years ago

        Most Titans run a ~40% overclock without a lot of drama. This translates directly to ~40% performance gains in games that are GPU-limited. Who’s overtaking what, now?

          • Klimax
          • 6 years ago

          A little backing:
          [url]http://www.hardocp.com/article/2013/04/29/nvidia_geforce_gtx_titan_overclocking_review/[/url]

          Increase fan speed, get a nice boost...

      • beck2448
      • 6 years ago

      CF is a mess. This card OCs to faster than Titan easily.
      [url]http://www.hardocp.com/article/2013/05/23/nvidia_geforce_gtx_780_video_card_review/7#.UZ8gWkQk-ik[/url]

      With an OC, it’s faster than a stock Titan for 35% less.

    • ssidbroadcast
    • 6 years ago

    Wow. Disappear from TR for a couple years, and when I come back there are $650 and $1000 video cards?!! What kind of future dystopia is this…

      • brucethemoose
      • 6 years ago

      Moore’s law now applies to graphics card prices, too!

      • tipoo
      • 6 years ago

      Wasn’t the GeForce 7900 GX2 $600, unadjusted for inflation? And the 8800 Ultra…

      How long were you gone? What did you miss? I have a Radeon 2900 to sell you 😛

      • clone
      • 6 years ago

      Nvidia’s 8800 Ultra was $830; things haven’t changed much.

      • NeelyCam
      • 6 years ago

      Welcome back!

        • ssidbroadcast
        • 6 years ago

        Thanks dude.

    • Spunjji
    • 6 years ago

    Eesh… I don’t know how to feel about this launch. Titan makes its own sense with GPU compute capabilities and the best performance, the 7970 takes the performance/value crown, and this… I don’t know, it just doesn’t seem to have much of a position. I just can’t see how it won’t look like a silly extravagance next to the 770 when they release that.

    Definitely agreed with Chuckula, nVidia clearly want you to be looking down from the price of Titan to this (as TR have) and not up from the prices of other cards (as PCPer have).

      • ptsant
      • 6 years ago

      Titan only exists so that nVidia can make the $650 price point seem reasonable. Obviously, they also have a few other marketing tricks, like “special offers” at $600, so that the high-end $500 GPU tier becomes the $600 tier…

        • tipoo
        • 6 years ago

        If you need DP performance, Titan is the cheapest card for it, the next uncrippled card up is 2400 dollars.

          • ptsant
          • 6 years ago

          Yeah, like 2×7970 won’t give you enough DP performance for much less money… A quick reminder: DP performance on the 7970 is 1/4 the SP rate (vs 1/3 for the Titan), so it’s not that far away. Consider the following example:
          [url]http://www.sisoftware.co.uk/?d=qa&f=gpu_finance_fp64[/url]
          Also:
          [url]http://www.theinquirer.net/inquirer/review/2162193/nvidias-gtx680-thrashed-amds-mid-range-radeon-hd-7870-gpu-compute[/url]

          I’ll admit that CUDA has a better ecosystem right now, but you really can’t say that the 7970’s DP performance is crippled...

            • tipoo
            • 6 years ago

            Next uncrippled Nvidia card up then.

    • Silus
    • 6 years ago

    NVIDIA really did a great job with the Kepler family of GPUs. The highest end GPU is reigning supreme, yet it’s not even fully enabled on any of the products it’s in. I wonder if that will ever happen in any consumer product (probably only in Teslas)

      • Krogoth
      • 6 years ago

      Most of the Kepler-generation GPUs are just tweaked Fermi designs that rebalanced resources in their SMX clusters, trading GPGPU performance for better efficiency and game performance. GK110 doesn’t follow this convention, which is why it outclasses its lesser kin by a large margin in GPGPU-related stuff.

      Nvidia learned from its mistakes with the Fermi generation: putting its full GPU design (GF100) into the market first and grossly underestimating the yield problems at TSMC.

        • Silus
        • 6 years ago

        Disagree, because it’s quite clear how different Kepler is from Fermi, architecturally speaking. Sure they share some concepts, but that’s usual. For the most part no architecture changes completely. The only instance when that happened was when they went from fixed pipeline design (pre-G80 era) to unified (G80 and post-G80 eras).
        However, in terms of products, Kepler followed Fermi closely, since GF100/GF110 were gaming and compute monsters, with GF104 and below being game oriented. Same thing with Kepler: GK110 is a gaming and compute monster while GK104 and below are gaming oriented. The only thing different was that NVIDIA was able to compete with AMD’s highest end GPU with their mid-range GPU.

          • clone
          • 6 years ago

          Nvidia lost that competition, Silus…. I’m not sure that’s worth bragging about given how far the HD 7970 is distancing itself from the GTX 680 over time, especially given the 680 isn’t going away but is instead going to be rebranded as the GTX 770 with a 10% boost via some tweaks to the design.

          It’s kind of a drag for anyone who bought a GTX 680 or GTX 670, given how they are aging so quickly just a year in…. usually it’s more like 3 years before architectural shortcomings get exposed, leaving cards looking terrible compared to their direct competition.

          The other side of the coin being that Nvidia’s yields were so terrible they were incapable of offering more.

          The ultimate irony in all of this is that it’s AMD’s drivers that are now killing Nvidia’s cards, which is a true testament to the GCN design: faster out of the gate, cheaper to buy, and faster over time.

            • swaaye
            • 6 years ago

            It took AMD well over a year to get GCN performing like it does today. The “GHz editions” were a reaction to Kepler’s performance. GK104 is also considerably smaller than Tahiti but has successfully competed with it for a year.

            I would prefer to ask the question of why on earth it took AMD so long to get the drivers performing as they now are. And why it seemingly took the press to bring some of the problems to their attention.

            • MEATLOAF2
            • 6 years ago

            One way to look at it is that, assuming AMD continues to use/tweak the GCN architecture in its future cards, they already have drivers that work now, so a huge delay for decent drivers likely won’t happen in the future.

            And another way to look at it: Nvidia probably won’t have much higher prices on average compared to AMD, IF AMD doesn’t screw something up. That’s a win-win whether you prefer Nvidia or AMD.

            • ptsant
            • 6 years ago

            GCN was a completely new architecture and a major transition from VLIW. That seems a decent reason to me.

            • swaaye
            • 6 years ago

            It’s a transition from VLIW for sure, but it’s certainly not completely-new-from-scratch hardware. Kepler was also quite a transition from what NV was doing previously.

            No, what I think happened in 2012 was AMD’s driver department being in some kind of turmoil. Not only was GCN not progressing much, they were causing new problems for older cards too. They had also recently had that inexcusable driver disaster with Rage.

            • HisDivineOrder
            • 6 years ago

            I suspect you’re right, but I think the reason the driver team so completely dropped the ball with their 6xxx series drivers ahead of the 7xxx series (i.e., the Rage incident, many delays on drivers for games, etc.) was because they were trying to build those drivers for GCN, and they had far fewer resources to devote to making drivers for their older lines with new games. Plus, they DID have some internal strife at the time and they did do a few layoffs in the interim.

            So really, the company was/is a mess internally and they had to sort some things out–especially a new card launch–to get things good and proper. I think that’s one of the advantages of not doing a card launch besides the cost argument (ie., cheaper to do bundles than launch a respin of the same tech).

            If they’d done a respin, then they would have had to focus their driver teams on making sure those new designs with their tweaks to the architecture were performing optimally and I think after all the layoffs, they don’t have the engineers to both improve the current GCN drivers (ie., frame metering, FCAT analysis, new games, AND a new memory manager) and also tailor make/tweak new drivers to work to their best with the tweaked cards that would come.

            I wonder if they’ll have enough resources to divert to the upcoming generation of cards that they’ll apparently release at the end of the year, or will we have a repeat of the end of the 6xxx series, where drivers got horrendous for the last part of the year?

            • clone
            • 6 years ago

            It took AMD a year to extend its lead… having the lead and then extending it is good; not having it and falling farther behind later is bad.

            p.s. Die size is a choice made by the builder, not the consumer; in Nvidia’s case it was a cost advantage that was not passed on to the consumer.

            Proclaiming the GTX 680’s inability to keep pace with the HD 7970 less than a year after launch a success is as silly as complaining that AMD could have pulled even further ahead sooner, which on its own is an acknowledgement that the GTX 680 was always going to age badly.

            • Silus
            • 6 years ago

            Truly amazing how you turn a horrible state of affairs with drivers for everyone using a new Radeon when it came out into “AMD extends its lead”…

            Facts: the GTX 680 was faster than the HD 7970 when it came out (March 2012). It also consumed less energy while being faster overall. Not by much… but still faster. Oh, and it was cheaper too!

            AMD responded 3 months later (June 2012) with the GHz Edition, which is… an overclocked HD 7970 (nothing wrong with that), and they managed to take the single-GPU crown back.

            Driver woes continued for the whole of 2012, and only in 2013 have they finally provided something to fix the long-term driver problems that affected every Radeon user…

            Of course, Titan comes out and takes the single-GPU performance crown back, and the GTX 780 now takes the second spot. Sure, it is expensive, but you’ll see the GTX 770 take on the HD 7970 GHz Edition easily and, who knows, maybe at a lower or similar price point.

            • clone
            • 6 years ago

            Silus, don’t lie:

            fact 1: AMD came out with the HD 7970, taking the lead.
            fact 2: Nvidia officially paper launches the GTX 680 with an MSRP lower than the HD 7970’s.
            fact 3: AMD’s HD 7970 street prices at the time of the GTX 680’s paper launch are lower.
            fact 4: AMD launches the HD 7970 GHz edition, also a paper launch; it officially retakes the lead.
            fact 5: Nvidia finally gets GTX 680s to market 2 months after the HD 7970 GHz edition.
            fact 6: AMD has, in your words, “broken drivers”, drivers so terrible that Nvidia has remained in 2nd place.
            fact 7: AMD works on its drivers and now the HD 7970 is faster than the GTX 680, and while Nvidia is working on theirs, they are unable to prevent dropping to third.
            fact 8: The GTX 680’s architecture will forever limit its ability to improve, while the HD 7970’s silicon still has even more room to stretch its lead.
            fact 9: AMD came out first, they came out fastest, they sold for less money, they sell for less money, they continue to stretch their lead over the GTX 680.
            fact 10: I can get a $328.00 plain vanilla Gigabyte HD 7970 that is faster than a $430.00 GTX 680. (cheapest vs cheapest)
            [url]http://www.directdial.com/GV-R795WF3-3GD.html[/url]

            • Silus
            • 6 years ago

            “AMD has in your words “broken drivers””

            This alone shows what a hopeless AMD fanboy you are. Even when they screw you over with crappy drivers that hinder the performance of the card you bought and take over a year to fix them, proved many times over by this and other sites, you still try to deny or forget about it… suit yourself.

            Then there’s the bit about the Kepler architecture somehow being forever limited while GCN is oh so much better…
            Never before have NVIDIA’s and AMD’s architectures been so close to each other, in performance and features. Kepler was made with power efficiency in mind, GCN was made with compute in mind. GK110 is a different matter, because just like GF110 before it, it was a compute-oriented product that might not even have seen the light of day in desktop products.
            Please go read something about each architecture instead of just talking about things you know nothing about.

            GTX 680s to market 2 months after the HD 7970 GHz? Give me a break… yes, they were hard to find when they were released, but so was the HD 7970 when it “launched”, and that’s not out of the ordinary unless it’s a so-called “hard launch”.
            As for price… give it a rest… have a look at the price of the HD 7970 when the GTX 680 was hard to find online:

            [url]https://techreport.com/news/22713/nvidia-geforce-gtx-680-is-hard-to-find-online[/url]

            For the most part, above MSRP... and this was more than 3 months AFTER it was released. And this happened because TSMC had supply issues with 28nm, which made even Qualcomm look for someone else to fab their chips, as was also widely reported. This constrained EVERYONE that used 28 nm @ TSMC... including your favorite AMD! Anyway, why am I wasting time with you? You’re hopeless... go buy all the HD 7970s you want! No one wants to stop you!

            • clone
            • 6 years ago

            Silus, an AMD with “broken drivers” was faster than the GTX 680. I’ve never denied AMD’s drivers had issues, but somehow even with issues their product remained faster than the GTX 680….. do you have any idea how badly that speaks of the GTX 680?

            If the GTX 680 was made with “power efficiency in mind”, then why is its power consumption within 2 1/2 % of an HD 7970 GHz edition?…. that’s a horrific sacrifice given it’s more often than not 15% slower, which also refutes your comment that AMD’s and Nvidia’s architectures are “close” to one another in performance.

            Thanks for admitting the GTX 680 was both overpriced and late to market, and while yes, TSMC had issues that affected AMD, Nvidia had more issues that can’t entirely be put on TSMC given AMD managed to get product to market ahead of Nvidia by 5 months…. in volume.

            p.s. The link you provided showed reference HD 7970s listing for the MSRP of $549 along with non-reference models with custom cooling listing for more…. and those prices didn’t reflect the MIRs available at the time. The link you provided also solidly confirmed that the GTX 680 was more paper launch than real launch……. thanks, I guess.

            You lost as usual because you see only Nvidia while calling everyone else a fanboy.

            • Silus
            • 6 years ago

            LOL, is that you, Charlie? I mean, such lapses with reality usually only come from that guy….

            It’s hilarious how you (and this was already pointed out by someone else) consider it great to buy a piece of hardware more than a year ago, only to see it work as “intended” today, because crappy drivers hindered its performance. Yeah, I’m sure the early adopters of the HD 7900s are loving that.

            And then the talk about yields, which is yet another hilarious point (if one can call it that). Good/bad yields affect everyone in the same way given how mature the process being used is. 40 nm was known to be a quite troublesome process, and it affected everyone that used it; this was reported many times by various sources. 28 nm was quite a different beast and there were nowhere near as many problems, also reported many times. Plus, NVIDIA had the die area advantage this time around, which would mean AMD’s yields were worse when compared to GK104.

            And this “killing” you talk about must be coming from that single review you like the most? The one with some cherry-picked results? Or maybe from just Gaming Evolved titles? Even in those, NVIDIA wins in some of them…

            Sure, AMD is cheaper, there’s no questioning that, but they’re not cheap because everyone’s buying them…

            • clone
            • 6 years ago

            Silus, even with driver issues AMD kept the lead; now they are extending it.

            That makes the GTX 680 look pooh pooh.

            Yields affect everyone, true, but in this case poor yields cost Nvidia the lead, caused them to paper launch a product for 5 months, and never allowed them to compete on price. Worse still, the issue continues to plague their lineup a year later, as evidenced by Nvidia choosing to prolong the GTX 680’s life cycle in the form of a rebadged GTX 770 when it’s always been trailing.

            As for the reviews saying AMD’s HD 7970 is faster than the GTX 680, look no further than the review that created this discussion: overall the HD 7970 is faster than the GTX 680, to the point that it’s in a class unto itself.

            p.s. I knew you lost this discussion the moment you went off-topic. Few would claim that the GTX 680’s inability to compete on price or performance constitutes a victory, but then you are one of those few.

            When AMD’s in this position you call ppl idiots; now that it’s Nvidia, what are you? (It’s rhetorical.)

      • ptsant
      • 6 years ago

      The chip is so big, I’m not sure it makes sense for them to sell “full” versions. Yields are probably quite low and the price would have to be completely unreasonable. Maybe when the process matures…

    • Krogoth
    • 6 years ago

    GTX 780 = Titan Light, enough said.

    Kinda disappointing, considering that Nvidia didn’t even bother to make it go after the $499 price point. I guess we will have to wait for Maxwell and Volcanic Islands.

      • Silus
      • 6 years ago

      The proper way to refer to the GTX 780 is that it’s “Titan gaming prowess, light”. In compute, Titan is still far superior because its DP capabilities are not capped, while the GTX 780’s are. Just like Titan isn’t really a “K20X light” because it lacks things like ECC support.

        • Krogoth
        • 6 years ago

        Titan is a “failed” K20 not a lighter version of it. GTX 780 is an artificially crippled version of K20.

      • brute
      • 6 years ago

      YEAH, they ignored ALL the price points other than $1000 and $650!!!!

      WOOWOWOWOWO NVIDEA LOSING ALL THE MONEY CUZ THEY DONT RELEASE AN ENTIRE PRODUCT LINE ON THE SAME DAY

    • marraco
    • 6 years ago

    The performance vs. price XY charts are a true eye-opener.

    I wish I could see the same chart with more cards.

    • Deanjo
    • 6 years ago

    Should complement my Titan as a PhysX card quite nicely. ;D

    • dashbarron
    • 6 years ago

    It has boggled my mind for years why they release so many iterations of GPUs that are relatively meaningless in price and power compared to other products. I know they do it just to flood the market at every price point and confuse half the buyers into just throwing cash at a product without understanding, but still.

    A staggered approach would be so much better. Have a 600 for low performance, a 650 for mid performance, a 680 for high performance, and a 600 Titan for balls-to-the-wall. Each level could have clear performance differences at vastly different prices. I really think clear product division makes it easier for consumers to decide, and they’d trust the company more.

    Or we could just throw out a 660, a 660 Ti, and a 650 with 8GB of memory at the same price for the hell of it. And a Titan and a 680 that have nearly the same performance but are $300 apart in price.

      • nanoflower
      • 6 years ago

      I don’t know that they do it to confuse the consumer. It’s more likely that it’s a result of yields on the GPUs. If you’ve got a bunch of chips that aren’t as capable as the 680 but perform far better than the 650, does it make sense to sell them as a 650?

      That does two things:
      1) It leaves money on the table, as you should be able to get more money for the better-performing chips, but you have to sell them as something better than the 650 to get that money.
      2) It confuses and possibly angers consumers who find out that their 650 doesn’t perform nearly as well as their friend’s 650, because the friend got one of the better GPUs with fewer defects. That’s not a good thing for Nvidia (or any company.)

      • ptsant
      • 6 years ago

      In fact, even though too much choice can be confusing for the customer, it is a very common way to sell something to everyone at the price they are willing to consider. That’s why Starbucks sells 100 coffee combinations, for example. I know you couldn’t care for the 99 other ones, but every customer is different.

      • tipoo
      • 6 years ago

      That’s just how silicon yields work: with only 4-5 categories they’d be throwing out a lot of chips that could work in between two (or making less profit on them). Same reason Intel has a bajillion models.

    • End User
    • 6 years ago

    Glad I did not buy a Titan. The 780 is a punch to the gut for anyone that did.

      • derFunkenstein
      • 6 years ago

      Everyone who bought a Titan should have been well aware that this was coming. They had to find something to do with chips that were too defective for Titan.

      • Deanjo
      • 6 years ago

      Not really. I purchased a Titan for its DP capabilities and GPGPU development. Titan blows the GTX 780 out of the water in that respect, so that blows your theory that it was a punch to the gut for “anyone”.

        • End User
        • 6 years ago

        “The GK110 also brings something that has no real use for gaming: considerable support for double-precision floating-point math. ”

        Hey, if that floats your points then bully for you. For those that bought it for gaming a gut punch it is.

          • Krogoth
          • 6 years ago

          The ultra high-end has never been known for retaining its value. The buyers in this demographic don’t care about price; they simply want to get the best at the time of purchase, no matter the cost.

          • Silus
          • 6 years ago

          Titan has uncapped double-precision support, i.e., FP64 at 1/3 the FP32 rate, while the GK110 in the GTX 780 has it capped:

          [url]http://www.anandtech.com/show/6973/nvidia-geforce-gtx-780-review/18[/url]

          The GTX 780 is gaming oriented. Titan is oriented for whatever the user wants it to be, as it’s not capped in any way. So no, there was no gut punch for anyone, although of course the price is still hefty, despite no caps.

          • Deanjo
          • 6 years ago

          [quote]On the GTX 780, DP math executes at 1/24th the rate of single-precision math, just enough to maintain compatibility without truly being useful.[/quote]

          A boatload of Titans were sold just because of the uncrippled DP capability, which is 1/3rd the rate of single-precision math. The next card to offer that capability is a $3000 Tesla card. Titan is still a bargain for those capabilities.
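
          For a rough sense of how big that gap is, here’s a quick back-of-the-envelope sketch (peak theoretical rates only; the ALU counts and base clocks are the published specs, and real-world throughput will land lower, especially with Titan’s full-rate DP mode dropping clocks):

          def peak_gflops(alus, clock_mhz, ops_per_clock=2):
              # 2 ops per clock per ALU assumes fused multiply-add
              return alus * clock_mhz * ops_per_clock / 1000.0

          # GeForce GTX Titan: 2688 SP ALUs at an 837MHz base clock, DP at 1/3 the SP rate
          titan_sp = peak_gflops(2688, 837)      # ~4500 GFLOPS single precision
          titan_dp = titan_sp / 3                # ~1500 GFLOPS double precision

          # GeForce GTX 780: 2304 SP ALUs at an 863MHz base clock, DP capped at 1/24
          gtx780_sp = peak_gflops(2304, 863)     # ~3977 GFLOPS single precision
          gtx780_dp = gtx780_sp / 24             # ~166 GFLOPS double precision

          print(f"Titan DP: {titan_dp:.0f} GFLOPS")
          print(f"780 DP:   {gtx780_dp:.0f} GFLOPS (~{titan_dp / gtx780_dp:.0f}x slower)")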

        • ptsant
        • 6 years ago

          How do you justify spending the price of 2×7970 for the DP capabilities of the Titan? I have the impression the 7970 is quite close to the Titan in GPGPU. Unless of course you are a great fan of CUDA.

          • Deanjo
          • 6 years ago

          Easy. First of all, I primarily use Linux, and when it comes to Linux support, Nvidia rules the roost. Second, yes, I do prefer CUDA over OpenCL (even AMD admits their OpenCL support in Linux is sorely lacking), and the systems that I develop for are using CUDA as well; it is a more mature option than OpenCL at the moment. Third, the 7970’s memory is a limitation when dealing with a large data set; the Titan offers a lot more room without incurring the PCIe bus transfer penalty. Fourth, if you have a CUDA version and an OpenCL version of the same application, you will usually find that the CUDA version puts up much better numbers than the OpenCL version without having to optimize the hell out of it for one particular device.

          Even with OpenCL, it is extremely important to optimize for a particular device. Optimizations can make a world of difference in terms of performance. If I have to start tweaking OpenCL code to get it to run nicely on a particular device, then I might as well go the CUDA route instead, which does not require as much tweaking.

            • ptsant
            • 6 years ago

            You raise good points and I understand your point of view. Although CUDA is probably more mature at this point, within reasonable practical limits, I personally always prefer the “open” solution instead of a vendor specific API. It is more futureproof and better for the consumer in the long term.

            • Deanjo
            • 6 years ago

            Open is nice; however, if it adds more to the development and maintenance then it really doesn’t have any advantage from a developer’s POV. I’d rather be providing one optimized solution and adding more features to it rather than having to constantly go back and tweak the code for many different devices to get decent performance out of all of them. Right now the vast majority of GPGPU work in the academic and research fields is done on CUDA/Nvidia, so worrying about the very small percentage that run other devices isn’t a concern of mine and is, quite frankly, a waste of time for me.

            Keep in mind that being open does not by any means future-proof it. There are plenty of devices that have been dropped from various parts of Linux, such as video card and add-in card support. Despite being open, nobody is around to maintain it nor cares to maintain it.

      • tipoo
      • 6 years ago

      Titan was like a consumer-grade compute card. Anyone who got it purely for gaming had more money than sense anyways. The 780 may perform close, but DP is still 1/24th.

      The next Nvidia card up with that kind of DP support is 2400 dollars. So I don’t think Titan was such a rip for people who knew what it did well.

      • Krogoth
      • 6 years ago

      Welcome to buying high-end GPUs 101.

      There’s always a GPU around the corner that blows away its predecessor in the price/performance ratio.

        • End User
        • 6 years ago

        It only took 3 months! Ouch.

          • Krogoth
          • 6 years ago

          This happened a number of times over the past decade.

      • HisDivineOrder
      • 6 years ago

      Just like when nVidia released the 680, then a month or so later released the 670 and a lot of people who bought a 680 felt like they’d had $100 just STOLEN right out of their wallet.

      Stealthy-like.

        • Airmantharp
        • 6 years ago

        I bought a GTX670 on release day, but I’ve been doing this for a while…

      • beck2448
      • 6 years ago

      From TechPowerUp: “Compared to the GTX 780 reference design, the improvement is 9% on average and 13%(!) at 2560×1600. AMD’s fastest single-GPU card, the HD 7970 GHz, is 24% slower—no danger at all.”

      The EVGA SC looks really good and super quiet.

    • brute
    • 6 years ago

    how do they justify charging so much for plastic and sand and metal? i can get all that at the beach

      • MFergus
      • 6 years ago

      Maybe you could try making your own gpu with that sand.

        • brute
        • 6 years ago

        wait for Brute Beachworks Graphics Co. to release its ipo. i bet there’ll be enough to buy a 30 pack of high life!

      • derFunkenstein
      • 6 years ago

      It’s not the materials, it’s the way they’re configured.

      • paulWTAMU
      • 6 years ago

      I can get wood and mortar pretty easily, but I still paid for my house 😛 And a hell of a lot more than 600 dollars.

    • redwood36
    • 6 years ago

    Great review as usual! Don’t go anywhere else.
    However, I gotta say: this isn’t nearly as exciting as the 680 was last year. The performance benefits seem really minimal.

    • Cataclysm_ZA
    • 6 years ago

    Am I… am I reading this correctly? The Radeon HD7970 is actually better value than the second-fastest single-GPU card on the planet? I…may need to sit down for a minute. That was totally unexpected.

      • jihadjoe
      • 6 years ago

      I think the general rule is the faster a card is, the worse it becomes as a value proposition.
      All of the “best value” cards are mainstream, mid-range models.

    • kamikaziechameleon
    • 6 years ago

    Can we get a scatter plot that subtracts the value of the game bundles from the respective cards? That will give us a more realistic notion of the cost of the cards.
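
    Something like this quick sketch is all it would take; the prices, FPS figures, and bundle resale values below are placeholder guesses just to show the idea, not numbers from the review:

    # Illustrative only: plug in real street prices, bundle resale values, and
    # the review's average FPS to reproduce the adjusted scatter plot.
    cards = {
        #  name:        (price_usd, bundle_resale_usd, avg_fps)
        "GTX 780":      (650,  0, 74),
        "HD 7970 GHz":  (450, 35, 62),
        "GTX 680":      (460,  0, 60),
    }

    for name, (price, bundle, fps) in cards.items():
        effective_price = price - bundle   # subtract what the games fetch on eBay
        print(f"{name:12s}  ${effective_price:4d} effective  "
              f"{fps / effective_price * 100:.1f} FPS per $100")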

      • auxy
      • 6 years ago

      No, because some people (many?) don’t care about those games. Besides, you still have to -pay- for the rest of the card; it’s not as if the price is actually discounted.

        • rxc6
        • 6 years ago

        True. The bundle is still an advantage. If eBay is correct, you can make some money selling the games if you don’t care about them.

          • HisDivineOrder
          • 6 years ago

          But how would you determine the value of the bundle when the MSRP stated value is not at all representative of the actual value to either the gamer OR the ebayer? Just because the person might enjoy the game doesn’t mean they’d have paid MSRP for them instead of waiting for a sale (or buying them on ebay for that matter).

          And just because one guy sells his AC3 for $20, another might sell his for $9.

          It’s better to think of the bundle as a fringe benefit–gravy–and not count on it being a specific “amount” of value since the values fluctuate a lot.

            • kamikaziechameleon
            • 6 years ago

            Well, I think a consumer is willing to pay as much as is reasonable for a product they are a fan of.

            I use “sales” to buy games I wasn’t sold on; I don’t hold out and spend as little as possible on the games I value the most.

            The current bundle has what will likely be 2 game-of-the-year nominees on it and 2 other really good and fun games. I think that is high value; I would have bought them in their first month for their $50-60 apiece. That is value. You vote with your dollar, and if they are good games then they demand a higher perceived value. They didn’t bundle turds on there, fellas. Those are not only great PC games but some of the best games in the last 12 months on any platform.

            That being said, I see your point and I should have prefaced my entire rant here with: opinions and value are subjective. I just perceive that for the majority of people there is great value in that bundle. Just like for the majority of people, Nvidia is just too darn expensive. I used to buy only Nvidia for good reason; regardless of price, the value was there. Now that I don’t do SolidWorks or Photoshop at home anymore, I can be content with cheaper AMD cards that support more monitors for less and offer great game bundles.

            • auxy
            • 6 years ago

            [quote="kamikaziechameleon"]The current bundle has what will likely be 2 game-of-the-year nominees on it and 2 other really good and fun games. I think that is high value; I would have bought them in their first month for their $50-60 apiece. That is value. You vote with your dollar, and if they are good games then they demand a higher perceived value. They didn’t bundle turds on there, fellas. Those are not only great PC games but some of the best games in the last 12 months on any platform.[/quote]

            I wouldn’t buy any of these games at any price higher than $20 or so. So yes, it’s all subjective. ヾ(*´ー`)ノ

      • derFunkenstein
      • 6 years ago

      I don’t care about any of the games that come with any of the cards. I don’t care about the bundle with AMD, and I don’t care about World of Tanks or whatever garbage is being bundled with nVidia.

      • D@ Br@b($)!
      • 6 years ago

      Yeah, and while you’re at it, subtract the value of the power and video adapters, and don’t forget the poster (EVGA GeForce GTX 780 Superclocked ACX 3GB).

    • kamikaziechameleon
    • 6 years ago

    Here is the thing… unless you do solid modeling on your machine, there is no value in a GeForce card these days. AMD is cheaper to start by a WIDE margin in all price/performance categories, and they bundle between 100 and 200 dollars’ worth of games with their GPUs, making the 7950 a sub-$100 card if you think about it.

    Nvidia gets no attention from me. I’ll put up with all the AMD quirks that are out there based on the fact that I barely pay for the GPU I’m getting; it might as well be an integrated GPU for the price I’m paying, but wait, it plays all the games they bundle at 1080p perfectly! So way better than an integrated solution.

    I understand that Nvidia has made the superior product here, that is clear, but AMD is clearly offering about 10 times the value.

      • kamikaziechameleon
      • 6 years ago

      Why the thumbs down? Don’t you play games on your GPU?

      • neahcrow
      • 6 years ago

      Exactly. I think there are others like me who need their computer for more than high-performance gaming. I need serious video editing power for Premiere and Resolve among other applications, so it will always be NVIDIA for me. I’m trying to wait for the next generation of GPUs to upgrade, but this does have my attention, as I was even considering the Titan.

        • kamikaziechameleon
        • 6 years ago

        That used to be me, but as I NEVER work at home anymore, I’ll save hundreds of dollars and get an AMD. If you aren’t using your machine for professional graphics, then go AMD. If the games are good (and generally they are), you are practically getting a free GPU.

        • shalmon
        • 6 years ago

        [url]http://www.amd.com/us/press-releases/Pages/amd-and-adobe-2013apr5.aspx[/url]

        Not that it is "right now", but at least soon enough you'll have options.

          • kamikaziechameleon
          • 6 years ago

          Next they will offer better support for modelers and drafting… hopefully.

      • maxxcool
      • 6 years ago

      Not those games. No interest.

      • albundy
      • 6 years ago

      So true. Another issue is the life of the card. You can buy a 100-200 dollar card now and it will last you quite a bit. Many expect their $500+ card to last even longer, except for the fact that it will lack any improvements/upgrades that a newer 100-200 dollar card can provide in the near future.

        • kamikaziechameleon
        • 6 years ago

        Buy a 500 dollar card for 3 years, or a 200-300 dollar card and replace it every year and a half to 2 years. I vote for option 2; that way you get the most current technology. A 300 dollar card is typically the same tech as a bleeding-edge card from the same manufacturer, just turned down a notch.

      • auxy
      • 6 years ago

      Well, I thumbed you down because I don’t see bundled products as value. I don’t see them any differently than a mail-in rebate; I do them, and I get them, but I don’t consider them at all as part of my purchasing criteria. I still have to pay the full up-front price. ┐(‘~`;)┌

      As you mentioned in your other reply to the other post (that you made? Should just make longer posts. Or maybe that’s what keeps getting me and Bensam in trouble…), “value” is highly subjective, and the lack of a proper tweaking tool like Nvidia Inspector for Radeons (RadeonPro does not count, buggy garbage) means I can’t really even consider an AMD graphics card.

      What exactly was the point of your post, anyway? Just trolling for pro-Nvidia responses? Or looking for validation for your Radeon? ( ̄ヮ ̄)

    • tipoo
    • 6 years ago

    If a slightly cut down Titan is their solution for the higher end 700 series card, I wonder what else the series will be like? Will everything just plop down a price category, the 680 in the 670s price point, etc? That would be uninteresting, but reasonable I guess, given how much power Kepler has on tap. And it wouldn’t do much for mobile.

      • jessterman21
      • 6 years ago

      I’m still praying for a 6 SMX 256-bit GTX 760 at $200. Odds are low, but that’s the card I really want.

        • jthh
        • 6 years ago

        Me wonders when the 770 and 760/760Ti will rear their heads. I gots a PC to build!

    • Arclight
    • 6 years ago

    Well that was unexpected.

    Anyways the card is pretty baller and the price tag is way more sane compared to the GTX Titan.

      • HTarlek
      • 6 years ago

      As chuckula (correctly) said, that’s Nvidia’s hope: that you compare the price of this against the Titan instead of the 7970+games bundle, because that’s the only way it looks good.

        • auxy
        • 6 years ago

        Games bundle is largely irrelevant if you don’t play those games. The codes go for peanuts on eBay.

          • rxc6
          • 6 years ago

          Last time I checked you get somewhere between $30 and $40 for the full bundle. Not the full price but good enough to cover some of the cost.

    • dpaus
    • 6 years ago

    [quote]Surely there are more changes in store soon, right?[/quote]

    Yeah, like a price drop...? Taking a look at the scatter plot, they have quite a value gap to close with AMD’s flagship, even without taking their game bundle into account (and that’s a significant value in itself).

      • Game_boy
      • 6 years ago

      The chip is far too large to price any lower. This is already a low bin.

        • derFunkenstein
        • 6 years ago

        Price drop for the 600 series would be nice, though. 15% across the board would be welcome and would stimulate purchases, I’m sure. I’m waiting for 660Ti speeds at or near $180-200 before I upgrade from my GTX 460. That’s a bit more than 15% though.

        edit: holy crap can I spell any worse?

          • cynan
          • 6 years ago

          I know the review stated that the GTX 680 was only about $10 more than the HD 7970, but where I am it’s a lot more like $50.

          Up here in Can-nee-da, I can routinely find the HD 7970 for around or just under $400, but you’d be hard pressed to find a GTX 680 for under $450. A similar deal is going on at Newegg.com now… More like 10-15% cheaper, not $10.

          So yes, the 600 series could well use a nice price drop. That is, if Nvidia is even interested in staying competitive with AMD – and I’m not sure they are on a price/performance level due to public perception (which, due to past driver issues on AMD’s part, is not completely unwarranted).

            • auxy
            • 6 years ago

            Don’t confuse the Ghz Edition for the base card, cynan. (⌒_⌒;)

            • cynan
            • 6 years ago

            Meh. The difference is moot. So what if the GHz (aka “marketing”) edition is priced slightly higher on average? It’s not like they’re different in any material way. I don’t know of a single “regular” edition that can’t be comfortably clocked to 1000-1050 MHz on the stock cooling solution.

            Then you have SKUs [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814127732]like this one[/url] or [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814150586]this one[/url] or [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814125413]this one[/url] that are not named "GHz edition" but have the exact same clocks (and guess how much they are going for right now...). People who pay more for the "GHz edition" label over one of these offerings are just throwing their money away.

            • dpaus
            • 6 years ago

            Just to contradict myself: why should Nvidia lower their prices when they continue to sell so many more cards than AMD despite what we see as a value gap?

            • cynan
            • 6 years ago

            Yup. This is what I was alluding to in the last part of my comment.

    • chuckula
    • 6 years ago

    Great review guys! I know that there are a bunch of new things out today (Kabini, this card) and new stuff coming soon (GTX 770, Haswell, Richland)!

    One comment about the card: Great performance, but is it worth it? Compared to the Titan, the price/performance ratio is good, but how about compared to more consumer oriented GPUs? From the early leaks it looks like the GTX 770 is just an overclocked 680.. but it should just about match the 7970 GHz edition and if the price is right, it might be the better card unless you are doing multiple monitor or 2560×1600+ resolution gaming.

      • entropy13
      • 6 years ago

      I’ll be using TechPowerUp’s aggregate graphs, because essentially they’re still the only ones that bench a lot of games and a lot of cards all at the same time…

      The price really means that it (the GTX 780) lags behind the likes of the HD 7970 GHz Edition in perf/dollar, but on performance [b]alone[/b] you see that it's closer to the Titan than to either the AMD card or the GTX 680. The perf/dollar graph obviously weights the two equally. But if, say, it's 60 for performance and 40 for price, then it's 'worth it'. If it's the other way around, then it's not.
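
      A tiny sketch of what that re-weighting would look like; the performance and price index numbers here are made up purely to illustrate the arithmetic, not taken from any review:

      # Hypothetical relative indices (higher = better).
      perf  = {"GTX 780": 0.93, "GTX Titan": 1.00, "HD 7970 GHz": 0.78, "GTX 680": 0.75}
      value = {"GTX 780": 1000 / 650, "GTX Titan": 1000 / 1000,
               "HD 7970 GHz": 1000 / 440, "GTX 680": 1000 / 460}   # dollars-only index

      def score(card, perf_weight=0.6):
          # Weighted blend: 60/40 favors performance, 40/60 favors price.
          max_value = max(value.values())
          return perf_weight * perf[card] + (1 - perf_weight) * value[card] / max_value

      for card in perf:
          print(f"{card:12s}  60/40: {score(card, 0.6):.2f}   40/60: {score(card, 0.4):.2f}")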

      • jdevers
      • 6 years ago

      Richland came out last month and was little more than a minor revision, maybe you mean Kaveri?

    • thanatos355
    • 6 years ago

    [quote]The GeForce GTX 780 should be available at online retailers starting today for $649.99.[/quote]

    Something something nVidia something something drunk. <_<

      • chuckula
      • 6 years ago

      Nvidia wants you to look at the price of the Titan first; then all of a sudden the 780 looks like a good deal*. 😛

      * No, seriously, it’s called a “framing effect,” a psychological trick for fooling you into paying too much. You see $650 next to $1000, and all of a sudden it seems “cheap”. The exact same $650 next to a cheaper price would seem more expensive.

        • thanatos355
        • 6 years ago

        Same reason they price products at “x.99”: “Look, it’s less than x dollars!” Yeeeeeeeeeeah.

        Your Jedi mind tricks have no effect on me! YOU HEAR THAT nVIDIA? <_<

        • USAFTW
        • 6 years ago

        But then people fail to see that the 7970 is clearly the $/perf leader right there. 200 bucks less than a 780; that’s another GTX 660 on top of a 7970! With your framing thingy theory, it’s a win for AMD.

          • jessterman21
          • 6 years ago

          At least until tomorrow…..

        • HisDivineOrder
        • 6 years ago

        That’s what I keep doing when I look at the Radeon 7970GHZ and the Geforce 780. Framing effect makes me think that the 7970GHZ seems like a steal, given the givens in this review.

        That said, I think I’ll wait for the entire 7xx series product stack to arrive and see where Radeon pricing ends up. Yeah, yeah, they have a bundle, but their sales could STILL take a nosedive and force them to make some (possibly unannounced) price adjustments.

        Too bad Radeons don’t come with the Titan cooler.

          • willmore
          • 6 years ago

          There are plenty of good third party cards with better/different coolers. Not everyone ships a reference board.

            • Airmantharp
            • 6 years ago

            …but none of them have good [i][b]blowers[/b][/i].

        • Bensam123
        • 6 years ago

        I’m glad you’re expanding your bare-bones psychology knowledge, but given the example you used, this would be door-in-the-face.

        [url]http://en.wikipedia.org/wiki/Door-in-the-face_technique[/url]

        Starting with a large request to make a smaller one look more reasonable. This is basic consumer psychology.
        [url]http://en.wikipedia.org/wiki/Compliance_(psychology)[/url]

        Framing is weighing pros and cons and making the cons look better with the pros. It's all about wording and context... or how it's framed.
        [url]http://en.wikipedia.org/wiki/Framing_effect_(psychology)[/url]

          • chuckula
          • 6 years ago

          Lol… funny that you mention Bones; I ran stage crew in high school when Michaela Conlin (Angela) was in Bye Bye Birdie & Mame.

            • Bensam123
            • 6 years ago

            Curious how you’re completely avoiding the issue of being wrong in this case…

            Having +17 votes while saying something that isn’t correct would be rather tragic, wouldn’t it? Look at all those plus votes on people mimicking your information too…

            If I had to reach into my bag of complicated psychology terms, just for you… this may be classified as deflection. Or, in layman’s terms… changing the subject.

            • peartart
            • 6 years ago

            Door-in-the-face isn’t really an accurate description, since the only people who will necessarily consider the price of the 780 after the price of the Titan are people who follow graphics card releases as they happen, i.e., not many people.

            • Bensam123
            • 6 years ago

            It was considered in the context Chuck gave; that’s why it wasn’t framing. Framing doesn’t involve a comparison between two different things; rather, it just rewords the same thing.

            [quote]Nvidia wants you to look at the price of the Titan first; then all of a sudden the 780 looks like a good deal*. :-P[/quote]

    • StuG
    • 6 years ago

    Woot GTX780 review on my birthday! Too bad it’s only the review and not one of those though! 😛

      • ClickClick5
      • 6 years ago

      Take and receive what you can. This review was quite unexpected!
