GeForce GTX 980 cards from Gigabyte and Zotac reviewed

The GeForce GTX 980 is the new king of the hill among single-GPU graphics cards, and with nifty features like DSR, it looks like an awfully tempting purchase.

If you’re feeling this particular temptation, there’s probably one question on your mind: which one should I get?

The first GTX 980 cards to hit the market were based on Nvidia’s reference design, with that familiar aluminum cooling shroud and blower. Demand for these cards is high, and supplies are tight. Now, however, a number of custom-designed GTX 980 cards are becoming available. Not only are they potentially more abundant, but they also promise various upgrades over the reference cards. Are they worthy of your attention? We’ve spent some time with a couple of slick offerings from Gigabyte and Zotac in order to find out.

Zotac’s GeForce GTX 980 AMP! Omega

Pictured above is the GeForce GTX 980 AMP! Omega from the folks at Zotac. This hulking creation looks like some sort of heavy mechanized military unit. Here’s how it compares to the GTX 980 reference card:

Card                      | GPU base clock (MHz) | GPU boost clock (MHz) | GDDR5 transfer rate | Aux power ports | Length | Intro price
Reference GeForce GTX 980 | 1126                 | 1216                  | 7 GT/s              | 2 x 6-pin       | 10.5″  | $549
Zotac GTX 980 AMP! Omega  | 1203                 | 1304                  | 7 GT/s              | 2 x 8-pin       | 10.75″ | $579

The Omega is bigger and beefier than the vanilla GTX 980 reference design in almost every way. Its GPU clocks are higher, it takes in more juice via dual eight-pin aux power inputs, and its price is pumped up by 30 bucks, too. About the only thing that’s the same is its 4GB of GDDR5 memory, which is clocked at 7 GT/s, just like stock.

The most notable way that the Omega differs from the reference card, though, has gotta be its massive cooler. Zotac has a happy tradition of choosing exotic coolers for its aftermarket board designs, and this one fits the mold—or breaks it, I suppose, if the mold is conventionally sized. This thing will occupy three slots in the host PC and is 10.75″ long. Beyond that, it sticks up past the top of the PCIe slot cover by about 1.25″, enough that it could present clearance issues in older or smaller cases.

The oversized cooling shroud covers a pair of densely populated banks of heatsink fins fed by quad heatpipes. The twin cooling fans are positioned directly above those banks. That’s an awful lot of metal and gas to situate atop a GPU with a 165W power envelope (although I doubt the Omega really honors that limitation).

Despite the obvious excess, the Omega retains something of a stately look, at least around front. There aren’t any illuminated logos or other such bling. The only LEDs present are basic power indicators on the back of the card.

Also around back is one of the Omega’s most intriguing features: a USB port labeled “OC+”. Zotac includes a cable to plug into this port and into an internal nine-pin USB header on the host PC’s motherboard. Via this connection, the OC+ feature monitors some key variables, including the 12V line from the PCIe slot, the 12V line from the PCIe power connectors, GPU current draw, and memory voltage. Beyond monitoring, OC+ also allows control over the card’s memory voltage.

Although Nvidia already has built-in hooks for monitoring and tweaking various aspects of the GPU’s operation, OC+ makes an end-run around all of it. This monitoring capability is external to the GPU and relies on a separate chip and shunt resistors. Based on the device IDs shown in Device Manager, Zotac has apparently incorporated a Texas Instruments MSP430 USB microcontroller onto the board to drive OC+.
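
Because OC+ rides on a generic USB microcontroller rather than Nvidia's own monitoring hooks, the controller shows up to the host as an ordinary USB peripheral. Here's a minimal Python sketch of how one might confirm it enumerates; the Texas Instruments vendor ID is my assumption based on the Device Manager IDs mentioned above, not something Zotac documents.

```python
# Minimal sketch: enumerate USB devices and flag anything reporting
# Texas Instruments' vendor ID, which is where an MSP430-based controller
# like the one apparently behind Zotac's OC+ port would be expected to
# appear. The vendor ID (0x0451) is an assumption, not a value documented
# by Zotac. Requires pyusb (pip install pyusb).
import usb.core

TI_VENDOR_ID = 0x0451  # Texas Instruments' registered USB vendor ID

for dev in usb.core.find(find_all=True):
    tag = "  <-- possible OC+ microcontroller" if dev.idVendor == TI_VENDOR_ID else ""
    print(f"{dev.idVendor:04x}:{dev.idProduct:04x}{tag}")
```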

Eager to try out the OC+ monitoring capability, I connected the USB cable to my motherboard’s header, installed Zotac’s FireStorm tweaking utility from the included DVD, and was confronted with the interface you see above.

At this point, my feeble brain became confused. Pressing the “advance” button in the interface brought up the series of sliders you see above, but all of those options are available with pretty much any Maxwell or Kepler tweaking utility. The only monitoring I could find consisted of those two small graphs on the top left showing the GPU core and memory clocks. Most of the other buttons like “setting” and “info” proved fruitless. The “Quick Boost” icon was self-explanatory—likely a modest pre-baked overclocking profile—and I figured “Gamer” was probably a slightly more aggressive version of the same. OC+ was nowhere to be found.

Worse, neither Zotac’s website nor the included documentation offered any explanation of what OC+ actually does (beyond the words “OC Plus real-time performance intelligence . . . takes your graphics experience to the next level”) or how to access it. Hrm.

After consulting with Zotac’s friendly PR types, I was encouraged to press the “Gamer” button. Lo and behold, clicking “Gamer” brought up a new window called “S.S.P Chip Setting.” There’s no mention of OC+ anywhere, but the right info is present.

Once you find the right spot, OC+ does indeed tell you things you can’t know via Nvidia’s usual GPU monitoring hooks.

Oddly enough, though, the Omega doesn’t expose much control over those variables. The GPU Vcore setting appears to allow the user to raise the card’s peak GPU voltage by 0.02V, to 1.21V, but it’s fussy. The FireStorm app doesn’t always keep up with the GPU’s dynamic behavior under load, so the voltage you’re adjusting may not be the one the GPU is actually running at that moment. Causing a system crash with this slider is way too easy.

The memory voltage slider has two settings, “no change” and a +20 mV offset. That’s it.

My understanding is that you may have to pony up for Zotac’s GTX 980 AMP! Extreme edition, priced at $609, in order to get working voltage control.

The OC+ limitations chafe a bit, but the worst of it came when I tried to tweak the Omega using the regular controls in the “settings” window, like one would with any recent GeForce card. You can adjust the sliders to your heart’s content, but near as I can tell, none of them do anything at all. The Omega’s GPU clocks and memory speeds simply don’t change when you press “Apply.”

For the purposes of this review, I was able to overclock the Omega somewhat using a much older version of FireStorm that I grabbed from Zotac’s website. (The new version hasn’t yet been posted online.) This older utility has a simpler and frankly more logical interface, and it works reasonably well. That said, nothing I did in software allowed me to raise the Omega’s GPU voltage. That variable is evidently locked on this card—a curious choice by Zotac since even the reference design cards aren’t voltage-locked.

Gigabyte’s G1 Gaming GTX 980

Card                       | GPU base clock (MHz) | GPU boost clock (MHz) | GDDR5 transfer rate | Aux power ports | Length | Intro price
Reference GeForce GTX 980  | 1126                 | 1216                  | 7 GT/s              | 2 x 6-pin       | 10.5″  | $549
Gigabyte G1 Gaming GTX 980 | 1228                 | 1329                  | 7 GT/s              | 2 x 8-pin       | 11.75″ | $629

The G1 Gaming GTX 980’s default clock speeds are about 25MHz higher than the Omega’s, and the card has a steeper price of entry at $629. Beyond its clock speeds, the G1 Gaming has some virtues that might help justify that premium.

Virtue number one is that sexy-looking Windforce cooler with three separate fans. I swear, GPU cooling fans have entered the same territory occupied by disposable razor blade counts. Regardless, the G1 Gaming wears this look well.

That third fan may have contributed to Gigabyte’s main achievement here, which is fitting a tremendous amount of cooling power into a dual-slot width with a low profile. Gigabyte claims this Windforce 3X cooler is good for dissipating up to 600W. That’s bonkers. The only dimension in which the G1 Gaming is larger than Nvidia’s reference cooler is length, where it has an additional 1.25″. This is one of the longest video cards you’ll find, so I’d advise you to measure your case before ordering one.

Assuming it fits, the G1 Gaming should offer plenty of cooling capacity. The twin heatsinks beneath its cooling fans are pierced by five copper heatpipes each, and there’s barely a cubic millimeter of wasted space beneath that flat-black shroud. The G1 Gaming is dense and feels heavier in the hand than the Omega.

Gigabyte hasn’t neglected the bling factor, either. That Windforce cooler lights up like my two-year-old’s face when he’s destroying something expensive. The bright blue color is hard to capture entirely on camera. Despite what you see above, the LEDs give off a pretty intense shade of medium blue in person. Whether or not that color will match your PC’s chosen aesthetic is iffy, but it’s certainly distinctive.

The Gigabyte card carries its nifty aesthetic around back, where a metal shroud offers protection from accidental damage.

The G1 Gaming’s other distinctive virtue is indicated by the presence of a sixth video output, a DVI-D port. The card can support a total of four displays at once, but using a feature Gigabyte calls Flex Display technology, it auto-detects any connected displays and enables the appropriate combination of outputs.

I haven’t had a chance to connect four or five monitors in order to try every combination, but crucially, Gigabyte’s website indicates the G1 Gaming can drive two DVI displays combined with one DP and one HDMI simultaneously. With only a single DVI-I port, most GTX 980 cards can’t do that. If you have a couple of decent but older DVI-only monitors on hand, the G1 Gaming may be your best bet.

I’ll admit I was initially skeptical, but after using it, I’m impressed. Gigabyte’s OC Guru II tweaking utility has a clean, logical layout that exposes each of the key variables you might want to tweak in order to overclock a GTX 980. The only red mark on its record is the resolution indicator that says my display is running at a “60 MHz” refresh rate. If only!

Click the “more” button, and OC Guru pops open a monitoring window like the one above. Again, it hits all of the right notes. Although the info presented there isn’t as verbose as what you might see in EVGA’s Precision app or MSI’s Afterburner, the main variables you need for overclocking are present—and their values are plotted over time.

OC Guru even offers the ability to define a custom fan speed profile. Given how deadly effective the cooler on this card can be, I could see myself creating a less aggressive fan speed curve at some point. But I’m getting ahead of myself. Let’s look briefly at performance, and then we’ll see how effective the cooler is.
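
For readers who haven't played with one, a fan speed profile is just a mapping from GPU temperature to fan duty cycle, interpolated between a handful of points. Below is a purely illustrative Python sketch; the points are my own placeholders, not OC Guru's or Gigabyte's defaults.

```python
# Purely illustrative fan curve: (temperature in °C, fan duty in %)
# points with linear interpolation between them. Placeholder values only.
CURVE = [(40, 20), (60, 30), (70, 45), (80, 65), (90, 100)]

def fan_duty(temp_c: float) -> float:
    """Return a fan duty cycle (%) for a given GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_duty(75))  # -> 55.0
```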

Performance

We’re not going to spend a ton of time obsessing over how much an additional 100MHz or so will affect the performance of a GM204 GPU. We can get a quick assessment of the performance differences between these cards by using the built-in benchmark from Thief, which spits out a simple FPS average. If you want the full-on inside-the-second performance treatment, please go read my initial GeForce GTX 980 review. By the way, our test system config for this article was the same as the one we used for that review.

Out of the box, both of these cards offer a 5-10% performance increase over a stock-clocked GTX 980, depending on the scenario. The G1 Gaming is a bit faster than the Zotac AMP! Omega, thanks to its ~25MHz advantage in base and boost clock frequencies.
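
Those gains track with the clock deltas. Here's a quick back-of-the-envelope check using the boost clocks from the spec tables above; since memory speed is unchanged at 7 GT/s, the clock ratio is only a rough upper bound on frame-rate scaling.

```python
# Sanity check on the out-of-the-box gains using the boost clocks from
# the spec tables above. Clock ratio is only a rough upper bound, since
# memory speed is unchanged.
reference_boost = 1216  # MHz
for name, boost in [("Zotac AMP! Omega", 1304), ("Gigabyte G1 Gaming", 1329)]:
    gain = (boost / reference_boost - 1) * 100
    print(f"{name}: +{gain:.1f}% boost clock vs. reference")
# Zotac AMP! Omega: +7.2% boost clock vs. reference
# Gigabyte G1 Gaming: +9.3% boost clock vs. reference
```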

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

To get higher clock speeds and performance, these aftermarket cards require about 10% more power under load than a reference GTX 980. They’re still quite power efficient overall compared to cards like the GeForce GTX 780 Ti and the Radeon R9 290X.

Noise levels and GPU temperatures

I wouldn’t make too much of the results above since we’re flirting with the noise floor in Damage Labs when taking these readings. Still, the meter is not wrong; these two cards are exceptionally quiet at idle. They should be essentially inaudible in a normal home environment.

When the GPU is running a game, Zotac’s magnificently enormous cooler is among the quietest we’ve tested. The reference GTX 980’s blower isn’t bad, either, in part because it just doesn’t have all that much heat to dissipate.

The G1 Gaming isn’t noisy by any stretch, but it is one of the louder coolers in this group. Thing is, this is a pretty quiet group. Note that our Radeon R9 290X is an XFX card with a big aftermarket cooler. The R9 290, which has AMD’s reference blower, illustrates how much louder video card coolers can be.

There’s a reason the G1 Gaming’s cooler makes a little more noise than most. It’s keeping the GTX 980 GPU 13°C cooler than the Zotac Omega does—and 19°C cooler than the reference card. In fact, the reference GTX 980 butts up against its pre-defined temperature limit of 80°C and may be slowing down in order to avoid exceeding it. This result is why I said I might define a custom fan profile for the G1 Gaming. Gigabyte has clearly built in a ton of thermal headroom out of the box, more than is necessary unless you’re overclocking the card.

Speaking of which…

Overclocking

Overclocking Nvidia’s recent GPUs can be a complex affair. GPU clock speeds are controlled from moment to moment by Nvidia’s GPU Boost algorithm. A number of different variables can become the key factor that limits GPU clocks, including temperatures, GPU current draw, voltage, and base and boost clock speeds. Meanwhile, clock frequencies change dynamically in response to the present workload. Squeezing the most out of a GeForce card means monitoring all of these inputs and making sure they’re in range—all while testing for stability.

Fortunately, getting a little more out of one of these aftermarket cards doesn’t have to be too difficult. For example, Gigabyte’s choice of an aggressive cooling policy ensures GPU temperatures will almost never be the limiting factor in overclocking the G1 Gaming. You can pretty much forget about that variable. Also, Nvidia has dictated some power and voltage limits that will probably prevent you from ruining your shiny new GPU. With good cooling on tap, like both of these cards have, you can pretty much move the voltage and power limit sliders to the max without much risk—at least, that’s my sense of things. Just don’t sue me if you somehow release the magic smoke from your new GTX 980.

That said, my sense is that GTX 980 clock speeds are largely gated by voltage. My approach to overclocking was to max out the power and voltage sliders for each card, set the temperature target to 80°C, and ensure good cooling. From there, I raised GPU and memory clocks while running MSI’s Kombustor GPU burn-in utility and checking for three things:

  • Stability — Does it crash?
  • Visual artifacts — Do Kombustor’s images render correctly?
  • Delivered speeds — Does turning up the slider actually mean increased clock frequencies?
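
During that testing, it helps to watch the delivered numbers rather than trusting the sliders. Here's a minimal sketch of that kind of logging using Nvidia's NVML through the pynvml bindings; this is my own illustration rather than the tooling used for this review, and note that NVML reports clocks, temperatures, and board power, while GPU voltage itself has to come from a vendor tool such as Kombustor or Precision.

```python
# Minimal sketch: log the delivered core clock, GPU temperature, and board
# power once a second while a stress test runs. Uses Nvidia's NVML via the
# pynvml bindings (pip install nvidia-ml-py); assumes a single-GPU system.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(30):  # sample for ~30 seconds under load
        clock = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        power = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # mW -> W
        print(f"core: {clock} MHz  temp: {temp} °C  power: {power:.1f} W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```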

Since voltage is a major part of the equation and some overclocking utilities only expose voltage as an offset (for instance, +0.087V above stock), I had to establish a baseline for each card by monitoring it under load. Here’s where each one started.

Card                       | GPU base clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Kombustor GPU voltage (V) | Kombustor GPU clock (MHz)
Reference GeForce GTX 980  | 1126                 | 1216                  | 7010               | 1.118                     | 1177
Zotac GTX 980 AMP! Omega   | 1203                 | 1304                  | 7044               | 1.212                     | 1328
Gigabyte G1 Gaming GTX 980 | 1228                 | 1329                  | 7010               | 1.250                     | 1367

The aftermarket cards are overvolted out of the gate compared to the reference board. Here are the best speeds at the highest voltages I was able to coax out of each card.

Card                          | GPU base clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Kombustor GPU voltage (V) | Kombustor GPU clock (MHz)
Reference GeForce GTX 980 OC  | 1337                 | 1426                  | 7960               | 1.243                     | 1475
Zotac GTX 980 AMP! Omega OC   | 1352                 | -                     | 8000               | 1.212                     | 1478
Gigabyte G1 Gaming GTX 980 OC | 1368                 | 1469                  | 8010               | 1.250                     | 1520

Although both aftermarket cards have voltage sliders exposed in software, tweaking them didn’t actually raise the max voltage we observed under load. (The G1 Gaming’s minimum voltage slider does work, at least.) I can kind of forgive that fact in the case of the G1 Gaming, but the Omega’s voltage is locked at a lower level than the reference board’s! Yikes.

Incidentally, I overclocked the reference card using OC Guru, which worked nicely. Unlike the aftermarket cards, the reference board’s fan policy required some tweaking in order to keep temperature overruns from limiting its clock speed. Using a manual fan policy in OC Guru solved this problem, but it came at the cost of increased blower noise. I probably could have tuned the fan curve manually for a more optimal combination of noise and cooling, but there’s no question the aftermarket coolers are quieter and more effective overall.

All three of the cards are happy with memory clocks at about 8 GT/s. Thanks to the highest GPU voltage and killer cooling, the G1 Gaming maintains the fastest GPU clocks under load.

Here’s how the three cards perform while overclocked.

There’s just not much daylight between the overclocked reference card and the two aftermarket offerings. The aftermarket cards do have better, more capable cooling hardware. They are quieter, and their GPUs never even approach the 80°C thermal limit. But they don’t offer much more performance potential than a vanilla GTX 980, at the end of the day.

Conclusions

The tale of the Zotac GTX 980 AMP! Omega is one of hardware and then everything else. Zotac has nailed the basic hardware formula on the Omega by slapping a hulking and potent triple-slot cooler on top of a GeForce GTX 980. The result is a product that’s both faster and quieter by default than a GTX 980 reference card. It’s hard to argue with that proposition.

As for everything else, well, that’s where things get shaky. Maybe it was just some quirk of my system, but I couldn’t get Zotac’s included FireStorm utility to modify the Omega’s clock speeds or other parameters at all. It just didn’t work. The OC+ monitoring feature is a nice idea, but the lack of documentation and poorly executed user interface dampen my enthusiasm for it. Why go to the trouble of adding this sort of custom hardware if you’re not going to develop the appropriate software and documentation to take advantage of it?

Then there’s the fact that the Omega doesn’t offer any more overclocking headroom than a reference GTX 980, despite the enormous cooler, the dual eight-pin power input requirement, and the giant “OC+” label on the side of the card. The trappings are there, but the Omega just doesn’t deliver on its apparent promise.

The Omega’s redeeming quality is a fairly modest price premium of 30 bucks over reference cards—and being in stock right now at Newegg. If you have the room in your system to accommodate this monster and the twin eight-pin power plugs to feed it, the Omega isn’t a bad choice. Just be aware that you’re paying more for two things—somewhat higher base clocks and a big, quiet cooler—and not much else.

On the other hand, I’m favorably impressed by Gigabyte’s G1 Gaming GTX 980 in spite of the fact that it costs $629 and doesn’t have tremendously more clock speed headroom than the reference design. Gigabyte made a bunch of smart decisions while designing this card and its associated software. In my book, it’s a cut above the Zotac—and the reference design. That sleek Windforce cooler looks great, works very well, and doesn’t intrude into a third slot. Gigabyte’s OC Guru software has a logical layout, decent monitoring, and makes GTX 980 overclocking relatively painless.

The G1 Gaming design team even nailed the little touches, like rotating the orientation of the aux power plugs by 180° to allow more room for the heatsink fins. The inclusion of two DVI outputs and Flex Display tech may justify the price premium all by itself, for the right user. And, you know, it has glowy lights.

This isn’t entirely rational, but I simply like the G1 Gaming because it’s a slick, well-executed product. You may have to work with one yourself in order to understand, but I think this sort of attachment is what a premium bit of hardware is supposed to invite.

In the end, I think Gigabyte has done enough with the G1 Gaming to earn the distinction of a TR Recommended award. Prospective buyers should keep in mind that this card requires 11.75″ of lengthwise clearance inside of a case. Also, realize that this is very much a premium product. If you want sheer value, look to the GeForce GTX 970 instead. (Gigabyte even offers a G1 Gaming version of the 970.) That said, the G1 Gaming GTX 980 is one of the finest single-GPU video cards on the market. If that’s what you’re after, it will not disappoint.

Unfortunately, the G1 Gaming is out of stock at Amazon and not yet listed at Newegg as I write, so getting your hands on one may require some patience.


Comments closed
    • Nemesis
    • 5 years ago

    Anyone able to get me a really accurate measurement of the card length of the Gigabyte G1 Gaming GTX 980 from the back (internal to the case) side of the PCI plate to the longest part of the card?

    I'm not sure how accurate the measurements I've found around the web are, or whether they include the bent part of the bracket, and I need to check whether it will fit in my case. Sadly, it really might be “that” close, where a few mm might make all the difference.

    Thanks in advance 🙂

    • dashbarron
    • 5 years ago

    Unless I don’t understand it correctly, the winner here is the “flex display technology.” Nothing is more annoying than crawling around fiddling with monitor cables until the right combination of cable + monitor + display + maybe multiple GPUs has been achieved. One of those annoyances about tinkering with one’s own computer.

    • anotherengineer
    • 5 years ago

    I think this is a big reason for Maxwell’s power consumption (see clocks and voltages at bottom)
    [url<]http://www.techpowerup.com/reviews/MSI/GTX_980_Gaming/29.html[/url<] Variable voltage and speeds at 3d loads. Apparently AMD does not seem to implement this at the vBIOS level yet [url<]http://www.techpowerup.com/reviews/Sapphire/R9_285_Dual-X_OC/29.html[/url<] and starts to ramp the clocks and the voltages more than Nvidia for all other loads. Seems pretty straight forward, I wonder what the Radeon power draw would be if it implemented the same vBIOS instructions and algorithms (frequencies/voltages)? I mean they are on the same node at TSMC so power draw should be able to be similar??

      • anotherengineer
      • 5 years ago

      So anyone want to try to edit their vBIOS to try???? 🙂

    • itachi
    • 5 years ago

    Nice to see Techreport do some aftermarket cooler GPU reviews !

    Just one thing I believe you forgot in the overclocking part: sure, the reference card can be overclocked just as much, but what about the noise levels? Not nice having an F-18 in your room.

    Also, it's always nice to include some 1080p benchmarks for us poor peasants who can’t afford a 1440p display, lol.

    Although I wasn’t much interested in 1440p initially due to the IPS “more ghosting” nature, I heard that they're going to bring some 144Hz IPS-type panels to the market soon, so that’s exciting..

    • kamikaziechameleon
    • 5 years ago

    I’m considering a 970 in my near future.

    • Ph.D
    • 5 years ago

    I’d like to see some of these non-reference model manufacturers slap a bunch of extra VRAM on the 970/980.

      • vargis14
      • 5 years ago

      I too see no reason they cannot put 8GB of VRAM on these cards. There is plenty of room on the boards.

      I expect to see 8GB, even 6GB 980s in three months or less, that’s my guess.

        • kamikaziechameleon
        • 5 years ago

        I wonder if there is some sort of pipeline limitation for utilizing that much memory…

    • internetsandman
    • 5 years ago

    I’m reminded of your ITX gaming build from a while back using a Silverstone SG07 and the windforce cooler GPU. Now that Gigabyte has improved the plastic shroud I’m even more interested in using this card, I really wanna go ITX for my next system

    • Prestige Worldwide
    • 5 years ago

    Hi Scott, great article as always.

    I was wondering if you’d be interested to do a similar article with various non-reference 970 models and overclocking? You already have the Asus 970 Strix and it would be great to see the detailed TR cross-examination between other OC models like the MSI 4G Gaming and Gigabyte G1.

    Also, I’ve heard some complaints of the Asus 970 Strix being voltage locked on some customer reviews, can you shed some light on the situation given your experience with the card?

    Cheers!

      • Ninjitsu
      • 5 years ago

      Yes, and 1080p please!

        • syndicatedragon
        • 5 years ago

        Please test 1080p with some DSR settings too. DSR sounds great in theory for these cards paired with a 1080p monitor, but I don’t want to slip below 60fps either.

          • derFunkenstein
          • 5 years ago

          DSR is the only reason I’d consider one of these new big Maxwells – otherwise I’m wasting a lot of capabilities on my 1080p display.

            • Prestige Worldwide
            • 5 years ago

            A 970 is a good choice for 1080p / 120Hz displays; it should hit 120fps in most games, and it should be able to do over 100 with MSAA off in BF4.

            I’m also looking forward to DSR to clean up some games that have trash AA support like Mafia 2. Not having to fudge around in nvidia inspector to force AA will be nice…

            • derFunkenstein
            • 5 years ago

            I don’t have a display that can do 120Hz – it’s an IPS display and I haven’t figured out how to drive it to higher refresh rates. It defaults to 59Hz and if you plug in an HDMI switch, it gets weird red speckles like stuck pixels on dark objects. I think it might actually be defective but it works well enough for now. It’s a Dell S2240L 21.5″ IPS.

            So I might as well improve image quality, and DSR is interesting for that.

            • Ninjitsu
            • 5 years ago

            Most benchmarks I’ve seen for the 970 and 980 indicate that they’re just enough to hold 60 fps with 4xMSAA and max detail settings for 1080p.

            • brothergc
            • 5 years ago

            Yes I agree 🙂
            I got the Gigabyte 970 G1, which replaces two GTX 660s in SLI, at lower power. It's nice to have a single card again and free up some motherboard space.

    • anotherengineer
    • 5 years ago

    Out of stock at newegg.ca also

    [url<]http://www.newegg.ca/Product/Product.aspx?Item=N82E16814125682[/url<]

    $710 + $15 + 13% tax = $819.25

    A bit over the budget!!

    • sschaem
    • 5 years ago

    No price performance plot this time?

    Now that the 290X is about $200 cheaper than the 980, and the 290X is faster than the 980 in a few of the latest game releases, the plot would look interesting.

      • dragontamer5788
      • 5 years ago

      I didn’t downvote btw.

      [quote<]Now that the 290X is about $200 cheaper than the 980, and the 290X is faster than the 980 in a few of the latest game releases, the plot would look interesting.[/quote<]

      But to be fair, there is a price shakeup going on right now. A ton of GPUs are changing prices, so maybe it's best to see what happens to the market before making a new graph.

        • sschaem
        • 5 years ago

        Well, it would be the graph at the time of the review.

        But yes, a live graph would be even better. Newegg is usually the reference in the US, so it's not very time-consuming to update even once a week.

        Now the part that could be contested is the performance axis.

        The 290X seems to even beat the GTX 980 in some of the newer releases, so it would be easy to get a plot that's biased, one way or another.

        Also, resolution affects relative performance.

        And to help settle so many arguments, it would be great to see a ‘cost of ownership’ graph that shows the yearly cost to use the card.
        Like 8h a day idle + 2h gaming (740 hours of gaming a year).

        Might be eye opening… but I don't think the difference is bigger than 50 kWh a year,
        or about $5 to $7 in electricity (for US citizens).

    • Sabresiberian
    • 5 years ago

    I don’t know, maybe it’s just me, but for a 15% price increase I want a 15% performance increase BEFORE I try to overclock. Across the board. In light of Radeon 290Xs going for under $400, GTX 980s are too expensive already; stacking $80 on top of that is just, well, insult added to injury.

    And putting a triple-wide cooler on a card that runs as cool as the 980 makes no sense either.

    The 980s are a “pass” for me anyway. Lots of promise in the Maxwell series, but the card I really want hasn’t been brought out yet. Looking forward to AMD’s next release – THEN maybe Nvidia will see fit to bring out the big Maxwell guns. Seriously, I’m getting tired of this “trickle it out” marketing crap – get on with it already! I’ve got 2560×1440 monitors I want to run at 120Hz!

    • BoBzeBuilder
    • 5 years ago

    I can’t believe my GTX 780 suddenly looks puny. Getting the itch.

      • Jigar
      • 5 years ago

      Puny? You should meet my HD 7970; it's outright embarrassing compared to the mighty GTX 980.

      • l33t-g4m3r
      • 5 years ago

      I’m not. Especially gaming @ 1080.

      • flip-mode
      • 5 years ago

      The longer you wait, the lesser the itch.

    • vargis14
    • 5 years ago

    You guys need to start showing some skin!!!!! Seriously, I would really like to see these cards naked; a good number of sites take off the clothes (heatsinks and backplates) so we can get a good look at the board layouts, VRM design, and cooler design.

    I know you can do it and have a nice tool kit in the labs. As always, nice review, but I think some naked video cards would be awesome.

    • Chrispy_
    • 5 years ago

    Two main questions:

    1) DSR is underwhelming the higher your screen resolution gets, and you’d only consider buying one of these cards if you have decent screen(s). What makes DSR so nifty?

    2) If the GM204 is so power efficient, why do these cards need either triple fans, triple slots, or both? We’ve seen [i<]quietly[/i<] cooled cards over the last two years that use way more power, yet don't need triple-slot, triple-fan cooling solutions.

      • auxy
      • 5 years ago

      1) If you don’t get why DSR is amazing already after the other article, you probably aren’t going to. You seem to be one of those people with the opinion that 1920×1080 is “low resolution” and are really focused on 2560x and higher. MOST PEOPLE have 1080p monitors, many of which offer features not available on higher-resolution displays, and MOST PEOPLE don’t desire to upgrade from that, even your enthusiast gamers. There are a lot of reasons why not. I know this probably blows your mind and you’re shaking your head going “gosh but why I just can’t understand it” but it’s the truth.

      2) They don’t NEED triple fans, triple slots, or both, but the more surface area your cooler takes up, the less noise it has to make with airflow.

        • Chrispy_
        • 5 years ago

        1) See my reply to Scott below

        2) Point taken. I guess there will be cheaper, compact cards released at some point in the future for people who want to squeeze GM204 into a smaller case.

        • flip-mode
        • 5 years ago

        You criticize the DVD drive but suddenly you’re justifying something on the basis that MOST PEOPLE have it. How quaint.

      • Damage
      • 5 years ago

      1) My take on DSR is here:

      [url<]https://techreport.com/review/27102/maxwell-dynamic-super-resolution-explored[/url<]

      2) Two answers, really. One, these cards don't need coolers this big, really. Gigabyte's claimed 600W is bonkers. I'm on record about that. These coolers are overkill, to some extent, and I'd probably take advantage of that fact by tuning them for quiet operation. Also on record about that. 🙂

      Two, GM204 is a very power-efficient GPU, but when you're overclocking, you're pushing toward the hairy end of the frequency-voltage curve, where the voltage squared term in the power use equation can really spike upward. Any chip will be less efficient when you're pushing like that. Doesn't mean it's not an efficient GPU.
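
      (Editor's note: a rough illustration of that voltage-squared effect, using the reference card's Kombustor readings from the overclocking tables above and the usual CMOS dynamic-power approximation P ≈ f × V². This is a ballpark sketch of the trend, not a measurement.)

      ```python
      # Ballpark sketch: relative dynamic power for the reference GTX 980,
      # stock vs. overclocked, using P ~ f * V^2 and the Kombustor readings
      # from the overclocking tables. Illustrative only.
      stock_clock, stock_volt = 1177, 1.118   # MHz, V (stock, under Kombustor)
      oc_clock, oc_volt = 1475, 1.243         # MHz, V (overclocked)

      scaling = (oc_clock / stock_clock) * (oc_volt / stock_volt) ** 2
      print(f"~{(scaling - 1) * 100:.0f}% more dynamic power "
            f"for ~{(oc_clock / stock_clock - 1) * 100:.0f}% more clock")
      # -> ~55% more dynamic power for ~25% more clock
      ```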

        • Chrispy_
        • 5 years ago

        I liked your DSR article last week - and I came away from reading it thinking that DSR is a 30% performance hit for minimal extra image quality. The new thing it brings to the table is the improved fine-geometry detail, but even then the geometry (taking your windmill example) still looks awful. It's [b<][i<]better[/i<][/b<] but it's nowhere near convincing.

        There's only one benchmark (Thief), and you supplement it by inferring that DSR is too heavy for Guild Wars 2, Tomb Raider, and Battlefield 4 [i<]even with a 980 SLI setup.[/i<] FXAA gets us most of the way towards DSR quality [i<]for free[/i<], and with you mentioning that DSR is extremely expensive and overly demanding on a bunch of 2013 titles, I don't see it being of much use going forwards.

        It's probably a lot to ask, but would you be willing to run or show a frame-time plot rather than an average fps result? My reasoning is that a 73fps average actually masks some seriously high frame times, enough that with DSR enabled there are moments when the game perhaps drops to unacceptable levels. My experience as I doubled the megapixel count of my displays was that the minimum framerates became proportionally lower than ever before - I guess I was running into memory bandwidth bottlenecks that I'd never encountered until 2560, meaning that my GPU performance nosedived rather than scaling as expected.

        As always, keep up the awesome work.

          • swaaye
          • 5 years ago

          FXAA and similar make nice screenshots but have obvious problems in motion. I find SMAA T2x to be pretty nice but it still has the same problems.

          DSR and the plain old upscaling trick are SSAA. DSR adds a blur filter as an optional extra which is a subjective improvement. SSAA most definitely looks better than the post-processing techniques because it improves the quality of every pixel on the screen and lacks the temporal problems. Of course SSAA is very expensive.

          It is unfortunate that few games implement a good SSAA function. There are a few games with rotated/jittered/sparse grid supersampling in the menu though. I imagine the reason it’s not popular is it is demanding and not useful on most hardware. This is the path to the best image quality though. Essentially simulating vastly higher pixel density in a smart way.

      • Ninjitsu
      • 5 years ago

      I partly* agree with you on DSR, it makes the image a bit too blurry and the perf hit is equivalent to SSAA, plus it can also mess up the UI, unlike SSAA.

      On top of that it only makes sense if you have too much performance headroom available, in which case I don’t see why SSAA isn’t better.

      Since driver based SSAA is still available on Fermi, maybe we could test and compare…

      EDIT:
      *didn’t quite agree with this part:
      [quote<]and you'd only consider buying one of these cards if you have decent screen(s).[/quote<]

      I'd consider a 970 the minimum I'd invest in for 1080p gaming, to maintain 60 fps almost all the time. Though I guess the disagreement here isn't over DSR...

      • Terra_Nocuus
      • 5 years ago

      I’m still rocking a pair of monitors at 1680×1050 (which I plan to rectify next year, hopefully), and running games at 2520×1575 via DSR brings a noticeable image quality improvement. The downside is the “un-embiggened” UI in certain games.

      If I were running a native 1440p+ display, I probably wouldn’t bother with DSR.

    • Meadows
    • 5 years ago

    Overclocking the GTX 980 gave 10-20% increased performance in your tests. (Closer to 20% in the case of 4K.) Claiming that such a setting “doesn’t offer much more performance potential” can be misleading. After all, you said it yourself that these cards, powerful as they are, still need all the help they can get for a good 4K experience.

    Overclocked power consumption is ugly though. The factory-OC cards used 30 W extra (+25-30% compared to stock) and I dread to think where efficiency dropped to using the serious overclock.

    I wonder how well a GTX 970 could do 4K.

      • Damage
      • 5 years ago

      I was comparing to the OC potential of the reference card. As demonstrated, the delta was not large.

    • beck2448
    • 5 years ago

    Gigabyte temps are outstanding!

    • terranup16
    • 5 years ago

    Be careful with just tossing voltage onto Maxwell cards to get the maximum OC. As I recall, [H] put up a detailed write-up on this, and my GTX 750 Ti bears it out well: these cards are generally TDP-limited.

    So the more voltage you toss on them, the faster they hit +10% TDP, and therefore the less overhead you have. Most of these cards will actually hit their max OC at stock voltage because, again, they're mostly power-limited, so the goal is to make it as hard as possible for them to hit nV's artificial barrier.

    Unless you want to run an article on what these things can do with BIOS mods raising the power limits 😉

      • exilon
      • 5 years ago

      The Gigabyte G1s have a raised TDP ceiling. I looked at the vBIOS and it was something like 250W at 100%. There’s no worries about TDP throttling on these cards.

      I’m trying to get my hands on the non-G1 to see if it has the same raised TDP limit, but no luck so far. From what I see, EVGA and ASUS have lower limits than MSI and Gigabyte. No idea about PNY or Zotac.

    • UberGerbil
    • 5 years ago

    What is under that extra-height bulge on the top of the Zotac card? Just heatpipes they couldn’t figure out how to route anywhere else?

      • MadManOriginal
      • 5 years ago

      That’s what I assumed when I saw it. Lots of cards have heatpipes that protrude above the PCB in that area, looks like Zotac just decided to put a shroud over them.

    • I.S.T.
    • 5 years ago

    Some of the math on Page 4’s load temps paragraph is off. The Gigabyte is 13 C cooler than the Zotac. Is that a typo or something?

      • Damage
      • 5 years ago

      Yep. Fixed!

      • twocsies
      • 5 years ago

      Yes, I think there’s something off, though whether it’s sloppiness I’m not sure. The noise level of the Asus Strix at idle was quite loud, but that makes no sense… This is the model where “In fact in idle or desktop mode, the fans won’t even spin until they reach 67 Degrees C.” Where is the noise level coming from?
