Poll: Will you be upgrading to Nvidia’s GeForce GTX 1080?

The first round of reviews is in, and it looks like the GeForce GTX 1080 really has taken the single-GPU graphics performance crown—at least for now. But there are still a lot of variables to consider when buying a shiny new card, and not everyone is looking for the biggest and fastest thing around. Although we haven't seen any hard numbers, the less-expensive GeForce GTX 1070 may have some serious price-to-performance promise for enthusiasts. We're hearing rumors that AMD's new Polaris GPU will rise above the horizon in the next few weeks, too. There's even the option to stick with what you have if a new card isn't in the, well, cards right now.

With all these new and upcoming options, the choice to upgrade to a GeForce GTX 1080 is largely a personal one right now, so we figured we'd put the question to you. Tell us what your thoughts are in the poll below. If you think we missed an option, or you have another plan, let us know in the comments.

Comments closed
    • Ninjitsu
    • 4 years ago

    So serious question:

    I’m seeing a lot of you complain about a lack of driver support for Kepler, except in most cases it seems that the argument is that “AMD’s slower cards are now faster”, and not “Kepler cards are slower than they themselves used to be”.

    In that case, how is it a performance regression? Could it not be possible that AMD cards just had shit drivers with lots of overhead, etc. (which iirc was a problem at one point) while Kepler has been performing at full potential for a long time?

    From what I remember, Kepler and GCN were already fairly close.

    And finally, I'm seeing stuff here about a 960 beating a 780, and frankly I can't find it. It seems to be on par with the 770, +/- a few percent depending on the game – and this is what I remember from when the 960 launched as well.

    [url<]http://www.anandtech.com/bench/product/1596?vs=1493[/url<]
    [url<]http://www.computerbase.de/2016-05/geforce-gtx-1080-test/7/#diagramm-the-division-1920-1080_2[/url<]

    p.s. When I bought my 560 the deciding factors were familiarity with the Nvidia control panel, overclocking the cards, good price, and good driver support. Since then Fermi got support for Shader Cache and even OpenCL 1.2. Game support has been great, even among indie titles.

    Yes, I see some weird artifacting occasionally while on the desktop, and Windows Media Player videos become choppy when minimized and then maximized again. Both things appear to have been introduced via a driver update quite a while ago. But the issues overall have been fairly minor. I'm more bugged about "clean installs" not being completely clean, and Nvidia collecting the setup files on the C: drive.

      • djayjp
      • 4 years ago

      Yes, a 960 doesn’t match a 780– even an overclocked 960 vs a stock 780. Against a 770 though? Sure.

      Is the artefacting a result of the OC maybe? I find that OCs tend to deteriorate slightly over time, necessitating a bit of a reduction in clocks. I use OCCT error checking, works great!

        • Ninjitsu
        • 4 years ago

        No, when in 2D mode the card is downclocked so definitely not because of overclocking.

          • djayjp
          • 4 years ago

          Hmm yeah you’re probably right, but I think it does occasionally clock up (maybe due to heavy use or hardware acceleration of web pages, video decoding, scrolling, etc) leading to a general instability. Try OCCT– I bet it’ll show errors.

            • Ninjitsu
            • 3 years ago

            Well, I never got around to doing that but my GPU died yesterday while playing a game with it overclocked. πŸ˜€

            • djayjp
            • 3 years ago

            Oh wow…sorry to hear that. Always sad when an early death strikes :s. Obviously under warranty I should hope? More importantly, was it overvolted? I’m running +100mV on mine and still wondering if that’s such a great idea ha

        • the
        • 4 years ago

        I remember a few cases where a GTX 960 did outrun a GTX 780 too, though I'd call them edge cases due to the game being inherently broken, like Assassin's Creed Unity at release. The problem was less about nVidia and more about a rushed-to-market console port.

        For the most part, the GTX 960 was on par with a GTX 770 when the developers weren't monkeys on typewriters or run by managers who never had a lobotomy.

          • djayjp
          • 3 years ago

          Lobotomy lol. Yeah sounds like a driver optimization (or lack thereof) issue there

          • DoomGuy64
          • 3 years ago

            Off the top of my head, TW3 and DOOM are two instances where the 960 outperforms the 780. Considering the hardware specs though, there shouldn't be [i<]any[/i<] cases of that happening.

            This technically isn't "performance regression"; it's more like a lack of optimization. Nvidia is not optimizing for Kepler in newer titles, and since Kepler's performance is dependent on driver optimization, Nvidia had no business putting Kepler optimization on the back burner. They shouldn't have slowed Kepler support, at least not until DX12 started to replace DX11.

            Dropping Kepler this soon only shows Nvidia doesn't care about supporting "legacy" cards, and none of their cards are safe to purchase if you care about long-term performance. I don't think current Maxwell users will find this trend amusing in the slightest, and I have a strong feeling it will continue. Trading profits for mindshare will eventually have repercussions.

            • Ninjitsu
            • 3 years ago

            Okay, well consider this. Fermi got OpenCL 1.2, performance in Luxmark shot up by some 50-70% for me, on a GTX 560.

            I also tested Cinebench a few weeks ago, and OpenGL scores were [i<]over 100% higher[/i<] than a few months ago. Granted, I was running a Core 2 Quad before and now I'm on a 4690K - but this isn't just removal of a CPU bottleneck, it's driver optimisation.

            Proof:
            [url<]http://imgur.com/TQyc7se[/url<]
            [url<]http://imgur.com/4capt0n[/url<]

            It's really starting to sound like a conspiracy theory more than anything.

            p.s. [url<]http://www.computerbase.de/2016-05/geforce-gtx-1080-test/7/#diagramm-the-witcher-3-1920-1080_2[/url<] ?

            • DoomGuy64
            • 3 years ago

            Fermi updates don't prove anything except that Fermi was a future-proof architecture. Kepler cut out Fermi's hardware schedulers and moved them to software, THUS relying more on driver optimization. Nvidia doesn't have to optimize per game with Fermi, because certain features are handled in hardware, whereas Kepler requires per-game optimization. It's probably a lot of work, so when Maxwell arrived Nvidia put those Kepler per-game optimizations on low priority.

            Either way, Fermi is no longer relevant simply because Nvidia chose to keep memory capacity to a bare minimum. You can’t play modern games, even if the GPU can handle them, simply because ~1GB of VRAM isn’t enough. 2GB is the bare minimum to play anything today.

            Fermi supporting OpenCL 1.2 is technically NOT optimization either. That’s only updating the software. Fermi already had that performance capability in hardware, and updating the software enabled the underlying performance to show up.

            It’s like how AMD released Mantle to show performance vs dx11. Mantle enabled the underlying performance that dx11 was holding back. Same exact situation here with Fermi’s OpenCL update. It’s enabling performance that the old API was holding back, and is not technically an optimization. Optimization is where you take existing features and tune them for performance.

            • djayjp
            • 3 years ago

            Dunno why you got the down vote (I corrected that injustice ;)) since you’re clearly right:

            [url<]http://kotaku.com/doom-pc-benchmarks-not-hellish-at-all-1776755400[/url<]

            I suppose ever since the Kepler generation, where Nvidia disabled the hardware that automatically optimized the code (can't remember the name), they've become far more reliant on proper hand-made driver optimization. So we're basically paying for that now... :/

      • jihadjoe
      • 4 years ago

      A lot of people don't realize that Tahiti in the 7970 was 21% larger than GK104. Nvidia's performance advantage at the start was mostly because AMD had very inefficient drivers that took a long time to get optimized.

    • ultima_trev
    • 4 years ago

    The >30% improvement over the 980 Ti is a good bit more than I was expecting. However, I'm not sure I can justify 600 bucks for a GPU when my current one handles my monitor's native 900p res just fine.

    • BoilerGamer
    • 4 years ago

    My single 1350MHz GTX 980 Ti on water (EVGA Hybrid) is pulling 45-60 FPS (no AA) in Armored Warfare at 4K Ultra, so I am quite content with it. Might consider a step-up if the cost isn't big, but most likely I will wait for a GTX 1080 Ti.

    • travbrad
    • 4 years ago

    I have a 970 so until I run into some games where it’s not fast enough I won’t be upgrading. Pascal and Polaris look like great GPUs but I just can’t justify them right now. If I had an older/slower card I’d definitely be upgrading to one or the other though. In any case I’d be going for something more in the 1070 price range rather than a “flagship” card.

    I am still holding out hope of a future Intel CPU with eDRAM as well so that some day I may finally upgrade from Sandy Bridge. A guy can dream anyway..

    • dikowexeyu
    • 4 years ago

    I feel great temptation, but:

    0- Long term driver support matters.

    1- I need to know the price of 1070.
    2- I need to know the performance of AMD alternatives.
    3- I need to consider the total price, including a G-Sync vs FreeSync monitor, knowing that I will be locked in long-term by that monitor choice.

    4- I stay with Windows 7, and will stay for a long time.

    • ForceEdge
    • 4 years ago

    Where's the poll option for: sick of overpriced midrange GPUs, can we go back to the big-die-first strategy, and at $500 too, like a decade or so ago until the last few years? feelsbadman.. I remember a time when new midrange GPUs beat old flagships AND were half the price or less. Some were even on the same node/half-node jump, whereas this new one-and-a-half-node jump and a smaller die costs more?!

    • deruberhanyok
    • 4 years ago

    The 1070 looks far tastier to me.

      • chuckula
      • 4 years ago

      I wouldn’t recommend eating GPUs from any of the major vendors.

        • Mad_Dane
        • 3 years ago

        Guess Matrox cards are safe for human consumption then 😛

    • cynan
    • 4 years ago

    Meh. GTX 1080 doesn’t quite cut it.

    But in all seriousness, as someone who generally upgrades in ~4-year cycles (9800xt in 2004, HD 4850 in 2008, HD 7970 in 2012), I’m definitely due for something a bit faster than my downright elderly HD 7970s. Thing is, my current monitor is 1600p, and though obviously there are compromises, the HD 7970s can still make this resolution playable for most newer games.

    While something like a 1080 or 980 Ti would be better at 1600p, I think I'm going to hold out for big Pascal (i.e., GP102) or Vega. The reason is that the GTX 1080 still doesn't promise a whole lot of future-proofing for 2160p, and I'll likely upgrade my monitor to 4K before 2020, and I sure won't feel like turning around and paying top dollar for another GPU.

    Sadly, this means waiting at least a whole year to avoid getting gouged by early adopter prices… Sigh.

    It also resonates with me that paying >$500 for a GPU is bordering on irresponsible. But at least if this only happens a couple of times a decade, it is mitigated somewhat.

      • Firestarter
      • 4 years ago

      4850 to 7970 was a huge jump, way bigger in my mind than from the 7970 to the 1080.

    • southrncomfortjm
    • 4 years ago

    As I have a G-Sync monitor, I’m all in with Nvidia. Whether I’ll be upgrading to a 1080 is a different story. Really depends on when we can expect consumer grade GTX cards with HBM2 on board and what those will likely cost. In the end though, there’s little denying that a non-Founder’s Edition card looks like a rather good deal given the performance gains, even at $600.

    • Prestige Worldwide
    • 4 years ago

    I would buy a GTX 1080 if the Canadian dollar was at par with the USD.

    But 600 US MSRP will likely mean 800 CAD which is pretty brutal for a GPU.

    I’m going to go cry into a bottle of maple syrup now…..

    My options are:
    – Burn $$$$ and get 1080 in a purchase I can only describe as irresponsible
    – Wait for 1070 benchmarks and see if it has adequate performance for what will likely be $500 CAD
    – Save a wad of cash and buy Polaris 10 for $300 and be content with midrange hardware until Vega and the 1080ti release and push prices down on the current high end. But that is hardly an upgrade from the 970 I sold in anticipation of the Pascal release (and also because I haven’t had any time to game for the last few months and wanted to sell my 970 while it still held a relatively high value in the market).

      • tipoo
      • 4 years ago

      Don’t forget lovely tax, that would be 920 in my province πŸ™

        • Prestige Worldwide
        • 4 years ago

        Since I’m in Montreal, if I order from NCIX or another out-of-province shop, I’ll only be charged 5% federal tax and no PST/HST.

        So it would be $840 (assuming free shipping), which I find is still very hard to justify for a GPU even though I can definitely afford it.

        I just feel like it costs too much for what it is, a midrange GPU with some higher-end memory. It’s the undisputed fastest single-GPU card on the market, but I can’t help but feel Vega and Titan “P” and 1080ti will blow it out of the water over the next year.

    • Kretschmer
    • 4 years ago

    No. I’d love to return to Nvidia’s drivers, but:
    +My 290X is still quite quick at 1440P.
    +Thanks to zombie G-Sync tech I can't do adaptive sync and Nvidia at the same time. Spending $700 on a monitor + $400 on a GPU would be crazy…

      • Chrispy_
      • 4 years ago

      How about buying a freesync monitor and no GPU, saving yourself $700 and allowing your 290X to keep shining until all the Pascal vs Polaris smoke blows over?

    • lilbuddhaman
    • 4 years ago

    My 770 4GB is still pushing 1920×1200 pretty well. Can't completely max out Witcher 3, but I can do all but Nightmare settings in Doom and High settings on the crazier-requirement titles like Arma 3/JC3.

    As I’ve always done, I’ll wait until there is a suitable card in the $250-$350 price range to upgrade to.

    • ronch
    • 4 years ago

    4th Option. My HD7770 plays the old titles I still play extremely well.

      • Fonbu
      • 4 years ago

        I have a 5770 in an HTPC and it still does the job for old games too. Even old emulated games.

        • ronch
        • 4 years ago

        I would think the bulk of work when playing console emulators is done by the CPU. But yes, I’d think even a midrange HD2000 series card will do just fine for old titles.

    • iatacs19
    • 4 years ago

    There will be a 1080Ti, right? I will wait for the GP100. πŸ˜‰

    • PrincipalSkinner
    • 4 years ago

    Getting Polaris 10. NGREEDIA can shove off.

      • chuckula
      • 4 years ago

      Considering what we know about Polaris 10, the results of the GTX-1080 review are irrelevant.

      You should be more concerned about the price points of the old R9-390 and 390X cards after Polaris 10 launches and also see what happens to the GTX-970 & 980 prices after the GTX-1070 hits the market at under $400.

    • ThatStupidCat
    • 4 years ago

    Oh heck no. This card costs more than my last build. But then, we get what we need. I don't need it for what I do, so it's not for me, but I can see gamers and graphics pros using it.

    • DeadOfKnight
    • 4 years ago

    Definitely going to jump on Big Pascal or Vega next year when I do my next build, but right now I’m waiting for Polaris and Mobile Pascal because I will be getting a new laptop this year.

    • Anovoca
    • 4 years ago

    The biggest hangup on the 3XX series and Fury for me was the lack of HDMI 2.0, as I wanted something to drive a 4K TV at 60Hz. I already know the 4xx series will have current display output support.
    The only thing I need to see now in order to upgrade is whether the price/perf is good enough to justify pulling the trigger (having already bought Maxwell), or whether to wait for HBM2. In all likelihood, I will wait, in hopes that by then the mid-tier products ($250-$400) will be in the 4K at 30-50fps ballpark.

    • BIF
    • 4 years ago

    I am thinking of ordering TWO 1080’s. But waiting for board partners to come out with their products first.

    And I'll need to figure out if I am going to have to build a new system. I prefer not to have to buy a CPU, motherboard, memory, [i<]AND[/i<] two GPUs all at once, yikes. I am hoping I can hold off for Broadwell-E while waiting for the board partners.

    We'll see what Polaris offers, but if AMD can't get its act together, I won't wait for them. AMD, with its terrible CPU/APU choices, no GPUs with CUDA or a viable alternative to it, and no sense of urgency for those of us who fold and render, has shown me just how little it values my compute business. And that's 85% of the reason I buy a GPU: for compute. My occasional games of "Civilization: Beyond Earth" don't need a GTX 980 or a 1080.

    Well, I do hope that the Civ people will figure out how to use GPU compute to help the game process all the math during late-game stages, especially on big maps with a lot of civs.

    AMD is increasingly less of an "underdog" competitor to Intel and Nvidia, and instead has become more and more of an enticement to just go straight to those manufacturers. If AMD wants my love, they need to start showing some.

    • I.S.T.
    • 4 years ago

    Too rich for my poor ass. I’m still stuck with a lowly GTX 660. Good in 2013, not so much in 2016. I have games that don’t run well at all even on lowest settings with this thing now.

    Can someone spare some money for the “Get I.S.T. A Modern Card” Fund? ;_;

      • ForceEdge
      • 4 years ago

      There there, my friend, same here with the aging 660.. hope it can hold out until better cards come. Can't really justify the sick 25-30% price increases from US > UK/Malaysia on already expensive GPUs.

      • DPete27
      • 4 years ago

      GTX 660 users unite!!!

        • ForceEdge
        • 4 years ago

        Best perf/$ at the time!!! Never understood why people bought the 660 Ti though.. it had the bigger die but performed what, 10% faster, at a 30% higher price? 660 users unite!
        EDIT: some numbers here: 660 $229, 660 Ti $299; perf difference according to [H]: average fps difference ~10%. Don't quote me on that, I just did some quick estimation with the frames 😛
        link: [url<]http://www.hardocp.com/article/2012/09/13/asus_geforce_gtx_660_directcu_ii_video_card_review/11[/url<]

          • jihadjoe
          • 4 years ago

          Sorry, but the [url=https://techreport.com/review/24562/nvidia-geforce-gtx-650-ti-boost-graphics-card-reviewed<]650Ti Boost[/url<] was Nvidia's absolute perf/$ king at the time. 90% of the performance of the 660, and just $150-170. In the same way you compared the 660 to the 660Ti, you could say the 660 was just 10% faster, but priced 35% higher than the 650Ti Boost.
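          A quick way to sanity-check these perf/$ claims is to divide relative performance by price. Here's a minimal sketch that does just that, using the launch prices and rough relative-performance figures quoted in this sub-thread (treat them as the commenters' estimates, not benchmark data):

[code<]
# Rough perf-per-dollar comparison using the prices and relative-performance
# figures quoted in this thread (commenters' estimates, not measured data).
cards = {
    # name: (performance relative to the GTX 660, launch price in USD)
    "GTX 650 Ti Boost": (0.90, 170),
    "GTX 660":          (1.00, 229),
    "GTX 660 Ti":       (1.10, 299),
}

base_perf, base_price = cards["GTX 660"]
base_value = base_perf / base_price

for name, (perf, price) in cards.items():
    value = perf / price
    print(f"{name:<17} perf/$ = {value:.4f}  ({value / base_value:.0%} of the GTX 660)")
[/code<]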

            • djayjp
            • 4 years ago

            Definitely. I like to use this:

            [url<]http://www.videocardbenchmark.net/gpu_value.html[/url<]

            I bought the highest performer (fps/dollar) card there that has a 4GB option (the GTX 960 4GB) and it overclocks to 1544MHz :D. So nearly as fast as a 970 but much cheaper. Should last awhile!

            Let me know if you know of any other sites that compare using that metric (wish TR had a monthly running tally...)

            • jihadjoe
            • 4 years ago

            I usually go by TPU's performance per dollar charts. They don't do frametimes, but they test a lot of games and keep a relatively long list of GPUs in every chart.

            • djayjp
            • 3 years ago

            Cool I’ve been there to download software before but didn’t know they had that..thx!

            • ForceEdge
            • 4 years ago

            By "at the time" you mean 6 months after the 660/660 Ti launched? If that was your frame for "at the time" then… I wouldn't be surprised that a relatively new process would be better half a year later, would you? Sure, looking back after so many years and judging only by Nvidia's MSRP isn't exactly the way to go here, my friend. When they launched, the 660 Ti was actually sold at 280+ EUR whereas the 660 was at 180 EUR. In whose right mind was it justified to spring for the Ti then? That's my original point, thank you.

            • jihadjoe
            • 4 years ago

            I took “at the time” to mean the release cycle of the whole 600 series.

            If you qualify “at the time” as being only the release date of the 660, then I could easily apply that argument to the 670 and 680 and say the 670 I purchased was the absolute price/performance king, but no, value-wise it was eventually supplanted by the 660, and later on by the 650Ti Boost. IMO we have to look at the product range as a whole, because nobody releases a top to bottom overhaul of their GPUs in a single month.

            The 660 still leans toward the upper portion of the mid-range, and for both AMD and Nvidia the absolute best value cards tend to be in the tier below that.

          • DoomGuy64
          • 3 years ago

          Imho, the 7950 had better perf/$. The hardware was superior, but the drivers at the time were sub-par. If you bought a card based on capability instead of day1 driver performance, the 7950/7870 was it. It was pretty clear upfront that the 660 was too crippled to run games past 1080p, and had no long-term viability.

          I bought the 780 back then, and in retrospect maybe I should have considered a 7970. But yeah, AMD's drivers were a big problem, and I skipped over them solely for that reason. Now, it's not an issue. I think my 390 will outlast the 970, just like the 7950 outlasted the 660 Ti. You get about a year's worth of decent performance out of Nvidia cards. That's not worth bothering with just to have those couple of extra temporary fps.

      • ronch
      • 4 years ago

      If you’re ‘poor’ for still running a 660, how about sparing a poor boy like me who still runs a slower HD7770 a few spare coins?

        • ForceEdge
        • 4 years ago

        Tell that to the people I've seen give anything from tens to hundreds in 'donations' to a guy whose GPU caught fire a day after the 1080 was announced. In 15 years of building and maintaining hundreds of PCs I've never seen a GPU burn itself up and take the whole rig with it, especially a 980 from EVGA without touching overclocks 😀 People on the 'net nowadays, am I right.. (one guy even mailed him his 980..) People with too much disposable income, I guess :/

      • Ninjitsu
      • 4 years ago

      What…games are these? My 560 isn’t doing fantastic, but I don’t need to use the lowest settings to be in the 30-60 range. Except XCOM 2, maybe.

    • Klimax
    • 4 years ago

    Missing option: Waiting for Big Pascal.

    Still using the original Titan. Can wait a bit more…

      • drfish
      • 4 years ago

      I'm in the same boat – but in no hurry – these are nice gains, but I took a calculated risk buying a 980 Ti late last year and I'm good with it for now. I need a total system rebuild anyway, so a new GPU will probably wait until I'm ready to pull that trigger.

        • dashbarron
        • 4 years ago

        Do you have an original Titan?

        Q9450 system here, getting very long in the tooth. Do me a favor Fish, just order two of everything and I’ll swing over there when they come in and pick up the supplies.

        Thanks chap!

          • drfish
          • 4 years ago

          To clarify, same boat waiting wise, different boat card wise. I just installed my old 780 into my wife’s computer (Phenom II X4 955), so I had something similar to the original Titan and now I have something similar to the Titan X.

          In any case I don’t have any spares lying around better than a 6950 – but I think that’s destined to go into a franken’build to replace [url=https://techreport.com/forums/viewtopic.php?f=33&t=83983&start=90#p1279735<]this guy[/url<] as the secondary niece/nephew/guest computer.

      • gbcrush
      • 4 years ago

      Same boat as you and DeadOfKnight

      My build plan was always for next year (on account of aiming for a Skylake-E build).

      Somewhere along the way this year, I decided I might give big pascal a try too (due to an interest in seeing how HBM2 pans out for team green).

      A little bit of patience will go a long way for me. I get to see how 3rd party board makers handle Pascal and Polaris 10. I’ll get to see big Pascal and Vega 10 develop. There may even be time for the market to settle a little…

      One more year it is. πŸ™‚

    • Krogoth
    • 4 years ago

    I’m going to get a new monitor before upgrading my current GPU (660Ti).

      • September
      • 4 years ago

      That’s the rub, isn’t it?

    • geekl33tgamer
    • 4 years ago

    I'm replacing my SLI 970's with a single 1080. It's astonishing that this single card is faster than the pair I currently have, and will have none of the SLI problems I've experienced these past 2 years.

    Edit: I'm not paying $100 more for reference, either. Will wait for Asus and Gigabyte to get their custom cards to market.

      • Spunjji
      • 4 years ago

      I did 970 SLI for a while too and was bitterly disappointed. Way too many performance variances and/or games with no significant benefit for my liking. I got a better overall experience from a single 980 despite the ostensibly lower frame rates. :/

    • auxy
    • 4 years ago

    Surely you jest, Mr. Wild. Fastest single GPU, assuredly. But I do think you will find the Radeon Pro Duo quite faster for “single card”, hmm? HMM?

    (γƒ»βˆ€γƒ»)

      • beck2448
      • 4 years ago

      According to PC Perspective, the Radeon Pro Duo – supposedly NOT a gaming card, and not released to reviewers freely – is a "stuttery mess".
      [url<]http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-Pro-Duo-Review/Grand-Theft-Auto-V[/url<]

        • juzz86
        • 4 years ago

        As was the 295X2 at release. Give it a driver release or two and it'll be a solid card.

        Agree that it’s not the best of business practices, but it’s easy paid beta testing πŸ™‚

      • Fonbu
      • 4 years ago

      Two interposers on a single card, if you want to call it a card.

    • End User
    • 4 years ago

    I’ve got a bunch of games I’ve been saving to play on the new card so I’m going to be one of the suckers ordering a Founders Edition.

    • maxxcool
    • 4 years ago

    As soon as I hit the lotto!

    • Fonbu
    • 4 years ago

    Sing the early adopter glory song ♪♪♪ Is that not exciting?
    Maybe someone can fill in the actual song part :/

    • chuckula
    • 4 years ago

    [quote<]Integrated graphics for life (40 votes)[/quote<] Look Andrew Lauritzen, what did we tell you about stuffing the ballot box here!

      • tipoo
      • 4 years ago

      I’d say 72EU Skylake GT4e would be an interesting choice in a box similar to the Alienware Alpha or Brix! Nearing XBO levels of performance. But for the price, you could break out a better dGPU if you took a hit to the CPU.

      Intel, such a tease! GT4e graphics on a cheaper, even dual core CPU would be so very interesting for budget “just comparable to console” small form factor builds.

        • Spunjji
        • 4 years ago

        Seriously, this. They just plain don’t want to though – the GPU is most of the die size, so they need to keep the CPU part of it as a high-end product to justify the margins they want from the product.

        • Andrew Lauritzen
        • 4 years ago

        You do know of the skull canyon NUC, right? Obviously not cheap, but small + powerful.

          • tipoo
          • 4 years ago

          I have, it’s definitely a curiosity of mine. Though like you said, not cheap for what it is, that’s 150 more than the Alienware Alpha or similar small/console class PCs with a 860M, albeit the Alpha trades off two CPU cores and some CPU speed for that price reduction (you could get a quad i5 for more).

          So Intel probably wouldn’t see much market reason to pair the Iris Pro GT4e with a dual core, but I just think it would be even more interesting in the Alpha price/performance league. At 650 you’re looking at a decent micro ITX build if you really wanted.

          Not saying any of this would make sense for you/intel, it would just scratch my curious itch as I want to mess around with GT4e πŸ˜›

      • derFunkenstein
      • 4 years ago

      I LOL’d.

      • Andrew Lauritzen
      • 4 years ago

      Ha πŸ™‚ Ironically I’m probably more of a high end snob than most people here! But in reality these polls are a bit ambiguous for me – I have a variety of machines with a variety of hardware even @ home.

      For the curious, I answered the “intrigued but wait for AIB” one. The $100USD early adopter tax is a little bit ridiculous even for me, but I’ll probably upgrade one of my machines to a 1080 at some point.

        • chuckula
        • 4 years ago

        As the resident person accused of being the biggest Intel Fanboy around here, I need to get my digs in!

        I also chose the wait-for-AIB option since the delay also lets us see the results for the GTX-1070 to figure out what is the better option.

        P.S. –> On mobile, I’m all about the IGP and I noticed that Broadwell just got a nice graphics boost with the latest Mesa 11.3 drivers.

          • Andrew Lauritzen
          • 4 years ago

          Yeah discrete on mobile or SFF stuff makes zero sense for any of my use cases. I’m basically a high end desktop for gaming + ~Surface Pro like device for mobility guy at this point. Have many “gaming” laptops at work and they are all impractical and a bit silly unless I was severely space constrained.

            • SomeOtherGeek
            • 4 years ago

            Great dialog! Keep it up.

    • Thresher
    • 4 years ago

    Waiting on the GTX 1080Ti.

    • chΒ΅ck
    • 4 years ago

    Other.
    I haven't owned a desktop in nearly 10 years. I've been making do with laptops with mid-range graphics (currently an i5-6300HQ + GTX 960M).

    • tootercomputer
    • 4 years ago

    I think it will be really interesting to see whether the 1070 outperforms the GTX 980 cards and similarly-performing AMD cards while coming in at a lower price point.

    The 1070 is what interests me.

    • albundy
    • 4 years ago

    I’ll let Bishop Bullwinkle answer that.

    [url<]https://youtu.be/JdKI1wj-JpI?t=38s[/url<]

    • Chrispy_
    • 4 years ago

    I don't think I'm alone in feeling anti-Nvidia of late.

    This generation, Nvidia have:

    [list<]
    [*<]Played the market dirty with G-Sync and selfishly stifled the progress of VRR monitors.[/*<]
    [*<]Played the market dirty with Gameworks/TWIMTBP by bribing lazy devs with black boxes that provably do nothing of real value but are designed to cripple competitor products, and with binding NDA agreements.[/*<]
    [*<]Completely ignored previous-generation products; good luck getting performance optimisations and bug fixes for your 600 and 700-series cards....[/*<]
    [/list<]

    There's no doubt in my mind that Nvidia has provided the better gaming experience in the last few years, but even as I type this from a 4x Nvidia, 0x AMD household, I don't have to like what Nvidia are doing to the market, so I seriously hope Polaris and Vega aren't massive disappointments. I am prepared to wait and give AMD another chance [i<]simply out of spite towards Nvidia[/i<] as petty revenge for the filth they've brought upon PC gaming.

      • swaaye
      • 4 years ago

      This is how NV has operated since forever. And they are more popular than ever. ATI /AMD/DAAMIT/RTG has been around through all of it but for some reason just has never been able to beat them at it. Leadership issues are my guess. We’ll see how War Leader Lisa Su does. πŸ™‚

      • Airmantharp
      • 4 years ago

      Point three is the only one that really resonates; the first two seem underhanded (and the second probably is to a degree) until you realize that they’re simply the result of Nvidia actively working toward maintaining a competitive edge.

      But not updating drivers? That's rough. I don't care if they play fast and loose with their competitors, but they still shouldn't screw with their customers. Good (or at least prompt!) drivers are one cachet that Nvidia cannot afford to lose.

        • Klimax
        • 4 years ago

        I don't think that argument is valid. At least so far I haven't seen any evidence, despite repeatedly asking for it. And I am in a position to actually verify (or observe) any of it, as I am still using the original Titan.

        Last example I saw was The Division and Nvidia provided optimizations for Kepler arch too.

        So again, does anybody have actual evidence or is it still just baseless assertion?

          • bfar
          • 4 years ago

          I'm using a GTX 780, and the experience for me deteriorated after Maxwell launched. With full driver support it was a faster card than the 290/290X. Nowadays it trails the 290X and even the GTX 960 in several benches. The Witcher 3 specifically was a big launch where Kepler users were made to queue behind Maxwell owners.

          Having bought into the big powerful chip, and subsequently experienced the polarized support, I’m looking more skeptically at Nvidia’s high end offerings now. You get 5 star treatment for a year. After that you’re in the second class seats I’m afraid.

            • Klimax
            • 4 years ago

            Ehm. AMD's drivers were way behind Nvidia's. AMD has mostly fixed them. That's why the situation could reverse a bit with the 290X.

            So, no evidence of anything missing or of degradation. Thanks.
            ETA: I will wait patiently till somebody finally provides the evidence. Asserting something and not backing it up with anything is not the right way.

            • Pholostan
            • 4 years ago

            I was looking at a new GPU in early 2014, and the 780 Ti looked very good. But it was also almost twice the price of a 290X, as prices kinda crashed after the mining craze. And Nvidia high-end cards have always been kinda expensive here; I guess I live in Nvidia land (Sweden). So I got a 290X and have been kinda puzzled as my card started to match the 780 Ti and then blew past it. And now even an overclocked 980 struggles to keep up with my ~1200 MHz Hawaii. So I'm still kinda happy with that card; not sure I'll buy a new GPU this year at all.

      • Flapdrol
      • 4 years ago

      I only care about point 3. The Witcher 3 was the last game Nvidia bothered to fix for Kepler; I guess other games weren't big enough to cause enough spam on the Nvidia forum.

      I think gameworks is harmless, just a bunch of bolt on effects, can be turned off if they run badly.

      Gsync was first, so I feel no ill will towards them for that. I’m just not going to buy nvidia cards as long as freesync doesn’t work.

        • Klimax
        • 4 years ago

          No. The Division got a nice Kepler update too. Frankly, is there actual verifiable evidence?

          • Flapdrol
          • 4 years ago

            When was that? If I look for The Division performance tests I get a Guru3D test where a 770 is beaten by a GTX 950 and R9 370.
            [url<]http://www.guru3d.com/articles_pages/the_division_pc_graphics_performance_benchmark_review,6.html[/url<]

            How about doom? 960 beating a 780:
            [url<]http://www.pcgameshardware.de/Doom-2016-Spiel-56369/Specials/Benchmark-Test-1195242/[/url<]

            • Klimax
            • 4 years ago

            It was during the closed beta, when Nvidia published game-ready drivers. I was in it thanks to a pre-order and actually saw a significant improvement in performance. It should show up in the current version if you test the game-ready drivers versus prior drivers. IIRC it was about 20%, aka moving from choppy medium settings to high+ settings. (Note: original Titan with fans at 80-90%)

            Anyway, it is not drivers, but architecture. Nvidia is simply introducing quite a bit more performance with each generation and thus obsoleting previous gens. It happened with Fermi->Kepler, Kepler->Maxwell, and the same is seen now with Maxwell->Pascal. The new "midrange" beats the previous top. As it better should.

            So why the hell is it surprising that new cards are beating the old guard??? 960 versus 780, or the initial hints with 660 versus 580. (http://www.anandtech.com/bench/product/660?vs=517)

          • bfar
          • 4 years ago

          The last stats I looked at for The Division, my GTX 780 was behind the 7970 GHz!! I dunno if that was before the optimization, but either way I wasn't impressed.

            • Klimax
            • 4 years ago

            Difference between GPGPU and GPU, and brute force/massive resources versus maximal efficiency in their usage. Especially when AMD was completely incapable of exploiting the actual HW.

            Also, each company learned different lessons from Fermi v1.

          • Chrispy_
          • 4 years ago

          And yet there are plenty of games that never get performance fixes for Kepler, whilst AMD cards and Maxwell owners were fine.

          Shadows of Mordor and Alien Isolation are two examples that are easy to find benchmarks on, but the list of disgruntled Kepler owners (of which I am one) is pretty big and easy to find discussions on.

          Just search for Kepler vs Maxwell and you’ll find the rage threads and the benches to back up the rage.

            • Klimax
            • 4 years ago

            Yeah, so no links. (Ain’t doing your job for you)

            Post links you consider best evidence. And no evasions. (I suspect there will be big nothing)

            • Chrispy_
            • 4 years ago

            You want spoonfeeding?

            [url<]http://lmgtfy.com/?q=Alien+Isolation+Benchmarks#[/url<]
            [url<]http://lmgtfy.com/?q=Shadow+of+Mordor+benchmarks[/url<]

            Do you even Reddit, bro?

      • Ninjitsu
      • 4 years ago

      I just find it curious that Fermi still gets new stuff and runs fine.

        • beck2448
        • 4 years ago

        Conspiracy nuts live in their own zipcode.

        • beck2448
        • 4 years ago

        Losers always blame others for their own failures. AMD went from the mid-forties to 3. Many of their promises didn't pan out at all. We'll soon see if they can turn it around, but vaporware won't do it.

        • DoomGuy64
        • 4 years ago

        Fermi doesn’t require as much driver tuning. It does however, have plenty of issues associated with having low amounts of memory, aside from the few rare cards that had extra.

        I always thought Fermi was one of Nvidia’s best cards. People complain about the heat and power use, but that never was something I cared about. The card RAN. You just can’t play modern games on it, because of the memory issues.

          • tipoo
          • 4 years ago

          It ran hot because it was their last architecture before the big compute cull too. Sure, Kepler got more efficient, but by cutting so much of the compute side out.

          I kind of want to see the alternate universe where Fermi stayed the prime development target with iterations on it; it would have been an interesting target for the new wave of compute-accelerated games. Though I'm guessing it couldn't interleave graphics and compute as well as newer architectures (well, new AMD ones; Pascal still has more primitive switching, but it can now at least brute-force it).

          • Ninjitsu
          • 4 years ago

          I agree that memory is limiting, but that was pretty much all cards from that time.

          At the same time I won’t fully agree that you can’t play modern games on it, but indeed 2GB VRAM is a min requirement on some of the bigger games this year.

      • Topinio
      • 4 years ago

      Couldn’t agree more, except that I already jumped ship (back) to AMD.

      The other big issue I have with NVIDIA is its price gouging as its market share has risen (due to better UX, due in part to playing dirty): it has increased prices through the roof, and now we see the GTX 1080, made from the successor chip to the one in the GTX 460, at 3 times the price…

      • gamerk2
      • 4 years ago

      -) Played the market dirty with G-Sync and selfishly stifled the progress of VRR monitors.

      First off, Gsync was the technically superior option. Secondly, Freesync also required additional control HW that 99.9% of monitors did not have. So both require HW, and Gsync is the technically superior option. Not surprising Freesync isn’t going anywhere in that context.

      -) Played the market dirty with Gameworks, TWIMTB by bribing lazy devs with black boxes that provably do nothing of real value but are designed to cripple competitor products and binding NDA agreements.

      And yet, no one complains about another black box proprietary API called “Direct X”. That’s how APIs work. Google did the R&D to create and implement features outside the Direct X API, and they are free to do with them whatever they want. Likewise, AMD is free to create a similar feature set any time it wants.

      -) Completely ignored previous generation products; Good luck getting performance optimisations and bug fixes for your 600 and 700-series cards….

      My 770 GTX has gotten plenty of performance updates for recent titles. NVIDIA’s been historically good in this regard.

        • Chrispy_
        • 4 years ago

        Freesync existed in AMD laptops for three generations before Nvidia launched G-Sync. The only additional hardware required was a Monitor that supported it, which was not dependent on Nvidia’s overpriced and proprietary G-Sync FPGA solution.

        What Nvidia get credit for is taking an idea that already existed, marketing the hell out of it and making it proprietary for minor tech benefits and massive “lock-out-the-competition-and-make-it-proprietary” reasons, so that they could put a huge pricetag on it.

        [url=https://techreport.com/news/25867/amd-could-counter-nvidia-g-sync-with-simpler-free-sync-tech<]This is the article, and remember that the article is already over two years old, so "three generations" ago refers to Feb 2012 with the launch of GCN.[/url<]

        [quote<]He explained that this particular laptop's display happened to support a feature that AMD has had in its graphics chips "for three generations": dynamic refresh rates. AMD built this capability into its GPUs primarily for power-saving reasons, since unnecessary vertical refresh cycles burn power to little benefit. There's even a proposed VESA specification for dynamic refresh, and the feature has been adopted by some panel makers, though not on a consistent or widespread basis. AMD's Catalyst drivers already support it where it's available, which is why an impromptu demo was possible.[/quote<]

        • tipoo
        • 4 years ago

        Plenty of developers complain about black box APIs, it’s why the industry is shifting all of a sudden to low level πŸ˜‰

        Trying to solve performance issues with a huge monolithic opaque API like DX11 or OpenGL is no joke. People seem to think low level APIs are harder, and at a low programming level, that’s true, but once you get to huge games with complicated programming, it’s far easier to debug a thin API with plenty of dev access.

        “Google did the R&D to create and implement features outside the Direct X API”

        …Google did? πŸ˜›

        • DPete27
        • 4 years ago

        [quote<]First off, Gsync was the technically superior option.[/quote<] Keyword = "was". The only difference between GSync and FreeSync in its infancy was low framerate compensation. AMD has since added that feature to FreeSync, essentially making both technologies equivalent now...except price.
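        For readers who haven't run into it, low framerate compensation works by repeating frames: when the game's frame rate drops below the panel's minimum variable refresh rate, the driver sends each frame multiple times so the effective refresh rate stays inside the panel's VRR window. A minimal sketch of that multiplier selection, with made-up panel limits (40-144Hz) purely for illustration:

[code<]
def lfc_refresh(frame_rate_hz, vrr_min=40.0, vrr_max=144.0):
    """Pick a frame-repeat multiplier so the effective refresh rate lands
    inside the panel's VRR window (illustrative sketch, not driver code)."""
    if frame_rate_hz >= vrr_min:
        return 1, frame_rate_hz          # panel can track the frame rate directly
    multiplier = 2
    while frame_rate_hz * multiplier < vrr_min:
        multiplier += 1                  # repeat each frame until we're back in range
    return multiplier, min(frame_rate_hz * multiplier, vrr_max)

for fps in (120, 48, 30, 19):
    m, hz = lfc_refresh(fps)
    print(f"{fps:>3} fps -> repeat each frame x{m}, panel refreshes at ~{hz:.0f}Hz")
[/code<]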

      • bfar
      • 4 years ago

      I have another beef with Nvidia. We've gotten around 20-30% improvements each generation for a few years now, irrespective of process shrink.

      It’s clearly engineered rather than chance. Like Intel, they’re definitely holding back, but my God they’re charging for it!

        • Ninjitsu
        • 4 years ago

        You mean 60-70%.

      • the
      • 4 years ago

      The third point has a lot to do with the underlying technology in each generation. Maxwell added a handful of features that made working with several effects easier than the techniques required by the Kepler generation. The result is a clear jump in performance for Maxwell cards.

      I’m predicting the exact same thing with Maxwell -> Pascal in VR. There are a handful of new techniques Pascal can do (simultaneous multi-projection) that’ll give it a very respectable lead in performance over previous generation cards. I expect much of this to be blamed upon poor driver optimizations again even though there were real hardware changes to make Pascal faster.

        • DPete27
        • 4 years ago

        Yeah, how did nobody think “hmm, you know, instead of generating two full nearly identical images independently, we could just generate a single minimally larger image and copy it to both eyes.”
        Idiots.

          • Voldenuit
          • 4 years ago

          You know it’s not that simple, right?

          Projection is only a single step in the rendering pipeline (lighting, texturing, culling, shading, etc).

          And historically, there was little to no benefit to making multiple projection calculations in a single pass, so the transistor budget was better spent elsewhere.

          SMP is a nifty advancement in GPU tech, but it’s not like nvidia (or AMD) purposely held back on it just to stick it to previous gen users. If anything, it is coming out now even before the killer app for it (VR) hits critical mass. Yes, it’s useful for multi-monitor gaming too, but judging by Steam stats, that was never a significant proportion of gamers, either.
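          To illustrate the kind of saving single-pass multi-projection is after (a toy CPU-side analogy with made-up matrices, not Nvidia's implementation): most of the per-vertex work is shared between the two eyes and only the per-eye projection differs, so doing the shared part once and projecting twice beats running the whole chain twice. A small NumPy sketch:

[code<]
import numpy as np

verts = np.random.rand(100_000, 4)                       # homogeneous model-space vertices
world = np.random.rand(4, 4)                             # world transform shared by both eyes
viewproj_left, viewproj_right = np.random.rand(2, 4, 4)  # per-eye view + projection

# Naive stereo: run the full transform chain once per eye (shared work done twice).
naive_left  = verts @ (viewproj_left  @ world).T
naive_right = verts @ (viewproj_right @ world).T

# Multi-projection idea: do the shared work once, then apply each eye's projection.
shared = verts @ world.T
smp_left, smp_right = shared @ viewproj_left.T, shared @ viewproj_right.T

assert np.allclose(naive_left, smp_left) and np.allclose(naive_right, smp_right)
print("Same output; the shared transform ran once instead of twice.")
[/code<]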

      • PixelArmy
      • 4 years ago

      #1 – Lack of market understanding – VRR demand has led to the monitor market being the way it is
      #2 – Lack of market understanding – “Lazy” devs do not like re-coding the wheel, prepackaged stuff is a nice option, open source only matters to some degree, customers do like FX
      #3 – Unrealistic expectations – See comment from “the”.

      • DPete27
      • 4 years ago

      I completely agree. I’ll be making biased purchasing decisions once Polaris launches. Really AMD just needs to get close to Nvidia and my GTX660 will be replaced by an AMD card.

      • Firestarter
      • 4 years ago

      To be fair, AMD hasn’t done much better with the VLIW4 architecture, as far as I know its performance has basically been the same since the GCN products launched, and the only reason the GCN 1.0 competitors to the Kepler cards are doing better than them now is because the current AMD cards have a very similar architecture. Had I bought a HD6950 instead of a HD7950 I might have had just as good a reason to complain about AMD as you have for Nvidia.

        • Chrispy_
        • 4 years ago

        You are right about it being easier for AMD than for Nvidia, but people aren’t angry at Nvidia for their architectural leaps. They’re angry at Nvidia for their neglect of Kepler in relation to other static architectures.

        Read comments higher in this thread. You have owners of 2013 GTX 780 cards with a $649 launch price being vastly outperformed by 2013 AMD cards with $399 launch prices despite the pricing being appropriate for the performance in 2013, and in some cases (Doom, The Division, Alien Isolation, Shadows of Mordor) being outperformed by 2013’s $299 rebrands of the ageing 7970.

        It’s one thing to be outperformed by new architecture, but Kepler is losing ground against other cards of the same era, completely ignoring the launch of Maxwell and now Pascal. 780 owners got it worst, but I felt the pain with my laptop too, especially since the 650M was barely up to the task in the first place.

      • anotherengineer
      • 4 years ago

      I’ve been feeling a little anti-Nvidia since they bought out ULi and basically shut them down along with support.

      [url<]http://www.anandtech.com/show/1700/2[/url<]

      and a few months later that was it:

      [url<]http://www.geek.com/games/nvidia-acquires-chipset-maker-uli-557960/[/url<]

      I mean, if they had continued/improved driver support, well, that would have been fine, but nothing.

      Disclosure: a former A8R32-MVP Deluxe owner, which is actually still running!!!

    • RickyTick
    • 4 years ago

    Truth is, my GTX670 plays everything just fine at 1920×1080. As much as I’d love to upgrade, I just can’t justify it at this point.

    • chuckula
    • 4 years ago

    I want to.
    Definitely not Founder’s Edition.
    We’ll see, because there are two factors at play: Sure there will be faster cards out next year, but are they going to make a useful difference on my single 2560×1440 monitor? What about their power consumption, will it be reasonable or are we back up to 300 watts again?

    OTOH, will the 1070 be just as good for practical purposes (we’ll find that one out soon)?

      • Ninjitsu
      • 4 years ago

      [quote<] but are they going to make a useful difference on my single 2560x1440 monitor? [/quote<] Yes.

    • NovusBogus
    • 4 years ago

    From what I’ve seen so far neither GeForce makes sense for a 1920×1200 display–970 was already overkill and 1070 sits even higher on the price/performance chart. Maybe Polaris will be the $250 GPU I wanted in the first place, maybe the 1060 Ti turns out to be a real thing, otherwise I see a Volta x60 in my future.

    • juzz86
    • 4 years ago

    Roll on the cheap used 980Ti’s!

    • techguy
    • 4 years ago

    What, no “waiting for GP102” aka “Big Pascal” option?

    • wierdo
    • 4 years ago

    I’m gonna stick with my current card until things settle down a bit. Mainly three things stand out to me this generation so far:

    1. Price a bit high for me, but it’s new and highend so no surprise there.
    2. Performance gains in the ~30% range are basically what we got between previous generations, and that was without the benefit of a major process shrink. I think they're holding back a bit, so maybe something else is in store and they're just waiting to see what the competition is up to while they milk the current market advantage.
    3. No FreeSync. Not sold on the G-Sync premium so far; it would be nice to see them either embrace open standards or see the competition come up with a credible alternative to consider – and/or put price pressure on the market.

    • hechacker1
    • 4 years ago

    I just got a Vive, and I have a GTX970, which is the minimum. I also have a 1440p screen, and the 970 just can’t do ultra settings while maintaining above 60. So yeah, I’m waiting for the custom versions. But I can live with the 970 for now.

    I do want the single best GPU since it looks like VR just won’t use SLI until the software and developer support is there.

    • Mikael33
    • 4 years ago

    Nope, got an Asus Strix 980 Ti in January when I finally built a Skylake rig to replace my Phenom II based rig. When I upgrade, it won't be for a few years, since I don't play super demanding games; I mostly play an MMO that wouldn't benefit at all from more GPU power.
    I might just go SLI if 980 Ti prices plummet and there's some kickass game that looks amazing but is demanding.

      • djayjp
      • 4 years ago

      Remember with dx12 you can pop in a different card and they’ll work in tandem

        • Voldenuit
        • 4 years ago

        Also, now twice as buggy, since you’ll get bugs from both sets of drivers.

        Heck, pcper and guru3d couldn’t get SLI or XFire to scale in most of the DX12 games they tested, and now you’re expecting cross-architecture multi-GPU to work nicely? Good luck with that.

          • djayjp
          • 4 years ago

          Seems to work well to me:

          [url<]http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview/4[/url<]

            • Voldenuit
            • 4 years ago

            From [url=http://www.pcper.com/reviews/Graphics-Cards/GeForce-GTX-1080-8GB-Founders-Edition-Review-GP104-Brings-Pascal-Gamers<]pcper's 1080 review[/url<]:

            Hitman DX12: "What is NOT impressive is the lack, again, of scaling for DX12 titles with SLI (and CrossFire for that matter)."

            Gears of War DX12: "Notice that the GTX 980 SLI configuration is flat; it runs slightly slower than a single GTX 980."

            Rise of the Tomb Raider: (Single GTX 980 is faster than GTX 980 SLI)

            Note that only one of the titles they tested (GoW) was run on UWP, so the problem isn't just restricted to UWP alone. Crossfire and SLI have always been at the mercy of developer support and implementation. DX12 may broaden the list of compatible hardware, but it doesn't magically solve XF/SLI if the application (or in the case of UWP, the API) doesn't support it.

            • djayjp
            • 4 years ago

            That’s not specific to dx12, it’s a general, pre-existing issue and so is mostly irrelevant to my comment. Of course multi gpu support has, and likely always will be, dependent upon developer and driver support.

        • Firestarter
        • 4 years ago

        don’t assume that will work

          • djayjp
          • 4 years ago

          [url<]http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview/4[/url<]

      • K-L-Waster
      • 4 years ago

      Similar situation here – got a 980TI last year, so in no hurry to switch. I’ll keep using it until I feel that the performance is lower than I actually want to live with, which hopefully will be a while. (Note: not lower than is available — lower than I find acceptable. There will of course be faster cards continuing to arrive.)

      By that time, there should be Polaris cards from AMD as well as NV and AMD cards from 3rd-party board partners (skip the Founders Edition tax…) — heck, I may end up skipping this generation of cards entirely.

    • Tumbleweed
    • 4 years ago

    Intrigued, but will probably wait for Big Pascal for my video rendering needs.

    • ptsant
    • 4 years ago

    The price is quite high and I don’t think we have enough reviews and feedback yet. Pre-ordering hardware is for people richer than me.

    For most people the 1070 vs Radeon choice is much more interesting, but we don’t have any evidence for one over the other.

    • Kaleid
    • 4 years ago

    Both 70 and 80 are too expensive. I’ll wait, and need freesync anyhow.

    • DragonDaddyBear
    • 4 years ago

    I don’t game enough these days to justify the cost of top tier GPU. The rumors around Polaris lead me to think that Team Red is gunning for a bang for the buck GPU, so that is where I will likely land.

    • tipoo
    • 4 years ago

    Hoping that the 200-250 dollar price point sees a substantial shove forward at last. After being on 28nm since 2011, now is a very good time for that to happen, if both companies don’t hamstring it.

    Really don’t care which one does it, whichever one substantially pushes that bracket forward I’m game for.

      • derFunkenstein
      • 4 years ago

      I’m interested in seeing what a GP106 has in store. The sub-$250 market is way lame at this point.

        • jessterman21
        • 4 years ago

        I agree. GTX 660 to 760 to 960 were 20% jumps. I hope the GTX 1060 can mirror the amazing 75% boost from 980-1080.

      • Spunjji
      • 4 years ago

      Polaris looks likely to be your friend here if rumours are correct πŸ˜€

    • yogibbear
    • 4 years ago

    eVGA custom ACX or equivalent factory OC’d 1080GTX please.

      • arbiter9605
      • 4 years ago

      Yea, I am waiting for that as well; got a GTX 980 model atm but just got a 1440p 144Hz monitor, so…

    • cygnus1
    • 4 years ago

    I plan to upgrade to a 16nm GPU as soon as reasonably possible, but I definitely want to see what the board partners come up with for the 1080 and what Polaris is going to be able to do before I decide to pull the trigger on any purchase

    • Firestarter
    • 4 years ago

    I’m waiting for the 300W, 600mm[super<]2[/super<] monster that will inevitably come after this, which will most likely be the fastest GPU for a long time.

    • Ifalna
    • 4 years ago

    Definitely waiting on details regarding AMDs cards.

    Also I do think that a 1080 would be kind of overkill for a single 1080p monitor.

    Meh. I’ll probably decide around Christmas.

      • ptsant
      • 4 years ago

      The 1080 is massive overkill for a 1080p monitor. Even a GTX 970 is probably overkill for 1080p, especially if you aren’t going for 144Hz.

      My objective is 1440p @ 144Hz (i.e. ~80-90 fps average), and I feel the 1070 or Polaris will provide enough future-proofing for maybe a couple of years. Then again, I don’t care whether I play on ultra or high. Most of the time, the difference is minimal.

        • Ninjitsu
        • 4 years ago

        At 1440p, a 970 can barely hit 60 fps averages, a 980 does cross it in most games, and a 1070 will be just about enough.

        The 1080 is a 1440p card though, from what I can tell so far. TR won’t test at 1080p, I suppose, so I’ll never see the data I want, but pcper’s is good till then.

          • anotherengineer
          • 3 years ago

          Techpowerup tests several resolutions. It's just FPS and not frame times/stutter, but it's still useful since it can show whether a game is GPU- or CPU-limited and what to expect at a given resolution.

    • Concupiscence
    • 4 years ago

    In terms of raw grunt my GTX 970’s still more than capable for my workstation and periodic gaming needs… My home theater PC could certainly use a power-efficient upgrade beyond its Radeon 7750, but Polaris 10 sounds like it will fit that bill better than Nvidia’s latest.

      • ptsant
      • 4 years ago

      Do you really need a full Polaris 10? A Polaris 11 will demolish the 7750 (expect 2.5x the performance at least...) and run comfortably without any PCIe power connector. We will probably even see a passive version.

        • Concupiscence
        • 4 years ago

        I’d been under the impression Polaris 11 was a laptop-targeted chip, but didn’t extend that line of thought to a desktop part. OK, now I’m [i<]much[/i<] more excited, thank you!

          • EndlessWaves
          • 4 years ago

          Current thinking seems to be that Polaris 10 is Tonga’s replacement (380X/M390X/M395X), with Polaris 11 being Bonaire’s replacement (7790/260X/M385X, also the 360 in cut-down form). If they’re laptop chips, they’re aimed at gaming laptops, with the full Polaris 11 taking the 960M/965M position and Polaris 10 at the 970M and maybe 980M position.

    • LauRoman
    • 4 years ago

    I’ll just wait for the gtx 1440.

      • Duct Tape Dude
      • 4 years ago

      It took just over 2 years for nvidia to go from the Geforce 720 to 1080, so 1440 is probably launching in Q3 2018, and the GTX 2160 will launch around Q4 2020.

    • jts888
    • 4 years ago

    As a Kepler user, I have a hard time going green this round after what I saw happen in the market during Maxwell’s product cycle, particularly TWIMTBP pushing superfluous levels of tessellation and Nvidia refusing to support VESA Adaptive-Sync in addition to G-Sync.

    If AMD keeps things even remotely competitive in the price/performance and power/performance metrics, I will almost certainly go with a Polaris 10 card this summer.

    I’m not overly concerned with true async compute support (which the 1080 still seems to lack in favor of less costly preemption), but Nvidia’s MSRP marketing strategy ($700/$600 and $450/$380) continues to leave me with unfavorable opinions of their candor.

      • Pitabred
      • 4 years ago

      Me too. But I’m also rocking dual 280X’s on a 4K screen, and they just aren’t cutting it. So this fall I’m budgeting to just have to get the best single-card 4K solution out there, whichever color it is. Red goes better with my case and conscience, though 😉

      • Klimax
      • 4 years ago

      Because a single instance of “over-tessellation” apparently counts for so much. (The concrete walls in Crysis 2, and that’s fairly insignificant in total.)

      Just a reminder: when you force debug mode to view the wireframe, your conclusions about many things can end up completely wrong.

        • Spunjji
        • 4 years ago

        30 seconds of Google took me to the Hairworks over-tessellation issue in Witcher 3. So there’s another already.

          • ermo
          • 4 years ago

          I play TW3 with two HD 7970 GHz Edition cards @ 1080p, and per AMD’s recommendation I’m capping the tessellation at x16 via a driver-level game profile. FWIW, in TessMark x16 would be considered the ‘normal’ setting.

          Doing a WinDiff comparison of the Medium/High/Ultra config file settings for e.g. water, it is clear that pretty much the only change they make is to increase the tessellation factor from Med = x16 (‘normal’) to High = x32 (‘extreme’) and Ultra = x64 (‘insane’).

          I’m pretty sure that most people would be hard pressed to notice the difference past x16 and that I could probably live with x8 if I had to.

          So, yeah, not sure I see the value of TWIMTBP, GameWorks, etc. as implemented in e.g. The Witcher 3.

            • Klimax
            • 4 years ago

            But that is no evidence of over-tessellation. Reminder: over-tessellation would be the case where final triangles become invisible because they are smaller than the minimum number of pixels (IIRC 3 pixels). What you two are arguing about is whether or not some setting visibly changes anything. That's a different argument, one about the number of options games expose, and orthogonal to over-tessellation.

            And a second reminder: tessellation is used for true displacement mapping, where it provides real geometry without the need for approximation techniques. One of the advantages is fewer troubles with shadowing and other effects that depend on the mesh (respectively, no need to compensate). It also provides finer geometry to geometry shaders (they come after the tessellation stages).

            Note: tessellation is also part of an engine's trade-off between VRAM usage/bandwidth and compute resources.
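
            To make the displacement-mapping point concrete, here is a toy sketch in plain Python (nothing like a real engine or shader pipeline, and the height function is a made-up stand-in for a displacement map): tessellation adds vertices, and displacement then pushes those vertices along the normal, so shadows and geometry shaders see real geometry rather than an approximation.

[code<]
import math

def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def tessellate(tri, levels):
    """Recursively split one triangle into 4 smaller ones, 'levels' times."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(tessellate(t, levels - 1))
    return out

def displace(tri, height, normal=(0.0, 0.0, 1.0)):
    """Push each vertex along the normal by a height-field value."""
    return tuple(
        tuple(v[i] + normal[i] * height(v[0], v[1]) for i in range(3))
        for v in tri
    )

bumps = lambda x, y: 0.05 * math.sin(10 * x) * math.cos(10 * y)  # stand-in heightmap
base = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))

for level in (0, 2, 4):  # loosely analogous to raising the tessellation factor
    tris = [displace(t, bumps) for t in tessellate(base, level)]
    print(f"subdivision level {level}: {len(tris)} displaced triangles")
[/code<]

            The triangle count grows 4x per subdivision level, which is exactly the VRAM/bandwidth-versus-compute trade-off noted above.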

          • rxc6
          • 4 years ago

          That’s assuming that he was looking for more evidence and not trying to dismiss negative points raised against his favorite GPU maker. I happen to agree with Klimax in many other posts, except when it comes to Nvidia. Read his posts and you will notice how everything about Nvidia must be a positive or it is dismissed by him. I am not saying that I am unbiased, but I prefer not to post if I cannot defend my opinion.

        • Pitabred
        • 4 years ago

        What about Batman: Arkham Origins? That’s where it first surfaced. It’s been in a lot of games since then, so much so that AMD had to implement a limit on the tessellation level in its drivers, which still isn’t perfect because there are lots of over-tessellated surfaces for no good reason.

          • Klimax
          • 4 years ago

          Are they over-tessellated? Are those final triangles smaller than three pixels? Is there evidence? And no reason? So they don’t improve quality by bypassing approximations for displacement mapping, don’t upgrade the quality of shadows, don’t provide a reduction in bandwidth, and don’t improve geometry effects (see geometry shaders)?

          Quite a claim. Without evidence. Although evidence might be harder to find, since games rarely compensate for reduced or absent tessellation.

            • jts888
            • 4 years ago

            To each his own, but I have yet to meet someone IRL who felt that pixel-perfect cape silhouette edges were worth 10% or worse framerate hits.

            ROP blocks in GPUs have tended to be 4×4 (AMD) or 4×2 (Nvidia) pixel blocks for long enough that flooding scenes with 1-2 pixel triangles just murders performance for the benefit of details imperceptible in actual live gameplay (i.e., not just standing around admiring scenery).
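
            As a crude back-of-the-envelope sketch of that (treating the 4×2 figure above as a fixed per-triangle block granularity, which is a simplification of how real rasterizers and ROPs behave):

[code<]
def fill_efficiency(covered_pixels, block_w=4, block_h=2):
    """Fraction of per-block work that lands on pixels the triangle actually covers."""
    block = block_w * block_h
    touched = max(block, -(-covered_pixels // block) * block)  # round up to whole blocks
    return covered_pixels / touched

for px in (1, 2, 8, 64):
    print(f"{px:>3}-pixel triangle -> ~{fill_efficiency(px):.0%} useful work")
[/code<]

            By that rough measure, a 1-2 pixel triangle wastes most of each block, which is the "murders performance" effect described above.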

      • bfar
      • 4 years ago

      They’ve been throwing their weight around in order to maintain a competitive edge. I could understand that strategy if it only impacted the competition, but they’re kicking their existing customers too.

      The message Nvidia is sending is that they’ll look after you if you’re in the latest Nvidia product camp; otherwise you’re an outsider. I could maybe live with that if they didn’t charge exorbitant prices for their flagship products, but if you look at the offering very carefully, the value proposition is not what it once was.

      A $600 GPU should command three years of excellent support. With Nvidia currently, it’s a little over a year.

        • Eversor
        • 3 years ago

        As a Kepler user, I seriously don’t get what people are complaining about. The drivers are rock solid, don’t have massive CPU overhead, and the control panel even lets me control dithering on the LCD, which is particularly useful for eliminating color banding on most LCDs you can get your hands on.

        Then there’s the support for the card. IIRC, AMD has been the one that frequently drops support for not-so-old GPUs as soon as it can. Nvidia has been doing things like supporting 6- or 7-year-old cards even on Linux and BSD, which is another reason for me to keep buying from them.

      • derFunkenstein
      • 4 years ago

      In a way, that’s kind of only punishing yourself. I get the sentiment, but I just can’t be bothered to care about corporations like AMD or Nvidia. Just give me whatever works best.

      • brucethemoose
      • 4 years ago

      As a 7950 owner, on the other hand, my GPU has aged amazingly well.

      I’m not a fan of brand loyalty… But as someone who likes to keep GPUs for a long time, this makes me lean towards a GCN card for my next purchase.

        • djayjp
        • 4 years ago

        Yep, as long as you’ve got a GPU that’s faster than the PS4’s (which that is; the only problem might be the PS4’s enhanced asynchronous compute and larger frame buffer) and as long as the driver support keeps coming, it should be good 🙂

      • Kretschmer
      • 3 years ago

      Beware the AMD drivers. The last update introduced 2D corruption for me, and Freesync still isn’t working in Doom on my setup.

    • Srsly_Bro
    • 4 years ago

    My plan is to get a GTX 1070 or Polaris 10, and later upgrade to either Vega or a 1080 Ti.

      • neverthehero
      • 4 years ago

      That’s not a bad idea; I hadn’t considered that. My new build (geared towards making sure VR will run without a hitch for the foreseeable future) just needs a video card.

    • liquid_mage
    • 4 years ago

    I’m not a big fan of spending over $400 for a video card; ideally my price point would be $250. AMD and Nvidia pricing mid-to-high-end cards at $600+ is just a bit high for my taste, especially considering Nvidia will release a Ti version for $700 in 6 months that will be 20% faster.

      • jts888
      • 4 years ago

      On what basis do you make the $700/6 month Ti claim?

      Is there going to be 12+ GT/s GDDR5X out by then, or is it really known that the GTX 1080’s GP104 die has many, if any, shaders fused off?

        • arbiter9605
        • 4 years ago

        Um, the Ti will be using HBM2, dummy, not GDDR5X.

          • jts888
          • 4 years ago

          If you think a 600 mm^2 GP100 plus HBM2 plus an interposer is going to sell for $700 six months after a traditional 300 mm^2 GDDR5X card debuts at the same price, you’re completely off your rocker.

          There are serious questions about the suitability of GP100 for gaming, but it’s hard for me to imagine it selling for anything less than $1k if a consumer version does get made.

            • yogibbear
            • 4 years ago

            I think a 600mm^2 Pascal card with HBM2 would probably retail for $1500USD at launch if it came out tomorrow. (and that’s the MSRP, before the $200-300 markup because they can).

            • September
            • 4 years ago

            Our best hope is still the rumored GP102 chip: a large Titan/1080 Ti chip with HBM2 that has less FP64 than GP100, is under 600 mm^2, but still has 6 GPCs versus the 4 GPCs in GP104 (1080/1070). Maybe it has better than 1/32 FP64 if it’s also used for a Titan, with some of that fused off for the 1080 Ti. Or it’s just 1/32 and only for the 1080 Ti, with the Titan instead getting a cut-down GP100 treatment for $1500 USD or more.

            Either way, Nvidia is setting the pricing landscape to make these cards start at $$$$. Maybe the prices will drop after the early-adopter phase. I like tech, but I’m on more of a $$ kind of budget (still running a GTX 770 2GB on a 1200p60 IPS panel). I’m not planning on doing VR, but I might upgrade to a super-wide 1440p fast-refresh gaming panel; let’s see what kind of card it would take to drive that. No way I’m spending over $600, though.

        • Ninjitsu
        • 4 years ago

        x80 Ti versions are never based on the Gx104 die.

      • ptsant
      • 4 years ago

      I have a hard time believing the Ti in 6 months. There are two main reasons:
      1. The 1080 is thermally quite limited. With better cooling we may see a ~20% increase in clocks, but power consumption will explode (the relation is not linear; see the rough sketch at the end of this comment).
      2. Any major overclock or widening of the chip will require faster memory (HBM2?).

      The 1080 is a great card, but after ASUS/MSI etc. have done their magic with coolers and OC'd editions, I don’t think there is going to be enough space left for a Ti.
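
      For the "not linear" point in reason 1, here is a rough sketch using the textbook dynamic-power relation (P ~ C * V^2 * f) and the simplifying assumption that voltage has to rise roughly in step with frequency; the numbers are illustrative, not measurements of a 1080:

[code<]
def relative_dynamic_power(clock_gain, voltage_exponent=1.0):
    """P ~ C * V^2 * f, with V assumed to track f (a rule of thumb, not a measured curve)."""
    f = 1.0 + clock_gain           # e.g. 1.20 for a +20% clock bump
    v = f ** voltage_exponent      # assumed voltage scaling
    return v * v * f               # dynamic power relative to stock

for gain in (0.10, 0.20, 0.30):
    extra = relative_dynamic_power(gain) - 1.0
    print(f"+{gain:.0%} clocks -> roughly {extra:.0%} more dynamic power")
[/code<]

      Under those assumptions, a +20% clock bump costs roughly +70% dynamic power, which is the explosion I mean.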

        • derFunkenstein
        • 4 years ago

        The 1080 has a 180W TDP. There’s plenty of room for more in a 1080Ti. The thermal argument is, frankly, asinine.

        And to think there’s not enough room for a 1080 Ti when the vanilla 1080 isn’t doing 2160p60 is also kind of insane. It’ll be like $800, but it’ll be there.

          • Laykun
          • 4 years ago

          Even 1440p144 is a difficult target and my 980Ti is certainly not up to the challenge. 30% higher performance compared to my 980Ti still won’t cut it so I really want to see a 1080 Ti variant and what it has to offer.

          • ptsant
          • 4 years ago

          I am aware of the TDP. However, with the stock cooling, the temperature is stuck at 85°C and clock speed varies with the workload, so it is limited by temperature. Yes, you can improve the cooling and increase clocks, as I noted above, but then you will probably run into the bandwidth limitations.

          Maybe a 1080 Ti will be made for marketing purposes, but as a meaningful purchase it will have to do much better than the aftermarket editions of the 1080, and those will probably exploit all the thermal, frequency and bandwidth headroom. In my opinion the 1080 Ti will be a minor upgrade, only there to counter Vega at a marketing level.

      • Ninjitsu
      • 4 years ago

      A Ti in six months is hard, given the circumstances. I think March would be a better time frame for that. You [i<]may[/i<] see a $1000-$1500 Titan with HBM in Nov/Dec, but the chances for that are really slim imo.

      • liquid_mage
      • 4 years ago

      I was just going off the last 2-3 release cycles with Nvidia. They tend to release a Ti card halfway through their release cycle, so it may be 9 months, and I was taking a wild guess at the price. And yes, I know the last few Tis were based on the Titan cards.

    • gecko575
    • 4 years ago

    Mandatory “no cheese option!?” comment

      • Redocbew
      • 4 years ago

      Cheese as a GPU. Full of melty goodness.

    • djayjp
    • 4 years ago

    I’ll wait for the real (big) Pascal, thank you (with HBM2).

      • jts888
      • 4 years ago

      Better hope for a GP102 or something then, since GP100 looks almost completely targeted towards the HPC/GPGPU market segment.

      As seen in Hawaii, 2:1 FP32:FP64 support just wastes power and die area in consumer cards where the FP64 throughput gets massively handicapped anyway.

        • Airmantharp
        • 4 years ago

        At some point, we’ll just start referring to big (insert Nvidia codename) as the gaming version and not the HPC version, and we’ll stop seeing HPC die in consumer Geforce products altogether.

        This is not a correction, of course, because we haven’t reached that point yet; there has been only one instance so far of a ‘big’ die that wasn’t HPC-focused. But the day will likely come as consumer workloads and HPC workloads continue to diverge.

        • djayjp
        • 4 years ago

        For sure, FP64 is completely unnecessary for consumers. We’ll get an optimized, big Pascal. It happened with Kepler (GTX 680 -> 780 Ti) and it’ll happen with Pascal.

      • leor
      • 4 years ago

      Same here, it will come out eventually

      • Kougar
      • 4 years ago

      Exactly what I was going to post. Given the relatively small physical size of the die, there is plenty of room to add 50% more of everything and add HBM2. It might drop clock speeds some, though…

        • djayjp
        • 4 years ago

        Heck, double even 😀

      • Laykun
      • 4 years ago

      Yeah me too. This card is not a considerable jump up from my current 980Ti, I’d much rather a ‘big’ Pascal card.

        • beck2448
        • 4 years ago

        The custom-cooled, OC'd-to-the-max Big Pascals should be sick!

      • hubick
      • 4 years ago

      Yep. This needs to be an option. I want Battlefield 1 running on Ultra on my 4K screen at 60Hz. I want Star Citizen running in my Rift CV1 at 90Hz. I can hold out with my 980 until the Pascal Titan arrives.

        • djayjp
        • 4 years ago

        Hell yeah :D. Not to mention once 4K VR arrives, sooner rather than later.
