Nvidia confirms GeForce GTX 1070 specifications

Ever since Nvidia’s announcement of its 10-series graphics cards with Pascal GPUs onboard, the internet has been abuzz about the potential performance of the GeForce GTX 1080. Today, we got a look at the first independent performance numbers from the new card, but we still didn’t know much about the 1080’s little brother, the GeForce GTX 1070—at least, until now. PCWorld says it confirmed official specifications for the new card with Nvidia after a series of leaks.

The specs PCWorld is reporting for the GTX 1070 pretty much align with our expectations for a cut-down version of the GTX 1080. Here are some highlights. We've included the GTX 970 for comparison's sake since that's the card the GTX 1070 aims to replace.

                     GTX 1070      GTX 1080      GTX 970
GPU                  GP104         GP104         GM204
CUDA cores           1920          2560          1664
Memory               8GB GDDR5     8GB GDDR5X    4GB GDDR5
Boost clock          1600MHz       1733MHz       1178MHz
TDP                  150W          180W          145W
Process technology   16nm          16nm          28nm
Release price        $380          $599          $329

There are two major changes between the GTX 1080 and the GTX 1070. The GP104 GPU inside the GTX 1070 has a quarter of its streaming multiprocessors disabled, leaving it with a total of 15 functional units. Those SMs are good for 1,920 stream processors versus the 1080's 2,560. While we don't have stats on the base clock of the 1070, its boost clock is 1600MHz, about equal to the 1607MHz base clock on the 1080.
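
If you want to check the math, those core counts fall straight out of Pascal's SM arithmetic. Here's a quick back-of-the-envelope sketch using the 128 CUDA cores per SM that Nvidia quotes for GP104. It's our own illustration, not anything from PCWorld's report:

```python
# GP104 ships with 20 streaming multiprocessors (SMs), each with 128 CUDA cores.
CORES_PER_SM = 128
FULL_GP104_SMS = 20

def cuda_cores(active_sms: int) -> int:
    """CUDA core count for a GP104 part with the given number of active SMs."""
    return active_sms * CORES_PER_SM

print(cuda_cores(FULL_GP104_SMS))      # GTX 1080: 2560 (fully enabled chip)
print(cuda_cores(FULL_GP104_SMS - 5))  # GTX 1070: 1920 (a quarter of the SMs fused off)
```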

The 1070 also gives up GDDR5X video memory in favor of GDDR5. We don't know just how much of a performance difference GDDR5X makes on its own, so it's hard to say how big of a deal it is that the new card is sticking with good old GDDR5 right now. GDDR5X only recently entered volume production, and supply of the new chips may be a reason why Nvidia chose to go with regular GDDR5. Overall, these specs do answer some of our questions about where the GeForce GTX 1070 is positioned between the GTX 1080 and the top-end cards from the last generation, but we'll have to see how it performs in the lab to fill in the rest of the blanks. 
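
For a rough sense of the bandwidth gap, here's the peak-bandwidth math, assuming the memory speeds reported so far: 8 GT/s GDDR5 on the GTX 1070, 10 GT/s GDDR5X on the GTX 1080, and the GTX 970's 7 GT/s for reference, all on 256-bit buses. These are theoretical ceilings only; real-world throughput also depends on Pascal's improved color compression:

```python
# Peak memory bandwidth = bus width (in bytes) x transfer rate.
# Assumed transfer rates: 8 GT/s GDDR5 (1070), 10 GT/s GDDR5X (1080), 7 GT/s (970).
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * transfer_rate_gts

print(peak_bandwidth_gbs(256, 8))   # GTX 1070: 256.0 GB/s
print(peak_bandwidth_gbs(256, 10))  # GTX 1080: 320.0 GB/s
print(peak_bandwidth_gbs(256, 7))   # GTX 970:  224.0 GB/s
```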

Comments closed
    • Ninjitsu
    • 4 years ago

    According to GeForce.com it’s 1506 MHz base clock.

    Additionally I find the 1920 shader number amusing (1920×1080). Is shader cores per horizontal pixel the next big metric to look out for? XD

    [url<]http://www.geforce.com/hardware/10series/geforce-gtx-1070[/url<]

    Neat summary table here, in case the GeForce site is too 1337:
    [url<]http://www.anandtech.com/show/10336/nvidia-posts-full-geforce-gtx-1070-specs[/url<]

    EDIT: TL;DR -> 1070 ~ 71% of 1080's perf.

    • Anovoca
    • 4 years ago

    I always find it amusing that these cards are physically the exact same, only extra work was put in to throttle them down, and then for the EXTRA effort we the consumer get them cheaper. In any other type of business this model would be considered lunacy. It would be like Pizza Hut only baking pizzas at one size but then they throw slices away so they can sell it to you at a different price point.

      • deruberhanyok
      • 4 years ago

      That’s not really the whole story. Typically the cut-down (not just throttled down) versions didn’t meet the standard needed to be a full version, and so the company has two options: salvage the “bad” GPUs, disable the parts that don’t work, and sell it at a lower profit, or just trash them.

      So in this case, a little extra effort allows them to still make some kind of profit instead of just throwing out a bunch of processors. Plus they can sell to two different segments of the market: the high-end enthusiast (1080) and the mainstream enthusiast (1070), so their potential marketshare is much larger than it would be if they only sold the 1080s, too.

      (Later in a product lifecycle when yields are much higher, the question of whether or not there are still enough failed GPUs to do so or if they are intentionally crippling processors comes up, but that’s a question that has always been asked).

      But this is the way these things have gone for well over a decade now.

        • Anovoca
        • 4 years ago

        ah, didn’t know that. As long as those stream-processors aren’t needlessly going to waste.

          • Ninjitsu
          • 4 years ago

          Same tends to happen with CPUs. 8-core Core i7 die with some defects? Fuse off two cores and supporting elements and sell it as a hexacore for less.

            • tipoo
            • 4 years ago

            AMDs tri-cores used to be a decent low end value due to the exact same business process. On some motherboards people could often unlock the fourth core too, sometimes only stable at a lower clock than the other three, but still a fourth core for “free”.

      • sparkman
      • 4 years ago

      whatever the market will bear

    • Peanut
    • 4 years ago

    The gtx 1070 looks like it’s going to be a very good card and it works well with my budget so I will be getting one but I’m also curious to see how AMD will do with their new cards as well. Should be fun!

    • south side sammy
    • 4 years ago

    2 words: SINGLE SLOT.

      • DrCR
      • 4 years ago

      1 word: WHY?

    • rudimentary_lathe
    • 4 years ago

    I’m hopeful that Polaris 10 will be competitive with the 1070. Daddy needs a new GPU for some occasional 1440p gaming, and I want to support the red team this go ’round due to its support of open standards (FreeSync). The 1080 and Vega later this year are out of my price range.

    It’s been rumoured that Polaris 10 will offer 390X performance at a better price, and while that would be most welcome, it would also be somewhat disappointing given the 1070 will supposedly deliver 980 Ti levels of performance.

    If there isn’t a Polaris 10 card capable of competing with the 1070, then we have a long wait until presumably cut down Vega occupies the space.

      • DoomGuy64
      • 4 years ago

      As long as polaris is cheaper and faster than a 390X, it’ll be a good card. We don’t really need 980Ti performance levels, unless you’re trying to play on a 4k monitor. What sells AMD today isn’t 4k, it’s 2k with 144hz freesync. Nvidia doesn’t have anything in that price/perf category, and what they’re trying to sell people on is high end displays like VR. Both VR and Gsync price Nvidia out of the mid-range market, so unless the 1070 suddenly supports freesync, polaris will still do well.

      Sometimes I wonder if Nvidia is deliberately giving the mid-range market to AMD. They could easily take it by supporting adaptive, but they’ve chosen to lock themselves in to high end displays. Perhaps it’s just more profitable to not compete with AMD. Right now both sides win from not directly competing with each other. AMD monopolizes the mid-range, while NV monopolizes the high end.

        • ImSpartacus
        • 4 years ago

        There’s no chance that Nvidia gives up the mid range. They have a ton of mindshare as being “the best”. They get a surprisingly noticeable bump from the halo effect that the 1080 provides for at least the next six months (maybe longer if we get an overpriced gp100-based or dual gp104-based titan in early 2017).

        Likewise, it’s not like amd postponed vega because they wanted to give up the opportunity for a halo card. They have to take care of their semi-custom customers (apple, 4K consoles, etc) and Polaris 10 & 11 are absolutely perfect in that role. And hbm supply issues probably helped the decision as well.

        Competition is alive and well.

          • DoomGuy64
          • 4 years ago

          You can pretend they're not, but without supporting adaptive nvidia is effectively throwing in the towel for the mid-range market. 1070 buyers either have to buy into gsync at a $100+ premium, or be stuck playing on 1080p monitors to eliminate tearing and judder. AMD has a monopoly on the total cost to own mid-range market, all because nvidia is refusing to support adaptive.

          Anyone buying a cheaper adaptive sync monitor will be locked into polaris, since nvidia isn’t supporting industry standards. So yeah, Nvidia has completely given up the mid-range. The 1070 isn’t mid-range, it’s the beginning of the high-end market.

          If polaris can even somewhat compete against the 1070, it will be a no-brainer purchase for anyone looking at price/perf. The total cost of ownership will be around $200 cheaper than the nvidia equivalent. I know Nvidia has a lot of fanboys, but I think a good percentage will be looking twice at saving $200 by going AMD.

          Adaptive is a big deal. I can’t even look at nvidia now that I have a MG279, and that’s on them. Otherwise, I would be interested in the 1070. That’s why I’m saying they’ve given up the mid-range. I’m not spending $800 to buy a gsync monitor just so I can use the 1070. I’ll be sticking with AMD until Nvidia realizes they’re only screwing themselves.

            • chuckula
            • 4 years ago

            What is this huge price premium again?

            I just bought a 2560×1440, 144Hz Dell G-sync enabled monitor for $515. The absolute cheapest monitor (an Acer) that I could find with the same resolution and refresh, and without any adaptive sync whatsoever including Freesync, was $470. That’s a $45 difference and I highly doubt it was 100% due to G-sync support.

            Incidentally, the g-sync works under Linux. Freesync support is still in development and still isn’t working in actually released drivers from AMD.

            • DoomGuy64
            • 4 years ago

            Total cost of ownership = Monitor + Video card. Both are cheaper. Thanks for the -3, btw, as I see you fight your debates via gold status downthumbs, logical fallacies, and snark, instead of having anything substantive to say.

            Yes, you can cherry pick sale prices, and no those prices do not hold from day to day. The price difference still exists, whether or not you get a sale, and AFAIK no sale has yet put a gsync monitor below a freesync monitor, and that’s because the nature of the FPGA module will always mean Gsync costs more, not to mention the typical nvidia lock-in price gouging. Sales are the exception here, not the norm.

            Not only that, but you’re conveniently ignoring the fact that Adaptive is a VESA standard, while Gsync is not. Nvidia can support Adaptive, but AMD cannot support Gsync. Only die hard fanboys would buy into a more expensive and proprietary monitor technology. I bought my MG279 not only because it was cheaper, but also because I had hoped nvidia would support it in their newer cards. They haven’t, so they’ve effectively lost me as a customer until they support the industry standard.

            • chuckula
            • 4 years ago

            Ok, anti-Nvidia propaganda blah blah blah.

            Here are facts:
            My monitor: $45 more than the absolute cheapest [b<]NON-FREESYNC[/b<] model that I could find. Assume that Freesync monitors are actually "free" (they aren't): "Increased" cost: $45. Oh, but since Freesync still isn't working under Linux, the G-sync actually delivers some degree of value.

            Now let's look at the GPU: $700 for a GTX-1080... except I won't pay that much, or if I do it will be for a factory OC'd version that's noticeably better than the default version. But let's stick with $700. That's a whopping $50 more than the R9 Fury-X, which was hailed as a miracle not that long ago. Want to make any bets at who is going to win on TR's price/performance chart even with a $700 price for a stock-clocked GTX-1080?

            So, our "TCO" exorbitant price premium over AMD for both monitor & GPU using unrealistically favorable parameters for AMD is... $95. That's not $500, $300, $200, or even $100. And remember that Freesync still literally does not work at all under Linux and I'm not in the mood to wait for AMD, so a large portion of that "overcharge" from Nvidia is that they actually deliver support.

            Oh, and as an addendum, if we really want to talk TCO, over the course of two or three years I'll probably get all that money back in power savings compared to AMD's lineup, $500 "low power" Furry Nano or not.

            • DoomGuy64
            • 4 years ago

            Linux = logical fallacy. Period. Full stop. Dx12 doesn't work in linux either. So what? Freesync as a hardware standard does indeed work under linux. What doesn't work is AMD's driver. What you said, as you said it, is a lie, not to mention a desperate fallacy that only a fanboy would bring up because he couldn't find any other straws to grasp at. Desperate straw grasping going on here, that's all it is.

            My answer: so what? Takes one driver update to fix, not that anyone really games on linux. Linux "gamers" are more likely to VM/dual boot windows than game natively anyway. For all we know, freesync could work through a VM, which would completely negate your desperate straw grasping non-argument.

            You know what else could work in linux? Adaptive sync on a 1080. The only reason why it doesn’t is because Nvidia loves their vendor lock-in, price gouging garbage, and that completely prices them out of the mid-range.

            I don’t even know why you’re digressing about $700, the 1080, or the FuryX. That isn’t mid-range. I never said Nvidia wasn’t competing in the high-end market. I specifically said: Nvidia has a monopoly on the high end, while AMD is monopolizing the mid-range. Nvidia is not directly competing with AMD in the mid-range. They’re offering products in different markets. AMD might be attempting to break into the high-end market here and there, but that’s completely irrelevant to the point I’m making.

            If Polaris debuts at $300-350, while Nvidia continues to not support adaptive sync, they will effectively not be competing in the same mid-range market segment. The total cost of ownership will be that much lower for AMD hardware.

            • chuckula
            • 4 years ago

            [quote<]Linux = logical fallacy. Period. Full stop. [/quote<]

            Yeah, that does sum up AMD's attitude towards Linux quite succinctly. Nvidia doesn't have that attitude. Nvidia also makes a profit every quarter. AMD? Not so much. Maybe it's AMD that needs to change with the times, not me.

            The rest of your wall-o-text is line noise.

            • DoomGuy64
            • 4 years ago

            [url<]https://www.reddit.com/r/linux_gaming/comments/3lnpg6/gpu_passthrough_revisited_an_updated_guide_on_how[/url<]

            Otherwise, use nvidia for linux. Freesync has nothing to do with AMD's linux driver. Two separate things. Your argument is a red herring. Nothing to do with my original point. What's the matter, can't address the point? Or is this how the ol chuckster addresses all his arguments, by using fallacies?

            • Tirk
            • 4 years ago

            To be fair the Titan X and 980ti were also hailed as a “miracle not that long ago” and they’ll be in the similar position that the Fury X will be when the 1080 comes out. In fact the only reason I mentioned the horrible price/performance of the Titan X is that is exactly the card Nvidia placed on their slides to advertise the 1080, not the Fury X or 980ti.

            Yes, both of you are correct that there is value added depending on what a person wants on both solutions currently. Doom values the open Vesa standard more than Linux support probably because like many others he doesn’t use linux. You value the current linux support of GSync which is perfectly fine but that shouldn’t discount Doom’s preference in supporting Vesa’s open standard.

            And please don’t bring up that power red herring, people get the cards that are in their power budget. Shown time and time again the power costs over time shift drastically depending on where you live and how much your electricity costs. You know if you don’t own a computer you’ll save even more power but that doesn’t make it always the right decision now does it? People use electric ovens and AC units and they use far more electricity than a computer does so lets not advocate for eliminating all the amenities of a modern society shall we.

            Both of you should get an extra hour of sleep it might make both of you more amiable people.

            • ImSpartacus
            • 4 years ago

            Please don’t subtly paint this as some AMD/Nvidia fanboy discussion. That’s a tired strawman.

            Both AMD and Nvidia are headed by smart folks with tons of experience debuting consumer graphics card lineups. While neither can debut a full 14/16nm lineup all at once, they both will be competitive pretty much throughout the gaming graphics card spectrum.

            The only possible exception is that a Vega-less amd might not have anything above the ~$400/500 mark as fiji is amd’s only weapon against the 1070 and they have nothing to put up against the 1080. However, what amd DOES sell will be competitive.

            Variable refresh rate monitors are not that big of a deal. They are still out of the price range of most consumers and the g-sync-v-freesync debacle scares off others (including myself). It’s simply not materially relevant to the 2016 consumer graphics cards.

            • DoomGuy64
            • 4 years ago

            I’m not talking about the 1080. I’m [i<]specifically[/i<] saying Nvidia is not competing in the mid-range market via total cost of ownership, and vendor lock-in. Not supporting adaptive sync effectively prices any "mid-range" nvidia card out of mid-range, not to mention current owners of adaptive monitors will ignore any nvidia product that doesn't support their monitor. Nvidia is primarily competing in the high-end market, and has defacto abandoned mid-range. You can't compete in the mid-range market without supporting Adaptive Sync.

            • ImSpartacus
            • 4 years ago

            The variable refresh monitor market just doesn’t matter right now, so I don’t think it’s fair to include that in cost of ownership.

            As long as there are competing standards only compatible with certain GPUs, it's just not relevant in the big picture.

            • Airmantharp
            • 4 years ago

            It’s definitely a red-herring argument to claim that VVS is more or less essential.

            I’ll freely admit that *I* won’t be upgrading my 30″ 1600p monitor to something that lacks VVS, but I’m not typical; there are far more people just trying to play games in the first place than people that are trying to get the best possible experience.

            • DoomGuy64
            • 4 years ago

            A lot of people still have 1080p monitors, which I’ve mentioned, and as such will not be an issue for the 1070. The problem lies with people upgrading their monitor, because most people will want a cheaper monitor that adheres to universal VESA standards and compensates for frame variance. In that exact scenario, Gsync prices nvidia out of mid-range perf/$ buyers, and forces people to choose either higher prices with nvidia, or perf/$ with AMD. Nvidia currently does not offer mid-range perf/$ esp when you throw gsync in the mix. It’s enough that nvidia charges extra for their cards. Charging extra for the monitor will only peeve off value conscious people, and drive those people away.

            Of course, that’s only with new monitors, as anyone sticking with their old 1080p screen won’t have to worry about it since the 1070 should never drop below 60fps @ 1080p.

            It’s a multi variable equation, but overall I don’t think nvidia is appealing to the mid-range market at this time.

            • Airmantharp
            • 4 years ago

            It’s *your* multi-variable equation. Most people simply won’t upgrade their monitor, as 1080p60 is still plenty for most desktop and gaming use for most people.

            • DoomGuy64
            • 4 years ago

            1080p60 is the monitor equivalent of WinXP. People have been holding out because of cost and frame drop issues. Adaptive sync solves that. Once prices drop to levels affordable to the mid-range crowd, I don’t see 1080p sticking around.

            People with 1080p can already tell what they’re missing by playing with VSR/DSR. Unlike win8, there is no downside to upgrading other than having to deal with Nvidia’s reprehensible lock-in tactics, which even at that, adaptive still feels like the way to go.

            Nvidia uses eDP adaptive for laptops, not a gysnc module. It’s very possible someone with the right motivation could hack the driver and enable adaptive on their desktop cards. Either way, it doesn’t feel like Nvidia can hold out forever, and this also reflects poorly on anyone apologizing for their tactics.

            I mean seriously, do any of you fanboys think your reputation is not going through the toilet by defending gsync? You’re only putting a red flag on yourself as someone who is completely obsessed with defending consumer unfriendly tactics. You lack any objectivity, and are downright delusional to obsessively defend nvidia when they’ve taken an indefensible position.

            1080p is the past. WinXP is the past. The future is 1440p+ and adaptive. Holding on to a 1080p monitor is only a defensive position people have taken because Nvidia has priced adaptive out of reach of the average consumer. You can’t logically defend it. If Nvidia doesn’t start supporting adaptive, people will simply leave the reservation.

            Gsync is to Nvidia, as Win8 was to Microsoft. It has already failed, and the more you defend it, the more you lose. Win10 is right around the corner. You know it, we know it. Nvidia must support adaptive at some point. The only question is: how low does nvidia wish to ruin its reputation before supporting the industry standard?

            edit:
            [url<]http://wccftech.com/amd-takes-gpu-share-nvidia-q1-2016/[/url<]

            It's already begun. AMD is gaining ground, and it's all because Nvidia doesn't respect its mid-range base.

            • f0d
            • 4 years ago

            it has nothing to do with amd vs nvidia

            most people just dont know or care about what VRR is, as long as they can play their dota 2 or wow or whatever they are playing they dont care what 1440p or VRR is

            trying to get the “average gamer” to change their monitor is like trying to pull teeth with tweezers in one hand while juggling bowling balls in the other…..impossible

            the only time the average gamer purchases anything is when it stops working or when it can no longer play the games they want

            you would be surprised how little the average pc gamer knows about hardware

            (coming from a freesync user that talks to lots of pc gamers daily in various games)

            • travbrad
            • 4 years ago

            I think Nvidia should just support the open standard too but there is very little reason for them to do that until a lot of people actually care about VRR. Right now most gamers have never even heard of it let alone know what it does.

            I would attribute AMD’s market share gains to the fact they have some good cards right now and had a “refresh” of their product line more recently than Nvidia (up until now anyway), not so much because of VRR. Their market share also had nowhere to go but up. Only a few years ago they had over 40% market share, so they are still regaining what they have lost.

            If you just read TR you’d think everyone has 4K or 1440p VRR monitors, but in reality that is a small minority of PC gamers.

            • DoomGuy64
            • 4 years ago

            Disagree. You guys are projecting the ignorance of console peasants onto PC gamers. PC gamers aren’t stupid, they’re price conscious. The reason why 1080p is so popular isn’t that PC gamers are retarded, because they’re obviously smart enough to upgrade their CPU, GPU, and RAM, it’s that Gsync is priced out of their reach.

            PC gamers will buy VRR when it becomes mainstream, which it absolutely will, being the VESA standard. At a certain point, no future monitor will NOT support adaptive, aside from Nvidia’s lock-in scamonitors.

            • f0d
            • 4 years ago

            [quote<]The reason why 1080p is so popular isn't that PC gamers are retarded, because they're obviously smart enough to upgrade their CPU, GPU, and RAM[/quote<]

            nobody said they were retarded or ignorant - just because the majority of pc gamers dont know about the hardware doesnt make them in any way "retarded" or "ignorant" - those are your words not mine

            someone who owns a car isnt "retarded" if they cant rebuild the engine, they just dont have the skillset or knowledge to do so whereas they probably have the skillset and knowledge for other things

            also as far as i have seen the majority just buy a whole new box instead of "upgrade their CPU, GPU, and RAM" - once their computer isnt fast enough they just buy another one

            • DoomGuy64
            • 4 years ago

            Regardless, whole box or not, you are assuming/projecting people who “just buy a whole new box” are going to ignore new monitor tech, and stick with their old 1080p60 monitors.

            THAT’S FREAKING INSANE.

            4k is starting to become mainstream. It’s in every mall’s electronics section, you see it [i<]everywhere[/i<]. PC gamers are not oblivious to monitor tech, and adaptive is the VESA standard. STANDARD. That means all new monitors BY DEFAULT will soon support adaptive. Not supporting an industry standard to lock your userbase into buying your videocards is morally bankrupt, and outright rage inducing. I feel like asking, WTF is wrong with you people who defend this?

      • brucethemoose
      • 4 years ago

      There’s a smaller Vega that might use GDDR5X, and a bigger HBM one that’ll be out of your price range. I’m betting that smaller one is the GP104 competitor.

      • bfar
      • 4 years ago

      There was also a rumor suggesting that it’s the mobile part that offered 390X performance. We can all get excited again!

      • Chrispy_
      • 4 years ago

      For me the reason I’ve switched my personal gaming boxes to Nvidia is power efficiency. I own a couple of 970s and have kept a faster 290X and given away the other 290X because they’re noisy.

      Part of the noise is the lack of decent blower options, which are essential to a SilentPC build with sound-absorbing foam and indirect airflow in a case. Having to use open coolers like on my 290X cards is silly because they just heat up the inside of the case too much and then crank up their fans.

      The other part of the noise is that they consume 250W compared to the 970’s 145W. Doesn’t matter how good your cooling is if you add almost 75% more heat.

      I’m really hoping Polaris is competitive in term of performance/Watt. Not everyone needs the top-end graphics cards, but everyone can appreciate a cool and quiet PC.

    • Ushio01
    • 4 years ago

    Damn, that's not a small cut. That GPU's been hacked to death.

    No wonder Nvidia is pushing founders cards for the 1080 they are probably hoping sales will be low enough that people won’t notice the lack of stock and shitty yields.

      • Chrispy_
      • 4 years ago

      I don’t see your point at all;

      It’s 63% of the cost of a 1080 (380/599=63%)
      It’s 69% of the performance of a 1080 (15/20*1600/1733=69%)

      So, it is better value than the 1080 at stock and likely has much higher overclocking headroom. You can clock GTX970s to similar levels as the 980, making the stock clockspeed differences between 970 and 980 irrelevant.
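
      (A quick sketch of that arithmetic for anyone who wants to plug in different numbers; it assumes performance scales linearly with SM count and boost clock, which is optimistic but fine for a ballpark.)

      ```python
      # Chrispy_'s value comparison, parameterized. Performance is assumed to
      # scale linearly with SM count and boost clock, a rough approximation.
      def relative_perf(sms, boost_mhz, ref_sms=20, ref_boost_mhz=1733):
          return (sms / ref_sms) * (boost_mhz / ref_boost_mhz)

      print(round(380 / 599, 2))                # ~0.63 of the 1080's price
      print(round(relative_perf(15, 1600), 2))  # ~0.69 of the 1080's estimated performance
      ```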

        • jts888
        • 4 years ago

        1070 overclocking may not matter substantially if the card is bandwidth bound, which could certainly be the case if it suffers from the 224b+32b split bus issue and ends up with only as much usable bandwidth as a non-Ti 980.

        Improved delta color compression (reportedly ~20%) is nice, but it and even 20% faster memory buses don’t provide as much effective bandwidth as a 384b bus would have.
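
        (For scale, here's the rough bandwidth math behind that comparison; the ~20% compression gain and the 384-bit alternative are the figures floated above, treated as assumptions rather than confirmed specs.)

        ```python
        # Rough bandwidth comparison from the figures above (all GB/s, peak numbers).
        # The ~20% compression gain and the 384-bit alternative are assumptions.
        def raw_bw(bus_bits, gt_per_s):
            return bus_bits / 8 * gt_per_s

        gtx_1070_effective = raw_bw(256, 8) * 1.2  # ~307 GB/s with ~20% better compression
        gtx_980_raw        = raw_bw(256, 7)        # 224 GB/s, the non-Ti 980
        bus_384b           = raw_bw(384, 7)        # 336 GB/s, a 980 Ti-style 384-bit bus
        print(gtx_1070_effective, gtx_980_raw, bus_384b)
        ```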

        • bfar
        • 4 years ago

        It’s still a big cut though. That might bring it down to ballpark 5% faster than a 980ti. Now if Polaris is good (and cheap) there could be a real scrap starting up. I’m beginning to see why Nvidia is gouging the early adopters so hard.

        • Klimax
        • 4 years ago

        Using your estimates, the 1070 should be slotting in around the 980ti/Fury X depending on the game. Pascal might pose a severe threat even to Polaris. There appears to be enough room for a GP106 or a further salvaged GP104 to eliminate Polaris 10.

          • ImSpartacus
          • 4 years ago

          Yeah, a lot of people are speculating that a hypothetical GP104-based 1060 Ti could compete with the top-end Polaris 10 part. Though I think Nvidia will relegate a full blown GP106 for the min vr spec part to compete with Polaris 10 on the other end.

          But we’ll see. I expect Polaris to at least be competitive.

        • ImSpartacus
        • 4 years ago

        Remember that the price factor is 450/700 (64%) for all intents and purposes. Not much difference, I know.

        But in general, I don’t think we should give Nvidia a pass on this founder’s edition business. I get why they are doing it, but the “regular” prices are effectively a scheduled price cut – likely around the time Polaris puts pressure on the mid-range.

      • Ninjitsu
      • 4 years ago

      Well previously the x70 cards were close enough to the x80s that the more expensive ones didn’t have a point.

      This probably gives Nvidia more breathing room, and to be fair the 1070 still won’t be bad for the money.

        • bfar
        • 4 years ago

        It all depends on Polaris price and performance.

          • brucethemoose
          • 4 years ago

          We basically know that Polaris is slotting itself below the 1080. Little Vega will be the GP104 competitor.

          • ImSpartacus
          • 4 years ago

          Polaris 10 is going to perform like Hawaii and probably be priced at no more than $300. We have a ton of official amd evidence of Polaris 10's performance profile, while the price is suggested by both leaks and Nvidia's lineup so far.

            • bfar
            • 4 years ago

            It’s all very woolly though. AMD are keeping very quiet about it.

            Edit: Back of an envelope calculation:
            Polaris 10 is 232mm². If it were built on the old 28nm node it would be roughly double the size at around 460mm². The 390X is 438mm² on the 28nm node, so Polaris probably has more transistors than Hawaii.

            This isn't really proof of anything, but based on size alone it clearly has the potential to be faster than the 390X. The price is the key. "Up to $300" could mean anything. If it were $200-250 it would fly off the shelves.
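
            (The same envelope math as a snippet, with areas in mm²; the roughly 2x density gain from 28nm to 14nm FinFET is an assumption, not a measured figure.)

            ```python
            # Envelope math from the post above (areas in mm^2; ~2x density scaling
            # from 28nm to 14nm FinFET is assumed, not measured).
            POLARIS_10_AREA_MM2 = 232
            HAWAII_AREA_MM2 = 438          # 390X's 28nm die
            ASSUMED_DENSITY_GAIN = 2.0

            equivalent_28nm_area = POLARIS_10_AREA_MM2 * ASSUMED_DENSITY_GAIN
            print(equivalent_28nm_area)    # ~464 mm^2 "28nm-equivalent", a bit bigger than Hawaii
            ```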

    • jra101
    • 4 years ago

    Should be 2560 CUDA cores for GTX 1080, not 2650.

      • DoomGuy64
      • 4 years ago

      It’s basically the nvidia equivalent to the 390, except faster.

        • Peanut
        • 4 years ago

        It’s not the nvidia equivalent of the 390. They are both on different levels entirely but we may soon find out what amd card best matches up against the 1070. Gonna be very exciting

    • Tristan
    • 4 years ago

    Perf of 1070 is the same level as 980 or 390X. For 300$ Polaris will be better choice than 1070.

      • travbrad
      • 4 years ago

      How do you figure? Based on the specs and what we have seen from the 1080 it should slot in somewhere between the 980 and 980ti. And how do you know the price and performance of Polaris already?

      You may be right and Polaris could end up being the better deal, but you seem awfully certain of how two unreleased and untested cards will compare to each other.

        • Tristan
        • 4 years ago

        Specs of 1070 are relative much worse than 970. In fact 970 is 980 Lite, with few shaders disabled, but with the same clocks and speed of memory. NV realized that it was mistake, and corrected this creating crappy 1070. With 75% shaders and 85% clocks of 1080 you can count only for 65% perf of 1080. And 20% slower memory can drag this even futher to 60% of 1080, the same level as 980. Compare this to 970 that was able to get 80-85% perf of 980.
        I know that many wannabes are excited about 1070 = Titan X empty promises, but downvoting won’t make 1070 so fast.

          • Freon
          • 4 years ago

          The die specs have the 1070 ahead of the 980 across the board when you consider clock speed. You’re throwing around percentages you’ve pulled out of thin air and are using them in meaningless compounded calculations. Please don’t quit your day job to start doing GPU analysis.

          Note how the 1080 handily beats the 980 Ti or Titan X despite a net drop in memory bandwidth (down from 336 to 320 GB/s), which is a good indication memory bandwidth efficiency is up a fair bit. So despite being on 256-bit GDDR5 yet again, the 1070 should not be limited to 980 performance due to memory bandwidth.

    • Ninjitsu
    • 4 years ago

    A [s<]bit lower performance than I had expected[/s<] (25-30% less was what I had thought), but this is still approximately equal to the 980 Ti.

    Now that the base clock is out, 25% less shaders and 7% lower base clock means at worst 30% less performance, which is good. At best the gap is more like 27%. Yay for more correct speculation 😀

      • strangerguy
      • 4 years ago

      It’s another case of labeling something as a failure because it fails to meet some absurdly high imaginary standard.

      • bfar
      • 4 years ago

      It keeps the 64 ROPS, so it’s just the CUDA cores and TMUs that get the chop. Nvidia are claiming it still beats the Titan x. But even 980ti speeds is a very good upgrade for anyone who skipped last year’s cards.

        • ImSpartacus
        • 4 years ago

        Remember the 980 Ti and Titan X are effectively on top of each other in typical gaming performance.

        Nvidia had to do that in order to shut down the Fury X (and they were largely successful).

        • tipoo
        • 4 years ago

        That’s interesting. I wonder if there will be situations where the resolution performance changes are similar but added effects hit the 1070 more, at least if you clock them the same with the same number of ROPs. Unless GDDR5X is the make or break at those resolutions.

    • Srsly_Bro
    • 4 years ago

    Saying $380 and $599 is misleading. I'm not sure if Nvidia made TR report those numbers without disclosing the price of the FE series at $449 and $699, for the GTX 1070 and GTX 1080, respectively.

    I hope Nvidia helping TR get a 1080 for the review didn’t come with that condition to downplay the increased costs of video cards this round.

      • Freon
      • 4 years ago

      I agree. There is a lot of gaming the pricing here on NV’s behalf. I think they may just be bracing for either success or failure on AMD’s part, plus getting in a healthy dose of early adopter gouging since they know people will be happy to pay a premium for earlier examples off the assembly line while they continue to ramp up production.

      But, thus is supply and demand. If you don’t like the pricing wait it out a few months for all the 3rd parties to start competing with one another. For now I agree, the cards should be reviewed as the $700 and $450 cards that they are.

        • Topinio
        • 4 years ago

        This.

        Launch price is launch price is launch price.

          • Klimax
          • 4 years ago

          Still, both apply. In this case launch != retail availability. IIRC both FE and OEM versions should be available at the same time or closely enough. (variation of "closely enough" reader-dependent…)

            • Zizy
            • 4 years ago

            Just reference is launching at those specific dates, so there is just a single launch price, and it is the high one.
            When and how much for OEM versions remains to be seen.
            So, until anything changes, prices are 700 and 450 and anything else is a misleading bullshit.

            • Klimax
            • 4 years ago

            Nothing about that in announcement. So barring evidence I missed, no. Thus last sentence is without evidence just bare assertion, nothing more. (Also could be considered a FUD)

            So, any links backing up your claim and assertion?

            • ImSpartacus
            • 4 years ago

            [url<]https://techreport.com/news/30096/report-founders-edition-geforce-cards-will-be-first-to-ship[/url<]

            Also, it just makes sense. The founder's edition just wouldn't sell against non-reference stuff that's so much cheaper.

            • nanoflower
            • 4 years ago

            That’s only true if businesses were going to sell the non-reference cards at MSRP. Given the interest I expect that most will sell the AIB and FE cards at the same (higher) price. At least while the interest is so high.

          • MOSFET
          • 4 years ago

          True, Topinio, true.

          Just that the prices reported now [i<]are[/i<] considerably higher than when 980/970/960 first hit Newegg. Also true that I haven't yet seen at what price point Newegg will launch the 1080/1070/1060.

    • Takeshi7
    • 4 years ago

    This doesn’t answer the most important question: Does it still have 7GB of fast RAM and 1GB of slow RAM similar to the way the 970 had 3.5GB of fast RAM and 0.5GB of slow RAM?

      • maxxcool
      • 4 years ago

      Dead Horse is Dead ..

        • morphine
        • 4 years ago

        Dead horse was never alive to begin with. You’d think that all the 970s suddenly became slow after that absurd brouhaha broke out.

          • Takeshi7
          • 4 years ago

          The performance hit was huge (~50%), so yes, I’d like to know if we can expect the same mutant memory setup in the GTX 1070.
          [url<]http://www.pcgamer.com/why-nvidias-gtx-970-slows-down-using-more-than-35gb-vram/[/url<]

            • morphine
            • 4 years ago

            Did you actually read that article? It shows a 1-3% performance drop difference between the same settings on the 980 and on the 970.

            And the bit about forcefully using the last 500MB is just plain wrong, as any card simply cannot allocate all of its VRAM to textures alone. You need room to store the various framebuffers, Windows’ UI in some cases, and so on.

            At most, one can say it was poor PR. Certainly not a technical problem.

            • Takeshi7
            • 4 years ago

            Yes, I did read the article.
            [quote<]Those are also average framerates, which don't address the problem some commenters have pointed out: dramatic framerate stutter at the moment the GTX 970 starts utilizing its final 500MB of VRAM.[/quote<]

            And even if it is only a PR problem like you claim, it is something that NVidia should be very clear about concerning the GTX 1070's memory setup. And since they haven't said one way or another, it makes me leery.

            • ImSpartacus
            • 4 years ago

            So where did the “huge” “~50%” statement come from?

            If the issue is inconsistent frame latencies, then that’s generally something that you can’t document with a single figure like “~50%”. It’s a complex issue. So I’m assuming that you’re discussing something else.

            • djayjp
            • 4 years ago

            Here’s a gaming test that shows frametimes 72% slower than a GTX 980: [url<]http://www.extremetech.com/extreme/198223-investigating-the-gtx-970-does-nvidias-penultimate-gpu-have-a-memory-problem/2[/url<]

            • f0d
            • 4 years ago

            yes and even the 980 was getting 33 fps at 4k (which is how they triggered it for the 970) which is unplayable

            the 970 issue was only an issue at the kinds of framerates you cant use

            • djayjp
            • 4 years ago

            Since when did 33fps become “unplayable”? Lol

            Seems like a framerate people can (and do) use. Heck, a number of games won’t even run at 60fps (either due to gpu or cpu limitation and/or poor coding).

            • f0d
            • 4 years ago

            since always?

            that 33fps “average” means there will be a lot of frames below 33fps and thats what the 980 got the 970 was 27fps which would mean lots of frames below 27 and in both cases thats unplayable

            imo 30fps minimum is still unplayable (looks so horrible) but i can accept that some people think its ok but thats a minimum of 30, both the 980 and the 970 got an AVERAGE of around 30 which isnt a minimum and thats unplayable for both cards

            edit: even the article itself mentions the same thing
            [quote<]However, the strength of this argument is partly attenuated by the frame rate itself — at an average of 33 FPS, the game doesn’t play particularly smoothly or well even on the GTX 980[/quote<]

            • djayjp
            • 4 years ago

            It really depends on the game and the consistency of its frametimes. If a game’s average fps of 33 doesn’t fall much below that then it can be very playable (I recall finding 28fps in the OG Crysis decent enough).

            Anyway, the point is that the situation will only deteriorate as more games require more memory, providing an experience significantly worse than what people were sold on and drastically shortening the expected usable life of the card (unless you love stuttering). At least nvidia has said they'd "help" people who are unhappy with their 970s by offering refunds or replacements (but they were pretty quiet about it).

            • djayjp
            • 4 years ago

            He’s right actually. Here’s a gaming test that shows frametimes 72% slower than a GTX 980:

            [url<]http://www.extremetech.com/extreme/198223-investigating-the-gtx-970-does-nvidias-penultimate-gpu-have-a-memory-problem/2[/url<]

            • Froz
            • 4 years ago

            That’s only relevant if you like playing at around 30 FPS (the numbers are from 4k test).

          • Krogoth
          • 4 years ago

          970 micro-shudders when it is forced to use its partitioned memory pool, a.k.a. the last 512MiB. That's why it doesn't show up in avg FPS scoring but it does show up in frame-time scoring. It has been verified by people who did stress-testing comparisons between a 980 and 970 under extreme conditions.

          Then again, by the time the 970 is forced to use the last 512MiB it is already overtaxed. It is similar to the issues with GK106's asymmetric memory setup. The only difference is that Nvidia's marketing failed to disclose this little tidbit until enthusiasts discovered the problem when they were stress-testing their 970s.

          It is like finding out that your "6-gear sports car" has shifting issues when it tries to go into the sixth gear ratio due to how the transmission was engineered, but the manufacturer's marketing department never disclosed this problem until after the fact.

            • Airmantharp
            • 4 years ago

            Even rolling 970 SLI at 1600p, I’m not hitting the point where it matters enough to care. This issue was a marketing failure, not an engineering failure.

            • Krogoth
            • 4 years ago

            970 SLI does run into significant micro-shuddering problems with 4Megapixel gaming and beyond with current and future content that [b<]needs[/b<] more than 3.5GiB of VRAM. You will probably upgrade that setup when the problem starts to become painfully apparent.

            • Airmantharp
            • 4 years ago

            No doubt I’ll upgrade, but it won’t be this generation- it’s just not needed. I’m more than happy to turn down a few settings and keep the >US$1000 that I’d need to shell out for this generation in order to appreciably eclipse the performance of my 970s.

            • Chrispy_
            • 4 years ago

            Quite often the difference between “Ultra” and “Very High” requires you to take two static screenshots, load them into Photoshop and perform an XOR filter on them to see any difference at all.

            “Ultra” is so frequently bad value for the performance penalty it causes.

            • Airmantharp
            • 4 years ago

            In older times it was certainly more obvious- where we were fighting for every little bit of IQ we could muster.

            I agree that it’s not like that today, with just how complicated graphics engines have become; I’d have to care more about minute details than actually playing and experiencing the games themselves and I honestly couldn’t give a rats.

      • Krogoth
      • 4 years ago

      That entirely depends on how the GP104 is binned, but considering how enthusiasts felt about the 970 debacle, I would imagine that engineers at Nvidia would try to avoid similar problems.

        • travbrad
        • 4 years ago

        Most of the “debacle” was that they didn’t disclose those details from the start, not the memory configuration itself. If 1070 provides good performance for the money I doubt having “only” 7GB would even matter to most people. I don’t even know what you’d have to do to use that much VRAM.

          • brucethemoose
          • 4 years ago

            [quote<] I don't even know what you'd have to do to use that much VRAM. [/quote<]

            People said the same thing about Tahiti when it came out. 3GB may have been excessive back then... But look how well it aged compared to GK104. I bought my 7950 with that in mind, and it ultimately paid off.

            • biffzinker
            • 4 years ago

            The new DOOM at High Quality Settings, and 1080P doesn’t even use up all 3GB on my 7950 from looking at a log generated by GPU-Z. GPU usage sits at 70-80%, and out of the 3GB of GDDR5 it only peaked at 2.4GB.

            • Krogoth
            • 4 years ago

            It is because of the ID Tech 6 engine (really just a refined ID Tech 5). Id Software before Carmack left may not have made the most innovative or best games, but they certainly know how to make their engines and netcode.

          • ImSpartacus
          • 4 years ago

          It’s effectively required for marketing purposes because Polaris 10 will likely have 8GB of vram. Therefore both the 1070 as well as the hypothetical GP104-based Polaris 10-competing 1060 Ti will probably have 8GB of vram even though 7GB or 6GB would be plenty.

      • jihadjoe
      • 4 years ago

      A quick look at the specs will show there isn’t any funky partitioning going on. It’s 256-bit, and 7GT/s of 8GB GDDR5.

      The 970 was 226-bit which made addressing the final .5GB problematic.

        • ImSpartacus
        • 4 years ago

        I think the issue was that you couldn't look at the specs of the 970 and detect that quirk in it, so we likely couldn't do the same for the 1070.

        But honestly, I’d be amazed if Nvidia did that to the 1070 this time around. They priced it well above the 970, so there’s probably no special yield-improving tricks involved.

        • Takeshi7
        • 4 years ago

        Thank you, I appreciate you answering my question unlike all of the other butthurt fanboys in this thread who deflected the question and downvoted me.

        • auxy
        • 4 years ago

        224-bit, and you mean 8GB of 8GT/sec GDDR5. [url=http://www.geforce.com/hardware/10series/geforce-gtx-1070<]Source.[/url<]

        Anyway it's not as if looking at the specs for the GTX 970 would have told you about the memory partitioning. NVIDIA always listed it as 256-bit and 4GB just like the GTX 980.

        To be clear I don't think the GTX 1070 has the same kind of problem. I doubt NVIDIA would have done the same thing twice, even though the backlash didn't seem to have affected their sales...

      • homerdog
      • 4 years ago

      The 1070 has the full 256bit bus, no 970 like split. Was confirmed by NVIDIA, you can google it. In fact you probably should have done that before posting =)

    • NovusBogus
    • 4 years ago

    Slightly higher TDP than the 970 concerns me, as it will likely prohibit mini ITX implementations. I had hoped it would be in the 120-130 watt range.

      • DancinJack
      • 4 years ago

      Clock it down 200MHz.

      I’m sure someone will build one anyway.

      • bhtooefr
      • 4 years ago

      There have been Mini-ITX GPUs as high as 175 W TDP, though. (R9 Nano, anyone?) And, Nvidia GPUs up to 170 W have been sold as Mini-ITX variants, too – the GTX 670 and 760 were at least.

      • HERETIC
      • 4 years ago

      Looking at the 1080 being close to double the performance of the 970: when/if NV cuts that die in half and releases it as a midrange part, it should be able to run at higher clocks and get close to 980 performance at a nice low power draw…
      Interesting times ahead…

      • bfar
      • 4 years ago

      There’s mini itx and there’s mini itx. Mine has a GTX 580 cooking away inside it!

        • JustAnEngineer
        • 4 years ago

        My mini-ITX Fortress FTZ01 holds a GeForce GTX980Ti. With a ginormous 28nm GPU putting out so much heat, the vaunted NVidia reference cooler isn’t very quiet.

        Power consumption is where 14/16nm is really going to help. This new generation offers almost double the gaming performance per watt when compared to GPUs fabricated on the older process.
