In the lab: Nvidia’s GeForce GTX 1080 Ti graphics card

Comments closed
    • Kretschmer
    • 3 years ago

    Can’t wait to snag one of the new 35″ 100Hz displays and this baby. Gaming for kings!

    • USAFTW
    • 3 years ago

    Jeff, will you be testing the card with the DX12 optimised driver they talked about recently? I think it was version 378.74 or something.

    • Longsdivision
    • 3 years ago

    I do have one request: that you put the GTX 1080 "AIR" in the review, so some of the users here can get a comprehensive review of both cards.

      • Jeff Kampman
      • 3 years ago

      Not sure which card you’re referring to. The GTX 1080 Founders Edition is representing the GTX 1080 for this review.

        • Longsdivision
        • 3 years ago

        That was an attempt at bad humor. I was referring to right after TR transitioned from Wasson to you, when TR couldn't get its hands on a sample to review on release day. Someone made a stink about TR's inability to get a review out, and you said you'd just review the "air" in the space the card was supposed to be.

    • south side sammy
    • 3 years ago

    Must have been a slow day at TR.

      • derFunkenstein
      • 3 years ago

      It was a Saturday, so….

    • donkeycrock
    • 3 years ago

    Do you plan on doing all future video card reviews with AMD and Intel CPUs?

    Sounds like a lot of work.

      • Jeff Kampman
      • 3 years ago

      No.

        • ImSpartacus
        • 3 years ago

        Damn, I was really hoping for a review of the all-important GT 945A.

        • Redocbew
        • 3 years ago

        Best post ever.

    • Chrispy_
    • 3 years ago

    Cool. Do you have a Titan XP to test alongside, perchance (or comparable numbers from previous Titan XP testing)?

    I’m curious whether Jen-Hsun’s claim that the 1080Ti is faster than Titan XP is actually true. It’s clocked just 4% faster but they’ve substantially reduced bus width, ROP count and pixel throughput. For the 4K resolutions that this is aimed at, I’d assume that those three things matter far more than a bit of shader clockspeed can compensate for….

      • Jeff Kampman
      • 3 years ago

      Nope, and no chances of getting one.

        • Chrispy_
        • 3 years ago

        Shame 🙁

        Oh well, it’s academic anyway. Nobody is going to buy a TitanXP for gaming once the 1080Ti is out on the market for $400 less.

          • JustAnEngineer
          • 3 years ago

          The Titan X (Pascal, 2016) wasn’t ever marketed to anyone except gamers. It lacks the double-precision throughput that the original Titan had.

            • Airmantharp
            • 3 years ago

            It’s nice that Nvidia finally found a big-iron use for a low-precision big-chip: now we can get basically the fastest GPUs that can be made at an acceptable (not gonna say ‘reasonable’…) price.

            Here’s hoping that AMD is willing to bring competition to this level!

            • JustAnEngineer
            • 3 years ago

            Whether you choose an NVidia GPU or an AMD GPU, the price/performance ratio is going to be much better for the consumer when there is competition. The $1200 to $1600 price of the Titan X (Pascal, 2016) that prevailed for the past seven months is the wallet-busting consequence when we don’t have competition at the high end. The current appealing sub-$200 prices of cards like the Radeon RX-480 8GiB and GeForce GTX1060 6GiB show how much better things can be for consumers when there is competition.

            • Airmantharp
            • 3 years ago

            The question is, will AMD reach that high?

            For years we’d been wondering why Nvidia insisted on making compute-heavy big-chips (the Gx100’s) and then selling them as high-end GeForce/Titan GPUs, when they weren’t considerably faster than their middle-chip (the Gx104’s) parts.

            And AMD hasn’t bothered to really compete with the big-chips on a regular basis either. Now, it seems that Nvidia is happy to produce leaner big-chips (the Gx102’s) that are essentially up-sized Gx104’s alongside their compute-heavy parts, but only because they have big-iron customers. If the math absent the supercomputer etc. markets doesn’t work for Nvidia, would it make sense for AMD to compete here at all, given the cost of producing these parts in volume and the limited market for >US$400 GPUs?

            • ImSpartacus
            • 3 years ago

            I don’t think AMD has any near-term plans of competing in the >$1000 consumer space with a single gpu. The 4096 SP ~1.5 GHz ~500+ mm2 Vega 10 is as big as it gets.

            But I wouldn’t be enormously surprised if we see a 300-375W Vega 10 “X2” being sold in roughly the same way as the Radeon Pro Duo.

            Though, with AMD’s long rumored MCM ambitions, they might wait for 7nm Vega 20 (a glorified Vega 10 shrink) and double up on that to make it easier to get down to an in-spec 300W tdp (the >300W 295X2 notoriously broke PCIe spec).

            • Airmantharp
            • 3 years ago

            Well, it's not really a 'price' space per se: Nvidia once sold mostly-enabled Gx100 GPUs for <US$300, after all (see: GTX260, GTX570 on sale). Those cards fit into the same 'size' bracket as the current Titan X and 1080 Ti, and this type of GPU was last seen as a GeForce in the 780.

            Thing is, AMD's architecture is, and really has been (since the HD48x0, at least), competitive with Nvidia's; they just have to make a GPU big enough. The challenge there is that the business case doesn't look too promising: Nvidia is building a large low-precision part that first hits big iron, then Tesla/Quadro, then a Titan, and finally becomes a GeForce in a somewhat cut-down fashion, in descending order of sales margins, it would seem.

            It would seem, then, that for AMD to target that level of performance (that GPU 'size', effectively), they'd need to either have a good bead on their manufacturing process or have volume customers lined up and willing to pay above and beyond the enthusiast consumer.

            • ImSpartacus
            • 3 years ago

            What are you talking about? AMD has definitely made some “Nvidia-size” big GPUs in the modern day.

            Fiji was effectively the exact same size as GM200, and yet we never saw a $1000 Fiji card because of how brutally competitive GM200 was.

            And it happened again. Vega 10 is ~500+ mm2 and it won’t get six months of >$1000 pricing like GP102 did. We have yet to see how close it’ll get to GP102’s performance (the 1080 Ti being a

            We would all want AMD to be more competitive, but this definitely isn't an issue of AMD not making big enough GPUs. They are doing so, but they are too slow to market or not performant enough to command the >$1000 pricing that Nvidia can periodically get. Nvidia has had several recent back-to-back home runs with their architecture improvements (while stuff like Polaris barely qualifies as more than a process shrink). We can't ignore that deficit just because it's uncomfortable.

            • Airmantharp
            • 3 years ago

            Nope, I stand corrected. I’d forgotten that Fiji was so large, as it only seemed competitive in a few scenarios for whatever reason. Well, I could speculate that it might be slower in games due to a heavier compute-focus than Nvidia has done with their Gx100 parts versus their Gx102 and Gx104 parts, but I don’t really have any evidence to back that up beyond TR reviews.

            Given that they seemed size-constrained due to needing four stacks of HBM for Fiji but will likely only need two stacks of HBM2 for Big Vega, maybe they'll be able to put a greater emphasis on pixel throughput logic instead?

            • ImSpartacus
            • 3 years ago

            Remember that Fiji needed to reduce dp units down to a 1/16 ratio so as to maximize gaming performance in a 28nm reticle-limited die. The 980 Ti was in the same boat and used a 1/32 ratio. I consider those effectively “as low as possible” for each given architecture. Both were wringing every last drop from 28nm. Compute-friendly double precision is the first to go.

            Concerning hbm stacks for Vega 10, my understanding is that it’s not possible to physically fit four stacks of hbm2 on an economical interposer next to a ~500+ mm2 gpu since hbm2 stacks are generally wider/longer than hbm1 stacks. Exactly how wide/long is unclear since it depends who is making them (the hbm spec only outlined the necessary height of the stack, not other dimensions). Intuitively, this is justified because hbm1 wasn’t built to stack chips (hence the 1GB limit per stack), but hbm2 has to support all of those fancy TSVs so up to eight stacked chips can talk to each other in a given stack.

            But regardless, I think you’re right. Despite having the same number of SPs as Fiji (4096), Vega 10 is rumored to be a much more balanced design (ignoring massive clock speed increases and other “ipc” improvements) since it’s not butting up against 28nm’s reticle limit like Fiji was.

            • Chrispy_
            • 3 years ago

            Without knowing more about Vega's new architecture (NCU instead of GCN), we have to assume that a 4096 SP 1.5 GHz Vega 10 is going to perform roughly in line with the GTX 1080.

            That’s just an estimate by extrapolating clocks/config of the RX480. If Vega 10 outperforms the 1080 it’ll be through architectural improvements.

        • ImSpartacus
        • 3 years ago

        That’s sad, but good that you can be upfront about it.

        • leor
        • 3 years ago

        I have one, if you want to send me a VM of your testing image, I can run a few tests 🙂

    • USAFTW
    • 3 years ago

    I find AMD’s silence (not in absolute terms, but they’ve shown a whole lot of nothing so far) kind of disturbing. How long are they going to cede the higher-end-than-RX 480 space to Nvidia?
    Their small die strategy worked back when the 4870 was the thing, since it was almost as fast as a GTX 280 at less than half the price, but things are different now.
    The GTX 1080 and by extension the 1080 Ti are no GTX 280, and the RX 480 ain’t a 4870.

      • ronch
      • 3 years ago

      They surely want to be up there again, but given their limited resources these days I believe they needed to carefully plan their strategy, which includes aiming for the meat of the market. Products there are cheaper to develop and sell more units. When and if AMD manages to gather enough steam, you can bet they'll play hardball again.

      • ImSpartacus
      • 3 years ago

      Yeah, a 300W 16GB "480X2" in late 2016 would've been really helpful for AMD.

      They could’ve sold it for $500-600 and competed with the 1080.

      I mean, wasn’t that part of the original “small die” strategy? You extract extra performance options via crossfire-on-a-card.

    • torquer
    • 3 years ago

    God I hope the “new” FE cooler is better than it was on 1080, considering I already pre-ordered.

      • USAFTW
      • 3 years ago

      It’ll almost assuredly be more efficient due to the full slot opening at the back and probably will dump less heat inside the case.

    • Pancake
    • 3 years ago

    In before controversy starts – probably not a good idea to test at 1920×1080. M’kay?

      • Jeff Kampman
      • 3 years ago

      Nah, any true graphics card reviewer uses 1024×768.

        • torquer
        • 3 years ago

        1600×1200 4 life.

          • FranzVonPapen
          • 3 years ago

          1600×1200 1/2 life.

        • KikassAssassin
        • 3 years ago

        But how fast will it run the only benchmark that matters: a Quake 3 timedemo?

        • Pancake
        • 3 years ago

        Free Sunday and a long weekend. Playing through Fort Apocalypse at glorious 160x200x2bpp on C64 emulator. I don’t think the graphics card is doing very much. One for the CPU benchmark suite?

          • LoneWolf15
          • 3 years ago

          Man, I miss Fort Apocalypse. But Raid on Bungeling Bay was good too.

        • Voldenuit
        • 3 years ago

        Heh, Kyle used 640×480 (ouch) at [H] in their Ryzen review, but prefaced that they did not feel the results were significant.

        I do think testing at low resolutions may uncover performance shortcomings in CPUs, but at the same time, super-low resolutions may be bottlenecked by factors that have no impact at more commonly used resolutions. Similarly, testing at super-high resolutions is likely to be completely GPU-bottlenecked, but could uncover unforeseen limitations, such as a given system having shoddy PCIe performance or an overtasked PCIe multiplexer.

        It’s probably worth spot-checking both extremes, and then just mentioning that no unexpected findings were discovered. However, if a reviewer does find something unexpected, then it might be worth digging into.

          • Chrispy_
          • 3 years ago

          Silly-low resolutions are synthetic tests, so of course they help show differences between CPUs.

          BUT: They’re not real-world results. Nobody buying a $300+ processor is going to run at VGA resolution. 1080p is still the most likely scenario. Perhaps 1440p is gaining traction in the market demographic of $300+ CPU buyers but I suspect a lot of people for whom money is no object are still playing at 1080p. My huge-ass television cost way more than my HTPC rig, but it’s only 1080p (it’s also 5 years old now, though).

            • JustAnEngineer
            • 3 years ago

            4K TVs are quite affordable these days. Nevermind that the pixels are far too tiny for the distance between the sofa and the screen and there’s almost no 4K source material to view on them.

            • Firestarter
            • 3 years ago

            tiny pixels is one way of doing antialiasing

            • JustAnEngineer
            • 3 years ago

            [quote=”Firestarter”<]Tiny pixels [i<]are the most expensive and performance-degrading[/i<] way of doing antialiasing.[/quote<]

            FTFY.

            • derFunkenstein
            • 3 years ago

            It still looks better than FXAA. MSAA strikes a good balance.

            • Chrispy_
            • 3 years ago

            Not sure I agree with any of these comments.

            Each type of AA is best suited to particular content, and there's no right or wrong answer. The post-process options, especially the temporal ones, are practically free in terms of resource/performance impact, but that doesn't mean that certain graphic styles can't still completely trip them up.

            One thing's for sure, though: JAE is right that more pixels – whether that's native or via DSR – is the single most expensive and performance-impacting way to improve IQ. Practically anything is better than it unless you place an extremely low demand on the hardware and have orders of magnitude more hardware performance than your situation requires.

            • derFunkenstein
            • 3 years ago

            FXAA doesn’t seem to do much with object edges, and that’s where (to me) aliasing is the most obvious. In GTA 5, give any character a bald head and contrast it with the blue sky. You see very definite blocks, even with FXAA enabled. It just seems like a waste, even if it’s free. For the longest time, the “Antialiasing” checkbox in SC2 would turn the segmented life/energy meters into egg shapes but at least the rest of the game looked nice. Blizzard “fixed” the life bars and now every unit and structure has jagged, visible edges (and they did the same thing in Diablo III). MSAA does a better job of erasing those jaggies and DSR/SSAA does it up best, high cost or not. So I’m happy to turn down in-game settings a bit if it’s enough to let a game run with one of those AA schemes enabled (or DSR).

            But I don’t buy every AAA game out there and so I’m sure there are examples where it helps.

            • Chrispy_
            • 3 years ago

            Pixel density is pretty much the least important aspect of a TV. Almost every 4K television focuses on pixel density to the detriment of all the important features:

            [list<]
            [*<]Input lag[/*<]
            [*<]Response time[/*<]
            [*<]Viewing angles[/*<]
            [*<]HDMI CEC compatibility[/*<]
            [*<]HDMI switching capabilities for multichannel audio[/*<]
            [*<]8:8:8 colour 1:1 pixel mapping with zero postprocessing[/*<]
            [*<]24/48/50/60Hz support[/*<]
            [*<]Contrast ratio and absolute black level[/*<]
            [*<]sRGB coverage[/*<]
            [*<]TV tuner quality[/*<]
            [*<]Decent Android firmware with app support and display mirroring that doesn't suck[/*<]
            [/list<]

            If a 4K TV satisfies that list of requirements far more important than pixel density whilst not costing too much more, I'll consider it. To me, though, 4K models seem to focus on resolution to the detriment of everything else, even in "money-no-object" situations. And you know what? I run my TV at 720p sometimes, because native 720p "HD" content looks better if my HTPC's GPU is doing the scaling.

            • JustAnEngineer
            • 3 years ago

            You’re not the first person to notice that. TVs are in a race to the bottom. Here’s a resource that may help address a few of your concerns:
            [url<]http://www.rtings.com/tv/reviews/by-usage/video-gaming/best[/url<]

            • Voldenuit
            • 3 years ago

            Thanks for that link, glad to see some people are testing TVs for gaming.

            Those numbers are pretty horrific, though. 20-30 ms of input lag is not where I want to be in 2017. My BenQ FP241W from over 10 years ago had ~26 ms of lag, and it was not considered fast even for its time.

            Then again, considering most console gamers are stuck at 30 fps, and 60 if they're lucky, I suppose there's not much motivation to get lag down.

          • ImSpartacus
          • 3 years ago

          Both 720p and 640×480 are equally silly in this context.

          No desktop gamer reading TR is going to seriously game at <1080p with a modern desktop.

          It's a pseudo-synthetic benchmark for those who believe that results from those benchmarks somehow carry over to higher resolutions (to date, I don't think anyone has actually supported that with proper data-driven analysis).

        • Jigar
        • 3 years ago

        And here I thought 640×480 resolution was the champ.

        • southrncomfortjm
        • 3 years ago

        Use a 1080p monitor, but DSR it up to 4K.

      • odizzido
      • 3 years ago

      1920×1080 is still a good test because of 120Hz monitors.

        • Firestarter
        • 3 years ago

        you mean 240hz monitors?

      • drfish
      • 3 years ago

      1920×1080, 640×480, 3840×2160, Arma don’t care.

      • tipoo
      • 3 years ago

      But it says 1080 right on the card!

        • Redocbew
        • 3 years ago

        And the “ti” means “tests interestly”.

      • GrimDanfango
      • 3 years ago

      1920×1080 isn't where the card will shine, but it helps to measure 1080p, 1440p, and 2160p so you can compare how well each card scales with increasing resolution. The likelihood is it'll clearly demonstrate the 1080 Ti taking less of a hit than other cards as it moves up through the resolutions.

      What would be good is to see maybe an additional DSR test, to show scaling beyond 2160p.
      4x DSR on a 2560×1440 monitor gives an effective render target of 5120×2880. According to nVidia’s own marketing guff, future 5k resolution is the reason they plucked the 11GB of RAM number out of the air… so it’d be good to see some tests of that compared to other cards.
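
      A rough sketch of that DSR arithmetic, assuming "4x DSR" multiplies the total pixel count (so each axis scales by the square root of the factor):

import math

def dsr_render_target(width, height, dsr_factor):
    # DSR scales total pixel count, so each axis scales by sqrt(factor)
    scale = math.sqrt(dsr_factor)
    return int(width * scale), int(height * scale)

# 4x DSR on a 2560x1440 panel -> (5120, 2880), i.e. the 5K render target mentioned above
print(dsr_render_target(2560, 1440, 4))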

      • Ninjitsu
      • 3 years ago

      And use Ryzen as the host CPU!

    • LoneWolf15
    • 3 years ago

    It will be interesting to see how it performs on an Intel platform vs. a Ryzen one.

      • DancinJack
      • 3 years ago

      That’s not really something you measure beyond how the CPU does. I’m not really sure what you’d be looking for?

        • Visigoth
        • 3 years ago

        Frame rate differences, obviously? To see if Intel/AMD can maintain a minimum threshold of FPS?

          • Redocbew
          • 3 years ago

          Say what?

          A “minimum threshold”? Caused by the CPU? And you expect to find this when all the testing will be GPU oriented?

            • Visigoth
            • 3 years ago

            I didn’t know you could game with a GPU only…now that’s a surprise!

        • LoneWolf15
        • 3 years ago

        Well, for one, AMD’s PCIe 3.0 implementation is new. Can it hold up?

          • DancinJack
          • 3 years ago

          Did you read all the Ryzen reviews?

      • Jeff Kampman
      • 3 years ago

      At 4K with max settings, I can guarantee the CPU will not have more than an incidental bearing on the proceedings.

    • Kougar
    • 3 years ago

    Be sure to check whether the included DisplayPort-to-DVI adapter is single-link or dual-link!

    If it's single-link, I'm sure some users are going to have fun figuring that one out after the fact.

    • thedosbox
    • 3 years ago

    What does the cat think?

      • Jeff Kampman
      • 3 years ago

      Not impressed.

        • tipoo
        • 3 years ago

        I’d fully approve of your cat shamelessly featuring in more reviews for more internet points.

        • Redocbew
        • 3 years ago

        Cats so rarely are.

        • Wirko
        • 3 years ago

        Look for subtle signs, like … is it trying to shed as much hair as possible where the computer sucks the air in?

        • Krogoth
        • 3 years ago

        Challenge accepted.

      • USAFTW
      • 3 years ago

      Who cares about cats anyways? I’m gonna build a barn in my back yard and put a buffalo in there and maybe get a female one down the road so they can have little buffalo babies.

      • ronch
      • 3 years ago

      The Cat (cores) think they can’t push enough bits to make this card sweat at all.

      • Wilko
      • 3 years ago

      Cat’s been in better bags.

        • pranav0091
        • 3 years ago

        You mean *worse, right?

    • chuckula
    • 3 years ago

    No post-RyZen vacation for you!
    Thanks for keeping up with the hardware launches!

    • TwoEars
    • 3 years ago

    How does your dad feel about you dipping into the green stuff? He cool?

      • chuckula
      • 3 years ago

      It was you Wasson, Okay!
      He learned it from watching you!

    • Star Brood
    • 3 years ago

    I've been slacking on these benchmarks the last few years. How much faster would something like this be, real-world, than a Radeon 7850? 3x?

      • jihadjoe
      • 3 years ago

      9x

        • Star Brood
        • 3 years ago

        Dude that’s disgusting! The top-end stuff back then was within spitting distance of the 7850. I’m amazed by how far the performance has gone. Way way faster than the CPU market, for sure.

          • Redocbew
          • 3 years ago

          Competition is a good thing.

          • Visigoth
          • 3 years ago

          Damn right! Looking at CPU performance across the board, it hasn’t improved significantly since at least Sandy Bridge hit the market. And that’s WITH process shrinks!

          I can’t even remember how many times NVIDIA/AMD had to redesign their whole architectures around the same damn process node, because everybody except Intel was stuck on the same process node and nobody (TSMC/UMC/Samsung/GloFlo etc.) could provide an alternative.

          Taking that into account, the CPU performance gains from each new generation have been anemic, to say the least! Hopefully AMD has spurred Intel into action so we can get more nice 8/10-core CPUs at more affordable prices.

            • Ninjitsu
            • 3 years ago

            [quote<]it hasn't improved significantly since at least Sandy Bridge hit the market.[/quote<]

            You *did* read the Zen review, right? 😛

          • jihadjoe
          • 3 years ago

          Just a rough estimate. Actually doing the math, it seems closer to about 8x.

          1080ti is about a third faster than 1080, which was roughly [url=https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/26.html<]twice as fast as the 980[/url<]. The 980 in turn was a little less than [url=https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980/26.html<]twice as fast as the 680[/url<], which was about [url=https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_680/27.html<]50% faster than the 7850[/url<]. So 1.3 * 2x * 2x * 1.5x = 7.8x
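
          A quick sketch of that chained estimate, using the rough per-generation ratios quoted above (ballpark figures from the linked reviews, not fresh measurements):

# Chain the rough per-generation ratios quoted above to estimate 7850 -> 1080 Ti scaling.
ratios = [
    ("7850 -> GTX 680", 1.5),     # 680 roughly 50% faster than the 7850
    ("GTX 680 -> GTX 980", 2.0),  # 980 a little under twice the 680
    ("GTX 980 -> GTX 1080", 2.0), # 1080 roughly twice the 980
    ("GTX 1080 -> 1080 Ti", 1.3), # 1080 Ti about a third faster than the 1080
]

total = 1.0
for step, ratio in ratios:
    total *= ratio

print(f"Estimated overall speedup: {total:.1f}x")  # ~7.8x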

            • ImSpartacus
            • 3 years ago

            It’s much simpler if you just look at this direct comparison between the 370 (rebranded 7850 with higher clocks) and the Titan X (which is basically like a 1080 Ti).

            [url<]https://www.techpowerup.com/reviews/NVIDIA/Titan_X_Pascal/24.html[/url<]

            • jihadjoe
            • 3 years ago

            Yes! And to think I even used their charts to figure out my math.

            • ImSpartacus
            • 3 years ago

            As you know, TechPowerUp charts will vary heavily over time due to changes in drivers and game choice. For example, you can see the downfall of Kepler and the subsequent "fine wine" rise of early GCN parts.

            So using a bunch of reviews across a massive four-year span can really throw a wrench into your result. It's much cleaner to use one review that's as recent as possible, which the 2016 Titan X review is.

            • Pancake
            • 3 years ago

            You're doing your maths wrong. You should consider Amdahl's Law. Basically, the CPU workload (lumping together all things CPU – PCIe transfer rate, RAM latency and bandwidth, in addition to raw CPU grunt) is pretty much a fixed component, and CPUs aren't increasing in speed at the same rate as GPUs. So even with an infinitely fast GPU you won't achieve infinite frame rates.

            Having said that, I don't know, but I would speculate that the raw performance of the TitanX would be MUCH more than 10x the 7850. Maybe closer to 20x.
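
            A minimal sketch of that Amdahl's Law point, with made-up frame-time numbers purely for illustration (a fixed CPU-side slice per frame caps the benefit of any GPU speedup):

# Amdahl's Law applied to frame time: a fixed CPU-bound slice limits the overall gain.
# The 4 ms / 30 ms split below is hypothetical, chosen only to illustrate the shape.
def overall_speedup(cpu_ms, gpu_ms, gpu_speedup):
    baseline = cpu_ms + gpu_ms
    accelerated = cpu_ms + gpu_ms / gpu_speedup
    return baseline / accelerated

for label, s in (("2x", 2.0), ("8x", 8.0), ("infinitely fast", float("inf"))):
    print(f"{label} GPU -> {overall_speedup(4.0, 30.0, s):.1f}x overall")
# 2x -> 1.8x, 8x -> 4.4x, infinitely fast -> 8.5x: frame rates never scale 1:1 with the GPU.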

      • Demetri
      • 3 years ago

      It’s the same chip as R7 370, so check the benches in this review:

      [url<]http://www.guru3d.com/articles_pages/nvidia_geforce_titan_x_pascal_review,21.html[/url<]

      5x in Witcher 3, 4x in GTAV, 4x in The Division, 5x in Battlefield

      • ImSpartacus
      • 3 years ago

      This will perform basically like a Titan X.

      A Titan X is about 5-6x faster than a 370.

      [url<]https://www.techpowerup.com/reviews/NVIDIA/Titan_X_Pascal/24.html[/url<]

      And then a 370 is just a heavily overclocked 7850, so you're probably looking at about a 5-7x overall difference between a 7850 and a 1080 Ti depending on resolution and game choice. But do note that you can get like 2/3 of a 1080 Ti's performance for like $350 in the form of a 1070.

        • Chrispy_
        • 3 years ago

        Or, if you want to stick to something more sensible, a [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814202270&cm_re=nitro_RX_480_4GB-_-14-202-270-_-Product&CMP=OTC-TechReport<]$172.99 RX480[/url<] gives you half the performance of the $700 card and you get a free copy of the DOOM reboot (very worth playing!). Vulkan is better than on the GTX1060, DX12 is better than on the GTX1060, and Freesync is worth having. That's a triple win, IMO. [i<]edit - updated the link to the TR/Newegg affiliate link.[/i<]

          • ImSpartacus
          • 3 years ago

          Yeah, the 480 has been ridiculously cheap the past couple of months. It's quite the steal (though the slower 4GB version gets starved for bandwidth).

          • Bexy
          • 3 years ago

          Actually, it gives about half the performance of the GTX 1080; the GTX 1080 Ti is 30-35% faster, so an RX 480 is not half its performance.

            • Chrispy_
            • 3 years ago

            Not sure where you're getting that; the review ImSpartacus linked from TPU is one of the most in-depth reviews on the web, covering an ungodly number of titles at multiple resolutions. Even with that outdated review data from Polaris' launch, I'm not seeing the 30-35% you're talking about.

            However, the important thing now is that that data was on very early Polaris drivers, and it is common knowledge from updates to early reviews as well as more recent reviews that the RX480 is better than a GTX1060 6GB. For most games it matches the 1060, but in some DX12, async, Vulkan or increasingly-common deferred-rendering games it has a distinct 5-10% advantage. At 1080p that would actually make it [i<]more[/i<] than half the speed of a Titan X(P), and put it very close to 50% of a Titan X(P) at 1440p. If you want to drive a 4K screen, then you have no choice, but I'm pretty sure OP is not driving a 4K screen with a 7850, either 😉

            Edit: I suppose I should be helpful and back up the RX480 improvements with a link.
            [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/73945-gtx-1060-vs-rx-480-updated-review-23.html[/url<]

          • southrncomfortjm
          • 3 years ago

          I got a 480 about two months ago and I love it. Interestingly, when matched with games like Deus Ex: Mankind Divided and some other newer games at very high or ultra settings, it seems to be pushing up against the limit of my 4.3GHz overclocked i5-3570K. Not in a terrible way; just occasionally it seems like my FPS drops a bit, then recovers. Just a thought if you have an older CPU and are thinking of upgrading to a 480 or something more powerful. Personally, I'll be looking at a Kaby Lake 7700K this late fall/early winter, when prices are hopefully a bit better.

      • CaptTomato
      • 3 years ago

      more like 5-6x…easy.

    • GrimDanfango
    • 3 years ago

    I look forward to reading about how good the thing I already blind-preordered is 🙂

    • Firestarter
    • 3 years ago

    well that’s one way of stealing AMD’s thunder

      • Visigoth
      • 3 years ago

      I'm really curious whether Vega can beat this. My bet is it can't, but it'll perform close to it. I'm more concerned for AMD when Volta hits; it's supposed to be a much more radical arch than Pascal, with a lot of very beefy upgrades across the board.

        • odizzido
        • 3 years ago

        Since this is going to be a 1000-dollar card, I don't think it matters if Vega is faster or not. All they have to do is offer something that is good value.

          • Voldenuit
          • 3 years ago

          $1000? MSRP is $699. I mean, it’s not a budget card by any means, but it’s not a thousand dorra card.

        • jihadjoe
        • 3 years ago

        I actually kinda hope Vega does beat it, so Nvidia is forced to drop the price some more or make another one that’s fully enabled.

        Edit: spellcheck

      • deruberhanyok
      • 3 years ago

      They would need to have some thunder to steal for that to happen, though. Vega was a no-show at the show.
