3DMark Time Spy benchmark puts DirectX 12 to the test

3DMark Time Spy, the built-from-the-ground-up DirectX 12 graphics benchmark we talked about late last month, is now available from Futuremark. The benchmark was stealthily released via Steam late last night to owners of 3DMark on that platform. It's also been added to the stand-alone installer that you can pick up here.

Futuremark says that Time Spy is one of the first DirectX 12 apps built "the right way," by which it means the benchmark was not adapted from existing code. That's a bit ironic considering the content of the benchmark, which consists primarily of scenes from current and previous 3DMark releases running as small dioramas while a female character wanders among them. The benchmark is visually stunning, although the impossibly complex scenes, which appear to be rendered in their full original detail, are a bit small to really appreciate.

Of course, as with any 3DMark benchmark, the point of the software is to test things, and Time Spy aims to exercise all of the latest DirectX 12 features, including Explicit Multi-Adapter mode, Asynchronous Compute, and all the threads your CPU can offer up. Multithreading doesn't sound as exciting as other technologies with fancy names, but the removal of the mostly single-threaded DirectX 11 bottleneck is arguably the feature of DX12 with the most impact. The graphic below, courtesy of Futuremark, illustrates the difference in efficiency between Time Spy and the DX11 Fire Strike benchmark.
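
For readers who want a feel for what that change looks like in practice, here is a minimal, purely illustrative Direct3D 12 sketch (not anything from Futuremark's engine): each worker thread records its own command list, and only the final submission to the GPU queue is serialized, whereas DirectX 11 funneled nearly all of that recording through a single immediate context. The thread count and structure are arbitrary choices for the example.

    // Illustrative sketch only: multithreaded command-list recording in D3D12.
    // Build against d3d12.lib; error handling omitted for brevity.
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

        constexpr int kThreads = 4;  // arbitrary: one recording thread per core
        std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kThreads);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
        for (int i = 0; i < kThreads; ++i) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
        }

        // Each thread records its own command list in parallel; under DX11 this
        // recording work mostly had to go through one immediate context.
        std::vector<std::thread> workers;
        for (int i = 0; i < kThreads; ++i) {
            workers.emplace_back([&, i] {
                // Real code would record draw calls and resource barriers here.
                lists[i]->Close();
            });
        }
        for (auto& w : workers) w.join();

        // Submission is still serialized onto the queue, but recording was not.
        ID3D12CommandList* raw[kThreads];
        for (int i = 0; i < kThreads; ++i) raw[i] = lists[i].Get();
        queue->ExecuteCommandLists(kThreads, raw);
        return 0;
    }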

We haven't had time to do any testing of our own with these new benches, but the folks at PC Perspective did some brief benchmarking with a few recent graphics cards from both teams. As one might expect, Radeon cards get a fair boost from async compute, while a pair of GTX 1080s scales some 80% over a single card. That's impressive stuff for a brand-new benchmark using a brand-new API on a pair of brand-new cards.

Anyone who owns 3DMark can run the benchmark, but they won't be able to disable the demo mode or toggle specific features (including Asynchronous Compute) without ponying up a small fee for the test. The price of 3DMark as a whole also increased to $29.99 (from $24.99) to account for the new test. However, Futuremark is running a sale to celebrate the release of the new benchmark: until July 23rd, the complete 3DMark Advanced package is just $9.99 on Steam, and it'll only cost $4.99 to add Time Spy to an existing license.

Comments closed
    • ermo
    • 3 years ago

    Tried running this last night on my 2xHD7970 GHz Ed. 3GB cards clocked at 1050/1500 GPU/RAM.

    After running for a bit, Windows 10 had a blue screen error (MEMORY_MANAGEMENT) which also happened on the 2nd try at the same point (just before the CPU test).

    This is the first time I’ve seen Windows 10 do that with my current hardware. I’m on Radeon Crimson 16.5.3 if that matters any.

    EDIT: 3DMark stopped crashing once I disabled CrossFireX in my default game settings. But the score also took a hit? Isn’t 3DMark supposed to support DX12’s multi-adapter mode?

    EDIT#2: Newest 16.7.2 Hotfix driver fixes the CrossFireX crash.

    • Shobai
    • 3 years ago

    Oh bother. I’m not sure if there’s much to this, but [url=http://www.overclock.net/t/1606224/various-futuremarks-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also]some users on another forum[/url] are kicking up a stink about how 3DMark has implemented DX12's Async Compute when compared with other benchmarks. They appear to claim that it's been written in such a way as to cast Pascal in the best possible light, by doing as little Async Compute as they can get away with. If this is the case, it might explain why the Radeons aren't seeing the gains that they might see in Doom, for instance [because they just aren't being fed to capacity].

      • Shobai
      • 3 years ago

      It looks like it boils down to the charge that, contrary to 3DMark’s claim that “[url=http://www.futuremark.com/pressreleases/introducing-3dmark-time-spy-directx-12-benchmark-test]Time Spy is an ideal benchmark for testing the DirectX 12 performance of the latest graphics cards[/url]", 3DMark Time Spy is not a DX12 benchmark because it "[url=http://steamcommunity.com/app/223850/discussions/0/366298942110944664/]was not tailored for any specific architecture[/url]", which apparently violates DX12's Engine Considerations on [url=http://www.dualshockers.com/2016/03/14/directx12-requires-different-optimization-on-nvidia-and-amd-cards-lots-of-details-shared/]slide 4 of Nvidia and AMD's joint presentation at GDC 2016[/url] - this same slide suggests developers target DX11 if they can't code IHV specific paths. Oh!

    • Mikael33
    • 3 years ago

    This is a bragging thread, right?
    6700K @ 4.5GHz and 980 Ti
    Stock 980 Ti: http://www.3dmark.com/spy/51304
    Overclocked 980 Ti: http://www.3dmark.com/spy/51752
    Core clock won't go much higher without a driver crash. Memory might have more left, but I think 8GHz is pretty decent, and I'm not sure a higher memory clock would help much.

      • JustAnEngineer
      • 3 years ago

      The bragging thread is in the forums:
      https://techreport.com/forums/viewtopic.php?f=3&t=118209

        • Mikael33
        • 3 years ago

        Apologies 🙂

    • tipoo
    • 3 years ago

    “Performance on 3DMark Time Spy with the GTX 980 and GTX 970 are basically unchanged with asynchronous compute enabled or disabled, telling us that the technology isn’t being integrated. In my discussion with NVIDIA about this topic, I was told that async compute support isn’t enabled at the driver level for Maxwell hardware, and that it would require both the driver and the game engine to be coded for that capability specifically.

    To me, and this is just a guess based on history and my talks with NVIDIA, I think there is some ability to run work asynchronously in Maxwell but it will likely never see the light of day. If NVIDIA were going to enable it, they would have done so for the first wave of DX12 titles that used it (Ashes of the Singularity, Hitman) or at the very least for 3DMark Time Spy – an application that the company knows will be adopted by nearly every reviewer immediately and will be used for years.

    Why this is the case is something we may never know. Is the hardware support actually non-existent? Is it implemented in a way that coding for it is significantly more work for developers compared to GCN or even Pascal? Does NVIDIA actually have a forced obsolescence policy at work to push gamers toward new GTX 10-series cards? I think that final option is untrue – NVIDIA surely sees the negative reactions to a lack of asynchronous compute capability as a drain on the brand and would do just about anything to clean it up.”

    Just keeps getting more curious. Are they giving it to Pascal as an exclusive feature when Maxwell would benefit from it, or is the implementation in Maxwell so bad it would harm performance rather than help?

      • Jeff Kampman
      • 3 years ago

      I wrote about this in our GTX 1080 review, but Nvidia says right in the GP104 whitepaper that Maxwell’s async implementation is rather coarse-grained and has the potential to deliver worse performance when it’s enabled than when it’s not. When Nvidia says the feature isn’t enabled in the driver, it’s probably a matter of not giving programmers the rope to hang themselves with. It would genuinely surprise me if we ever see Nvidia enable the feature in the driver, or if async code will ever see any kind of performance increase on Maxwell.

        • tipoo
        • 3 years ago

        I missed that, thanks. Crummy about their “don’t worry, it’s in the works” comments to other press earlier, then. What’s the point of responding to questions about missing async support with “it’s in there” if they knew it would perform worse and therefore never be enabled?

        Even now they’re kind of doing it with the “it has to be enabled in both the driver and game” commentary, implying everything will be all right, except their own whitepaper says otherwise, only most users won’t read that.

          • Pwnstar
          • 3 years ago

          nVidia lies a lot. Remember “Pascal 10x Maxwell”?

            • tipoo
            • 3 years ago

            CEO Math™

            • stefem
            • 3 years ago

            And here we have the winner of our “great quote out of context” contest!

            NVIDIA clearly explained (and I was there) how they arrived at that rough estimate, which referred to a particular kind of workload (convolutional neural networks) running on an HPC-class machine.

            There was a giant banner spelling this out, so if you missed it you were either blind or reading WCCFTech-grade sources. If you want an official statement that’s much simpler to debunk, we can talk about the one on Fiji’s overclocking ability: “You’ll be able to overclock this thing like no tomorrow” and “This is an overclocker’s dream.”

    • chuckula
    • 3 years ago

    From the results shown so far:

    1. The so-called “async” functionality has a greater relative impact on AMD hardware than Nvidia hardware.

    2. BUT: Pascal definitely shows a performance boost when the async is turned on, just not as great (relatively speaking) as some of the AMD hardware. Incidentally, at least some tests show that the *relative* performance boost of the Rx 480 using async is actually less than the *relative* performance boost for older GCN parts. [see here: http://www.pcper.com/files/imagecache/article_max_width/review/2016-07-14/timespy-3.png ]

    3. AND: Maxwell shows basically zero performance delta with or without async. Which doesn't mean that Maxwell doesn't "support" async, just that it doesn't see a performance boost.

    4. Stripping away the whole "OMG ASYNC" sideshow: A 6.5 Tflop GTX-1070 with "obsolete" GDDR5 beats the 8.6 TFlop HBM-equipped R9 Fury X by 5.3% when [b]async is turned on for the Fury X but turned off for the GTX-1070[/b] and that number grows to over 11% when the GTX-1070 gets async turned on too.

    Oh, and the 9 TFlop GTX-1080 with the still subpar GDDR5X compared to the 8.6 TFlop Fury X? Well, let's just say it wins by a bit more than the < 5% theoretical compute advantage it has over the Fury X.

    Edit: Ooh, yeah, keep the hate coming as a substitute for having any facts to discuss! Here, let's add a direct quote from PC Perspective:

    [quote]NVIDIA’s Pascal based graphics are able to take advantage of asynchronous compute, despite some AMD fans continuing to insist otherwise. The GeForce GTX 1080 sees a 6.84% jump in performance with asynchronous compute enabled versus having it turned off. The gap for the GTX 1070 is 5.42%.[/quote]

      • tipoo
      • 3 years ago

      Nice, the last part is what I wanted to see. Good news then. Any word on Maxwell, just to close that book?

      “Oh, and the 9 TFlop GTX-1080 with the still subpar GDDR5X compared to the 8.6 TFlop Fury X? Well, let’s just say it wins by a bit more than the < 5% theoretical compute advantage it has over the Fury X.”

      …Was this ever contested on this site? TFLOPS are a purely mathematical measure: shader count × clock speed × 2 operations per shader per clock. AMD delivering more theoretical flops for the same real-world performance level was already a known quantity.
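
      (Worked out from the published specs: Fury X = 4,096 shaders × 1.05 GHz × 2 ≈ 8.6 TFLOPS, GTX 1070 = 1,920 shaders × ~1.68 GHz boost × 2 ≈ 6.5 TFLOPS, and GTX 1080 = 2,560 shaders × ~1.73 GHz boost × 2 ≈ 8.9 TFLOPS.)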

      • RAGEPRO
      • 3 years ago

      We already know all about Maxwell and async. I don’t really know why everyone’s talking about it like it’s some kind of mystery. Jeff talked about it in his 1080 review [url=https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/2]here[/url] -- I realize most people don't actually read review text.

      Basically Maxwell doesn't really have any async support. You can run compute and graphics workloads, but you have to set a hard division of resources between the two and never the twain shall meet, which means async efficiency is ultimately really bad. Like Kanter said, "disastrous." I would say the differences PCPer saw there are within the margin of error, which leads me to believe that async simply isn't being used here, even when enabled.

      Your post comes off a little bombastic and inflammatory besides. I don't know that anyone says GDDR5 is "obsolete"; the RX 480 uses the same GDDR5 that's on the 1070, so you might want to tear down that strawman. And I don't think anyone is surprised that theoretical compute figures have little bearing on graphics performance, heh.

        • derFunkenstein
        • 3 years ago

        Maybe you’re new to chuckula’s posts, but this tilting at windmills is normal for him.

          • RAGEPRO
          • 3 years ago

          Heh, I guess. I’m just a little tired of it.

            • chuckula
            • 3 years ago

            Well, I’m a little tired of being one of very few voices around here who point out these things called “facts” in the face of the usual “OMG AMD IS FUTUREPROOF DX12! ASYNCJUICE ASYNCJUICE ASYNCJUICE” nonsense that’s been going on for a long long time here.

            Here we have the first truly standardized DX12 benchmark out and it seems that maybe, just maybe, Nvidia isn’t so hopelessly behind after all.

            • RAGEPRO
            • 3 years ago

            I would suggest that you’re simply being baited too easily by people who are disconnected from reality. The reality is (of course, as you well know) that Pascal has async compute support and it works just fine. As DPete27 said below, it seems like Pascal probably has less resources sitting idle than the Radeons do, so there’s less for async to improve.

            There’s no reason to be so defensive that you go on the offensive and throw up strawmen because a few fanboys are shouting from the rooftops. 🙂

            • AnotherReader
            • 3 years ago

            Pascal definitely supports asynchronous shading and given its resource balance, gains less from it than Polaris. Note that I said resource balance, because for DX12 and Vulkan, the drivers should have less to do with performance than the developers.

            As a general note, we should hope that AMD gives Nvidia a bloody nose from time to time: competition is good, even for Nvidia.

            Edit: to bust some stereotypes, I own a 290X. One’s affiliation, if any, shouldn’t keep one from being fair and objective.

            • bjm
            • 3 years ago

            RAGEPRO isn’t the only one tired of your posts. So am I.

            In literally every article related to AMD, you start threads with your passive-aggressive tone against AMD fanboys and go out of your way to try and disprove them when, in fact, nobody around here actually made those claims. Sure, there may be AMD nutjobs in places like the WCCFTech forums or YouTube comments, but it wasn’t as prevalent here until you went on your AMD fanboy inquisition.

            To put it simply: There is no greater fanboy than the AMD straw man that you build up at the beginning of every thread. Your posts lately have done nothing but contribute to the vitriolic tone that has been dominating the discussions lately. Just look at your own posts:

            [quote]Remember when the Fury X was supposed to be the ultimate "4K" card and testing it at low resolutions was cheating in favor of Nvidia? Funny how things have changed.[/quote]

            [quote][Thanks for all the downthumbs from completely honest and objective people who would TOTALLY call me out and disagree with me using rational arguments if the headline had included the word "Nvidia" instead of "AMD"][/quote]

            [quote]Want to see Nvidia get performance improvements like that? Sure! Just have Nvidia do that thing that you always accuse them of: Have them gimp their OpenGL drivers to be just as incompetent as AMD's. Then, you can show the same massive "improvement" for the GTX-970.[/quote]

            ...and those are just from the past three days! Those were all thread-starters, replies to nobody, starting right off the bat with subtle, passive-aggressive shots at your straw-man fanboy. And when discussions start off on that foot, it drives down the quality of the discussion that follows. It's like you jumped right out of a YouTube comment section and then took your frustrations out on the Tech Report readership.

            Sorry to burst your bubble, but the TR readership isn't YouTube or WCCFTech, and you're not one of the "very few" voices who is after the facts. Your tone is tiring.

            And no, before you go accusing me, I'm not an AMD fanboy. I'd feel the same way had you done the same for months on end to Nvidia, Intel, Qualcomm, Apple, or Samsung. The fact that I need to even make that disclaimer says a lot about the tone of these comments now. (Before posting this, I just saw AnotherReader's post, in which he also felt compelled to make a similar disclaimer.)

            • cegras
            • 3 years ago

            I would sign a petition to ban chuckula from posting. I would gladly give up my own rights to post for this to happen. Also, maybe he owns stock in nvda and intc – he frequently does damage control for them.

            • Pwnstar
            • 3 years ago

            I’m almost 100% sure he does own their stock.

        • stefem
        • 3 years ago

        Except NVIDIA, like AMD, has been able to do ATW (asynchronous timewarp) with past-generation GPUs; Oculus explains this on its developer blog.

      • Tirk
      • 3 years ago

      Don’t give it power! Here in the wizarding world we refer to it as, “The DX12 feature who must not be named”

      #savethemugglethreads

        • chuckula
        • 3 years ago

        Your trolls are getting less and less amusing.

        I’d go off in a corner and practice hard for when the performance-per-watt metrics between the GTX-1060 and RX 480 get posted and you have to really work to show how much Ngreedia is screwing over its poor poor customers by denying them futuristic features like overwatting.

          • Tirk
          • 3 years ago

          And your rants are getting less coherent. I have never made a post for or against overwatting. I have never put much weight in performance-per-watt metrics either way, and have never made a post to suggest otherwise. I’m sorry to disappoint you, but I think you are confusing me with someone else, or with other mysterious voices you hear.

          Maybe you should go off in a corner and practice hard about not accusing someone of something they have never done.

      • AnotherReader
      • 3 years ago

      Graphics performance is more nuanced than who has more Tflops. Even the GTX 1080 isn’t 33% ahead of the GTX 1070 as the relative flops count would lead you to expect. The 1080 and 1070 are matched in all other areas except shader throughput so the performance delta is even less than 25% (the memory bandwidth advantage for the 1080).

        • chuckula
        • 3 years ago

        [quote]Graphics performance is more nuanced than who has more Tflops.[/quote]

        Yeah, that was kind of the point of my post.

        However, in the narrower context of "OMG ASYNCHRONOUS SHADERS," the whole point of the feature is to make more efficient usage of the raw shader power, which does directly go to the teraflop number. If Nvidia is so hopelessly outclassed by not having the magical "real" async support, then there is no way in hell that an 8.6 Tflop Fury X with all the bandwidth in the world and full "async" support to boot should be losing to a 6.5 Tflop GTX-1070 (with 33% lower compute performance).

        First, the Fury X has the raw power advantage. Second, the Fury X is supposed to have this amazing efficiency advantage due to async. There's no way it should lose to a clearly smaller and lower-performance part in a benchmark that has specific features that are supposedly only used by GCN.

          • AnotherReader
          • 3 years ago

          The GTX 1070 has less compute throughput, but it has nearly a 70% advantage in both ROP throughput and geometry setup. The descendants of the fixed-function pipeline are not done having their say.
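
          (Roughly: both chips carry 64 ROPs, so at the ~1.8 GHz a GTX 1070 typically boosts to, that's about 115 Gpixels/s of fill rate against the Fury X's 64 × 1.05 GHz ≈ 67 Gpixels/s, which is where the near-70% figure comes from.)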

      • DPete27
      • 3 years ago

      As I said on my comment thread, Async helps more when there are more resources sitting idle. Async essentially just stuffs things into those holes.

      Yes, Maxwell doesn’t really have Async support, so we should just omit that result. But to me, since Fiji, Polaris, and Pascal support Async, we can glean that with Async off, Fiji has the most resources sitting idle and Pascal has the least.
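
      To put the "hole stuffing" in concrete terms, below is a minimal, purely illustrative sketch of how a DX12 application hands compute work to a separate compute-type queue. Nothing in the API forces the GPU to overlap it with the graphics queue; whether that overlap happens, and whether it helps, is up to the hardware and driver, which is exactly what Time Spy's async toggle probes. The device, pipeline state, and root signature here are assumed to exist already.

      // Illustrative only: compute work submitted on its own D3D12 queue so the
      // GPU may overlap it with graphics. Assumes device/PSO/root signature are
      // created elsewhere; error handling omitted.
      #include <windows.h>
      #include <d3d12.h>
      #include <wrl/client.h>
      using Microsoft::WRL::ComPtr;

      void SubmitAsyncCompute(ID3D12Device* device,
                              ID3D12PipelineState* computePSO,
                              ID3D12RootSignature* rootSig) {
          D3D12_COMMAND_QUEUE_DESC desc = {};
          desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
          ComPtr<ID3D12CommandQueue> computeQueue;
          device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

          ComPtr<ID3D12CommandAllocator> alloc;
          ComPtr<ID3D12GraphicsCommandList> list;
          device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                         IID_PPV_ARGS(&alloc));
          device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                    alloc.Get(), computePSO, IID_PPV_ARGS(&list));

          list->SetComputeRootSignature(rootSig);
          list->Dispatch(64, 64, 1);  // e.g. particle or lighting work
          list->Close();

          ID3D12CommandList* lists[] = { list.Get() };
          // The driver decides whether this actually runs alongside the
          // graphics queue; the API only makes the overlap possible.
          computeQueue->ExecuteCommandLists(1, lists);
      }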

        • tipoo
        • 3 years ago

        Async; Stuffer of Holes®

      • BurntMyBacon
      • 3 years ago

      Pretty much what I expected:

      [quote]Pascal definitely shows a performance boost when the async is turned on, just not as great (relatively speaking) as some of the AMD hardware.[/quote]

      Though this is interesting:

      [quote]Incidentally, at least some tests show that the *relative* performance boost of the Rx 480 using async is actually less than the *relative* performance boost for older GCN parts.[/quote]

      [s]Do you suppose the RX480 is more capable of keeping its pipes filled than previous generation parts? Or perhaps the test doesn't have a great mix of commands for keeping the pipes filled while using Async compute on the RX480. Given that it is DX12 either way, I can't imagine driver support has much to do with it. Alternate theories?[/s]

      [i]Edit: The older GCN parts they are referring to are all Fury cards. I think I'll stick with the prevailing theory that the Fury shaders are far more underutilized and therefore have more to gain than the RX480.[/i]

      [quote]A 6.5 Tflop GTX-1070 with "obsolete" GDDR5 beats the 8.6 TFlop HBM-equipped R9 Fury X by 5.3% when async is turned on for the Fury X but turned off for the GTX-1070 and that number grows to over 11% when the GTX-1070 gets async turned on too.[/quote]

      Why bother comparing async to not async when both cards have it and both cards benefit from it? The GTX1070 is over 11% faster. Done deal.

      • nanoflower
      • 3 years ago

      As I understand it, it’s not that Nvidia doesn’t do async compute, but that they don’t do it in hardware, so you won’t see the same benefits from async compute as you would with a GPU that does the work in hardware (as AMD’s do).

      • shank15217
      • 3 years ago

      Maxwell does not support the “async” feature, it never will, move on.

    • End User
    • 3 years ago

    Ran Time Spy. My GPU performed well. My OC’ed 3770K did not. Kaby Lake cannot arrive soon enough.

      • 223 Fan
      • 3 years ago

      The choice of CPU for gaming <<<<<<<<< the choice of GPU. The fact that a benchmark purportedly measuring gaming prowess breaks out CPU performance to the extent that it makes you want to upgrade seems misleading.

        • RAGEPRO
        • 3 years ago

        This is DX12, bro. Whole new world. It depends on the game, of course, but CPU performance might be a whole lot more relevant than it has in the past.

          • chuckula
          • 3 years ago

          Actually, the net score for the benchmark is weighted 85% for GPU and only 15% for CPU.

          So that pretty much tells you what 3DMark thinks about the relative importance of the two components.

            • RAGEPRO
            • 3 years ago

            Fair enough. I did say “might.”

            • 223 Fan
            • 3 years ago

            That sounds about right.

            • JustAnEngineer
            • 3 years ago

            Excellent point.

            The CPU sub-score does give significant credit for “MOAR CORES”, though.

          • Vhalidictes
          • 3 years ago

          Note: One of the big features behind DX12 was the removal of CPU bottlenecks in rendering performance, so… basically the exact opposite of what you’re saying?

            • RAGEPRO
            • 3 years ago

            Not quite — DX12 and Vulkan have the ability to enable game developers to spread workloads across many CPU threads much more effectively, which is great! Specifically, the CPU bottleneck addressed is one of single-threaded performance. (Of course, single-threaded performance is still really important, but not as “absolutely the most important thing” critical.)

            However, if a game developer targets six or eight hardware threads and you’ve got but two or four, well, you may be in a pickle. Of course, a CPU with good enough single-threaded performance will be able to power through regardless.

            Still, I think that if developers get used to having loads of unused CPU lying around, it may bode poorly for those with lower-end or older CPUs. I’m not saying this is a reason not to use DX12 or Vulkan or anything. Just pointing out that overall CPU performance (and the ability to make use of things like hyper-threading, which is also supported on the upcoming Zen) may become much more relevant than it is now, where a Core i3-6100 will basically run any game nearly as well as a Core i7-6700K. 🙂

        • End User
        • 3 years ago

        Party pooper. I’m just using any excuse I can think of to build a new gaming rig.

      • travbrad
      • 3 years ago

      I’m eagerly awaiting and hoping for Kaby Lake + eDRAM, but I don’t think you’ll find many GAMES where an OCed 3770K is too slow. It’s okay for 3DMark to run poorly (it always has) as long as your games still run well.

      • RAGEPRO
      • 3 years ago

      What was your CPU score? The benchmark won’t run on my PC — GPU is too old — but a good friend of mine scored ~4600 on her 4790K. The framerate was pretty darn low on the CPU test, so don’t take that as an indication that your CPU is slow.

        • End User
        • 3 years ago

        Compared to your friend’s score of 4,600, my score of 4,270 does not look too bad. I was originally comparing my CPU score to the CPU score from guru3d.com (9,439), but it turns out their test rig is based on an eight-core 5960X.

        My GPU-only score is 7,316, and that does compare favorably with [url=http://www.guru3d.com/articles-pages/futuremark-3dmark-timespy-benchmark-review,1.html]that scored by guru3d.com[/url].

      • Dudeface
      • 3 years ago

      And yet my old faithful Nehalem i7-930 @ 3.99 GHz gave the same FPS in graphics tests 1 and 2 as a 6700K with the same GPU (GTX 970). Good to know the old girl isn’t holding the GPU back.

        • evilpaul
        • 3 years ago

        Wouldn’t Skylake only be ~25% faster at its default 4.0 GHz with IPC increases?

          • Freon
          • 3 years ago

          In pathological and narrow cases, i.e. synthetic benchmarks, possibly.

        • End User
        • 3 years ago

        It turns out that my CPU score is fine. I was unaware that I was comparing my CPU score to that of an 8 core CPU.

        Kaby Lake is about [url=http://www.samsung.com/us/computer/memory-storage/MZ-V5P512BW#key-specs]M.2 bandwidth[/url], USB 3.1, Type-C, and Thunderbolt 3.

      • Freon
      • 3 years ago

      Doesn’t seem to be that important in real game benchmarks. Even a 2600K is probably just fine unless you are running a relatively high end GPU and relatively low resolutions and trying to tweak out super high framerates.

    • chuckula
    • 3 years ago

    Came in wanting to see a blimp-ship combo.
    Left happy.

    • DPete27
    • 3 years ago

    Nice to see the benefits of Async isolated using a consistent benchmark.
    AMD = 11% vs NVidia = 5% on current generation cards.

      • Tirk
      • 3 years ago

      Async compute is garbage AMD technology for people who believe it’s OK to cut in line. I always saw my teacher send anyone who cut to the back of the line; it should be no different in a GPU. Those mean threads deserve no sympathy from me. #dontletthemcut

      I’m sure it’s an error in Nvidia’s otherwise perfectly optimized drivers, which already let every ounce of their cards be utilized without it. They’ll optimize it back down to a 0% gain like they’ve successfully done with Maxwell cards.

        • tipoo
        • 3 years ago

        Not sure if sarcasm. “Cutting in line” isn’t hurting render performance in this case: portions of the GPU are going to go inactive during every frame’s rendering, and async takes advantage of that. Why not take advantage of hardware that’s twiddling its thumbs for part of each frame?

        Yes, Nvidia GPUs have higher shader utilization rate, but that doesn’t make “cutting the line” with async a bad technology.

          • Tirk
          • 3 years ago

          You couldn’t tell my post was sarcasm? I know some people post some pretty ridiculous things in earnest, but I hardly thought the discourse had sunk so low that a post like mine could be read as non-sarcastic.

            • tipoo
            • 3 years ago

            Sorry, guess I haven’t seen you around enough; there are always some accounts, and new accounts, that’ll post something that bad. Poe’s law and all 😛

            • Tirk
            • 3 years ago

            hehe indeed 😉

        • chuckula
        • 3 years ago

        I know you are just posting the usual anti-Nvidia fanboy troll, but if you had actually bothered to look at the detailed numbers in the PC Perspective article, you would have noticed that the older Fiji parts with the previous-generation GCN actually have larger relative gains from Async than the brand new Polaris does. In fact, that 12.9% relative gain for the Fury X shows more than a 50% greater boost for Fiji in asynchronous compute compared to the brand new Polaris architecture.

        Additionally, at a relative gain of 8.5% with async for Polaris vs. 6.8% relative gain (and a much larger absolute gain) for the GTX-1080, it looks like Nvidia is moving in the right direction for Asynchronous compute with newer architectures vs. AMD, which seems to have gone backwards in getting gains from asynchronous compute vs. its older hardware.

        So, if anything, your disingenuous sarcasm would actually have been more accurate if it had been applied to AMD.

          • RAGEPRO
          • 3 years ago

          Doesn’t it seem more like Fiji gets a larger boost from async compute due to simply having a larger shader array?

            • chuckula
            • 3 years ago

            Note that I used the word [b]relative[/b] here. The absolute boost for Fiji ought to be bigger, but as a percentage you would expect the “new” GCN 4.0 Polaris to have a bigger boost (or at least the same boost) percentage-wise as AMD’s older architecture. The charts in PC Perspective make this crystal clear.

            • RAGEPRO
            • 3 years ago

            No, I don’t think that follows. You would expect the GPU with the larger shader array to have a bigger boost. The wider and deeper the GPU is, the harder it is to make effective use of it. We saw this with the launch of the original GTX Titan, and also with AMD Hawaii, neither of which were as efficient as smaller GPUs (GTX 770, Bonaire) using similar technology. It’s much easier to benefit from a high clock rate than from a wider processor. Amdahl’s law and all that.

            So saying, I think that it’s been long-established that Fiji is severely bottlenecked, either by geometry throughput or ROP throughput. Either way, large portions of that massive shader array are probably idle in non-async-using titles. I think it makes a lot of sense that Fiji sees a larger gain than Polaris, which is a much smaller chip at a higher clock rate.
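
            (Amdahl’s law in one line: if 90% of a frame’s work scales with shader count, doubling the array buys at most 1 / (0.1 + 0.9/2) ≈ 1.8x, whereas a clock bump speeds up the serial 10% as well.)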

            • BobbinThreadbare
            • 3 years ago

            Depends where the bottleneck is. The new card is midrange, so it might have other bottlenecks compared to the high-end parts of the previous architecture.

          • Voldenuit
          • 3 years ago

          I’m fairly sure that, combined with the Seinfeld-esque comment about cutting in line, that post was meant to be sarcasm. No soup for you!

          • DPete27
          • 3 years ago

          OR, Polaris is less bottlenecked than Fiji. Async is helping Fiji because there are a bunch of idle resources that can now be put to use. Polaris has fewer bottlenecks, so fewer resources are idle without async.

          Judging by the fact that Nvidia seems to be able to squeeze more performance out of fewer resources than AMD, this same idea might be the reason we’re not seeing large gains from async on their cards.

            • Airmantharp
            • 3 years ago

            That’s the running conclusion with Nvidia re: DX12/Vulkan/async etc.: that these technologies designed to squeeze out more performance will gain less vs. DX11 on Nvidia’s parts because they’re already so well optimized.

            • DPete27
            • 3 years ago

            That’s probably why AMD spearheaded DX12 development. They realized they weren’t doing as well as Nvidia at resource proportioning, and their GPUs were leaving a lot of resources unused during gaming tasks. Let’s face it, AMD has traditionally been better at GPU compute than Nvidia. DX12 was a way to get “free” performance improvements.

            As it so conveniently turned out, Async also gave them positive publicity in “Async improves performance on AMD more than nVidia.” The surface reader thinks that’s awesome.

            Ultimately I think DX12/Vulkan and async are a great development, and hopefully it’s not too much of a burden on game devs. But the 3DMark tests show that Pascal and Polaris are benefiting about the same from async in [probably] its best-optimized form. I see no reason why that trend shouldn’t continue as a level-playing-field performance boost to both camps.

          • Tirk
          • 3 years ago

          Funny, because it was you, not me, who posted:

          [quote="chuckula"]Well, according to AMD's official press statement in the link:

          DirectX® 12 Asynchronous Compute

          Asynchronous compute is a DirectX® 12 feature exclusively supported by the Graphics Core Next or Polaris architectures found in many AMD Radeon™ graphics cards.

          [Edit: Funny how the AMD fansquad is downthumbing me for posting official statements from AMD. It's amusing how "fans" don't like to acknowledge the pronouncements of their own leaders.][/quote]

          https://techreport.com/news/30384/amd-and-firaxis-join-forces-to-bring-dx12-goodies-to-civilization-vi?post=989937

          For your jab to have any meaning other than to show your bias, the async performance increase for Pascal must be a mistake, right? I am fully aware Pascal improved its thread preemption over Maxwell and am glad it is showing some improvements over Maxwell's architecture. "Ignore that last line just like every line I've posted complimenting Nvidia's improvements."

          You are the one taking every opportunity to dismiss async compute when it doesn't fit your world order. You are the one finding every angle to put down AMD's performance boosts in async compute. You are the one who is constantly berating open standards until your favorite company starts to use them.

          What's funny is that you imagine the only reason you'd get a downthumb is that someone is a fanboy, and not your own obviously visible bias. If I'm looking for a black pot, I'll be sure to let the kettle know.

            • chuckula
            • 3 years ago

            That was to point out AMD’s official position to somebody who didn’t want to believe what AMD was saying. It was made in the context of AMD [s]bribing[/s] uh... "partnering" with a game developer to add features that are supposedly AMD-specific, at least according to AMD's press releases. And I was right about that.

            I never said I agreed with what AMD said, I just pointed it out in the name of stopping some of the hypocrisy that people like you have been spreading.

            • Tirk
            • 3 years ago

            And I hope you fight with just as much rigor against hypocrisy even when it’s coming from companies other than AMD. Your posts have thus far not fulfilled that hope.

            They also mentioned explicit multi-adapter in that very same article, which you conveniently omit to reinforce your position. Care to acknowledge a feature that specifically allows an Nvidia GPU to work in the same PC as an AMD GPU, to resist your own hypocrisy?

        • Jigar
        • 3 years ago

        Please don’t breed. I request you. /s

      • Tirk
      • 3 years ago

      It is good to see improvements on both Nvidia (Pascal) and AMD utilizing a DX12 feature.

      I want both companies to get positive gains using standards and features that BOTH have FREE access to implement. In a perfect world, Intel would jump on Vulkan features in their iGPUs to push the industry toward even more open standards. I will embrace it when Intel implements adaptive sync, as they’ve already stated is their intention, not boo-hoo about it because Intel is not AMD. The market works best when AMD, Nvidia, and Intel all embrace features that are not locked behind a walled garden.

      And to be clear, my other post was sarcasm, this one is not.
