Fable Legends DirectX 12 performance revealed

One of the more exciting features built into Windows 10 is DirectX 12, a new programming interface that promises to modernize the way games talk to graphics chips.

Prior versions of DirectX—and specifically its graphics-focused component, known as Direct3D—are used by the vast majority of today’s PC games, but they’re not necessarily a good fit for how modern GPUs really work. These older APIs tend to impose more overhead than necessary on the graphics driver and CPU, and they’re not always terribly effective at keeping the GPU fed with work. Both of these problems tend to sap performance. Thus, DirectX has often been cited as the culprit when console games make a poor transition to the PC platform in spite of the PC’s massive advantage in raw power.

Although, honestly, you can’t blame an API for something like the Arkham Knight mess. Console ports have other sorts of problems, too.

Anyhow, by offering game developers more direct, lower-level access to the graphics processor, DirectX 12 promises to unlock new levels of performance in PC gaming. This new API also exposes a number of novel hardware features not accessible in older versions of Direct3D, opening up the possibility of new techniques that provide richer visuals than previously feasible in real-time rendering.

So yeah, there’s plenty to be excited about.

DirectX 12 is Microsoft’s baby, and it’s not just a PC standard. Developers will also use it on the Xbox One, giving them a unified means of addressing two major gaming platforms at once.

That’s why there’s perhaps no better showcase for DX12 than Fable Legends, the upcoming game from Lionhead Studios. Game genres have gotten wonderfully and joyously scrambled in recent years, but I think I’d describe Legends as a free-to-play online RPG with MOBA and FPS elements. Stick that in yer pipe and smoke it. Legends will be exclusive to the Xbox One and Windows 10, and it will take advantage of DX12 on the PC as long as a DirectX 12-capable graphics card is present.

In order to demonstrate the potential of DX12, Microsoft has cooked up a benchmark based on a pre-release version of Fable Legends. We’ve taken it for a spin on a small armada of the latest graphics cards, and we have some interesting results to share.

This Fable Legends benchmark looks absolutely gorgeous, thanks in part to the DirectX 12 API and the Unreal 4 game engine. The artwork is stylized in a not-exactly-photorealistic fashion, but the demo features a tremendously complex set of environments. The video above utterly fails to do it justice, thanks both to YouTube’s compression and a dreaded 30-FPS cap on my video capture tool. The animation looks much smoother coming directly from a decent GPU.

To my eye, the Legends benchmark represents a new high-water mark in PC game visuals for this reason: a near-complete absence of the shimmer, crawling, and sparkle caused by high-frequency noise—both on object edges and inside of objects. (Again, you’d probably have to see it in person to appreciate it.) This sheer solidity makes Legends feel more like an offline-rendered scene than a real-time PC game. As I understand it, much of the credit for this effect belongs to the temporal anti-aliasing built into Unreal Engine 4. This AA method evidently offers quality similar to full-on supersampling with less of a performance hit. Here’s hoping more games make use of it in the future.

DX12 is a relatively new creation, and Fable Legends has clearly been in development for quite some time. The final game will work with DirectX 11 as well as DX12, and it was almost surely developed with the older API and its requirements in mind. The question, then, is: how exactly does Legends take advantage of DirectX 12? Here’s Microsoft’s statement on the matter.

Lionhead Studios has made several additions to the engine to implement advanced visual effects, and has made use of several new DirectX 12 features, such as Async Compute, manual Resource Barrier tracking, and explicit memory management to help the game achieve the best possible performance.

That’s not a huge number of features to use, given everything DX12 offers. Still, the memory management and resource tracking capabilities get at the heart of what this lower-level API is supposed to offer. The game gets to manage video memory itself, rather than relying on the GPU driver to shuffle resources around.
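
To make "manual resource barrier tracking" a bit more concrete, here is a minimal D3D12 sketch — an illustration of the concept, not Lionhead's code, and the function name is made up. The application itself records the transition that tells the GPU when a texture stops being a render target and starts being read by shaders, a hazard the DX11 driver used to track on the app's behalf:

#include <d3d12.h>

// Transition a texture from render-target use to pixel-shader reads.
// Under DX11 the driver inferred this hazard implicitly; under DX12 the
// application records the barrier on the command list itself.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}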

Asynchronous compute shaders, meanwhile, have been getting a lot of play in certain pockets of the ‘net since the first DX12 benchmark, built around Oxide Games’ Ashes of the Singularity, was released. This feature allows the GPU to execute multiple kernels (or basic programs) of different types simultaneously, and it could enable more complex effects to be created and included in each frame.

Early tests have shown that the scheduling hardware in AMD’s graphics chips tends to handle async compute much more gracefully than Nvidia’s chips do. That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware—it’s just not enabled yet. We’ll have to see how well async compute works on newer GeForces once Nvidia turns on its hardware support.
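
For reference, here's roughly what async compute looks like from the application's side in D3D12 — a hedged sketch, not anything taken from Fable Legends. The app creates a compute-only queue alongside the usual direct (graphics) queue; whether work from the two queues actually overlaps on the GPU is up to the hardware and driver, which is exactly the point of contention between AMD and Nvidia:

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a graphics queue and a separate compute queue on the same device.
// Work submitted to the compute queue may run concurrently with graphics
// work if the GPU's scheduler supports it; the two queues are synchronized
// explicitly by the application with ID3D12Fence objects.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics, compute, and copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute and copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
}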

For now, well, I suppose we’re about to see how the latest graphics cards handle Fable Legends. Let’s take a look.

Our testing methods

The graphics cards we used for testing are listed below. Please note that many of them are not stock-clocked reference cards but actual consumer products with faster clock speeds. For example, the GeForce GTX 980 Ti we tested is the Asus Strix model that won our recent roundup. Similarly, the Radeon R9 Fury and 390X cards are also Asus Strix cards with tweaked clock frequencies. We prefer to test with consumer products when possible rather than reference parts, since those are what folks are more likely to buy and use.


The Asus Strix Radeon R9 390X

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor         Core i7-5960X
Motherboard       Gigabyte X99-UD5 WiFi
Chipset           Intel X99
Memory size       16GB (4 DIMMs)
Memory type       Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings    15-15-15-36 2T
Hard drive        Kingston SSDNow 310 960GB SATA
Power supply      Corsair AX850
OS                Windows 10 Pro
Card                     Driver revision            GPU base core clock (MHz)   GPU boost clock (MHz)   Memory clock (MHz)   Memory size (MB)
Sapphire Nitro R7 370    Catalyst 15.201 beta       --                          985                     1400                 4096
MSI Radeon R9 285        Catalyst 15.201 beta       --                          973                     1375                 2048
XFX Radeon R9 390        Catalyst 15.201 beta       --                          1015                    1500                 4096
Asus Strix R9 390X       Catalyst 15.201 150922a    --                          1070                    1500                 8192
Radeon R9 Nano           Catalyst 15.201 150922a    --                          1000                    500                  4096
Asus Strix R9 Fury       Catalyst 15.201 150922a    --                          1000                    500                  4096
Radeon R9 Fury X         Catalyst 15.201 150922a    --                          1050                    500                  4096
Gigabyte GTX 950         GeForce 355.82             1203                        1405                    1750                 2048
MSI GeForce GTX 960      GeForce 355.82             1216                        1279                    1753                 2048
MSI GeForce GTX 970      GeForce 355.82             1114                        1253                    1753                 4096
Gigabyte GTX 980         GeForce 355.82             1228                        1329                    1753                 4096
Asus Strix GTX 980 Ti    GeForce 355.82             1216                        1317                    1800                 6144

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Fable Legends performance at 1920×1080

The Legends benchmark is simple enough to use. You can run a test with one of three pre-baked options. The first option uses the game’s “ultra” quality settings at 1080p. The second uses “ultra” at 3840×2160. The third choice is meant for integrated graphics solutions; it drops down to the “low” quality settings at 1280×720.

The demo spits out tons of data in a big CSV file, and blessedly, the time to render each frame is included. Naturally, I’ve run the test on a bunch of cards and have provided the frame time data below. You can click through the buttons to see a plot taken from one of the three test instances we ran for each card. We’ll start with the ultra-quality results at 1920×1080.
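
For anyone curious how figures like an FPS average or a 99th-percentile frame time fall out of that CSV, here is a minimal C++ sketch under a couple of my own assumptions — the file name and the position of the frame-time column are invented, and this is not the tooling we actually use:

#include <algorithm>
#include <cmath>
#include <fstream>
#include <iostream>
#include <numeric>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::ifstream csv("fable_legends_frametimes.csv");  // hypothetical file name
    std::vector<double> frame_ms;
    std::string line;
    std::getline(csv, line);                             // skip the header row
    while (std::getline(csv, line)) {
        std::stringstream row(line);
        std::string cell;
        std::getline(row, cell, ',');                    // assume frame time is the first column
        if (!cell.empty()) frame_ms.push_back(std::stod(cell));
    }
    if (frame_ms.empty()) return 1;

    // Average FPS: total frames divided by total render time in seconds.
    double total_ms = std::accumulate(frame_ms.begin(), frame_ms.end(), 0.0);
    double avg_fps  = 1000.0 * frame_ms.size() / total_ms;

    // 99th-percentile frame time: 99% of frames were rendered in this time or less.
    std::sort(frame_ms.begin(), frame_ms.end());
    size_t idx = std::min<size_t>(frame_ms.size() - 1,
        static_cast<size_t>(std::ceil(frame_ms.size() * 0.99)) - 1);
    double p99 = frame_ms[idx];

    std::cout << "Average FPS: " << avg_fps << "\n"
              << "99th-percentile frame time: " << p99 << " ms\n";
}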


Browse through all of the plots above and you’ll notice something unusual: all of the cards produce the same number of frames, regardless of how fast or slow they are. That’s not what you’d generally get out of a game, but the Legends benchmark works like an old Quake timedemo. It produces the same set of frames on each card, and the run time varies by performance. That means the benchmark is pretty much completely deterministic, which is nice.

The next thing you’ll notice is that some of the cards have quite a few more big frame-time spikes than others. The worst offenders are the GeForce GTX 950 and 960 and the Radeon R9 285. All three of those cards have something in common: only 2GB of video memory onboard. Although by most measures the Radeon R7 370 has the slowest GPU in this test, its 4GB of memory allows it to avoid some of those spikes.

The GeForce GTX 980 Ti is far and away the fastest card here in terms of FPS averages. The 980 Ti’s lead is a little larger than we’ve seen in the past, probably due to the fact that we’re testing with an Asus Strix card that’s quite a bit faster than the reference design. We reviewed a bunch of 980 Ti cards here, and the Strix was our top pick.

The 980 Ti comes back to the pack a little with our 99th-percentile frame time metric, which can be something of an equalizer. The GTX 980 is fast generally, but it does struggle with a portion of the frames it renders, like all of the cards do.

The frame time curves illustrate what happens with the most difficult frames to render.


All of the highest-end Radeons and GeForces look pretty strong here. Each of them struggles slightly with the most demanding one to two percent of frames, but the tail of each curve barely rises above 33 milliseconds—which translates to 30 FPS. Not bad.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame, and 8.3 ms is a relatively new addition that equates to 120Hz, for those with fast gaming displays.
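
Here's a small sketch of one way a "time spent beyond X" figure can be computed: for every frame that takes longer than the threshold, add up only the excess time. Treat it as an illustration of the idea rather than a definition of our exact accounting; the sample frame times are made up.

#include <iostream>
#include <vector>

// Sum only the portion of each frame time that exceeds the threshold.
double time_beyond(const std::vector<double>& frame_ms, double threshold_ms) {
    double excess = 0.0;
    for (double t : frame_ms)
        if (t > threshold_ms) excess += t - threshold_ms;
    return excess;
}

int main() {
    std::vector<double> frame_ms = {12.0, 18.5, 55.0, 16.0, 40.2};  // made-up sample data
    for (double threshold : {50.0, 33.3, 16.7, 8.3})                // thresholds from the article
        std::cout << "Time beyond " << threshold << " ms: "
                  << time_beyond(frame_ms, threshold) << " ms\n";
}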

As you can see, only the four slowest cards here spend any time beyond the 50-ms threshold, which means the rest of the GPUs are doing a pretty good job at pumping out some prime-quality eye candy without many slowdowns. Click to the 33-ms threshold, and you’ll see a similar picture, too. Unfortunately, a perfect 60 FPS is elusive for even the top GPUs, as the 16.7-ms results illustrate.

Now that we have all of the data before us, I have a couple of impressions to offer. First, although the GeForce cards look solid generally, the Hawaii-based Radeons from AMD perform especially well here. The R9 390X outdoes the pricier GeForce GTX 980, and the Radeon R9 390 beats out the GTX 970.

There is a big caveat to remember, though. In power consumption tests, our GPU test rig pulled 449W at the wall socket when equipped with an R9 390X, versus 282W with a GTX 980. The delta between the R9 390 and GTX 970 was similar, at 121W.

That said, the R9 390 and 390X look pretty darned good next to the R9 Fury and Fury X, too. The two Fury cards are only marginally quicker than their Hawaii-based siblings. Perhaps the picture will change at a higher resolution?

Fable Legends performance at 3840×2160

I’ve left the Radeon R7 370 and GeForce GTX 950 out of my 4K tests, and I’ve snuck in another contender. I probably should have left out the GeForce GTX 960 and Radeon R9 285, which have no business attempting this feat in 4K.




We’ve sliced and diced these frame-time distributions in multiple ways, but the story these results tell is the same throughout: the GeForce GTX 980 Ti is easily the best performer here, and only it is fast enough to achieve nearly a steady 30 frames per second. The 980 Ti’s 99th-percentile result is 33.8 ms, just a tick above the 33.3-ms threshold that equates to 30 FPS.

The Fury X isn’t too far behind, and it leads a pack of Radeons that all perform pretty similarly. Once again, there’s barely any daylight between the Fury and the 390X. The Fiji GPU used in the Fury and Fury X is substantially faster than the Hawaii GPU driving the 390 and 390X in terms of texturing and shader processing power, but it’s really no faster in terms of geometry throughput and pixel-pushing power via the ROP units. One or both of those constraints could be coming into play here.

CPU core and thread scaling

I’m afraid I haven’t had time to pit the various integrated graphics solutions against one another in this Fable Legends test, but I was able to take a quick look at how the two fastest graphics chips scale up when paired with different CPU configs. Since the new graphics APIs like DirectX 12 are largely about reducing CPU overhead, that seemed like the thing to do.

For this little science project, I used the fancy firmware on the Gigabyte X99 boards in my test rigs to enable different numbers of CPU cores on their Core i7-5960X processors. I also selectively disabled Hyper-Threading. The end result was a series of tests ranging from a single-core CPU config with a single thread (1C/1T) through to the full-on 5960X with eight cores and 16 threads (8C/16T).

Interesting. The sweet spot with the Radeon looks to be the four-core, four-thread config, while the GeForce prefers the 6C/6T config. Perhaps Nvidia’s drivers use more threads internally. The performance with both cards suffers a little with eight cores enabled, and it drops even more when Hyper-Threading is turned on.

Why? Part of the answer is probably pretty straightforward: this application doesn’t appear to make very good use of more than four to six threads. Given that fact, the 5960X probably benefits from the power savings of having additional cores gated off. If turning off those cores saves power, then the CPU can probably spend more time running at higher clock speeds via Turbo Boost as a result.

I’m not sure what to make of the slowdown with Hyper-Threading enabled. Simultaneous multi-threading on a CPU core does require some resource sharing, which can dampen per-thread performance. However, if the operating system scheduler is doing its job well, then multiple threads should only be scheduled on a CPU core when other cores are already occupied—at least, I expect that’s how it should work on a desktop CPU. Hmmm.

The curves flatten out a bit when we raise the resolution and image quality settings because GPU speed constraints come into play, but the trends don’t change much. In this case, the Fury X doesn’t benefit from more than two CPU cores.

Perhaps we can examine CPU scaling with a lower-end CPU at some point.

So now what?

We’ve now taken a look at one more piece of the DirectX 12 puzzle, and frankly, the performance results don’t look a ton different than what we’ve seen in current games.

The GeForce cards perform well generally, in spite of this game’s apparent use of asynchronous compute shaders. Cards based on AMD’s Hawaii chips look relatively strong here, too, and they kind of embarrass the Fiji-based R9 Fury offerings by getting a little too close for comfort, even in 4K. One would hope for a stronger showing from the Fury and Fury X in this case.

But, you know, it’s just one benchmark based on an unreleased game, so it’s nothing to get too worked up about one way or another. I do wish we could have tested DX12 versus DX11, but the application Microsoft provided only works in DX12. We’ll have to grab a copy of Fable Legends once the game is ready for public consumption and try some side-by-side comparisons.


Comments closed
    • willgart
    • 4 years ago

    Why there is no test with 2 cards in the PC, like 1 Nvidia and 1 AMD. (or like a 3/4 years old card and a 2015 one, both with a good price performance, like the current GTX 970 and an old equivalent)
    DX12 is supposed to use both cards.
    so it will be great to start seeing this in action.
    instead of dropping our old video cards and doing SLI or Crossfire, if we can just get a good card + our old one and getting some boost, it will be awesome!!!

      • f0d
      • 4 years ago

      the mixed manufacturer sli/crossfire thing isnt a requirement of dx12 its an option and its up to the developer to implement it

      imo its actually going to be a rare thing to see with VERY few games supporting it

    • Digidi
    • 4 years ago

    The Fury X have a lot of shader. They have to be feeded from Async Compute. Is Fable Legends using a lot of asynce compute? Maybe thats the lagg of Fury X

    • TopHatKiller
    • 4 years ago

    i utterly love assuming things not in evidence… every single conspiracy theory is based on this reasoning… one and one-half benchs proves amd is crap or nv is crap or dx12 is crap….no, wait, the absolute opposite of whatever i said is..
    true. truest. true-ish. not true.

    this is all is like a scavenging hunt for epistemological survivors of a nuclear holocaust.

      • Mr Bill
      • 4 years ago

      Holocaust? Godwin’s!

      • LoneWolf15
      • 4 years ago

      I’m still trying to understand what your point was.

        • tipoo
        • 4 years ago

        A..Rolling stone doesn’t grow mold. Yeah, lets go with that.

    • WaltC
    • 4 years ago

    Scott, exactly how many “DX12” features are we looking at here…?

    Was async-compute turned on/off for the AMD cards–we know it’s off for Maxwell, so that goes without saying. Does this game even use async compute? AnandTech says, “The engine itself draws on DX12 explicit features such as ‘asynchronous compute, manual resource barrier tracking, and explicit memory management’ that either allow the application to better take advantage of available hardware or open up options that allow developers to better manage multi-threaded applications and GPU memory resources respectively. ”

    If that’s true then the nVidia drivers for this bench must turn it off–since nVidia admits to not supporting it.

    But sadly, not even that description is very informative at all. Uh, I’m not too convinced here about the DX12 part–more like Dx11…this looks suspiciously like nVidia’s behind-the-scenes “revenge” setup for their embarrassing admission that Maxwell doesn’t support async compute…! (What a setup… It’s really cutthroat isn’t it?)

    I also thought this was very funny: “That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware—it’s just not enabled yet.”

    Come on… nVidia’s pulled this before…;) I remember when they pulled it with 8-bit palletized texture support with the TNT versus 3dfx years ago…they said it was there but not turned on. The product came and went and finally nVidia said, “OOOOps! We tried, but couldn’t do it. We feel real bad about that.” Yea…;)

    Sure thing. Seriously, you don’t actually believe that at this late date if Maxwell had async compute that nVidia would have turned it *off* in the drivers, do you? If so, why? They don’t say, of course. The denial does not compute, especially since BBurke has been loudly representing that Maxwell supports 100% of d3d12–(except for async compute we know now–and what else, I wonder?)

    I’ve looked at these supposed “DX12” Fable benchmark results on a variety of sites, and unfortunately none of them seem very informative as to what “DX12” features we’re actually looking at. Indeed, the whole thing looks like a dog & pony PR frame-rate show for nVidia’s benefit. There’s almost nothing about DX12 apparent.

    We seem to be approaching new lows in the industry…:/

      • Meadows
      • 4 years ago

      Yes, except the TNT was some 20 years ago.

      I’m giving NVidia the benefit of the doubt on this one, as having a “secret feature” that’s in reserve is entirely plausible considering the market. Why enable something if there are zero games that use it?

      However, I do doubt their ability to patch already-sold Maxwell cards on the fly. Pessimistically, I expect a second round of updated silicon, with different names such as "GTX 960 Ti", "970 Ti", and/or "980 Ultra".

        • DoomGuy64
        • 4 years ago

        AFAIK, Nvidia’s async problem is possibly driver related, but at the same time it has already been enabled with software emulation, which makes their incoming support comment questionable. The Oxide guys have made a couple of statements about their experience with it:
        "some unfortunate complex interaction between software scheduling trying to emmulate it which appeared to incure some heavy CPU costs."

        "Essentially, in order to perform Async Compute Maxwell might have to rely significantly on context switching, which is the process of storing and restoring a thread so that its execution can be resumed later; this is what allows a single CPU to have multiple threads, thus enabling multitasking. The problem is that context switching is usually pretty intensive from a computational point of view, which is why it’s not clear yet if Async Shaders can be done by current gen NVIDIA cards in a way that will be useful in terms of performance."

        Then:

        "We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We’ll keep everyone posted as we learn more."

        I'm sure everyone is willing to give Nvidia the benefit of a doubt, but if this still isn't fixed over the next few months, then it probably isn't something that they can fix.

        Also, the zero games comment is pretty funny. Zero games use it, only because dx12 games haven't hit the market. Async is a key feature of dx12, and the consoles already use it. It's not a question of "if", but when.

        Regardless, it's not like missing this one feature kills Nvidia off completely. Their Ti still does well, but it is enough to give AMD the edge over their mid-range cards. It is what it is.

    • mikato
    • 4 years ago

    How about some slower CPUs?!?!! And more results than just FPS! There is a big hole in these DirectX 12 results without that.

    Otherwise, good write-up.

      • maxxcool
      • 4 years ago

      🙂 I’d LOVE to see a 4ghz 2500k dx12 bench ..

    • Ninjitsu
    • 4 years ago

    Hey I just checked out the AT comment section, turns out Epic are in bed with Nvidia and UE4 is biased towards Nvidia. I also learnt that the 390X puts the 980 to shame, and that the only Nvidia card that wins anything is the outrageously priced ($650!) GTX 980 Ti. Fury X does well considering that the 980 Ti has an unfairly large number of ROPs.

    It’s so stupid.

    At least the comments here vary between entertaining and informative. Onward to 300!

      • MathMan
      • 4 years ago

      TIL that it’s unfair to have a large number of ROPs.

      You could turn it around and say that AMD was incredibly stupid to design a chip with this much BW and then forget to add a sufficient amount of ROPs to make use it.

        • sweatshopking
        • 4 years ago

        you know he’s joking, right?

          • MathMan
          • 4 years ago

          Ouch. I wasn’t. Embarrassing. It seemed like such an obvious AMD fan thing to say.

            • Ninjitsu
            • 4 years ago

            And it is! – that’s exactly what people were saying. I don’t know why I subjected myself to reading the whole bunch of things. Mildly entertaining but after a point I felt sort of sick. 😀

          • DoomGuy64
          • 4 years ago

          Joking or not, it’s the truth. Some of his comments are exaggerated, but the part about the ROPs is spot on, which is why someone could be confused by the “humor”.

          The only thing I took away from the “joke”, was that there really is no length nvidia fans won’t go to diss and dismiss AMD, and it’s rather sad. The Ti has more ROPs, and beats the Fury because of it. So? That doesn’t apply to the 970/980, and AMD has picked up the slack in that segment.

          You guys act like the Ti’s win over the Fury somehow invalidates the 390/X’s win over the 970/980, and it really doesn’t.

            • Ninjitsu
            • 4 years ago

            WTF. LET ME BREAK IT DOWN FOR YOU.

            1) No one’s dismissing AMD. It’s the fanboys. It’s the people who created so much noise over AoS and Async compute that are causing this reaction.

            2) The Ti having a higher ROP rate isn’t “unfair”. It’s a design decision. It’s like saying HBM or Async compute is “unfair”. It’s not unfair, it’s fair.

            3) Hawaii and the 980 have the same ROPs. So the 390X/390 tie the 980 because of the same number of ROPs, plus the AMD cards (unfairly???) get to use async compute.

            4) The 970 only loses because of having an (unfairly!) low ROP rate.

            How does that sound? Daft? No? Yes? Can’t tell? ¬_¬

            No one’s acting like GM200 win invalidates Hawaii’s win. Again, the only reason people are picking on AMD/fans is because of all the premature noise they created because of AoS. And in fact, I think we’ve been saying for quite some time that AMD’s drivers have been holding Hawaii back.

            What we are looking at and examining is architectural tradeoffs and how they’ve played out. The 980 Ti wins because Nvidia made the right call. Hawaii wins because AMD made the right calls. Fury loses because AMD made the wrong ones. GM104 performs quite well if you consider that it’s a smaller, lower powered, cheaper to produce mid-range chip. Of course, it’s more expensive for consumers than Hawaii which is actually what goes against it.

            None of this is still really a problem for Nvidia because:

            #1 No games have been released yet

            #2 Async compute, if it can be enabled on Maxwell, will likely improve performance.

            #3 A game could be produced that uses a larger set of DX12 features, including ones that both Nvidia and AMD have. Things could vary considerably.

            #4 A lot of existing DX11 and older games are held back on AMD because of bad drivers.

            All things are obviously still problems for AMD. This will change as the first DX12 generation actually comes out, and if Pascal gets delayed.

            Also, none of what I said was exaggerated (by me). Everything was paraphrased.

            • DoomGuy64
            • 4 years ago

            1.) Yes they are. The fanboys are the worst, but you’re doing it too. See #3.
            2.) Never said it was “unfair”. Nvidia made the right design choice there.
            3.) What did you just say about being “unfair”?
            4.) The 970 still has 64 ROPs. It just has less memory and shaders.

            Fury was not designed as efficiently as the Ti. Agree 100%. That doesn’t mean it’s compute power can’t be useful, but it definitely was the wrong choice for high resolution, and it’s not going to beat the Ti. Hawaii on the other hand, seems to have hit the nail on the head for mid-range.

            #1. No games use dx12 yet. This means nothing, because all future games are pretty much guaranteed to support dx12. The xbox runs dx12, and it would be a straight port for AMD, including support for Async.
            #2. Maybe. The 390’s still going to win though, having more ram and shader power.
            #3. Sure, and I can pretty much guarantee it. All future gameworks titles will be optimized for nvidia’s extra dx12 features. The only catch is Async, and that gameworks never increases performance.
            #4. Nope. Hawaii isn’t so far behind in dx11 that it’s unplayable. It may occasionally lose by a small margin, but it handles dx11 games fine, and AMD consistently provides decent performance improvements with driver updates.

            All in all, I don’t see any problem. Things are pretty equal right now, aside from the undeserved AMD bashing. Async is no less “unfair” than the Ti having more ROPs, so that needs to stop. It’s not cheating, it’s a performance enhancing feature of dx12. To me, it seems like AMD is now doing in hardware what Nvidia had previously been doing in software with their dx11 driver, and it turns out to be more efficient. It is what it is, don’t make excuses for Nvidia. They still have that $650 Ti which you can rave about.

            • LoneWolf15
            • 4 years ago

            "You guys act like the Ti's win over the Fury somehow invalidates the 390/X's win over the 970/980, and it really doesn't."

            I won't deny the 390X wins. However, it wins at a cost of more than 150 watts of power consumption under load. I'd rather lose a couple of frames and save on my energy bill with a GTX 980, as well as have a cooler running system.

            I don't dismiss AMD; I had two R9 280X cards in Crossfire before the 980, and two 6970s before that, and so on. I'm not a "fan" of either insomuch as I'm a fan of the product that gets the job done for me in an efficient manner.

    • Klimax
    • 4 years ago

    Is it downloadable anywhere or is it currently unavailable? (Couldn’t find a thing)

      • Ninjitsu
      • 4 years ago

      I don’t think so – Scott implied it came straight from MS.

      • Damage
      • 4 years ago

      No, as far as I know, it’s press only. You can sign up for the Fable Legends beta here, though:

      https://www.fablelegends.com/betasignup

    • Milo Burke
    • 4 years ago

    Great Scott! That’s a great article, Scott!

    http://www.weregoingback.com/wgb-content/uploads/Christopher-Lloyd-as-Doc-Brown.jpg

    • Mr Bill
    • 4 years ago

    About the core/thread scaling differences… Given that the rest of the platform is the same; your thoughts about numbers of threads in the drivers is very interesting. I wonder if AMD has a lower limit on thread usage because the AMD 8-“core” is actually 4 modules and they are not targeting higher thread counts?

    • RdVi
    • 4 years ago

    Thanks for the hard work!

    I’m curious about CPU clock speeds impact on performance. Combining this with threads would introduce a huge amount of test runs, but perhaps as you noted in the article using a few lower end cpu’s would be just as interesting and more of a real world test.

      • Jason181
      • 4 years ago

      Pcper inadvertantly did this due to a power management problem. It’s only a comparison of two data points, but interesting nonetheless.

        • RdVi
        • 4 years ago

        I did notice that after reading the article here first. It is mainly interesting as AMD GPU performance with slower processors has been uncompetitive for some time with DX11. As it stands an AMD GPU purchase is a very bad idea for anyone without a fast CPU.

        It seems from this test at least that this may no longer be the case with DX12, indeed they actually fared better than Nvidia at lower CPU clock speeds. I’d love to see more investigation into this once some final DX12 benchmarks are made available.

          • Jason181
          • 4 years ago

          Yes, it would be interesting. We’ll probably get our wish once DX12 games become more prevalent.

    • travbrad
    • 4 years ago

    So one pre-beta benchmark was enough for some people to declare Nvidia was doomed in DX12. I assume this one doesn’t count?

    In all seriousness I think we will need to see at least a handful of DX12 games to get any real sense of how they will truly perform, but it’s nice to see TR testing them as they become available.

      • Puiucs
      • 4 years ago

      pretty sure that this benchmark puts AMD in a favorable position (sans the fury X). the 390 is trading blows with the 980 depending on which version of the cards you have.
      and the biggest winner is by far the 290x if you take price in consideration.

    • orik
    • 4 years ago

    I’d love to see more CPU testing with DX12. Is the Fable Legends benchmark available for everyone to try out?

    I want to know how a Intel X5690 or a i7-980x compares to a modern quad core quad thread for green team in particular.

    Do the extra cores make up for IPC efficiency?

    • Bensam123
    • 4 years ago

    I think it’s probably going to be more interesting to see the results of DX12 across multiple processors and price points. That was roughly covered at the end by disabling/enable parts of the processor, but definitely isn’t comparable across generations of processors, models (within reason), or different brands.

    Guess we’ll start to see more of the whole picture as more DX 12 games are released and tested in various ways.

    • the
    • 4 years ago

    It has been awhile since I’ve seen such a large performance drop with Hyperthreading enabled. 8C/8T to 8C/16T is only a single data point and it is also a rather large number of threads. Any data for the more common 2C/4T and 4C/8T configurations? I’m curious of those so any signs of slow down.

    As for a cause, I’m more inclined to think it is more about workloads shifting between cores and causing a bit of L2 cache thrashing. Power management can rotate around workloads to normalize individual core temperatures. Not sure how practical it would be to lock down specific threads to individual cores here but conceptually it would prevent the trashing.

    The other oddity I’ve spotted is the spiky nature of the GTX 970. There are a handful of frames, it seems, that simply take a long time for these cards to render. With the other cards, GTX 950/960 and R9 285/370 was to be somewhat expected considering their midrange/low end nature but I didn’t expect them to be that bad either.

      • Krogoth
      • 4 years ago

      Hyper-threading and gaming have never really mixed well at all. DX12 doesn’t change the game at all.

        • Klimax
        • 4 years ago

        It would, if there would be sufficient workload.

          • auxy
          • 4 years ago

          It’s not really about ‘sufficient’ per se, but about having the right workloads. For it to be useful within a single application, Hyper-Threading requires an application to have a healthy mixture of both FPU and ALU work, and it helps a lot if the application is not particularly latency-sensitive! (*’▽’)

          … you can probably see where I am going with this already, but basically GAMES ARE NOT THAT. ( ゚Д゚)

          Games are almost exclusively nasty, branchy, hard-to-predict FPU-heavy stuff; they’re also extremely latency-sensitive. CPU cores need a really good front-end and a strong FPU; little else is relevant. Adding another thread on to the same CPU core is only going to increase latency as the core has to shift stuff around to work in the second thread. That’s why I have hyper-threading disabled on my 4790K. ( `ー´)ノ

            • srg86
            • 4 years ago

            Games sound like most simulation software, especially when you’re trying to simulate something in the real world, branchy, hard-to-predict FPU heavy stuff, at least the simulations I’ve been involved with (maybe there are some that are more parallel).

            • auxy
            • 4 years ago

            Hehehe. “Games are essentially simulations” is one of my catch phrases in Freenode ##hardware! (*’▽’)

            • the
            • 4 years ago

            At this point in history, I would have thought AI would have evolved. That is more integer based and would have been great to run alongside a FPU heavy thread. Maybe now that consoles can run 8 threads, they may finally change.

            • Airmantharp
            • 4 years ago

            Well, they can “run” eight threads, but it’s doubtful that they can push more powerful AI- damn things are too weak to run at 1080p60, and that’s where all the new resources go.

            And I do believe that it’s the limitations of console hardware that’s left AI stagnating in games.

            • I.S.T.
            • 4 years ago

            AI’s been in the same state since 2004. I don’t think CPU power is the issue per se, I think dev time and the difficulty of making good AI is.

            It would be easier to implement AI if everybody(including the consoles) had 20 cores with Skylake level IPC, but a ton of games would still come out with crappy AI. It’s as much an art designing good AI in games as it is a science.

            • the
            • 4 years ago

            Despite being weak, they’re still more powerful than either the PowerPC or SPE (what is a branch unit?) cores used in the previous generation of consoles.

            • I.S.T.
            • 4 years ago

            Eh, not quiiiiiiiiiiiiiiiiite.

            Their peak floating point performance is the same(Well, the Xbone’s might be higher. It got a clockspeed boost not that long before release), but the peak integer performance is much higher. The biggest deal is it’s much easier to hit those peaks(Hell, I don’t think it was possible to get anywhere near the theoretical peak integer performance on the old consoles’ CPUs… Floating point, if you were a sneaky optimizing ninja of doom, you could get near), and that alone helps the average game maintain better levels of CPU performance and utilization.

            • the
            • 4 years ago

            The peaks are high but are unsustainable considering that the old console hardware is strictly in-order. Also the SPE lacks features like branch hardware which can cause tremendous stalls in the execution pipeline.

            For branchy code like AI, I figure the new consoles would be able to post higher sustained performance.

            • auxy
            • 4 years ago

            Well, the PS3 Cell’s SPEs are really more akin to the specific vector units (AVX) inside the Jaguar cores, and speaking purely in terms of raw performance, the Cell is actually faster (for SP FP workloads) than the Jaguar CPUs in the newer consoles. Of course, raw floating-point throughput doesn’t really tell you much, but the point is that the newer consoles really do have very slow CPUs. Eight cores does NOT make up for the poor per-thread performance of the low-power cores used, thanks to Amdahl’s law which I’ve mentioned elsewhere in this post’s comments.
            [quote=”the”<]At this point in history, I would have thought AI would have evolved. That is more integer based and would have been great to run alongside a FPU heavy thread.[/quote<]AI has evolved, a bit, and it's way, way, way too hard to run on a game console alongside a game. To make AI convincing it has to have a huge dataset to work with and traversing large indices is, well, [i<]impossible[/i<] when you're trying to run an intense 3D game on the same processor. [quote="the"<]Maybe now that consoles can run 8 threads, they may finally change.[/quote<]People have been trying to push this "now that the consoles have 8 threads" idea for a while now, but not only do the consoles not have 8 threads (as the OS reserves two or three for itself), but a Haswell dual-core can utterly smash the five or six threads they have. No, in terms of CPU stuff, games will continue to be as single-threaded as they have always been and will always be, simply because that's the nature of simulations. (*‘ω‘ *) The stuff that can be parallelized -- physics and graphics -- will be, and the stuff that can't will just take a faster CPU.

            • sweatshopking
            • 4 years ago

            Since you seem to know everything about everything you should have no problem finding fantastic pay!

            • auxy
            • 4 years ago

            Very rude.

            • sweatshopking
            • 4 years ago

            Lolirl. It’s funny you should say that. I was just teasing out of love.

            • Jason181
            • 4 years ago

            Actually hyperthreading helps a lot if you look at the i3 vs i5 benchmarks. It’s just that we usually are talking about i7 since we’re enthusiast, and games don’t use more than three or four threads usually, so it’s more of a hindrance than a help with lots of cores.

            • auxy
            • 4 years ago

            You mean i3 vs Pentium benchmarks. (‘◇’)ゞ

            • Jason181
            • 4 years ago

            No, I mean that you don’t gain a huge amount going from an i3 to an i5 in a lot of games, meaning that hyperthreading is quite effective, assuming the game uses at least 3 threads (seems if they’re multithreaded at all, they usually use at least 3). But I understand your point, and it would be clearer to compare those two. It’s just that often CPU comparisons don’t include Pentiums.

            • auxy
            • 4 years ago

            Mostly in these cases it’s really just that hyper-threading lets you avoid a context switch (+whole cache flush) when the OS or another application needs to do something in the background.

            I’m not saying it’s not helpful, but it’s really got more to do with the fact that the CPU just doesn’t have enough cores. (‘ω’)

            • Jason181
            • 4 years ago

            If that were the case, wouldn’t hyperthreading have helped in multithreaded games a lot more in the Pentium 4 era?

            • auxy
            • 4 years ago

            Well, no. Netburst cores didn’t have anywhere near the execution resources that even an Ivy Bridge core does.

            Basically, hyper-threading on these newer chips is an entirely different animal.

            • Jason181
            • 4 years ago

            Yes, as evidenced by the Pentium 4, without free execution resources hyperthreading isn’t that useful; it’s about a lot more than just eliminating a context switch.

        • the
        • 4 years ago

        While HyperThreading hasn’t increased performance, it never really hurt it either. A -10% impact is kinda large and something I haven’t seen since the Pentium 4 days.

      • Ninjitsu
      • 4 years ago

      Would love to see results from Scott’s testing, but AT has 2C/4T and 4C/8T results.

        • f0d
        • 4 years ago

        im still amazed how well a plucky little dual core with hyperthreading does (in pretty much everything not just fable)

        lately they have been my go to processor for budget builds and so far no problems playing just about anything with one, for around half the price of an i5 they are pretty amazing

      • Andrew Lauritzen
      • 4 years ago

      I wouldn’t be surprised if some of this is Win10 maturity-related to be honest. Probably worth bringing up with Microsoft given they control all the pieces in this particular equation 🙂

        • auxy
        • 4 years ago

        Surely the scheduler wouldn’t have seen a regression, yeah? My understanding was that Windows 8/8.1 were handling Hyper-Threading pretty well these days. (‘◇’)ゞ

          • Andrew Lauritzen
          • 4 years ago

          It definitely could have seen “regressions” in areas like this. Getting peak performance in these sorts of game benchmarks is often very dissimilar with getting optimal battery life while viewing a video, etc 🙂 Ideally the OS will handle both well (and indeed it was pretty good in 8/8.1) but the latter tends to get more optimization focus so it’s very possible that some regressions slipped through.

      • Mr Bill
      • 4 years ago

      A rather nice graph of Frame Rate vs cumulative percent answers your question for both the Ti 80 and the Fury over at anandtech: http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/4

    • sweatshopking
    • 4 years ago

    Where can I get this to run it at home?

    • Meadows
    • 4 years ago

    I came here expecting weird results, given how NVidia’s cards were uncommonly humiliated in some other DirectX 12 benchmark recently (for one reason or another), but the 980 Ti stayed impressive.

    I am, however, surprised by the results of the 390X.

    • anotherengineer
    • 4 years ago

    Damage, 1st page, in your chart, you have XFX R9 390 and 4096 for memory. Should that be 8192??

    Also at 1080p the 390 and the 980 are so close in performance, but newegg doesn’t seem to reflect that
    http://www.newegg.ca/Product/Product.aspx?Item=N82E16814127874&cm_re=r9_390-_-14-127-874-_-Product
    http://www.newegg.ca/Product/Product.aspx?Item=N82E16814127834&cm_re=gtx_980-_-14-127-834-_-Product

    $220 diff + 13% tax ~$250 diff

    I wonder if more DX12 games will bring the 390 and 980 to close to the same performance level if there will be some price cuts to the 980??

    • HisDivineOrder
    • 4 years ago

    Seems to me, if the 390X is spoiling the R9 Fury X’s day, then the 290X would be, too, since they’re mostly the same. And the 290X at its current pricing would REALLY spoil its day. Then if the 290 and 290X are as close as they typically have been, that would also mean the R9 290 would be spoiling the entire Fiji line and the 390 and 390X.

    Of course, Fiji isn’t THAT far behind 980 Ti, either, which in reality means that the 290 cards still floating around right now are also ghosting it. So basically the R9 290/290X are the best deal you can get atm and should get if you’re getting anything unless you must have the absolute top end performance…

    …in this specific game benchmark anyway. I do wish AMD would work on fixing their DX11 drivers to be multithreaded and to implement something along the lines of Shadercache to improve DX11 performance, which would benefit a great many games a lot of us own already.

      • auxy
      • 4 years ago

      290X is significantly slower than 390X. I’ve seen it myself in person with my own 290X vs. a friend’s shiny new 8GB 390X, even though my card overclocks to 1111Mhz core and his came in at 1040Mhz.

      I don’t know exactly what the difference is; I know the memory is higher clocked. Maybe it’s much lower latency or something?

        • slowriot
        • 4 years ago

        TR’s reviews with the 290X and 390X in them show the cards being close. I would investigate issues in your system if you’re noticing such a big difference from your friends performance. Your card may be throttling.

          • auxy
          • 4 years ago

          It isn’t.

          “Such a big difference” is rather different in meaning from “significant”. You should be more careful with your language.

            • slowriot
            • 4 years ago

            https://techreport.com/review/28612/asus-strix-radeon-r9-fury-graphics-card-reviewed

            Huh. Well, TR shows rather convincing that these cards are very, very, very close in performance. And if your's is OC'd above your friends 390X then it should be even closer. I think you are either mistaken about what "significant" means or you're experience some kind of issue.

            Anyway, it's always pleasant to exchange posts with you. No way was I seriously thinking "Do I even want to bother" when I saw your name and sure enough you immediately justified that feeling. Thanks!

            • auxy
            • 4 years ago

            [quote=”slowriot”<]Anyway, it's always pleasant to exchange posts with you. No way was I seriously thinking "Do I even want to bother" when I saw your name and sure enough you immediately justified that feeling. Thanks![/quote<] Aww, thanks! I always like exchanging posts with my fans. (*'▽') Did [b<]you[/b<] read the article? [url=https://techreport.com/r.x/radeon-r9-fury/pcars-99th.gif<]Even the R9 390 is clearly faster than the 290X in most games.[/url<] The pattern continues anywhere the card isn't limited by shader throughput, and you can easily identify those situations as those are the only times the Fury will really take off past the 390. [url=https://en.wiktionary.org/wiki/significant<]Definition of 'significant'.[/url<] It just means that it's likely to be the result of something consistent and repeatable rather than the result of random chance. Maybe you are the one who doesn't know what 'significant' means? ( `ー´)ノシ

            • slowriot
            • 4 years ago

            No where have I claimed the 290X is faster than the 390X or 390 for that matter, just that they’re very close. Which is demonstrated repeatedly in that article. I find it hilarious you’re referencing the PCars graph that shows a 1.2ms spread between 390X and 290X. That backs my point up… That is a repeated, but insignificant, difference.

            And the Wiktionary link…. so you’re just going to ignore those 1-4 definitions under adjectives? Got’cha. I would stop posting links to stuff which undermines your own argument. You could have said “Oh well I was thinking about it more this way” in your first reply to me but no, you IMMEDIATELY get defensive (over what I don’t even know) and act like a jerk.

            To answer your earlier (rhetorical) question… that’s why no one listens to you.

            • auxy
            • 4 years ago

            It’s not insignificant. It literally is by definition significant. (*‘ω‘ *) Sorry! You lose.

            • Waco
            • 4 years ago

            You have to wonder why everyone responds to you this way…right?

            I promise you it’s not a conspiracy.

            • auxy
            • 4 years ago

            ‘Everyone’ doesn’t respond to me this way, only people who can’t stand being challenged. Little boys with ego problems are the least my concerns! ( *´艸`)

            • Pwnstar
            • 4 years ago

            Little boys? What are you, 12?

          • Mr Bill
          • 4 years ago

          They were close in DX11 but the 390 and 390X with a slight overclock and twice the GDDR5 ram pull away from the 290 and 290X in this particular DX12 Game.

    • Ninjitsu
    • 4 years ago

    Ooh cool thing I just saw at AT (haven’t read their article yet, just glanced over some graphs):
    http://images.anandtech.com/graphs/graph9659/All-4K-render.png
    http://images.anandtech.com/graphs/graph9659/All-4K-render-low.png

    For all the noise about async compute, Nvidia is better at Compute Shader Simulation and Culling. But yeah, jibes at the async compute folks aside, these two graphs are really interesting for an insight into architectural differences between the two. Would love to see Scott talk about these results (and for 1080p too, please!).

      • Damage
      • 4 years ago

      Interesting. Dynamic GI could be faster on Maxwell if UE4 is using Nvidia’s VXGI algorithm, which is built into UE4 (as I understand it) and which uses DX12’s conservative rasterization. GCN lacks that feature.

        • Andrew Lauritzen
        • 4 years ago

        Fable actually uses a GI system they developed internally based on light propagation volumes. It uses neither VXGI nor any feature level 12_1 features like conservative rasterization, else it would not run on the consoles (or a fair amount of PC hardware). See here:
        http://www.lionhead.com/blog/2014/april/17/dynamic-global-illumination-in-fable-legends/

        It's hard to draw any conclusions about the frame breakdown numbers without knowing specifically how they are measured. In particular things like the amount of time things take in the presence of pipelining and async compute is not well defined. Ex. submitting an async compute shader and measuring how long until it "finishes" might actually show a significantly longer time than submitting it synchronously, since in the former case it's competing for GPU resources with other things running at the same time. Thus things like basic timings become fairly meaningless and are not representative of how efficiently an architecture is executing that portion of the frame.

        It's neat to see the breakdown, but the deeper you dig with perf stuff the more complex the interpretation becomes 🙂 I'd definitely caution folks to avoid trying to draw simple conclusions about "async compute efficiency" and so on, as I've done a few times in the past. It really is not that simple.

      • Mr Bill
      • 4 years ago

      I like the Frame Rate vs cumulative percent graphs anandtech has put up. They expand the areas that show the largest differences for core count and thread count.
      http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/4
      http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/5

      TR's Frame Time vs cumulative percent graphs have the same information but its interesting to see the variation more clearly. How about a double Y axis Damage?

    • guardianl
    • 4 years ago

    Based on what AMD has said, including that GCN 1.x doesn’t scale beyond four “shader engines”, it is pretty clear now that Hawaii was truly the fully realized version of GCN 1.x. Fiji’s last chance to prove that it was “just drivers” holding it back was DX12 / Vulkan titles and we’re seeing every indication it’s just as unbalanced for DX12 games as DX11.

    On the brightside, it looks like DX12/Vulkan are really going to ignite the competition fire between Nvidia and AMD again since driver-performance was really Nvidia’s only consistent competitive advantage.

    Sadly, what Nvidia can no longer invest in drivers, they will surely invest in gameworks and such. The GPU software wars will continue, just in a new arena. 🙂

    • geekl33tgamer
    • 4 years ago

    Suddenly regretting selling my trio of R9 290X’s in favour of a pair of GTX 970’s. :-/

      • hellowalkman
      • 4 years ago

      Now sell these and get 2 390s 😀

        • Kretschmer
        • 4 years ago

        290X->390 is a sidegrade…unless you’re at a crazy rez, the 290X is an unbeatable value.

      • chuckula
      • 4 years ago

      SO ARE WE!!
      — The Power Company.

        • geekl33tgamer
        • 4 years ago

        +1 I thought that was pretty funny! 😛

        • Bensam123
        • 4 years ago

        Hah! Yeah good thing, the power company would’ve started making money off of him in 3-4 years after break even. Instead he paid it up front to Nvidia! ^^

          • chuckula
          • 4 years ago

          Hey loser: Where’s the apology for your flat-out wrong propaganda campaign about Intel “abandoning” socketed CPUs?

          You might find this article about socketed chips that I guarantee are better than Zen a fascinating read: https://techreport.com/news/29100/the-core-i7-6700k-and-5775c-are-both-available-now

          Where's your disingenous screed about how AMD loves overclocking freedom because the only new chip they've released in 2 years is soldered onto a motherboard?

          As for GPUs, the fact that a chip that's a power-hogged overclocked version of another originally $550 GPU that was AMD's premier flagship GPU for over two years can marginally beat a part from Nvidia that was literally never a high-end GPU isn't anything to gloat about. The fact that a $650 insult from AMD known as the Nano is tangibly worse than the GTX-980 isn't anything to gloat about either... well, not for you anyway.

            • Bensam123
            • 4 years ago

            So… None of what you just said had anything to do with what I just said. Gotcha…

            BTW I hope you file this away under ‘I can’t figure out how to use search on my Bensam database’.

        • DoomGuy64
        • 4 years ago

        https://www.youtube.com/watch?v=fBeeGHozSY0

          • chuckula
          • 4 years ago

          I’m sure you think that video is effective.
          If this discussion had anything to do with CPUs.
          Which it doesn’t.

            • DoomGuy64
            • 4 years ago

            Power use is Power use.

            • chuckula
            • 4 years ago

            Yes, and AMD knows *all about* power use.

            • DoomGuy64
            • 4 years ago

            Lovely opinion. I think you need to re-watch the video.

            Oh, and it’s also real mature of you to abuse your gold status privileges. I was wondering what was going on for a second there. Also, I noted that in between your -3’s, there was a period of time where I got a single -1, and you got a single +1. Looks like you have a second account here too.

            One point I should probably emphasize from this mythbusting is that AMD only has higher [i<]peak[/i<] power use. It doesn't max out sitting idle. It peaks running things like FurMark, and there are several power management options available that can limit FurMark-type situations. Even with FurMark, an extra 100 watts isn't going to radically alter your power bill, and it will take years to justify the purchase of a similarly performing Nvidia card.

            Not that facts matter to you, but someone needs to debunk your nonsense for the good of the community.

      • derFunkenstein
      • 4 years ago

      For why? A pair of pretty much anything faster than a GTX 960 or Radeon 285 will be way beyond playable, even at 4K.

      • auxy
      • 4 years ago

      I told you that was a bad idea when you did it. Doofus. ( ;∀;)

      But nobody listens to the stupid girl with the kaomojis… (´Д⊂ヽ

        • geekl33tgamer
        • 4 years ago

        Site is full of hypocrites. Everyone beat up on me for “being an idiot” when I bought the 290X’s in that particular thread.

        I then got bombarded with the “I told you so” comments when I announced they and the FX-9590 were going in favour of an i7 and GeForces (ya know, circumstance and a magic smoke incident forced that decision).

        I was then called an idiot again when I bought 2 GTX’s for SLI with the said i7. Whatever – System’s going well after almost a year btw.

          • Bensam123
          • 4 years ago

          No forethought, and everyone loves an 'I told you so', especially when used against AMD in any flavor.

          • f0d
          • 4 years ago

          im gonna go against the "hypocrites" and say pretty much whatever you buy nowadays is good value, you can't go wrong with either amd or nvidia

          with the exception of running tri crossfire though – afaik three cards no matter the brand or model really are not worth it imo

          you should have kept 2x 290x's and just sold one imo 🙂

          either way the 2x GTX's will still be great cards until the next generation of cards comes along

          just don't sell them…. they are fine

            • JustAnEngineer
            • 4 years ago

            It looked to me as if folks were trying to help you by freely spending their time to provide you excellent advice at no charge. There was no hypocrisy involved.
            [url<]https://techreport.com/forums/viewtopic.php?f=33&t=94427[/url<]
            [quote="Forum gerbils"<]
            · For £276 (including VAT), a Radeon R9-290 is still a decent value.
            · there is the multi-monitor + multi-GPU bugs that tend to crop up for being on the bleeding edge.
            · a single GPU solution is the way to go.
            · I strongly advise against SLI/Crossfire insanity.
            · I recommend the Radeon R9-290 as an excellent value.
            [/quote<]
            [url<]https://techreport.com/forums/viewtopic.php?f=33&t=95714[/url<]
            [quote="Forum gerbils again"<]
            · Aren't 3 way SLI and Crossfire setups notoriously brittle? My own preference would be to just get 2 graphics cards and a more modest PSU.
            · Scaling is going to be better with a pair of GCN 1.1 cards than it would be with a trio of GCN 1.0 cards.
            · only use additional cards when a solution with fewer cards simply does not exist.... trying to use three cards while scrimping on the CPU is a recipe for disappointment.
            · You will find a heck of a lot more games that scale properly using a two card setup than a 3 card setup (personally I prefer one big card myself as it offers a more consistent performance across the board).
            · 280X also lacks the GCN 1.1 XDMA engine for large data cross transfers. GCN 1.1 cards or SLI would be a better choice if you must/absolutely want to go multi-GPU. Otherwise one bigger card seems the wiser purchase.
            · What a third or fourth card gains in raw FPS you lose in frame-time inconsistency
            · What happens when game X comes out that you've been really looking forward to and there is no Crossfire or SLI support for it?
            · Buying three R9-280X graphics cards for £600 is a bad idea. Two each R9-290 or R9-290X is less bad for the same price.
            · It's your money to spend, of course. You're not obligated to pay heed to the recommendations of the community.
            [/quote<]

            • auxy
            • 4 years ago

            You replied to the wrong person. (*’▽’)

          • MOSFET
          • 4 years ago

          [quote<]Whatever - System's going well after almost a year btw.[/quote<]
          That's all that matters - if you don't like the opinions shared with you, don't ask for them next time. I mean this as a "good mood" comment despite the easily misunderstood overtones (so prevalent in online commenting).

    • ultima_trev
    • 4 years ago

    This goes to illustrate how future proof Hawa… Erm, Grenada is. Also shows how ROP/geometry limited Fiji is. Hopefully Greenland will have double the ROPs/geometry rather than just MOAR SHADERZ UND MOAR TEXTUREZ(tm).

    Also shows GTX 970 is at its limits here. Glad I chose the 390 over the 970… and the 390X for that matter.

    • Mr Bill
    • 4 years ago

    Ho! I just realized the R9 nano is in the test system configuration table, but not in any of the graphs. I sense an upcoming: In-The-Nano, Second TR Review. Yay!

      • Damage
      • 4 years ago

      It’s in the 4K graphs. 🙂

        • Mr Bill
        • 4 years ago

        DOH! I must have stutterrrred when I was reading the review.

        • Mr Bill
        • 4 years ago

        “…the GeForce GTX 980 Ti is easily the best performer here, and only it is fast enough to achieve nearly a steady 30 frames per second…”
        Shouldn't this read "a steady 30 milliseconds per frame"?

          • Damage
          • 4 years ago

          Those two things are interchangeable, more or less. 33 ms/frame = 30 FPS. Trouble is that most utilities measure FPS by averaging over a series of one-second intervals, obscuring changes in frame times.

          I’m looking at frame times and saying it’s nearly a *steady* 30 FPS–i.e., 33 ms or less per frame. For reals.
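
          To put a number on how the averaging hides things, here's a tiny sketch with made-up frame times (not benchmark data): one second that is mostly 30 ms frames plus a single 200 ms hitch still averages out to an innocent-looking FPS figure.

              # One "second" of frames: mostly 30 ms frames plus one nasty 200 ms hitch.
              frame_times_ms = [30.0] * 27 + [200.0]            # 27*30 + 200 = 1010 ms
              total_seconds = sum(frame_times_ms) / 1000.0

              avg_fps = len(frame_times_ms) / total_seconds     # ~27.7 FPS, looks "nearly 30 FPS"
              worst_frame_fps = 1000.0 / max(frame_times_ms)    # 5 FPS during the hitch itself

              print(f"average over the second: {avg_fps:.1f} FPS")
              print(f"instantaneous rate during the spike: {worst_frame_fps:.1f} FPS")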

            • Mr Bill
            • 4 years ago

            Got ya. 😉
            I was forgetting that 4K had dragged the whole pack down to 37 fps and lower.
            That said, it's fortuitous that 33 ms/frame gives 30 frames/sec, as tabulated below…
            ms/frame = frames/sec
            3.33 = 300.03
            6.67 = 150.01
            10.00 = 100.00
            13.33 = 75.00
            16.67 = 60.00
            20.00 = 50.00
            23.33 = 42.86
            26.67 = 37.50
            30.00 = 33.33
            33.33 = 30.00
            36.67 = 27.27
            40.00 = 25.00
            43.33 = 23.08
            46.67 = 21.43
            50.00 = 20.00
            53.33 = 18.75
            56.67 = 17.65
            60.00 = 16.67

    • DPete27
    • 4 years ago

    Pretty safe to say that you were partially CPU-limited on the 1080p tests (more frametime spikes) as compared to the 4k tests where the GPU was limiting and frametimes leveled off.

    • hellowalkman
    • 4 years ago

    I heard there is a new driver for the Fury cards, optimised for Fable Legends, which increases performance somewhat.
    The ExtremeTech review is done on that platform.

    • chrcoluk
    • 4 years ago

    Shame on all those reviewers who were claiming the 960 is a good 1080p card to buy in 2015.

      • UberGerbil
      • 4 years ago

      Actually
      [quote<]The worst offenders are the GeForce GTX 950 and 960 and the Radeon R9 285. All three of those cards have something in common: only 2GB of video memory onboard. Although by most measures the Radeon R7 370 has the slowest GPU in this test, its 4GB of memory allows it to avoid some of those spikes.[/quote<]
      It's quite possible that a 960 [i<]with 4GB[/i<] might be a perfectly fine card to buy in 2015. People have been suggesting 2GB is not enough since before 2015. And it's not like such a configuration is exotic or [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=gtx+960&N=100007709%20600007787&isNodeId=1<]expensive[/url<].

      I'd actually love to see a follow-up review, testing the same GPU(s) with multiple memory configurations.

    • brucethemoose
    • 4 years ago

    Looks like Hawaii is aging really well. I remember when it was a good deal slower than the 780 TI, but now it’s matching the 980 and the Fury.

    It makes me wonder… how well would Tahiti or Kepler fare in these tests?

      • Jigar
      • 4 years ago

      Here's the link for the HD 7970 (non-GHz version) – [url<]http://anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/2[/url<]

      The GHz edition should get at least 50 FPS at full HD resolution – 9 FPS shy of the GTX 970.

    • Jigar
    • 4 years ago

    I see there is some confusion: TR is using an Asus Strix GTX 980 Ti with the base clock at 1216 MHz and the RAM clock at 8000 MHz (that card is factory overclocked).

    If you want to see a comparison of both cards at stock clocks, you should see http://www.overclock3d.net/articles/gpu_displays/fable_legends_dx12_benchmark_-_amd_vs_nvidia/1

    or maybe – [url<]http://www.extremetech.com/gaming/214834-fable-legends-amd-and-nvidia-go-head-to-head-in-latest-directx-12-benchmark[/url<]

      • Meadows
      • 4 years ago

      There’s no confusion, Mr Wasson literally pointed out that fact at least twice in the review. Which brings me to another point, namely that people hardly ever buy “stock clockspeed” cards anymore, so your complaint is not relevant.

        • Poindexter
        • 4 years ago

        It's very confusing in many ways, even though it was pointed out:
        1) If it wasn't enough for us to keep track of the default specs of almost a dozen cards, now we must also put those overclocked cards in perspective too;
        2) The charts don't point this out, and they are pasted/linked to in other forums as well;
        3) It makes power consumption discussions even more confusing, because most partners don't publish their cards' TDPs, so people end up using the stock card's TDP for those too.

        Factory-overclocked cards should be relegated to shoot-out style articles; otherwise they add a lot of noise and echo.

          • Meadows
          • 4 years ago

          I don’t keep track of default specs for so many cards. Why do you?

          If an upgrade is imminent for me, then I just look up some relevant benchmarks and do some basic maths if the clock speed of my planned purchase differs from the parts reviewed.

      • K-L-Waster
      • 4 years ago

      I see you failed to note that the AMD cards are *also* factory overclocked.

      Or is it OK to OC the red team but not the green team? (I’m still trying to figure out what constitutes “fair” in AMD land….)

        • Meadows
        • 4 years ago

        [i<]"Reviews need to be fair."[/i<]

          • K-L-Waster
          • 4 years ago

          … especially when we can define fair on an as-needed basis.

            • Mr Bill
            • 4 years ago

            “Fair and Balanced”
            [url<]http://static.deathandtaxesmag.com/uploads/2011/06/Stewart.png[/url<]

        • chuckula
        • 4 years ago

        [quote<]Or is it OK to OC the red team but not the green team?[/quote<]
        Yeah pretty much.

        [quote<](I'm still trying to figure out what constitutes "fair" in AMD land....)[/quote<]
        To appease the AMD fanbase, a TR review should consist of a verbatim copy of marketing material directly from AMD's PR-squad followed by a lengthy conspiracy theory rant about how AMD's own marketing material is biased against AMD's own products and that they are really ten times better than what AMD advertises.

        • Rza79
        • 4 years ago

        As far as I can see, the Radeon R9 Fury X used in this review is a stock card. It gets 30fps at 4K just like in Anand’s review. The stock GTX 980 Ti in Anand’s review gets 32fps (compared to 37fps in TR’s review).

        In my opinion, reviews like this one should be done with stock cards.
        But it’s true that the GTX 980 Ti overclocks much better than the Radeon R9 Fury X. Still … choosing the highest overclocked one (Asus) … don’t know …

          • K-L-Waster
          • 4 years ago

          From the first page of the article:

          [quote<]Similarly, the Radeon R9 Fury and 390X cards are also Asus Strix cards with tweaked clock frequencies. We prefer to test with consumer products when possible rather than reference parts, since those are what folks are more likely to buy and use.[/quote<]
          Plus there's a table with the clock frequencies for good measure.

          AFAIK, the Fury X doesn't have any custom coolers or factory OC models available, so option not available....

      • maxxcool
      • 4 years ago

      Both cards are overclocked as pointed out ..

        • Jigar
        • 4 years ago

        Yeah, I keep forgetting how well the Fury X overclocks and how good the FPS gains are once you OC a Fury X.

    • USAFTW
    • 4 years ago

    Very nice results for both teams! I love how even 980 Ti and Fury X are well catered for with only two threads to issue draw calls.
    Oh, sorry. A Mr Evilhuang/Nvidiots/Chizow/etc from wccftech, you were saying?

    • UberGerbil
    • 4 years ago

    So if this game is doing its own memory management in DX12, that suggests the DX12 codepath is radically different from the DX11 codepath. It will be interesting to see comparisons of the two (hard to tease out with the other variables, of course).

    The results from the 2GB cards suggests a follow-up comparison that holds the GPU constant and varies only the available graphics memory, at various image quality settings. Of course that may be quickly moot if results like this quickly drive sub-4GB cards out of the enthusiast market, but a low-end comparison (2C/4T CPUs, iGPUs and sub-$200 dGPUs) would still be interesting since DX12 may make those perfectly viable gaming configurations (with IQ turned down, of course).

    Related to this (and I'm far from the first to suggest this): game performance may be about to get a lot more variable, especially as you vary the amount of graphics memory, because memory management is the kind of thing that's easy to get wrong, especially in the odd corner cases. It may be less of an issue if most of that is deferred to game engines like Unreal, but with great(er) power comes great(er) responsibility, and not all game devs will be up to the challenge (at least initially).
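
    To make the "easy to get wrong" point concrete, here's a deliberately simplified toy sketch of the kind of residency bookkeeping a DX12 title now owns; this is not Unreal's or any real engine's code, and the texture names and sizes are invented. It tracks a VRAM budget with least-recently-used eviction, and the last call shows the sort of corner case that bites: an asset bigger than the whole budget.

        from collections import OrderedDict

        class TextureBudget:
            """Toy VRAM budget: keeps textures resident, evicts least-recently-used ones."""
            def __init__(self, budget_mb):
                self.budget_mb = budget_mb
                self.resident = OrderedDict()   # name -> size in MB, oldest first

            def request(self, name, size_mb):
                if name in self.resident:
                    self.resident.move_to_end(name)   # mark as recently used
                    return True
                # Evict least-recently-used textures until the new one fits.
                while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
                    self.resident.popitem(last=False)
                if size_mb > self.budget_mb:
                    return False   # corner case: a single asset larger than the entire budget
                self.resident[name] = size_mb
                return True

        budget = TextureBudget(budget_mb=2048)                 # e.g. a 2GB card
        budget.request("terrain_atlas", 900)
        budget.request("character_pack", 700)
        print(budget.request("cinematic_megatexture", 2300))   # False: can never be made resident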

    • jokinin
    • 4 years ago

    It’s weird how the R9 390 consistently beats the GTX970 but the Fury X can’t reach the GTX980Ti.
    I guess if you want to play at 1080p, the R9 390 is the way to go.

      • Freon
      • 4 years ago

      Looking at the 390/390X/Fury/FuryX it seems we could draw the conclusion that ROP*clock is far more important than shader grunt and memory bandwidth for this particular benchmark.

      Cross comparing to the NV lineup is harder, but that conclusion doesn’t seem to be nullified.
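
      As a rough sanity check on that idea, here's a small sketch that multiplies ROP count by clock speed to get a theoretical pixel fill rate. The ROP counts and clocks below are my own approximations of the reference specs, not figures from the article; the point is simply that the product barely moves across the 390/390X/Fury/Fury X even though Fiji has far more shaders.

          # Approximate reference specs (ROPs, clock in MHz) - my numbers, not TR's.
          cards = {
              "R9 390":    (64, 1000),
              "R9 390X":   (64, 1050),
              "R9 Fury":   (64, 1000),
              "R9 Fury X": (64, 1050),
          }

          for name, (rops, clock_mhz) in cards.items():
              gpix_per_s = rops * clock_mhz / 1000.0   # theoretical pixel fill rate, Gpixels/s
              print(f"{name:10s} {gpix_per_s:5.1f} Gpix/s")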

        • auxy
        • 4 years ago

        3dfx was right all along! Clock x ROPs. (*’▽’)

        Geez, it’s almost like…

        … it’s almost like [url=https://en.wikipedia.org/wiki/Amdahl%27s_law<]Amdahl's law[/url<] applies even to embarrassingly parallel workloads! ( *´艸`)

          • auxy
          • 4 years ago

          [s<]Yes, downvote me! It will make the truth less truthy! (/・ω・)/[/s<]

            • Klimax
            • 4 years ago

            Welcome to the club…

      • Bensam123
      • 4 years ago

        Yup, considering a 390 is pretty much a 290X with higher clocks, the 290/290X has been the way to go for quite some time, although it's not as good a deal as it used to be.

        • JustAnEngineer
        • 4 years ago

        [quote=”In a forum thread, I recently”<] The best deal going on a new graphics card is a Radeon R9-290 4 GiB for $250 -20MIR. Even two years after its release, the Hawaii GPU has still got legs. [/quote<] What would one of TR's famous value scatterplots look like using this DirectX 12 benchmark?

          • Bensam123
          • 4 years ago

          I don’t know, still waiting for them to do this on the CPUs as well. Although I assume one game isn’t enough data yet.

          Sorta interesting how TR never uses a market median or average for hardware prices after the initial release and they’re usually stuck at MSRP.

        • Mr Bill
        • 4 years ago

        and 8GB of GDDR5 in the 390 and 390X

      • Pholostan
      • 4 years ago

      Yes, the 390/390X looks like a real good performer for the dollar.

      • Flapdrol
      • 4 years ago

      The 390 is a bigger chip, uses a lot more power, and has almost twice the memory bandwidth compared to the 970.

      The Fury X is a similar size, uses only a little more power, and has "only" 50% more bandwidth compared to the 980 Ti.

      So it's not really a surprise that the 390 does better against the 970 than the Fury X does against the 980 Ti.

    • Kretschmer
    • 4 years ago

    It’s too bad that the 290/290X weren’t kept around at ~$225 and $250 price points. They would utterly demolish anything else on that graph for price vs. performance.

      • BestJinjo
      • 4 years ago

      You can still find after-market R9 290 cards for $230-260 on Newegg. R9 390 has gone as low as $270-280. While the 980Ti beats the Fury X/Nano, R9 390/390X are clearly becoming a better price/performance value than 970/980 cards, not even mentioning the remaining R9 290 cards on Newegg that utterly demolish a $200 R9 380 4GB and $200 GTX960 4GB.

        • Kretschmer
        • 4 years ago

        I already bought an R9 290X; I’m all set!

      • Bensam123
      • 4 years ago

      Weird I said the same things back when they were around those price points and people ridiculed me for it. 😀

    • tipoo
    • 4 years ago

    Nice article. Looks like i3s are going to only get *more* feasible for gaming rigs under DX12. There’s still the odd title that suffers without quads though, but most console ports at least should do fine.

      • Demetri
      • 4 years ago

      Now if only Intel could give us an i3 that’s unlocked, then we could have some fun.

    • maxxcool
    • 4 years ago

    OMG SAY IT Isn’t TRUE THAT THE RED JESUS HBM CARD LOSES TO A CARD WITH SLOW OLD TRADITIONAL MEMORY BUSSES ?

    but seriously .. we need at least 5 mainstream games that have 30 million players to run a real comparison with before any real conclusion can be drawn.

    • Laykun
    • 4 years ago

    Look, the internet clearly told me that DX12 only runs well on AMD cards not 2 weeks ago, so how about you guys stop doing unfair biased tests and show me the AMD results I want to see, not the ones I need to see.

    When you re-run the test properly I expect to see the Fury X smashing the 980 Ti, OK?!

      • auxy
      • 4 years ago

      All kidding aside, this test does look pretty darn good for AMD with respect to 380/390 series. It’s just the Fury that underperforms, but … it probably always will. It’s such an imbalanced part you’d damn near have to write a whole new game for it. (´Д⊂ヽ

    • Ninjitsu
    • 4 years ago

    This will be another 200+ comment article, won’t it? 😀

    As far as I see it…

    Wins for Nvidia:
    – The 980 Ti looks [i<]even better[/i<] than it did under DX11. Still the fastest card; the Fury X is still pointless.
    – They're beating, matching, or sitting slightly behind AMD without async compute or HBM.
    – Driver performance scaling.

    Problems for Nvidia:
    – None, really, except maybe pricing for the 980 and 970.
    – Probably should work on getting async compute working, but I don't exactly see why they should hurry.

    Wins for AMD:
    – Ignoring the 980 Ti, better frame times and/or frame rates.
    – Solid performance from the 390X relative to the 980, and from the 390 with respect to both the 980 and 970.
    – More efficient driver with limited threads. Odd lack of scaling at 4K.

    Problems for AMD:
    – The Nano puts the X to shame and cements its place as the 980 Ti of the mITX world. Unless you can get a case that [url=https://www.dan-cases.com/dana4.php<]supports a regular 980 Ti[/url<], of course.
    – The 390X puts all Furies to shame on performance and price, at the cost of power.
    – HBM may have been a waste on Fiji. Could have just used GDDR5 and cut costs.
    – They need a more balanced architecture.

    Wins for me:
    – Told you that a 970/390/390X/980 is the minimum required for 1080p!
    – And 4K with today's cards is a bad idea, DX12 notwithstanding. Wait for next gen.

    Thanks to Scott for testing CPU scaling and detailed 1080p results!

    EDIT: Why the downvotes – it DID cross 200 comments! :p

      • f0d
      • 4 years ago

      the way i see it they are all close enough to call it a tie

      none of them can do 4k well enough
      they can all do 1080p pretty well (970 and up)

      how much better one card is over another is reflected by its price

      while the 980ti is faster than the furyx in average fps – if you look at the frame times the furyx isn't too far off for it to matter

      imo from 970 and up it's all equal because when you spend slightly more you get slightly faster

      edit: in no way do i disagree with anything you have said ninjitsu, i just think the differences are tiny and both amd and nvidia are good choices 🙂

      • _ppi
      • 4 years ago

      With respect to the 390X vs 980 and the 285 vs 960, I find it really interesting that the 99th percentile is better for AMD than the FPS average. In DX11 tests it was usually (or always?) the other way around.

      I was considering buying a 960 … hmm

        • atari030
        • 4 years ago

        I just bought an R9 380 4GB (faster than the 285)…this is making me even more comfortable with that decision.

    • hellowalkman
    • 4 years ago

    The Fury X seems to be doing a lot better against the 980 Ti in other reviewers' benchmarks I have seen.

    I also wonder why it does not seem to be scaling over 4 CPU threads.

      • Damage
      • 4 years ago

      If other folks tested a 980 Ti reference card rather than a consumer product, then their 980 Ti scores will be slower than ours. We used an Asus Strix, which is quite a bit faster.

        • f0d
        • 4 years ago

        can you even buy a 980ti reference?
        (/me goes over to newegg to check)

        edit: yes you can [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814500376&cm_re=980ti-_-14-500-376-_-Product[/url<]

          • Damage
          • 4 years ago

          I think maybe EVGA sells one.

        • hellowalkman
        • 4 years ago

        yes that is correct !

    • Krogoth
    • 4 years ago

    CPU benches are far more interesting than the GPU stuff.

    It looks like four or more threads will start to become useful in gaming. The days of dual-core chips are really numbered now.

      • drfish
      • 4 years ago

      Agreed! It's a shame that PCPer didn't do any CPU testing; I would have liked to see their results as well. [url=http://images.anandtech.com/doci/9659/1%20-%20FRP%204K%20612.png<]These[/url<] [url=http://images.anandtech.com/doci/9659/2%20-%20FRP%204K%2044.png<]two[/url<] graphs at Anandtech were particularly intriguing to me (open a tab for each and toggle between them).

      • EndlessWaves
      • 4 years ago

      Really? I’m seeing no benefit from more than two cores on a Fury X at settings you’d play the game on, very minor benefits (~10%) between two and six cores on a 980ti and a drop in performance when SMT is enabled.

      Obviously Intel only offers dual cores with low clockspeeds and low cache amounts, both of which will potentially have an additional impact on performance.

      It would be nice to see a Pentium vs. i3 vs. i3 minus HT vs. 2C/2T 5960X vs. 2C/4T 5960X test to see which factors do make the biggest difference.

        • derFunkenstein
        • 4 years ago

        At some point the GPU is 100% taxed. What’s interesting is that something prevents many-core and many-thread CPUs from achieving 100% utilization, apparently.

        • Peldor
        • 4 years ago

        I don’t see what CPU frequency these tests were run at but the 5960X normally runs at 3.0 GHz with up to a 3.5 GHz turbo. Maybe Scott can clarify.

        The i3 line goes all the way to 3.9 GHz now. Even the Pentiums reach up to 3.6 GHz.

          • EndlessWaves
          • 4 years ago

          Intel's fastest dual core is 4.4GHz. It's called the i7-4790K.

          Dedicated dual-core chips should be able to exceed that. Intel could easily sell a 4.5GHz i3, but they choose to keep clock speeds low and overclocking turned off except when the chip is restricted in other ways (G3258).

          It's no surprise they want people to move to more expensive products, but it is something to bear in mind that half the benefit of i5s/i7s comes from Intel's product lines rather than the extra engineering.

            • fbertrand27
            • 4 years ago

            Isn’t the 4790 quad core? </pedantic>

            • auxy
            • 4 years ago

            Yes. It’s also a dual-core. </morepedantic>

            • derFunkenstein
            • 4 years ago

            dual-dual-core with dual dueling threads per core.

            (although yes, I realize the point is that it’s the fastest CPU with two-or-more cores by clock speed)

            • EndlessWaves
            • 4 years ago

            It’s the fastest CPU for a two thread workload.

            If you absolutely have to have the maximum performance from something that can’t benefit from more than a dual core your current best choice in the Intel range is the 4790k, hence it’s ‘Intel’s fastest dual core’.

            There are good reasons to spoil the performance of dual cores and keep clock speeds low. It helps AMD compete and it also helps the environment – more cores at a lower clockspeed is a more power efficient way to provide processing ability which is why phones and tablets tend to be quad cores.

            I’m just trying to make the point that I’m skeptical that the orwellian mantra of ‘four cores good, two cores bad’ is the way we should be thinking of these things.

            • auxy
            • 4 years ago

            I like how you got +7 for this and I got -1 even though we were literally saying the same thing.

            • derFunkenstein
            • 4 years ago

            Sorry?

      • tipoo
      • 4 years ago

      I kind of drew the opposite conclusion. While DX12 will allow multicores to be really used well, it still has substantial benefit in overhead reduction to i3s. I think if anything it makes i3s even more feasible for gaming.

      I guess both of these things are true. Low end CPUs get a boost, high end CPUs get used better.

        • UberGerbil
        • 4 years ago

        I agree. I think it will be interesting to see a low-end gaming comparison using lower thread and core count CPUs and integrated graphics (as well as the sub-$200 dGPUs). This may be a new renaissance for budget gaming builds (and “gaming laptops” that aren’t neon-lit behemoths).

          • Ninjitsu
          • 4 years ago

          Indeed, the AT results seem to imply that Core i3s are just as capable; I’d pick an i5 just for balanced performance (DX11 and productivity).

          But yeah, going forward, Core i3s will be even better for budget gaming than they are now.

      • jihadjoe
      • 4 years ago

      6 cores is the peak. 5820k let’s go!

    • f0d
    • 4 years ago

    but but but…..
    all the amd fans say nvidia is horrible at dx12 and you should sell your 980ti to get a fury

    how can this be?!?!?!?

      • Ninjitsu
      • 4 years ago

      ASYNC SHADERS AND ALL THAT

      • chuckula
      • 4 years ago

      This game isn’t REAL DX12 that’s why!

      There’s a very simple formula to figure out if a game is REAL DX12:
      Q: Does AMD win?
      — If yes: REAL DX12. AMD IS BETTER.
      — If no: OMG NGREEDIA EVULZ! AMD IS BETTER.

        • Silus
        • 4 years ago

        LOL, that’s AMD fanboyism to its full extent!

      • Silus
      • 4 years ago

      It's called fanboyism. There's no rational thought. Just buzzwords that their favorite company used, which fans cling to as if their life depended on it to try and justify buying inferior products.

      Remember the "Blast Processing" thing by Sega many years ago, in their marketing campaign against Nintendo? It's the same thing. They had a console inferior in every way to the Super Nintendo, except their chip had a higher frequency than the Super Nintendo's chip, so "BLAST PROCESSING RULEZ".

        • auxy
        • 4 years ago

        Blast Processing wasn’t strictly marketing; the Genesis’ Yamaha VDP (read: graphics chip, primitive as it was) was significantly faster than the SNES’ Ricoh PPU, particularly with regard to DMA. This is what it referred to. Of course, the SNES’ PPU had various other features that the older Yamaha VDP didn’t, but it’s unfair to point out “Blast Processing” as being raw marketing or fanboyism. (*’▽’) It’s a real hardware feature, and it’s just as valid as bringing up Mode 7.

        But really, the Nintendo Super Famicom and SEGA Mega Drive are much more closely matched than you seem to think despite being radically different in design. SEGA’s console excelled at scenes with fast motion because they play to the hardware’s strengths — check out games like Thunder Force IV and of course the Sonic the Hedgehog series. You won’t find anything like that on the SFC because that hardware isn’t great at fast motion, and instead excels in slower titles where it can focus on showing off its much richer color palette and visual special effects, like hardware-based sprite scaling and rotation. Compare the frenetic Gunstar Heroes (SMD) to the much more deliberate (and gorgeous!) Assault Suits Valken (aka Cybernator) (SFC). Both AMAZING, excellent games, and both representative of their respective hardware. There is a reason most of the best shooter games of the generation were on the SEGA console, and most of the best RPGs were on the SFC.

        It's very popular to say things like "ah, the Genesis' music sucked, it was really a generation behind", but they were closely matched here too; they simply took radically different approaches. The SFC used a primitive relative of wavetable MIDI for its music generation, which sounded great on titles with more orchestral scores (Final Fantasy titles, Zelda), but really falls down with rock, electronic, or industrial styles. Meanwhile, the Mega Drive's complex Yamaha FM synthesis + Texas Instruments programmable sound generator were much harder to work with (and easier to make sound awful), but by nature were much more versatile and could produce [url=https://www.youtube.com/watch?v=qTiEESxMD30<]some[/url<] [url=https://www.youtube.com/watch?v=yCPgPENPT7w&t=17m18s<]truly[/url<] [url=https://www.youtube.com/watch?v=bIt1mZDOR-E<]epic[/url<] [url=https://www.youtube.com/watch?v=MaOlDwRyjjI<]and[/url<] [url=https://www.youtube.com/watch?v=oTQIiIKummw<]brutal[/url<] [url=https://www.youtube.com/watch?v=kRrf-WxRsgs&t=8m16s<]music[/url<], certainly at least as good as anything on the SFC.

        Along the same lines, it's also popular to do things like compare the SFC's amazing SuperFX2 port of Doom to the slapdash version on the SEGA 32X, but this comparison is like comparing an expensive meal at a fancy restaurant to the same dollar value of ramen noodles; the latter accomplishes its job in the simplest way possible, while the former was crafted with loving care, with music tracks rejiggered to better suit the SFC sound hardware. The 32X version actually plays the stock MUS lumps from the Doom wad file just like you'd hear on an AdLib-compatible soundcard!

        The point, really, is that making games back then was more of a technical challenge than it is these days, and even though it seems paradoxical, it was truly both more and less about what your hardware could do -- more, because you had to work within the limitations of your hardware, but BECAUSE of that restriction, ultimately that meant the skill of the creators was just that much more important. A well-crafted game would seem like it was on an entirely different machine than a poorly-made one, and as a fan of retro gaming and retro games' hardware, it's really frustrating to me when people perpetuate stereotypes like this.

        Don't take it personally! Just wanted to drop a knowledge bomb on you. (*‘ω‘ *)

          • sweatshopking
          • 4 years ago

          Tldnr. Plus, post seems to be too nerdy to exist.

            • auxy
            • 4 years ago

            [url=http://i.imgur.com/Nwx4wxK.jpg<]No, this is too nerdy to exist.[/url<] ( *´艸`)

          • auxy
          • 4 years ago

          Oh, another great set of links to demonstrate the difference:
          [url=https://www.youtube.com/watch?v=0fs05RepMiY<]Thunder Force 3 vs. Thunder Spirits / The Genesis vs SNES Soundtrack Comparison[/url<]
          The developers did a decent job porting the music over from the radically different Megadrive audio hardware, but the original compositions still sound AWFUL side-by-side.

          Demonstrating my point about the skill of the creators, though, there's this comparison:
          [url=https://www.youtube.com/watch?v=Di1iKxgMVtk<]Rock and Roll Racing Nintendo and Sega comparison[/url<]
          Reproducing the real-life classic rock tracks is a tall order for the FM-synthesis-based YM2612, but Tim Follin worked his magic well on the Nintendo version and it sounds fantastic. I'm sure you could do better than the people who ported Rock & Roll Racing actually did with the YM2612, but it would take some real artisanship.

          I actually feel pretty strongly about the idea that these synthesizer chips like the Yamaha in the Mega Drive are essentially instruments unto themselves and learning to "play" (read: program) them is a real artform. (*‘∀‘) I'm not the only one who feels this way either; [url=https://www.youtube.com/watch?v=8uEUiOzE294<]check out this video where a guy modded a better Yamaha synthesizer chip into a Genesis console.[/url<] The quality improvement is stark!

          Anyway, I'll quit posting this nerdy filth. I'm just really excited about the topic. ('ω')

        • DoomGuy64
        • 4 years ago

        Sorry, all I’m seeing is Ferguson level rioting being thrown by nvidia fans. Sure the TI comes on top, but for $650 it better. The real story here is how well Hawaii is doing, and the 970 is no longer the card you want to be getting for future games and dx12. I mean you could, but you’d obviously be screwing yourself.

          • derFunkenstein
          • 4 years ago

          Is it “how well Hawaii does” or is it “how poorly Fiji does”? Yes, the two are close together, but man. For $650 and a “better” architecture for “future” games, one would think Fiji would be the clear, overall winner here. The 980Ti is nearly 25% faster at 4K.

            • DoomGuy64
            • 4 years ago

            Because the Ti has more ROPs. Fiji is clearly ROP limited, and does not deserve to be in the same price range as the Ti. Hawaii on the other hand, is doing really well against the 980/970. Nvidia wins at the high end, loses at the mid-range. It’s not a complete win for either side.

          • chuckula
          • 4 years ago

          [quote<]Sure the TI comes on top, but for $650 it better. [/quote<]
          Sure the $650 Furry X loses to the equivalently-priced, earlier launched, and more energy efficient GTX-980Ti... even in DX12. At $650, it's a failure.

          Sure the $650 Furry Nano isn't effectively any better than the cheaper GTX-980. At $650, it's an insult.

            • DoomGuy64
            • 4 years ago

            Any video card selling for $650+ is an insult, and the Nano is a niche M-ITX product that doesn’t directly compare to full size gpu’s.

            • Klimax
            • 4 years ago

            R&D and drivers have to be funded by something, and those foundries would like to get something too…

        • Krogoth
        • 4 years ago

        Kettle calling pot black……

      • auxy
      • 4 years ago

      Where are these strawman AMD fans? ( ゚Д゚)ノシ

      Oh, they don’t exist. Because they’re just strawmen. Hehe. ( *´艸`)

      • JumpingJack
      • 4 years ago

      Ohhh, yeah — remember AMD has HBM and exclusivity for HBM2 and async shaders…. ooohhh, and don’t forget…. AMD invented DX12.

    • terminalrecluse
    • 4 years ago

    DX12 looks to bring new life to radeons.

    Also the game looks incredible. I’m really excited for Dx12 games.

      • Klimax
      • 4 years ago

      Meh. Easily gained, easily lost. I doubt it will last like previous instances.

    • anubis44
    • 4 years ago

    A fine article, Scott. Thanks for all this work. Makes me want to subscribe. 🙂

    I would like to know what’s going on with the Radeons not getting any additional speed with 6 cores enabled over 4. They’re supposed to have superior compute, and we’ve been told that DX12 is supposed to remove the driver as a bottleneck between the CPU and the GPU cores. It somehow seems like there’s still a coding issue with this, as, even if the Radeons were to be slower than the GeForces, they should still see an increase with the same number of CPU cores enabled.

      • ish718
      • 4 years ago

      Not even DX12 will stop AMD drivers from sucking…

      • UberGerbil
      • 4 years ago

      There could be some kind of serial code or data bottleneck anywhere in the software stack (DX12 itself, the drivers, or Unreal Engine) that shows up at thread counts above 6. It’s early days yet for all the pieces involved, and that sort of issue is not unusual with multithreaded code — something you don’t think will be a problem turns out to be when all the other resource constraints are removed. It’s also at the high end of hardware, which represents a very small portion of the bell curve; you’d expect most of the testing (and even the design) to target the four-core “sweet spot” that represents the vast middle of the curve. It’s the kind of thing that gets tweaked late, or even not until the next version.

      • pranav0091
      • 4 years ago

      Well, the gpu still needs [i<]drivers[/i<] with DX12, doesn't it ? 😉

      • TheRealSintel
      • 4 years ago

      Over at AT they looked into it a bit more, it seems that the scheduling mechanism in use is overwhelmed by the many requests with multiple cores (resource contention). Switching contexts/threads/.. still carries overhead.

    • daviejambo
    • 4 years ago

    so glad I bought a 290x over a 970

      • anubis44
      • 4 years ago

      Or even my Gigabyte R9 290 4GB that I paid $259 Canadian for two Christmases ago, and bios flashed to 1040MHz.

        • BestJinjo
        • 4 years ago

        Wow, that’s a smoking deal considering the time frame when you got it and the depreciation of the Canadian dollar which means today a used R9 290 goes for that much or more.

      • tanker27
      • 4 years ago

      I assume the weird memory set up is what’s killing the 970.

        • Krogoth
        • 4 years ago

        Yep, the micro-stuttering issues when the card is forced to use the last 512MiB pool are what kill the 970's frame times at 4K.

        • Damage
        • 4 years ago

        You might perhaps be looking at the 960 plots?

          • tanker27
          • 4 years ago

          No I was looking right. “killing” may have been an overstatement by me. Looking at it again I concede that it’s very reasonable 970 < 980 < 980 TI.

          But HEY, I am a 970 owner! 🙂

    • sweatshopking
    • 4 years ago

    how much ram does the 970 have?
    I expected some replies, but this worked better than I had imagined. You guys are so easy.

      • ClickClick5
      • 4 years ago

      Oh you know, a weird amount.

      • morphine
      • 4 years ago

      4GB if you’re a sane person, 3.5GB if you love getting into uninformed internet arguments 😉

        • Krogoth
        • 4 years ago

          The 970's memory is partitioned into two pools of 3.5GiB and 512MiB.

          The 970 does have 4GiB of memory in total, but the GPU logic cannot access all of it without a performance penalty (micro-stuttering).

          • morphine
          • 4 years ago

          YOU DON'T SAY! </meme>

          … and yet the card runs just fine, and always has.

            • Krogoth
            • 4 years ago

            Not exactly; it has known stuttering issues when it's stressed and forced to use the 512MiB pool. Granted, under such conditions the 970 begins to be limited by its shading and texture power. It is a minor problem in the grand scheme of things.

            What ticked people off is that Nvidia's marketing didn't get the memo from the engineering team that disclosed the odd memory design of the 970 and what it could do. People who got a hold of a 970 discovered it through stress testing. Nvidia did an official addendum afterwards.

            • K-L-Waster
            • 4 years ago

            TL:DR — the 970 is fine for 1080 and 1440, but a bit overmatched for 4K.

            • f0d
            • 4 years ago

            frame times on the 970 seem ok to me at 4k (no massive spikes)
            fps is a little low but pretty much all of them are a little low

            970 seems just as playable as any other card (as in not playable because no single card can really do 4k properly yet, yes furyx and 980ti are better but far from playable imo)

            • Krogoth
            • 4 years ago

            Compare it to the 980 (full GM204) and it is quite clear that the 970 suffers from intermittent spiking, and it's not just from having less power at its disposal. The 290X/390X don't suffer from the same spikes.

            • f0d
            • 4 years ago

            i can't see any nasty spikes in the graph
            the 960 is clearly spiking badly but the 970 isn't

            also the 970 is missing a higher percentage of resources from a 980 than a 290 is from a 290x (2048/1664 vs 2816/2560), so there will be a bigger difference between them
            on the chart the 980 sits at a little over 40ms and the 970 at around 50-55ms most of the time; the 980 spikes to a little over 60ms and the 970 to around 80ms at the same spot (around 2700 frames into the bench)

            to me it's clearly because of fewer resources (almost 1/4 less)

            where are these crazy spikes?
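
            Putting quick numbers on the shader-count comparison above (using only the figures f0d quotes):

                # Enabled shader counts quoted above: GTX 970 vs 980, R9 290 vs 290X.
                pairs = {"GTX 970 vs 980": (1664, 2048), "R9 290 vs 290X": (2560, 2816)}
                for name, (cut, full) in pairs.items():
                    print(f"{name}: {cut / full:.0%} of the full chip's shaders enabled")
                # -> the 970 keeps ~81% of a 980's shaders; the 290 keeps ~91% of a 290X's.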

            • derFunkenstein
            • 4 years ago

            Think you’re looking at the 960.

            • Krogoth
            • 4 years ago

            No, it is the "970".

            Unless you have some form of color blindness. There are spikes going on with the 970 that aren't found in the 980. They aren't as dramatic as the 960's, but they are there.

            The 970 and 980 both follow a similar curve, but pay extra attention to the 970. Notice how there appear to be 5-15ms spikes going on every minute or so that are completely absent from the 980? The 390X doesn't suffer from them either at the same points, although it does come across a major spike in one portion of the 4K test. The 970 is "stuttering" when it is trying to use the last 512MiB and the core logic has to take the long way to access it. This inflicts a hit on latency/frame timing. It does appear to happen in the 4K benches, but it doesn't happen that often, since the 970's drivers try to avoid using that last 512MiB as much as possible.

            970 users who were stress testing their units beyond the 3.5GiB range have noticed the same stuttering/spiking during benching and real-time gaming.

            The only people who are denying this at this point are die-hard Nvidia fanboys or ignorant 970 owners (who never stress their cards) suffering from buyer's remorse.

            • derFunkenstein
            • 4 years ago

            No. Stop it. Just stop it. Stop the silly excuses and look at the actual results, rather than the ones you dreamed up. I zoomed wayyyy in on the 4K, 960/970 graph. Here it is:

            [url<]http://imgur.com/K8ZayLU[/url<]

            There are exactly 3 spikes, one of which just barely eclipses 80ms. And the exact same spot on the 980 graph shows the 980 spikes, albeit slightly lower, into the mid-60-ms range. Is it higher than the 980? Sure. Is it loads of extra spikes? By all means, no.

            • Krogoth
            • 4 years ago

            It is not silly excuses. It is cold hard facts that fanboys like to dismiss. The 970 has a minor problem that will never go away. It will become more apparent as content that requires more than 3.5GiB of VRAM becomes more commonplace.

            • Meadows
            • 4 years ago

            Read the review instead of arguing with TR contributors.

            • Krogoth
            • 4 years ago

            I have read the article and looked at the data carefully.

            The problem is there if you look carefully and don't just skim through it. It doesn't help that the 960's dramatic spiking on the chart masks the 970's data to the casual observer.

            • Voldenuit
            • 4 years ago

            Moderately priced card is faster than cheaper card and slower than more expensive card.

            Details at 11.

            • derFunkenstein
            • 4 years ago

            No, THIS is cold, hard facts.

            [url<]https://techreport.com/r.x/fable-legends-dx12/fable4k-970-only.gif[/url<]

            • Ninjitsu
            • 4 years ago

            SEE 5 GIANT SPIKES! ASYCN PUTER MEMRY

            • derFunkenstein
            • 4 years ago

            Heh, yeah, look at ’em go. Also let’s not forget that 22 frames per second on any card is basically unplayable.

            • Krogoth
            • 4 years ago

            You are only half-way there.

            Try putting in the 980’s data with 970 and come back again.

            • derFunkenstein
            • 4 years ago

            Your original hypothesis:

            [quote<]There are spikes going on with the 970 that aren't found in the 980. They aren't as dramatic as the 960's, but they are there.[/quote<]

            [url<]https://techreport.com/r.x/fable-legends-dx12/fable4k-gtx-980.gif[/url<]
            [url<]https://techreport.com/r.x/fable-legends-dx12/fable4k-970-only.gif[/url<]

            I don't believe your request is necessary for those who can read. The exact same "spikes" show up on the 980. At around 2,000 frames, again around 2,600 frames, 3,100 and 3,400. Those are all in the 980, too. And they're not enormous spikes in the first place. Instead of 40ms it's 75ms. Two dropped frames at 60Hz, but when you're starting at 25fps in the first place, who really cares?

            Contrast with the 960:
            [url<]https://techreport.com/r.x/fable-legends-dx12/fable4k-gtx-950.gif[/url<]

            There are spikes that aren't present with the 980...in the 960's results. In one place toward the end, instead of latencies around 60ms, there are multiple hitches pushing up around a full quarter of a second.

            Now go on - it's your turn. Say something inane and incomprehensible.

            • sweatshopking
            • 4 years ago

            [quote<] Now go on - it's your turn. Say something inane and incomprehensible. [/quote<] QUIT TRYING TO GIVE AWAY MY JOB.

            • derFunkenstein
            • 4 years ago

            Sorry but dang did you see that reply? Classic. Way better than anything you’d say.

            • Krogoth
            • 4 years ago

            Close but no cigar. You didn’t put the data of 980 and 970 together under one chart.

            The spikes are small, but they are there at around 1,200 frames, 3,200 frames, and 3,400 frames. It is not a big deal in the grand scheme of things. However, it does show that the unusual memory setup of the 970 does pose some problems under extreme conditions. It will ultimately affect the card's longevity.

            • Meadows
            • 4 years ago

            If it’s not a big deal, then what are you arguing for?

            Here, I’ll help you out because you haven’t seen your optometrist in years:
            [url<]http://puu.sh/kosPg/d90c78696e.png[/url<]

            Observe the two graphs overlaid on top of one another. The same bloody thing. "Where the spikes are" is not as important as "How many spikes are there", so in the end, the cards are literally equal. In fact, if you tried hard enough (which you are bound to do), you could actually argue that the 980 is the worse one because it has "more small spikes" or something.

            The memory arrangement doesn't matter. Not here, not in the future. [b<][i<]It wouldn't cause spikes anyway:[/i<][/b<] rather, the memory arrangement would [b<][i<]raise average frametimes altogether[/i<][/b<] if/when it became full.

            • Ninjitsu
            • 4 years ago

            Or you could appreciate the fact that someone’s putting up with your nonsense and:

            1) Open those two images in two tabs
            2) Press and hold Ctrl + TAB.

            • Freon
            • 4 years ago

            [url<]https://dl.dropboxusercontent.com/u/27624835/shack/KrogothIsMentallyIll.png[/url<]

            What I see simply does not comport with your words. The few real spikes seem to translate to the 980 and 980 Ti.

            • Meadows
            • 4 years ago

            The 970 literally performed the same as the 980, discounting the fact its average frame times were slightly higher (the graph looks the same, but it’s sitting higher).

            I’m not sure how you saw spikes going on “a minute or so” when the benchmark didn’t even measure time.

            None of the cards have serious spikes at 4K except for the R9 285 and the GTX 960.

            • Krogoth
            • 4 years ago

            Wrong. The 970 has small 5-15ms spikes going on every minute or so, in addition to being ~15%-20% behind the 980 on frame times in the 4K bench. You see a similar gap in the other tests, but there the 5-15ms spiking on the 970 is missing. I wonder why?

            The FPS averages mask the 970's minor problem by making it look like it is almost a 980. It actually demonstrates perfectly why the FPS average metric doesn't tell the whole story.

            • Meadows
            • 4 years ago

            There are no minutes on these charts.

            I see the part of the charts you’re trying to dress up as a problem, but the small spikes happen with the same frequency and magnitude on the GTX 980. (Not exactly on the same frames, but equally regularly.)

            • llisandro
            • 4 years ago

            Ok, I reread all of your posts and you’ve changed my mind. Your VRAM analysis has convinced me not to buy a 970 to play at a resolution that even the 980 can’t really handle.

            • Meadows
            • 4 years ago

            Careful, he won’t pick up on sarcasm and will respond in earnest.

            • llisandro
            • 4 years ago

            He forgot that micro-stuttering is only something you worry about when you don’t have [i<]macrostuttering[/i<].

            • anotherengineer
            • 4 years ago

            Wait what, it runs?!?! I never knew the 970 had legs?!?!?

            Or did you mean operate?? 😉

            • morphine
            • 4 years ago

            Great, I have another editor. As if the two I already have weren’t enough.

            • sweatshopking
            • 4 years ago

            I CAN EDITOR TOO

            • anotherengineer
            • 4 years ago

            😀

            Well TR missed talk like a pirate day, so I unofficially made Friday – Edit the Forum Admin Articles day 😉

          • _ppi
          • 4 years ago

          For the driver and the DX12 dev, it is a 3.5GB card.

          Nvidia has very good drivers that were able to avoid any issues with this … unusual … memory config in DX11 games. It remains to be seen whether DX12 devs will take the same care.

            • Meadows
            • 4 years ago

            Technically, it should be possible for developers to artificially limit their games to 3.5 GiB of videomemory in DirectX 12 when the game detects a GTX 970, but I’d imagine they have no incentive to do so (yet).

            Then again, it still doesn’t matter because the last 512 MiB is still much much faster than hitting the system bus and going to system RAM for more.

          • geekl33tgamer
          • 4 years ago

          I’ve yet to see it be a problem……………in ANYTHING. Just saying.

            • Krogoth
            • 4 years ago

            It is a problem under extreme conditions and stress testing. It is jarring for those who want as smooth an experience as possible.

            There are people who actually noticed it in the first place.

            • geekl33tgamer
            • 4 years ago

            I think Nvidia’s locked it off – 3505MB is the most VRAM I’ve ever seen mine use (and I game at 4K)?

            I’m sure this leads to more questions if they actually have totally disabled it, but that’s what mine do these days.

            • Krogoth
            • 4 years ago

            Nvidia did put 3.5GiB limiters in their driver profiles for the most common titles out there. That was another clue that led to the discovery of the 970's little problem.

      • K-L-Waster
      • 4 years ago

      That talking point never gets old with you, does it?

      • Chrispy_
      • 4 years ago

      Why does it matter to you? You can’t plug a 970 into your Nokia Windows computing device.

      • Ninjitsu
      • 4 years ago

      Unsure whether you’re trolling or not – because now you mention it, I do wonder if it’s making a difference. Performance seems in-line relative to the 980, though, so I don’t think so.

      • llisandro
      • 4 years ago

      enough

      • geekl33tgamer
      • 4 years ago

      Enough to know that’s not the reason why it’s slower than the 980. Go and count the cores again on a 980 – Notice anything? 😉

      • maxxcool
      • 4 years ago

      elventie-sextillion..

    • drfish
    • 4 years ago

    Interesting stuff! With the options on the Gigabyte do you have the ability to run a 4C/8T test?

      • Damage
      • 4 years ago

      Yes.

        • bfar
        • 4 years ago

        Any chance you could add that to the chart? 4C/4T has been the sweet spot for years. This analysis indicates that it’s moved, the question is how far?

          • Ninjitsu
          • 4 years ago

          Actually no- this suggests 4C/4T > 4C/8T, something I’ve noticed a lot lately.

            • drfish
            • 4 years ago

            Exactly what I want to see here. If frame times are the priority and I have “enough” physical cores is there ever a reason to muck it up by allowing the use of additional logical cores?

            • Ninjitsu
            • 4 years ago

            So 4C/8T seems to be an advantage at 720p low settings, but not at the higher resolutions and presets.

            [url<]http://anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/3[/url<]

            • Jason181
            • 4 years ago

            Looks like they actually used a 6C/12T; wish they had benched using a 4C/8T as those are so much more common.

            • Ninjitsu
            • 4 years ago

            Oh? I didn’t read the entire thing so I just went by the chart labels.

      • ClickClick5
      • 4 years ago

      It is as if the law of diminishing returns holds true here. Having one helper in the kitchen is good; having sixteen helpers is going to slow things down because there is too much going on.

      EDIT: Thread scheduling issues? Trying to divide the work up across many threads, causing a bog-down instead of a performance boost?

        • drfish
        • 4 years ago

        Like he says, it's about how well things are scheduled. For very CPU-dependent games where clock speed/IPC is king, there's certainly reason to believe hyper-threading can get in the way of work being done "on time". That makes things like the eDRAM on the 5775C even more important to consider as this whole "next gen" DX12/frame-time-centric world comes into focus and we start planning our next builds…

      • UberGerbil
      • 4 years ago

      It would be interesting to see a 2C/4T test also, for comparison to the 4C/4T and as a representative of what the low end might have (along with integrated graphics, an interesting subject for another day).

      My theory is that you’ll see a drop with HT in general, mostly due to cache contention. Two threads on the same physical CPU effectively halves the available cache (particularly L1) and, depending on the code, that can be a significant hit. We used to see that a lot with early iterations of HT, but Intel did some magic in later chips to reduce its impact; it may be that DX12 has stumbled into it again.

      It’s also possible that something in the stack (DX12, the drivers, or the game engine) isn’t well threaded for thread-counts above 6. In that case threading overhead (Amdahl limits on serial code, or locking contention on shared resources — I wonder if Unreal Engine uses TSX?) could be the culprit, and we may be seeing the evidence for that in the results for 6C/6T vs 8C/8T
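
      Those Amdahl-type limits are easy to eyeball with a few lines. A minimal sketch follows; the 90% parallel fraction is purely an illustrative assumption, not a measured number for this engine:

          def amdahl_speedup(parallel_fraction, n_threads):
              """Ideal speedup when only part of the work parallelizes (Amdahl's law)."""
              serial = 1.0 - parallel_fraction
              return 1.0 / (serial + parallel_fraction / n_threads)

          for n in (2, 4, 6, 8, 12, 16):
              print(f"{n:2d} threads: {amdahl_speedup(0.90, n):.2f}x")
          # Even with 90% of the work parallel, 16 threads only reach ~6.4x,
          # and the curve flattens quickly past 6 threads.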

    • tanker27
    • 4 years ago

    Was it just me or did the video seem like it was stuttering a bit?

    So far i’m not impressed. /shrug

      • Damage
      • 4 years ago

      From the article: ” The video above utterly fails to do it justice, thanks both to YouTube’s compression and a dreaded 30-FPS cap on my video capture tool. The animation looks much smoother coming directly from a decent GPU.”

        • tanker27
        • 4 years ago

        Reading is fundamental! 😉

        • Mr Bill
        • 4 years ago

        Reading 60-90 pages per hour without stuttering with greater than 90% comprehension is a must. Slower reading or microstuttering while reading can significantly affect the page rate, in-the-sentence, and even in-the-word comprehension. Think of YouTube as the cliffnote version.

      • Chrispy_
      • 4 years ago

      There are about four stutters per second, meaning that the source would be in phase if it were captured at a multiple of about 34Hz, most likely from a graphics card outputting at somewhere near 68fps.

      Based on the GTX 980's 69 FPS average at 1080p, I'm guessing Scott captured that on a GTX 980.
      Do I get a cookie, or are the numbers out because of a capture overhead I haven’t accounted for?

        • tanker27
        • 4 years ago

        I give you a thumbs up for just trying. It’s obvious that you don’t have enough work to do today. 😛

          • Chrispy_
          • 4 years ago

          Pffft, “trying” 😉

          1) guess the number of hitches a second. It’s maybe 3 or 4.
          2) Add 3 or 4 to 30Hz to get 34Hz
          3) Look for which GPU is outputting frames near a 34Hz multiple.
          4) Find the GTX 980 as the only candidate
          5) Type two short paragraphs

          I have loads of work to do, I just have a short enough attention span that I spend at least 10 minutes an hour doing non-work stuff. I don’t think I could enjoy my day if I had to concentrate for 2 hours straight like I used to as a schoolkid.
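
          Spelled out as arithmetic, steps 1-4 above amount to the following (a literal transcription of Chrispy_'s estimate, using his eyeballed hitch rate):

              capture_fps = 30           # the capture tool's 30 FPS cap
              hitches_per_sec = 4        # eyeball estimate from watching the video
              in_phase_rate = capture_fps + hitches_per_sec    # ~34 Hz would be in phase with the capture
              estimated_source_fps = 2 * in_phase_rate         # ~68 FPS from the GPU doing the rendering
              print(estimated_source_fps)                      # 68, close to the GTX 980's 69 FPS 1080p average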

        • Mr Bill
        • 4 years ago

        This is what FFT Nyquist Frequency is for.

      • geekl33tgamer
      • 4 years ago

      You wanna speak to Krogoth about that.
