Project Cars is beautiful. I could race around Road America in a Formula C car for hours and be thoroughly entertained. In fact, that's pretty much what I did in order to test these graphics cards.
Click the buttons above to cycle through the plots. You'll see frame times from one of the three test runs we conducted for each card. Notice that PC graphics cards don't always produce smoothly flowing progressions of succeeding frames of animation, as the term "frames per second" would seem to suggest. Instead, the frame time distribution is a hairy, fickle beast that may vary widely. That's why we capture rendering times for every frame of animation—so we can better understand the experience offered by each solution.
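The distinction matters because an FPS average can hide exactly the hitches these plots reveal. As a quick illustration (with made-up frame times, not our test data), two runs can post the same average FPS while feeling completely different:

```python
# Hypothetical frame-time sequences in milliseconds (illustrative only).
smooth = [16.7] * 60           # a steady ~60 FPS, every frame alike
spiky = [10.0] * 59 + [412.0]  # mostly fast frames, plus one long stall

def avg_fps(frame_times_ms):
    """Average FPS: frames rendered divided by total seconds elapsed."""
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

# Both runs average roughly 60 FPS, but the second one contains a
# 412-ms hitch that the average completely conceals.
print(round(avg_fps(smooth), 1))  # → 59.9
print(round(avg_fps(spiky), 1))   # → 59.9
```

That hidden stall is precisely what per-frame capture lets us see.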
If you click through the plots above, you'll probably gather that this game simply runs better on GeForce cards than on Radeons, for whatever reason. The plots from the Nvidia cards are generally smoother, with lower frame times and more frames produced overall. That said, the Radeons run this game quite well, too, with almost no frame times stretching beyond about 33 milliseconds, which means virtually no single frame is rendered slower than a 30-FPS rate.
The FPS averages and our more helpful 99th percentile frame time metric are mostly the inverse of one another here. When these two metrics align like this, that's generally an indicator that we're getting smooth, consistent frame times out of the cards. The one exception here is the Radeon R9 295 X2. Dual-GPU cards are a bit weird, performance-wise, and in this case, the 295 X2 simply has a pronounced slowdown in one part of the test run that contributes to its higher frame times at the 99th percentile. I noticed some visual artifacts on the 295 X2 during testing, as well.
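For the curious, a 99th-percentile frame time is simple to compute: sort the captured frame times and read off the value that 99% of frames come in at or under. Here's a minimal sketch using the nearest-rank method; the sample frame times are invented for illustration:

```python
import math

def percentile_frame_time(frame_times_ms, pct=99):
    """Frame time at the given percentile: pct% of frames finished
    in this many milliseconds or less (nearest-rank method)."""
    ordered = sorted(frame_times_ms)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# 100 hypothetical frames: 97 quick ones plus a few slow stragglers.
times = [16.0] * 97 + [22.0, 25.0, 40.0]
print(percentile_frame_time(times))  # → 25.0
```

Because it ignores the worst 1% of frames, the metric resists being skewed by a single outlier while still capturing how the slow frames behave.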
We can understand in-game animation fluidity even better by looking at the entire "tail" of the frame time distribution for each card, which illustrates what happens with the most difficult frames.
The Radeons' curves aren't quite as low and flat as the GeForces', but they're largely excellent anyhow, peaking under 30 milliseconds.
These "time spent beyond X" graphs are meant to show "badness," those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you're not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30-Hz refresh rate. Go beyond that with vsync on, and you're into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we'd like to achieve (or surpass) for each and every frame.
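Deriving such a figure from captured frame times is straightforward. A rough sketch, assuming the metric sums each frame's time in excess of the threshold (the sample numbers are invented):

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds spent past the threshold: for every frame
    that takes longer than threshold_ms, add up only the excess."""
    return sum(t - threshold_ms for t in frame_times_ms
               if t > threshold_ms)

# Hypothetical run: two quick frames, one bad stall, one mild slowdown.
times = [16.0, 18.0, 55.0, 35.0]
print(time_beyond(times, 50))  # → 5.0 (only the 55-ms frame counts)
print(time_beyond(times, 16.7) > 0)  # the 60-FPS bar is harder to clear
```

A card that never crosses a threshold scores a clean zero there, which is why the 50-ms and 33-ms bars make such a clear pass/fail test.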
In this case, generally good performance means all of the cards ace the first two thresholds, without a single frame beyond either mark. The Fury spends a little more time above 16.7 ms—or 60 FPS—than the Fury X, so it's not quite as glassy smooth, but it's close.
Again, this game just runs better on the GeForce cards, but not every game is that way. Let's move on.