Updated: GeForce cards mysteriously appear to play nice with TR's FreeSync monitors
— 11:42 PM on September 29, 2018

Update 9/30/18 3:22 AM: After further research and the collection of more high-speed camera footage from our G-Sync displays, I'm confident the tear-free gameplay we're experiencing on our FreeSync displays in combination with GeForces is a consequence of Windows 10's Desktop Window Manager adding its own form of Vsync to the proceedings when games run in borderless windowed mode, rather than of any form of VESA Adaptive-Sync being engaged with our GeForce cards. Pending a response from Nvidia as to just what we're experiencing, I'd warn against drawing any conclusions from our observations at this time, and I sincerely apologize for the misleading statements we've presented in our original article. The original piece continues below for posterity.

It all started with a red light. You see, the primary FreeSync display in the TR labs, an Eizo Foris FS2735, has a handy multi-color power LED that flips over to red when a FreeSync-compatible graphics card is connected. I was setting up a test rig today for reasons unrelated to graphics-card testing, and in the process, I grabbed our GeForce RTX 2080 Ti Founders Edition without a second thought, dropped it into a PCIe slot, and hooked it up to that monitor.

The red light came on.

Some things are just not supposed to happen in life, like the sun circling the earth, people calling espresso "expresso," and FreeSync monitors working in concert with Nvidia graphics cards. I've used GeForce cards with that Eizo display in the past as the occasion demanded, but I can't recall ever seeing the monitor showing anything other than its white default indicator with the green team's cards pushing pixels.

At that point, I got real curious. I fired up Rise of the Tomb Raider and found myself walking through the game's Geothermal Valley level with nary a tear to be seen. After I recovered from my shock at that sight, I started poking and prodding at the game's settings menu to see whether anything in there had any effect on what I was seeing.

Somewhere along the way, I discovered that toggling the game between exclusive fullscreen and non-exclusive fullscreen modes (or borderless window mode, as some games call it) occasionally caused the display to fall back into its non-variable-refresh-rate (VRR) default state, as indicated by the LED's transition from red to white. That color change didn't always happen, but I always noticed tearing with exclusive fullscreen mode enabled in the games I tried, while non-exclusive fullscreen mode seemed to reliably enable whatever VRR mojo I thought I had uncovered.


Our Eizo FS2735 failing to do the variable-refresh-rate dance in exclusive fullscreen mode
 

Our Eizo FS2735 delivers tear-free gaming, courtesy of double buffering and not VRR, with our RTX 2080 Ti in non-exclusive fullscreen mode

Next, I pulled up my iPhone's 240-FPS slow-mo mode and grabbed some footage of Deus Ex: Mankind Divided running on the RTX 2080 Ti while it was connected to the Eizo monitor. You can sort of see from the borderless-windowed-mode video that frames are arriving at different times but that motion advances an entire frame at a time, while the exclusive-fullscreen video shows the tearing and uneven advancement we expect from a game running with any kind of Vsync off.

Now that we seemed to have a little bit of control over the behavior of our Nvidia cards with our Eizo display, I set about trying to figure out just what variable or variables were apparently allowing us to break through the walls of Nvidia's VRR garden beyond our choice of fullscreen modes.


Our LG 27MU67-B failing to sync up with the RTX 2080 Ti in exclusive fullscreen mode
 

Our LG 27MU67-B exhibits regular Vsync—not VRR—with the RTX 2080 Ti in non-exclusive fullscreen mode

Was it our choice of monitor? I have an LG 27MU67-B in the TR labs for 4K testing, and that monitor supports FreeSync, as well. Shockingly enough, so long as I was able to keep the RTX 2080 Ti within its 40-Hz-to-60-Hz FreeSync range, the LG display seemed—emphasis: seemed—to do the VRR dance just as well as the Eizo. You can see what I took as evidence in the slow-motion videos above, much more clearly than with the Eizo display. While those videos only capture a portion of the screen, they accurately convey the frame-delivery experience I saw. I carefully confirmed that there wasn't a visible tear line elsewhere on the screen, too.
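
For reference, here's what that window works out to in frame-time terms. This is a trivial Python sketch with helper names of my own invention, not anything from our test tooling:

```python
# Minimal sketch: convert the LG 27MU67-B's advertised 40-60 Hz FreeSync
# window into frame times and check whether a given frame rate lands inside it.
# The function names are illustrative, not from any TR tool.

RANGE_HZ = (40.0, 60.0)  # LG 27MU67-B FreeSync range

def frame_time_ms(fps: float) -> float:
    """Frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

def in_freesync_range(fps: float, range_hz=RANGE_HZ) -> bool:
    """True if the frame rate the GPU is delivering sits inside the VRR window."""
    low, high = range_hz
    return low <= fps <= high

if __name__ == "__main__":
    for fps in (35, 45, 60, 75):
        print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms, "
              f"in range: {in_freesync_range(fps)}")
```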

Was it a Turing-specific oversight? The same trick seemed to work with the RTX 2080, too, so it wasn't just an RTX 2080 Ti thing. I pulled out one of our GTX 1080 Ti Founders Editions and hooked it up to the Eizo display. The red light flipped on, and I was able to enjoy the same tear-free experience I had been surprised to see from our Turing cards. Another seemingly jaw-dropping revelation on its own, but one that didn't get me any closer to understanding what was happening.

Was it a matter of Founders Editions versus partner cards? I have a Gigabyte RTX 2080 Gaming OC 8G in the labs for testing, and I hooked it up to the Eizo display. On came the red light.

Was it something about our test motherboard? I pulled our RTX 2080 Ti out of the first motherboard I chose and put it to work on the Z370 test rig we just finished using for our Turing reviews. The card happily fed frames to the Eizo display as they percolated through the pipeline. Another strike.

Was Windows forcing Vsync on thanks to our choice of non-exclusive fullscreen mode? (Yes, as it turns out, but we'll get to why I think so in a moment.) I pulled out my frame-time-gathering tools and collected some data with DXMD running free and in its double- and triple-buffered modes to find out. If Windows was somehow forcing the game into Vsync, I would have seen frame times cluster around the 16.7-ms and 33.3-ms marks rather than falling wherever they pleased.

Our graphs tell the opposite tale, though. Frame delivery was apparently happening normally while Vsync was off, and our Vsync graphs show the expected groupings of frame times around the 16.7-ms and 33.3-ms marks (along with a few more troublesome outliers). Didn't seem like forced Vsync was the reason for the tear-free frame delivery we were seeing.
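
For the curious, the check described above boils down to something like the following Python sketch. It isn't our actual frame-time tooling, and the sample numbers are made up, but it shows how forced Vsync on a 60-Hz display would reveal itself in the data:

```python
# Rough sketch of the "is Vsync being forced?" sanity check described above.
# Not TR's actual frame-time tooling; assumes a 60-Hz display, so quantized
# frames land near multiples of 16.7 ms.

def looks_vsync_quantized(frame_times_ms, refresh_ms=1000.0 / 60.0,
                          tolerance_ms=1.5, threshold=0.9):
    """Return True if most samples sit near 16.7 ms, 33.3 ms, 50 ms, and so on."""
    def near_multiple(t):
        nearest = round(t / refresh_ms) * refresh_ms
        return nearest > 0 and abs(t - nearest) <= tolerance_ms

    quantized = sum(1 for t in frame_times_ms if near_multiple(t))
    return quantized / len(frame_times_ms) >= threshold

# Frame times scattered anywhere between 12 and 22 ms fail this test, which is
# what our Vsync-off captures showed.
print(looks_vsync_quantized([16.6, 16.8, 33.4, 16.7, 16.5]))   # True
print(looks_vsync_quantized([12.1, 14.8, 19.3, 22.0, 13.5]))   # False
```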

Update: A bit more reasoning about what we're seeing shows why the above line of thought was incorrect. If the Desktop Window Manager itself is performing a form of Vsync, as Microsoft says it does, we wouldn't expect to see that quantization in our application-level frame-time graphs for games running in borderless windowed mode. The DWM compositor itself would be the place to look, and we don't generally set up our tools to catch that data (although it can be logged). The application can presumably render as fast as it wants behind the scenes (hence why frame rates don't appear to be capped in borderless windowed mode, another source of confusion as we were putting together this article), while the compositor presumably does the job of selecting which frames are displayed and when.
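
To make that separation concrete, here's a toy Python illustration of my own (not a model of DWM's internals): the application presents frames whenever it finishes them, while a compositor only flips the newest completed frame at a fixed 60-Hz cadence, so the application-side frame times never show the quantization even though the display only updates on the compositor's schedule.

```python
# Toy illustration (not DWM internals): an app presents frames whenever it
# finishes them, while a compositor scans out the newest completed frame at a
# fixed 60-Hz cadence. App-side frame times look free-running; the display
# still only changes once per refresh.

import itertools

def simulate(app_frame_times_ms, refresh_ms=1000.0 / 60.0, duration_ms=100.0):
    # Timestamps at which the app finishes (presents) each frame.
    present_times = list(itertools.accumulate(app_frame_times_ms))

    displayed = []
    t = 0.0
    while t <= duration_ms:
        # The compositor picks the newest frame completed before this refresh.
        ready = [i for i, pt in enumerate(present_times) if pt <= t]
        displayed.append(ready[-1] if ready else None)
        t += refresh_ms
    return displayed

# The app renders with uneven, Vsync-off-looking frame times...
app_times = [12.0, 9.0, 20.0, 7.0, 15.0, 11.0, 25.0]
# ...but the screen only ever shows whole frames, one refresh at a time.
print(simulate(app_times))
```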

We didn't try to isolate drivers in our excitement at this apparent discovery, but our test systems were using the latest 411.70 release direct from Nvidia's website. We did install GeForce Experience and leave all other settings at their defaults, including those for Nvidia's in-game overlay, which was enabled. The other constants in our setup were DisplayPort cables and the use of exclusive versus non-exclusive (or borderless windowed) modes in-game. Our test systems' versions of Windows 10 were fully updated as of this afternoon, too.

Conclusions (updated 10/1/18)

So what ultimately happened here? Well, part of the problem is that I got real excited by that FreeSync light and the tear-free gaming experience our systems were providing with the settings we chose, developed tunnel vision, and jumped the gun. There was one thing I neglected to do, though: double-check the output of our setups against a genuine variable-refresh-rate display. Had I done that, I probably would have come to the conclusion that Windows was performing Vsync of its own a lot faster. Here's some slow-motion footage of the G-Sync-compatible Asus PG279Q we have in the TR labs, running our DXMD test sequence:

You can see—much like in our original high-speed footage of G-Sync displays—that the real VRR experience is subtly different from regular Vsync. Motion is proceeding smoothly rather than in clear, fixed steps, something we would have seen had our GeForces actually been providing VRR output to our FreeSync displays. The FreeSync light and tear-free gaming experience I was seeing made me hope against hope that some form of VRR operation was taking place, but ultimately, it was just a form of good old Vsync, and I should have seen it for what it was.

Even without genuine VRR gaming taking place, it's bizarre that hooking up a GeForce graphics card would cause a FreeSync monitor to think that it was receiving a compatible signal, even some of the time. Whatever the case may be, the red light on my Eizo display should not have illuminated without a FreeSync-compatible graphics card serving as the source. We've asked Nvidia for comment on this story and we'll update it if we hear back.

66 comments — Last by Gastec at 1:50 PM on 10/16/18

Weighing the trade-offs of Nvidia DLSS for image quality and performance
— 4:34 PM on September 22, 2018

While Nvidia has heavily promoted ray-traced effects from its GeForce RTX 2080 and RTX 2080 Ti graphics cards, the deep-learning super-sampling (DLSS) tech that those cards' tensor cores unlock has proven a more immediate and divisive point of discussion. Gamers want to know whether it works and what tradeoffs it makes between image quality and performance.

Eurogamer's Digital Foundry has produced an excellent dive into the tech with side-by-side comparisons of TAA versus DLSS in the two demos we have available so far, and Computerbase has even captured downloadable high-bit-rate videos of the Final Fantasy XV benchmark and Infiltrator demo that reviewers have access to. (We're uploading some videos of our own to YouTube, but 5-GB files take a while to process.) One common thread of those comparisons is that both of those outlets are impressed with the potential of the technology, and I count myself as a third set of eyes excited about what DLSS can do.

While it's good to be able to look at side-by-side still images of the two demos we have so far, I believe that putting your nose in 100% crops of captured frames is not the most useful way of determining whether DLSS is effective. You can certainly point to small differences between rendered images captured this way, but I feel the more relevant question is whether these differences are noticeable when images are in motion. Displays add blur that can obscure fine details when they're moving, and artifacts like tearing can significantly reduce the perceived quality of a moving image for a game.

Before I saw those stills, though, I would have been hard-pressed to pick out differences in each demo, aside from a couple isolated cases like some more perceptible jaggies on a truck mirror in the first scene of the FFXV demo in DLSS mode. To borrow a Daniel Kahneman-ism, I'm primed to see those differences now. It's the "what has been seen cannot be unseen" problem at work.

This problem of objective versus subjective quality is no small issue in the evaluation of digital reproduction of moving images. Objective measurements such as the peak signal-to-noise ratio, which someone will doubtless produce for DLSS images, have been found to correlate poorly with the perceived quality of video codecs as evaluated by human eyes. In fact, the source I just linked posited that subjective quality is the only useful way to evaluate the effectiveness of a given video-processing pipeline. As a result, I believe the only way to truly see whether DLSS works for you is going to be to see it in action.
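
For reference, the PSNR figure mentioned above is simple to compute. Here's a generic Python sketch of the textbook formula, not any particular outlet's pipeline:

```python
# Generic PSNR computation between a reference image and a processed one,
# assuming 8-bit-per-channel arrays of identical shape. This is the textbook
# formula, not any particular reviewer's methodology.

import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage sketch: compare a TAA capture against a DLSS capture of the same frame.
# The file names here are hypothetical.
# from imageio.v3 import imread
# print(psnr(imread("taa_frame.png"), imread("dlss_frame.png")))
```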

This fact may be frustrating to folks looking for a single objective measurement of whether DLSS is "good" or not, but humans are complex creatures with complex visual systems that defy easy characterization. Maybe when we're all cyborgs with 100% consistent visual systems and frames of reference, we can communicate about these issues objectively.

What is noticeable when asking a graphics card—even a powerhouse like the RTX 2080 Ti—to render a native 4K scene with TAA, at least in the case of the two demos we have on hand, is that frame-time consistency can go in the toilet. As someone who lives and breathes frame-time analysis, I might be overly sensitive to these problems, but I find that any jerkiness in frame delivery is far, far more noticeable and disturbing in a sequence of moving images than any tiny loss of detail from rendering at a lower resolution and upscaling with DLSS, especially when you're viewing an average-size TV at an average viewing distance. For reference, the setup I used for testing is a 55" OLED TV about 10 feet away from my couch (three meters).


FFXV with DLSS

FFXV with TAA

The Final Fantasy XV benchmark we were able to test with looks atrocious when rendered at 4K with TAA—not because of any deficit in the anti-aliasing methods used, but because it's a jerky, hitchy mess. Whether certain fine details are being rendered in perfect crispness is irrelevant if you're clawing your eyes out over wild swings in frame times, and there are a lot of those when we test FFXV without DLSS.

Trying to use a canned demo with scene transitions is hell on our frame-time analysis tools, but if we ignore the very worst frames that accumulate as a result of that fact and consider time spent beyond 16.7 ms in rendering the FFXV demo, DLSS allows the RTX 2080 to spend 44% less time working on those tough frames and the RTX 2080 Ti to cut its time on the board by 53%, all while looking at least 95% the same to my eye. Demo or not, that is an amazing improvement, and it comes through in the smoothness of the final product.
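
For readers unfamiliar with our time-spent-beyond-16.7-ms metric, here's a minimal sketch of how such a tally works. The frame times in it are made up for illustration, not our measured data:

```python
# Minimal sketch of a "time spent beyond X ms" tally. For each frame that takes
# longer than the threshold, only the excess over the threshold is accumulated.
# Illustrative only; TR's actual analysis tools handle much more bookkeeping.

def time_beyond(frame_times_ms, threshold_ms=16.7):
    """Total milliseconds spent past the threshold across all frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical numbers, just to show the shape of the comparison:
taa_times  = [14.0, 25.0, 40.0, 18.0, 16.0, 30.0]
dlss_times = [12.0, 18.0, 22.0, 15.0, 14.0, 19.0]

taa_badness  = time_beyond(taa_times)
dlss_badness = time_beyond(dlss_times)
print(f"TAA:  {taa_badness:.1f} ms beyond 16.7 ms")
print(f"DLSS: {dlss_badness:.1f} ms beyond 16.7 ms")
print(f"Reduction: {100 * (1 - dlss_badness / taa_badness):.0f}%")
```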

At least with the quality settings that the benchmark uses, you're getting a much more enjoyable sequence of motion to watch, even if not every captured frame is 100% identical in content from TAA to DLSS. With smoother frame delivery, it's easier to remain immersed in the scenes playing out before you rather than be reminded that you're watching a game on a screen.

Some might argue that Nvidia's G-Sync variable-refresh-rate tech can help compensate for any frame-time consistency issues with native 4K rendering, but I don't agree. G-Sync only prevents tearing across a range of refresh rates—it can't smooth out the sequence of frames from the graphics card if there's wild inconsistency in the timing of the frames it's asked to process. Hitches and stutters might be less noticeable with G-Sync thanks to that lack of tearing, but they're still present. Garbage in, garbage out.


The Epic Infiltrator demo with DLSS. Vsync was on to permit better image quality evaluation

The Epic Infiltrator demo with TAA. Vsync was on to permit better image quality evaluation

The same story goes for Epic Games' Infiltrator demo, which may actually be a more relevant point of comparison to real games because it doesn't have any scene transitions to speak of. With DLSS, the RTX 2080 cuts its time spent past 16.7 ms on tough frames by a whopping 83%. The net result is tangible: Infiltrator becomes much more enjoyable to watch. Frames are delivered more consistently, and major slowdowns are rare.

The RTX 2080 Ti doesn't enjoy as large a gain, but it still reduces its time spent rendering difficult frames by 67% at the 16.7 ms threshold. For minor differences in image quality, I don't believe that's an improvement that any gamer serious about smooth frame delivery can ignore entirely.

It's valid to note that all we have to go on so far for DLSS is a pair of largely canned demos, not real and interactive games with unpredictable inputs. That said, I think any gamer who is displeased with the smoothness and fluidity of their gaming experience on a 4K monitor—even a G-Sync monitor—is going to want to try DLSS for themselves when more games that support it come to market, if they can, and see whether the minor tradeoffs other reviewers have established for image quality are noticeable to their own eyes versus the major improvement in frame-time consistency and smooth motion we've observed thus far.

123 comments — Last by DoomGuy64 at 12:28 PM on 10/02/18

The days of casual overclocking are numbered
— 4:45 AM on May 17, 2018

The days of overclocking for the casual PC enthusiast are numbered, at least if we define "casual enthusiast" as "a person who just wants to put together a PC and crank everything to 11."

Intel has fenced in more and more of the chips we can and can't tweak as the years go by, but efforts at product segmentation aside, the continued race to wring more performance out of next-gen silicon may put the final nail in the coffin of casual overclocking's wizened form regardless of whose chip you choose. The practice might not die this year or even next year, but come back three to five years from now and it'd surprise me if we dirty casuals are still seriously tweaking multipliers and voltages in our motherboard firmware for anything but DRAM.

The horsemen of this particular apocalypse are already riding. Just look at leading-edge AMD Ryzen CPUs, Intel's first Core i9 mobile CPU (sorta), and Nvidia's Pascal graphics cards. The seals that have burst to herald their arrival come from the dwindling reserves of performance that microarchitectural improvements and modern lithography processes have left chip makers to tap.

As per-core performance improvements at the microarchitectural level have largely dried up, clock speeds have become a last resort for gaining demonstrable improvements from generation to generation for today's desktop CPUs. It's no longer going to be possible for companies to leave clock-speed margins on the table through imprecise or conservative characterization and binning practices—margins that give casual overclockers reason to tweak to begin with. Tomorrow's chips are going to get smarter and smarter about their own capabilities and exploit the vast majority of their potential through awareness of their own electrical and thermal limits, too.


AMD's Ryzen 7 2700X

AMD has long talked about improving the intelligence of its chips' on-die monitoring to lift unnecessarily coarse electrical and thermal restrictions on the dynamic-voltage-and-frequency-scaling curve of a particular piece of silicon. Its Precision Boost 2 and XFR 2 algorithms are the most advanced fruits of those efforts so far.

Put a sufficiently large liquid cooler on a Ryzen 7 2700X, for example, and that chip may boost all the way to 4 GHz under an all-core load. Even if you manage to eke out another 200 MHz or so of clock speed from such a chip in all-core workloads, you're only overclocking the chip 5% past what its own monitoring facilities allow for. That performance comes at the cost of higher voltages, higher power consumption, extra heat, and potentially dicier system stability, not to mention that the 2700X is designed to boost to 4.35 GHz on its own in single-core workloads. Giving up any of that single-core oomph hurts.
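
Here's the back-of-the-envelope arithmetic behind those figures, assuming a 4.0-GHz all-core boost as the stock baseline:

```python
# Back-of-the-envelope arithmetic for the Ryzen 7 2700X example above,
# assuming a 4.0-GHz all-core boost as the stock baseline per the text.

stock_all_core_ghz = 4.0      # what the chip's own boost manages under all-core load
manual_oc_ghz = 4.2           # a typical fixed all-core overclock, per the text
stock_single_core_ghz = 4.35  # the chip's own peak single-core boost

all_core_gain = manual_oc_ghz / stock_all_core_ghz - 1
single_core_loss = 1 - manual_oc_ghz / stock_single_core_ghz

print(f"All-core gain from manual tuning: {all_core_gain:.1%}")     # ~5%
print(f"Single-core speed given up:       {single_core_loss:.1%}")  # ~3.4%
```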

When the difference between a Ryzen 5 2600 and a Ryzen 5 2600X is just $20 today, and a Ryzen 7 2700 sells for just $30 less than its X-marked counterpart, I have to wonder whether the tweaking is really worth the time. If one can throw $100 or so of coolant and copper at the problem to extract 95% of a chip's performance potential versus hours of poking, prodding, and testing for stability, well, I know what I'd rather be doing, to be honest. As I get older, I have less and less free time, and if it's down to gaming or not gaming, I'm going to do the thing that lets me game more.

The slim pickings of overclocking headroom for casual tweakers these days don't stop with CPUs, either. Nvidia's Pascal graphics cards enjoy a deadly effective dynamic-voltage-and-frequency-scaling algorithm of their own in GPU Boost 3.0. Grab a GeForce GTX 1080 Ti equipped with any massive air cooler or hybrid arrangement, for just one example, and you're already within single digits of the GP102 GPU's potential.


Corsair's Hydro GFX GTX 1080 Ti

We got just 6% higher clock speeds versus stock out of a massive air-cooled GTX 1080 Ti and about 7% higher clocks out of a liquid-cooled version of that card, all at the cost of substantially higher system power draw. I don't feel like the extra heat and noise generated that way is worth it unless you just enjoy chasing the highest possible benchmark numbers. That's a fine hobby in its own right, but single digits just aren't going to make me pursue them for their own sake these days.


The Intel Celeron 300A. Image: Querren via Wikimedia Commons; CC-BY-SA 3.0

Lest you think I'm being fatalistic here, there was a time—almost 20 years ago, to be exact—when ye olde Intel Celeron 300A with 128 KB of L2 cache could famously be tapped for a whopping 68% higher clock over its stock specifications with a good sample and the attentions of a casual enthusiast. The 300A sold for much lower prices than the chips it proceeded to outpace at those speeds, too. When we talk about casual overclocking, the Celeron 300A is perhaps the high-water mark for what made that kind of tweaking worth it.


Intel's Core i7-8700K

Sure, you might take a Core i7-8700K from its 4.3 GHz all-core speed to 5 GHz under non-AVX workloads, but that 16% of extra speed comes with roaring CPU fans and an exceedingly hot chip without some kind of thermal-interface-material-related surgery. You can bet that double-digit margin will rapidly shrink as soon as Intel releases a next-generation architecture with more intelligent Turbo Boost behavior that's not just tied to the number of active cores.

Turbo Boost 2.0 was introduced with Sandy Bridge chips all the way back in 2011, and the technology has only received a couple notable tweaks since then, like Turbo Boost Max 3.0 on Intel's high-end desktop platforms and the XFR-like Thermal Velocity Boost on the Core i9-8950HK. Like I've said, Precision Boost 2 and XFR 2 both show that there's more dynamic intelligence to be applied to CPU voltages and frequencies.

AMD, to its credit, is at least not working against casual overclockers' chances with TIM under its high-end chips' heat spreaders or by segmenting its product lines through locked and unlocked multipliers, but that regime may only last until better microarchitectures and process technology expose large amounts of clock-speed headroom again. The company's lower-end APUs already feature TIM under the heat spreader, as well, limiting overclocking potential somewhat. More capable Precision Boost and XFR algorithms may ultimately become the primary means of setting AMD CPUs apart from one another on top of the TDP differences we've already come to expect.
 
As we run harder and harder into the limits of silicon, today's newly-competitive CPU market will require all chip makers to squeeze every drop of performance they can out of their products at the factory to set apart their high-end parts and motivate upgraders. We'll likely see similar sophistication from future graphics cards, too. Leaving hundreds of megahertz on the table doesn't make dollars or sense for chip makers, and casual overclockers likely will be left with thinner and thinner pickings to extract through manual tweaking. If the behavior of today's cutting-edge chips is any indication, however, we'll have more time to game and create. Perhaps the end of casual overclocking won't be entirely sad as a result.

Feature image: Querren via Wikimedia Commons; CC-BY-SA 3.0

120 comments — Last by DevilsCanyonSoul at 7:13 PM on 06/21/18

The Tech Report secures an independent future
— 2:40 PM on February 28, 2018

A little over two years ago, Scott Wasson, the founder and long-time Editor-in-Chief of The Tech Report, began a new role with AMD to make life better for gamers using Radeon hardware and software. After his departure, Scott maintained his ownership of TR's parent business while searching for a new caretaker that would let us continue to enjoy the editorial independence that's been a hallmark of our work from day one.

Today, I'm pleased to announce that search has come to an end. The Tech Report will remain an independent publication under the ownership of Adam Eiberger, our long-time business manager. I will be staying on as The Tech Report's Editor-in-Chief. We're excited that this arrangement will allow us to sustain and grow the project that Scott started almost two decades ago for many years to come on terms of our choosing.


Our 2003 home page

It is critical to note that during the past two years, Scott exercised no control over the editorial side of TR. What we chose to cover and not to cover, the testing we performed or didn't perform, and the conclusions we reached or didn't reach were entirely under my direction. That remained the case even when our methods led to conclusions that favored products from AMD's competitors, and when we wrote articles that were complimentary to AMD's products.

These facts will not surprise anybody who has paid attention, but this final passing of the torch should put to rest any lingering concerns about our editorial independence.

With new ownership, the inevitable first question will be what else is going to change. Happily, the answer is nothing. You've already seen what The Tech Report looks like under my leadership over the past two years, and we'll continue to bring you the day-to-day news and in-depth reviews that we've always produced. We may explore new platforms and new audiences, but the core reporting and reviewing work that made The Tech Report famous will remain the bedrock of our work going forward.


Our 2010 home page

Although we're excited for the opportunity to continue what Scott started in 1999, this changing-of-the-guard comes at a tough time for online media. While we enjoy (and are deeply grateful for) support from a wide range of big names in the PC hardware industry, advertising support simply isn't as lucrative as it once was for many online publications, and The Tech Report is no exception. At the same time, there's more going on in personal computing and technology than ever, and we want to remain a leading voice in what's new and what's next.


Our home page today

If you believe in The Tech Report's work as strongly as I do, there is no better time to help us build the foundation for the next steps on our journey. Please, for the love of all things holy, whitelist us in your ad blocker. Subscribe for any amount you like. Comment on our articles, participate in the forums, and spread our work far and wide on Facebook and Twitter. Help us help you. Thanks in advance for your continued readership and support.

I'd also like to share a message from Scott regarding this news:

To the TR community:

As you may know, I left my position as TR's Editor-in-Chief to take a job in the industry just over two years ago. In the time since, I have technically retained ownership of The Tech Report as a business entity. Naturally, I've had to stay well away from TR's editorial operations during this span due to the obvious conflict of interest, and I have been looking for the right situation for TR's ownership going forward.

I'm happy to announce that we completed a deal last week to sell The Tech Report to Adam Eiberger, our long-time sales guy. The papers are signed, and he's now the sole owner of the business.

I think this arrangement is undoubtedly the best one for TR and its community, for two main reasons.

For one, I'm still passionate about the virtues of a strong, independent media. With this deal in place, TR will maintain its status as an independent voice telling honest, unvarnished truths in its reporting and reviews.

Second, putting the site into the hands of a long-time employee gives us the best chance of keeping the TR staff intact going forward. Jeff, Adam, Bruno, and the gang have done a stellar job of keeping The Tech Report going in my absence, and I'm rooting for their continued success and growth.

I want to say thanks, once again, to the entire community for making our incredible run from 1999 to today possible. I'll always look back with gratitude on the way our readership supported us and allowed me to live my dream of running an independent publication for so many years. With your continued support and a little luck, TR should be able to survive and thrive for another couple of decades.

Thanks,
Scott

The TR staff would like to extend our thanks to Scott and the many TR alumni for their hard work in building not only one of the finest technology sites around, but also one of the best audiences any writers could ask for. We wish Scott the best as we part ways and carry TR's mission forward.

137 comments — Last by BIF at 6:30 AM on 03/28/18

DirectX 11 more than doubles the GT 1030's performance versus DX12 in Hitman
— 4:02 PM on February 16, 2018

It's been an eventful week in the TR labs, to say the least, and today had one more surprise in store for us. Astute commenters on our review of the Ryzen 5 2400G and Ryzen 3 2200G took notice of the Nvidia GeForce GT 1030's lagging performance in Hitman compared to the Radeon IGPs and wondered just what was going on. On top of revisiting the value proposition of the Ryzen 5 2400G and exploring just how much CPU choice advantaged the GT 1030 in our final standings, I wanted to dig into this performance disparity to see whether it was just how the GT 1030 gets along with Hitman or an indication of a possible software problem.

With our simulated Core i3-8100 running the show, I fired up Hitman again to see what was going on. We've seen some performance disparities between Nvidia and AMD graphics processors under DirectX 12 in the past, so Hitman's rendering path seemed like the most obvious setting to tweak. To my horror, I hit the jackpot.

Hitman with DirectX 11 on the GT 1030 ran quite well with no other changes to our test settings. In its DirectX 11 mode, the GT 1030 turned in a 43-FPS average and a 28.4-ms 99th-percentile frame time, basically drawing dead-even with the Vega 11 IGP on board the Ryzen 5 2400G.

Contrast that with the slideshow-like 20-FPS average and 83.4-ms 99th-percentile frame time our original testing showed. While the GT 1030 sits on the lowest rung of the Pascal GeForce ladder, there was no way its performance should have been that bad in light of our other test results.

This new data puts the GT 1030 in a much better light compared to our first round of tests in our final reckoning. Even if we use a geometric mean to lessen the effect of outliers on the data, a big performance drop like the one we observed with Hitman under DirectX 12 will have disproportionate effects on our final index. Swapping out the GT 1030's DirectX 12 result for DirectX 11 is only fair, since it's the way gamers should apparently play with the card for the moment. That move does require a major rethink of how the Ryzen 5 2400G and Ryzen 3 2200G compare to the entry-level Pascal card, though.
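
To make the geometric-mean point concrete, here's a tiny sketch. The two Hitman figures are the 99th-percentile frame times quoted above converted to FPS; the rest of the per-game numbers are invented purely to show how much one collapsed result still moves the overall index:

```python
# Tiny sketch of a geometric-mean 99th-percentile-FPS index. The two Hitman
# figures come from the frame times quoted above (1000 / 83.4 ms ~= 12 FPS,
# 1000 / 28.4 ms ~= 35 FPS); the other per-game numbers are made up purely to
# show how much one collapsed result still moves the overall index.

import math

def geo_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

other_games = [43.0, 40.0, 38.0, 45.0, 42.0]   # hypothetical results
hitman_dx12 = 1000.0 / 83.4                    # ~12 FPS
hitman_dx11 = 1000.0 / 28.4                    # ~35 FPS

print(f"Index with Hitman DX12: {geo_mean(other_games + [hitman_dx12]):.1f} FPS")
print(f"Index with Hitman DX11: {geo_mean(other_games + [hitman_dx11]):.1f} FPS")
```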

With the parts lists we put together yesterday, the Ryzen 5 2400G system is about 15% less expensive than the Core i3-8100 system, and its 99th-percentile FPS figure is now about 11% lower than that of the Core i3-8100-and-GT-1030 box. That's still a better-than-linear relationship in price-to-performance ratios for gaming, and it's still impressive. Prior to today, gamers on a shoestring had no options short of purchasing a discrete card like the GT 1030, and the Ryzen 5 2400G and Ryzen 3 2200G now make entry-level gaming practical on integrated graphics alone.

Dropping a Ryzen 3 2200G into our build reduces 99th-percentile FPS performance about another 16% from its beefier sibling, but it makes our entry-entry-level build 17% cheaper still. As a result, I still think the Ryzen 5 2400G and Ryzen 3 2200G are worthy of the TR Editor's Choice awards they've already garnered, but it's hard to deny that these new results take a bit of the shine off both chips' performance.

To be clear, I don't think this result is an indictment of our original data or testing methods. We always set up every graphics card as equally as possible before we begin testing, and that includes gameplay settings like anti-aliasing, texture quality, and API. Our choice to use Hitman's DX12 renderer across all of our test subjects was no different. This is rudimentary stuff, to be sure, but the possibility simply didn't occur to me that using Hitman's DirectX 12 renderer would pose a problem for the GT 1030.

We've long used Hitman for performance testing despite its reputation as a Radeon-friendly title, and its DirectX 12 mode hasn't caused large performance disparities among GeForces and Radeons even as recently as our GeForce GTX 1070 Ti review. Given that past history, I felt it would be no problem to continue as we always have in using Hitman's cutting-edge API support. Testing hardware is full of surprises, though, and putting the Ryzen APUs and the GT 1030 through the wringer has produced more than its fair share of them.

57 comments — Last by xrror at 11:15 PM on 02/25/18

Revisiting the value proposition of AMD's Ryzen 5 2400G
— 3:23 PM on February 15, 2018

The mornings before tight deadlines in the world of PC hardware reviews often follow a week or less of nonstop testing, retesting, and more testing. Sleep and nutrition tend to fall by the wayside in the days leading up to an article in favor of just one more test or looking at just one more hardware combination. None of these conditions are ideal for producing the best thinking possible, and as a human under stress, I sometimes err in the minutes before a big review needs to go live after running that gauntlet.

So it went when I considered the bang-for-the-buck of the Ryzen 5 2400G, where my thinking fell victim to the availability heuristic. I had just finished the productivity value scatter chart and overall 99th-percentile frame time chart on the last page of the review before putting together my conclusion, and having those charts at the top of my mind blinded me to the need for the simple gut check of, y'know, actually putting together a parts list using some of the CPUs we tested. Had I done that, I would have come away with a significantly different view of the 2400G's value proposition.

While the $170 Ryzen 5 2400G would seem to trade blows with the $190 Core i5-8400 on a dollar-for-dollar basis for a productivity system, even that forgiving bar favors the Ryzen 5 once we start putting together parts lists. Intel doesn't offer H- or B-series motherboards compatible with Coffee Lake CPUs yet, so even budget builders have to select a Z370 motherboard to host those CPUs. That alone adds $30 or more to the Ryzen 5 2400G's value bank.

The Multi-Tool
CPU: AMD Ryzen 5 2400G, $169.99
CPU cooler: AMD Wraith Spire, --
Memory: G.Skill Ripjaws V 8 GB (2x 4 GB) DDR4-3200 CL16, $103.99
Motherboard: ASRock AB350 Pro4, $69.99 (MIR)
Graphics card: Radeon Vega 11 IGP, --
Storage: WD Blue 1TB, $49.00
Power: Corsair VS400, $34.99
Case: Cooler Master MB600L, $46.99
Total: $474.95

To demonstrate as much, here's a sample Ryzen 5 2400G build using what I would consider a balance between budget- and enthusiast-friendliness. One could select a cheaper A320 motherboard to save a few more bucks, but I don't think the typical gamer will want to lose the ability to overclock the CPU and graphics processor of a budget system. The ASRock AB350 Pro4 has a fully-heatsinked VRM and a solid-enough feature set to serve our needs, and the rest of the components in this build come from reputable companies. Spend less, and you might not be able to say as much.

The Caffeinator
CPU: Intel Core i5-8400, $189.99
CPU cooler: Intel boxed heatsink, --
Memory: G.Skill Ripjaws V 8 GB (2x 4 GB) DDR4-2666 CL15, $103.99
Motherboard: Gigabyte Z370 HD3, $99.99 (MIR)
Graphics card: Intel UHD Graphics 630, --
Storage: WD Blue 1TB, $49.00
Power: Corsair VS400, $34.99
Case: Cooler Master MB600L, $46.99
Total: $524.95
Price difference versus Ryzen 5 2400G PC: $50.00

For our Core i5-8400 productivity build, the $20 extra for the CPU might not seem like a big deal, but it's quickly compounded by the $30 extra one will pay for the Z370 motherboard we selected—and that's after one chances a mail-in rebate to get that price. Intel desperately needs to get B- and H-series motherboards for Coffee Lake CPUs into the marketplace if it wants non-gamers to have a chance of building competitive or better-than-competitive systems with AMD's latest.

The Core i5-8400 can still outpace the Ryzen 5 2400G in many of our productivity tasks, though, and on the whole, the $50 extra one will pay for this system is still more than worth it for folks who don't game. If time is money for your heavier computing workloads, the i5-8400 could quickly pay for the difference itself. Ryzen 5 2400G builders can probably make up some of the performance difference through overclocking, but we don't recommend OCing for productivity-focused builds that need 100% stability.

The Instant Coffee
CPU: Intel Core i3-8100, $119.99
CPU cooler: Intel boxed heatsink, --
Memory: G.Skill Ripjaws V 8 GB (2x 4 GB) DDR4-2400 CL16, $103.99
Motherboard: Gigabyte Z370 HD3, $99.99 (MIR)
Graphics card: Asus GT 1030, $89.99
Storage: WD Blue 1TB, $49.00
Power: Corsair VS400, $34.99
Case: Cooler Master MB600L, $46.99
Total: $544.94
Price difference versus Ryzen 5 2400G PC: $69.98

Those building entry-level PCs might not have the luxury of choosing between productivity chops and gaming power, though. To make a gaming build with capabilities similar to those of the Ryzen 5 2400G, building a system around the Core i5-8400 quickly leads to a bottom line that's too expensive to really be considered budget-friendly. That's thanks to the need for an Nvidia GT 1030 like the one we employed with our test system. Those cards were $70 or $80 until just recently, but a mysterious shortage of them at e-tail has suddenly led to a jump in price.

Regardless, back-ordering one of those cards will run you $90 at Amazon right now, and even though we're rolling with that figure for the sake of argument, $90 is honestly too much to pay for a discrete card with the GT 1030's performance. Unless you absolutely have to buy one right now, we'd wait for prices to drop once stock levels return to normal.

To restore our system to something approaching budget-friendliness, we have to tap a Core i3-8100 for our Coffee Lake gaming system instead of the Core i5-8400, and that suddenly puts the CPU performance of our build behind that of the Ryzen 5 2400G in most applications. Oof.


With new information gleaned from retesting the GeForce GT 1030 in Hitman, the Ryzen 5 2400G no longer beats out that card in our final reckoning. On the whole, though, it clears the 30-FPS threshold for 99th-percentile frame rates that we want to see from an entry-level gaming system. Before this week, that's not something we could say of any integrated graphics processor on any CPU this affordable. As part of a complete PC, it does so for $70 less than our GT 1030 build. Gamers don't have to tolerate 1280x720 and low settings on the 2400G, either; we used resolutions of 1600x900 and 1920x1080 with medium settings for the most part.

So there you have it: the Ryzen 5 2400G is a spectacularly balanced value for folks who want an entry-level system without compromising much on CPU or graphics performance, just like its Ryzen 3 sibling is at $100. Both CPUs were equally deserving of a TR Editor's Choice award for their blends of value and performance, and I'll be updating our review post-haste to reflect AMD's dominance in that department. Sorry for the goof, and I'll make a better effort to look before I leap in the future.

57 comments — Last by raaj13 at 8:37 PM on 02/28/18

Tobii makes a compelling case for more natural and immersive VR with eye tracking
— 4:36 PM on January 26, 2018

We've heard murmurs about the benefits of eye tracking in VR headsets for quite some time now, but even with the number of press days and trade shows we attend over the course of a year, I'd never had the opportunity to give the tech a spin. That changed at CES this year, where Tobii, probably the leading company in eye-tracking technology, invited us in for a private showing of its most recent round of VR eye-tracking hardware. The company had a prototype HTC Vive headset on hand with its eye trackers baked in for me to kick the tires with, and I came away convinced that eye tracking is an essential technology for the best VR experiences.


Tobii's prototype HTC Vive

Tobii's demo took us through a few potential uses of eye-tracking in VR. The most immediate benefit came in setting interpupillary distance, an essential step in achieving the sharpest and clearest images with a VR headset. With today's headsets, one might need to make a best guess at the correct IPD using an error-prone reference image, but the Tobii tech gave me immediate, empirical feedback when I achieved the correct setting.

Next, the demo pulled up a virtual mirror that allowed me to see how the eyes of my avatar could move in response to eye-tracking inputs. While this avatar wasn't particularly detailed, it was clear that the eye-tracking sensors inside the headset could translate where I was looking into virtual space with an impressive degree of precision and with low latency.

I was then transported to a kind of courtyard environment where a pair of robots could tell when I was and wasn't looking at them, causing one to pop up a speech bubble when I did make eye contact. That cute and rather binary demo hints at a future where VR avatars could make eye contact with one another, a huge part of natural interaction in the real world that's missing from most human-to-human (or human-to-robot) contact in VR today.

After that close encounter, I was transported to a simulated home theater where I was asked to perform tasks like dimming a light, adjusting the volume of the media being played, and selecting titles to watch. With eye tracking on, I had only to look at those objects or menus with my head mostly still to manipulate them with the Vive's trackpads, whereas without it I had to move my entire head, much as one would have to do with most of today's VR HMDs. It was less tiresome and more natural to simply move my eyeballs to perform that work as opposed to engaging my entire neck.

Another more interactive demo involved picking up a rock with the Vive's controller and throwing it at strategically-placed bottles scattered around a farmyard. With eye tracking off, I was acutely aware that I was moving around a controller in the real world to direct the simulated rock at a bottle. This motion didn't feel particularly natural or coordinated, and I'd call it typical of tracked hand controllers in VR today.

With eye-tracking on, however, I felt as though I was suddenly gifted with elite hand-eye coordination. The eye-tracking-enhanced rock simply went where I was looking when I gave it a toss, and my aim became far more reliable. I wouldn't say that the software was going so far as to correct wildly off-course throws, but it was somehow using the eye-tracking data to smooth some of the disconnect between real-world motion and its effects in VR. The experience with eye-tracking on simply felt more immersive.

Another interactive demo simulated a kind of AR game where a military installation on Mars was poised to fire at UFOs invading Earth. With eye-tracking off, I had to point and click with the controller to adjust the various elements of the scene. When my gaze was tracked, I simply had to look at the stellar body I wanted to adjust and move my finger across the touchpad to move it, rather than selecting each planet directly with the controller. This experience wasn't as revelatory as the rock toss, but it was more inviting and natural to simply look at the object I wanted to manipulate in the environment before doing so.

The final demo dropped me into a sci-fi setting where I could toggle a number of switches and send an interplanetary message. Without eye-tracking on, this demo worked like pressing buttons typically does in VR right now: by reaching out with the Vive controller and selecting the various controls with the trigger. With eye tracking on, however, I had only to look at those closely-spaced buttons and pull the trigger to select them—no reaching or direct manipulation required.


A simulation of how a foveated frame might look in practice. Source: Tobii

The big surprise from this experience was that Tobii had been using a form of foveated rendering throughout the demos I was allowed to try out. For the unfamiliar, foveated rendering devotes fewer processing resources to portions of the VR frame that fall into the user's peripheral vision. Early efforts at foveation relied on fixed, lens-dependent regions, but eye-tracked HMDs can dynamically change the area of best resolution depending on the direction of the wearer's gaze. The Tobii-equipped VR system was invisibly putting pixels where the user was looking while saving rendering effort in parts of the frame where it wasn't needed.
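
As a rough sketch of the idea (my own illustration, not Tobii's implementation), gaze-contingent foveation amounts to mapping each screen tile's angular distance from the tracked gaze point to a shading rate:

```python
# Rough sketch of gaze-contingent foveated rendering (not Tobii's actual
# implementation): tiles of the frame get a coarser shading rate the farther
# they sit, in visual angle, from the tracked gaze point.

import math

def shading_rate(tile_center_deg, gaze_deg):
    """Return a shading-rate divisor (1 = full rate) for a screen tile.

    tile_center_deg and gaze_deg are (x, y) positions in degrees of visual
    angle from the display center; the thresholds below are illustrative.
    """
    dx = tile_center_deg[0] - gaze_deg[0]
    dy = tile_center_deg[1] - gaze_deg[1]
    eccentricity = math.hypot(dx, dy)

    if eccentricity < 5.0:      # foveal region: shade every pixel
        return 1
    elif eccentricity < 15.0:   # near periphery: shade every other pixel
        return 2
    else:                       # far periphery: shade one pixel in four
        return 4

# With the gaze near the screen center, a corner tile gets a quarter of the
# shading work of a tile under the user's fovea.
print(shading_rate((0.0, 0.0), (1.0, -0.5)))    # 1
print(shading_rate((40.0, 22.0), (1.0, -0.5)))  # 4
```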

Indeed, the company remarked that nobody had noticed the foveation in action until it pointed out the feature and allowed folks to see an A-B test, and I certainly didn't notice it until it was revealed to me through that test (though the relatively low resolution and considerable edge aberrations of today's VR HMDs might have concealed the effects of foveation on some parts of the frame). Still, if foveation is as natural on future hardware as Tobii made it feel on today's headsets, higher-quality VR might be easier to achieve without the major increases in graphics-hardware power that would be required to naively shade every pixel.

All told, Tobii's demos proved incredibly compelling, and I was elated to finally experience the technology in action after hearing so much about it. The problem is that getting eye-tracking-equipped headsets onto the heads of VR pioneers is going to require all-new hardware. The company says its sensors require a companion ASIC to process and communicate the eye-tracking data to the host system, so the tech can't simply be retrofitted to existing HMDs. Asking early adopters to dump their existing hardware for a smoother and more immersive experience might prove to be an uphill climb. Keep an eye out for Tobii tech in future HMDs, though; it makes for a much more natural and immersive VR experience.

26 comments — Last by SgorageJar at 7:10 PM on 02/05/18