Readers of The Tech Report,
It is with a heavy heart that I announce I am stepping down as the site's Editor-in-Chief, effective as of the end of this week. You'll still see my byline appear on future articles from time to time as we take on some projects and wrap up others, but I will no longer be participating in executive or editorial decision-making for the site.
In my stead, Seth Colaner will be joining The Tech Report as interim managing editor. Seth is a seasoned, capable, and passionate watcher of this industry. He's served as a writer and editor for fine publications that you may already know of, including HotHardware and Tom's Hardware. Of late, he's worked as Editor-in-Chief of the mechanical keyboard enthusiast site KeyChatter. I believe Seth is an excellent choice to take up the mantle of The Tech Report's mission, and I hope that you'll continue to support him and TR honcho Adam Eiberger as they begin this new chapter in the site's history.
It has been a great pleasure and honor to serve as your eyes and ears in the PC hardware world for over four years. I never would have imagined that I would come to head one of the most respected sites in the industry—one I read faithfully for many years before throwing my hat into the ring as a humble freelance writer. That opportunity has been a dream come true for me, even as I pass the torch today.
In turn, running a site like The Tech Report incurs many debts of thanks. First and foremost, I would like to thank you, our readers, for making every day in this role challenging and engaging. Your feedback and questions have made me a more thoughtful and inquisitive writer, and the community around The Tech Report is without a doubt the finest on the web. I will miss writing for you dearly.
I would also like to thank TR founder Scott Wasson for allowing me to take the reins of his baby of sixteen years when he joined AMD's Radeon Technologies Group some time ago. While The Tech Report may have picked up the occasional bumps and bruises along the road since, I would like to think we did our best to uphold the core principles that Scott established as he built The Tech Report into the fiercely independent and inquisitive site that we all know and love.
My thanks as well to the freelance writing team that I had the pleasure of managing during my time in the Editor-in-Chief's chair. Zak Killian, Wayne Manion, Nathan Wasson, Tony Thomas, Eric Born, and Eric Frederiksen all brought unique voices and perspectives to The Tech Report, and it was a pleasure to watch them grow as writers as we worked together.
Endless gratitude is due to Bruno Ferreira, our Swiss Army knife of a coder, sysadmin, editor, and general pinch hitter. Among many, many other development projects, Bruno undertook the critical work needed to make TR's benchmarks easier to run and the results easier to digest, and the products of that work have been a resounding success. I have been much more productive of late thanks to his efforts, and making our testing methods more portable and accessible will be invaluable to the site going forward.
My thanks as well to Adam, who did the essential work of keeping the lights on for us in a relentlessly challenging business environment. In turn, a round of applause for Corsair, Gigabyte, G.Skill, Adata, Toshiba, and the many other loyal sponsors who understood the value of The Tech Report's work and continued to support us even as our methods may not have cast their products in the best light.
Finally, I'd like to thank TR alums Geoff Gasior and Cyril Kowaliski, whose combined knowledge and considerable patience I had the good fortune to draw upon while I learned the ropes of writing and editing within this industry. I wouldn't be where I am today without their tutelage.
If you'd like to stay in touch, you can email me at email@example.com or give me a shout on Twitter. Once more, it's been a true honor and privilege to serve you as TR's Editor-in-Chief. I'll see you around the 'net.

Updated: GeForce cards mysteriously appear to play nice with TR's FreeSync monitors
Update 9/30/18 3:22 AM: After further research and the collection of more high-speed camera footage from our G-Sync displays, I'm confident the tear-free gameplay we're experiencing on our FreeSync displays in combination with GeForces is a consequence of Windows 10's Desktop Window Manager adding its own form of Vsync to the proceedings when games are in borderless windowed mode, rather than any form of VESA Adaptive-Sync being engaged with our GeForce cards. Pending a response from Nvidia as to just what we're experiencing, I'd warn against drawing any conclusions from our observations at this time, and I sincerely apologize for the misleading statements we presented in our original article. The original piece continues below for posterity.
It all started with a red light. You see, the primary FreeSync display in the TR labs, an Eizo Foris FS2735, has a handy multi-color power LED that flips over to red when a FreeSync-compatible graphics card is connected. I was setting up a test rig today for reasons unrelated to graphics-card testing, and in the process, I grabbed our GeForce RTX 2080 Ti Founders Edition without a second thought, dropped it into a PCIe slot, and hooked it up to that monitor.
The red light came on.
Some things are just not supposed to happen in life, like the sun circling the earth, people calling espresso "expresso," and FreeSync monitors working in concert with Nvidia graphics cards. I've used GeForce cards with that Eizo display in the past as the occasion demanded, but I can't recall ever seeing the monitor showing anything other than its white default indicator with the green team's cards pushing pixels.
At that point, I got real curious. I fired up Rise of the Tomb Raider and found myself walking through the game's Geothermal Valley level with nary a tear to be seen. After I recovered from my shock at that sight, I started poking and prodding at the game's settings menu to see whether anything in there had any effect on what I was seeing.
Somewhere along the way, I discovered that toggling the game between exclusive fullscreen and non-exclusive fullscreen modes (or borderless window mode, as some games call it) occasionally caused the display to fall back into its non-variable-refresh-rate (VRR) default state, as indicated by the LED's transition from red to white. That color change didn't always happen, but I always noticed tearing with exclusive fullscreen mode enabled in the games I tried, while non-exclusive fullscreen mode seemed to reliably enable whatever VRR mojo I thought I had uncovered.
Next, I pulled up my iPhone's 240-FPS slow-mo mode and grabbed some footage of Deus Ex: Mankind Divided running on the RTX 2080 Ti while it was connected to the Eizo monitor. You can sort of see from the borderless windowed mode video that frames are arriving at different times, but that motion is advancing an entire frame at a time, while the exclusive-fullscreen mode shows the tearing and uneven advancement that we expect from a game running with any kind of Vsync off.
Now that we seemed to have a little bit of control over the behavior of our Nvidia cards with our Eizo display, I set about trying to figure out just what variable or variables were apparently allowing us to break through the walls of Nvidia's VRR garden beyond our choice of fullscreen modes.
Was it our choice of monitor? I have an LG 27MU67-B in the TR labs for 4K testing, and that monitor supports FreeSync, as well. Shockingly enough, so long as I kept the RTX 2080 Ti's output within the monitor's 40-Hz-to-60-Hz FreeSync range, the LG display seemed—emphasis: seemed—to do the VRR dance just as well as the Eizo. You can see what I took as evidence in the slow-motion videos above, much more clearly than with the Eizo display. While those videos only capture a portion of the screen, they accurately convey the frame-delivery experience I saw. I carefully confirmed that there wasn't a visible tear line elsewhere on the screen, too.
Was it a Turing-specific oversight? The same trick seemed to work with the RTX 2080, too, so it wasn't just an RTX 2080 Ti thing. I pulled out one of our GTX 1080 Ti Founders Editions and hooked it up to the Eizo display. The red light flipped on, and I was able to enjoy the same tear-free experience I had been surprised to see from our Turing cards. Another seemingly jaw-dropping revelation on its own, but one that didn't get me any closer to understanding what was happening.
Was it a matter of Founders Editions versus partner cards? I have a Gigabyte RTX 2080 Gaming OC 8G in the labs for testing, and I hooked it up to the Eizo display. On came the red light.
Was it something about our test motherboard? I pulled our RTX 2080 Ti out of the first motherboard I chose and put it to work on the Z370 test rig we just finished using for our Turing reviews. The card happily fed frames to the Eizo display as they percolated through the pipeline. Another strike.
Was Windows forcing Vsync on thanks to our choice of non-exclusive fullscreen mode? (Yes, as it turns out, but we'll get to why I think so in a moment.) I pulled out my frame-time-gathering tools and collected some data with DXMD running free and in its double- and triple-buffered modes to find out. If Windows was somehow forcing the game into Vsync, I would have seen frame times cluster around the 16.7-ms and 33.3-ms marks, rather than falling wherever.
Our graphs tell the opposite tale, though. Frame delivery was apparently happening normally while Vsync was off, and our Vsync graphs show the expected groupings of frame times around the 16.7-ms and 33.3-ms marks (along with a few more troublesome outliers). Didn't seem like forced Vsync was the reason for the tear-free frame delivery we were seeing.
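To make that reasoning concrete, here's a minimal Python sketch of the check described above, using made-up frame-time numbers rather than our actual captures: if Vsync were forced at 60 Hz, frame times would cluster near whole multiples of the 16.7-ms refresh interval.

```python
def vsync_cluster_fraction(frame_times_ms, refresh_ms=16.7, tolerance_ms=1.0):
    """Fraction of frames landing within tolerance of a whole multiple
    (1x, 2x, ...) of the display's refresh interval."""
    hits = 0
    for t in frame_times_ms:
        nearest = round(t / refresh_ms) * refresh_ms
        if nearest > 0 and abs(t - nearest) <= tolerance_ms:
            hits += 1
    return hits / len(frame_times_ms)

# Vsync-off frame times scatter freely; Vsync-on frame times quantize hard.
free_running = [12.1, 14.8, 19.3, 22.6, 11.4, 25.9]
vsynced = [16.7, 16.6, 33.4, 16.8, 33.3, 16.7]
print(vsync_cluster_fraction(free_running))  # 0.0
print(vsync_cluster_fraction(vsynced))       # 1.0
```

A clustering fraction near 1.0 would have been the smoking gun for forced Vsync in our application-level data; as the graphs show, we didn't see it there.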
Update: Some reasoning about what we're seeing underlines why the above line of thought was incorrect. If the Desktop Window Manager itself is performing a form of Vsync, as Microsoft says it does, we probably wouldn't see the results of those quantizations in our application-specific frame-time graphs for games running in borderless windowed mode. The DWM compositor itself would be the place to look, and we don't generally set up our tools to catch that data (although it can be logged). The application can presumably render as fast as it wants behind the scenes (hence why frame rates don't appear to be capped in borderless windowed mode, another source of confusion as we were putting together this article), while the compositor would presumably do the job of selecting what frames are displayed and when.
We didn't try to isolate drivers in our excitement at this apparent discovery, but our test systems were using the latest 411.70 release direct from Nvidia's website. We did install GeForce Experience and leave all other settings at their defaults, including those for Nvidia's in-game overlay, which was enabled. The other constants in our setup were DisplayPort cables and the use of exclusive versus non-exclusive (or borderless windowed) modes in-game. Our test systems' versions of Windows 10 were fully updated as of this afternoon, too.
Conclusions (updated 10/1/18)
So what ultimately happened here? Well, part of the problem is that I got real excited by that FreeSync light and the tear-free gaming experience that our systems were providing with the settings we chose, and I got tunnel vision and jumped the gun. There was one thing I neglected to do, though, and that was to double-check the output of our setups against a genuine variable-refresh-rate display. Had I done that, I probably would have come to the conclusion that Windows was performing Vsync of its own a lot faster. Here's some slow-motion footage of the G-Sync-compatible Asus PG279Q we have in the TR labs, running our DXMD test sequence:
You can see—much like in our original high-speed footage of G-Sync displays—that the real VRR experience is subtly different from regular Vsync. Motion is proceeding smoothly rather than in clear, fixed steps, something we would have seen had our GeForces actually been providing VRR output to our FreeSync displays. The FreeSync light and tear-free gaming experience I was seeing made me hope against hope that some form of VRR operation was taking place, but ultimately, it was just a form of good old Vsync, and I should have seen it for what it was.
Even without genuine VRR gaming taking place, it's bizarre that hooking up a GeForce graphics card would cause a FreeSync monitor to think that it was receiving a compatible signal, even some of the time. Whatever the case may be, the red light on my Eizo display should not have illuminated without a FreeSync-compatible graphics card serving as the source. We've asked Nvidia for comment on this story, and we'll update it if we hear back.

Weighing the trade-offs of Nvidia DLSS for image quality and performance
While Nvidia has heavily promoted ray-traced effects from its GeForce RTX 2080 and RTX 2080 Ti graphics cards, the deep-learning super-sampling (DLSS) tech that those cards' tensor cores unlock has proven a more immediate and divisive point of discussion. Gamers want to know whether it works and what tradeoffs it makes between image quality and performance.
Eurogamer's Digital Foundry has produced an excellent dive into the tech with side-by-side comparisons of TAA versus DLSS in the two demos we have available so far, and Computerbase has even captured downloadable high-bit-rate videos of the Final Fantasy XV benchmark and Infiltrator demo that reviewers have access to. (We're uploading some videos of our own to YouTube, but 5-GB files take a while to process.) One common thread of those comparisons is that both of those outlets are impressed with the potential of the technology, and I count myself as a third set of eyes that's excited about DLSS' potential.
While it's good to be able to look at side-by-side still images of the two demos we have so far, I believe that putting your nose in 100% crops of captured frames is not the most useful way of determining whether DLSS is effective. You can certainly point to small differences between rendered images captured this way, but I feel the more relevant question is whether these differences are noticeable when images are in motion. Displays add blur that can obscure fine details when they're moving, and artifacts like tearing can significantly reduce the perceived quality of a moving image for a game.
Before I saw those stills, though, I would have been hard-pressed to pick out differences in each demo, aside from a couple isolated cases like some more perceptible jaggies on a truck mirror in the first scene of the FFXV demo in DLSS mode. To borrow a Daniel Kahneman-ism, I'm primed to see those differences now. It's the "what has been seen cannot be unseen" problem at work.
This problem of objective versus subjective quality is no small issue in the evaluation of digital reproduction of moving images. Objective measurements such as the peak signal-to-noise ratio, which someone will doubtless produce for DLSS images, have been found to correlate poorly with the perceived quality of video codecs as evaluated by human eyes. In fact, the source I just linked posited that subjective quality is the only useful way to evaluate the effectiveness of a given video-processing pipeline. As a result, I believe the only way to truly see whether DLSS works for you is going to be to see it in action.
This fact may be frustrating to folks looking for a single objective measurement of whether DLSS is "good" or not, but humans are complex creatures with complex visual systems that defy easy characterization. Maybe when we're all cyborgs with 100% consistent visual systems and frames of reference, we can communicate about these issues objectively.
What is noticeable when asking a graphics card—even a powerhouse like the RTX 2080 Ti—to render a native 4K scene with TAA, at least in the case of the two demos we have on hand, is that frame-time consistency can go in the toilet. As someone who lives and breathes frame-time analysis, I might be overly sensitive to these problems, but I find that any jerkiness in frame delivery is far, far more noticeable and disturbing in a sequence of moving images than any tiny loss of detail from rendering at a lower resolution and upscaling with DLSS, especially when you're viewing an average-size TV at an average viewing distance. For reference, the setup I used for testing is a 55" OLED TV about 10 feet away from my couch (three meters).
The Final Fantasy XV benchmark we were able to test with looks atrocious when rendered at 4K with TAA—not because of any deficit in the anti-aliasing methods used, but because it's a jerky, hitchy mess. Whether certain fine details are being rendered in perfect crispness is irrelevant if you're clawing your eyes out over wild swings in frame times, and there are a lot of those when we test FFXV without DLSS.
Trying to use a canned demo with scene transitions is hell on our frame-time analysis tools, but if we ignore the very worst frames that accumulate as a result of that fact and consider time spent beyond 16.7 ms in rendering the FFXV demo, DLSS allows the RTX 2080 to spend 44% less time working on those tough frames and the RTX 2080 Ti to cut its time on the board by 53%, all while looking better than 95% the same to my eye. Demo or not, that is an amazing improvement, and it comes through in the smoothness of the final product.
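The "time spent beyond 16.7 ms" metric above is easy to sketch in code. This is an illustrative version with made-up frame times, not our actual FFXV data: for every frame that takes longer than the threshold, only the excess time counts against the card.

```python
def time_beyond_threshold(frame_times_ms, threshold_ms=16.7):
    """Total milliseconds spent past the threshold across all slow frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

taa_frames = [15.0, 22.0, 30.0, 18.0, 40.0]   # hypothetical native-4K-plus-TAA run
dlss_frames = [15.0, 17.0, 16.0, 18.0, 19.0]  # hypothetical DLSS run
print(round(time_beyond_threshold(taa_frames), 1))   # 43.2
print(round(time_beyond_threshold(dlss_frames), 1))  # 3.9
```

Frames under the threshold contribute nothing, so the metric zeroes in on exactly the hitches and slowdowns the eye objects to.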
At least with the quality settings that the benchmark uses, you're getting a much more enjoyable sequence of motion to watch, even if not every captured frame is 100% identical in content from TAA to DLSS. With smoother frame delivery, it's easier to remain immersed in the scenes playing out before you rather than be reminded that you're watching a game on a screen.
Some might argue that Nvidia's G-Sync variable-refresh-rate tech can help compensate for any frame-time consistency issues with native 4K rendering, but I don't agree. G-Sync only prevents tearing across a range of refresh rates—it can't smooth out the sequence of frames from the graphics card if there's wild inconsistency in the timing of the frames it's asked to process. Hitches and stutters might be less noticeable with G-Sync thanks to that lack of tearing, but they're still present. Garbage in, garbage out.
The same story goes for Epic Games' Infiltrator demo, which may actually be a more relevant point of comparison to real games because it doesn't have any scene transitions to speak of. With DLSS, the RTX 2080 cuts its time spent past 16.7 ms on tough frames by a whopping 83%. The net result is tangible: Infiltrator becomes much more enjoyable to watch. Frames are delivered more consistently, and major slowdowns are rare.
The RTX 2080 Ti doesn't enjoy as large a gain, but it still reduces its time spent rendering difficult frames by 67% at the 16.7-ms threshold. For minor differences in image quality, I don't believe that's an improvement that any gamer serious about smooth frame delivery can ignore entirely.
It's valid to note that all we have to go on so far for DLSS is a pair of largely canned demos, not real and interactive games with unpredictable inputs. That said, I think any gamer who is displeased with the smoothness and fluidity of their gaming experience on a 4K monitor—even a G-Sync monitor—is going to want to try DLSS for themselves when more games that support it come to market, if they can, and see whether the minor image-quality tradeoffs other reviewers have established are noticeable to their own eyes versus the major improvement in frame-time consistency and smooth motion we've observed thus far.

The days of casual overclocking are numbered
The days of overclocking for the casual PC enthusiast are numbered, at least if we define "casual enthusiast" as "a person who just wants to put together a PC and crank everything to 11."
We've become more and more fenced in regarding the chips we can and can't tweak on the Intel side of the aisle as the years go by, but efforts at product segmentation aside, the continued race to wring more and more performance out of next-gen silicon may put the final nail in casual overclocking's coffin regardless of whose chip you choose. The practice might not die this year or even next year, but come back three to five years from now and it'd surprise me if we dirty casuals are still seriously tweaking multipliers and voltages in our motherboard firmware for anything but DRAM.
The horsemen of this particular apocalypse are already riding. Just look at leading-edge AMD Ryzen CPUs, Intel's first Core i9 mobile CPU (sorta), and Nvidia's Pascal graphics cards. The seals that have burst to herald their arrival come from the dwindling reserves of performance that microarchitectural improvements and modern lithography processes have left chip makers to tap.
As per-core performance improvements at the microarchitectural level have largely dried up, clock speeds have become a last resort for gaining demonstrable improvements from generation to generation for today's desktop CPUs. It's no longer going to be possible for companies to leave clock-speed margins on the table through imprecise or conservative characterization and binning practices—margins that give casual overclockers reason to tweak to begin with. Tomorrow's chips are going to get smarter and smarter about their own capabilities and exploit the vast majority of their potential through awareness of their own electrical and thermal limits, too.
AMD has long talked about improving the intelligence of its chips' on-die monitoring to lift unnecessarily coarse electrical and thermal restrictions on the dynamic-voltage-and-frequency-scaling curve of a particular piece of silicon. Its Precision Boost 2 and XFR 2 algorithms are the most advanced fruits of those efforts so far.
Put a sufficiently large liquid cooler on a Ryzen 7 2700X, for example, and that chip may boost all the way to 4 GHz under an all-core load. Even if you manage to eke out another 200 MHz or so of clock speed from such a chip in all-core workloads, you're only overclocking the chip 5% past what its own monitoring facilities allow for. That performance comes at the cost of higher voltages, higher power consumption, extra heat, and potentially dicier system stability, not to mention that the 2700X is designed to boost to 4.35 GHz on its own in single-core workloads. Giving up any of that single-core oomph hurts.
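The 5% figure above is simple arithmetic, sketched here with the hypothetical clocks from the example in the text:

```python
stock_all_core_ghz = 4.0   # what the chip's own boost algorithm delivers under load
manual_oc_ghz = 4.2        # stock plus the extra ~200 MHz from manual tuning
headroom = (manual_oc_ghz - stock_all_core_ghz) / stock_all_core_ghz
print(f"{headroom:.0%}")   # 5%
```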
When the difference between a Ryzen 5 2600 and a Ryzen 5 2600X is just $20 today, and a Ryzen 7 2700 sells for just $30 less than its X-marked counterpart, I have to wonder whether the tweaking is really worth the time. If one can throw $100 or so of coolant and copper at the problem to extract 95% of a chip's performance potential versus hours of poking, prodding, and testing for stability, well, I know what I'd rather be doing, to be honest. As I get older, I have less and less free time, and if it's down to gaming or not gaming, I'm going to do the thing that lets me game more.
The slim pickings of overclocking headroom for casual tweakers these days don't stop with CPUs, either. Nvidia's Pascal graphics cards enjoy a deadly effective dynamic-voltage-and-frequency-scaling algorithm of their own in GPU Boost 3.0. Grab a GeForce GTX 1080 Ti equipped with any massive air cooler or hybrid arrangement, for just one example, and you're already within single digits of the GP102 GPU's potential.
We got just 6% higher clock speeds versus stock out of a massive air-cooled GTX 1080 Ti and about 7% higher clocks out of a liquid-cooled version of that card, all at the cost of substantially higher system power draw. I don't feel like the extra heat and noise generated that way is worth it unless you just enjoy chasing the highest possible benchmark numbers. That's a fine hobby in its own right, but single digits just aren't going to make me pursue them for their own sake these days.
Lest you think I'm being fatalistic here, there was a time almost 20 years ago when ye olde Intel Celeron 300A with 128 KB of L2 cache could famously be tapped for a whopping 68% higher clock over its stock specifications with a good sample and the attentions of a casual enthusiast. The 300A also sold for much lower prices than the chips it proceeded to outpace at those speeds. When we talk about casual overclocking, the Celeron 300A is perhaps the high-water mark for what made that kind of tweaking worth it.
Sure, you might take a Core i7-8700K from its 4.3 GHz all-core speed to 5 GHz under non-AVX workloads, but that 16% of extra speed comes with roaring CPU fans and an exceedingly hot chip without some kind of thermal-interface-material-related surgery. You can bet that double-digit margin will rapidly shrink as soon as Intel releases a next-generation architecture with more intelligent Turbo Boost behavior that's not just tied to the number of active cores.
Turbo Boost 2.0 was introduced with Sandy Bridge chips all the way back in 2011, and the technology has only received a couple notable tweaks since then, like Turbo Boost Max 3.0 on Intel's high-end desktop platforms and the XFR-like Thermal Velocity Boost on the Core i9-8950HK. Like I've said, Precision Boost 2 and XFR 2 both show that there's more dynamic intelligence to be applied to CPU voltages and frequencies.
AMD, to its credit, is at least not working against casual overclockers with TIM under its high-end chips' heat spreaders or by segmenting its product lines through locked and unlocked multipliers, but that regime may only last as long as better microarchitectures and process technology keep exposing large amounts of clock-speed headroom. The company's lower-end APUs already feature TIM under the heat spreader, limiting overclocking potential somewhat. More capable Precision Boost and XFR algorithms may ultimately become the primary means of setting AMD CPUs apart from one another on top of the TDP differences we've already come to expect.
As we run harder and harder into the limits of silicon, today's newly-competitive CPU market will require all chip makers to squeeze every drop of performance they can out of their products at the factory to set apart their high-end products and motivate upgraders. We'll likely see similar sophistication from future graphics cards, too. Leaving hundreds of Hertz on the table doesn't make dollars or sense for chip makers, and casual overclockers likely will be left with thinner and thinner pickings to extract through manual tweaking. If the behavior of today's cutting-edge chips is any indication, however, we'll have more time to game and create. Perhaps the end of casual overclocking won't be entirely sad as a result.
A little over two years ago, Scott Wasson, the founder and long-time Editor-in-Chief of The Tech Report, began a new role with AMD to make life better for gamers using Radeon hardware and software. After his departure, Scott maintained his ownership of TR's parent business while searching for a new caretaker that would let us continue to enjoy the editorial independence that's been a hallmark of our work from day one.
Today, I'm pleased to announce that search has come to an end. The Tech Report will remain an independent publication under the ownership of Adam Eiberger, our long-time business manager. I will be staying on as The Tech Report's Editor-in-Chief. We're excited that this arrangement will allow us to sustain and grow the project that Scott started almost two decades ago for many years to come on terms of our choosing.
It is critical to note that during the past two years, Scott exercised no control over the editorial side of TR. What we chose to cover and not to cover, the testing we performed or didn't perform, and the conclusions we reached or didn't reach were entirely under my direction. That remained the case even when our methods led to conclusions that favored products from AMD's competitors, and when we wrote articles that were complimentary to AMD's products.
These facts will not surprise anybody who has paid attention, but this final passing of the torch should put to rest any lingering concerns about our editorial independence.
With new ownership, the inevitable first question will be what else is going to change. Happily, the answer is nothing. You've already seen what The Tech Report looks like under my leadership over the past two years, and we'll continue to bring you the day-to-day news and in-depth reviews that we've always produced. We may explore new platforms and new audiences, but the core reporting and reviewing work that made The Tech Report famous will remain the bedrock of our work going forward.
Although we're excited for the opportunity to continue what Scott started in 1999, this changing-of-the-guard comes at a tough time for online media. While we enjoy (and are deeply grateful for) support from a wide range of big names in the PC hardware industry, advertising support simply isn't as lucrative as it once was for many online publications, and The Tech Report is no exception. At the same time, there's more going on in personal computing and technology than ever, and we want to remain a leading voice in what's new and what's next.
If you believe in The Tech Report's work as strongly as I do, there is no better time to help us build the foundation for the next steps on our journey. Please, for the love of all things holy, whitelist us in your ad blocker. Subscribe for any amount you like. Comment on our articles, participate in the forums, and spread our work far and wide on Facebook and Twitter. Help us help you. Thanks in advance for your continued readership and support.
I'd also like to share a message from Scott regarding this news, as well:
To the TR community:
As you may know, I left my position as TR's Editor-in-Chief to take a job in the industry just over two years ago. In the time since, I have technically retained ownership of The Tech Report as a business entity. Naturally, I've had to stay well away from TR's editorial operations during this span due to the obvious conflict of interest, and I have been looking for the right situation for TR's ownership going forward.
I'm happy to announce that we completed a deal last week to sell The Tech Report to Adam Eiberger, our long-time sales guy. The papers are signed, and he's now the sole owner of the business.
I think this arrangement is undoubtedly the best one for TR and its community, for two main reasons.
For one, I'm still passionate about the virtues of a strong, independent media. With this deal in place, TR will maintain its status as an independent voice telling honest, unvarnished truths in its reporting and reviews.
Second, putting the site into the hands of a long-time employee gives us the best chance of keeping the TR staff intact going forward. Jeff, Adam, Bruno, and the gang have done a stellar job of keeping The Tech Report going in my absence, and I'm rooting for their continued success and growth.
I want to say thanks, once again, to the entire community for making our incredible run from 1999 to today possible. I'll always look back with gratitude on the way our readership supported us and allowed me to live my dream of running an independent publication for so many years. With your continued support and a little luck, TR should be able to survive and thrive for another couple of decades.
The TR staff would like to extend our thanks to Scott and the many TR alumni for their hard work in building not only one of the finest technology sites around, but also one of the best audiences any writers could ask for. We wish Scott the best as we part ways and carry TR's mission forward.

DirectX 11 more than doubles the GT 1030's performance versus DX12 in Hitman
It's been an eventful week in the TR labs, to say the least, and today had one more surprise in store for us. Astute commenters on our review of the Ryzen 5 2400G and Ryzen 3 2200G took notice of the Nvidia GeForce GT 1030's lagging performance in Hitman compared to the Radeon IGPs and wondered just what was going on. On top of revisiting the value proposition of the Ryzen 5 2400G and exploring just how much CPU choice advantaged the GT 1030 in our final standings, I wanted to dig into this performance disparity to see whether it was just how the GT 1030 gets along with Hitman or an indication of a possible software problem.
With our simulated Core i3-8100 running the show, I fired up Hitman again to see what was going on. We've seen some performance disparities between Nvidia and AMD graphics processors under DirectX 12 in the past, so Hitman's rendering path seemed like the most obvious setting to tweak. To my horror, I hit the jackpot.
Hitman with DirectX 11 on the GT 1030 ran quite well with no other changes to our test settings. In its DirectX 11 mode, the GT 1030 turned in a 43-FPS average and a 28.4-ms 99th-percentile frame time, basically drawing dead-even with the Vega 11 IGP on board the Ryzen 5 2400G.
Contrast that with the slideshow-like 20-FPS average and 83.4-ms 99th-percentile frame time our original testing showed. While the GT 1030 is the lowest rung on the Pascal GeForce ladder, there was no way its performance should have been that bad in light of our other test results.
This new data puts the GT 1030 in a much better light in our final reckoning than our first round of tests did. Even if we use a geometric mean to lessen the effect of outliers on the data, a big performance drop like the one we observed with Hitman under DirectX 12 has a disproportionate effect on our final index. Swapping out the GT 1030's DirectX 12 result for its DirectX 11 one is only fair, since that's apparently how gamers should play with the card for the moment. That move does require a major rethink of how the Ryzen 5 2400G and Ryzen 3 2200G compare to the entry-level Pascal card, though.
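To see why even a geometric mean can't fully tame an outlier like this, here's a quick sketch. The 20-FPS and 43-FPS figures are the GT 1030's Hitman results from this retest; the other per-game averages are hypothetical numbers for illustration only, not our actual test data:

```python
from math import prod

def geomean(xs):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(xs) ** (1 / len(xs))

# Hypothetical average-FPS results across five games for one card.
# The 20-FPS entry stands in for the GT 1030's DX12 Hitman result;
# 43 FPS is its DX11 result from this retest.
with_outlier = [55, 48, 60, 52, 20]
retested     = [55, 48, 60, 52, 43]

print(geomean(with_outlier))  # ~44.0 FPS
print(geomean(retested))      # ~51.3 FPS
```

One bad result drags the whole index down by roughly 14% here, which is why a single broken API path can reshuffle an overall value comparison.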
With the parts lists we put together yesterday, the Ryzen 5 2400G system is about 15% less expensive than the Core i3-8100 system, and its 99th-percentile FPS figure is now about 11% lower than that of the Core i3-8100-and-GT-1030 box. That's still a better-than-linear relationship in price-to-performance ratios for gaming, and it's still impressive. Prior to today, gamers on a shoestring had no options short of purchasing a discrete card like the GT 1030, and the Ryzen 5 2400G and Ryzen 3 2200G now make entry-level gaming practical on integrated graphics alone.
Dropping a Ryzen 3 2200G into our build reduces 99th-percentile FPS performance about another 16% from its beefier sibling, but it makes our entry-entry-level build 17% cheaper still. As a result, I still think the Ryzen 5 2400G and Ryzen 3 2200G are worthy of the TR Editor's Choice awards they've already garnered, but it's hard to deny that these new results take a bit of the shine off both chips' performance.
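The "better-than-linear" claim above can be checked with some illustrative arithmetic. This assumes the rounded percentages from the text compound multiplicatively, with the Core i3-8100-plus-GT-1030 build as a 1.0x baseline for both price and performance:

```python
# Baseline: the Core i3-8100 + GT 1030 build at 1.0x price, 1.0x performance.
i3_perf, i3_price = 1.00, 1.00

# Ryzen 5 2400G build: ~11% lower 99th-percentile FPS, ~15% cheaper.
r5_perf, r5_price = i3_perf * (1 - 0.11), i3_price * (1 - 0.15)

# Ryzen 3 2200G build: another ~16% lower FPS, another ~17% cheaper.
r3_perf, r3_price = r5_perf * (1 - 0.16), r5_price * (1 - 0.17)

for name, perf, price in [("i3-8100 + GT 1030", i3_perf, i3_price),
                          ("Ryzen 5 2400G", r5_perf, r5_price),
                          ("Ryzen 3 2200G", r3_perf, r3_price)]:
    print(f"{name}: {perf / price:.3f} relative FPS per dollar")
```

By this rough measure, the Ryzen 5 2400G build delivers about 5% more frames per dollar than the GT 1030 box, and the Ryzen 3 2200G build about 6% more.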
To be clear, I don't think this result is an indictment of our original data or testing methods. We always set up every graphics card as equally as possible before we begin testing, and that includes gameplay settings like anti-aliasing, texture quality, and API. Our choice to use Hitman's DX12 renderer across all of our test subjects was no different. This is rudimentary stuff, to be sure, but the possibility simply didn't occur to me that using Hitman's DirectX 12 renderer would pose a problem for the GT 1030.
We've long used Hitman for performance testing despite its reputation as a Radeon-friendly title, and its DirectX 12 mode hadn't caused large performance disparities between GeForces and Radeons even as recently as our GeForce GTX 1070 Ti review. Given that history, I felt it would be no problem to continue as we always have in using Hitman's cutting-edge API support. Testing hardware is full of surprises, though, and putting the Ryzen APUs and the GT 1030 through the wringer has produced more than its fair share of them.

Revisiting the value proposition of AMD's Ryzen 5 2400G
The mornings before tight deadlines in the world of PC hardware reviews often follow a week or less of nonstop testing, retesting, and more testing. Sleep and nutrition tend to fall by the wayside in the days leading up to an article in favor of just one more test or looking at just one more hardware combination. None of these conditions are ideal for producing the best thinking possible, and as a human under stress, I sometimes err in the minutes before a big review needs to go live after running that gauntlet.
So it went when I considered the bang-for-the-buck of the Ryzen 5 2400G, where my thinking fell victim to the availability heuristic. I had just finished the productivity value scatter chart and overall 99th-percentile frame time chart on the last page of the review before putting together my conclusion, and having those charts at the top of my mind blinded me to the need for the simple gut check of, y'know, actually putting together a parts list using some of the CPUs we tested. Had I done that, I would have come away with a significantly different view of the 2400G's value proposition.
While the $170 Ryzen 5 2400G would seem to trade blows with the $190 Core i5-8400 on a dollar-for-dollar basis for a productivity system, even that forgiving bar favors the Ryzen 5 once we start putting together parts lists. Intel doesn't offer H- or B-series motherboards compatible with Coffee Lake CPUs yet, so even budget builders have to select a Z370 motherboard to host those CPUs. That alone adds $30 or more to the Ryzen 5 2400G's value bank.
To demonstrate as much, here's a sample Ryzen 5 2400G build using what I would consider a balance between budget- and enthusiast-friendliness. One could select a cheaper A320 motherboard to save a few more bucks, but I don't think the typical gamer will want to lose the ability to overclock the CPU and graphics processor of a budget system. The ASRock AB350 Pro4 has a fully heatsinked VRM and a solid-enough feature set to serve our needs, and the rest of the components in this build come from reputable companies. Spend less, and you might not be able to say as much.

| Component | Part | Price |
| --- | --- | --- |
| CPU | AMD Ryzen 5 2400G | $169.99 |
| CPU cooler | AMD Wraith Spire | -- |
| Memory | G.Skill Ripjaws V 8 GB (2x 4 GB) | |
| Motherboard | ASRock AB350 Pro4 | $69.99 (MIR) |
| Graphics card | Radeon Vega 11 IGP | -- |
| Storage | WD Blue 1TB | $49.00 |
| Case | Cooler Master MB600L | $46.99 |
| Component | Part | Price |
| --- | --- | --- |
| CPU | Intel Core i5-8400 | $189.99 |
| CPU cooler | Intel boxed heatsink | -- |
| Memory | G.Skill Ripjaws V 8 GB (2x 4 GB) | |
| Motherboard | Gigabyte Z370 HD3 | $99.99 (MIR) |
| Graphics card | Intel UHD Graphics 630 | -- |
| Storage | WD Blue 1TB | $49.00 |
| Case | Cooler Master MB600L | $46.99 |
| Price difference versus Ryzen 5 2400G PC | | $50.00 |
For our Core i5-8400 productivity build, the $20 extra for the CPU might not seem like a big deal, but it's quickly compounded by the $30 extra one will pay for the Z370 motherboard we selected, and that's after one chances a mail-in rebate to get that price. Intel desperately needs to get B- and H-series motherboards for Coffee Lake CPUs into the marketplace if it wants non-gamers to have a chance of building systems that are price-competitive with AMD's latest, or better.
The Core i5-8400 can still outpace the Ryzen 5 2400G in many of our productivity tasks, though, and on the whole, the $50 extra one will pay for this system is still more than worth it for folks who don't game. If time is money for your heavier computing workloads, the i5-8400 could quickly pay for that price difference. Ryzen 5 2400G builders can probably close some of the performance gap through overclocking, but we don't recommend overclocking productivity-focused builds that need 100% stability.
The Instant Coffee

| Component | Part | Price |
| --- | --- | --- |
| CPU | Intel Core i3-8100 | $119.99 |
| CPU cooler | Intel boxed heatsink | -- |
| Memory | G.Skill Ripjaws V 8 GB (2x 4 GB) | |
| Motherboard | Gigabyte Z370 HD3 | $99.99 (MIR) |
| Graphics card | Asus GT 1030 | $89.99 |
| Storage | WD Blue 1TB | $49.00 |
| Case | Cooler Master MB600L | $46.99 |
| Price difference versus Ryzen 5 2400G PC | | $69.98 |
Those building entry-level PCs might not have the luxury of choosing between productivity chops and gaming power, though. To get gaming capabilities similar to those of the Ryzen 5 2400G, a system built around the Core i5-8400 quickly reaches a bottom line that's too expensive to really be considered budget-friendly. That's thanks to the need for an Nvidia GT 1030 like the one we employed in our test system. Those cards were $70 or $80 until just recently, but a mysterious shortage of them at e-tail has suddenly led to a jump in price.
Regardless, back-ordering one of those cards will run you $90 at Amazon right now, and even though we're rolling with that figure for the sake of argument, $90 is honestly too much to pay for a discrete card with the GT 1030's performance. Unless you absolutely have to buy one today, we'd wait for prices to drop once stock levels return to normal.
To restore our system to something approaching budget-friendliness, we have to tap a Core i3-8100 for our Coffee Lake gaming system instead of the Core i5-8400, and that suddenly puts the CPU performance of our build behind that of the Ryzen 5 2400G in most applications. Oof.
With new information gleaned from retesting the GeForce GT 1030 in Hitman, the Ryzen 5 2400G no longer beats out that card in our final reckoning. On the whole, though, it clears the 30-FPS threshold for 99th-percentile frame rates that we want to see from an entry-level gaming system. Before this week, that's not something we could say of any integrated graphics processor on any CPU this affordable. As part of a complete PC, it does so for $70 less than our GT 1030 build. Gamers don't have to tolerate 1280x720 and low settings on the 2400G, either; we used resolutions of 1600x900 and 1920x1080 with medium settings for the most part.
So there you have it: the Ryzen 5 2400G is a spectacularly balanced value for folks who want an entry-level system without compromising much on CPU or graphics performance, just like its Ryzen 3 sibling is at $100. Both CPUs were equally deserving of a TR Editor's Choice award for their blends of value and performance, and I'll be updating our review post-haste to reflect AMD's dominance in that department. Sorry for the goof, and I'll make a better effort to look before I leap in the future.