The days of overclocking for the casual PC enthusiast are numbered, at least if we define "casual enthusiast" as "a person who just wants to put together a PC and crank everything to 11."
We've become more and more fenced in about the chips we can and can't tweak on the Intel side as the years go by, but product segmentation aside, the continued race to wring more performance out of next-gen silicon may put the final nail in casual overclocking's coffin regardless of whose chip you choose. The practice might not die this year or even next, but come back three to five years from now and it'd surprise me if we dirty casuals are still seriously tweaking multipliers and voltages in our motherboard firmware for anything but DRAM.
The horsemen of this particular apocalypse are already riding. Just look at leading-edge AMD Ryzen CPUs, Intel's first Core i9 mobile CPU (sorta), and Nvidia's Pascal graphics cards. The seals that have burst to herald their arrival come from the dwindling reserves of performance that microarchitectural improvements and modern lithography processes have left chip makers to tap.
As per-core performance improvements at the microarchitectural level have largely dried up, clock speeds have become a last resort for gaining demonstrable improvements from generation to generation for today's desktop CPUs. It's no longer going to be possible for companies to leave clock-speed margins on the table through imprecise or conservative characterization and binning practices—margins that give casual overclockers reason to tweak to begin with. Tomorrow's chips are going to get smarter and smarter about their own capabilities and exploit the vast majority of their potential through awareness of their own electrical and thermal limits, too.
AMD has long talked about improving the intelligence of its chips' on-die monitoring to lift unnecessarily coarse electrical and thermal restrictions on the dynamic-voltage-and-frequency-scaling curve of a particular piece of silicon. Its Precision Boost 2 and XFR 2 algorithms are the most advanced fruits of those efforts so far.
Put a sufficiently large liquid cooler on a Ryzen 7 2700X, for example, and that chip may boost all the way to 4 GHz under an all-core load. Even if you manage to eke out another 200 MHz or so of clock speed from such a chip in all-core workloads, you're only overclocking the chip 5% past what its own monitoring facilities allow for. That performance comes at the cost of higher voltages, higher power consumption, extra heat, and potentially dicier system stability, not to mention that the 2700X is designed to boost to 4.35 GHz on its own in single-core workloads. Giving up any of that single-core oomph hurts.
When the difference between a Ryzen 5 2600 and a Ryzen 5 2600X is just $20 today, and a Ryzen 7 2700 sells for just $30 less than its X-marked counterpart, I have to wonder whether the tweaking is really worth the time. If one can throw $100 or so of coolant and copper at the problem to extract 95% of a chip's performance potential versus hours of poking, prodding, and testing for stability, well, I know what I'd rather be doing, to be honest. As I get older, I have less and less free time, and if it's down to gaming or not gaming, I'm going to do the thing that lets me game more.
The slim pickings of overclocking headroom for casual tweakers these days doesn't stop with CPUs, either. Nvidia's Pascal graphics cards enjoy a deadly-effective dynamic-voltage-and-frequency-scaling algorithm of their own in GPU Boost 3.0. Grab a GeForce GTX 1080 Ti equipped with any massive air cooler or hybrid arrangement, for just one example, and you're already within single digits of the GP102 GPU's potential.
We got just 6% higher clock speeds versus stock out of a massive air-cooled GTX 1080 Ti and about 7% higher clocks out of a liquid-cooled version of that card, all at the cost of substantially higher system power draw. I don't feel like the extra heat and noise generated that way is worth it unless you just enjoy chasing the highest possible benchmark numbers. That's a fine hobby in its own right, but single-digit gains just aren't enough to make me chase them for their own sake these days.
Lest you think I'm being fatalistic here, there was a time—almost 20 years ago, to be exact—when ye olde Intel Celeron 300A with 128 KB of L2 cache could famously be tapped for a whopping 68% higher clock over its stock specifications with a good sample and the attentions of a casual enthusiast. The 300A sold for much lower prices than the chips it proceeded to outpace at those speeds, too. When we talk about casual overclocking, the Celeron 300A is perhaps the high-water mark for what made that kind of tweaking worth it.
Sure, you might take a Core i7-8700K from its 4.3 GHz all-core speed to 5 GHz under non-AVX workloads, but that 16% of extra speed comes with roaring CPU fans and an exceedingly hot chip without some kind of thermal-interface-material-related surgery. You can bet that double-digit margin will rapidly shrink as soon as Intel releases a next-generation architecture with more intelligent Turbo Boost behavior that's not just tied to the number of active cores.
Turbo Boost 2.0 was introduced with Sandy Bridge chips all the way back in 2011, and the technology has only received a couple of notable tweaks since then: Turbo Boost Max 3.0 on Intel's high-end desktop platforms and the XFR-like Thermal Velocity Boost on the Core i9-8950HK. As I've said, Precision Boost 2 and XFR 2 both show that there's more dynamic intelligence to be applied to CPU voltages and frequencies.
AMD, to its credit, is at least not working against casual overclockers' chances with TIM under its high-end chips' heat spreaders or by segmenting its product lines through locked and unlocked multipliers, but that regime may only last as long as better microarchitectures and process technology continue to expose large amounts of clock-speed headroom. The company's lower-end APUs already feature TIM under the heat spreader, limiting overclocking potential somewhat. More capable Precision Boost and XFR algorithms may ultimately become the primary means of setting AMD CPUs apart from one another on top of the TDP differences we've already come to expect.
As we run harder and harder into the limits of silicon, today's newly-competitive CPU market will require all chip makers to squeeze every drop of performance they can out of their products at the factory to set apart their high-end products and motivate upgraders. We'll likely see similar sophistication from future graphics cards, too. Leaving hundreds of megahertz on the table doesn't make dollars or sense for chip makers, and casual overclockers likely will be left with thinner and thinner pickings to extract through manual tweaking. If the behavior of today's cutting-edge chips is any indication, however, we'll have more time to game and create. Perhaps the end of casual overclocking won't be entirely sad as a result.
A little over two years ago, Scott Wasson, the founder and long-time Editor-in-Chief of The Tech Report, began a new role with AMD to make life better for gamers using Radeon hardware and software. After his departure, Scott maintained his ownership of TR's parent business while searching for a new caretaker that would let us continue to enjoy the editorial independence that's been a hallmark of our work from day one.
Today, I'm pleased to announce that search has come to an end. The Tech Report will remain an independent publication under the ownership of Adam Eiberger, our long-time business manager. I will be staying on as The Tech Report's Editor-in-Chief. We're excited that this arrangement will allow us to sustain and grow the project that Scott started almost two decades ago for many years to come on terms of our choosing.
It is critical to note that during the past two years, Scott exercised no control over the editorial side of TR. What we chose to cover and not to cover, the testing we performed or didn't perform, and the conclusions we reached or didn't reach were entirely under my direction. That remained the case even when our methods led to conclusions that favored products from AMD's competitors, and when we wrote articles that were complimentary to AMD's products.
These facts will not surprise anybody who has paid attention, but this final passing of the torch should put to rest any lingering concerns about our editorial independence.
With new ownership, the inevitable first question will be what else is going to change. Happily, the answer is nothing. You've already seen what The Tech Report looks like under my leadership over the past two years, and we'll continue to bring you the day-to-day news and in-depth reviews that we've always produced. We may explore new platforms and new audiences, but the core reporting and reviewing work that made The Tech Report famous will remain the bedrock of our work going forward.
Although we're excited for the opportunity to continue what Scott started in 1999, this changing-of-the-guard comes at a tough time for online media. While we enjoy (and are deeply grateful for) support from a wide range of big names in the PC hardware industry, advertising support simply isn't as lucrative as it once was for many online publications, and The Tech Report is no exception. At the same time, there's more going on in personal computing and technology than ever, and we want to remain a leading voice in what's new and what's next.
If you believe in The Tech Report's work as strongly as I do, there is no better time to help us build the foundation for the next steps on our journey. Please, for the love of all things holy, whitelist us in your ad blocker. Subscribe for any amount you like. Comment on our articles, participate in the forums, and spread our work far and wide on Facebook and Twitter. Help us help you. Thanks in advance for your continued readership and support.
I'd also like to share a message from Scott regarding this news:
To the TR community:
As you may know, I left my position as TR's Editor-in-Chief to take a job in the industry just over two years ago. In the time since, I have technically retained ownership of The Tech Report as a business entity. Naturally, I've had to stay well away from TR's editorial operations during this span due to the obvious conflict of interest, and I have been looking for the right situation for TR's ownership going forward.
I'm happy to announce that we completed a deal last week to sell The Tech Report to Adam Eiberger, our long-time sales guy. The papers are signed, and he's now the sole owner of the business.
I think this arrangement is undoubtedly the best one for TR and its community, for two main reasons.
For one, I'm still passionate about the virtues of a strong, independent media. With this deal in place, TR will maintain its status as an independent voice telling honest, unvarnished truths in its reporting and reviews.
Second, putting the site into the hands of a long-time employee gives us the best chance of keeping the TR staff intact going forward. Jeff, Adam, Bruno, and the gang have done a stellar job of keeping The Tech Report going in my absence, and I'm rooting for their continued success and growth.
I want to say thanks, once again, to the entire community for making our incredible run from 1999 to today possible. I'll always look back with gratitude on the way our readership supported us and allowed me to live my dream of running an independent publication for so many years. With your continued support and a little luck, TR should be able to survive and thrive for another couple of decades.
The TR staff would like to extend our thanks to Scott and the many TR alumni for their hard work in building not only one of the finest technology sites around, but also one of the best audiences any writers could ask for. We wish Scott the best as we part ways and carry TR's mission forward.

DirectX 11 more than doubles the GT 1030's performance versus DX12 in Hitman
It's been an eventful week in the TR labs, to say the least, and today had one more surprise in store for us. Astute commenters on our review of the Ryzen 5 2400G and Ryzen 3 2200G took notice of the Nvidia GeForce GT 1030's lagging performance in Hitman compared to the Radeon IGPs and wondered just what was going on. On top of revisiting the value proposition of the Ryzen 5 2400G and performing some explorations of just how much CPU choice advantaged the GT 1030 in our final standings, I wanted to dig into this performance disparity to see whether it was just how the GT 1030 gets along with Hitman or an indication of a possible software problem.
With our simulated Core i3-8100 running the show, I fired up Hitman again to see what was going on. We've seen some performance disparities between Nvidia and AMD graphics processors under DirectX 12 in the past, so Hitman's rendering path seemed like the most obvious setting to tweak. To my horror, I hit the jackpot.
Hitman with DirectX 11 on the GT 1030 ran quite well with no other changes to our test settings. In its DirectX 11 mode, the GT 1030 turned in a 43-FPS average and a 28.4-ms 99th-percentile frame time, basically drawing dead-even with the Vega 11 IGP on board the Ryzen 5 2400G.
Contrast that with the slideshow-like 20-FPS average and 83.4-ms 99th-percentile frame time our original testing showed. While the GT 1030 is the first tier on the Pascal GeForce ladder, there was no way its performance should have been that bad in light of our other test results.
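For readers newer to our frame-time metrics, those 99th-percentile frame times convert into equivalent instantaneous frame rates as FPS = 1000 / (frame time in ms). A quick sketch of that conversion for the two figures above:

```python
# Convert a 99th-percentile frame time in milliseconds to the
# equivalent instantaneous frame rate: FPS = 1000 / frame_time_ms.
def frame_time_to_fps(ms: float) -> float:
    return 1000.0 / ms

for api, ms in (("DirectX 11", 28.4), ("DirectX 12", 83.4)):
    print(f"{api}: {ms} ms 99th percentile -> {frame_time_to_fps(ms):.1f} FPS")
```

In other words, the GT 1030's toughest frames run at roughly 35 FPS under DX11 but bog down to about 12 FPS under DX12—the slideshow territory we observed.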
This new data puts the GT 1030 in a much better light compared to our first round of tests in our final reckoning. Even if we use a geometric mean to lessen the effect of outliers on the data, a big performance drop like the one we observed with Hitman under DirectX 12 will have disproportionate effects on our final index. Swapping out the GT 1030's DirectX 12 result for DirectX 11 is only fair, since it's the way gamers should apparently play with the card for the moment. That move does require a major rethink of how the Ryzen 5 2400G and Ryzen 3 2200G compare to the entry-level Pascal card, though.
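To illustrate the point about outliers, here's a minimal sketch with hypothetical 99th-percentile FPS figures (not our actual test data). A geometric mean weights relative changes equally across games, so one result sitting at a quarter of the others still drags the whole index down by a noticeable chunk:

```python
from math import prod

def geo_mean(values):
    """Geometric mean: the nth root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-game results, one with a DX12-style faceplant
with_outlier = [43, 51, 38, 47, 12]
without_outlier = [43, 51, 38, 47, 43]

print(f"Index with outlier:    {geo_mean(with_outlier):.1f}")    # ~34.2
print(f"Index without outlier: {geo_mean(without_outlier):.1f}")  # ~44.2
```

Swapping the one bad result for a representative one lifts this hypothetical index by nearly 30%, which is why retesting Hitman under DX11 moves the GT 1030's final standing so much.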
With the parts lists we put together yesterday, the Ryzen 5 2400G system is about 15% less expensive than the Core i3-8100 system, and its 99th-percentile FPS figure is now about 11% lower than that of the Core i3-8100-and-GT-1030 box. That's still a better-than-linear relationship in price-to-performance ratios for gaming, and it's still impressive. Prior to today, gamers on a shoestring had no options short of purchasing a discrete card like the GT 1030, and the Ryzen 5 2400G and Ryzen 3 2200G now make entry-level gaming practical on integrated graphics alone.
Dropping a Ryzen 3 2200G into our build reduces 99th-percentile FPS performance about another 16% from its beefier sibling, but it makes our entry-entry-level build 17% cheaper still. As a result, I still think the Ryzen 5 2400G and Ryzen 3 2200G are worthy of the TR Editor's Choice awards they've already garnered, but it's hard to deny that these new results take a bit of the shine off both chips' performance.
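Putting rough numbers on that "better-than-linear" claim: the prices and frame rates below are illustrative stand-ins chosen to mirror the percentages quoted above, not exact figures from our review.

```python
# Illustrative build totals and 99th-percentile FPS figures chosen to
# mirror the ~15%-cheaper / ~11%-slower relationship described above.
builds = {
    "Ryzen 5 2400G":          {"price": 430.0, "fps": 32.0},
    "Core i3-8100 + GT 1030": {"price": 506.0, "fps": 36.0},
}

for name, b in builds.items():
    per_hundred = b["fps"] / b["price"] * 100
    print(f"{name}: {per_hundred:.2f} 99th-percentile FPS per $100")
```

The APU delivers more frames per dollar, which is what we mean by a better-than-linear price-to-performance relationship.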
To be clear, I don't think this result is an indictment of our original data or testing methods. We always set up every graphics card as identically as possible before we begin testing, and that includes gameplay settings like anti-aliasing, texture quality, and API. Our choice to use Hitman's DX12 renderer across all of our test subjects was no different. This is rudimentary stuff, to be sure, but the possibility simply didn't occur to me that using Hitman's DirectX 12 renderer would pose a problem for the GT 1030.
We've long used Hitman for performance testing despite its reputation as a Radeon-friendly title, and its DirectX 12 mode hasn't caused large performance disparities among GeForces and Radeons even as recently as our GeForce GTX 1070 Ti review. Given that past history, I felt it would be no problem to continue as we always have in using Hitman's cutting-edge API support. Testing hardware is full of surprises, though, and putting the Ryzen APUs and the GT 1030 through the wringer has produced more than its fair share of them.

Revisiting the value proposition of AMD's Ryzen 5 2400G
The mornings before tight deadlines in the world of PC hardware reviews often follow a week or less of nonstop testing, retesting, and more testing. Sleep and nutrition tend to fall by the wayside in the days leading up to an article in favor of just one more test or looking at just one more hardware combination. None of these conditions are ideal for producing the best thinking possible, and as a human under stress, I sometimes err in the minutes before a big review needs to go live after running that gauntlet.
So it went when I considered the bang-for-the-buck of the Ryzen 5 2400G, where my thinking fell victim to the availability heuristic. I had just finished the productivity value scatter chart and overall 99th-percentile frame time chart on the last page of the review before putting together my conclusion, and having those charts at the top of my mind blinded me to the need for the simple gut check of, y'know, actually putting together a parts list using some of the CPUs we tested. Had I done that, I would have come away with a significantly different view of the 2400G's value proposition.
While the $170 Ryzen 5 2400G would seem to trade blows with the $190 Core i5-8400 on a dollar-for-dollar basis for a productivity system, even that forgiving bar favors the Ryzen 5 once we start putting together parts lists. Intel doesn't offer H- or B-series motherboards compatible with Coffee Lake CPUs yet, so even budget builders have to select a Z370 motherboard to host those CPUs. That alone adds $30 or more to the Ryzen 5 2400G's value bank.
|Component|Selection|Price|
|---|---|---|
|CPU|AMD Ryzen 5 2400G|$169.99|
|CPU cooler|AMD Wraith Spire|--|
|Memory|G.Skill Ripjaws V 8 GB (2x 4 GB)||
|Motherboard|ASRock AB350 Pro4|$69.99 (MIR)|
|Graphics card|Radeon Vega 11 IGP|--|
|Storage|WD Blue 1TB|$49.00|
|Case|Cooler Master MB600L|$46.99|
To demonstrate as much, here's a sample Ryzen 5 2400G build using what I would consider a balance between budget- and enthusiast-friendliness. One could select a cheaper A320 motherboard to save a few more bucks, but I don't think the typical gamer will want to lose the ability to overclock the CPU and graphics processor of a budget system. The ASRock AB350 Pro4 has a fully-heatsinked VRM and a solid-enough feature set to serve our needs, and the rest of the components in this build come from reputable companies. Spend less, and you might not be able to say as much.
|Component|Selection|Price|
|---|---|---|
|CPU|Intel Core i5-8400|$189.99|
|CPU cooler|Intel boxed heatsink|--|
|Memory|G.Skill Ripjaws V 8 GB (2x 4 GB)||
|Motherboard|Gigabyte Z370 HD3|$99.99 (MIR)|
|Graphics card|Intel UHD Graphics 630|--|
|Storage|WD Blue 1TB|$49.00|
|Case|Cooler Master MB600L|$46.99|
|Price difference versus Ryzen 5 2400G PC||$50.00|
For our Core i5-8400 productivity build, the $20 extra for the CPU might not seem like a big deal, but it's quickly compounded by the $30 extra one will pay for the Z370 motherboard we selected—and that's after one chances a mail-in rebate to get that price. Intel desperately needs to get B- and H-series motherboards for Coffee Lake CPUs into the marketplace if it wants non-gamers to have a chance of building systems that are competitive or better versus AMD's latest.
The Core i5-8400 can still outpace the Ryzen 5 2400G in many of our productivity tasks, though, and on the whole, the $50 extra one will pay for this system is still more than worth it for folks who don't game. If time is money for your heavier computing workloads, the i5-8400 could quickly pay for the difference itself. Ryzen 5 2400G builders can probably make up some of the performance difference through overclocking, but we don't recommend OCing for productivity-focused builds that need 100% stability.
The Instant Coffee

|Component|Selection|Price|
|---|---|---|
|CPU|Intel Core i3-8100|$119.99|
|CPU cooler|Intel boxed heatsink|--|
|Memory|G.Skill Ripjaws V 8 GB (2x 4 GB)||
|Motherboard|Gigabyte Z370 HD3|$99.99 (MIR)|
|Graphics card|Asus GT 1030|$89.99|
|Storage|WD Blue 1TB|$49.00|
|Case|Cooler Master MB600L|$46.99|
|Price difference versus Ryzen 5 2400G PC||$69.98|
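As a sanity check on those price-difference rows, here's a quick sum of the listed parts. The memory kit is identical across all three builds, so it cancels out of any comparison and is omitted; note that the listed parts put the Instant Coffee a penny above the quoted $69.98, presumably a rounding quirk in the original pricing.

```python
# Sum the listed parts (identical memory kit omitted, since it cancels
# out of any build-to-build comparison).
ryzen_2400g = {"CPU": 169.99, "Motherboard": 69.99, "Storage": 49.00, "Case": 46.99}
core_i5     = {"CPU": 189.99, "Motherboard": 99.99, "Storage": 49.00, "Case": 46.99}
core_i3     = {"CPU": 119.99, "Motherboard": 99.99, "Graphics": 89.99,
               "Storage": 49.00, "Case": 46.99}

base = sum(ryzen_2400g.values())
for name, build in (("Core i5-8400 build", core_i5), ("Instant Coffee", core_i3)):
    print(f"{name}: +${sum(build.values()) - base:.2f} versus the Ryzen 5 2400G build")
```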
Those building entry-level PCs might not have the luxury of choosing between productivity chops and gaming power, though. Building a Core i5-8400 system with gaming capabilities similar to those of the Ryzen 5 2400G quickly leads to a bottom line that's too expensive to really be considered budget-friendly, thanks to the need for an Nvidia GT 1030 like the one we employed with our test system. Those cards were $70 or $80 until just recently, but a mysterious shortage of them at e-tail has suddenly led to a jump in price.
Regardless, back-ordering one of those cards will run you $90 at Amazon right now, and even though we're rolling with that figure for the sake of argument, $90 is honestly too much to pay for a discrete card with the GT 1030's performance. Unless you absolutely have to buy one now, we'd wait for prices to drop once stock levels return to normal.
To restore our system to something approaching budget-friendliness, we have to tap a Core i3-8100 for our Coffee Lake gaming system instead of the Core i5-8400, and that suddenly puts the CPU performance of our build behind that of the Ryzen 5 2400G in most applications. Oof.
With new information gleaned from retesting the GeForce GT 1030 in Hitman, the Ryzen 5 2400G no longer beats out that card in our final reckoning. On the whole, though, it clears the 30-FPS threshold for 99th-percentile frame rates that we want to see from an entry-level gaming system. Before this week, that's not something we could say of any integrated graphics processor on any CPU this affordable. As part of a complete PC, it does so for $70 less than our GT 1030 build. Gamers don't have to tolerate 1280x720 and low settings on the 2400G, either; we used resolutions of 1600x900 and 1920x1080 with medium settings for the most part.
So there you have it: the Ryzen 5 2400G is a spectacularly balanced value for folks who want an entry-level system without compromising much on CPU or graphics performance, just like its Ryzen 3 sibling is at $100. Both CPUs were equally deserving of a TR Editor's Choice award for their blends of value and performance, and I'll be updating our review post-haste to reflect AMD's dominance in that department. Sorry for the goof, and I'll make a better effort to look before I leap in the future.

Tobii makes a compelling case for more natural and immersive VR with eye tracking
We've heard murmurs about the benefits of eye-tracking in VR headsets for quite some time now, but even with the number of press days and trade shows we attend in the course of the year, I'd never had the opportunity to give the tech a spin. That changed with a demo we got to try this year at CES. Tobii, probably the leading company in eye-tracking technology, invited us in for a private showing of its most recent round of VR eye-tracking hardware this year. The company had a prototype HTC Vive headset at hand with its eye trackers baked in for me to kick the tires with, and I came away convinced that eye tracking is an essential technology for the best VR experiences.
Tobii's demo took us through a few potential uses of eye-tracking in VR. The most immediate benefit came in setting interpupillary distance, an essential step in achieving the sharpest and clearest images with a VR headset. With today's headsets, one might need to make a best guess at the correct IPD using an error-prone reference image, but the Tobii tech gave me immediate, empirical feedback when I achieved the correct setting.
Next, the demo pulled up a virtual mirror that allowed me to see how the eyes of my avatar could move in response to eye-tracking inputs. While this avatar wasn't particularly detailed, it was clear that the eye-tracking sensors inside the headset could translate where I was looking into virtual space with an impressive degree of precision and with low latency.
I was then transported to a kind of courtyard-like environment where a pair of robots could tell when I was and wasn't looking at them, causing one to pop up a speech bubble when I did make eye contact. That cute and rather binary demo belies a future where VR avatars could make eye contact with one another, a huge part of natural interaction in the real world that's not present with most human-to-human (or human-to-robot) contact in VR today.
After that close encounter, I was transported to a simulated home theater where I was asked to perform tasks like dimming a light, adjusting the volume of the media being played, and selecting titles to watch. With eye tracking on, I had only to look at those objects or menus with my head mostly still to manipulate them with the Vive's trackpads, whereas without it I had to move my entire head, much as one would have to do with most of today's VR HMDs. It was less tiresome and more natural to simply move my eyeballs to perform that work as opposed to engaging my entire neck.
Another more interactive demo involved picking up a rock with the Vive's controller and throwing it at strategically-placed bottles scattered around a farmyard. With eye tracking off, I was acutely aware that I was moving around a controller in the real world to direct the simulated rock at a bottle. This motion didn't feel particularly natural or coordinated, and I'd call it typical of tracked hand controllers in VR today.
With eye-tracking on, however, I felt as though I was suddenly gifted with elite hand-eye coordination. The eye-tracking-enhanced rock simply went where I was looking when I gave it a toss, and my aim became far more reliable. I wouldn't say that the software was going so far as to correct wildly off-course throws, but it was somehow using the eye-tracking data to smooth some of the disconnect between real-world motion and its effects in VR. The experience with eye-tracking on simply felt more immersive.
Another interactive demo simulated a kind of AR game where a military installation on Mars was poised to fire at UFOs invading Earth. With eye-tracking off, I had to point and click with the controller to adjust the various elements of the scene. When my gaze was tracked, I simply had to look at the stellar body I wanted to adjust and move my finger across the touchpad to move it, rather than selecting each planet directly with the controller. This experience wasn't as revelatory as the rock toss, but it was more inviting and natural to simply look at the object I wanted to manipulate in the environment before doing so.
The final demo dropped me into a sci-fi setting where I could toggle a number of switches and send an interplanetary message. Without eye-tracking on, this demo worked like pressing buttons typically does in VR right now: by reaching out with the Vive controller and selecting the various controls with the trigger. With eye tracking on, however, I had only to look at those closely-spaced buttons and pull the trigger to select them—no reaching or direct manipulation required.
The big surprise from this experience was that Tobii had been using a form of foveated rendering throughout the demos I was allowed to try out. For the unfamiliar, foveated rendering devotes fewer processing resources to portions of the VR frame that fall into the user's peripheral vision. Early efforts at foveation relied on fixed, lens-dependent regions, but eye-tracked HMDs can dynamically change the area of best resolution depending on the direction of the wearer's gaze. The Tobii-equipped VR system was invisibly putting pixels where the user was looking while saving rendering effort in parts of the frame where it wasn't needed.
Indeed, the company remarked that nobody had noticed the foveation in action until it pointed out the feature and allowed folks to see an A-B test, and I certainly didn't notice the feature in action until it was revealed to me through that test (though the relatively low resolution and considerable edge aberrations of today's VR HMDs might have concealed the effects of foveation on some parts of the frame). Still, if foveation is as natural on future hardware as Tobii made it feel on today's headsets, higher-quality VR might be easier to achieve without the major increases in graphics-hardware power that would be required to naively shade every pixel.
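As a back-of-the-envelope illustration of why foveation is attractive, consider shading every pixel inside a circle around the gaze point at full rate and only a quarter of the pixels outside it. All of the numbers here (panel resolution, foveal radius, peripheral shading rate) are assumptions for the sketch, not Tobii's actual parameters:

```python
import math

width, height = 2160, 1200       # Vive-class combined panel resolution
fovea_radius_px = 400            # full-detail circle around the gaze point
peripheral_rate = 0.25           # shade 1 in 4 pixels outside the fovea

total = width * height
foveal = min(total, math.pi * fovea_radius_px ** 2)
peripheral = total - foveal

shaded = foveal + peripheral * peripheral_rate
print(f"Shading work: {shaded / total:.0%} of naive full-resolution rendering")
```

Even this crude model cuts shading work to roughly 40% of the naive approach, and real implementations can be more aggressive the further a region sits from the gaze point.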
All told, Tobii's demos proved incredibly compelling, and I was elated to finally experience the technology in action after hearing so much about it. The problem is that getting eye-tracking-equipped headsets onto the heads of VR pioneers is going to require all-new hardware—the company says its sensors require a companion ASIC to process and communicate the eye-tracking data to the host system, and it can't simply be retrofitted to existing HMDs. Asking early adopters to dump their existing hardware for a smoother and more immersive experience might prove to be an uphill climb. Keep an eye out for Tobii tech in future HMDs, though—it makes for a much more natural and immersive VR experience.

Synaptics' Clear ID fingerprint sensor feels like the way of the future
Edge-to-edge screens are poised to be the new hotness of smartphone design in 2018, but pushing pixels right out to a device's borders leaves little room for the range of sensors we've come to know and love on the front of a phone—especially fingerprint sensors. By all accounts, Apple is dealing with this new reality by gradually retiring the fingerprint as a biometric input. You can still get a Touch ID sensor on an iPhone 8 or some MacBook Pros, but the future as seen from Cupertino clearly relies on Face ID, its array of depth-mapping hardware, and the accompanying notch.
Fingerprint sensors still have some advantages over face-sensing tech, though. They allow owners to unlock their devices without looking directly at the front of the phone, an important capability in meetings or when the device is resting on a desk or table. They can't be tricked by twins, and they can't be as easily spoofed as some less-sophisticated forms of facial identification. It's simple to enroll multiple fingerprints with most fingerprint sensors, as well, whereas Face ID is limited to one user at the moment. I appreciate being able to enroll several of my ten fingers with my iPhone to account for my left and right hands, for example, while other owners might enroll a spouse's fingerprint for emergencies. Ideally, we'd have both technologies at our disposal in the phones of the future.
Some Android device makers have been coping with the demand for ever-shrinking bezels by introducing less-sophisticated facial unlock schemes of their own, but the overwhelming majority of serious biometric inputs on those devices comes from a fingerprint sensor on the back of the phone. Sometimes those back-mounted sensors are placed well, and sometimes they aren't. As a long-time iPhone user, I believe that the natural home for a fingerprint reader is on the front of the device, but edge-to-edge displays mean that phone manufacturers who aren't buying Kinect makers of their own simply have to put fingerprint sensors somewhere else.
The intensifying battle between face and fingerprint for biometric superiority, and the question of where to put fingerprint sensors in tomorrow's phones, is fertile ground for Synaptics. You might already know Synaptics from its wide selection of existing touchpad and fingerprint-sensing hardware, and last week at CES, the company made a big splash by showing off the first phone with one of its Clear ID under-screen fingerprint sensors inside: a model from Vivo, a brand primarily involved in southeast Asian markets.
In short, Clear ID sensors let owners enjoy the best of both edge-to-edge screens and front-mounted fingerprint sensors by taking advantage of the unique properties of OLED panels to capture fingerprint data right through the gaps in the screen's pixel matrix itself. Clear ID results in an all- (or mostly-) screen device with no visible fingerprint sensor on its face and no notches for face-sensing cameras at the top of the phone. We covered Clear ID in depth at its debut, but I was eager to go thumbs-on with this technology in a production phone.
What's most striking about Clear ID is how natural it feels to use. Enrolling my fingerprint required the usual lengthy sequence of hold-and-lift motions that most any other fingerprint sensor demands these days. Once the device knew the contours of my thumb, though, unlocking the phone proved as simple and swift as resting my opposable digit on a highlighted region of the screen that's always visible thanks to the self-illuminating pixels of the Vivo phone's OLED panel. The process felt as fast as using Touch ID on my iPhone 6S, and it may even have been faster once I got the phone into a state where it would unlock without playing the elaborate animation you see above.
In the vein of the best innovations, Clear ID feels like the way fingerprints ought to be read on phones with edge-to-edge screens, and it'll likely serve as a distinguishing feature for device makers planning to incorporate OLED panels in their future phones. The backlight layer of LCDs won't let fingerprint data pass through to Clear ID sensors, so the tech won't be coming to phones relying on those panels yet, if it ever does. Clear ID is so obvious and natural in use that it was my immediate answer when folks asked about the most innovative thing on display at CES, and I'm excited to see it make its way into more devices soon.

How much does screen size matter in comparing Ryzen Mobile and Kaby Lake-R battery life?
As we've continued testing AMD's Ryzen 5 2500U APU over the past few days, we've been confronted with the problem of comparing battery life across laptops with different screen sizes. Many readers suggested that I take each machine's internal display out of the picture by hooking the laptops up to external monitors. While I wanted to get real-world battery-life testing out of the way first, I can certainly appreciate the elegance of leveling the playing field that way. Now we've done just that.
Before we get too deeply into these results, I want to point out loudly and clearly that these numbers are not and will never be representative of real-world performance. Laptop users will nearly always be running the internal displays of their systems when they're on battery, and removing that major source of power draw from a mobile computer is an entirely synthetic and artificial way to run a battery life test. We're also still testing two different vendor implementations of different SoCs, and it's possible that Acer's engineers might have some kind of magic that HP's don't (or vice versa). Still, for folks curious about platform performance and efficiency, rather than the more real-world system performance tests we would typically conduct, these results might prove interesting.
To give this approach a try, I connected both the Envy x360 and the MX150-powered Acer Swift 3 to 2560x1440 external monitors running at 60 Hz using each machine's HDMI output. I then configured each system to show a display output on the external monitor only and confirmed that both laptops' internal displays were 100% off. After those preparations, I ran our TR Browserbench web-browsing test until each machine automatically shut off at 5% battery before recording their run times.
As we'd expect, both machines' battery life benefits from not having to power an internal monitor. Counter to our expectations, though, the Envy x360 doesn't actually seem to spend a great deal of its power budget on running its screen. The Envy gained only 53 minutes, or 15%, of web-browsing time when it didn't have to drive its internal monitor. The MX150-powered Acer, on the other hand, gained a whopping five hours of battery life when we removed its screen from the picture. I was so astounded by that result that I retested the Envy to ensure that a background process or other anomaly wasn't affecting its battery life, but the HP machine repeated its first performance.
We can take battery capacity out of the efficiency picture for this light workload by dividing minutes of run time by the capacity of the battery in watt-hours. This approach gives us a normalized "minutes per watt-hour" figure that should be comparable across our two test systems. HWiNFO64 reports that the Envy x360 has a 54.8 Wh battery, and since it's brand-new, a full charge tops up that battery completely. Using the technique described above, we get 7.8 minutes of run time per watt hour from the HP system.
The Acer Swift 3 I got from Intel appears to have been a test mule at some point in its life. HWiNFO64 reports that the Swift 3 has already lost 10% of its battery capacity, from 50.7 Wh when it was new to 45.7 Wh now. In this measure of efficiency, though, that capacity decrease actually helps the Swift 3. The system posts a jaw-dropping 19 minutes of run time per watt-hour for light web browsing, or a 2.4-times-better result.
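For the curious, the normalization above boils down to a single division. This minimal Python sketch reproduces the figures; the run times are back-calculated approximations from the numbers in the text, not exact logged values:

```python
# Normalize battery run time by pack capacity to get a screen-independent
# efficiency figure, as described in the text.

def minutes_per_wh(runtime_minutes: float, battery_wh: float) -> float:
    """Return run time per watt-hour of battery capacity."""
    return runtime_minutes / battery_wh

# HP Envy x360 (Ryzen 5 2500U): ~425 minutes on its 54.8 Wh battery
envy = minutes_per_wh(425, 54.8)

# Acer Swift 3 (i5-8250U + MX150): ~870 minutes on its degraded 45.7 Wh battery
swift = minutes_per_wh(870, 45.7)

print(f"Envy x360: {envy:.1f} min/Wh")   # ~7.8
print(f"Swift 3:   {swift:.1f} min/Wh")  # ~19.0
```

Dividing the two rounded figures gives the roughly 2.4-times-better result quoted above.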
Although this is a staggering difference, I'll emphasize again that it's not representative of performance in the real world. When we don't remove the display from the picture, the Optimus-equipped Swift 3 posted only nine and a half hours of run time in our i5-8250U review, or about half again as long as the Envy's six hours and 12 minutes. If we drop the MX150 from the picture, the IGP-only Swift 3s and their 10.5 hours of battery life run only 67% longer than the Envy. Those are only rough assessments of platform potential, given that we aren't normalizing for battery capacity or screen size. Still, Ryzen Mobile systems might have a ways to go to catch Intel in the battery-life race. The blue team has been obsessed with mobile power management for years, and technologies like Speed Shift are just the latest and most visible results of those efforts.
In any case, it's clear that there are a lot of moving parts behind the battery life of these systems. I've repeatedly cautioned that it's early days for both drivers and firmware for the Ryzen 5 2500U, and it's possible that future refinements will close this gap somewhat. Benchmarking a similar Intel-powered system from HP might also help even the field, given what I found in my first examination of the Ryzen-powered Envy x360's battery life. (If you'd like to help with that project, throw us a few bucks, eh?) Still, if you favor battery-sipping longevity over convertible versatility and raw performance, it seems like the Envy x360 requires a compromise that our GeForce-powered Acer Swift 3 doesn't. Stay tuned for more battery-life testing soon.