Edge-to-edge screens are poised to be the new hotness of smartphone design in 2018, but pushing pixels right out to a device's borders leaves little room for the range of sensors we've come to know and love on the front of a phone—especially fingerprint sensors. By all accounts, Apple is dealing with this new reality by gradually retiring the fingerprint as a biometric input. You can still get a Touch ID sensor on an iPhone 8 or some MacBook Pros, but the future as seen from Cupertino clearly relies on Face ID, its array of depth-mapping hardware, and the accompanying notch.
Fingerprint sensors still have some advantages over face-sensing tech, though. They allow owners to unlock their devices without looking directly at the front of the phone, an important capability in meetings or when the device is resting on a desk or table. They can't be tricked by twins, and they can't be as easily spoofed as some less-sophisticated forms of facial identification. It's simple to enroll multiple fingerprints with most fingerprint sensors, as well, whereas Face ID is limited to one user at the moment. I appreciate being able to enroll several of my ten fingers with my iPhone to account for my left and right hands, for example, while other owners might enroll a spouse's fingerprint for emergencies. Ideally, we'd have both technologies at our disposal in the phones of the future.
Some Android device makers have been coping with the demand for ever-shrinking bezels by introducing less-sophisticated facial unlock schemes of their own, but the overwhelming majority of serious biometric inputs on those devices comes from a fingerprint sensor on the back of the phone. Sometimes those back-mounted sensors are placed well, and sometimes they aren't. As a long-time iPhone user, I believe that the natural home for a fingerprint reader is on the front of the device, but edge-to-edge displays mean that phone manufacturers who aren't buying Kinect makers of their own simply have to put fingerprint sensors somewhere else.
The intensifying battle between face and fingerprint for biometric superiority, and the question of where to put fingerprint sensors in tomorrow's phones, is fertile ground for Synaptics. You might already know Synaptics from its wide selection of existing touchpad and fingerprint-sensing hardware, and last week at CES, the company made a big splash by showing off the first phone with one of its Clear ID under-screen fingerprint sensors inside: a model from Vivo, a brand primarily involved in southeast Asian markets.
In short, Clear ID sensors let owners enjoy the best of both edge-to-edge screens and front-mounted fingerprint sensors by taking advantage of the unique properties of OLED panels to capture fingerprint data right through the gaps in the screen's pixel matrix itself. Clear ID results in an all- (or mostly-) screen device with no visible fingerprint sensor on its face and no notches for face-sensing cameras at the top of the phone. We covered Clear ID in depth at its debut, but I was eager to go thumbs-on with this technology in a production phone.
What's most striking about Clear ID is how natural it feels to use. Enrolling my fingerprint required the usual lengthy sequence of hold-and-lift motions that most any other fingerprint sensor demands these days. Once the device knew the contours of my thumb, though, unlocking the phone proved as simple and swift as resting my opposable digit on a highlighted region of the screen that's always visible thanks to the self-illuminating pixels of the Vivo phone's OLED panel. The process felt as fast as using Touch ID on my iPhone 6S, and it may even have been faster when I got the phone in a state where it would unlock without playing the elaborate animation you see above.
In the vein of the best innovations, Clear ID feels like the way fingerprints ought to be read on phones with edge-to-edge screens, and it'll likely serve as a distinguishing feature for device makers planning to incorporate OLED panels in their future phones. The backlight layer of LCDs won't let fingerprint data pass through to Clear ID sensors, so the tech won't be coming to phones relying on those panels yet, if it ever does. Clear ID is so obvious and natural in use that it was my immediate answer when folks asked about the most innovative thing on display at CES, and I'm excited to see it make its way into more devices soon.

How much does screen size matter in comparing Ryzen Mobile and Kaby Lake-R battery life?
As we've continued testing AMD's Ryzen 5 2500U APU over the past few days, we've been confronted with the problem of comparing battery life across laptops with different screen sizes. Many readers suggested that I should take each machine's internal display out of the picture by hooking them up to external monitors. While I wanted to get real-world battery-life testing out of the way first, I can certainly appreciate the elegance of leveling the playing field that way. Now we've done just that.
Before we get too deeply into these results, I want to point out loudly and clearly that these numbers are not and will never be representative of real-world performance. Laptop users will nearly always be running the internal displays of their systems when they're on battery, and removing that major source of power draw from a mobile computer is an entirely synthetic and artificial way to run a battery life test. We're also still testing two different vendor implementations of different SoCs, and it's possible that Acer's engineers might have some kind of magic that HP's don't (or vice versa). Still, for folks curious about platform performance and efficiency, rather than the more real-world system performance tests we would typically conduct, these results might prove interesting.
To give this approach a try, I connected both the Envy x360 and the MX150-powered Acer Swift 3 to 2560x1440 external monitors running at 60 Hz using each machine's HDMI output. I then configured each system to show a display output on the external monitor only and confirmed that both laptops' internal displays were 100% off. After those preparations, I ran our TR Browserbench web-browsing test until each machine automatically shut off at 5% battery before recording their run times.
As we'd expect, both machines' battery life benefits from not having to power an internal monitor. Counter to our expectations, though, the Envy x360 doesn't actually seem to spend a great deal of its power budget on running its screen. The Envy gained only 53 minutes, or 15%, more web-browsing time than when it didn't have to drive its internal monitor. The MX150-powered Acer, on the other hand, gained a whopping five hours of battery life when we removed its screen from the picture. I was so astounded by that result that I retested the Envy to ensure that a background process or other anomaly wasn't affecting battery life, but the HP machine repeated its first performance.
We can take battery capacity out of the efficiency picture for this light workload by dividing minutes of run time by the capacity of the battery in watt-hours. This approach gives us a normalized "minutes per watt-hour" figure that should be comparable across our two test systems. HWiNFO64 reports that the Envy x360 has a 54.8 Wh battery, and since it's brand-new, a full charge tops up that battery completely. Using the technique described above, we get 7.8 minutes of run time per watt hour from the HP system.
The Acer Swift 3 I got from Intel appears to have been a test mule at some point in its life. HWiNFO64 reports that the Swift 3 has already lost 10% of its battery capacity, from 50.7 Wh when it was new to 45.7 Wh now. In this measure of efficiency, though, that capacity decrease actually helps the Swift 3. The system posts a jaw-dropping 19 minutes of run time per watt-hour for light web browsing, or a 2.4-times-better result.
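For the curious, the normalization above can be sketched in a few lines of Python. The run times are back-of-the-envelope reconstructions from the figures in this article (roughly 425 minutes for the Envy and 870 minutes for the Swift 3 with their screens off), not separately measured values.

```python
# Normalized battery efficiency: minutes of run time per watt-hour of
# battery capacity, as described above. Run times are reconstructed from
# the article's figures and are approximate.

def minutes_per_wh(runtime_min, capacity_wh):
    """Divide run time by battery capacity to normalize across systems."""
    return runtime_min / capacity_wh

envy = minutes_per_wh(425, 54.8)   # HP Envy x360, 54.8 Wh battery
swift = minutes_per_wh(870, 45.7)  # Acer Swift 3, battery worn to 45.7 Wh

print(f"Envy x360: {envy:.1f} min/Wh")   # ~7.8
print(f"Swift 3:   {swift:.1f} min/Wh")  # ~19.0
```

Dividing those two rounded figures gives the roughly 2.4-times gap cited above.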
Although this is a staggering difference, I emphasize that it's not representative of performance in the real world. If we don't remove the display from the picture, the Optimus-equipped Swift 3 only posted nine and a half hours of run time in our i5-8250U review, or only about half again as long as the Envy's six hours and 12 minutes. If we drop the MX150 from the picture, the IGP-only Swift 3s and their 10.5 hours of battery only run 67% longer than the Envy. Those are only rough assessments of platform potential, given that we aren't normalizing for battery capacity or screen size. Still, Ryzen Mobile systems might have a ways to go to catch Intel in the battery life race. The blue team has been obsessed with mobile power management for years, and technologies like Speed Shift are just the latest and most visible results of those efforts.
In any case, it's clear that there are a lot of moving parts behind the battery life of these systems. I've repeatedly cautioned that it's early days for both drivers and firmware for the Ryzen 5 2500U, and it's possible that future refinements will close this gap somewhat. Benchmarking a similar Intel-powered system from HP might also help even the field, given the research from my first examination of the Ryzen-powered Envy x360's battery life. (If you'd like to help with that project, throw us a few bucks, eh?) Still, if you favor battery-sipping longevity over convertible versatility and raw performance, it seems like the Envy x360 requires a compromise that our GeForce-powered Acer Swift 3 doesn't. Stay tuned for more battery-life testing soon.

Here's a first look at the battery life of HP's Ryzen-powered Envy x360
My initial tests of AMD's Ryzen 5 2500U APU gave us a fine picture of the APU's performance, but we admittedly didn't test battery life in that initial article. Part of the reason for that omission was to avoid drawing unfair comparisons between the 15.6" HP Envy x360 that plays host to the Ryzen 5 2500U and the 14" Acer Swift 3 machines we used to represent Intel's Core i5-8250U.
Although the jump in screen size might not sound large on paper, the practical effects of the Ryzen system's bigger screen on battery life are likely quite significant. Getting the same light output from a bigger panel requires more power, as just one variable, and the HP system also has a pen digitizer in its LCD panel with unclear power-management characteristics.
So long as we keep those caveats in mind, though, we can at least offer a basic picture of how long the Ryzen 5 2500U lets the Envy x360 run on battery using a couple different tests. The one quirk of our test rig compared to the $750 default configuration you'd get off the shelf at Best Buy is the Samsung 960 EVO 500GB SSD we're using as our system drive. A hard-drive-only Envy might run for less time, though Windows' power-management features should generally take the mechanical hard drive out of the equation when it's not in active use.
To run these tests, I set the Envy's screen brightness to 50% and left Windows on its default Balanced power plan. The only changes we made to that default configuration involved disabling the operating system's Battery Saver safeguard and forcing the screen to remain on over the entire course of the test.
First off, I ran the Envy's battery down with TR's Browserbench. This benchmark runs a loop of an older version of our home page with plenty of Flash content sprinkled in, along with some cache-busting code to make for more work on the test system. Browserbench is getting up there in years, but it is repeatable and still offers a decent proxy for light web use. We're working on a new version of Browserbench that runs through a range of real-world web sites, but for now, the old version will have to do.
When the Envy shut off automatically with 5% remaining in its juice pack, it registered six hours and 12 minutes of battery life under Browserbench. That's well short of HP's claimed 11-hour battery life figure, but it would at least get you from New York to Los Angeles on in-flight Wi-Fi. To see just how that figure stacks up among similar machines, I scoured the web for reviews of comparable PCs.
Reviews of Envy x360s with Intel eighth-gen processors inside remain scarce, but I did find that an Intel Kaby Lake-powered Envy x360 with a configuration and battery similar to that of our Ryzen system turned in six hours of web-browsing battery life for the folks at Laptop Mag. That test suggests Ryzen Mobile could be delivering competitive battery life against similar Intel systems, but it's hard to say just how competitive the Ryzen 5 2500U is without more in-depth (and possibly less representative) directed testing on our part.
Web browsing isn't the only use of a machine on the go, of course. To test video playback, I set up Windows 10's Movies and TV app to loop the 1920x1080, 55 Mbps H.264 version of the Jellyfish reference video until the Envy's battery died. I confirmed that Movies and TV was firing up the GPU's video decode engine using the GPU-monitoring tools in the latest version of Windows 10's Task Manager before letting the test run. Incidentally, here's a full accounting of the Ryzen APU's video-decoding capabilities, as ferreted out by DXVA Checker:
After displaying four hours and 37 minutes of pulsating sea life, the Envy x360 went dark once more. That result parallels the four-hour-and-32-minute run time achieved by the folks at HotHardware in their video playback testing, although the site claimed it had to run its x360's screen at 100% brightness to achieve a comparable output level with its other laptops. While I didn't run our Envy x360 so brightly, HotHardware's results still suggest we can be confident that our run time is in the right ballpark.
With these two tests in the bag, it seems like our 15.6" Ryzen system delivers only average battery life for its size class. I'd still caution against drawing too many comparisons between the Envy x360's battery life and that of other laptops at this stage, though. Implementation differences matter, and we don't know how Ryzen Mobile will behave in smaller and lighter systems. It's still early days for drivers and firmware, too. Our preliminary results and research suggest that Ryzen Mobile's battery life could be competitive with that of similar Intel systems, though, and that's as good a bit of news for AMD as its chip's well-rounded performance.

Just how hot is Coffee Lake?
Update 10/6/2017 10:45 PM: This article originally stated that the Gigabyte Z370 Aorus Gaming 7 motherboard ships with "multi-core enhancement" enabled. The board in fact ships with the feature disabled. I deeply regret the error and have corrected the article accordingly.
Intel's Core i7-8700K has proved to be an exceptionally well-rounded CPU in our testing so far, but one potential negative has come up again and again in the other reviews I've been reading. Many reviewers have noted that the chip "runs hot," so much so that the idea even made for sub-headline news at one outlet. I was a bit confused reading these statements, because the i7-8700K didn't seem to be an exceptionally hot-running chip in my testing compared to other modern Intel CPUs. Although I ran into a thermal limit while trying to boost voltages enough to get our chip stable under a Prime95 AVX workload, running all of the chip's AVX units at 4.8 GHz was no small feat, and we expect high temperatures as a matter of course from unmodified Intel CPUs when they're overclocked.
Still, the TR water-cooler meeting this morning produced an interesting line in the sand for whether a chip is difficult to cool: can it be held in check by Cooler Master's evergreen Hyper 212 Evo? That $30 tower remains a fine bang-for-the-buck contender among CPU heatsinks, so it's a natural baseline for establishing whether a chip is tough to keep frosty. I don't have a Hyper 212 Evo here, but I do have Cooler Master's MasterAir Pro 4, a 120-mm tower that's basically the same heatsink as a Hyper 212 Evo with a newer fan design. It was simple enough to see whether the Core i7-8700K fell on the right or wrong side of the MasterAir Pro 4's cooling power, so I popped off the 280-mm Corsair H115i that usually cools our test chips and set up the MasterAir Pro 4 in its place.
First off, it's worth defining what "hot" means in the context of the i7-8700K. Intel's Tjunction specification for this chip remains the same 100° C it's been for Skylake and Kaby Lake K-series CPUs. Hit that temperature, and the i7-8700K will begin to throttle. We obviously want to stay as far below that threshold as possible, but it establishes an upper limit for what a "bad" temperature might be for the chip.
With that in mind, I ran the Prime95 Small FFTs torture test at stock speeds to establish a baseline for the chip's thermal behavior. Prime95 hammers a chip's AVX units in a way that's meant to produce the most heat possible, well beyond what any real-world workload might generate. Gigabyte's Easy Tune utility reported that the chip was running at 4.3 GHz—its normal all-core Turbo speed—at 1.1V under this synthetic load. With the MasterAir Pro 4 on top, those clocks and voltages resulted in a CPU package temperature of 78° C, according to HWiNFO64.
Those numbers are certainly warm for a stock-clocked LGA 1151 CPU, but it's worth remembering that we're now asking the cooler to wrangle six cores and 12 threads instead of four cores and eight threads. That's entry-level high-end desktop territory, so slightly higher temps than we're used to should be par for the course. In any case, the stock-clocked i7-8700K proved perfectly happy under our Hyper 212 Evo stand-in.
Next up, I tried to run the chip with Gigabyte's "multi-core enhancement" turned on. This "enhancement" (happily left off by default, as "Auto" means in the Z370 Aorus Gaming 7's firmware) runs all six cores of our i7-8700K at the single-core Turbo Boost speed, or 4.7 GHz. We vigorously search out and disable these kinds of settings for every CPU review we do, since they're the same as overclocking. Other sites may not, and that's not ideal. Not only do these settings ruin any sense of what "stock" performance is from a given processor, they place the same demands on heatsinks as an equivalent overclock would.
I know that's stating the obvious, but we've had bad experiences with these "performance-enhancing" tweaks in the past when they've goosed many-core chips like the Core i7-6950X, and they're sometimes on by default in firmware from Gigabyte and Asus, at least. Readers and YouTube-watchers should be asking whether reviewers explicitly went to the effort to turn off these features before making sweeping conclusions about a chip's power consumption, heat production, performance, and efficiency.
We are glad that Gigabyte's Z370 firmware makes the correct choice with regard to multi-core enhancement behavior, though, and we hope other motherboard brands have followed or will follow suit.
Regardless, I fired up our system in this state and cued up Prime95 Small FFTs again. The chip proceeded to throttle on several cores with a 1.308V Vcore (a difficult figure to monitor given the plunging core clocks, but I tried). That throttling meant the chip was running into its 100° C Tjunction limit on some cores, so the motherboard's automatic voltage control is probably a tad too aggressive given my manual overclocking experience. I also tried running Blender with multi-core enhancement enabled, and while all of the cores got to around 89° C under that load, the chip didn't throttle. That result still suggests a Hyper 212 Evo-class cooler probably isn't sufficient for holding the overclocked i7-8700K in check, given how little headroom it offers.
This behavior shows why "multi-core enhancement" is undesirable: it's overclocking through and through, and it requires cooling to match. Builders who are buying heatsinks under the assumption they'll be facing all-core Turbo speeds of 4.3 GHz from the i7-8700K could be surprised if their motherboard tries to "help" by modifying Intel's factory Turbo Boost behavior. Our Gigabyte Z370 Aorus Gaming 7 test motherboard commendably ships with the feature disabled, but we'd imagine the feature could still catch both reviewers and builders alike off guard. We've been protesting this "feature" for years, and we'll continue to do so when it rears its head.
Finally, I tried the same manual overclock I achieved with our Corsair H115i liquid cooler: 5 GHz with a -2 AVX offset and a dynamic Vcore in a range of 1.284V to 1.296V. Under the MasterAir Pro 4, running Prime95 caused the chip to throttle, while Blender caused it to run in the low 90° C range. Considering that my overclock was pulling another 100 MHz from the chip's AVX units with only slightly less voltage, it's not a surprise that I got similar thermal results. Under these conditions, the chip definitely exceeds the informal "difficult to cool" barrier that we drew at the beginning of this article.
For comparison, Corsair's 280-mm H115i produced a 90° C package temperature and core temperatures ranging from about 84° C to 90° C using the same settings and voltages with Prime95 Small FFTs. Blender topped out our overclocked i7-8700K at about 80° C at the package. The H115i definitely reins in the i7-8700K if you're shooting for the ability to run Prime95 for hours, as one might want to do for extreme stability testing.
These are all rough benchmarks, but at the end of the day, Coffee Lake does seem to run hotter at stock speeds than the quad-core CPUs that have come before it. That's probably as it should be: there are two more cores and four more threads to deal with under the heat spreader. Builders planning to cool the chip at stock speeds should certainly be able to get away with an inexpensive cooler like a Hyper 212 Evo, but those hoping for a Prime95-stable overclock without a delid and repaste need to budget for a substantial liquid cooler. In that sense, the i7-8700K is no different than the Core i7-6700K and Core i7-7700K before it, and it's definitely harder to cool than AMD's Ryzen CPUs. AMD's chips all boast soldered heat spreaders, and metal is undeniably a better thermal transfer medium than paste.
The question of a paste-based TIM versus solder is almost certainly the largest variable in keeping Coffee Lake on ice relative to Ryzen CPUs, but I think there's more to it than that. First off, it's worth noting that Intel's implementation of AVX in the Skylake microarchitecture offers two 256-bit vector units per core, while the Zen architecture only offers two 128-bit-wide units per core. Skylake also has wider data paths that need more wires to implement, and that presumably means higher power usage when moving data around. When we run an intense AVX workload like Prime95, then, the stress test should unsurprisingly do more work, consume more power, and ultimately generate more heat on a chip that's capable of sustaining twice the SIMD throughput. It's certainly easier to cool an overclocked Ryzen CPU thanks to its soldered heat spreader, but it's hard to argue that one isn't getting more out of overclocking the Core i7-8700K in many tasks despite its higher temperatures. That fact should be part of the value consideration when setting out to overclock either chip.
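To put rough numbers on that SIMD gap, here's a sketch (my own arithmetic, not an AMD or Intel figure) that counts peak FP32 FLOPs per core per cycle from the vector-unit counts and widths mentioned above, assuming each lane can retire one fused multiply-add (two FLOPs) per clock:

```python
# Peak FP32 FMA throughput per core per cycle, derived from the vector-unit
# configurations described in the text. Assumes fused multiply-add counts
# as two FLOPs per lane per clock.

def flops_per_cycle(units, width_bits, flops_per_lane=2):
    lanes = width_bits // 32  # FP32 lanes per vector unit
    return units * lanes * flops_per_lane

skylake = flops_per_cycle(units=2, width_bits=256)  # two 256-bit units
zen     = flops_per_cycle(units=2, width_bits=128)  # two 128-bit units

print(skylake, zen, skylake / zen)  # 32 16 2.0
```

On paper, then, a Skylake-class core can sustain twice the FP32 SIMD throughput of a Zen core at the same clock, which helps explain why an AVX torture test like Prime95 generates so much more heat on the Intel part.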
Whether Intel is doing the best it can to support overclocking on its chips through its thermal interface material of choice is another question, and it's one that's raged since Ivy Bridge and coursed through Devil's Canyon. Coffee Lake doesn't do anything to quench the flames. Folks seeking the lowest load temperatures and highest possible overclocking headroom from Coffee Lake chips will likely need to reach for liquid-metal TIM, their delid tool of choice, and a hefty liquid cooler or giant tower heatsink. At stock speeds, though, the i7-8700K should be fine with the same Cooler Master Hyper 212 Evo that's graced countless systems. Just be sure to terminate any multi-core enhancement settings in your motherboard's firmware with extreme prejudice first.

Spitballing the performance of AMD's Radeon Vega Frontier Edition graphics card
AMD's Radeon Vega Frontier Edition reveal yesterday provided us with some important pieces of the performance puzzle for one of the most hotly-anticipated graphics chips of 2017. Crucially, AMD disclosed the Frontier Edition card's pixel fill rate and some rough expectations for floating-point throughput—figures that allow us to make some educated guesses about Vega's final clock speeds and how it might stack up to Nvidia's latest and greatest for both gaming and compute performance.
Dollars and sense
Before we dive into my educated guesses, though, it's worth mulling over the fact that the Vega Frontier Edition is launching as a Radeon Pro card, not a Radeon RX card. As Ryan Smith at Anandtech points out, this is the first time AMD is debuting a new graphics architecture aboard a professional-grade product. As its slightly green-tinged name suggests, AMD's Frontier Edition strategy roughly echoes how Nvidia has been releasing new graphics architectures of late. Pascal made its debut aboard the Tesla P100 accelerator, and the market's first taste of Nvidia's Volta architecture will be aboard a similar product.
These developments suggest that whether they bleed red or green, gamers may have to accept the fact that they aren't the most important market for these high-performance, next-gen graphics chips any longer.
Though gamers might feel disappointed after yesterday's reveal, this decision makes good business sense. As I mused about on Twitter a few days ago, it doesn't make any sense for the company to sell Vega chips on Radeon RX cards just yet when there's strong demand for this GPU's compute power elsewhere. In turn, AMD can ask much more money for Vega compute accelerators than it can for the same chip aboard a Radeon gaming card. Yesterday's Financial Analyst Day made it clear that AMD is acutely aware of the high demand for GPU compute power right now, especially for machine learning applications, and it wants as big a piece of that pie as it can grab.
Radeon Technologies Group head Raja Koduri put some numbers to this idea at the company's analyst day by pointing out that the high end of the graphics card market could represent less than 15% of the company's sales volume, but potentially as much as 66% of its margin contribution (i.e., profit). Nvidia dominates the high-end graphics card market regardless of whether one is running workstation graphics or datacenter GPU computing tasks, and AMD needs to tap into the demand from these markets as part of its course toward profitability. Radeon RX products might make the most noise in the consumer graphics market, but Vega compute cards could make the biggest bucks for AMD, so it only makes sense that the company is launching the Frontier Edition (and presumably the Radeon Instinct MI25) into the very highest end of the market first.
Sizing up Vega
Now, let's talk some numbers. AMD says the Vega GPU aboard the Frontier Edition will offer about 13 TFLOPS of FP32 and about 25 TFLOPS of FP16 performance, as well as a pixel fill rate of 90 Gpixels/s. AMD also says the chip will have 64 compute units and 4096 stream processors, and that FP32 TFLOPS figure suggests a clock speed range of about 1450 MHz to 1600 MHz. I propose this range because AMD seems to have used different clock rates to calculate different peak throughput rates. I'm also guessing that the Vega chip in this card has 64 ROPs, given the past layout of GCN cards and the way the numbers have to stack up to reach that 90 Gpixels/s figure.
| Card | Base clock (MHz) | Boost clock (MHz) | ROPs | Texture units | Shader processors | Memory bus width (bits) | Memory bandwidth | Memory size | TDP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GTX 970 | 1050 | 1178 | 56 | 104 | 1664 | 224+32 | 224 GB/s | 3.5+0.5 GB | 145 W |
| GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256 | 224 GB/s | 4 GB | 165 W |
| GTX 980 Ti | 1002 | 1075 | 96 | 176 | 2816 | 384 | 336 GB/s | 6 GB | 250 W |
| Titan X (Maxwell) | 1002 | 1075 | 96 | 192 | 3072 | 384 | 336 GB/s | 12 GB | 250 W |
| GTX 1080 | 1607 | 1733 | 64 | 160 | 2560 | 256 | 320 GB/s | 8 GB | 180 W |
| GTX 1080 Ti | 1480 | 1582 | 88 | 224 | 3584 | 352 | 484 GB/s | 11 GB | 250 W |
| Titan Xp | 1480? | 1582 | 96 | 240 | 3840 | 384 | 547 GB/s | 12 GB | 250 W |
| R9 Fury X | --- | 1050 | 64 | 256 | 4096 | 1024 | 512 GB/s | 4 GB | 275 W |
| Vega Frontier Edition | ~1450? | ~1600? | 64? | 256? | 4096 | ??? | ~480 GB/s | 16 GB | ??? |
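As a sanity check on that clock-speed guess, one can work backward from AMD's quoted peak rates. This sketch assumes 4096 shader processors doing two FP32 FLOPs per clock, double-rate FP16, and 64 ROPs, per the reasoning above:

```python
# Implied Vega clocks from AMD's quoted peak throughput figures. All of the
# structural assumptions here (SP count, double-rate FP16, ROP count) come
# from the guesses laid out in the text.

SPS, ROPS = 4096, 64

fp32_clock = 13e12 / (SPS * 2)  # ~1.59 GHz implied by 13 TFLOPS FP32
fp16_clock = 25e12 / (SPS * 4)  # ~1.53 GHz implied by 25 TFLOPS FP16
fill_clock = 90e9 / ROPS        # ~1.41 GHz implied by 90 Gpixels/s

for name, hz in [("FP32", fp32_clock), ("FP16", fp16_clock), ("fill", fill_clock)]:
    print(f"{name}: {hz / 1e6:.0f} MHz")
```

The scatter across those three implied clocks is exactly why a range, rather than a single figure, seems like the safest read of AMD's numbers.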
Regardless, that clock-speed range and the resulting numbers suggest that AMD will meet or exceed its compute performance targets for its first Vega products. The company touted a 25 TFLOPS rate for FP16 math when it previewed the Radeon Instinct MI25 card, and the Vega Frontier Edition could potentially top that already-impressive figure with 26 TFLOPS or so at the top of its hypothetical clock range. Assuming those numbers hold, the raw compute capabilities of the Vega FE for some types of math will top even the beastly Quadro GP100, Nvidia's highest-end pro graphics card at the moment. These are both high-end pro cards with 16GB of HBM2 on board, so it's not far-fetched to compare them.
| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic (TFLOPS) |
| --- | --- | --- | --- | --- |
| Asus R9 290X | 67 | 185/92 | 4.2 | 5.9 |
| Radeon R9 295 X2 | 130 | 358/179 | 8.1 | 11.3 |
| Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 |
| GeForce GTX 780 Ti | 37 | 223/223 | 4.6 | 5.3 |
| Gigabyte GTX 980 Windforce | 85 | 170/170 | 5.3 | 5.4 |
| GeForce GTX 980 Ti | 95 | 189/189 | 6.5 | 6.1 |
| GeForce GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| GeForce GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| GeForce GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| GeForce Titan Xp | 152 | 343/343 | 9.2 | 11.0 |
| Vega Frontier Edition | ~90-102? | 410?/205? | 6.4? | 13.0 |
Taking AMD's squishy numbers at face value, the 25 TFLOPS of FP16 the Vega FE claims to offer will top the Quadro GP100's claimed 20.7 TFLOPS of FP16 throughput. In turn, AMD claims the Vega FE can deliver about 26% higher FP32 throughput than the Quadro GP100: 13 TFLOPS versus 10.3 TFLOPS. The GP100 might deliver higher double-precision math rates, but we can't compare the Vega FE card's performance on that point because AMD hasn't said a word about Vega's FP64 capability. Even so, the $8900 price tag of the Quadro GP100 gives AMD plenty of wiggle room to field a competitor in this lucrative market, and it seems the performance will be there to make Vega a worthy compute competitor (at least until Volta descends from the data center).
The things we still don't know about the Vega chip in the Frontier Edition are facts most relevant to the chip's gaming performance. AMD hasn't talked in depth about the texturing capabilities or geometry throughput of the Vega architecture yet, but it's simply too tantalizing not to guess at how this Vega chip will stack up given its seeming family resemblance to Fiji cards. Beware: wild guesses ahead.
Assuming Vega maintains 256 texture units and GCN's half-rate throughput for FP16 textures (and this is a big if), the card might deliver as much as 410 GTex/s for int8 textures and 205 GTex/s for bilinear fp16 filtering. For comparison, the GTX 1080 can deliver full throughput for both types of texturing. Even so, that card tops out at 277 GTex/s for both int8 and fp16 work. The Vega FE's impressive texture-crunching capabilities might be slightly tempered by that 90 GPix/s fill rate, which slightly trails even the GTX 1070's theoretical capabilities.
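Those texture-rate guesses amount to simple multiplication. A quick sketch, using the top of my hypothetical clock range and the unconfirmed 256-TMU count, looks like this:

```python
# Hypothetical Vega FE texture filtering rates. Both the 256-TMU count and
# the 1.6 GHz clock are guesses from the text, not confirmed AMD specs.

TMUS, CLOCK_GHZ = 256, 1.6

int8_gtex = TMUS * CLOCK_GHZ  # one int8 texel per TMU per clock
fp16_gtex = int8_gtex / 2     # GCN filters fp16 textures at half rate

print(int8_gtex, fp16_gtex)   # 409.6 204.8
```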
Either way, none of these dart throws suggest the eventual RX Vega will have what it takes to unseat the GeForce GTX 1080 Ti atop the consumer graphics-performance race, as some wild rumors have postulated recently. I'm willing to be surprised, though. We also can't account for the potential performance improvements from Vega's new primitive shader support or its tile-based Draw Stream Binning Rasterizer, both of which could mitigate some of these theoretical shortcomings somewhat.
All of those guesses square pretty nicely with my seat-of-the-pants impressions of Vega's gaming power during AMD's demo sessions, where the card delivered performance that felt like it was in the ballpark with a GeForce GTX 1080. I gleaned those impressions from AMD demo darling Doom, of course, and other games will perform differently. It's also possible that the Radeon RX Vega will use a different configuration of the Vega GPU, so AMD Vega FE numbers may not be the best starting point. Still, if it's priced right, the Radeon RX Vega could be the high-end gaming contender that AMD sorely needs. We'll have to see whether my guesses are on the mark or wide of the mark when Radeon RX Vega cards finally appear.
This article initially speculated, without sourcing, that AMD would include 4096 SPs on the Vega FE GPU. The company did, in fact, confirm that the Vega GPU on this card would include 4096 SPs on a separate product page that I overlooked. While this new information does not affect any of the guesses put forth in this piece, I do regret the error, and the piece has been updated to include numbers from AMD's official specs.

Space Pirate Trainer's beta update turns it into a more strategic VR shoot-'em-up
Space Pirate Trainer's early-access release, like many fun games, has a simple premise. Put on the HTC Vive, pick up its controllers, and you're standing in the shoes of a star-blazing outlaw who's perched on a landing pad high above a moody urban landscape. Waves of flying killer droids are coming for you.
All that stands between you and becoming a cloud of space dust are a pair of multi-purpose pistols that double as energy shields. Good luck, and earn as many points as you can by blowing stuff up. Take three hits from the opposing force, though, and you're done for.
When I first picked up Space Pirate Trainer, its potential as a great VR title was immediately evident. The fact that you're on an open platform only gets more fun with a larger play area for the Vive, since that extra space means you have more room to jump, duck, and dodge—and make no mistake, you will be moving around a lot with this title. The twin pistols offer some fun alternate-fire modes that require the player to think about the amount of energy they have on hand instead of holding down the trigger. Each wave of drones represents a real challenge, too: whatever force is sending them against you in Space Pirate Trainer really wants you dead after the first few. I've gotta admit that I eventually got bored of the game, though. Its weapons all felt rather samey after many replays, and I honestly wasn't good enough to make a whole lot of progress past the first few waves of attackers.
Space Pirate Trainer's just-released beta takes that potential and fleshes it out. The most noticeable change is a pair of new weapons that give players fresh tools for dealing with enemy attacks. A shotgun and a remote-detonated grenade launcher offer a couple of new ways of dealing with drones at closer ranges and in more formations. Those new weapons demand more strategic thinking than the single-shot, automatic laser burst, continuous laser, and charged-shot modes of the twin pistols ever did, and figuring out which weapon you need in the heat of battle requires quick reflexes, too. Those new challenges give SPT's beta a feeling of enduring freshness that the first Early Access release didn't have.
The fresh thinking in SPT's beta doesn't end with things that go pew-pew, either. Flip a Vive controller over your shoulder and bring it back, and the energy shield from the game's initial release greets you with a new design and a fancy deployment animation that forces you to think ahead a bit. The shield used to come out fully deployed, but the new animation adds a half-second or so where the player remains vulnerable to attacks. If you want to shift from an offensive to defensive approach in the beta, there's a real cost to doing so. Choose carefully.
Those hand-wielded shields aren't purely defensive in the SPT beta, though. Swipe to the right on the Vive trackpad, and the shield turns into a spiky club-lightsaber-tractor-beam fusion that can be used to grab drones at range and bring them in close for death by blunt force.
That tractor beam can also turn the club into a kind of drone-mace, too. Grab a drone with the beam, and you can swing your victim back into the battlefield or use it to deflect incoming fire. This mode of attack doesn't feel particularly precise to me right now, but I get the sense that it might be quite deadly with practice. All the more reason to suit up as a space pirate time and time again.
Survive a wave of drones in this beta, and the game might reward you with one of a variety of new power-ups. You might get a machine-gun mode for your pistols, a gravity vortex that traps drones in a particular spot for easy dispatching, a shield dome that allows you to blast away with impunity, or homing missiles that do exactly what they say on the tin. These power-ups offer pretty sweet advantages, but the weapon-specific ones carry one minor downside: you'd best be sure you want that particular upgrade for the ten seconds or so it lasts, because there's no switching away once it's in action.
This game still exposes some of the limits of current-gen VR headsets. If an enemy drone gets too far away, for example, it turns into an indistinct blob that's frustratingly difficult to target, since your laser sights are only visible out to a certain distance. Forget reading any scores or other text associated with a drone at long distances, too. Unless you really tighten up the head straps, it's possible to end up with the Vive in a less-than-ideal position on your face, as well, since dropping to the floor and jumping from side to side can cause the Vive's bulk to shift rather easily. These minor issues don't take away from what's otherwise an exhilarating experience, though.
Space Pirate Trainer is $11.24 on Steam right now, and that deal lasts until Thursday at 4 PM Pacific. If you somehow own a Vive and don't already have a copy of this game, it's a no-brainer to pick it up for that price. Few developers have grokked what it means to make a good VR title as well as Space Pirate Trainer's have, and this is one game that really feels like it wouldn't be possible in any other medium. I'm now excited to revisit it every time I strap on the Vive. Even at its $14.99 regular price, Space Pirate Trainer is essential for any Vive owner's library.
The author wrote this review using a copy of Space Pirate Trainer purchased for his personal account on Steam.

Re-examining the unusual frame time results in our Radeon RX 470 review
It's never fun to admit a mistake, but we made a big one while writing our recent Radeon RX 470 review. That piece was our first time out on a new test rig that included an Intel Core i7-6700K CPU and an ASRock Z170 Extreme7+ motherboard. Once we got that system up and running, it delivered some weird-looking frame time numbers with some games. For example, the spikiness of the frame-time plot below didn't match any test data we had ever gathered before for Grand Theft Auto V, and we puzzled over those strange results for some time. We decided to go ahead and publish them anyway after doing some extended troubleshooting without seeing any improvement.
Compare that to a more typical GTA V result from our Radeon RX 480 review, as demonstrated by three GeForce cards running on an X99 testbed:
The spikiness caused by what turned out to be high deferred procedure call (or DPC) latency didn't seem to affect average framerates much (save one major exception), but it did worsen our 99th-percentile frame time numbers considerably. Given how much we use those 99th-percentile numbers in formulating other parts of our conclusions, especially value plots, the error introduced this way had considerable negative effects on the accuracy of several key parts of our review. The net effect of this error led us to wrongly conclude that the Radeon RX 480 8GB and the Radeon RX 470 were closely matched in performance, when in fact they're quite different.
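That dynamic is easy to demonstrate with a little arithmetic: a handful of large frame-time spikes barely moves an average-FPS figure, but it drags the 99th-percentile frame time up sharply. Here's a minimal sketch using hypothetical frame-time data, not our actual captures:

```python
# 99th-percentile frame time: the frame time that 99% of all rendered
# frames come in under. Spikes that barely dent average FPS dominate it.
def percentile_99(frame_times_ms):
    ordered = sorted(frame_times_ms)
    idx = int(len(ordered) * 0.99)          # first frame in the slowest 1%
    return ordered[min(idx, len(ordered) - 1)]

smooth = [16.7] * 990                       # steady ~60-FPS frames
spiky = smooth + [50.0] * 10                # same run plus 1% spikes to 50 ms

print(percentile_99(smooth))                # 16.7
print(percentile_99(spiky))                 # 50.0 -- the spikes set the metric

avg_fps = 1000 * len(spiky) / sum(spiky)
print(f"{avg_fps:.1f} avg FPS")             # only ~1 FPS below the smooth run
```

This is why a DPC-latency problem that spikes a small fraction of frames can leave average FPS looking normal while wrecking the 99th-percentile figure, and everything downstream of it.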
Upon reflection, we should have stopped the RX 470 review at that point to try and figure out exactly what was going on, but the pressure of deadlines got the better of us. When these weird frame-time plots appeared once more in the preliminary testing work for our review of the Radeon RX 460, however, we had to acknowledge that something unusual was going on. Of course, that also meant that our published RX 470 review had problems that we had to deal with. We believe that in the interest of full transparency, it's important to explain exactly what happened, be clear about how we messed up, and resolve not to make the same mistakes in the future.
While I was testing the Radeon RX 460 and tearing my hair out over the wild frame time plots our cards were generating, TR code wizard Bruno "morphine" Ferreira brought the possibility of high DPC latency to my attention. DPC latency is usually of interest to musicians running digital audio workstations, where consistently minimal input lag is critical. Bruno pointed me to the LatencyMon app, which keeps an eye on DPC performance and warns of any issues, as a way of figuring out whether DPC latency was the root cause of this problem.
I didn't capture any screenshots during my frenzied troubleshooting, but LatencyMon did show that our test rig wasn't servicing DPC requests promptly. Wireless networking drivers are generally considered the first place to look when troubleshooting DPC issues, and I use an Intel Wireless-N 6205 card in our test system. Oops. Even after disabling that wireless card, however, the issue persisted. After killing every potential bit of software that might have been causing the problem without getting any improvements, I took Bruno's suggestion of updating our motherboard's firmware. "The BIOS can't possibly be the cause!" I thought to myself smugly.
Pride goeth before a fall, of course, and the DPC latency issue vanished with the new ASRock firmware installed. The frame-time plots for GTA V began to resemble the flat, tight lines we've come to expect with modern hardware. I had to quash the urge to drive over the motherboard a few times and burn the remains before coming to grips with the fact that I would have to throw out large amounts of testing data.
So what happened? You see, ASRock sent us its Z170 Extreme7+ board during a brief period in which the company was promoting its ability to overclock locked Intel CPUs with a beta BIOS. I had hoped to explore that possibility with some ASRock motherboards and cheap CPUs, but Intel swiftly put the kibosh on the concept. We got busy with other work, and the beta firmware remained on the motherboard through our first attempts to test graphics cards with it. I don't know precisely what in that beta firmware was wreaking havoc on DPC latency, but updating to the release firmware fixed the problem, so Occam's razor points squarely at the beta BIOS.
Having solved the underlying problem, I now had to contend with the fact that I had published a very public and widely-read review that contained what seemed like reams of contaminated data. To see just how wrong I had been in my conclusions, I retested every title we had slated for our RX 470 and RX 460 reviews on our ASRock test rig, using the same settings we had initially chosen for our reviews.
As it turns out, high DPC latency doesn't affect every game equally, or at least not in a way that shows up in our frame-time numbers. While GTA V, Hitman, and Rise of the Tomb Raider all showed significant changes in average FPS and 99th-percentile frame times after a retest on the updated hardware, Doom, Crysis 3, and The Witcher 3 did not. That second trio of games certainly felt more responsive to input after the critical firmware update, but the data they generated wasn't meaningfully different. We're talking fractions of milliseconds of difference in before-and-after testing, and those deltas are almost certainly imperceptible in real-life gaming. Given that behavior, we're confident that the numbers we generated for Doom, Crysis 3, and The Witcher 3 are representative of the performance of the Radeon RX 460, the Radeon RX 470, and the other cards we tested in those reviews.
Given the large differences in performance we saw with GTA V, Hitman, and RoTR, the only acceptable way to fix our mistake was to retest all of the cards in our Radeon RX 470 review from the ground up with those games. We've done that now, and as a bonus, we did that retesting with the same data-collection methods we just premiered in our RX 460 review.
As a result, we now have Doom Vulkan and OpenGL numbers for the RX 470 and RX 480, plus DirectX 12 numbers for Rise of the Tomb Raider and Hitman. We've also extensively re-written the conclusion of our Radeon RX 470 review to account for this new data, much in the same way that we crunched our results for the Radeon RX 460 and friends. We've accounted for the differences in our results there, so I'd encourage you to go read up on what changed.
If you read our original Radeon RX 470 review, we're deeply sorry to have misinformed you. We also extend our sincerest apologies to everybody at AMD and Nvidia for presenting incorrect information and misguided conclusions about their products. In the future, we'll strive to be both correct and swift with our reviews, but we also won't hesitate to delay a piece when clear warning flags are evident. We hope this clarification reinforces your trust in The Tech Report's reporting. If you have any questions or concerns, please leave a comment below or email me directly.