How much does screen size matter in comparing Ryzen Mobile and Kaby Lake-R battery life?
— 1:15 PM on November 30, 2017

As we've continued testing AMD's Ryzen 5 2500U APU over the past few days, we've been confronted with the problem of comparing battery life across laptops with different screen sizes. Many readers suggested that I should take each machine's internal display out of the picture by hooking them up to external monitors. While I wanted to get real-world battery-life testing out of the way first, I can certainly appreciate the elegance of leveling the playing field that way. Now we have.

Before we get too deeply into these results, I want to point out loudly and clearly that these numbers are not and will never be representative of real-world performance. Laptop users will nearly always be running the internal displays of their systems when they're on battery, and removing that major source of power draw from a mobile computer is an entirely synthetic and artificial way to run a battery life test. We're also still testing two different vendor implementations of different SoCs, and it's possible that Acer's engineers might have some kind of magic that HP's don't (or vice versa). Still, for folks curious about platform performance and efficiency, rather than the more real-world system performance tests we would typically conduct, these results might prove interesting.

To give this approach a try, I connected both the Envy x360 and the MX150-powered Acer Swift 3 to 2560x1440 external monitors running at 60 Hz using each machine's HDMI output. I then configured each system to show a display output on the external monitor only and confirmed that both laptops' internal displays were 100% off. After those preparations, I ran our TR Browserbench web-browsing test until each machine automatically shut off at 5% battery before recording their run times.

As we'd expect, both machines' battery life benefits from not having to power an internal display. Counter to our expectations, though, the Envy x360 doesn't actually seem to spend a great deal of its power budget on running its screen. Freed from driving its internal display, the Envy gained only 53 minutes, or 15%, of web-browsing time. The MX150-powered Acer, on the other hand, gained a whopping five hours of battery life when we removed its screen from the picture. I was so astounded by that result that I retested the Envy to ensure that a background process or other anomaly wasn't affecting battery life, but the HP machine repeated its first performance.

We can take battery capacity out of the efficiency picture for this light workload by dividing minutes of run time by the capacity of the battery in watt-hours. This approach gives us a normalized "minutes per watt-hour" figure that should be comparable across our two test systems. HWiNFO64 reports that the Envy x360 has a 54.8 Wh battery, and since it's brand-new, a full charge tops up that battery completely. Using the technique described above, we get 7.8 minutes of run time per watt-hour from the HP system.

The Acer Swift 3 I got from Intel appears to have been a test mule at some point in its life. HWiNFO64 reports that the Swift 3 has already lost 10% of its battery capacity, from 50.7 Wh when it was new to 45.7 Wh now. In this measure of efficiency, though, that capacity decrease actually helps the Swift 3. The system posts a jaw-dropping 19 minutes of run time per watt-hour for light web browsing, or a 2.4-times-better result.
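
For those who want to check the math, here's a quick sketch of that normalization. The external-monitor run times aren't quoted directly above, so I've reconstructed them from the gains over each machine's internal-display result; treat them as approximations rather than exact figures.

```python
# Rough sketch of the minutes-per-watt-hour normalization described above.
# Run times are approximations reconstructed from the gains quoted in this
# article; battery capacities are the HWiNFO64 readings.

def minutes_per_wh(run_time_minutes, capacity_wh):
    """Normalize run time by usable battery capacity."""
    return run_time_minutes / capacity_wh

# HP Envy x360 (Ryzen 5 2500U): ~7 hours 5 minutes on an external display, 54.8 Wh
envy = minutes_per_wh(7 * 60 + 5, 54.8)     # ~7.8 min/Wh

# Acer Swift 3 (Core i5-8250U + MX150): ~14.5 hours on an external display, 45.7 Wh usable
swift = minutes_per_wh(14.5 * 60, 45.7)     # ~19 min/Wh

print(f"Envy x360: {envy:.1f} min/Wh, Swift 3: {swift:.1f} min/Wh, "
      f"ratio: {swift / envy:.1f}x")        # roughly 2.4-2.5x, depending on rounding
```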

Although this is a staggering difference, I emphasize that it's not representative of performance in the real world. With its internal display in the picture, the Optimus-equipped Swift 3 posted nine and a half hours of run time in our i5-8250U review, or only about half again as long as the Envy's six hours and 12 minutes. Drop the MX150 from the equation, and the IGP-only Swift 3s and their 10.5 hours of battery life run about 67% longer than the Envy. Those are only rough assessments of platform potential, given that we aren't normalizing for battery capacity or screen size. Still, Ryzen Mobile systems might have a ways to go to catch Intel in the battery-life race. The blue team has been obsessed with mobile power management for years, and technologies like Speed Shift are just the latest and most visible results of those efforts.

In any case, it's clear that there are a lot of moving parts behind the battery life of these systems. I've repeatedly cautioned that it's early days for both drivers and firmware for the Ryzen 5 2500U, and it's possible that future refinements will close this gap somewhat. Benchmarking a similar Intel-powered system from HP might also help even the field, given the research I did for my first examination of the Ryzen-powered Envy x360's battery life. (If you'd like to help with that project, throw us a few bucks, eh?) Still, if you favor battery-sipping longevity over convertible versatility and raw performance, it seems like the Envy x360 requires a compromise that our GeForce-powered Acer Swift 3 doesn't. Stay tuned for more battery-life testing soon.


Here's a first look at the battery life of HP's Ryzen-powered Envy x360
— 8:15 AM on November 29, 2017

My initial tests of AMD's Ryzen 5 2500U APU gave us a fine picture of the APU's performance, but we admittedly didn't test battery life in that article. Part of the reason for that omission was to avoid drawing unfair comparisons between the 15.6" HP Envy x360 that plays host to the Ryzen 5 2500U and the 14" Acer Swift 3 machines we used to represent Intel's Core i5-8250U.

Although the jump in screen size might not sound large on paper, the practical effects of the Ryzen system's bigger screen on battery life are likely quite significant. For one thing, getting the same light output from a bigger panel requires more power, and the HP system also has a pen digitizer in its LCD panel with unclear power-management characteristics.

So long as we keep those caveats in mind, though, we can at least offer a basic picture of how long the Ryzen 5 2500U lets the Envy x360 run on battery using a couple of different tests. The one quirk of our test rig compared to the $750 default configuration you'd get off the shelf at Best Buy is the Samsung 960 EVO 500GB SSD we're using as our system drive. A hard-drive-only Envy might run for less time, though Windows' power-management features should generally take the mechanical hard drive out of the equation when it's not in active use.

To run these tests, I set the Envy's screen brightness to 50% and left Windows on its default Balanced power plan. The only changes we made to that default configuration involved disabling the operating system's Battery Saver safeguard and forcing the screen to remain on over the entire course of the test.

First off, I ran the Envy's battery down with TR's Browserbench. This benchmark runs a loop of an older version of our home page with plenty of Flash content sprinkled in, along with some cache-busting code to make for more work on the test system. Browserbench is getting up there in years, but it is repeatable and still offers a decent proxy for light web use. We're working on a new version of Browserbench that runs through a range of real-world web sites, but for now, the old version will have to do.

When the Envy shut off automatically with 5% remaining in its juice pack, it registered six hours and 12 minutes of battery life under Browserbench. That's well short of HP's claimed 11-hour battery life figure, but it would at least get you from New York to Los Angeles on in-flight Wi-Fi. To see just how that figure stacks up among similar machines, I scoured the web for reviews of comparable PCs.
 
Reviews of Envy x360s with Intel eighth-gen processors inside remain scarce, but I did find that an Intel Kaby Lake-powered Envy x360 with a configuration and battery similar to that of our Ryzen system turned in six hours of web-browsing battery life for the folks at Laptop Mag. That test suggests Ryzen Mobile could be delivering competitive battery life against similar Intel systems, but it's hard to say just how competitive the Ryzen 5 2500U is without more in-depth (and possibly less representative) directed testing on our part.

Web browsing isn't the only use of a machine on the go, of course. To test video playback, I set up Windows 10's Movies and TV app to loop the 1920x1080, 55 Mbps H.264 version of the Jellyfish reference video until the Envy's battery died. I confirmed that Movies and TV was firing up the GPU's video decode engine using the GPU-monitoring tools in the latest version of Windows 10's Task Manager before letting the test run. Incidentally, here's a full accounting of the Ryzen APU's video-decoding capabilities, as ferreted out by DXVA Checker:

After displaying four hours and 37 minutes of pulsating sea life, the Envy x360 went dark once more. That result parallels the four-hour-and-32-minute run time achieved by the folks at HotHardware in their video playback testing, although the site noted that it had to run its x360's screen at 100% brightness to match the output level of its other laptops. While I didn't run our Envy x360 so brightly, HotHardware's results still suggest our run time is in the right ballpark.

With these two tests in the bag, it seems like our 15.6" Ryzen system delivers only average battery life for its size class. I'd still caution against drawing too many comparisons between the Envy x360's battery life and that of other laptops at this stage, though. Implementation differences matter, and we don't know how Ryzen Mobile will behave in smaller and lighter systems. It's still early days for drivers and firmware, too. Our preliminary results and research suggest that Ryzen Mobile's battery life could be competitive with that of similar Intel systems, though, and that's as good a bit of news for AMD as its chip's well-rounded performance.


Just how hot is Coffee Lake?
— 5:12 PM on October 6, 2017

Update 10/6/2017 10:45 PM: This article originally stated that the Gigabyte Z370 Aorus Gaming 7 motherboard ships with "multi-core enhancement" enabled. The board in fact ships with the feature disabled. I deeply regret the error and have corrected the article accordingly.

Intel's Core i7-8700K has proved to be an exceptionally well-rounded CPU in our testing so far, but one potential negative has come up again and again in the other reviews I've been reading. Many reviewers have noted that the chip "runs hot," so much so that the idea even made for sub-headline news at one outlet. I was a bit confused reading these statements, because the i7-8700K didn't seem to be an exceptionally hot-running chip in my testing compared to other modern Intel CPUs. Although I ran into a thermal limit while trying to boost voltages enough to get our chip stable under a Prime95 AVX workload, running all of the chip's AVX units at 4.8 GHz was no small feat, and we expect high temperatures as a matter of course from unmodified Intel CPUs when they're overclocked.

Still, the TR water-cooler meeting this morning produced an interesting line in the sand for whether a chip is difficult to cool: can it be held in check by Cooler Master's evergreen Hyper 212 Evo? That $30 tower remains a fine bang-for-the-buck contender among CPU heatsinks, so it's a natural baseline for establishing whether a chip is tough to keep frosty. I don't have a Hyper 212 Evo here, but I do have Cooler Master's MasterAir Pro 4, a 120-mm tower that's basically the same heatsink as a Hyper 212 Evo with a newer fan design. It was simple enough to see whether the Core i7-8700K fell on the right or wrong side of the MasterAir Pro 4's cooling power, so I popped off the 280-mm Corsair H115i that usually cools our test chips and set up the MasterAir Pro 4 in its place.

First off, it's worth defining what "hot" means in the context of the i7-8700K. Intel's Tjunction specification for this chip remains the same 100° C it's been for Skylake and Kaby Lake K-series CPUs. Hit that temperature, and the i7-8700K will begin to throttle. We obviously want to stay as far below that threshold as possible, but it establishes an upper limit for what a "bad" temperature might be for the chip.

With that in mind, I ran the Prime95 Small FFTs torture test at stock speeds to establish a baseline for the chip's thermal behavior. Prime95 hammers a chip's AVX units in a way that's meant to produce the most heat possible, well beyond what any real-world workload might generate. Gigabyte's Easy Tune utility reported that the chip was running at 4.3 GHz—its normal all-core Turbo speed—at 1.1V under this synthetic load. With the MasterAir Pro 4 on top, those clocks and voltages resulted in a CPU package temperature of 78° C, according to HWiNFO64.

Those numbers are certainly warm for a stock-clocked LGA 1151 CPU, but it's worth remembering that we're now asking the cooler to wrangle six cores and 12 threads instead of four cores and eight threads. That's entry-level high-end desktop territory, so slightly higher temps than we're used to should be par for the course. In any case, the stock-clocked i7-8700K proved perfectly happy under our Hyper 212 Evo stand-in.

Next up, I tried to run the chip with Gigabyte's "multi-core enhancement" turned on. This "enhancement" (happily left off by default, which is what the "Auto" setting means in the Z370 Aorus Gaming 7's firmware) runs all six cores of our i7-8700K at the single-core Turbo Boost speed, or 4.7 GHz. We vigorously search out and disable these kinds of settings for every CPU review we do, since they amount to overclocking. Other sites may not, and that's not ideal. Not only do these settings ruin any sense of what "stock" performance is from a given processor, they place the same demands on heatsinks as an equivalent overclock would.

I know that's stating the obvious, but we've had bad experiences with these "performance-enhancing" tweaks in the past when they've goosed many-core chips like the Core i7-6950X, and they're sometimes on by default in firmware from Gigabyte and Asus, at least. Readers and YouTube-watchers should be asking whether reviewers explicitly went to the effort to turn off these features before making sweeping conclusions about a chip's power consumption, heat production, performance, and efficiency.

We are glad that Gigabyte's Z370 firmware makes the correct choice with regard to multi-core enhancement behavior, though, and we hope other motherboard brands have followed or will follow suit.

Regardless, I fired up our system in this state and cued up Prime95 Small FFTs again. The chip proceeded to throttle on several cores with a 1.308V Vcore (a difficult figure to monitor given the plunging core clocks, but I tried). That throttling meant the chip was running into its 100° C Tjunction limit on some cores, so the motherboard's automatic voltage control is probably a tad too aggressive given my manual overclocking experience. I also tried running Blender with multi-core enhancement enabled, and while all of the cores got to around 89° C under that load, the chip didn't throttle. That result still suggests a Hyper 212 Evo-class cooler probably isn't sufficient for holding the overclocked i7-8700K in check, given how little headroom it offers.

This behavior shows why "multi-core enhancement" is undesirable: it's overclocking through and through, and it requires cooling to match. Builders who are buying heatsinks under the assumption they'll be facing all-core Turbo speeds of 4.3 GHz from the i7-8700K could be surprised if their motherboard tries to "help" by modifying Intel's factory Turbo Boost behavior. Our Gigabyte Z370 Aorus Gaming 7 test motherboard commendably ships with the feature disabled, but we'd imagine the feature could still catch both reviewers and builders alike off guard. We've been protesting this "feature" for years, and we'll continue to do so when it rears its head.

Finally, I tried the same manual overclock I achieved with our Corsair H115i liquid cooler: 5 GHz with a -2 AVX offset and a dynamic Vcore in a range of 1.284V to 1.296V. Under the MasterAir Pro 4, running Prime95 caused the chip to throttle, while Blender caused it to run in the low 90° C range. Considering that my overclock was pulling another 100 MHz from the chip's AVX units with only slightly less voltage, it's not a surprise that I got similar thermal results. Under these conditions, the chip definitely exceeds the informal "difficult to cool" barrier that we drew at the beginning of this article.

For comparison, Corsair's 280-mm H115i produced a 90° C package temperature and core temperatures ranging from about 84° C to 90° C using the same settings and voltages with Prime95 Small FFTs. Blender topped out our overclocked i7-8700K at about 80° C at the package. The H115i definitely reins in the i7-8700K if you're shooting for the ability to run Prime95 for hours, as one might want to do for extreme stability testing.

These are all rough benchmarks, but at the end of the day, Coffee Lake does seem to run hotter at stock speeds than the quad-core CPUs that have come before it. That's probably as it should be: there are two more cores and four more threads to deal with under the heat spreader. Builders planning to cool the chip at stock speeds should certainly be able to get away with an inexpensive cooler like a Hyper 212 Evo, but those hoping for a Prime95-stable overclock without a delid and repaste need to budget for a substantial liquid cooler. In that sense, the i7-8700K is no different than the Core i7-6700K and Core i7-7700K before it, and it's definitely harder to cool than AMD's Ryzen CPUs. AMD's chips all boast soldered heat spreaders, and metal is undeniably a better thermal transfer medium than paste.

The question of a paste-based TIM versus solder is almost certainly the largest variable in keeping Coffee Lake on ice relative to Ryzen CPUs, but I think there's more to it than that. First off, it's worth noting that Intel's implementation of AVX in the Skylake microarchitecture offers two 256-bit vector units per core, while the Zen architecture only offers two 128-bit-wide units per core. Skylake also has wider data paths that need more wires to implement, and that presumably means higher power usage when moving data around. When we run an intense AVX workload like Prime95, then, the stress test should unsurprisingly do more work, consume more power, and ultimately generate more heat on a chip that's capable of sustaining twice the SIMD throughput. It's certainly easier to cool an overclocked Ryzen CPU thanks to its soldered heat spreader, but it's hard to argue that one isn't getting more out of overclocking the Core i7-8700K in many tasks despite its higher temperatures. That fact should be part of the value consideration when setting out to overclock either chip.
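
To put rough numbers behind that claim, here's a minimal sketch of peak per-core FP32 throughput per clock, assuming two FMA-capable vector units per core in each design (a fused multiply-add counts as two floating-point operations per lane per cycle):

```python
# Peak FP32 operations per core per clock, assuming two FMA-capable vector
# units per core as described above. A fused multiply-add counts as two
# floating-point operations per 32-bit lane per cycle.

def fp32_flops_per_clock(vector_width_bits, units_per_core, flops_per_lane=2):
    lanes = vector_width_bits // 32          # 32-bit single-precision lanes per unit
    return lanes * units_per_core * flops_per_lane

skylake_core = fp32_flops_per_clock(256, 2)   # 32 FLOPs per cycle per core
zen_core     = fp32_flops_per_clock(128, 2)   # 16 FLOPs per cycle per core
```

Clock for clock, then, each Coffee Lake core can sustain roughly twice the SIMD throughput of a Zen core, which is consistent with Prime95 drawing more power and producing more heat on the Intel chip.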

Whether Intel is doing the best it can to support overclocking on its chips through its thermal interface material of choice is another question, and it's one that's raged since Ivy Bridge and coursed through Devil's Canyon. Coffee Lake doesn't do anything to quench the flames. Folks seeking the lowest load temperatures and highest possible overclocking headroom from Coffee Lake chips will likely need to reach for liquid-metal TIM, their delid tool of choice, and a hefty liquid cooler or giant tower heatsink. At stock speeds, though, the i7-8700K should be fine with the same Cooler Master Hyper 212 Evo that's graced countless systems. Just be sure to terminate any multi-core enhancement settings in your motherboard's firmware with extreme prejudice first.


Spitballing the performance of AMD's Radeon Vega Frontier Edition graphics card
— 10:31 AM on May 17, 2017

AMD's Radeon Vega Frontier Edition reveal yesterday provided us with some important pieces of the performance puzzle for one of the most hotly-anticipated graphics chips of 2017. Crucially, AMD disclosed the Frontier Edition card's pixel fill rate and some rough expectations for floating-point throughput—figures that allow us to make some educated guesses about Vega's final clock speeds and how it might stack up to Nvidia's latest and greatest for both gaming and compute performance.

Dollars and sense
Before we dive into my educated guesses, though, it's worth mulling over the fact that the Vega Frontier Edition is launching as a Radeon Pro card, not a Radeon RX card. As Ryan Smith at Anandtech points out, this is the first time AMD is debuting a new graphics architecture aboard a professional-grade product. As its slightly green-tinged name suggests, AMD's Frontier Edition strategy roughly echoes how Nvidia has been releasing new graphics architectures of late. Pascal made its debut aboard the Tesla P100 accelerator, and the market's first taste of Nvidia's Volta architecture will be aboard a similar product.


AMD's Vega Frontier Edition cards

These developments suggest that whether they bleed red or green, gamers may have to accept the fact that they aren't the most important market for these high-performance, next-gen graphics chips any longer.

Though gamers might feel disappointed after yesterday's reveal, this decision makes good business sense. As I mused on Twitter a few days ago, it doesn't make sense for the company to sell Vega chips on Radeon RX cards just yet when there's strong demand for this GPU's compute power elsewhere. After all, AMD can ask much more money for Vega compute accelerators than it can for the same chip aboard a Radeon gaming card. Yesterday's Financial Analyst Day made it clear that AMD is acutely aware of the high demand for GPU compute power right now, especially for machine learning applications, and it wants as big a piece of that pie as it can grab.

Radeon Technologies Group head Raja Koduri put some numbers to this idea at the company's analyst day by pointing out that the high end of the graphics card market could represent less than 15% of the company's sales volume, but potentially as much as 66% of its margin contribution (i.e., profit). Nvidia dominates the high-end graphics card market regardless of whether one is running workstation graphics or datacenter GPU computing tasks, and AMD needs to tap into the demand from these markets as part of its course toward profitability. Radeon RX products might make the most noise in the consumer graphics market, but Vega compute cards could make the biggest bucks for AMD, so it only makes sense that the company is launching the Frontier Edition (and presumably the Radeon Instinct MI25) into the very highest end of the market first.

Sizing up Vega
Now, let's talk some numbers. AMD says the Vega GPU aboard the Frontier Edition will offer about 13 TFLOPS of FP32 and about 25 TFLOPS of FP16 performance, as well as a pixel fill rate of 90 Gpixels/s. AMD also says the chip will have 64 compute units and 4096 stream processors, and that FP32 TFLOPS figure suggests a clock speed range of about 1450 MHz to 1600 MHz. I propose this range because AMD seems to have used different clock rates to calculate different peak throughput rates. I'm also guessing the Vega chip in this card has 64 ROPs, given the past layout of GCN cards and the way the numbers have to stack up to reach that 90 Gpixels/s figure.
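
Here's a rough sketch of how those guesses shake out, taking AMD's quoted figures at face value and assuming 4096 stream processors at two FLOPS per clock, packed FP16 math at twice the FP32 rate, and 64 ROPs:

```python
# Back-of-the-envelope clock estimates derived from AMD's quoted throughput
# figures. The shader-processor count is AMD's; the 64-ROP count and the
# two-FLOPs-per-clock/packed-FP16 assumptions are my guesses.

fp32_tflops = 13.0      # AMD's quoted FP32 figure
fp16_tflops = 25.0      # AMD's quoted FP16 figure
fill_gpix_s = 90.0      # AMD's quoted pixel fill rate
sps, rops   = 4096, 64

clock_from_fp32 = fp32_tflops * 1e12 / (sps * 2)   # ~1.59 GHz
clock_from_fp16 = fp16_tflops * 1e12 / (sps * 4)   # ~1.53 GHz
clock_from_fill = fill_gpix_s * 1e9 / rops         # ~1.41 GHz

# At the top of that range, packed FP16 math would exceed AMD's 25-TFLOPS claim:
fp16_at_1600mhz = sps * 4 * 1.6e9 / 1e12           # ~26.2 TFLOPS
```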

| | GPU base clock (MHz) | GPU boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size | Peak power draw |
|---|---|---|---|---|---|---|---|---|---|
| GTX 970 | 1050 | 1178 | 56 | 104 | 1664 | 224+32 | 224 GB/s | 3.5+0.5 GB | 145 W |
| GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256 | 224 GB/s | 4 GB | 165 W |
| GTX 980 Ti | 1002 | 1075 | 96 | 176 | 2816 | 384 | 336 GB/s | 6 GB | 250 W |
| Titan X (Maxwell) | 1002 | 1075 | 96 | 192 | 3072 | 384 | 336 GB/s | 12 GB | 250 W |
| GTX 1080 | 1607 | 1733 | 64 | 160 | 2560 | 256 | 320 GB/s | 8 GB | 180 W |
| GTX 1080 Ti | 1480 | 1582 | 88 | 224 | 3584 | 352 | 484 GB/s | 11 GB | 250 W |
| Titan Xp | 1480? | 1582 | 96 | 240 | 3840 | 384 | 547 GB/s | 12 GB | 250 W |
| R9 Fury X | --- | 1050 | 64 | 256 | 4096 | 1024 | 512 GB/s | 4 GB | 275 W |
| Vega Frontier Edition | ~1450? | ~1600? | 64? | 256? | 4096 | ??? | ~480 GB/s | 16 GB | ??? |

Regardless, that clock-speed range and the resulting numbers suggest that AMD will meet or exceed its compute performance targets for its first Vega products. The company touted a 25 TFLOPS rate for FP16 math when it previewed the Radeon Instinct MI25 card, and the Vega Frontier Edition could potentially top that already-impressive figure with 26 TFLOPS or so at the top of its hypothetical clock range. Assuming those numbers hold, the raw compute capabilities of the Vega FE for some types of math will top even the beastly Quadro GP100, Nvidia's highest-end pro graphics card at the moment. These are both high-end pro cards with 16GB of HBM2 on board, so it's not far-fetched to compare them.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic rate (TFLOPS) |
|---|---|---|---|---|
| Asus R9 290X | 67 | 185/92 | 4.2 | 5.9 |
| Radeon R9 295 X2 | 130 | 358/179 | 8.1 | 11.3 |
| Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 |
| GeForce GTX 780 Ti | 37 | 223/223 | 4.6 | 5.3 |
| Gigabyte GTX 980 Windforce | 85 | 170/170 | 5.3 | 5.4 |
| GeForce GTX 980 Ti | 95 | 189/189 | 6.5 | 6.1 |
| GeForce GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| GeForce GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| GeForce GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| GeForce Titan Xp | 152 | 343/343 | 9.2 | 11.0 |
| Vega Frontier Edition | ~90-102? | 410?/205? | 6.4? | 13.0 |

Taking AMD's squishy numbers at face value, the 25 TFLOPS of FP16 the Vega FE claims to offer will top the Quadro GP100's claimed 20.7 TFLOPS of FP16 throughput. In turn, AMD claims the Vega FE can deliver about 26% higher FP32 throughput than the Quadro GP100: 13 TFLOPS versus 10.3 TFLOPS. The GP100 might deliver higher double-precision math rates, but we can't compare the Vega FE card's performance on that point because AMD hasn't said a word about Vega's FP64 capability. Even so, the $8900 price tag of the Quadro GP100 gives AMD plenty of wiggle room to field a competitor in this lucrative market, and it seems the performance will be there to make Vega a worthy compute competitor (at least until Volta descends from the data center).

The things we still don't know about the Vega chip in the Frontier Edition are facts most relevant to the chip's gaming performance. AMD hasn't talked in depth about the texturing capabilities or geometry throughput of the Vega architecture yet, but it's simply too tantalizing not to guess at how this Vega chip will stack up given its seeming family resemblance to Fiji cards. Beware: wild guesses ahead.

Assuming Vega maintains 256 texture units and GCN's half-rate throughput for FP16 textures (and this is a big if), the card might deliver as much as 410 GTex/s for int8 textures and 205 GTex/s for bilinear fp16 filtering. For comparison, the GTX 1080 can deliver full throughput for both types of texturing. Even so, that card tops out at 277 GTex/s for both int8 and fp16 work. The Vega FE's impressive texture-crunching capabilities might be slightly tempered by that 90 GPix/s fill rate, which slightly trails even the GTX 1070's theoretical capabilities.
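
Here's the arithmetic behind those dart throws, using the assumed 256 texture units, 64 ROPs, and the ~1.6 GHz top of my estimated clock range:

```python
# Guessed texture and pixel throughput for the Vega FE, assuming a Fiji-like
# 256 texture units, 64 ROPs, and the ~1.6 GHz top of the estimated clock range.

texture_units, rops = 256, 64
clock_ghz = 1.6

int8_gtex_s = texture_units * clock_ghz   # ~410 Gtexels/s at full rate
fp16_gtex_s = int8_gtex_s / 2             # ~205 Gtexels/s at GCN's half rate for FP16
pixel_fill  = rops * clock_ghz            # ~102 Gpixels/s, versus AMD's quoted 90
```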

Either way, none of these dart throws suggest the eventual RX Vega will have what it takes to unseat the GeForce GTX 1080 Ti atop the consumer graphics-performance race, as some wild rumors have postulated recently. I'm willing to be surprised, though. We also can't account for the potential performance improvements from Vega's new primitive shader support or its tile-based Draw Stream Binning Rasterizer, both of which could mitigate some of these theoretical shortcomings somewhat.

All of those guesses square pretty nicely with my seat-of-the-pants impressions of Vega's gaming power during AMD's demo sessions, where the card delivered performance that felt like it was in the ballpark with a GeForce GTX 1080. I gleaned those impressions from AMD demo darling Doom, of course, and other games will perform differently. It's also possible that the Radeon RX Vega will use a different configuration of the Vega GPU, so AMD Vega FE numbers may not be the best starting point. Still, if it's priced right, the Radeon RX Vega could be the high-end gaming contender that AMD sorely needs. We'll have to see whether my guesses are on the mark or wide of the mark when Radeon RX Vega cards finally appear.

This article initially speculated, without sourcing, that AMD would include 4096 SPs on the Vega FE GPU. The company did, in fact, confirm that the Vega GPU on this card would include 4096 SPs on a separate product page that I overlooked. While this new information does not affect any of the guesses put forth in this piece, I do regret the error, and the piece has been updated to include numbers from AMD's official specs.


Space Pirate Trainer's beta update turns it into a more strategic VR shoot-'em-up
— 4:06 PM on September 14, 2016

Space Pirate Trainer's early-access release, like many fun games, has a simple premise. Put on the HTC Vive, pick up its controllers, and you're standing in the shoes of a star-blazing outlaw who's perched on a landing pad high above a moody urban landscape. Waves of flying killer droids are coming for you.

All that stands between you and becoming a cloud of space dust are a pair of multi-purpose pistols that double as energy shields. Good luck, and earn as many points as you can by blowing stuff up. Take three hits from the opposing force, though, and you're done for.

When I first picked up Space Pirate Trainer, its potential as a great VR title was immediately evident. The fact that you're on an open platform only gets more fun with a larger play area for the Vive, since that extra space means you have more room to jump, duck, and dodge—and make no mistake, you will be moving around a lot with this title. The twin pistols offer some fun alternate-fire modes that require the player to think about the amount of energy they have on hand instead of holding down the trigger. Each wave of drones represents a real challenge, too: whatever force is sending them against you in Space Pirate Trainer really wants you dead after the first few. I've gotta admit that I eventually got bored of the game, though. Its weapons all felt rather samey after many replays, and I honestly wasn't good enough to make a whole lot of progress past the first few waves of attackers.


These are my guns. There are many like them, but this pair is mine.

Space Pirate Trainer's just-released beta takes that potential and fleshes it out. The most noticeable change is a pair of new weapons that give players fresh tools for dealing with enemy attacks. A shotgun and a remote-detonated grenade launcher offer a couple of new ways of dealing with drones at closer ranges and in more formations. Those new weapons require more strategic thinking than the single-shot, automatic laser burst, continuous laser, and charged shot modes of your twin pistols ever did, and figuring out which weapon you need in the heat of battle demands quick reflexes, too. Those new challenges give SPT's beta a feeling of enduring freshness that the first Early Access release didn't create.

The fresh thinking in SPT's beta doesn't end with things that go pew-pew, either. Flip a Vive controller over your shoulder and bring it back, and the energy shield from the game's initial release greets you with a new design and a fancy deployment animation that forces you to think ahead a bit. The shield used to come out fully deployed, but the new animation adds a half-second or so where the player remains vulnerable to attacks. If you want to shift from an offensive to defensive approach in the beta, there's a real cost to doing so. Choose carefully.

Those hand-wielded shields aren't purely defensive in the SPT beta, though. Swipe to the right on the Vive trackpad, and the shield turns into a spiky club-lightsaber-tractor-beam fusion that can be used to grab drones at range and bring them in close for death by blunt force.

That tractor beam can turn the club into a kind of drone-mace, too. Grab a drone with the beam, and you can swing your victim back into the battlefield or use it to deflect incoming fire. This mode of attack doesn't feel particularly precise to me right now, but I get the sense that it might be quite deadly with practice. All the more reason to suit up as a space pirate time and time again.


Blowing up a remote-control grenade while I'm safe and sound behind the shield power-up

Survive a wave of drones in this beta, and the game might reward you with one of a variety of new power-ups. You might get a machine-gun mode for your pistols, a gravity vortex that traps drones in a particular spot for easy dispatching, a shield dome that allows you to blast away with impunity, or homing missiles that do exactly what they say on the tin. These power-ups offer pretty sweet advantages, but there is one minor downside to the weapon-specific ones, at least: you'd best be sure that you want that particular upgrade for the ten seconds or so that it lasts, because there's no switching away once it's in action.

This game still exposes some of the limits of current-gen VR headsets. If an enemy drone gets too far away, for example, it turns into an indistinct blob that's frustratingly difficult to target, since your laser sights are only visible out to a certain distance. Forget reading any scores or other text associated with a drone at long distances, too. Unless you really tighten up the head straps, it's possible to end up with the Vive in a less-than-ideal position on your face, as well, since dropping to the floor and jumping from side to side can cause the Vive's bulk to shift rather easily. These minor issues don't take away from what's otherwise an exhilarating experience, though.

Space Pirate Trainer is $11.24 on Steam right now, and that deal lasts until Thursday at 4 PM Pacific. If you somehow own a Vive and don't already have a copy of this game, it's a no-brainer to pick it up for that price. Few developers have grokked what it means to make a good VR title as well as Space Pirate Trainer's have, and this is one game that really feels like it wouldn't be possible in any other medium. I'm now excited to revisit it every time I strap on the Vive. Even at its $14.99 regular price, Space Pirate Trainer is essential for any Vive owner's library.

The author wrote this review using a copy of Space Pirate Trainer purchased for his personal account on Steam.


Re-examining the unusual frame time results in our Radeon RX 470 review
— 1:33 PM on August 12, 2016

It's never fun to admit a mistake, but we made a big one while writing our recent Radeon RX 470 review. That piece was our first time out on a new test rig that included an Intel Core i7-6700K CPU and an ASRock Z170 Extreme7+ motherboard. Once we got that system up and running, it delivered some weird-looking frame time numbers with some games. For example, the spikiness of the frame-time plot below didn't match any test data we had ever gathered before for Grand Theft Auto V, and we puzzled over those strange results for some time. We decided to go ahead and publish them anyway after doing some extended troubleshooting without seeing any improvement.

Compare that to a more typical GTA V result from our Radeon RX 480 review, as demonstrated by three GeForce cards running on an X99 testbed:

The spikiness caused by what turned out to be high deferred procedure call (or DPC) latency didn't seem to affect average framerates much (save one major exception), but it did worsen our 99th-percentile frame time numbers considerably. Given how much we use those 99th-percentile numbers in formulating other parts of our conclusions, especially value plots, the error introduced this way had considerable negative effects on the accuracy of several key parts of our review. The net effect of this error led us to wrongly conclude that the Radeon RX 480 8GB and the Radeon RX 470 were closely matched in performance, when in fact they're quite different.
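
For those unfamiliar with the metric, here's a minimal illustration of why a handful of latency spikes barely moves an FPS average but can blow up the 99th-percentile frame time. The frame times below are made-up numbers, not data from our testing:

```python
import numpy as np

# Made-up illustrative data: a mostly smooth run at ~16.7 ms per frame (60 FPS)
# with 2% of frames spiking to 50 ms, roughly what DPC-latency hiccups look like.
frame_times_ms = np.full(1000, 16.7)
frame_times_ms[::50] = 50.0

avg_fps = 1000.0 / frame_times_ms.mean()       # ~57.6 FPS, barely down from 59.9
p99_ms  = np.percentile(frame_times_ms, 99)    # 50 ms, versus 16.7 ms for a clean run
```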

Upon reflection, we should have stopped the RX 470 review at that point to try and figure out exactly what was going on, but the pressure of deadlines got the better of us. When these weird frame-time plots appeared once more in the preliminary testing work for our review of the Radeon RX 460, however, we had to acknowledge that something unusual was going on. Of course, that also meant that our published RX 470 review had problems that we had to deal with. We believe that in the interest of full transparency, it's important to explain exactly what happened, be clear about how we messed up, and resolve not to make the same mistakes in the future.

The issue
While I was testing the Radeon RX 460 and tearing my hair out over the wild frame time plots our cards were generating, TR code wizard Bruno "morphine" Ferreira brought the possibility of high DPC latency to my attention. DPC latency is usually of interest to musicians running digital audio workstations, where consistently minimal input lag is critical. Bruno pointed me to the LatencyMon app, which keeps an eye on DPC performance and warns of any issues, as a way of figuring out whether DPC latency was the root cause of this problem.


LatencyMon thinks something is up with my main desktop. I'll have to look into it soon.

I didn't capture any screenshots during my frenzied troubleshooting, but LatencyMon did show that our test rig wasn't servicing DPC requests promptly. Wireless networking drivers are generally considered the first place to look when troubleshooting DPC issues, and I use an Intel Wireless-N 6205 card in our test system. Oops. Even after disabling that wireless card, however, the issue persisted. After killing every potential bit of software that might have been causing the problem without getting any improvements, I took Bruno's suggestion of updating our motherboard's firmware. "The BIOS can't possibly be the cause!" I thought to myself smugly.

Pride goeth before a fall, of course, and the DPC latency issue vanished with the new ASRock firmware installed. The frame-time plots for GTA V began to resemble the flat, tight lines we've come to expect with modern hardware. I had to quash the urge to drive over the motherboard a few times and burn the remains before coming to grips with the fact that I would have to throw out large amounts of testing data.

So what happened? You see, ASRock sent us its Z170 Extreme7+ board during a brief period in which the company was promoting its ability to overclock locked Intel CPUs with a beta BIOS. I had hoped to explore that possibility with some ASRock motherboards and cheap CPUs, but Intel swiftly put the kibosh on the concept. We got busy with other work, and the beta firmware remained on the motherboard through our first attempts to test graphics cards with it. I don't know precisely what about this beta firmware was wreaking havoc on DPC latency, but updating it fixed the problem, so Occam's razor points to the beta BIOS as the culprit.

The fallout
Having solved the underlying problem, I now had to contend with the fact that I had published a very public and widely-read review that contained what seemed like reams of contaminated data. To see just how wrong I had been in my conclusions, I retested every title we had slated for our RX 470 and RX 460 reviews on our ASRock test rig, using the same settings we had initially chosen for our reviews.

As it turns out, high DPC latency doesn't affect every game equally, or at least not in a way that shows up in our frame-time numbers. While GTA V, Hitman, and Rise of the Tomb Raider all showed significant changes in average FPS and 99th-percentile frame times after a retest on the updated hardware, Doom, Crysis 3, and The Witcher 3 did not. That second trio of games certainly felt more responsive to input after the critical firmware update, but the data they generated wasn't meaningfully different. We're talking fractions of milliseconds of difference in before-and-after testing, and those deltas are almost certainly imperceptible in real-life gaming. Given that behavior, we're confident that the numbers we generated for Doom, Crysis 3, and The Witcher 3 are representative of the performance of the Radeon RX 460, the Radeon RX 470, and the other cards we tested in those reviews.

The resolution
Given the large differences in performance we saw with GTA V, Hitman, and RoTR, the only acceptable way to fix our mistake was to retest all of the cards in our Radeon RX 470 review from the ground up with those games. We've done that now, and as a bonus, we did that retesting with the same data-collection methods we just premiered in our RX 460 review.

As a result, we now have Doom Vulkan and OpenGL numbers for the RX 470 and RX 480, plus DirectX 12 numbers for Rise of the Tomb Raider and Hitman. We've also extensively re-written the conclusion of our Radeon RX 470 review to account for this new data, much in the same way that we crunched our results for the Radeon RX 460 and friends. We've accounted for the differences in our results there, so I'd encourage you to go read up on what changed.

If you read our original Radeon RX 470 review, we're deeply sorry to have misinformed you. We also extend our sincerest apologies to everybody at AMD and Nvidia for presenting incorrect information and misguided conclusions about their products. In the future, we'll strive to be both correct and swift with our reviews, but we also won't hesitate to delay a piece when clear warning flags are evident. We hope this clarification reinforces your trust in The Tech Report's reporting. If you have any questions or concerns, please leave a comment below or email me directly.


TR's VR journals, part one: setting the stage
— 5:33 PM on July 13, 2016

Over the past couple months, I've been places. I've dangled from the side of a rock wall hundreds of feet above crashing waves. I've set foot on a shipwreck deep beneath the ocean and hung out with a humpback whale. I've laid waste to pallets of unsuspecting fruit with a pair of katanas. I've led a research mission on an alien world full of exotic life forms. I've been to Taipei to look at a bunch of new PC hardware. (OK, that last one actually happened.) No, I don't have a holodeck or transporter pad installed in my office. Consumer virtual reality gear is here now. I've been spending lots of time strapped into the big two virtual reality headsets that came out this year: Oculus' Rift and HTC's Vive (built in partnership with Valve Software).

Since dedicated VR headsets require powerful gaming PCs to do their thing, both AMD and Nvidia are betting big on VR tech, too. AMD's latest graphics card, the $200-and-up Radeon RX 480, is priced specifically to make VR-capable graphics more affordable, while Nvidia's Pascal architecture even boasts a couple of VR-specific features in its silicon. Nvidia's VRWorks and AMD's LiquidVR both help developers to address the specific programming needs of VR titles on Radeons and GeForces. Even PC companies like Zotac and MSI are thinking about how VR is going to change the personal computing experience. This is just a tiny slice of the vast number of software and hardware companies making a move into VR. If you hadn't already guessed, this technology is a Big Deal.

All that buzz has arisen for good reason. VR headset makers often talk about "immersion" and "presence," the sense that you're really inhabiting the virtual worlds viewed through these headsets. At their best, VR headsets really do create a feeling of being transported to another place, whether that's the seat of a Group B rally car or the bowels of Aperture Science. That immersion becomes especially deep when one can actually reach out and touch virtual environments in a room-size space, like one can with the Vive. I often find myself saying "wow" when I enter a new VR experience for the first time, and I imagine I've looked like the clichéd image of the guy below more often than I'd like to admit. When you put on these goggles for the first time, though, it's easy to understand why Oculus relies so heavily on this image to convey what VR is like.

Now that the Vive and Rift are on the market, though, one could be forgiven for thinking that some of the shine has come off. There are some fully fleshed-out gaming experiences available on both platforms, but many of today's VR games are experimental and rather short. Valve's The Lab demo is typical of the breed: a collection of mini-games that's a fun, if not particularly deep, introduction to VR. While it's easy to understand that developers aren't yet committing the full force of triple-A resources to either headset, that approach might go some way toward explaining the apparently tepid response to these technological wonders a couple of months in.

Recent Steam hardware survey results report that vanishingly small numbers of respondents on the platform have bought into either VR hardware ecosystem. While not every Rift user is going to have Steam installed, the numbers still aren't large. Even when people do buy into VR, it seems like they travel to virtual worlds less and less. Razer CEO Min-Liang Tan recently conducted a Twitter poll regarding the frequency of VR headset usage, and of 2,246 respondents, 50% said they don't even use their VR hardware any more. Another 33% use their headsets "just once in a while," and only 17% claim to strap on their head-mounted devices daily. Those are potentially troubling numbers for systems that cost hundreds of dollars—and that's before we consider the $1000 or more needed to build a compatible PC. More likely, we're just seeing the slow development of a nascent platform.

Another obstacle to getting folks to buy into VR may be that describing what it's like is incredibly hard without actually strapping a headset to someone's face. (Valve's clever augmented-reality setup, as seen above, comes close, though.) Words, pictures, and video are all abstractions of reality to begin with, and it's really hard to talk about something that promises to whisk you away to another world using them. I imagine many people I've told about VR feel sort of like families did when our parents used to pull out slide projectors and show off vacation photos. "You have to be there" may be true, but it's ultimately unsatisfying for the folks that haven't yet shared the experience.

Judging the Rift and Vive is also challenging because these are incredibly personal pieces of equipment. No other computing device of late straps on to one's head and blocks out the world, so it's important to get some face time with both devices instead of relying on one reviewer's sweeping proclamations. To give just one example, I have a huge head that stretches the Rift's strap adjustments to their limits. Its ring of face foam also doesn't make even contact with my skull, so the headset creates uncomfortable pressure points on my forehead. The Vive's strap system may not be as slick as the Rift's, but its generous foam donut and broad range of strap adjustments lets me get a much more comfortable fit. Other people find the Vive unbearably heavy and clunky. Since we all have different heads and musculatures, I think trying on the Rift or Vive before plunking down hundreds of bucks is mandatory.

Given those challenges, trying to review these headsets in an open-and-shut manner as we do with cases and graphics cards just isn't going to work. Along with the need for individual test-drives and the rapidly changing software landscape, not all of the pieces of the puzzle are in place yet for us to really issue a verdict on the Rift or Vive. For example, Oculus just finished fulfilling its pre-order backlog for the Rift, and it has yet to release its Touch hand controllers. Those controllers are going to massively change the way that Rift users interact with VR, but for now, they're only in the hands of developers and conference demonstrators. Every day brings new software for the Rift and Vive that could change the value proposition for either platform.

Because of this rugged new frontier, we're not going to write one review of each headset and call it good. Our conclusions now may not have anything to do with the way VR plays out on the Rift or Vive months down the line. Instead, we're going to be writing a series of articles—TR's VR journals, if you will—that chronicle our journeys through virtual worlds and the hardware we're using to get there. This way, we'll be able to talk about VR as it stands right now while we map out the broadening VR universe over time.

In the coming weeks, we'll talk about the Rift and Vive hardware, the process of setting them up in various spaces, and the different types of experiences each platform offers. Later on, we'll examine what's inside each headset's shell, the hardware and software magic that makes them work, and the challenges and methods involved in measuring VR performance. We'll try and intersperse that coverage with brief reviews of new games and experiences as they arrive so that you're up-to-date on the places you can go with your own Rift or Vive. We'll also be examining VR experiences that don't even require a PC, like Samsung's Gear VR, and what they mean for this burgeoning space. We expect it'll be a wild ride. Stay tuned.
