AMD's Radeon Vega Frontier Edition reveal yesterday provided us with some important pieces of the performance puzzle for one of the most hotly-anticipated graphics chips of 2017. Crucially, AMD disclosed the Frontier Edition card's pixel fill rate and some rough expectations for floating-point throughput—figures that allow us to make some educated guesses about Vega's final clock speeds and how it might stack up to Nvidia's latest and greatest for both gaming and compute performance.
Dollars and sense
Before we dive into my educated guesses, though, it's worth mulling over the fact that the Vega Frontier Edition is launching as a Radeon Pro card, not a Radeon RX card. As Ryan Smith at Anandtech points out, this is the first time AMD is debuting a new graphics architecture aboard a professional-grade product. As its slightly green-tinged name suggests, AMD's Frontier Edition strategy roughly echoes how Nvidia has been releasing new graphics architectures of late. Pascal made its debut aboard the Tesla P100 accelerator, and the market's first taste of Nvidia's Volta architecture will be aboard a similar product.
These developments suggest that whether they bleed red or green, gamers may have to accept the fact that they aren't the most important market for these high-performance, next-gen graphics chips any longer.
Though gamers might feel disappointed after yesterday's reveal, this decision makes good business sense. As I mused on Twitter a few days ago, it doesn't make any sense for the company to sell Vega chips on Radeon RX cards just yet when there's strong demand for this GPU's compute power elsewhere. In turn, AMD can ask for much more money for Vega compute accelerators than it can for the same chip aboard a Radeon gaming card. Yesterday's Financial Analyst Day made it clear that AMD is acutely aware of the high demand for GPU compute power right now, especially for machine learning applications, and it wants as big a piece of that pie as it can grab.
Radeon Technologies Group head Raja Koduri put some numbers to this idea at the company's analyst day by pointing out that the high end of the graphics card market could represent less than 15% of the company's sales volume, but potentially as much as 66% of its margin contribution (i.e., profit). Nvidia dominates the high-end graphics card market regardless of whether one is running workstation graphics or datacenter GPU computing tasks, and AMD needs to tap into the demand from these markets as part of its course toward profitability. Radeon RX products might make the most noise in the consumer graphics market, but Vega compute cards could make the biggest bucks for AMD, so it only makes sense that the company is launching the Frontier Edition (and presumably the Radeon Instinct MI25) into the very highest end of the market first.
Sizing up Vega
Now, let's talk some numbers. AMD says the Vega GPU aboard the Frontier Edition will offer about 13 TFLOPS of FP32 and about 25 TFLOPS of FP16 performance, as well as a pixel fill rate of 90 Gpixels/s. AMD also says the chip will have 64 compute units and 4096 stream processors, and that FP32 TFLOPS figure suggests a clock speed range of about 1450 MHz to 1600 MHz. I propose this range because AMD seems to have used different clock rates to calculate different peak throughput rates. I'm also guessing the Vega chip in this card has 64 ROPs, given the past layout of GCN cards and the way the numbers have to stack up to reach that 90 Gpixels/s figure.
| Card | Base clock (MHz) | Boost clock (MHz) | ROPs | Texture units | Stream processors | Memory bus width (bits) | Memory bandwidth | Memory size | TDP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GTX 970 | 1050 | 1178 | 56 | 104 | 1664 | 224+32 | 224 GB/s | 3.5+0.5 GB | 145W |
| GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256 | 224 GB/s | 4 GB | 165W |
| GTX 980 Ti | 1002 | 1075 | 96 | 176 | 2816 | 384 | 336 GB/s | 6 GB | 250W |
| Titan X (Maxwell) | 1002 | 1075 | 96 | 192 | 3072 | 384 | 336 GB/s | 12 GB | 250W |
| GTX 1080 | 1607 | 1733 | 64 | 160 | 2560 | 256 | 320 GB/s | 8 GB | 180W |
| GTX 1080 Ti | 1480 | 1582 | 88 | 224 | 3584 | 352 | 484 GB/s | 11 GB | 250W |
| Titan Xp | 1480? | 1582 | 96 | 240 | 3840 | 384 | 547 GB/s | 12 GB | 250W |
| R9 Fury X | --- | 1050 | 64 | 256 | 4096 | 1024 | 512 GB/s | 4 GB | 275W |
| Vega Frontier Edition | ~1450? | ~1600? | 64? | 256? | 4096 | ??? | ~480 GB/s | 16 GB | ??? |
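Those clock guesses can be sanity-checked with a bit of arithmetic. Assuming 4096 SPs at two FLOPs per clock for FP32, double-rate FP16, and 64 ROPs at one pixel per clock (the ROP count being my guess, not an AMD figure), the implied clocks roughly bracket the range above:

```python
# Back-of-the-envelope clock estimates from AMD's throughput figures.
# Assumptions: 4096 SPs x 2 FLOPs/clock (FMA) for FP32, and 64 ROPs at
# 1 pixel/clock. The 64-ROP count is a guess, not a confirmed spec.

SPS = 4096
ROPS = 64  # assumed

def clock_from_fp32(tflops):
    """Clock (MHz) implied by an FP32 rate at 2 FLOPs per SP per clock."""
    return tflops * 1e12 / (SPS * 2) / 1e6

def clock_from_fill(gpix):
    """Clock (MHz) implied by a pixel fill rate at 1 pixel per ROP per clock."""
    return gpix * 1e9 / ROPS / 1e6

print(round(clock_from_fp32(13.0)))  # ~1587 MHz from the 13-TFLOPS figure
print(round(clock_from_fill(90.0)))  # ~1406 MHz from the 90-Gpix/s figure
```

The spread between those two implied clocks is exactly why the 1450-1600 MHz range above is a range rather than a single number.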
Regardless, that clock-speed range and the resulting numbers suggest that AMD will meet or exceed its compute performance targets for its first Vega products. The company touted a 25 TFLOPS rate for FP16 math when it previewed the Radeon Instinct MI25 card, and the Vega Frontier Edition could potentially top that already-impressive figure with 26 TFLOPS or so at the top of its hypothetical clock range. Assuming those numbers hold, the raw compute capabilities of the Vega FE for some types of math will top even the beastly Quadro GP100, Nvidia's highest-end pro graphics card at the moment. These are both high-end pro cards with 16GB of HBM2 on board, so it's not far-fetched to compare them.
| Card | Peak pixel fill rate (Gpix/s) | Peak bilinear filtering int8/fp16 (Gtex/s) | Peak rasterization rate (Gtris/s) | Peak FP32 throughput (TFLOPS) |
| --- | --- | --- | --- | --- |
| Asus R9 290X | 67 | 185/92 | 4.2 | 5.9 |
| Radeon R9 295 X2 | 130 | 358/179 | 8.1 | 11.3 |
| Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 |
| GeForce GTX 780 Ti | 37 | 223/223 | 4.6 | 5.3 |
| Gigabyte GTX 980 Windforce | 85 | 170/170 | 5.3 | 5.4 |
| GeForce GTX 980 Ti | 95 | 189/189 | 6.5 | 6.1 |
| GeForce GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| GeForce GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| GeForce GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| GeForce Titan Xp | 152 | 343/343 | 9.2 | 11.0 |
| Vega Frontier Edition | ~90-102? | 410?/205? | 6.4? | 13.0 |
Taking AMD's squishy numbers at face value, the 25 TFLOPS of FP16 the Vega FE claims to offer will top the Quadro GP100's claimed 20.7 TFLOPS of FP16 throughput. In turn, AMD claims the Vega FE can deliver about 26% higher FP32 throughput than the Quadro GP100: 13 TFLOPS versus 10.3 TFLOPS. The GP100 might deliver higher double-precision math rates, but we can't compare the Vega FE card's performance on that point because AMD hasn't said a word about Vega's FP64 capability. Even so, the $8900 price tag of the Quadro GP100 gives AMD plenty of wiggle room to field a competitor in this lucrative market, and it seems the performance will be there to make Vega a worthy compute competitor (at least until Volta descends from the data center).
The things we still don't know about the Vega chip in the Frontier Edition are facts most relevant to the chip's gaming performance. AMD hasn't talked in depth about the texturing capabilities or geometry throughput of the Vega architecture yet, but it's simply too tantalizing not to guess at how this Vega chip will stack up given its seeming family resemblance to Fiji cards. Beware: wild guesses ahead.
Assuming Vega maintains 256 texture units and GCN's half-rate throughput for FP16 textures (and this is a big if), the card might deliver as much as 410 GTex/s for int8 textures and 205 GTex/s for bilinear fp16 filtering. For comparison, the GTX 1080 can deliver full throughput for both types of texturing. Even so, that card tops out at 277 GTex/s for both int8 and fp16 work. The Vega FE's impressive texture-crunching capabilities might be slightly tempered by that 90 GPix/s fill rate, which slightly trails even the GTX 1070's theoretical capabilities.
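Those texture-rate guesses fall out of the same simple multiplication. The 256-TMU count and half-rate fp16 filtering for Vega are my assumptions here, not confirmed specs:

```python
# Peak bilinear filtering rates implied by TMU counts and boost clocks.
# Vega's 256-TMU count and half-rate fp16 filtering are assumptions.

def gtex(tmus, clock_mhz, fp16_rate=1.0):
    """Peak filtering rate in Gtexels/s: TMUs x clock x fp16 rate factor."""
    return tmus * clock_mhz / 1000 * fp16_rate

print(round(gtex(256, 1600)))       # Vega FE int8: ~410 Gtex/s (assumed)
print(round(gtex(256, 1600, 0.5)))  # Vega FE fp16: ~205 Gtex/s (assumed half-rate)
print(round(gtex(160, 1733)))       # GTX 1080, full-rate for both: ~277 Gtex/s
```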
Either way, none of these dart throws suggest the eventual RX Vega will have what it takes to unseat the GeForce GTX 1080 Ti atop the consumer graphics-performance race, as some wild rumors have postulated recently. I'm willing to be surprised, though. We also can't account for the potential performance improvements from Vega's new primitive shader support or its tile-based Draw Stream Binning Rasterizer, both of which could mitigate some of these theoretical shortcomings somewhat.
All of those guesses square pretty nicely with my seat-of-the-pants impressions of Vega's gaming power during AMD's demo sessions, where the card delivered performance that felt like it was in the ballpark with a GeForce GTX 1080. I gleaned those impressions from AMD demo darling Doom, of course, and other games will perform differently. It's also possible that the Radeon RX Vega will use a different configuration of the Vega GPU, so AMD Vega FE numbers may not be the best starting point. Still, if it's priced right, the Radeon RX Vega could be the high-end gaming contender that AMD sorely needs. We'll have to see whether my guesses are on the mark or wide of the mark when Radeon RX Vega cards finally appear.
This article initially speculated, without sourcing, that AMD would include 4096 SPs on the Vega FE GPU. The company did, in fact, confirm that the Vega GPU on this card would include 4096 SPs on a separate product page that I overlooked. While this new information does not affect any of the guesses put forth in this piece, I do regret the error, and the piece has been updated to include numbers from AMD's official specs.

Space Pirate Trainer's beta update turns it into a more strategic VR shoot-'em-up
Space Pirate Trainer's early-access release, like many fun games, has a simple premise. Put on the HTC Vive, pick up its controllers, and you're standing in the shoes of a star-blazing outlaw who's perched on a landing pad high above a moody urban landscape. Waves of flying killer droids are coming for you.
All that stands between you and becoming a cloud of space dust are a pair of multi-purpose pistols that double as energy shields. Good luck, and earn as many points as you can by blowing stuff up. Take three hits from the opposing force, though, and you're done for.
When I first picked up Space Pirate Trainer, its potential as a great VR title was immediately evident. The fact that you're on an open platform only gets more fun with a larger play area for the Vive, since that extra space means you have more room to jump, duck, and dodge—and make no mistake, you will be moving around a lot with this title. The twin pistols offer some fun alternate-fire modes that require the player to think about the amount of energy they have on hand instead of holding down the trigger. Each wave of drones represents a real challenge, too: whatever force is sending them against you in Space Pirate Trainer really wants you dead after the first few. I've gotta admit that I eventually got bored of the game, though. Its weapons all felt rather samey after many replays, and I honestly wasn't good enough to make a whole lot of progress past the first few waves of attackers.
Space Pirate Trainer's just-released beta takes that potential and fleshes it out. The most noticeable change is a pair of new weapons that give players fresh tools for dealing with enemy attacks. A shotgun and a remote-detonated grenade launcher give players a couple new ways of dealing with drones at closer ranges and in more formations. Those new weapons require more strategic thinking than the single-shot, automatic laser burst, continuous laser, and charged shot modes of your twin pistols might have in the past, and figuring out the weapon you need in the heat of battle demands quick reflexes, too. Those new challenges give SPT's beta a feeling of enduring freshness that the first Early Access release didn't create.
The fresh thinking in SPT's beta doesn't end with things that go pew-pew, either. Flip a Vive controller over your shoulder and bring it back, and the energy shield from the game's initial release greets you with a new design and a fancy deployment animation that forces you to think ahead a bit. The shield used to come out fully deployed, but the new animation adds a half-second or so where the player remains vulnerable to attacks. If you want to shift from an offensive to defensive approach in the beta, there's a real cost to doing so. Choose carefully.
Those hand-wielded shields aren't purely defensive in the SPT beta, though. Swipe to the right on the Vive trackpad, and the shield turns into a spiky club-lightsaber-tractor-beam fusion that can be used to grab drones at range and bring them in close for death by blunt force.
That tractor beam can also turn the club into a kind of drone-mace, too. Grab a drone with the beam, and you can swing your victim back into the battlefield or use it to deflect incoming fire. This mode of attack doesn't feel particularly precise to me right now, but I get the sense that it might be quite deadly with practice. All the more reason to suit up as a space pirate time and time again.
Survive a wave of drones in this beta, and the game might reward you with one of a variety of new power-ups. You might get a machine-gun mode for your pistols, a gravity vortex that traps drones in a particular spot for easy dispatching, a shield dome that allows you to blast away with impunity, or homing missiles that do exactly what they say on the tin. These power-ups offer pretty sweet advantages, but there is one minor downside to the weapon-specific power-ups, at least: you'd best be sure that you want that particular upgrade for the ten seconds or so that it lasts, because there's no switching away once it's in action.
This game still exposes some of the limits of current-gen VR headsets. If an enemy drone gets too far away, for example, it turns into an indistinct blob that's frustratingly difficult to target, since your laser sights are only visible out to a certain distance. Forget reading any scores or other text associated with a drone at long distances, too. Unless you really tighten up the head straps, it's possible to end up with the Vive in a less-than-ideal position on your face, as well, since dropping to the floor and jumping from side to side can cause the Vive's bulk to shift rather easily. These minor issues don't take away from what's otherwise an exhilarating experience, though.
Space Pirate Trainer is $11.24 on Steam right now, and that deal lasts until Thursday at 4 PM Pacific. If you somehow own a Vive and don't already have a copy of this game, it's a no-brainer to pick it up for that price. Few developers have grokked what it means to make a good VR title as well as Space Pirate Trainer's have, and this is one game that really feels like it wouldn't be possible in any other medium. I'm now excited to revisit it every time I strap on the Vive. Even at its $14.99 regular price, Space Pirate Trainer is essential for any Vive owner's library.
The author wrote this review using a copy of Space Pirate Trainer purchased for his personal account on Steam.

Re-examining the unusual frame time results in our Radeon RX 470 review
It's never fun to admit a mistake, but we made a big one while writing our recent Radeon RX 470 review. That piece was our first time out on a new test rig that included an Intel Core i7-6700K CPU and an ASRock Z170 Extreme7+ motherboard. Once we got that system up and running, it delivered some weird-looking frame time numbers with some games. For example, the spikiness of the frame-time plot below didn't match any test data we had ever gathered before for Grand Theft Auto V, and we puzzled over those strange results for some time. We decided to go ahead and publish them anyway after doing some extended troubleshooting without seeing any improvement.
Compare that to a more typical GTA V result from our Radeon RX 480 review, as demonstrated by three GeForce cards running on an X99 testbed:
The spikiness caused by what turned out to be high deferred procedure call (or DPC) latency didn't seem to affect average framerates much (save one major exception), but it did worsen our 99th-percentile frame time numbers considerably. Given how much we use those 99th-percentile numbers in formulating other parts of our conclusions, especially value plots, the error introduced this way had considerable negative effects on the accuracy of several key parts of our review. The net effect of this error led us to wrongly conclude that the Radeon RX 480 8GB and the Radeon RX 470 were closely matched in performance, when in fact they're quite different.
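That asymmetry between averages and percentiles is the crux of the problem. The 99th-percentile frame time is the threshold below which 99% of frames were delivered, so a small number of latency spikes can inflate it badly while barely denting the average frame rate. A minimal sketch with made-up numbers illustrates the effect:

```python
# How a few latency spikes can inflate the 99th-percentile frame time
# while barely moving the average FPS. All numbers are illustrative only,
# not data from our testing.

def percentile_99(frame_times_ms):
    """99th-percentile frame time: 99% of frames finished at least this fast."""
    ordered = sorted(frame_times_ms)
    idx = int(len(ordered) * 0.99) - 1
    return ordered[idx]

def avg_fps(frame_times_ms):
    """Average FPS over a run: total frames divided by total seconds."""
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

smooth = [16.7] * 1000                # a steady 60-FPS run
spiky = [16.7] * 985 + [50.0] * 15    # the same run with 15 long frames

print(round(avg_fps(smooth)), round(avg_fps(spiky)))   # 60 vs 58: barely differ
print(percentile_99(smooth), percentile_99(spiky))     # 16.7 vs 50.0: night and day
```

That's why a problem like high DPC latency can hide in an FPS average yet wreck the percentile-based metrics our conclusions lean on.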
Upon reflection, we should have stopped the RX 470 review at that point to try and figure out exactly what was going on, but the pressure of deadlines got the better of us. When these weird frame-time plots appeared once more in the preliminary testing work for our review of the Radeon RX 460, however, we had to acknowledge that something unusual was going on. Of course, that also meant that our published RX 470 review had problems that we had to deal with. We believe that in the interest of full transparency, it's important to explain exactly what happened, be clear about how we messed up, and resolve not to make the same mistakes in the future.
While I was testing the Radeon RX 460 and tearing my hair out over the wild frame time plots our cards were generating, TR code wizard Bruno "morphine" Ferreira brought the possibility of high DPC latency to my attention. DPC latency is usually of interest to musicians running digital audio workstations, where consistently minimal input lag is critical. Bruno pointed me to the LatencyMon app, which keeps an eye on DPC performance and warns of any issues, as a way of figuring out whether DPC latency was the root cause of this problem.
I didn't capture any screenshots during my frenzied troubleshooting, but LatencyMon did show that our test rig wasn't servicing DPC requests promptly. Wireless networking drivers are generally considered the first place to look when troubleshooting DPC issues, and I use an Intel Wireless-N 6205 card in our test system. Oops. Even after disabling that wireless card, however, the issue persisted. After killing every potential bit of software that might have been causing the problem without getting any improvements, I took Bruno's suggestion of updating our motherboard's firmware. "The BIOS can't possibly be the cause!" I thought to myself smugly.
Pride goeth before a fall, of course, and the DPC latency issue vanished with the new ASRock firmware installed. The frame-time plots for GTA V began to resemble the flat, tight lines we've come to expect with modern hardware. I had to quash the urge to drive over the motherboard a few times and burn the remains before coming to grips with the fact that I would have to throw out large amounts of testing data.
So what happened? You see, ASRock sent us its Z170 Extreme7+ board during a brief period in which the company was promoting its ability to overclock locked Intel CPUs with a beta BIOS. I had hoped to explore that possibility with some ASRock motherboards and cheap CPUs, but Intel swiftly put the kibosh on the concept. We got busy with other work, and the beta firmware remained on the motherboard through our first attempts to test graphics cards with it. I don't know precisely what it was about this beta firmware that wreaked havoc on DPC latency, but updating to the release firmware fixed the problem, so Occam's razor points to that beta BIOS as the culprit.
Having solved the underlying problem, I now had to contend with the fact that I had published a very public and widely-read review that contained what seemed like reams of contaminated data. To see just how wrong I had been in my conclusions, I retested every title we had slated for our RX 470 and RX 460 reviews on our ASRock test rig, using the same settings we had initially chosen for our reviews.
As it turns out, high DPC latency doesn't affect every game equally, or at least not in a way that shows up in our frame-time numbers. While GTA V, Hitman, and Rise of the Tomb Raider all showed significant changes in average FPS and 99th-percentile frame times after a retest on the updated hardware, Doom, Crysis 3, and The Witcher 3 did not. That second trio of games certainly felt more responsive to input after the critical firmware update, but the data they generated wasn't meaningfully different. We're talking fractions of milliseconds of difference in before-and-after testing, and those deltas are almost certainly imperceptible in real-life gaming. Given that behavior, we're confident that the numbers we generated for Doom, Crysis 3, and The Witcher 3 are representative of the performance of the Radeon RX 460, the Radeon RX 470, and the other cards we tested in those reviews.
Given the large differences in performance we saw with GTA V, Hitman, and RoTR, the only acceptable way to fix our mistake was to retest all of the cards in our Radeon RX 470 review from the ground up with those games. We've done that now, and as a bonus, we did that retesting with the same data-collection methods we just premiered in our RX 460 review.
As a result, we now have Doom Vulkan and OpenGL numbers for the RX 470 and RX 480, plus DirectX 12 numbers for Rise of the Tomb Raider and Hitman. We've also extensively re-written the conclusion of our Radeon RX 470 review to account for this new data, much in the same way that we crunched our results for the Radeon RX 460 and friends. We've accounted for the differences in our results there, so I'd encourage you to go read up on what changed.
If you read our original Radeon RX 470 review, we're deeply sorry to have misinformed you. We also extend our sincerest apologies to everybody at AMD and Nvidia for presenting incorrect information and misguided conclusions about their products. In the future, we'll strive to be both correct and swift with our reviews, but we also won't hesitate to delay a piece when clear warning flags are evident. We hope this clarification reinforces your trust in The Tech Report's reporting. If you have any questions or concerns, please leave a comment below or email me directly.

TR's VR journals, part one: setting the stage
Over the past couple months, I've been places. I've dangled from the side of a rock wall hundreds of feet above crashing waves. I've set foot on a shipwreck deep beneath the ocean and hung out with a humpback whale. I've laid waste to pallets of unsuspecting fruit with a pair of katanas. I've led a research mission on an alien world full of exotic life forms. I've been to Taipei to look at a bunch of new PC hardware. (OK, that last one actually happened.) No, I don't have a holodeck or transporter pad installed in my office. Consumer virtual reality gear is here now. I've been spending lots of time strapped into the big two virtual reality headsets that came out this year: Oculus' Rift and HTC's Vive (built in partnership with Valve Software).
Since dedicated VR headsets require powerful gaming PCs to do their thing, both AMD and Nvidia are betting big on VR tech, too. AMD's latest graphics card, the $200-and-up Radeon RX 480, is priced specifically to make VR-capable graphics more affordable, while Nvidia's Pascal architecture even boasts a couple of VR-specific features in its silicon. Nvidia's VRWorks and AMD's LiquidVR both help developers to address the specific programming needs of VR titles on Radeons and GeForces. Even PC companies like Zotac and MSI are thinking about how VR is going to change the personal computing experience. This is just a tiny slice of the vast number of software and hardware companies making a move into VR. If you hadn't already guessed, this technology is a Big Deal.
All that buzz has arisen for good reason. VR headset makers often talk about "immersion" and "presence," the sense that you're really inhabiting the virtual worlds viewed through these headsets. At their best, VR headsets really do create a feeling of being transported to another place, whether that's the seat of a Group B rally car or the bowels of Aperture Science. That immersion becomes especially deep when one can actually reach out and touch virtual environments in a room-size space, like one can with the Vive. I often find myself saying "wow" when I enter a new VR experience for the first time, and I imagine I've looked like the clichéd image of the guy below more often than I'd like to admit. When you put on these goggles for the first time, though, it's easy to understand why Oculus relies so heavily on this image to convey what VR is like.
Now that the Vive and Rift are on the market, though, one could be forgiven for thinking that some of the shine has come off. There are some fully-fleshed-out gaming experiences available on both platforms, but many of today's VR games are experimental and rather short. Valve's The Lab demo is typical of the breed: a collection of mini-games that's a fun, if not particularly deep, introduction to VR. While it's easy to understand that developers aren't yet committing the full force of triple-A resources to either headset, that approach might go some way toward explaining the apparently tepid response to these technological wonders a couple months in.
Recent Steam hardware survey results report that vanishingly small numbers of respondents on the platform have bought into either VR hardware ecosystem. While not every Rift user is going to have Steam installed, the numbers still aren't large. Even when people do buy into VR, it seems like they travel to virtual worlds less and less. Razer CEO Min-Liang Tan recently conducted a Twitter poll regarding the frequency of VR headset usage, and of 2,246 respondents, 50% said they don't even use their VR hardware any more. Another 33% use their headsets "just once in a while," and only 17% claim to strap on their head-mounted displays daily. Those are potentially troubling numbers for systems that cost hundreds of dollars—and that's before we consider the $1000 or more needed to build a compatible PC. More likely, we're just seeing the slow development of a nascent platform.
Another obstacle to getting folks to buy into VR may be that describing what it's like is incredibly hard without actually strapping a headset to someone's face. (Valve's clever augmented-reality setup, as seen above, comes close, though.) Words, pictures, and video are all abstractions of reality to begin with, and it's really hard to talk about something that promises to whisk you away to another world using them. I imagine many people I've told about VR feel sort of like families did when our parents used to pull out slide projectors and show off vacation photos. "You have to be there" may be true, but it's ultimately unsatisfying for the folks that haven't yet shared the experience.
Judging the Rift and Vive is also challenging because these are incredibly personal pieces of equipment. No other computing device of late straps onto one's head and blocks out the world, so it's important to get some face time with both devices instead of relying on one reviewer's sweeping proclamations. To give just one example, I have a huge head that stretches the Rift's strap adjustments to their limits. Its ring of face foam also doesn't make even contact with my skull, so the headset creates uncomfortable pressure points on my forehead. The Vive's strap system may not be as slick as the Rift's, but its generous foam donut and broad range of strap adjustments let me get a much more comfortable fit. Other people find the Vive unbearably heavy and clunky. Since we all have different heads and musculatures, I think trying on the Rift or Vive before plunking down hundreds of bucks is mandatory.
Given those challenges, trying to review these headsets in an open-and-shut manner as we do with cases and graphics cards just isn't going to work. Along with the need for individual test-drives and the rapidly changing software landscape, not all of the pieces of the puzzle are in place yet for us to really issue a verdict on the Rift or Vive. For example, Oculus just finished fulfilling its pre-order backlog for the Rift, and it has yet to release its Touch hand controllers. Those controllers are going to massively change the way that Rift users interact with VR, but for now, they're only in the hands of developers and conference demonstrators. Every day brings new software for the Rift and Vive that could change the value proposition for either platform.
Because of this rugged new frontier, we're not going to write one review of each headset and call it good. Our conclusions now may not have anything to do with the way VR plays out on the Rift or Vive months down the line. Instead, we're going to be writing a series of articles—TR's VR journals, if you will—that chronicle our journeys through virtual worlds and the hardware we're using to get there. This way, we'll be able to talk about VR as it stands right now while we map out the broadening VR universe over time.
In the coming weeks, we'll talk about the Rift and Vive hardware, the process of setting them up in various spaces, and the different types of experiences each platform offers. Later on, we'll examine what's inside each headset's shell, the hardware and software magic that makes them work, and the challenges and methods involved in measuring VR performance. We'll try and intersperse that coverage with brief reviews of new games and experiences as they arrive so that you're up-to-date on the places you can go with your own Rift or Vive. We'll also be examining VR experiences that don't even require a PC, like Samsung's Gear VR, and what they mean for this burgeoning space. We expect it'll be a wild ride. Stay tuned.

Radeon Pro Duo spearheads AMD's push for VR dominance
GDC—At its Capsaicin event this evening, AMD took the wraps off a wide range of software and hardware projects that put the spotlight on virtual reality. The company boasts that its products underpin 83% of VR-capable entertainment systems around the world, a figure driven in large part by the company's presence in the console space. AMD is also exploring VR opportunities beyond gaming in the health care, education, media, training and simulation, and entertainment industries.
First and foremost, the company is releasing a new graphics card called the Radeon Pro Duo. This is the dual-Fiji card (previously known as Gemini or the Fury X2) that CEO Lisa Su first showed off in June of last year. The card comes with 8GB of HBM VRAM. Like the Fury X before it, this card relies on liquid cooling to manage its prodigious heat output. According to AnandTech's Ryan Smith, the card will come with ISV-validated drivers for professional applications, much like AMD FirePro cards.
The Pro Duo is the first card in what AMD is calling its "Radeon VR Ready Creator" program—products meant to serve double duty as powerful VR development and playback platforms alike. The company says the card will deliver 16 TFLOPS of compute performance, and it'll be available early in the second quarter of this year with a $1500 price tag.
Polaris may be the first step in AMD's next-generation GPU architectures, but Radeon Technologies Group VP Raja Koduri also shared a tantalizing look at the company's next-generation roadmap. While the company does admit this roadmap is subject to change, that projected info does offer a first look at when we can expect various features (like HBM2) to arrive on future AMD graphics cards.
AMD is also partnering with Crytek to put Pro Duo-equipped PCs in universities around the world as part of Crytek's VR First project. Crytek and AMD plan to work together to promote VR development for head-mounted devices like the Oculus Rift and HTC Vive using AMD's LiquidVR SDK. LiquidVR is a set of development tools for virtual reality applications. AMD says that its tools let devs perform GPU-accelerated head tracking, harness multiple graphics cards to scale rendering performance, pass head-tracking data to the GPU with a minimum of latency, and ease the process of connecting and displaying VR content on VR headsets.
Along with the VR Creator product tier, AMD is also introducing a "Radeon VR Ready Premium" badge that will identify Radeon graphics cards and Radeon-equipped PCs that should offer a high-quality VR experience. One of those systems is the HP Envy Phoenix desktop we looked at a couple weeks ago. Cards from the Radeon R9 290 series and up should be eligible for the VR Ready Premium badge.
Last, but certainly not least, AMD showed off a version of its Polaris 10 GPU running Hitman. From what we know so far, Polaris 11 is AMD's "console-class" GPU for thin-and-light notebooks. The company expects that graphics chips and cards using Polaris 11 silicon will be able to deliver as much as two times the performance per watt of Nvidia's GeForce GTX 950. The company says that it demonstrated Polaris 11 in December of last year. If that's the case, a Polaris 11 chip running Star Wars Battlefront at 1080p and 60 FPS drew about 90W at the wall. A similarly-configured, GTX-950-equipped PC drew about 140W during that same demo.

Microsoft's push for a unified cross-platform gaming experience backfires
Over the past few years, Microsoft has made a few attempts to build bridges to the PC gaming community. More often than not, though, those efforts have ended with unhappy customers and the perception that Redmond is out of touch with PC gamers. For some time now, it's felt like Microsoft and the PC gaming community have reached an uneasy peace. Over the past week, though, it feels like we've gone from zero to "WTF?" again at a rapid pace.
This most recent spat started with the release of some weird-looking benchmark results for the latest beta of Ashes of the Singularity from the folks at Guru3D. Ryan Shrout at PC Perspective attempted to get to the bottom of this issue, and he discovered some troubling behaviors that might become par for the course for DirectX 12 games in general and Universal Windows Platform games in particular.
According to Shrout, vsync effectively remains on for DirectX 12 games running on AMD hardware, even if it's turned off in a game's settings. Turning off vsync in Ashes doesn't do what one might expect: the game engine renders as fast as it can, but any frames that fall outside the vsync interval are simply dropped from the pipeline. Shrout warns this behavior can still lead to judder during gameplay, even if it does eliminate tearing.
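The frame-dropping behavior Shrout describes can be sketched in a few lines of Python. This is a simplified model of the idea, not PC Perspective's methodology: the engine finishes frames at its own pace, the display refreshes at 60Hz, and at each refresh only the newest completed frame reaches the screen while earlier ones are silently discarded.

```python
# Simplified model of "vsync off" under the compositor Shrout describes:
# the engine renders freely, but the display only picks up the newest
# finished frame at each 60-Hz refresh. The rest are dropped.
def simulate(engine_fps, display_hz, duration_s):
    """Return (shown, dropped) lists of frame completion times."""
    finish_times = [k / engine_fps for k in range(1, int(duration_s * engine_fps) + 1)]
    refreshes = [k / display_hz for k in range(1, int(duration_s * display_hz) + 1)]
    shown, dropped = [], []
    i = 0
    for r in refreshes:
        # Gather every frame the engine completed before this refresh...
        ready = []
        while i < len(finish_times) and finish_times[i] <= r:
            ready.append(finish_times[i])
            i += 1
        if ready:
            shown.append(ready[-1])      # ...but only the newest is displayed
            dropped.extend(ready[:-1])   # the rest never reach the screen
    return shown, dropped

# An engine running at ~90 FPS against a 60-Hz display: roughly a third
# of the rendered frames are thrown away, and the shown frames alternate
# in age, which is where the judder comes from.
shown, dropped = simulate(90, 60, 1.0)
```

The model also shows why average-FPS counters can mislead here: the engine's internal frame rate stays high even though only 60 frames per second ever reach the screen.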
Shrout says that's all thanks to games and drivers taking advantage of Windows Display Driver Model (or WDDM) 2.0. In conjunction with DirectX 12, this new model apparently requires games to use a new compositing method that's similar to borderless windowed mode in today's applications. Using this compositing path lets games run without tearing, but it apparently makes it harder for games to render with the uncapped frame rates that PC gamers have come to know and enjoy.
The furor only intensified when it came to light that Quantum Break, an upcoming DirectX 12 game from Max Payne developer Remedy Entertainment, would only be available on Windows 10 through the Microsoft Store. It appears that Store apps—specifically, UWP games—come with the same kinds of restrictions Shrout discovered when testing Ashes.
According to Mark Walton at Ars Technica, the Windows Store version of Rise of the Tomb Raider won't let users disable vsync, for example, and it has problems running with CrossFire and SLI multi-GPU configs. The Store version of RoTR also doesn't appear to expose an executable file to apps that need one to work, like Steam's Big Picture mode, graphics card control panels, and game overlays like Fraps.
Since other programs can't hook into Store apps, PC Perspective's Shrout worries that a wide range of tools that PC enthusiast sites use to benchmark hardware will no longer work. He thinks that restriction means developers will have to begin writing benchmarking tools into games themselves, something that Ashes of the Singularity developer Oxide Games has done quite competently.
Even if one developer has made a good tool, though, a fragmented benchmarking environment that only lets hardware reviewers see as much of a game's performance as its developer allows is a chilling prospect. That arrangement also puts control of benchmarking results into the hands of those with the most incentive to meddle with them, and it's frightening to consider that PC hardware reviewers might not be able to independently verify the truthfulness of the tools we're given.
It doesn't help that Microsoft's communication about the effects of these new technologies and platforms has been quite muted, given the potential magnitude of the changes they could bring to the future of gaming on Windows. The company held a press event last week to showcase its vision of the future of gaming across the PC and the Xbox One, two islands that it wants to bridge with universal Windows apps. The broader public is just hearing about the details of this event today.
Attendees brought up complaints about vsync and benchmarking (among other issues) with Xbox head Phil Spencer. Going by Sam Machkovech's account at Ars Technica, Spencer said "These [issues] are all in our roadmap...We hear the feedback from the community, and we will embrace it."
The problem from this PC gamer's perspective is that these issues have rarely been problems for games sold through any platform save the one Microsoft is trying to establish. If the company truly understood the wants and needs of the PC gamer, these issues would surely have been ironed out before major titles like Gears of War: Ultimate Edition and Rise of the Tomb Raider exposed them in another publicity firestorm.
No matter how you slice it, this debacle is another black mark on Microsoft's efforts to reach the PC gaming community, and it's one the company could ill afford given its past relationships with that market. Only time will tell whether the company will truly embrace community feedback and make Windows Store games into the kinds of experiences that PC players (and testers) value. For now, though, gamers can simply choose to put their dollars into other, more "open" platforms like Steam, Origin, Uplay, and GOG, and we imagine they'll do just that.

Ashes of the Singularity's second beta gives GCN a big boost with DX12
Ashes of the Singularity is probably the first game that will come to most people's minds these days when we talk about DirectX 12. That title has been the source of a bunch of controversy in its pre-release lifetime, thanks to an ongoing furor about the game's reliance on DirectX 12's support for asynchronous compute on the GPU and that feature's impact on the performance of AMD and Nvidia graphics cards alike.
Ashes of the Singularity will be getting a second beta with a new benchmark ahead of its March 22 debut, and several outlets have been given early access to that software in advance of its official release. AnandTech, ExtremeTech, and Guru3D have all benched this new beta in both single and multi-card configurations, including the kinds of Frankenstein builds we saw when developer Oxide Games previewed support for DirectX 12's explicit multi-adapter mode. Explicit multi-GPU support is being added to the game's public builds for this beta release, too, so more of the press has been able to put Radeons and GeForces side-by-side in the same systems to see how they work together.
While all three reviews are worth reading in-depth, we want to highlight a couple of things. Across every review, Radeon cards tend to lead comparable GeForces every step of the way in single-card configurations, at least in the measure of potential performance that FPS averages give us. Those leads widen as resolution and graphics quality settings are turned up.
For example, with Ashes' High preset and the DX12 renderer, the Radeon R9 Fury X leads the GeForce GTX 980 Ti by 17.6% at 4K in AnandTech's testing, going by average FPS. That lead shrinks to about 15% at 2560x1440, and to about 4% at 1080p. Ashes does have even higher quality settings, and Guru3D put the game's most extreme one to the test. Using the Crazy preset at 2560x1440, that site's Fury X takes a whopping 31% lead over the GTX 980 Ti in average FPS. Surprisingly, even a Radeon R9 390X pulls ahead of Nvidia's top-end card with those settings.
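For readers who want to sanity-check figures like these against published FPS charts, a "percentage lead" is just the ratio of the two cards' average frame rates. The frame rates in this sketch are made-up placeholders, not AnandTech's actual results:

```python
# How a "percentage lead" in average FPS is computed. The FPS values
# below are hypothetical, chosen only to illustrate the arithmetic.
def lead_pct(fps_a, fps_b):
    """Percentage by which card A leads card B in average FPS."""
    return (fps_a / fps_b - 1) * 100

# Hypothetical 4K result: card A at 40 FPS vs. card B at 34 FPS
print(round(lead_pct(40.0, 34.0), 1))  # -> 17.6
```

Note that the same raw FPS gap shrinks as a percentage when both cards run faster, which is one reason leads look smaller at lower resolutions.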
As we saw last year in an earlier Ashes benchmark series from PC Perspective, switching to DirectX 11 reverses the Radeons' fortunes. Using that rendering path causes a significant drop in performance: 20% or more, according to AnandTech's results.
The new Ashes benchmark lets testers flip asynchronous compute support on and off, too, so it's interesting to examine what effect that has on performance. AnandTech found that turning on the feature mildly harmed performance on GeForce cards. Radeons, on the other hand, got as much as a 10% boost in frame rates with async compute enabled. Nvidia says it hasn't enabled support for async compute in its public drivers yet, so that could explain part of the performance drop there.
In the "things you can do, but probably shouldn't" department, the latest Ashes beta also lets testers turn on DX12's Explicit Multiadapter feature in unlinked mode, which we'll call EMU for short. As we saw the last time we reported on performance of cards in an EMU configuration, the feature does allow for some significant performance scaling over single-card setups. It also allows weirdos who want to throw a Fury X and a GTX 980 Ti in the same system let their freak flags fly.
AnandTech did just that with its Fury X and GTX 980 Ti. Using the red team's card as the primary adapter, the site got a considerable performance increase over a single card when running Ashes at 4K with its Extreme preset. The combo delivered about 39% more performance than a GTX 980 Ti alone and about 24% more than an R9 Fury X. With the GTX 980 Ti in the hot seat, the unlikely team delivered about 35% more frames per second on average.
EMU does come with one drawback, though. Guru3D measured frame times using FCAT for its EMU testing, and the site found that enabling the feature with a GTX 980-and-GTX-980-Ti pairing resulted in significant frame pacing issues, or "microstutter," an ugly problem that we examined back in 2013 with the Radeon HD 7990. If microstutter is a widespread issue when EMU is enabled, it could make the feature less appealing.
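Microstutter is a useful case study in why frame-time data beats FPS averages. A crude sketch of the idea (a simplified illustration, not FCAT's actual methodology, and with made-up frame times): two runs can post identical average frame rates while one of them alternates between short and long frames, and the frame-to-frame deltas expose the difference.

```python
# Two hypothetical frame-time traces with the same average FPS.
# Microstutter shows up in the variation between consecutive frames,
# not in the average itself.
def avg_fps(frame_times_ms):
    """Average frames per second over a list of frame times in ms."""
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def mean_abs_delta(frame_times_ms):
    """Average change between consecutive frame times, in ms."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(deltas) / len(deltas)

steady = [16.7] * 60          # even pacing at roughly 60 FPS
stutter = [8.0, 25.4] * 30    # same total time, alternating delivery

# Both traces report ~60 FPS on average, but the stuttering trace
# swings by ~17 ms between consecutive frames while the steady one
# doesn't vary at all.
```

This is why sites like Guru3D lean on FCAT-style frame-time capture for multi-GPU testing: an FPS counter alone would score both traces above identically.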
As with any purportedly earth-shattering numbers, we think there are a few caveats. For one, this is a beta build of Ashes of the Singularity. AnandTech cautions that it's "already seen the performance of Ashes shift significantly since our last look at the game, and while the game is much closer to completion now, it is not yet final."
For two, each site tested Ashes with the Radeon Software 16.1.1 hotfix, AMD's latest beta driver. After the results from each site were published, AMD released the Radeon Software 16.2 hotfix, which contains some Ashes-specific optimizations. We're curious to see what effect the updated driver has on the performance of the game on AMD hardware, and it's entirely possible that the effect could be positive.
For three, as we mentioned earlier, Nvidia says that it still hasn't enabled support for asynchronous compute in its public drivers. GeForce software product manager Sean Pelletier took to Twitter yesterday to point this out, and he also noted that Oxide Games' statement that asynchronous compute support was enabled in the green team's public drivers was incorrect. Given how heavily Ashes appears to rely on asynchronous compute, the fact that Nvidia's drivers apparently aren't exposing the feature to the game could partially explain why GeForce cards are lagging their Radeon competitors so much in this benchmark.
Still, if these numbers are any indication, gamers with AMD graphics cards could see a big boost in performance with DirectX 12, all else being equal. It's unclear whether other studios will take advantage of the same DX12 features that Oxide Games has with Ashes of the Singularity, and a game's claim of DirectX 12 support alone tells us nothing about which of the API's features a particular title actually uses. Even so, we could be on the threshold of some exciting times in the graphics performance space. We'll have to see how games using this new API take shape over the coming months.