AMD’s Radeon VII graphics card reviewed

Ever since AMD refreshed most of its Ryzen product range last year using GlobalFoundries’ 12LP process—an enhanced version of the foundry’s 14 nm FinFET node with higher-performance front-end-of-line transistors but only optional density improvements—and expanded that refresh to Radeons with the RX 590, I sort of expected the company to do the same with its Vega 10 graphics chip at some point. Instead, AMD surprised us at CES by introducing a consumer version of its Vega 20 data-center GPU on board a new graphics card: the Radeon VII. Taking Vega 20 out of the data center means AMD is first to market with a gaming GPU fabricated on TSMC’s cutting-edge 7 nm FinFET process.

 

Vega 20 initially made an appearance in the Radeon Instinct MI50 and MI60 accelerators introduced late last year. Instead of making the biggest chip possible as quickly as possible, as Nvidia has with its recent compute accelerators, AMD only modestly expanded the processing capabilities of the Vega 10 GPU in Vega 20, adding support for some reduced-precision data types useful in deep-learning applications. TSMC’s 7 nm process is a real node shrink with all of the areal scaling such an advance implies, and it does most of the work in delivering a generation-on-generation performance improvement.

Vega 20 takes most of the existing Vega 10 GPU and shrinks it to 331 mm², down from 495 mm² for Vega 10. By shrinking the die in this way, AMD made room on Vega 20’s underlying silicon interposer to add on two more stacks of HBM2 RAM. Vega 20 now talks to its on-package memory over a 4096-bit wide bus, and with HBM2 speeds of 2 Gb/s per pin, that works out to a whopping 1 TB/s of theoretical memory bandwidth. Compare that to the 484 GB/s that Vega 10 enjoyed on board the RX Vega 64, and we should expect a significant performance increase from this move alone.
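
The arithmetic behind those numbers is easy to check. Here’s a minimal sketch; the roughly 1.89 Gb/s per-pin rate for the RX Vega 64’s HBM2 is our own assumption (it’s the figure that reproduces the 484 GB/s number), not a spec quoted above.

```python
# Back-of-the-napkin check on theoretical HBM2 bandwidth from bus width and
# per-pin data rate. Figures are the ones discussed in the text, not vendor specs.

def hbm2_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Theoretical bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * gbps_per_pin / 8  # bits per second -> bytes per second

vega_20 = hbm2_bandwidth_gbs(4096, 2.0)   # four HBM2 stacks on Vega 20
vega_10 = hbm2_bandwidth_gbs(2048, 1.89)  # two stacks on the RX Vega 64 (assumed rate)

print(f"Vega 20: {vega_20:.0f} GB/s")  # 1024 GB/s, or about 1 TB/s
print(f"Vega 10: {vega_10:.0f} GB/s")  # ~484 GB/s
```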

 

TSMC has claimed that its first generation of 7 nm transistors can deliver up to 20% higher performance, or 40% power savings at the same performance, versus its 16 nm node. For the Radeon VII, AMD pushed that slider all the way toward performance: the company has already said this card would deliver its improved performance at the same power as the RX Vega 64.

As a result, the Radeon VII’s board power rings in at 300 W, up 5 W over even the notoriously power-hungry RX Vega 64. In return, AMD clocks the Radeon VII’s Vega 20 chip at a 1750 MHz typical boost speed, up about 13% over the Radeon RX Vega 64 in its air-cooled configuration. As we’ll see from some back-of-the-napkin calculations to come, that clock-speed increase more than makes up for the fact that AMD has had to disable four of Vega 20’s 64 compute units, presumably for yield reasons. Along with those compute units, the Radeon VII appears to lose 16 of its texture units. We’re confirming that figure with AMD and will update our numbers as soon as we can.

In its descent from the data center, the Vega 20 GPU loses some features that will be of little use to enthusiasts. For example, Radeon Instinct accelerators using Vega 20 support both PCI Express 4.0 and coherent card-to-card communications using the Infinity Fabric interconnect, but the Radeon VII sticks to the far more common PCIe 3.0 standard and drops the data-center-class inter-card connection. Vega 20 also offers support for FP64 (aka double-precision) data types at up to 1/2 the FP32 rate, and unusually, some of that FP64 processing capability survives on the Radeon VII in a form that may be useful to folks who need it.

This morning, AMD told AnandTech that it’s ultimately decided to enable 1/4-rate FP64 on the Radeon VII after earlier statements suggesting that 1/8 would be the rate of choice. Some more back-of-the-napkin math suggests that in its final form, the Radeon VII could crunch through FP64 workloads at 3.35 TFLOPS. That’s quite a bit better than the 1.6-TFLOP peak rate that the Kepler-powered GTX Titan offered back in the day. GTX Titan cards have endured as relatively cheap founts of FP64 performance for those who need it, so AMD might sell some Radeon VIIs to scientific-computing folk who want useful FP64 performance alongside an otherwise much more modern graphics card.
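
Here’s a quick sketch of that napkin math, assuming the usual two FLOPS (one fused multiply-add) per shader per clock and the typical boost clocks from the table below; the same arithmetic also shows how the Radeon VII’s clock bump more than offsets its four disabled compute units versus the RX Vega 64.

```python
# Back-of-the-napkin FP32/FP64 throughput, assuming two FLOPS (one FMA) per
# shader per clock. These are theoretical peaks, not measured performance.

def fp32_tflops(compute_units, shaders_per_cu, boost_ghz):
    return compute_units * shaders_per_cu * 2 * boost_ghz / 1000

radeon_vii = fp32_tflops(60, 64, 1.750)  # four of Vega 20's 64 CUs disabled
rx_vega_64 = fp32_tflops(64, 64, 1.546)  # air-cooled RX Vega 64 for comparison

print(f"Radeon VII FP32: {radeon_vii:.1f} TFLOPS")      # ~13.4, vs ~12.7 for Vega 64
print(f"Radeon VII FP64: {radeon_vii / 4:.2f} TFLOPS")  # ~3.4 at the 1/4 rate
```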

Here’s a table comparing the Radeon VII’s basic specs to several of today’s most common pixel-pushers:

                Boost clock  ROP pixels/  INT8/FP16       Shader      Memory       Memory     Memory
                (MHz)        clock        textures/clock  processors  path (bits)  bandwidth  size
RX Vega 56      1471         64           224/112         3584        2048         410 GB/s   8 GB
GTX 1070        1683         64           108/108         1920        256          259 GB/s   8 GB
RTX 2060 FE     1680         48           120/120         1920        192          336 GB/s   6 GB
RTX 2070 FE     1710         64           120/120         2304        256          448 GB/s   8 GB
GTX 1080        1733         64           160/160         2560        256          320 GB/s   8 GB
RX Vega 64      1546         64           256/128         4096        2048         484 GB/s   8 GB
Radeon VII      1750         64           240/120?        3840        4096         1 TB/s     16 GB
RTX 2080 FE     1800         64           184/184         2944        256          448 GB/s   8 GB
GTX 1080 Ti     1582         88           224/224?        3584        352          484 GB/s   11 GB
RTX 2080 Ti FE  1635         88           272/272         4352        352          616 GB/s   11 GB
Titan Xp        1582         96           240/240         3840        384          547 GB/s   12 GB
Titan V         1455         96           320/320         5120        3072         653 GB/s   12 GB

From this chart, a couple of things should stand out. One is that the Radeon VII offers by far the highest theoretical memory bandwidth of any card we’ve ever had our hands on. At $699, the Radeon VII is also the cheapest way yet to get 16 GB of RAM on a graphics card. We typically don’t see that much VRAM until you get into pro visualization products, but with the advent of the Radeon VII, AMD is pushing 16 GB as a new enthusiast standard. Since memory capacity is one of the Radeon VII’s clearest spec wins over the RTX 2080 and its 8 GB of RAM, it’s no surprise that AMD is making hay of this feature. The company suggests a number of current and upcoming games can benefit from having more than 8 GB of video memory available for gaming at 4K and maximum settings, and it also highlights the value of that memory for prosumer workloads like 4K and 8K video processing with Adobe Premiere.

On a gut level, this push for higher memory capacities on cards destined for 4K gaming makes sense. We know that texture sizes are growing and seem likely to grow further as gamers demand higher and higher quality visuals, and I know a couple video producers who might appreciate having more video memory available without paying through the nose for it. AMD says Far Cry 5 can occupy as much as 12.9 GB of VRAM at 4K and max settings, and the company provided us a scary-looking frame time graph that suggests major inconsistency could arise if an RTX 2080 runs over its available memory pool. Far Cry 5 is one of the games we use in our test suite, so that’s an easy point to return to for further analysis.

                Peak pixel   Peak bilinear        Peak rasterization  Peak FP32 shader
                fill rate    filtering INT8/FP16  rate                arithmetic rate
                (Gpixels/s)  (Gtexels/s)          (Gtris/s)           (TFLOPS)
RX Vega 56      94           330/165              5.9                 10.5
GTX 1070        108          202/202              5.0                 7.0
RTX 2060 FE     81           202/202              5.0                 6.5
RTX 2070 FE     109          246/246              5.1                 7.9
GTX 1080        111          277/277              6.9                 8.9
RX Vega 64      99           396/198              6.2                 12.7
Radeon VII      112          420/210              7.0                 13.4
RTX 2080        115          331/331              10.8                10.6
GTX 1080 Ti     139          354/354              9.5                 11.3
RTX 2080 Ti     144          473/473              9.8                 14.2
Titan Xp        152          380/380              9.5                 12.1
Titan V         140          466/466              8.7                 16.0

As a hotter-clocked Vega chip, the Radeon VII offers modest but meaningful improvements versus its RX Vega 64 predecessor in every one of our usual theoretical peak figures. Really, there are no surprises here. The basic resource allocation of Vega 20 didn’t change as part of the move to the 7 nm node, and the widening of the chip’s memory bus largely overshadows the clock-speed-related gains in the table above. Graphics tends to be a memory-bandwidth-bound workload, and making sure the beast is fed is unsurprisingly a priority for GPU architects. Perhaps that wider memory bus sat atop the list of fixes AMD envisioned for Vega after its first time out in shipping products.
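
As a worked example of where the Radeon VII’s row in that table comes from, here’s the arithmetic behind its peak figures. The assumption of four geometry engines rasterizing one triangle per clock apiece is ours, carried over from earlier Vega parts, rather than something AMD has confirmed for this card.

```python
# Theoretical peak rates for the Radeon VII from its basic specs. FP16 texture
# filtering runs at half the INT8 rate on Vega, hence the 420/210 split above.

boost_ghz     = 1.75
rops          = 64
texture_units = 240   # INT8 texels filtered per clock
shaders       = 3840
geom_engines  = 4     # assumed: one triangle per clock each

pixel_fill_gpix  = rops * boost_ghz                 # ~112 Gpixels/s
int8_filter_gtex = texture_units * boost_ghz        # ~420 Gtexels/s
fp16_filter_gtex = int8_filter_gtex / 2             # ~210 Gtexels/s
raster_gtris     = geom_engines * boost_ghz         # ~7.0 Gtris/s
fp32_tflops      = shaders * 2 * boost_ghz / 1000   # ~13.4 TFLOPS

print(pixel_fill_gpix, int8_filter_gtex, fp16_filter_gtex, raster_gtris, fp32_tflops)
```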

The Radeon VII card itself should be a well-known quantity by this point. Yours truly wasn’t on the ground at CES, but the card was, and the press enjoyed extensive access to it during the show. The Radeon VII takes the basic bead-blasted, diamond-cut aluminum shroud design introduced on limited-edition Radeon RX Vega 64 cards and turns it into a home for three fans blowing onto an open-style heatsink more typical of custom cooler designs from board partners.

We’re hesitant to tear down and repaste coolers that need to go back on large GPU packages with HBM RAM like Vega 20, but the fine folks at Gamers Nexus have no such qualms, and they’ve already torn down their card ahead of today’s embargo lift. From GN’s work, we know that the Radeon VII cooler uses a massive, complicated copper vapor chamber to wick heat away from the Vega 20 package and into an aluminum fin stack. Perhaps thanks to that vapor chamber, AMD is able to keep the Radeon VII reference design within the boundaries of two slots, although the cooler shroud does jut slightly beyond the edge of the expansion-card bracket. The 300 W board power we alluded to earlier manifests itself in this card’s pair of eight-pin PCIe plugs.

It’s interesting to see AMD follow in Nvidia’s Founders Edition footsteps for Turing by introducing an open-air cooler on the Radeon VII despite its much higher board power than any GeForce RTX card. The RX Vega 64’s blower-style heatsink may not have been the best thing going for absolute noise levels, but it did dump the copious heat that card produced out the rear of a PC. Systems with a Radeon VII inside, on the other hand, will need to be well-ventilated to remove the hot air exhausted by its open-style cooler, although that demand is no different than the challenges posed by Nvidia’s latest reference coolers.

AMD stickers the Radeon VII at $699, or the same suggested price as GeForce RTX 2080 partner cards. The company has drawn a bead on Nvidia’s best-performing Turing card under $1000 over and over in the run-up to this morning. Let’s see if AMD hit its target now.

 

Our testing methods

If you’re new to The Tech Report, we don’t benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.

What’s more, we don’t typically rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
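
For the curious, here’s a simplified sketch of how a single capture gets boiled down to the headline numbers we lean on. It assumes a PresentMon-style CSV with an MsBetweenPresents column and uses hypothetical file names; it illustrates the method rather than reproducing our actual tooling.

```python
# Reduce OCAT/PresentMon-style captures to average FPS and a 99th-percentile
# frame time, then take the median across repeated runs of the same scenario.

import csv
import statistics

def summarize_run(csv_path):
    """Return (average FPS, 99th-percentile frame time in ms) for one capture."""
    with open(csv_path, newline="") as f:
        frame_times_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    avg_fps = 1000 / statistics.mean(frame_times_ms)
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th percentile
    return avg_fps, p99_ms

# Hypothetical capture files for three runs of one test scenario.
runs = [summarize_run(path) for path in ("run1.csv", "run2.csv", "run3.csv")]
median_avg_fps = statistics.median(fps for fps, _ in runs)
median_p99_ms = statistics.median(p99 for _, p99 in runs)
print(f"{median_avg_fps:.1f} FPS average, {median_p99_ms:.1f}-ms 99th-percentile frame time")
```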

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor Intel Core i9-9900K
Motherboard MSI Z370 Gaming Pro Carbon
Chipset Intel Z370
Memory size 16 GB (2x 8 GB)
Memory type G.Skill Flare X DDR4-3200
Memory timings 14-14-14-34 2T
Storage Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply Seasonic Prime Platinum 1000 W
OS Windows 10 Pro version 1809

Thanks to Intel, Corsair, G.Skill, and MSI for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and Gigabyte supplied the graphics cards we used for testing today, as well.

Graphics card                                Boost clock (specified)  Graphics driver version
Nvidia GeForce GTX 1080 Ti Founders Edition  1582 MHz                 GeForce Game Ready 418.81
Gigabyte GeForce RTX 2070 Gaming OC 8G       1725 MHz                 GeForce Game Ready 418.81
Nvidia GeForce RTX 2080 Founders Edition     1800 MHz                 GeForce Game Ready 418.81
Nvidia GeForce RTX 2080 Ti Founders Edition  1635 MHz                 GeForce Game Ready 418.81
AMD Radeon RX Vega 64                        1546 MHz                 Radeon Software Adrenalin 2019 Edition Press
AMD Radeon VII                               1750 MHz                 Radeon Software Adrenalin 2019 Edition Press

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 3840×2160, unless otherwise noted. We enabled HDR in games where it was available. Our HDR display is an LG OLED55B7A television.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Hitman 2


Hitman 2’s world of assassination comes with stunning lighting effects and lots of scene complexity, and it’s drop-dead gorgeous in 4K with HDR. All that makes for a tough time getting to 60 FPS for most of our graphics cards. The Radeon VII gets off to a strong enough start by matching the GTX 1080 Ti nearly frame for frame, but the RTX 2080 has an undeniable advantage in both average frame rates and 99th-percentile frame times.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time each graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. To fully appreciate this data, recall that our graphics card tests generally consist of one-minute test runs and that 1000 ms equals one second.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then you’re likely to perceive a slowdown. 33.3 ms correlates to 30 FPS, or a 30-Hz refresh rate. Drop below that rate with vsync on, and you’re into the bad voodoo of quantization slowdowns. Also, 16.7 ms correlates to 60 FPS, that golden mark we’d like to achieve (or surpass) for each and every frame.

In less demanding or better-optimized titles, it’s useful to look at our strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
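
Here’s a minimal sketch of that accounting, under the assumption that the tallies count only the portion of each frame that runs past a given threshold; the frame times in it are made up purely for illustration.

```python
# "Time spent beyond X": for every frame that takes longer than a threshold,
# accumulate the amount of time it ran past that threshold over the test run.

def time_beyond_ms(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frame_times_ms = [14.2, 18.9, 35.0, 16.1, 52.3]  # made-up frame times, in ms

for threshold in (50.0, 33.3, 16.7, 8.3, 6.94):
    print(f"beyond {threshold} ms: {time_beyond_ms(frame_times_ms, threshold):.1f} ms")
```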

At the 16.7 ms threshold, the Radeon VII proves the equal of the GeForce GTX 1080 Ti. Both cards spend about six seconds of our one-minute test run rendering frames that take longer than 16.7 ms to finish. The RTX 2080 impresses by spending less than one-third the time completing such frames, though.

 

Forza Horizon 4 with MSAA

Imagine MSAA at 8X here.


Forza Horizon 4 hit the road as a showcase for Radeon graphics performance, but after breakout results for the red team early on, that lead has dwindled to performance parity with Nvidia cards. Problem is, our pedal-to-the-metal settings with 4K and HDR in this title resulted in a substantial—and atypical—GeForce lead. Even after moving the Radeon VII to several different test systems and trying any number of platform-related fixes, we couldn’t get our delivered results to mesh with the expectation of performance parity for Radeons in this title.

After bringing up this issue with AMD, the company advised us that using 8X MSAA in Forza Horizon 4 will reduce performance on its products more than it will on GeForces. That suggests the 64-ROP complement that’s graced high-end Radeons since the Hawaii days is perhaps no longer enough to handle MSAA at 4K in this title.

My understanding is that MSAA delivers a one-two punch because it both relies on fixed-function hardware and can be quite memory-bandwidth intensive, even with modern color compression techniques. It appears that the Radeon VII’s terabyte per second of theoretical bandwidth can’t overcome bottlenecks elsewhere in the GPU.


Presuming we are looking at a ROP bottleneck with these settings, that resource constraint hits our Radeons hard. Both the RTX 2080 and the GTX 1080 Ti spend less than half the time the Radeon VII needs past 16.7 ms on tough frames here, and the Vega 64 completely runs out of gas.

As part of our conversation with AMD, the company suggested that FXAA strikes the best balance between image quality and performance on its cards in this title. We didn’t want to discard our MSAA test results just because of that advice; it should come as no surprise that using a less computationally expensive technique results in higher performance. Still, we figured we at least ought to give that alternate technique a shot to see whether it traded noticeably lower image quality for its extra performance.

 

Forza Horizon 4 with FXAA


Flipping off MSAA puts a better face on performance for all of our graphics cards, and it may well be the optimal way to play this game with minimal jaggies. Forza Horizon 4’s FXAA implementation is quite good, and even my jaded graphics reviewer’s eye has to admit that there’s precious little difference in image quality between running 8X MSAA and just using FXAA in this title.

Since FXAA is implemented as a pixel shader rather than a technique that employs fixed-function hardware, all of our cards get a nice speedup, and the Radeons can put their prodigious shader resources to use rather than getting bottlenecked by ROP throughput. The net result is that the Radeon VII pulls even with the GTX 1080 Ti on average FPS and 99th-percentile frame times, but the RTX 2080 is just that little bit better yet.


None of our cards need longer than 50 ms or 33.3 ms to render a frame, and both the GTX 1080 Ti and the Radeon VII spend just under a second on tough frames that need longer than 16.7 ms to finish. The RTX 2080 proves better still, spending only a vanishing fraction of a second on such frames. In any case, any of our $699-and-up cards would make a fine choice for 4K HDR gaming with Forza Horizon 4 using FXAA.

 

Assassin’s Creed Odyssey

To test Assassin’s Creed Odyssey, we cued up the Very High preset at 4K with HDR enabled.


Our 4K HDR test of Assassin’s Creed Odyssey reveals stormy seas for the Radeon camp. We’ve been aware that enabling HDR on Radeon cards can cause worse frame-time spikes than usual in the latest Assassin’s Creed titles ever since we tested 4K HDR performance in Assassin’s Creed Origins for our RTX 2080 review, but we didn’t really dig into it at the time.

Whatever the cause for those spikes, it appears to have persisted with the release of Assassin’s Creed Odyssey. The Radeon VII’s average frame rate lands between that of the GTX 1080 Ti and the RTX 2080, but its 99th-percentile frame time trails even the RTX 2070. That’s not good news for gaming smoothness.


Our time-spent-beyond-X graphs show that the worst of the Radeon VII’s spikes cause it to put up some time at the 33.3-ms mark, and those are hitches you can feel as you traverse Odyssey’s rendition of ancient Greece. At the 16.7-ms mark, though, the Radeon VII pulls ahead of the GTX 1080 Ti, even as both cards spend about one-fourth of our one-minute test run finishing frames that take longer than 16.7 ms to render. If AMD can fix those frame-time spikes, it’ll make the Grecian roads a lot less bumpy.

 

Far Cry 5

As we did for Assassin’s Creed Odyssey, we tested Far Cry 5 at 4K with HDR enabled.


Far Cry 5 is another title that has a rough relationship with Radeons when HDR is switched on, and the Radeon VII’s performance in rural Montana echoes its time in ancient Greece. AMD’s 7 nm darling nearly catches the RTX 2080 on average frame rates, but its 99th-percentile frame time comes in way behind that of its Turing rival.


It’s a shame that those hitches spoil the Radeon VII’s delivered smoothness by putting time on the board at the 50 ms and 33.3 ms marks, because the card is otherwise quite capable for 4K gaming. The Radeon VII shaves nearly a second and a half off the GTX 1080 Ti’s time spent finishing fussy frames at 16.7 ms, even as the RTX 2080 shaves yet another second off that result. Again, if AMD can fix those intermittent spikes, the Radeon VII will be quite the competitor for Nvidia in this title.

 

Monster Hunter: World


Monster Hunter: World once again proves that it’s a monster of a game to run well as we dial in a 4K resolution and demanding settings, including HDR. The Radeon VII can’t even outpace the RTX 2070, let alone the GTX 1080 Ti or RTX 2080. At least AMD’s latest and greatest keeps 99th-percentile frame times under control.


Given the low frame rates from the Radeon camp, it’s most relevant to start our time-spent-beyond analysis at the 33.3 ms threshold. Despite its relatively low average frame rate in this title, the Radeon VII doesn’t spend more than a few milliseconds on frames that take longer than 33.3 ms to finish. If 30 FPS gaming at 4K in this title is your goal, the Radeon VII will at least not pummel you with frame-time spikes that lead to an unplayable experience. Still, our 16.7 ms graph shows that the RTX 2080 proves a far more effective tool for getting the job done in Monster Hunter: World at 4K.

 

Battlefield V


Chalk up a much-needed win for the Radeon VII as we close out our gaming tests. AMD’s 7 nm baby turns in a sterling 60 FPS average and a 99th-percentile frame time that suggests only minor roughness in Battlefield V at 4K and with HDR enabled. Meanwhile, our Turing GeForces all appear to struggle with a stuttering issue that pushes up 99th-percentile frame times.


A peek at our 50 ms and 33.3 ms graphs shows that the spikes the GeForces are experiencing aren’t the kind of punch-you-in-the-face fuzziness we abhor the most, but they do leave their marks at the 16.7 ms threshold. Here, the Radeon VII spends a little under two seconds of our one-minute test run on frames that take longer than 16.7 ms to finish, while the RTX 2080 puts up more than double the time. Even without the spikes of its Turing successors, the GTX 1080 Ti spends more than three times as long wrapping up such frames. If you don’t have any interest in Battlefield V’s DirectX Raytracing effects, the Radeon VII proves unusually capable in this title.

 

A quick look at noise and power consumption

The most nagging question I’ve had since CES was just how efficient the Radeon VII would be thanks to its underlying TSMC 7 nm FinFET process. Back then, AMD CEO Lisa Su said the Radeon VII would deliver 25% higher performance for the same power as its RX Vega 64 predecessor. To find out whether that’s the case, we turned to the 2016 reboot of Doom and kept an eye on our Watts Up power meter to monitor system power draw from the wall.

Here’s a shocker. Despite being rated for a similar board power to that of the Radeon RX Vega 64, the Radeon VII actually lets our test system consume quite a bit less power while delivering much higher performance. We wouldn’t be surprised if the actual improvement in performance per watt for the Radeon VII greatly exceeds that 25% figure.

Even though the Radeon VII marks a much-needed improvement in performance per watt for the red team, the net effect of that shrink has been to bring the Radeon VII’s approximate performance per watt on par with that of the GTX 1080 Ti, whose GP102 GPU was fabricated on TSMC’s 16 nm FinFET process—in other words, a whole node behind 7 nm FinFET.

Going by these results, I’d venture that AMD’s graphics processors badly need a Maxwell moment (or a moment of Zen, if you prefer) that puts the seven-year-old basic design of the Graphics Core Next microarchitecture to rest. GCN has evolved plenty over time, to be sure, but Vega 20 shows that the fruit of that evolution is far off the pace AMD needs to compete with Nvidia on performance per watt.

As we first saw with AMD’s Polaris products, the red team has let process advances take the lead in delivering both performance and performance-per-watt improvements of late, while the stagnation at 28 nm apparently spurred Nvidia to find ways to improve its products’ performance per watt independent of process gains—an investment that has paid off handsomely over time. The Turing architecture appears to have a virtual node and change worth of advantage on AMD in the performance-per-watt race, on top of whatever benefits Nvidia’s chips may yet enjoy from migration to a next-generation process. Scary stuff.

It’s not all roses for the Radeon VII once we pull out the decibel meter, either. Despite the fact that this card draws less power than the RX Vega 64, and even with its elaborate copper vapor chamber and full-length fin stack, the Radeon VII reference design has to spin its trio of fans at high RPMs to move heat away from the card. In turn, the Radeon VII is actually louder at full tilt than the already noisy RX Vega 64 reference design.

Meanwhile, the RTX 2080 Founders Edition comes in more than 10 dBA quieter than the Radeon VII. Since dBA is a logarithmic scale, a rough rule of thumb is that such a difference is enough to produce a halving of perceived noise levels. Sure, one can don headphones or crank the volume knob to drown out the racket the Radeon VII makes, but you will notice this card’s din in quiet parts of games where low noise floors might matter, and its whine may annoy other people in a room with you.
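
That rule of thumb rests on the common psychoacoustic approximation that a 10 dB change is perceived as roughly twice (or half) as loud, which is easy to express:

```python
# Rough perceived-loudness ratio for a given dBA difference, using the common
# approximation that a +10 dB change sounds roughly twice as loud.

def perceived_loudness_ratio(delta_dba):
    return 2 ** (delta_dba / 10)

print(perceived_loudness_ratio(10))   # ~2.0: about twice as loud
print(perceived_loudness_ratio(-10))  # ~0.5: about half as loud
```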

Given this card’s price tag, it’s hard not to be disappointed with how loudly it runs. The Radeon R9 Fury X managed to incorporate a whisper-quiet liquid cooler for $50 less than the Radeon VII commands, and the liquid-cooled edition of the RX Vega 64 rang in at the same $699.99 price tag as this card. Perhaps AMD can fine-tune the Radeon VII’s fan curve in a future driver update to better balance noise levels and delivered performance, but for now, we think its noise character is unbecoming of a premium product.

 

Conclusions

Let’s sum up the hundreds of graphs on the preceding pages with our famous value scatter plots. To create these charts, we take the geometric mean of both average frame rates and 99th-percentile frame times across all of our tests, then convert that 99th-percentile number into FPS to make our higher-is-better logic work. We’ve included graphs that incorporate data from Forza Horizon 4 with both MSAA and FXAA so that you can verify how our use of a geometric mean prevents a stumble in one game from affecting our overall results too much.
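
Here’s a minimal sketch of how those scatter-plot coordinates come together, with hypothetical per-game numbers standing in for our real data:

```python
# Build one card's scatter-plot point: geometric means of per-game average FPS
# and 99th-percentile frame times, the latter converted to FPS so that higher
# is better on both axes.

from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

# Hypothetical per-game results for one card: (average FPS, 99th-percentile ms).
results = [(61.0, 22.4), (74.0, 18.1), (55.0, 30.2)]

overall_avg_fps = geomean([fps for fps, _ in results])
overall_p99_fps = geomean([1000 / p99 for _, p99 in results])  # ms -> FPS

print(f"{overall_avg_fps:.1f} average FPS, {overall_p99_fps:.1f} 99th-percentile FPS")
```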

For pricing data, we sought out e-tail listings for cards still being sold. For cards that are no longer available, we used the manufacturer’s last known suggested price.



Stop us if you’ve heard this one before. On the whole, the Radeon VII’s performance potential (as measured by average frame rates) lets it slightly outpace the venerable GeForce GTX 1080 Ti, making it the fastest single-GPU Radeon card we’ve ever tested by that metric. Our final reckoning puts the Radeon VII just 8% behind the RTX 2080 on that measure, too. That’s not bad, considering that the RTX 2080 Founders Edition we tested is a more expensive and hotter-clocked card than a reference-spec RTX 2080 may be.

Going by our 99th-percentile FPS metric for delivered smoothness, however, the GTX 1080 Ti and RTX 2080 both outstrip the Radeon VII. Right now, even Nvidia’s $500-ish RTX 2070 will generally offer you a 4K HDR gaming experience as smooth as what AMD’s $699 fighter can deliver, and that’s with our most favorable test case for Forza Horizon 4 rolled in. If you prefer MSAA to FXAA in that title, the ride on Radeons gets even rougher, and our conversations with the company suggest that’s because of a ROP bottleneck that’s not going away.

Even with the Radeon VII’s best foot forward in Forza Horizon 4, it’s disappointing to see a product this expensive and this critical go down such a bumpy road at launch. We’ve been banging this drum for seven years now. AMD has proven that it can iron out those wrinkles with driver updates and time, and we don’t think a 15% improvement in 99th-percentile frame rates is an insurmountably high bar to clear if the goal is to catch the RTX 2080. Such a figure might be a deterrent to dropping $700 on one of these cards until AMD’s drivers shape up, though.

On performance per watt, the Radeon VII marks an important advance for AMD. It consumes less power than the Radeon RX Vega 64 to deliver much higher performance. Even so, it needs quite a bit more juice to do its thing than both the RTX 2080 and GTX 1080 Ti. AMD’s impressive-looking triple-fan cooler comes tuned for maximum performance, too, so it’s even slightly louder in operation than the notoriously noisy blower on the RX Vega 64. Both Nvidia’s reference blower for Pascal and its biggest dual-fan Founders Edition cooler for Turing will keep a room’s noise floor noticeably lower than that of the Radeon VII’s heatsink.

Perhaps because of its data-center DNA, the Radeon VII can’t even move the stubbornly stationary performance-per-dollar bar. This card delivers the same bang for the buck that the GTX 1080 Ti did at launch, and aside from 16 GB of RAM that’s of unclear value to gamers today, it lacks any forward-looking features like the nascent real-time ray tracing and DLSS functions of Nvidia’s Turing architecture for curious folks to kick the tires on. With four stacks of costly HBM2 RAM and a cutting-edge lithography process at its heart, we doubt AMD could trim much fat from the Radeon VII’s price tag without whittling bone. If $699 is the price the Radeon VII has to command, though, it makes the card a tough sell for what it delivers.

For even money between the RTX 2080 and the Radeon VII, we’d put our bet on the green team for the moment. Perhaps thanks to the arrival of the Radeon VII, swift and whisper-quiet RTX 2080 partner cards are now selling for only small premiums over Nvidia’s $699.99 suggested price. You’ll enjoy faster and smoother gaming for your money than the Radeon VII can offer right now. We’ll need to defer final judgment on the value of the Radeon VII’s 16 GB of memory versus the RTX 2080’s 8 GB complement, but that deficit didn’t appear to cause issues for the Turing card even in Far Cry 5, a title that AMD highlighted as one of the worst memory hogs around for 4K gaming at max settings.

We imagine the Radeon VII might be the right card for some people. Perhaps your day-to-day work eats VRAM like there’s no tomorrow, and you only care about gaming on the side. Maybe you don’t care in the least about what you’ve seen of hybrid rendering with real-time ray tracing, and you passed up the GTX 1080 Ti at its zenith. Maybe you just can’t bear the thought of putting one red cent in Jensen Huang’s jacket fund. If any or all of those things describe you, the Radeon VII is as good as it gets for an alternative choice in high-end graphics right now. We just wish it were a smoother, quieter, and cheaper one.

Comments closed
    • torquer
    • 6 months ago

    I was just sitting here imagining what it would be like to see interior-facing dashcam video of Chuckula, DoomGuy64, Srsly_Bro, and auxy on a cross country road trip.

      • Shobai
      • 6 months ago

      https://m.youtube.com/watch?v=d-6Iqnp2Ja8 ??

    • Jeff Kampman
    • 7 months ago

    For everybody who wanted 2560×1440 SDR data, I have it and we will be publishing it somehow once we figure out how to make an article of it.

      • Redocbew
      • 7 months ago

      Radeon VII: Return of The Buffalo.

      Where you don’t talk about 4k.

        • chuckula
        • 7 months ago

        Radeon VIII: The Last Buffalo [who likes to drink green milk from a sea cow for some reason]

      • JustAnEngineer
      • 7 months ago

      Thanks, Jeff.

      From a selfish standpoint, the additional data will provide the most appropriate comparison for gaming on my 2560×1440 display with 48-144 Hz FreeSync. I do believe that you were correct to test the new graphics cards at 3840×2160. The higher resolution is more likely to show differences between GPUs rather than bottlenecks caused by CPU and RAM.

      • ermo
      • 7 months ago

      Cheers Jeff. Much appreciated.

      Do you have any 3840x1600p or 3840x1440p gaming monitors at your disposal? I’m inclined to believe that these are actually the more interesting targets for gaming these days, as they split the difference between 2560x1440p and 3840x2160p very nicely?

      Maybe a roundup with 2560×1440, 3840×1440/1600 (whichever you have access to/prefer) and re-using the 4K data would make sense?

      • enixenigma
      • 7 months ago

      EDIT: reading apparently isn’t my strong suit. Please ignore.

      • Kretschmer
      • 6 months ago

      Excellent. I’d imagine this is the most relevant dataset to your viewers.

      • K-L-Waster
      • 6 months ago

      More data is always welcome.

    • stefem
    • 7 months ago

    Nice review Jeff, but it’s still missing the Beyond3D suite testing. And why is the RTX 2080 listed as $800 while I can find plenty at the $700 MSRP (amazon, newegg)? It even shows up in the January techreport system guide where you are recommending the RTX 2080 for $699

      • Jeff Kampman
      • 7 months ago

      I explained this in the conclusion but for cards still being sold, we used the actual retail price of the card we tested. In this case the RTX 2080 Founders Edition sells for $799.99 so that’s the price in the chart.

    • wierdo
    • 7 months ago

    Looks like the card has an interesting niche to work with:
    https://www.hardocp.com/news/2019/02/07/16gb_hbm2_on_radeon_vii_needed_for_real_world_4k_video_production/

    “In this real-world use case, the 12GB NVIDIA Titan X Pascal would crash during the export phase of head-to-head 4K video production when mixing six 4K video clips on a timeline and using 3 taxing transitions. This "Accelerated Renderer Error" was 100% due to a lack of GPU memory according to Mr. Leadbetter. But when the exact same task was presented to the AMD Radeon VII; there were no problems”

    Barring Adobe doing something about this suite’s limitation here, that’s a handy feature for prosumers and some streamers dabbling in 4k perhaps.

      • tipoo
      • 7 months ago

      Good find. This would lend further credence to the idea that this kind of oddball chip exists largely because a certain fruit company would have requested it, while we’re probably getting the lesser binned/higher power chips.

        • derFunkenstein
        • 7 months ago

        We all thought the Core i7-8705G existed for the same reason, but it never made it into a fruit salad.

          • Krogoth
          • 7 months ago

          I thought that i7-8705G was more of Raja’s little “pet project” of trying to get AMD RTG sold off to Intel.

    • djayjp
    • 7 months ago

    Now AMD make one that instead uses the 50% spare chip area with more conservative clocks and have ~6000 shader processors…. Oh and use only 3 HBM2 chips this time.

      • auxy
      • 7 months ago

      And it won’t be any faster for games because half of this card is already sitting totally dark in games. (。-`ω-)

        • djayjp
        • 7 months ago

        Wut?

          • auxy
          • 7 months ago

          Well, besides all the dark silicon in Vega anyway (all the supposed advancements that never came to fruition, which is why Vega runs exactly like a fat Polaris)…

          …it’s the same issue Radeons have had since Tonga/Fiji. They have these huge shader arrays that are absolutely not being used in games. Games just don’t have that kind of computational workload; the bottlenecks are in geometry processing, memory bandwidth, and simple, raw-ass fillrate.

          So your hypothetical chip with ~6000 shader processors isn’t useful for games if they don’t expand the fixed-function parts and improve the geometry processing. (Yah it could be useful for other things, but I was and am specifically talking about BIDEO GAEM.)

          Basically, AMD needs a new GPU architecture. GCN is still capable, but it’s long in the tooth and ill-suited for serving today’s market; it’s RTG’s Phenom circa the end of 2010. Let’s hope AMD’s next architecture isn’t RTG’s Bulldozer. (´・ω・)

            • Krogoth
            • 7 months ago

            There’s no “dark silicon” in GCN. AMD RTG was attempting to alleviate GCN 1.0-1.4’s geometric performance bottleneck through software by using the excessive shading power for non-shading operations. This was only implemented in several professional-tier applications (which is where Vega managed to keep up with its Pascal/Volta counterparts). That plan fell through mainstream side which is why Raja and his followers jumped the AMD RTG boat.

            I wouldn’t be surprised that Intel’s upcoming discrete GPUs will end-up resembling “Vega” with proper support.

            • djayjp
            • 6 months ago

            Hmm idk I’m skeptical of your claim. I think the bulk of processing on GPUs is shading work. They could scale up those other components too rather than wasting the budget on ram. But I think the thing that needs the most work is their texture/colour compression engine and the sometimes evident cpu driver overhead.

      • Waco
      • 7 months ago

      If they had the budget to build one that focused primarily on rasterization and not compute it’d be incredible – but I don’t think that’ll be happening till Navi. Even Navi is going to have more compute in it than a gaming GPU should, but I guess we’ll see how it plays out.

        • stefem
        • 7 months ago

        Both AMD and NVIDIA use some of their gaming GPUs for products targeted at compute and HPC applications; it’s a misconception that they remove features from the silicon while developing gaming products. For example, contrary to popular belief (fueled by famous youtubers, like linus…), moving from Kepler to Maxwell NVIDIA didn’t strip away anything compute related but DP capabilities; in fact, Maxwell was better on every front except DP from an HPC standpoint.

          • Waco
          • 7 months ago

          Stripping out the DP capabilities is exactly what makes a card better for gaming (smaller cheaper die that doesn’t have a ton of wasted compute area) – and given that HPC is primarily floating point and generally double precision (at least, large scale physics simulations are), I’d make the argument that Maxwell was a pretty terrible HPC card and a pretty awesome GPU.

    • ronch
    • 7 months ago

    This card reminds me of AMD’s cancelled 20nm GPU a few years ago. I think that was around the time the Radeon Nano came out? Back then I lamented at how AMD had to rely on process shrinks to improve efficiency while Nvidia worked hard to make their stuff more efficient at the same process node. So when 20nm was canned, AMD was stuck in a rut while Nvidia still had the efficiency advantage. And then when Nvidia gets hold of the process shrink, AMD’s advantage disappears. Also, let’s not forget how process shrinks are few, costly, and far between. Same with HBM on the performance front, except HBM doesn’t really do much for AMD anyway.

    • Blytz
    • 7 months ago

    I really want to see undervolting and see how that pans the power usage out. Even just a dip in power consumption and the ability to stabilize the boost clock due to less thermal throttling would be an almost win for me.

      • Voldenuit
      • 7 months ago

      From what Steve Burke reported on gamersnexus, Wattman is not working properly with the Radeon VII right now (nor are any of the main 3rd party GPU tools), so we’ll have to wait for AMD to update the program before reviewers can test overclocking and undervolting.

    • setaG_lliB
    • 7 months ago

    Will I buy this card? Nope. But this review, like all other Radeon reviews, reminds me of my Radeon 9800 Pro. That beast had 128MB of 680MHz, 256-bit DDR1 memory. Plenty of fond memories playing HL2, Doom 3, Splinter Cell, Painkiller, and many more on my 1.63GHz (overclocked) PIII with that video card.

    Later on, I had a 512MB X1950 Pro/Athlon X2 system which was almost as fun. Man, I miss the days when Radeons were every bit as fast as their GeForce counterparts.

      • tipoo
      • 7 months ago

      Played HL2 on an Athlon XP 1800+ and a Radeon x1650 Pro. The flash muzzle lighting up the world in the darkness of the Ravenholm level was one of those moments that stunned me in gaming with how realistic things could be.

      Nowadays “realistic” graphics are just so commonplace it’s become boring.

      • Waco
      • 7 months ago

      I think your glasses are a little rosy – this card *is* just as fast as its GeForce counterpart for the most part. Back then the swings between various games could eclipse 30% – and here we are at a handful of percent.

    • anotherengineer
    • 7 months ago

    Just out of curiosity, does anything know what the price difference is between 8GB of GDDR5, vs 16GB of that HBM2, or how much 16GB of HBM2 is?

      • limitedaccess
      • 7 months ago

      Reported estimates are in the range of –

      ~$50 8 GB GDDR5 8 Gbps

      ~$90 8 GB GDDR6 14 Gbps

      ~$150 8GB HBM2 not factoring additional implementation costs such as the interposer.

      Keep in mind these things are contract negotiated for the parties and allotments involved. So it isn’t like Samsung lists X memory for $X and you just buy and order them.

    • chuckula
    • 7 months ago

    Something tells me part of the reason AMD released VII is to make Navi look that much better when it launches.

    Sure it won’t have the same absolute performance level in the first iteration, but on a relative basis it’s going to be a marvel of efficiency compared to the last gasp of GCN.

    • hiki
    • 7 months ago

    It matches nvidia in price-performance, but
    -Fails at the 99th percentile.
    -Doesn’t support tensorflow
    -Doesn’t have toys like RTX or DLSS, and it should have directx raytracing support.

    I have to side with nvidia, because AI is the new revolution, even for non professionals.

    It should be much cheaper to be competitive.

      • Anonymous Coward
      • 7 months ago

      I prefer to say “isn’t supported by tensorflow” rather than the other way around. Without any deep knowledge of the topic, I expect there is nothing fundamentally wrong with the hardware, its just nobody has seen the profit in writing the needed software layer.

      • ronch
      • 7 months ago

      GCN is so old that expecting it to support those new capabilities is like expecting K10 to support AVX-512 and FMA.

        • tipoo
        • 7 months ago

        Well, cores fundamentally based on Pentium 3 do support AVX-512 and FMA.

          • ronch
          • 7 months ago

          Yes but in its current form everything in the core has been widened and beefed up to support AVX-512.

      • Chrispy_
      • 7 months ago

      You forgot to mention that it’s super-noisy and consumes as much power as a 2080Ti.

    • ronch
    • 7 months ago

    I don’t think anyone honestly expected AMD to blow Nvidia out of the water here, but this product does fulfill 4 things:

    1. It lets AMD lay claim to having the first 7nm graphics card for gamers.
    2. It moves things forward for AMD.
    3. It lets them sell something while we wait for Navi.
    4. If they’re unable to sell all their 7nm Radeon Instinct chips as such, they at least have another avenue for them.

      • Redocbew
      • 7 months ago

      I suppose those are good things for AMD, but they don’t do much for me or you.

        • ronch
        • 7 months ago

        5. This graphics card will help keep you warm this winter. It’s GOOD FOR YOU.

    • tipoo
    • 7 months ago

    At least they listened to one thing and uncapped more FP64 performance, now at 1/4th rate rather than 1/8th. More interesting as a baby M150 than as a gaming card tbh. Though that also shows that’s entirely a software limit and it could have been uncapped to full rate like M150, but ah well.

    I’ll be curious for testing if the iMac Pro is refreshed with a chip like this soon, what its FP64 rate will be, as Apple co-writes the driver. Hopefully uncapped completely.

      • blastdoor
      • 7 months ago

      I guess you and Chuckula are probably right — perhaps the answer to ‘why bother?’ is ‘Apple.’

        • tipoo
        • 7 months ago

        Yeah, my running theory for this chip has been that it’s to rack up better binned dies for the iMac Pro. Since Navi is supposedly launching mid-range first and not applicable.

        Just curious if they’ll uncap the FP64 rate since that’s just an artificial limit clearly, if AMD released more last minute. The “Pro” in Macs has usually straddled the line definition and kind of been Radeons with Apple drivers, but this generation will have more tells, as the M150 has uncapped FP64 and ECC where the VII does not.

    • Mr Bill
    • 7 months ago

    No cats were harmed during this review.

      • thedosbox
      • 7 months ago

      The cat disappears. Jeff leaves TR. THIS IS NOT A COINCIDENCE.

        • K-L-Waster
        • 7 months ago

        I think your problem is you’re in the DOS box when you should be in Schrodinger’s box.

          • thedosbox
          • 7 months ago

          Much like these cards stock levels?

    • Pancake
    • 7 months ago

    One of these days, NVidia, one of these days… AMD’s driver development team is going to release a new driver. POW!!! Right in the kisser!

      • gerryg
      • 7 months ago

      Well, it’s both the driver team and the game developer, both need to do some tuning. At least that’s what I got out of the flip-a-roo with stuttering in different games. I suspect the additional memory will make a difference, too, once developers start using it. This card seems positioned for the future but has some rough edges. Looking forward to a re-review in ~6 months. But I expect we’ll see some interesting new figures in 2-3 mos.

        • Anonymous Coward
        • 7 months ago

        Why should we expect an upcoming miracle driver? I was under the impression this is pretty much a die shrink of an established product.

          • auxy
          • 7 months ago

          It explicitly is a die-shrink of an established product. Anyone who thinks drivers are going to make this thing anything but a prettied-up pig is fooling themselves. (/ω\)

            • Shobai
            • 7 months ago

            According to the Gamer’s Nexus review I just skimmed AMD changed the API for this card, hence why all the current tools don’t interface properly. The hardware may be a straight shrink but the software isn’t a direct map – it’s reasonable to expect, given their first-time execution track record, that AMD could improve things in software.

            Whether they can, and whether it turns into something more graceful, is anyone’s guess.

            • Kretschmer
            • 6 months ago

            Look at you all. Making me upvote auxy. For shame!

            But yeah, don’t buy things assuming that a future driver update will give you something that didn’t come in the box. Unless it’s a Fuji camera.

        • stefem
        • 7 months ago

        Well, I don’t know how well it is positioned for the future… yes it has 16GB of RAM but lacks tensor processors for AI acceleration, lacks raytracing acceleration, and even if produced on a newer node it just gets close to an RTX 2080 while consuming more than the massive 2080Ti

    • Anonymous Coward
    • 7 months ago

    The cooler is ugly, BTW. Noisy and ugly both.

      • gerryg
      • 7 months ago

      Thank goodness for spray paint and undervolting!

      • FuturePastNow
      • 7 months ago

      I like the look of it. I’d expect AMD’s next gen mainstream reference cards will also look like this, with one or two fans.

      It’s clear that the cooler is completely inadequate for this card, though, and I wonder why they didn’t go with an air/liquid hybrid cooler like the Fury X shipped with.

        • Anonymous Coward
        • 7 months ago

        I find that large amounts of almost-white go very poorly with red, and it gets worse the darker the red is. Wasn’t it AMD with awesome blue cards? Black, gray or silver would have been clearly superior to white, it is going in a case where it probably gets a bit of a dust layer on it.

        Also the sheetmetal shape looks like crap specifically including the part where the product name blocks the airflow, the “R” corner looks tacky.

        I really hope this isn’t their new “design language”!

        [i<]Also, noisy.[/i<]

    • shaq_mobile
    • 7 months ago

    I thought I had something to add to the conversation but I don’t. Ignore me.

    • Action.de.Parsnip
    • 7 months ago

    Nobody seems to notice that radeon 7 is vega 56 but this time not bandwidth starved. Does nobody else find it interesting that we now know what vega 56 & 64 *could* have been? Has a gpu ever had its bandwidth doubled and more or less unchanged otherwise been re-released before? These are genuinely fascinating things to see in action. Is there any technical curiosity amongst you or are the articles just a launchpad for banter?

    Good review as always though.

      • dragontamer5788
      • 7 months ago

      Radeon VII is actually more like a Vega60.

      4CUs are disabled from the full 64. Likely because AMD is worried about yields at 7nm. Perfect dies become MI60 (full 64-CUs), while 60CUs become MI50 or Radeon VII cards.

      ————

      The main issue is that AMD has double the bandwidth of NVidia but barely keeps up. NVidia has far fewer cores, but has higher cache than AMD. That’s probably the direction AMD needs to go in: their current architecture is way too memory bandwidth heavy.

      • RAGEPRO
      • 7 months ago

      You seem to have missed the circa-20% clock rate bump. Wasn’t just the memory bandwidth that changed.

        • Mat3
        • 7 months ago

        For those interested, the Anandtech review has a same-clock comparison.

      • MOSFET
      • 7 months ago

      Parsnip, I think you underestimate the TR audience. Going into the review, I bet 80% of the article’s readers knew that this was a bandwidth-doubled Vega 60, to paraphrase your description. I bet the other 20% (and 80% of the first 80%) knew this was a non-datacenter MI-50. It’s been a slow news year.

      “Good review as always though” and “are the articles just a launchpad for banter” are somewhat incompatible.

      • Mr Bill
      • 7 months ago

      No, its pretty clear VII is for 7nm (nanometer).

    • moose17145
    • 7 months ago

    Wonder how these would have fared at 2560×1440 (same question with the NVidia cards as well). High refresh 2K seems more appealing to me than low frame rate 4K. I suspect I am not the only one. I know these are being marketed as 4K monsters, but I just feel like High Refresh 2K is likely to be a more common scenario that people will be buying these things for.

    Otherwise a good review and I appreciate the hard work you guys put into every review!

      • Krogoth
      • 7 months ago

      Just add about 15-20% to the 2560×1440 scores of a Vega 64 and that is where the Radeon VII should land.

      • Voldenuit
      • 7 months ago

      Agreed. I don’t like that TR is no longer testing cards at multiple resolutions. Not everyone has the same monitor resolution and refresh, and we go to review sites to see how cards will perform before making purchase decisions.

      Also, I noticed that both this review and the Asus 2070 review have reverted to omitting the resolution label on the graphs, I know this seems to be a TR thing, but I don’t like it.

        • K-L-Waster
        • 7 months ago

        [quote<]Also, I noticed that both this review and the Asus 2070 review have reverted to omitting the resolution label on the graphs[/quote<] I guess the assumption is you get the res from the settings screen shots in the article page.

          • Voldenuit
          • 7 months ago

          The settings screenshots don’t always have resolution.

          See [url=https://techreport.com/r.x/2019_02_02_Asus_ROG_Strix_GeForce_RTX_2070_graphics_card_reviewed/FH41.png<]here[/url<] and [url=https://techreport.com/r.x/2019_02_02_Asus_ROG_Strix_GeForce_RTX_2070_graphics_card_reviewed/ACO2.png<]here[/url<] and [url=https://techreport.com/r.x/2019_02_02_Asus_ROG_Strix_GeForce_RTX_2070_graphics_card_reviewed/BFV1.png<]here[/url<]. Since I'm a long-time TR reader, I know to look in the 'How We Test' page, but even then it's buried in a page of text. But it makes comparing numbers from multiple TR articles testing different cards at different resolutions a hassle. EDIT: Accidentally pasted GOW4 screenshot instead of BF V.

            • enixenigma
            • 7 months ago

            I agree that it would be nice if the resolution was posted for every game tested. It does say on the testing methods page that the games were tested at 4K unless otherwise noted, though.

        • albundy
        • 7 months ago

        i dont think this card was meant to be purchased. maybe by 2 people. but thats it.

      • freebird
      • 7 months ago

      Anandtech and some others have 2K & 4K tests… just saying.

      WCCFTECH has a shedload of links to dozens of reviews, including techreport.
      https://wccftech.com/amd-radeon-vii-worlds-first-gaming-7nm-graphics-card-review-roundup/

      • DoomGuy64
      • 7 months ago

      https://youtu.be/ehvz3iN8pp4

      Linus Tech Tips on why 4k is dumb, we should be gaming at 1440p, and he's a heavy Nvidia fan. The vast majority of PC gamers agree with him too, IMO. 4k is not a proper gaming resolution yet, and we would rather have 1440p 144hz. Not to mention 4k+AA is clearly ROP limiting the AMD cards, and is a waste of everyone's time.

        • chuckula
        • 7 months ago

        So because AMD cards are supposedly not good at 4K we should call it useless… OK!!

        I can’t wait to use that logic for Cinebench when those 16-core RyZen 2s launch!

          • DoomGuy64
          • 7 months ago

          Thanks for the gold subscription treatment Mr. Chucko the clown. Also, I NEVER said what you are claiming.

          4k is a poor benchmark and gaming scenario for a variety of reasons.

          *Most 4k monitors are 60hz, and we are just barely getting better ones.
          *Steam survey data shows majority of gamers are on 1080p, and the biggest higher resolution gaining is 1440p, not 4k.
          *Most FPS games are better played at higher refresh rates, not higher resolutions.

          Also, it’s not that AMD cards can’t do 4k, it’s just artificially limiting performance in the benchmark by using AA with 4k. Don’t waste our time, and yours, with an obvious bottleneck scenario. You can talk about it sure, but that scenario is clearly inducing a hardware limitation that doesn’t represent performance outside of the bottleneck. It’s like benchmarking a 4GB card with 8GB texture settings, and pretending like that’s legit. No. No, it’s not. TR has done reports on memory limitations, but they don’t benchmark mid-range cards deliberately with those bottlenecks enabled. Every time that has happened, it was always about pointing out memory limitations, and not general performance. ROPs are no different. If you are limited by the ROPs, it should be reported as a ROP limitation, and not as general performance, or at least not without strongly pointing that out. Which was kinda done, but it was still poor sport. Not to mention, this whole 4k only shtick is getting old, and I no longer care to read reviews for scenarios that I don’t game at. This is catering to the smallest percent of the smallest percent of PC gamers, and makes these reviews worthless.

          4k isn’t even why anyone is buying these cards. Radeon VII is sold out. Why? Is it the gaming performance? I don’t think it is. I think people are buying these cards to have a cheap compute card that can play games, and not that they are actually a good gaming product. Since that is clearly the market, maybe the review should have included more professional use scenarios, and less ROP bottleneck scenarios. Unless the goal was to paint a negative picture instead of being relevant to the target audience. In which case, well done, and nobody from any side will take it serious. Not compute, and not gaming.

            • moose17145
            • 7 months ago

            I wouldn’t say that the 4k results are useless. For the class of card this is, I feel that the 4k results are important. I would just like to see 2k results as well. Sure, others have done tests on it, but TR’s reviews tend to be the most in-depth, which is why I was saying it would be nice if they also did 2K testing.

            • chuckula
            • 7 months ago

            I smell a follow-up article!

            • Jeff Kampman
            • 7 months ago

            Hey, so you’re taking one anomalous case and trying to paint the entire review as anomalous and that just isn’t how it works.

            For one, 4K gaming is a perfectly legit benchmark. I’ve been enjoying seeing games in 4K HDR on my TV all week that now look rather dull and muted on my high-refresh-rate 2560×1440 SDR monitors. The RTX 2080 provides a perfectly fine 4K gaming experience even at high settings because of its laudable and consistent 99th-percentile frame times.

            $700 is a lot for a graphics card, to put it mildly, but you can easily build a 4K gaming system around an RTX 2080 for $1500 or less thanks to the bang for the buck of Ryzen CPUs. That’s long been our sweet spot price for a gaming PC and it’s no longer some unattainable ivory tower thing.

            For two, we went back and tested the optimal case for performance for AMD rather than saying “well, it’s ROP limited, so suck it.” I think it’s important for people to know what they’re getting into if they have a preferred set of settings for Forza Horizon 4 that they would otherwise be mindlessly dialing in on this card. Our job is ultimately to show differences between products, not suppress results that make Brand X look bad.

            For three, our 4K testing might not tell you exactly how many frames per second a card might produce at 2560×1440, but it does provide robust relative performance standings that should still hold as you lower the resolution. I’ve said over and over that we choose test settings to maximize the relative performance differences between products, and that’s answering a different question than “how many FPS does this card produce in a game at a given resolution,” which is a question many other sites already answer with far less detailed analyses. In short, if you’re not here for the insights afforded by our testing methods, I’m not sure what you’re here for.

            • chuckula
            • 7 months ago

            Great as always Jeff.
            Just as a minor addendum: When AMD posted its own numbers for the Radeon VII to show how it was competitive with the RTX 2080, AMD chose to publish numbers that were exclusively produced at 4K resolution.

            Read them all [url=https://www.guru3d.com/news-story/amd-publishes-radeon-vii-benchmark-results-from-26-games.html<]here if you want[/url<]

            • DoomGuy64
            • 7 months ago

            [quote<]In short, if you're not here for the insights afforded by our testing methods, I'm not sure what you're here for.[/quote<]

            Nobody is reading this review for insights, because there isn't anything here other than a 4k obsession. 4k isn't a thing, the 2080 doesn't provide a "fine" 4k experience if you want high refresh rates (outside of DLSS), high refresh 4k monitors aren't cheap, and people are buying Radeon VII for compute rather than 4k, which makes the entire review pointless to anyone who bought the card for high refresh 2k or compute purposes.

            Other than the 2080Ti, there aren't any good 4k cards, and Nvidia themselves have moved away from pushing 4k with RTX. People have complained about raytracing needing to be run at lower resolutions, but the steam hardware surveys show that most PC gamers are using lower resolution panels, making RTX perfectly acceptable for those people. Not to mention DLSS basically proves that true 4k is virtually indistinguishable from upsampled 2k. A BF5 gamer with a high refresh 1080p panel could pop in a 2080, and enjoy high refresh raytracing, for example. So overall, gamers don't care about 4k, Nvidia doesn't care about 4k, and only TR still cares about benchmarking a resolution that everyone else has moved away from. I'd recommend watching Linus Tech Tips' video on 4k, and taking some tech tips from him, for future reviews.

            Also, I think we all know Radeon VII was catering to compute, and there were no benchmarks for that. So overall, what insights could the compute or 2k crowd gain from this review? None. The only insight I got from this review was that 64 ROPs are completely inadequate for 4k, which I will agree is a perfectly valid point, and AMD needs to increase their ROPs. But other than that, there was nothing. No compute, no 2k. Now if AMD was to implement DLSS, which they have hinted at, then I could see a scenario where it would make sense on this card, but not at the moment, and DLSS itself proves that true 4k is unnecessary and the pixels are too tiny to matter.

            • kuraegomon
            • 7 months ago

            Dude, we get it: you don’t believe 4K is a viable gaming resolution. The problem is, that’s your [i<]opinion[/i<], not the gospel fact that you're presenting it to be. There are several people [i<]just in this thread[/i<] who disagree with you.

            Also, this is just rehashing the same arguments that 1080p proponents gave against 2k testing, and 720p proponents gave against 1080p testing. Hint: they were wrong then too 🙂 And yes, that last is just my [i<]opinion[/i<] 😀

            Whether 4K gaming is viable for a particular user depends on which games they play, and what aspects of the visual experience matter most to them (or that they're most sensitive to). What's "true" for you isn't "true" for every gamer.

            Most importantly, the fact that 4K testing is the best way to shift the processing load from the CPU to (high-end) GPUs is just that: a [i<]fact[/i<]. Rail away about your opinions on 4K testing as much as you like, but the sound technical basis of TR's testing protocol is clear.

            ([i<]Full disclosure: In the interest of maximum transparency, the last 3x downvote on the post I'm replying to wasn't Chuck's, it was courtesy of yours truly. I 3x upvoted your previous one because I valued an opposing viewpoint, but I downvoted this one when it became apparent that you don't extend the same courtesy.[/i<])

            • freebird
            • 7 months ago

            I view the problem more of where are the AFFORDABLE 4K gaming monitors???

            For me to step up from my 27″ 2560×1440 @144hz Freesync Benq XL2730 (paid $550 mid-2015)

            I’m gonna want 4K @120hz or preferably something like 3840×1600 @ 100hz minimum, the higher the better. Freesync 2, blur reduction, and HDR would be nice adds too.

            Newer TVs with 4K 120hz and Freesync HDMI 2.1 might start bringing 4K gaming mainstream.
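
            To put a number on why HDMI 2.1 is the gating factor, here is a rough bandwidth sketch (assumed figures, not from the comment above: 10-bit RGB, roughly 10% blanking overhead, no chroma subsampling or DSC):

            // Approximate uncompressed video bandwidth for 4K at 120 Hz with 10-bit RGB.
            const [h, v, refresh, bitsPerPixel] = [3840, 2160, 120, 30]; // 10 bits per channel
            const blankingOverhead = 1.10;                               // assumed ~10% blanking

            const gbps = (h * v * refresh * bitsPerPixel * blankingOverhead) / 1e9;
            console.log(`~${gbps.toFixed(1)} Gbit/s of video data`);     // ~32.8 Gbit/s
            console.log("HDMI 2.0 payload ~14.4 Gbit/s: not enough");
            console.log("HDMI 2.1 payload ~42.7 Gbit/s: fits, hence 4K/120 wants HDMI 2.1 (or DSC)");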

            [url<]https://www.youtube.com/watch?v=dPtSZ_oJ7mY[/url<] A review of the ViewSonic VP3881; only 60hz, but we are getting closer.

            • Voldenuit
            • 7 months ago

            Why should 4K testing have to be a binary choice against 1440p?

            There is value in testing 1440p performance, and testing the Radeon VII at 1440p leads to some differences compared to at 4K.

            I’m one of those weirdos with a 3440×1440 monitor, so nobody really tests at my resolution (and many games don’t even bother supporting it properly), but I’d still like to see 2560×1440 benchmarks [i<]in addition[/i<] to 3840x2160.

            • Jeff Kampman
            • 7 months ago

            -> 4k isn’t a thing

            I have two 4K monitors in my house, one of them an HDR TV. I would like to run them at their native resolutions, and in the case of the HDR TV, with its full color gamut and dynamic range in play.

            -> the 2080 doesn’t provide a “fine” 4k experience if you want high refresh rates (outside of DLSS)

            Even at “normal” refresh rates, I’ll trust my eyes and data rather than bare supposition, both of which say the RTX 2080 provides a smooth and enjoyable 4K HDR gaming experience. That’s setting aside the assertion that:

            -> high refresh 4k monitors aren’t cheap

            Nobody said anything about high-refresh-rate 4K gaming to begin with! Nice goalpost movement.

            -> people are buying Radeon VII for compute rather than 4k

            AMD’s reviewer’s guide and media presentation specifically highlight 4K and HDR gaming as strengths for this card. 16 GB of very costly HBM2 frame buffer yielding 1 TB/s of bandwidth says that 4K gaming should be a strength for this card. Maybe you should call AMD and tell them that it doesn’t matter so that they don’t devote costly packaging techniques and memory to chasing the dragon.

            At the end of the day, I don’t particularly care what Linus Sebastian thinks about 4K gaming. I care about what AMD, my eyes, and my monitor fleet have to say about 4K gaming, and when the Radeon VII falls short at things AMD says it should be good at, then there is work to do to make claims equal reality. ¯\_(ツ)_/¯

            • Waco
            • 7 months ago

            Thank you for your response and everything you do here at TR.

            Signed, an idiot who games at 4K on an old Maxwell card who bought a Radeon VII specifically for 4K gaming.

            • Redocbew
            • 7 months ago

            WRONG! YOU ARE NOT AN IDIOT!

            Oh, wait. That’s actually true. I fail at trolling.

            • ronch
            • 6 months ago

            I play at 1024 x 768. Beat that, 4K!!!

            PS – Oh and that’s for more modern games. For DOS games I go 320×240. 😀

            • K-L-Waster
            • 6 months ago

            What, your DOS games aren’t ASCII based? We need Radeon VII benchies for Rogue!!!

            • Voldenuit
            • 6 months ago

            I play Cogmind at 3440×1440, albeit I’m not hardcore enough to play it in ASCII mode.

            • moose17145
            • 6 months ago

            I’m just going to apologize. I didn’t realize what my original comment was going to start…

            • Redocbew
            • 6 months ago

            Not your fault, dude. Asking a simple question because you're curious isn't something you should be sorry about, even if something or someone causes it to not turn out how you had hoped.

            • Waco
            • 7 months ago

            I bought a Radeon VII for 4K gaming.

            I have a 4K monitor on my desk. I have 4 other 4K sets in the house for high res gaming.

            You are offering nothing to this site other than “I believe this, therefore it is true”. That’s not helpful, useful, or even entertaining to read.

            Get out of here with your attitude and willful ignorance please.

            • DoomGuy64
            • 7 months ago

            “I bought a Radeon VII for 4K gaming.” Good for you, just don’t turn on AA or you will bottleneck it.
            “I believe this, therefore it is true” I am basing my arguments on the steam surveys, TR’s review, the card itself, and low refresh 4k panels. You need a high refresh gaming panel for fps, and I doubt most 4k users have them. Even at that, the Radeon VII is clearly a worse choice than nvidia for gaming at this resolution, and this review proves it.

            The only real use the Radeon VII has is 2k and compute. I don’t see why anyone would buy it for 4k, unless they are willing to deal with its limitations and have compute work they can justify it for.

            “Get out of here with your attitude and willful ignorance please.” If anyone has any of that, it would be every one of you who is disagreeing without taking an honest look at what the card actually is: AMD’s workstation card, being sold as a low-profit consumer card. It is a great prosumer product, but nvidia clearly has better 4k gaming cards.

            The only problem I have with TR’s review is that they didn’t take into account what this card will actually be used for, and if you bought one without having a use for its compute, I’d like to ask why. Nvidia has more cost-effective products for 4k.

            Either way, I’m not a fan of 4k, or this card. It’s not a thing for the vast majority of PC gamers, outside of 4k’s 1% of PC enthusiasts, who all congregate here, and 4k is not viable for fps outside of high refresh panels and 2080TI cards. The end result of this review was that there were no comprehensive “insights” on this card. What we have here is a 4k cult club. I got way more useful information on other sites that actually did comprehensive reviews. You people are just mad that I pointed it out.

            • Waco
            • 7 months ago

            Nobody is mad about anything except your attitude and opinions as facts.

            It’s almost like you believe nobody can game at 4K if it isn’t perfectly smooth 100+ FPS.

            • DoomGuy64
            • 7 months ago

            “It’s almost like you believe nobody can game at 4K if it isn’t perfectly smooth 100+ FPS.”
            How about anything above 60hz, because 60hz is pretty bad, and you won’t play competitively with it. Unless it’s perhaps a TN panel, but why would you do that?

            “Nobody is mad about anything except your attitude”
            Projection much? I just mentioned that 4k-only benchmarks are not comprehensive, or relevant to the vast majority of PC users, and you all throw a fit.

            Anyway, the real point is that this card is a prosumer compute card, which was not tested for any of the other scenarios where it would actually be useful.

            If you bought it for 4k, then good for you? I think Nvidia has better options, and the whole 64 ROP thing clearly limits its capabilities. I’d rather have the 2080 for games, especially with its RTX features, while the Radeon VII is better suited for people who both game and have professional uses for it. It may be AMD’s best card to date, but it’s no nvidia killer, nor do I think 4k is that important. I suppose the best thing about it is that it can run 4k, and AMD is not price gouging on it? Maybe this whole hatefest is just you people taking out your frustration over Nvidia’s price gouging on me, or you all really do drink the 4k kool-aid. Whatever. I just think 4k-only results limit relevance.

            • Srsly_Bro
            • 7 months ago

            Great post. Waco argues from a different realm. You’ll have to forgive him. He chose that name for a reason.

            • YellaChicken
            • 7 months ago

            [quote<]4k only benchmarks are not being comprehensive, or relevant to the vast majority of PC users[/quote<]

            Correct, but then a review of a $700 graphics card isn't relevant to most PC users either so should TR just not bother? I'm certainly never going to buy this card because it costs over $250. Still liked reading the review though.

            [quote<]because 60hz is pretty bad, and you won't play competitively with it[/quote<]

            So 4k isn't relevant but competitive play is because suddenly we're all pros, yeah?

            • Srsly_Bro
            • 6 months ago

            So much false equivalency and 5th grade drop out reasoning here.

            My 1080 Ti FTW3 cost $840 and I don’t game on 4k. Nevertheless, the issue isn’t 4k. The issue is people like to see cards compared at resolutions they are used to. People with 1080P and/or 1440P, a very large percentage of gamers, cannot compare their experiences to cards reviewed at 4k resolution.

            People like reviews they can relate to and see what they could get with an upgrade. TR isn’t able to afford the time or equipment to provide reviews at multiple resolutions, so the reviews are made with a resolution few have. It’s a decision that would even make sense to you.

            Your second point really shows your primitive reasoning. You made the connection that competitive means played by professionals. By your flawed and underdeveloped child-like reasoning, every fortnite, pubg, csgo player is a pro. Literally millions of pros playing for free, most of whom will never get paid; that’s not a pro. And your second point is easily refuted, more easily than the first. You can play csgo competitively and not be a pro, you know…

            He also didn’t say 4k was irrelevant. He said to the vast majority. Your comprehension is even poorer than your reasoning. You may not understand 144hz monitors and the benefits in competitive games with your $250 budget.

            I’m going to bed. I’m sure you will read this and still be in denial. I can’t save everyone!

            You should focus on reasoning and comprehension and not trying to be edgy, bro…yeah?

            • YellaChicken
            • 6 months ago

            [quote<]So much false equivalency and 5th grade drop out reasoning here. [/quote<]

            You know how to cut to the core of me Baxter. You did save me, I suddenly feel I need to be more like you in my posts.

            [quote<]4k isn't a thing[/quote<]
            [quote<]gamers don't care about 4k, Nvidia doesn't care about 4k[/quote<]
            [quote<]4k isn't even why anyone is buying these cards[/quote<]

            I must not draw false equivalence. None of the above means 4k is irrelevant.

            [quote<]People like reviews they can relate to and see what they could get with an upgrade.[/quote<]

            I must understand that no one reading TR knows whether they're getting an upgrade or not unless the review is carried out at the same resolution as their current monitor.

            [quote<]Blah, blah, reasoning, blah, blah, childlike, blah, blah, reasoning, blah, reasoning, blah, blah, reasoning, blah[/quote<]

            I must in future insert the word reasoning into my posts as often as possible to sound more intelligent than the other guy. So here your reasoning is that my reasoning that 144hz doesn't actually make that much difference in "competitive" play unless you're a pro is incorrect reasoning. Got ya, thanks for the reasoning.

            [quote<]...yeah?[/quote<]

            I should try and finish a post by doing something the other guy does to dig at him.

            Ok, I'm off to play competitive Farm Simulator on an $800 gfx card at 144hz. Thanks for saving me bro.

            • Srsly_Bro
            • 6 months ago

            I actually was playing farm sim on my $800 gfx card on a 144hz monitor, cause I can. I also played csgo comp last night… While not being a pro. You aren’t able to make that differentiation, among others.

            In your reply you brought up 4k being irrelevant. That’s not what he said in your quote, but in your inability to comprehend, that’s what you understood.

            You’re pretty thick, and I kinda like it. No one can say anything to you because it doesn’t get through. I’m envious, tbh.

            I’m closing, did you vote yourself with bots or are there more dumb dumbs who think what you say makes sense?

            Take it easy on the down votes, snow flakes. Y’all can’t handle stress and opposition.

            • YellaChicken
            • 6 months ago

            [quote<]Take it easy on the down votes, snow flakes. Y'all can't handle stress and opposition.[/quote<]

            Lol...says the guy crying about it. Calm down, you can't save us all.

            • K-L-Waster
            • 6 months ago

            Or, maybe we just don’t like all the name calling. Just a thought…

            • Prestige Worldwide
            • 6 months ago

            144hz 1440p is the sweet spot

            • sweatshopking
            • 6 months ago

            4k tvs are literally a few hundred dollars, and have a VASTLY higher market penetration than high refresh monitors.

            • specialworks
            • 6 months ago

            My tuppence: resell it. Why? The Radeon VII should hold its value pretty well, as supply will be sufficiently restricted that it will be ushered back into the OpenCL/compute workload fold with open arms. It is a bargain for the crossover compute/gaming crowd.

            • YellaChicken
            • 7 months ago

            [quote<]4k isn't a thing[/quote<]

            Your whole argument isn't a thing. The ppl shelling out over a grand for graphics cards and 4k monitor setups on the other hand, they're a thing and they want to know which companies to throw tons of money at.

        • Waco
        • 7 months ago

        I game at 4K on an aging Maxwell card. It’s not a resolution you switch back from after seeing the clarity.

        • K-L-Waster
        • 7 months ago

        1440 benches would be useful, but based on Anand’s and Gamer Nexus’s reviews it doesn’t look like it would change the conclusions much.

        Looks like this is a good-but-not-great gaming card compared to the competition, but an absolute beast at GPU Compute. (Which given its heritage should surprise no one.)

        • Krogoth
        • 7 months ago

        4K isn’t so much dumb as it is pointless, since the return on investment isn’t there for most genres. The jump from 2560×1440 to 3840×2160 isn’t as striking as the jump from 1920×1080 to 2560×1440 on computer-monitor-size screens.

        The cost of monitors and GPU solutions capable of sustaining a solid 100 FPS in current titles without making compromises to in-game detail settings is rather steep. It is in the same place that 2560×1600/2560×1440 were back a decade ago.

        FYI, I got both 2560×1440@144hz and 3840×2160@60hz screens here and have done a decent amount of comparisons. It seems that 3840×2160 works better for strategy games and anything where more screen area/FOV gives you an advantage.
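
        For scale, the raw pixel counts behind those two jumps work out like this (a minimal arithmetic sketch; the perceptual side is obviously subjective):

        // Raw pixel counts behind the resolution jumps being compared (plain arithmetic).
        const px = (w: number, h: number) => w * h;
        const p1080 = px(1920, 1080);   // 2.07 Mpx
        const p1440 = px(2560, 1440);   // 3.69 Mpx
        const p2160 = px(3840, 2160);   // 8.29 Mpx

        console.log(`1080p -> 1440p: ${(p1440 / p1080).toFixed(2)}x the pixels`); // 1.78x
        console.log(`1440p -> 4K:    ${(p2160 / p1440).toFixed(2)}x the pixels`); // 2.25x
        // The second jump costs more GPU work per frame, which is part of why
        // holding ~100 FPS at 4K without detail compromises is still so expensive.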

      • Chrispy_
      • 7 months ago

      There are a lot of really good 1440p screens that have Freesync ranges from 48Hz upwards. 48-100Hz or 48-144Hz gives you plenty of choice.

      Aim for 90fps or so with settings for nice fluidity and know that even your 99th percentile frame times are going to be covered by the Freesync range. Anything faster than a 2070 should be more than adequate for 1440p90. Don’t get me wrong, 144fps is pretty awesome, but most of the benefits of high-refresh gaming are from the initial jump from 60 to about 90fps. That’s why Oculus/Vive feel high-refresh at 90Hz.
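
      Turning that claim into frame times makes it concrete (a small sketch using the 48 Hz floor and 90 fps target mentioned above):

      // Converting the FreeSync-range claim into frame times (simple arithmetic sketch).
      const msPerFrame = (fps: number) => 1000 / fps;

      const freesyncFloorHz = 48;   // bottom of the 48-144 Hz range mentioned above
      const targetFps = 90;

      console.log(`Target ${targetFps} fps        -> ${msPerFrame(targetFps).toFixed(1)} ms/frame`);         // 11.1 ms
      console.log(`FreeSync floor ${freesyncFloorHz} Hz -> ${msPerFrame(freesyncFloorHz).toFixed(1)} ms/frame`); // 20.8 ms
      // Any 99th-percentile frame time under ~20.8 ms still lands inside the
      // variable-refresh window, so the occasional slow frame stays smooth.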

      • designerfx
      • 6 months ago

      What this translates to is the same thing as always: it’s not about the resolution by itself; people want high-refresh-rate options at the highest resolution at which that refresh rate can be maintained – not just 60 FPS. I’d be interested to see techreport focus on 120-240 Hz refresh rates and which resolutions a given graphics card can sustain them at.

    • Aranarth
    • 7 months ago

    Yup exactly what I expected.

    Gets the Radeon into the leader pack but does not lead it.

    It really needs a complete overhaul.

    • chuckula
    • 7 months ago

    Man, these comments are getting out of hand.

    Ray tracing still sucks Nvidia! Don’t get cocky!

      • Krogoth
      • 7 months ago

      AMD RTG will unleash a new rendering pathway!

      Glue-tracing. It will bind the whole industry together as the one true standard!

        • K-L-Waster
        • 7 months ago

        I don’t think that will stick.

          • ermo
          • 7 months ago

          RTG is barely hanging on as it is.

          • maxxcool
          • 7 months ago

          Tacky comments …

        • Mr Bill
        • 7 months ago

        3D Printing! Hot Glue-tracing.

    • Rza79
    • 7 months ago

    In typical AMD fashion, the core voltage is set high.

    In this ComputerBase [url=https://www.computerbase.de/2019-02/amd-radeon-vii-test/4/#abschnitt_messung_der_leistungsaufnahme/<]review[/url<], power consumption goes from 277 W to 207 W after lowering the voltage by 0.1 V. In that same review, the RTX 2080 FE draws 229 W.
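
    Expressed as percentages, those figures look like this (plain arithmetic on the numbers quoted above):

    // The quoted undervolting numbers, expressed as percentages (plain arithmetic).
    const stockW = 277, undervoltedW = 207, rtx2080FeW = 229;

    const savings = (stockW - undervoltedW) / stockW;
    console.log(`Radeon VII: ${stockW} W -> ${undervoltedW} W (${(savings * 100).toFixed(0)}% lower)`); // ~25%
    console.log(`Undervolted Radeon VII vs RTX 2080 FE: ${undervoltedW} W vs ${rtx2080FeW} W`);
    // Roughly a quarter of the board power goes away with a 0.1 V undervolt in
    // ComputerBase's sample -- a silicon-lottery result, as the replies note.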

      • chuckula
      • 7 months ago

      Yes, but what was the power consumption of the undervolted 2080?

      Or did they not even try?

        • enixenigma
        • 7 months ago

        It doesn’t have hard numbers, but the article mentions that it is possible, though the impact would be less significant (Nvidia tends to run closer to their optimal volts on their cards).

        Obviously, undervolting is a very YMMV type deal. No one can be guaranteed that they would see similar results if they purchased a VII.

          • stefem
          • 7 months ago

          I was able to shave 30% from power consumption on my Pascal GTX 1080; I didn’t test with Turing though.
          But the point is that not every card will sustain a lower voltage. There’s a reason AMD set the voltage at that level, unless one thinks they are a bunch of morons.

        • Rza79
        • 7 months ago

        They didn’t but from what I have found on the internet, the RTX2080 will drop around 30W with undervolting. That would take it to around 200W.

      • jihadjoe
      • 7 months ago

      That’s because AMD is painting the voltage picture with broad strokes.

      In [url=https://www.guru3d.com/articles_pages/amd_radeon_vii_16_gb_review,29.html<]another review[/url<] Guru3D reported minimal power consumption gains and a slight performance loss from their attempt to undervolt. ComputerBase won the voltage-based silicon lottery while Guru3D lost, but AMD is setting the passing mark so that no GPU is left behind.

        • Anonymous Coward
        • 7 months ago

        I’d bet these are all GPUs that got “left behind” … specifically not admitted to the data center. High voltage is probably quite sensible on this product.

      • stefem
      • 7 months ago

      As Jihadjoe correctly pointed out, not every GPU will sustain a lower voltage. After all, there’s a reason AMD set the voltage where it did, unless one thinks they are a bunch of morons.

        • BobbinThreadbare
        • 7 months ago

        Most consumer hardware is shipped with a safety margin that exceeds what it needs, just to make sure things do not go wrong. That’s why overclocking exists, among other things.

        So it’s probably not worth it: weigh whatever additional sales AMD would get against the return percentage, and those additional sales could be very low, because how many people really buy a card based on power draw rather than performance and initial cost?

          • stefem
          • 7 months ago

          Sure, but I think they were talking more from a product/company-image perspective: if a “normal” user sees someone he considers an enthusiast (even if they are a moron) bitching about the product (even for nonsensical reasons), then he will be persuaded it isn’t good or worth buying.
          It happens with cars, it happens with games, and it happens with hardware too; look at the “recent” Turing launch, where the backlash started even before actual performance numbers were available.
          That said, I don’t think it will be much of an issue for the Radeon 7 given the nature of the product.

    • blastdoor
    • 7 months ago

    Sheesh. Why even bother, AMD?

      • Krogoth
      • 7 months ago

      Better to sell off sub-par Instinct/FirePro stock that ISVs/enterprise customers didn’t want instead of recycling it at a total loss.

      Nvidia has been doing the same thing with Titan brand SKUs.

      • chuckula
      • 7 months ago

      Somebody needed to make the GPU for the Apple ARM Miracle 2019 Mac Pro!!

      (Oh yeah… confirmed)

        • blastdoor
        • 7 months ago

        [url<]https://www.youtube.com/watch?v=2WZLJpMOxS4[/url<]

      • tipoo
      • 7 months ago

      GPU equivalent of “FIRST” comments
      Nearly just an MI50 with FP64 artificially capped (given their quick turnaround on increasing the rate to 1/4 after feedback), so it probably wasn’t much extra effort to put out the first consumer 7nm card.

    • Kretschmer
    • 7 months ago

    Thanks for an excellent and exhaustive review, Jeff! My takeaway is that my beefy 1080Ti has aged incredibly well, as Nvidia uses consumers as guinea pigs and AMD treads water.

    /me goes back to playing DOTA 2 and Diablo 2…

      • chuckula
      • 7 months ago

      So it’s almost like your 1080Ti was grape juice that sat in an oak barrel for a few years and came out tasting…. fine?

        • Krogoth
        • 7 months ago

        I see what you did there….. 😉

        • Kretschmer
        • 7 months ago

        As far as I can tell, no. This card was a strong performer out of the gate and didn’t rely on incremental driver updates to stretch its hypothetical legs.

          • Waco
          • 7 months ago

          Nvidia vodka. That should be a thing.

            • Kretschmer
            • 6 months ago

            “We’ve decided that throwing more and more GPU horsepower at anti-aliasing is an inefficient use of silicon. With Nvidia GameJuice (TM) your entire optical processing system applies universal blurring to provide AA without any hit to FPS!”

            • Waco
            • 6 months ago

            *bodily harm not the responsibility of Nvidia GameJuice. Please game responsibly.

            • chuckula
            • 6 months ago

            Waco had post 256 in this thread.

            128 core Epycs…. CONFIRMED!

            • K-L-Waster
            • 6 months ago

            Do not taunt Nvidia GameJuice.

    • 1sh
    • 7 months ago

    The only thing Radeon VII is going to do is help Nvidia sell more RTX 2070s…

      • Krogoth
      • 7 months ago

      It is more like Radeon VII will end-up encouraging Vega 64/56 sales until the stock dries up.

    • techguy
    • 7 months ago

    Too little, too late.

    Typical AMD.

    But Navi will save them!

    /s

    • derFunkenstein
    • 7 months ago

    Guys, TR tested Radeon VII all wrong.

    * Use 1st gen Threadripper, which is of course the ultimate gaming CPU
    * Run your RAM at 2400 MHz. No need to get all the performance out of the CPU because it’s so overpowered in games
    * Test at 1080p because it’s the most popular resolution, and everybody has 144-Hz 1080p displays
    * Report average framerates only. Nobody cares about micro-stuttering, as long as it’s fast overall.

    😆

    [url<]https://www.digitaltrends.com/computing/amd-radeon-vii-review/[/url<]

    [quote<][spoiler<]I didn’t expect the AMD Radeon VII to impress me. The older Radeon RX Vega 56 and 64 were great cards on paper, but just okay in benchmarks. I doubted AMD’s follow-up would be different. Consider me a convert. Nvidia’s grand ray tracing adventure has given AMD a chance to define itself with a more traditional card that turns all the dials up to 11. The Radeon VII doesn’t just best the RTX 2080. It sometimes keeps pace with the RTX 2080 Ti, a video card that’s priced at $1,199.[/spoiler<][/quote<]

      • chuckula
      • 7 months ago

      That’s what I call an objective review!

      • Krogoth
      • 7 months ago

      >looks at that review
      >insert “The Sisko” + Picard facepalm

      • enixenigma
      • 7 months ago

      As someone who games on a 144-hz 1080p display, this post offends me.

        • derFunkenstein
        • 7 months ago

        I’m sure that your anecdote is real data. ¯\_(ツ)_/¯

          • enixenigma
          • 7 months ago

          IT’S THE ONLY DATA RELEVANT TO ME!!

            • derFunkenstein
            • 7 months ago

            I know, guy. It’s ok. 🙂

        • Shobai
        • 7 months ago

        It shouldn’t.

        I also game on that class of monitor, and that review methodology is poor.

      • ronch
      • 7 months ago

      If you say come with me off to digitaltrends.com

      Off to digitaltrends.com we will go…

      (Er, that’s a song from the play Little Women…)

      • jihadjoe
      • 7 months ago

      I’m tempted to sign up using an AMD exec’s name on their site and comment with something like ‘The cheque’s in the mail!’

      • ColeLT1
      • 7 months ago

      Their chart colors are great:
      Blue = AMD Radeon VII
      Black = Nvidia GeForce RTX 2080
      Black = Nvidia GeForce RTX 2080 Ti
      Black = Radeon RX Vega 64

        • Waco
        • 7 months ago

        You can hover and click them on/off, but yeah, that’s a pretty awesome oversight (like most of the review).

        • chuckula
        • 7 months ago

        So when he says the Radeon VII beats the RTX 2080 Ti, it’s because it was faster than one of the black lines in the graph?

          • K-L-Waster
          • 7 months ago

          This line, that line, only the blue one matters, right?

            • Redocbew
            • 7 months ago

            Of course. So long as the number we get this time is bigger than the number we got last time, then all is good, right?

      • stefem
      • 7 months ago

      That’s the future that awaits us: hordes of YouTubers and Sunday reviewers who test GPUs at 1080p with a suboptimally configured, slow CPU, and review CPUs at 4K with AA and other settings that weigh exclusively on the GPU turned on…

      A special thanks goes to AMD’s PR department for introducing the idea (in the Ryzen reviewer’s guide) that benchmarking a CPU at 4K is a sane thing to do.

        • derFunkenstein
        • 7 months ago

        I think for the most part the future you predict is already here. It’s certainly here for other technology areas. The internet at large is completely oblivious to it. It has to be, because these techniques work and sell devices. It’s horrible.

        Two examples:

        1. At the 2018 iPhone launch, Apple brought a bunch of YouTube “influencers” to an event, gave some private one-on-one time with the devices, and let people post glowing “reviews” before those phones ever launched. These sites produced videos that gave a glowing review and must-have status based on an hour or two of supervised hands-on time with a device that hadn’t even launched. They didn’t even leave Apple’s little showroom with it. OH IT’S SO PRETTY BUY THIS THING! And Apple does it because it works.

        2. For people into retro stuff, YouTube is a wasteland. The site is littered with “reviews” of devices with knock-off console ASICs (Genesis, SNES, and NES specifically) that output composite video, get that video upscaled to output 720p HDMI, and these idiots that take no payment other than getting it for free lavish it with praise. There’s a particularly egregious family of Genesis clones that output games that render natively at 256×224 and integer stretch them to 320×224 so every 4th pixel is 2x wide and then upscale that to 720p. And every reviewer I called out replied with “I didn’t notice that”. Like how the fuck could you not? And Hyperkin and GamerzTek do this because it works.
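
        For anyone who wants to see where the fat columns come from, here is a minimal sketch of that 256-to-320 nearest-neighbour stretch:

        // Nearest-neighbour stretch of a 256-pixel line to 320 pixels: which source
        // columns get drawn twice? (Sketch of the scaling step described above.)
        const srcWidth = 256, dstWidth = 320;
        const hits = new Array<number>(srcWidth).fill(0);
        for (let dstX = 0; dstX < dstWidth; dstX++) {
          hits[Math.floor((dstX * srcWidth) / dstWidth)]++;
        }
        const doubled = hits.filter((count) => count === 2).length;
        console.log(`${doubled} of ${srcWidth} source columns are drawn twice`); // 64, i.e. every 4th
        // A regular pattern of fat columns every four pixels -- exactly the artefact
        // being complained about, and hard to miss once you know to look.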

      • dragontamer5788
      • 7 months ago

      I gave it a look over. I dunno, the Gizmodo review still seems worse. At least the digitaltrends.com site lists the CPU + RAM used in the benchmarks.

      [url<]https://gizmodo.com/amds-radeon-vii-is-a-solid-gaming-card-but-thats-just-1832412200[/url<]

      I can't find any information on how these benchmarks were run. And it doesn't seem like the author knows what all the buzzwords mean either...

        • derFunkenstein
        • 7 months ago

        It’s possible for (and likely that) both reviews to suck.

        • Srsly_Bro
        • 7 months ago

        Gizmodo?? The site that got trolled so hard over their PC building video that they took it down? The site is inept. Why even mention it??

          • Voldenuit
          • 7 months ago

          I think you’re thinking of TheVerge, although Gizmodo isn’t great, either.

            • Srsly_Bro
            • 7 months ago

            You’re right. I didn’t know the difference. Thanks lol

          • dragontamer5788
          • 7 months ago

          Well, the digitaltrends.com review got a lot of upvotes. So someone out there enjoys reading poorly done reviews! I figured the same crowd would be interested in reading some other ones.

            • derFunkenstein
            • 7 months ago

            Upvotes mean nothing on this site or any other because we all know everybody is buying upvotes.

        • K-L-Waster
        • 6 months ago

        When those folks say it’s a “solid gaming card” it likely means they’ve determined it isn’t liquid or gaseous.

          • Voldenuit
          • 6 months ago

          Well those people are wrong, because the vapor chamber has liquid and gaseous phases.

      • Gastec
      • 6 months ago

      You forgot to mention that’s a 144 Hz 1080p TN (twisted ci-nematic) panel we’re “all” gaming on.

    • Mr Bill
    • 7 months ago

    That was a great review. Maybe it’s time for AMD to employ a hired gun to improve the design. But really a very credible contender.

      • chuckula
      • 7 months ago

      Sorry, we already poached Raja AND Jim Keller.

        • sweatshopking
        • 7 months ago

        AND MY BOY SCOTT WASSON

        • cynan
        • 7 months ago

        Cause the “gaming” GPU Raja’s team developed was even Vegaly relevant as a contender to Pascal and Turing.

          • chuckula
          • 7 months ago

          Raja left AMD for a reason. Something tells me that Vega wasn’t the product he wanted to make but didn’t get a choice or the resources to make what he wanted.

            • K-L-Waster
            • 7 months ago

            Or he was the designated fall guy for the HBM bet not working out.

            • Arbiter Odie
            • 7 months ago

            Purported mutiny attempt:

            [url<]https://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility[/url<]

            • chuckula
            • 7 months ago

            Well he does have a pirate beard.

            A hook and an eye patch would really tie it together.

        • Mr Bill
        • 7 months ago

        Yeah, that Zucks for AMD. Maybe AMD could lure William Starke to infuse their designs with more Power.

        • Mr Bill
        • 7 months ago

        I wanna see the next CPU he does for Intel. If they use him for that.

          • ermo
          • 7 months ago

          What else would they use him for?

      • Mr Bill
      • 7 months ago

      [quote<]we don't think a 15% improvement in 99th-percentile frame rates is an insurmountably high bar to clear if the goal is to catch the RTX 2080. [/quote<]

      That might make the purchase decision more interesting, especially if you needed the compute power or were doing graphic design. It looks like a smoking good deal for prosumers.

    • Chrispy_
    • 7 months ago

    Nice review Jeff.

    I initially skipped straight to the power/noise testing and was dumbfounded how a card that consumes less power than the Vega64 can be noisier than the reference blower.

    Do you have to return the card as you found it, or would some Dremel work to remove the big RADEON logo blocking the critical exhaust area be on the cards? Failing that, I’m wondering whether removing the shroud altogether would be an improvement?

    You didn’t mention temperatures, but from reading many reviews over the years, it would seem that the slot on top is more important than the slot on the bottom (because the bottom slot is obscured by the PCIe slots and the motherboard) – and when you look at the slot on the top it’s both narrow (by normal standards) and much of it is totally blocked by the logo! The fans are spinning hard because there’s no free air path.

      • psuedonymous
      • 7 months ago

      [quote<]I initially skipped straight to the power/noise testing and was dumbfounded how a card that consumes less power than the Vega64 can be noisier than the reference blower.[/quote<]

      That'd be the vapour chamber… sitting on a thermal pad between the heatsink and the GPU package. Yes, not paste, a [i<]pad[/i<]. Likely due to the massive headaches with the previous Vega cards where multiple different packages had differing package heights (~0.1mm height variation between suppliers) requiring either mutually-incompatible heatsink designs or retention mechanisms (the 'low' design would crush a 'high' package, a 'high' design would leave a large gap above the 'low' package) or mucking about with shims.

        • Voldenuit
        • 7 months ago

        According to GamersNexus, they [url=https://www.gamersnexus.net/guides/3436-amd-radeon-vii-tear-down-disassembly-graphite-pad<]quote the thermal conductivity of the pad to be 25-45 W/mK[/url<], which is in the same ballpark as liquid metal pastes (33-73 W/mK) and much better than traditional thermal pastes (4-8 W/mK). This is offset by the thermal pad being considerably thicker than a good thermal paste application, so it probably ends up about even with stock thermal paste, but is probably necessitated by the package height variations you mentioned. I don't think the thermal pad is to blame. Fanspeed curves, fanblade geometry, fin layout and the restricted airflow path probably contribute the lion's share of the noise.
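
        A back-of-the-envelope comparison of conductive resistance, R = thickness / (k × area), illustrates that trade-off (assumed thicknesses of ~0.2 mm for the pad and ~0.05 mm for paste, with the ~331 mm² die as the contact area; these are illustrative numbers, not measurements from the teardown):

        // Conductive thermal resistance: R = thickness / (k * area).
        const areaM2 = 331e-6;  // ~331 mm^2 die taken as the contact area (assumption)
        const rKPerW = (thicknessM: number, kWPerMK: number) => thicknessM / (kWPerMK * areaM2);

        const pad = rKPerW(0.20e-3, 35);   // graphite pad, middle of the 25-45 W/mK range
        const paste = rKPerW(0.05e-3, 6);  // conventional paste, middle of 4-8 W/mK

        console.log(`pad:   ${pad.toFixed(3)} K/W (~${(pad * 300).toFixed(1)} K drop at 300 W)`);     // ~5 K
        console.log(`paste: ${paste.toFixed(3)} K/W (~${(paste * 300).toFixed(1)} K drop at 300 W)`); // ~8 K
        // Comparable resistance despite the pad being ~4x thicker, which is the
        // "about even with stock paste" point made above.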

          • enixenigma
          • 7 months ago

          I wonder if one could fit a Morpheus II on this. Did wonders for the V64.

      • Waco
      • 7 months ago

      Once mine arrives I plan on testing undervolting as well as removing the shroud to see what the effects on noise / power consumption are.

        • enixenigma
        • 7 months ago

        I’ll be interested in your findings. I imagine that this would be a better product with some undervolting and a better cooler, like many of AMD’s past cards.

          • Waco
          • 7 months ago

          I’ll likely post up the results in the forum if they are interesting!

            • Chrispy_
            • 7 months ago

            I’m curious.

            I’m far more excited about how laptops and HTPCs could benefit from a 7nm lower-power part than to see what everyone already knew about Vega 20. AMD’s only game plan, EVER, is to factory overclock the hell out of everything they make and to hell with power, noise, efficiency.

            • Waco
            • 7 months ago

            That’s what the Vega APUs are for, no? One will likely be my next HTPC chip.

            • Mr Bill
            • 7 months ago

            Perceptions have shifted. Time was when power consumption and heat were nothing compared to clockspeed (e.g. Extreme Edition). But the Israeli Core family were so darned overclockable and apparently not that power hungry. Next, IPC became the rallying flag, and then power consumption became a thing. Most probably because we can measure it. Geeks love to measure every parameter and argue their merits.

            First dual-socket SMP systems and then multicore CPUs spoiled me into wanting quick-response multitasking more than single-thread kaboom. And now here we are, downthumbing a card that can game competently in its price class but also do 3.5 TFLOPS of double precision, because it uses more power than the competition. It’s silly. Put a water cooler on it and it’s all good.

            • Waco
            • 7 months ago

            Power consumption has never really concerned me much – just the associated noise that I didn’t appreciate. I don’t want something that idles at higher than the competition for a HTPC given that mine runs 24/7 though (and thankfully all modern cards are pretty good in that regard).

            • Chrispy_
            • 7 months ago

            I have an APU but the problem is that APUs will always be bandwidth starved, sharing DDR4 bandwidth with the CPU. To top it off, the CPU seems to get the priority in the power budget, so – at least with current implementations – the GPU performance is throttled first and then the CPU performance is throttled if there’s still not enough power in the thermal budget.

            On desktop at least, they’re better but you’re still far better off with something that has dedicated GDDR5 or faster unless you enjoy 720p

    • Krogoth
    • 7 months ago

    It performs as expected. It painfully suffers from GCN’s front-end bottleneck of only handling four triangles per clock in the geometry pipeline. The frame times on GCN SKUs make it very glaring (just check out those frequent pauses). NGG+ and primitive shaders were going to remedy the bottleneck but obviously that plan fell through.
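
    For a sense of what that front-end limit means in absolute terms, here is a rough sketch (assuming the card's ~1.75 GHz boost clock and no help from primitive-shader culling):

    // Peak geometry throughput implied by a 4-primitives-per-clock front end.
    const primsPerClock = 4;
    const boostGHz = 1.75;   // assumed typical boost clock

    console.log(`~${(primsPerClock * boostGHz).toFixed(1)} billion triangles/s peak`); // ~7.0
    // Vega 10 has the same 4-per-clock limit, so geometry throughput only scales
    // with clock speed -- none of the extra memory bandwidth helps a scene that
    // is bottlenecked at the front end.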

    I can’t honestly recommend the Radeon VII at its price point. The RTX 2080 and RTX 2070 are far superior buys. If for whatever reason you can’t go with the green team, just stick with Vega 56/64, which are at least obtainium and good deals at their price points.

      • chuckula
      • 7 months ago

      Gimme an M!

      Gimme an E!

      Gimme an H!

      What does it spell?

      KROGOTH!

    • f0d
    • 7 months ago

    going by past patterns, once they finally make another good gpu you can bet their cpus will suck again
    it seems they just can't get them both right in the same timeframe

    my new system is the opposite of my last one – previous was amd gpu and intel cpu and now i have amd cpu and nvidia gpu

    i wonder if intel will change things with their upcoming gpus?

    • USAFTW
    • 7 months ago

    Excellent review Jeff. We’ll miss your in-depth reviews.
    So it’s 1080 Ti levels of performance, at the same power, a bit less than two years later.
    What should worry AMD is that, with Navi being a 2019 launch, their next new graphics architecture gets pushed well into 2020. At that point, Nvidia will almost certainly have new chips on TSMC’s 7nm process ready to go.
    That cooler really is disappointing. You get all the disadvantages of an open air cooler with none of the upside. I’d be curious to see what the temperature and voltage looks like.

    • Luminair
    • 7 months ago

    The biggest problem isn’t even mentioned here. AMD is selling this at cost in small quantities to appear like they have a competitive product. It will trickle out to the die-hard fans and everyone else will keep AMD’s existence in mind for another year. Radeon 7 is a marketing ploy.

      • Spunjji
      • 7 months ago

      “AMD is selling this at cost”

      Do you have a source for that? I don’t imagine they’re making a ton of money on them, but as die-harvested parts I’d be surprised if they weren’t at least making a positive contribution towards the profitability of that particular chip design.

        • Srsly_Bro
        • 7 months ago

        You have to consider opportunity cost, also. If the GPU die is not qualified for enterprise, some costs are being recovered, at least. It may not be all bad for AMD.

          • Anonymous Coward
          • 7 months ago

          Of course it’s not all bad news; big fancy 7nm dies in the trashcan are zero revenue. They didn’t even fuse off that many resources, so it seems like they aren’t struggling too much with yields.

          • Luminair
          • 7 months ago

          > If the GPU die is not qualified for enterprise

          As far as we know this chip is identical to the Radeon Instinct MI50.

            • Krogoth
            • 7 months ago

            They ate too much power at load and/or had minor defects that ISV/enterprise customers didn’t want.

            Nvidia does the same thing with its Titan SKUs.

            • Anonymous Coward
            • 7 months ago

            We’re used to CPUs being graded according to quality, why should it be different for other chips?

        • K-L-Waster
        • 7 months ago

        There was this…

        [url<]https://www.shacknews.com/article/109364/report-amd-radeon-7-limited-to-5000-units-sold-at-a-loss[/url<]

        • Luminair
        • 7 months ago

        As far as we know this is identical to the supercomputer Radeon Instinct MI50 part, which they’ll be charging $10-20k for. Meaning, as long as they have limited production capacity and supercomputer demand, they are wasting money diverting them to be video cards.

        Separately, there have been media reports over the months that this chip was never intended to be a video card, that its launch now was a last-minute decision, that it costs a lot of money to make, and that the supply will be limited. AMD will never actually tell us any of this, so we’ve gotta use reporting and our brains. This is a gaming product with 16GB of HBM for no reason; why is that? Why would AMD throw money away?

          • Anonymous Coward
          • 7 months ago

          Oh come on, [i<]nobody[/i<] is going to divert silicon into comparatively low-margin products unless that silicon is not going to sell at higher margins. That would be insane. The far more sensible explanation is that these desktop parts were either too hot, or AMD had more than they could sell in servers.

      • Krogoth
      • 7 months ago

      Yep, it is a marketing move and a way to sell off sub-par Vega 20 yields to non-ISV/enterprise customers rather than having to recycle them completely.

    • enixenigma
    • 7 months ago

    Thanks as always for an awesome review, Jeff.

    My thoughts:

    – Two years later and AMD still can’t score an unqualified win over the 1080Ti
    – God, I wish that the 128 ROP rumor had turned out to be true.
    – HDR seems to hit the Red team particularly hard. Something to do with HDR defeating their color compression techniques, perhaps? May be way off base with that one.
    – Expecting ‘Fine Wine’ when AMD has had plenty of time with the Vega architecture and considering that Navi should be around the corner seems unrealistic.
    – Hopefully this thing undervolts like a mother…

      • Krogoth
      • 7 months ago

      The 128-ROP rumor made absolutely no sense. Whitepapers for Vega 20 had been out for a while, and they made it abundantly clear that it is little more than a die-shrunk Vega 10.

      HDR isn’t what is killing the AMD RTG team. It is GCN’s 2011-era geometry pipeline. NGG+ and primitive shaders would have helped remedy this through software (harnessing its excessive shading power for non-shading operations). It seems that AMD RTG has pretty much given up on NGG+ and primitive shaders in mainstream applications. It is very unlikely third-party developers will pick up the slack.

        • enixenigma
        • 7 months ago

        You’re right about the rumor. It’s nice to dream, though.

        I didn’t say that HDR is ‘killing’ performance, just that there seems to be a lot more frame time variance with it enabled. Hopefully that is something that can be corrected in drivers/software. Looking at the Anandtech(EDIT: can’t spell) review, they suggest that drivers are not where they should be for this product. That is both surprising (in the sense that this isn’t a radical departure from Vega 10) and predictable (in the old ‘lol AMD software’ sense).

        EDIT: Shame that they didn’t see the value in getting the more novel parts of the Vega architecture working via drivers. We’ll never know how much performance was lost due to that.

    • Srsly_Bro
    • 7 months ago

    It’s like April 2017 1080Ti performance almost two years later. Did AMD do good?

    • ermo
    • 7 months ago

    Lovely review.

    I’ll just go ahead and admit that it feels a little weird to see my RX Vega 64 effectively being reduced to runt-of-the-litter status.

    Oh well, progress marches on. I’ll be sure to tweak what little games I play to prefer shader-bound over fixed-function-hardware-bound techniques.

    • Topinio
    • 7 months ago

    Guess I’ll be keeping my RX Vega 56, then.

    I’m not bonkers enough to go paying £650 for maybe 30% more performance and 100% more noise.

    • Wall Street
    • 7 months ago

    Pretty disappointing that, years after “Inside the second,” this card is plagued by frame-stutter issues in half of the titles (Forza, AC Odyssey, FC3).

    Also surprised that nVidia has frame stutters in BF5.

    I wonder if there is something particularly hard to compute in those frames or if some buffer fills up every few seconds and needs to be flushed. The frame jitters look to be rhythmic, which suggests that it isn’t tied to scene complexity.

      • enixenigma
      • 7 months ago

      It seems like HDR may be the culprit for frame time spikes for a lot of titles in this review. I’d be interested to see if disabling HDR would make a difference.

        • juampa_valve_rde
        • 7 months ago

        I second this. The frame spikes look weird on the RVII, especially because the Vega doesn’t replicate the same behaviour.

      • chuckula
      • 7 months ago

      Didn’t Lisa Su show Forza at the CES event? So it’s not like Forza is some Nvidia-gimped title here.

    • chuckula
    • 7 months ago

    There is a silver lining: Apparently AMD knew what was coming and tweaked the firmware to enable 1/4 rate double precision after initially stating it would be 1/8 [Edit: Kampman’s statements about 1/8 rate are accurate based on what AMD originally said, BTW. This is an extremely last-minute change.] (See Anand’s article).

    So while these cards aren’t that good for gaming, they have decent double precision support for under $1k.
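
    A rough sketch of what the 1/4-rate change buys in raw throughput (assuming the card's 3840 shaders at the ~1.75 GHz typical boost and 2 FLOPs per shader per clock):

    // FP32 and FP64 throughput implied by the 1/4-rate change (back-of-the-envelope).
    const shaders = 3840;
    const typicalBoostGHz = 1.75;

    const fp32Tflops = (shaders * 2 * typicalBoostGHz) / 1000;
    console.log(`FP32: ~${fp32Tflops.toFixed(1)} TFLOPS`);                    // ~13.4
    console.log(`FP64 at 1/4 rate: ~${(fp32Tflops / 4).toFixed(1)} TFLOPS`);  // ~3.4
    console.log(`FP64 at 1/8 rate: ~${(fp32Tflops / 8).toFixed(1)} TFLOPS`);  // ~1.7, the originally stated cap
    // Roughly double the double-precision throughput of the original spec,
    // for well under $1k -- hence the GPGPU-hobbyist appeal.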

      • Krogoth
      • 7 months ago

      There is no doubt these guys are going to be snatched up by GPGPU hobbyists.

      • Fursdon
      • 7 months ago

      From reading around the internet, this seems like the point of this card. They had enough good salvage of the Vega 20 chip, and decided to put out a lesser prosumer card. I hope they did their due diligence and determined this was worth it.

      AMD’s graphics division has been on food stamps since sometime around or after the 7000 series launch. That series had 3 distinct new chips launched in short order for the various graphics segments. Everything since has been rebrands, single launches, and more power, with varying degrees of updates/changes.

        • EndlessWaves
        • 7 months ago

        Not true. Both GCN 1.1 and Polaris were two chips launched within a few months of each other (the 7790/260 and 290 and the 460 and 480).

          • Fursdon
          • 7 months ago

          Yeah, I looked back at this as well and thought it was a little wrong as well.

          The 7790 was a strange sidestep that sat oddly in between the 7850 and the 7770, the former of which could be found cheaply enough and provided a better experience. Overall they left 7850/7870 derivatives to take up the lion’s share of the hole underneath the 290, and then after Tonga, until Polaris came along. (Bonaire also launched in … March 2013, and Hawaii in … Nov. 2013? Going off wiki dates)

          The Polaris one is probably the better example, though that left a gaping hole above $250 (MSRP) in their lineup just as NVIDIA had released a powerful new pair. We don’t know if they ever designed a larger Polaris GPU and then scrapped the idea because Vega was near enough. The two chips here were solid updates overall though.

          I agree that you’re correct, but both of those launches felt … lacking.

        • jihadjoe
        • 7 months ago

        Yeah these are probably the Vega 20s that can’t quite make Instinct thermals or power requirements.

        • chuckula
        • 7 months ago

        Based on some of the productivity benchmarks out there, while I’m not overly impressed with AMD’s marketing push to make people think the Radeon VII is a gaming card first and foremost, I don’t think it’s a bad product. As a workstation product, I think it’s pretty good. It’s just a less-than-optimal product for high-end gaming because it’s not really designed primarily for gaming.

          • Voldenuit
          • 6 months ago

          I’d like to know if the Radeon VII has unlocked Radeon Pro drivers for Catia/Maya/Solidworks/Creo.

          If it’s going to be used as a workstation card, I’d like to know which applications will be accelerated, and which are crippled in software.

    • Tristan
    • 7 months ago

    Let them make a variant with 8GB of HBM that’s $150 cheaper.

      • Krogoth
      • 7 months ago

      Not possible without making completely new silicon. That’s one of the big drawbacks of HBM2: it doesn’t scale down without having to completely rework the silicon. You can’t just remove traces and memory chips and be done with it.

        • Waco
        • 7 months ago

        Well, that’s technically not true. They could easily just drop one or two of the dies and leave them disconnected. It would probably hurt the performance by a pretty easily measurable amount though.

          • Krogoth
          • 7 months ago

          That requires a redesign of the chip packaging and card validation, which altogether costs nearly as much as designing new silicon.

          It is much more expensive to downscale HBMx when compared to GDDRx.

            • Waco
            • 7 months ago

            It means a new qual process, yes, but it’s literally the same as GDDR[X] in terms of leaving things off.

            The only difference (which does have a cost difference) is between removing a chip from the board and removing a chip from the interposer. I understand why they did not do the latter.

          • Jeff Kampman
          • 7 months ago

          There’s nothing stopping someone from making 2 GB HBM2 stacks, to my understanding, but the availability of said memory could be another story.

            • Waco
            • 7 months ago

            From what I understand nobody wants to buy less-dense HBM2 stacks – so I doubt AMD would push for their own SKU on what is effectively a stop-gap product.
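
A quick back-of-the-envelope illustrates the trade-off weighed in the thread above. The sketch below is not AMD’s math; it simply assumes the Radeon VII’s published memory configuration (four HBM2 stacks, 4 GB and a 1,024-bit interface per stack, 2 Gb/s per pin) and shows what dropping stacks would do to capacity and bandwidth.

# Rough sketch of what removing HBM2 stacks would cost, assuming the
# Radeon VII's shipping configuration: four 4 GB stacks, a 1024-bit
# interface per stack, and 2 Gb/s per pin.

GB_PER_STACK = 4        # capacity per HBM2 stack
BITS_PER_STACK = 1024   # interface width per stack
GBPS_PER_PIN = 2        # signaling rate per pin, in gigabits per second

def hbm2_config(stacks):
    capacity_gb = stacks * GB_PER_STACK
    bus_width = stacks * BITS_PER_STACK
    bandwidth_gbs = bus_width * GBPS_PER_PIN / 8   # bits -> bytes
    return capacity_gb, bus_width, bandwidth_gbs

for stacks in (4, 3, 2):
    cap, width, bw = hbm2_config(stacks)
    print(f"{stacks} stacks: {cap:2d} GB, {width}-bit bus, {bw:4.0f} GB/s")

# 4 stacks: 16 GB, 4096-bit bus, 1024 GB/s
# 3 stacks: 12 GB, 3072-bit bus,  768 GB/s
# 2 stacks:  8 GB, 2048-bit bus,  512 GB/s

In other words, an 8 GB variant built by dropping stacks would also give up half the card’s memory bandwidth, which is roughly the trade-off Waco and Krogoth are arguing about.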

    • Prestige Worldwide
    • 7 months ago

    Guess I’ll be sticking with my AIB GTX 1080 until the next generation of Geforces rolls around… Neither Nvidia nor AMD can give me a good reason to upgrade.

      • kvndoom
      • 7 months ago

      Will probably upgrade my GTX 970 for a 1070 or 1080 this summer when Doom launches, but I’m not even sure I’ll need to. It’s not *that* far behind.

        • Krogoth
        • 7 months ago

        The 970 will start to run into VRAM limitations, especially given its odd memory partitioning (3.5 GiB + 0.5 GiB).

          • Spunjji
          • 7 months ago

          I’m actually noticing this already in a few games. It’s sad really, as otherwise that architecture has serious legs.

          • Chrispy_
          • 7 months ago

          Treating the 970 as anything other than a 3.5GB card is a mistake. I just turned settings down to ensure I never used the full 4GB of VRAM on mine, because the performance nosedive simply isn’t worth it.

          • kvndoom
          • 7 months ago

          I know it might, but I do so little gaming now that it’s worth waiting and seeing before taking the plunge. The 970 handled Doom 2016 just fine, so unless it’s a new engine or heavily revamped I might be okay.

          TLDR I’m going to buy the game first and then decide. That’s a fixed $60 expense, the other $300 may or may not be necessary. 😉

      • gerryg
      • 7 months ago

      Yeah, I’ve had my Voodoo card for this long, why change now?

    • chuckula
    • 7 months ago

    maroon1 gets credit for a pretty accurate (maybe even optimistic) prediction: https://techreport.com/news/34446/in-the-lab-amd-radeon-vii-graphics-card?post=1101983

      • cynan
      • 7 months ago

      Cause 4% higher system power is “way” more?

        • derFunkenstein
        • 7 months ago

        4% of total system draw. When you take all the other components out of that, my guess is it’s more like 6-8% or so.

        Don’t forget that since the Radeon VII is slower, the CPU isn’t working as hard.

          • derFunkenstein
          • 7 months ago

          Wow, it’s more like 30% extra power draw.

          Tom’s shows the Radeon VII draws 297 W average in Metro 2033: https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-7nm,5977-5.html

          And Tom’s also showed that the RTX 2080 FE averaged around 225 W: https://www.tomshardware.com/reviews/nvidia-geforce-rtx-2080-founders-edition,5809-10.html

          So it’s a 70-ish watt difference.

            • cynan
            • 7 months ago

            You are comparing power draw to the wrong card. maroon1 specifically predicted the Radeon VII would draw “much more power than RTX 2080 Ti,” which according to Tom’s (https://www.tomshardware.com/reviews/gigabyte-aorus-geforce-rtx-2080-ti-xtreme-11g,5953-4.html) draws about 75 more watts in gaming than the RTX 2080 card they tested. The RTX 2080 Ti Tom’s tested draws an average of 302 W in Metro 2033 (vs. the 297 W for the Radeon VII, as you stated).

            • derFunkenstein
            • 7 months ago

            Ah ok.

            Still not great. 30% more power for *maybe* the same performance, with FXAA only.
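
For what it’s worth, the percentages in the thread above check out if you take the quoted card-level numbers at face value (roughly 297 W for the Radeon VII and 225 W for the RTX 2080 FE in Tom’s gaming test); a minimal sketch:

# Sanity check on the "30% more power" figure, using the card-level
# numbers quoted in the thread above (~297 W Radeon VII, ~225 W RTX 2080 FE).

radeon_vii_w = 297
rtx_2080_w = 225

delta_w = radeon_vii_w - rtx_2080_w
delta_pct = delta_w / rtx_2080_w * 100
print(f"Radeon VII draws {delta_w} W more ({delta_pct:.0f}% above the RTX 2080)")
# Radeon VII draws 72 W more (32% above the RTX 2080)

Against the ~302 W quoted for Tom’s RTX 2080 Ti, the same arithmetic puts the Radeon VII roughly 2% below it, which is cynan’s point.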

    • ronch
    • 7 months ago

    Oh well.

      • Chrispy_
      • 7 months ago

      You had higher expectations for a die-shrink of the Vega64?

      Sure, there’s more memory bandwidth than it could ever use, and that perhaps helps offset the loss of four compute units (this is a Vega 60, really), but as for new architecture or anything interesting, it’s just an overclock of August 2017’s Vega.

        • enixenigma
        • 7 months ago

        True. To be fair, it does exactly what AMD said it would do relative to its previous high-end card (the Vega 64). It actually does slightly better than claimed, according to the benchmarks. Compared to the competition, however…

        • ronch
        • 7 months ago

        Honestly, I wasn’t expecting anything. GCN and its iterations haven’t been impressive in terms of energy efficiency since 2013 or so, when Maxwell came out. Performance-wise it’s not really bad, but AMD hasn’t had a clear win over Nvidia in forever anyway. Given their focus on the CPU side of things since 2012, however, it’s not surprising.
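
Chrispy_’s “overclocked Vega 60” framing above is easy to sanity-check with some rough shader math. The sketch below assumes 64 FP32 lanes per CU, two FLOPs per lane per clock, and the commonly quoted boost clocks (about 1,546 MHz for the air-cooled RX Vega 64, 1,750 MHz typical for the Radeon VII); it’s illustrative arithmetic, not an official spec sheet.

# Back-of-the-envelope FP32 throughput, assuming 64 FP32 lanes per CU,
# 2 FLOPs per lane per clock (FMA), and commonly quoted boost clocks.

def fp32_tflops(compute_units, boost_ghz, lanes_per_cu=64, flops_per_clk=2):
    return compute_units * lanes_per_cu * flops_per_clk * boost_ghz / 1000

rx_vega_64 = fp32_tflops(64, 1.546)   # air-cooled boost clock
radeon_vii = fp32_tflops(60, 1.750)   # typical boost clock

print(f"RX Vega 64: {rx_vega_64:.1f} TFLOPS")
print(f"Radeon VII: {radeon_vii:.1f} TFLOPS ({radeon_vii / rx_vega_64 - 1:+.0%})")
# RX Vega 64: 12.7 TFLOPS
# Radeon VII: 13.4 TFLOPS (+6%)

So the clock bump more than covers the four disabled CUs, but only by mid-single digits; the bigger generational change is the doubled memory bandwidth.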

    • derFunkenstein
    • 7 months ago

    Good thing a process node shift did nothing to alter that classic AMD Radeon combination. It’s too bad that combination is “more power consumption for less performance”.

      • chuckula
      • 7 months ago

      NEVER CHANGE AMD!!

      • Spunjji
      • 7 months ago

      Did anyone seriously think it would, though? It’s been clear for a while that Vega lags behind Nvidia’s best by more than a single die-shrink worth of power/performance.

        • derFunkenstein
        • 7 months ago

        Well, it did *help* perf/watt, because it brings total system power down by around 10% and increases performance by around 30%. But it’s not even close to catching Nvidia on perf/watt, and that’s with a process-node advantage. That’s disappointing, but not unexpected. You can expect the worst and still be disappointed when it’s not as bad as it could have been.

          • Spunjji
          • 7 months ago

          Fair point! My disappointment reserves were entirely drained by Vega 64, so everything since then has failed to register. 😀
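
Taking derFunkenstein’s ballpark figures above at face value (around 10% lower total system power and around 30% higher performance than an RX Vega 64 system), the implied system-level efficiency change works out as follows; a minimal sketch, not a measured result:

# Implied system-level perf/W change from the ballpark numbers above:
# ~10% lower total system power, ~30% higher performance vs. RX Vega 64.

power_ratio = 0.90   # Radeon VII system power relative to Vega 64
perf_ratio = 1.30    # Radeon VII performance relative to Vega 64

perf_per_watt_gain = perf_ratio / power_ratio - 1
print(f"Approximate perf/W improvement: {perf_per_watt_gain:.0%}")
# Approximate perf/W improvement: 44%

A gain in the neighborhood of 44% sounds healthy in isolation, which only underlines how wide the efficiency gap to Nvidia was to begin with.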

    • chuckula
    • 7 months ago

    Thanks as always Jeff!!

    That scatter plot says quite a bit.

    Reaction after Turing launched: This sucks I’m getting a GTX-1080Ti.

    Reaction after Radeon VII launches: This sucks I’m getting a GTX-1080Ti.

    Nah.. who am I kidding?

    Reaction after Radeon VII launches: FINE WINE! Because you know, AMD will totally devote all its driver resources to Vega after Navi launches.

      • Krogoth
      • 7 months ago

      AMD RTG missed one key ingredient that could have saved the Radeon VII.

      GORILLA GLUE X!

        • chuckula
        • 7 months ago

        I mean, if they ripped out all those HBM controllers, that PCIe 4.0 crap, and threw in a 14nm I/O chip to do it instead… Who knows what might happen!
