AMD drops new Radeon RX 5700 details and a 16-core Ryzen 9 at E3

AMD just wrapped up its Next Horizon Gaming event at E3, and as promised at Computex, it served up some details on its upcoming Radeon products. That’s not all we got, though—in the presentation’s final moments, AMD dropped a sixteen-core bomb on us in the form of the Ryzen 9 3950X. I’ll get to that in a moment; first, let’s talk about these new Radeons.

AMD’s Next Horizon Gaming lineup

| Card | Core config | Base clock | Game clock | Boost clock | Memory config | Memory speed | Architecture & process | Price (USD) |
|---|---|---|---|---|---|---|---|---|
| Radeon RX 5700XT Anniversary | 2560 SP (40 CU) | 1680 MHz | 1830 MHz | 1980 MHz | 256-bit GDDR6 | 14 GT/s | RDNA, 7nm TSMC | $499 |
| Radeon RX 5700XT | 2560 SP (40 CU) | 1605 MHz | 1755 MHz | 1905 MHz | 256-bit GDDR6 | 14 GT/s | RDNA, 7nm TSMC | $449 |
| Radeon RX 5700 | 2304 SP (36 CU) | 1465 MHz | 1625 MHz | 1725 MHz | 256-bit GDDR6 | 14 GT/s | RDNA, 7nm TSMC | $379 |
| Radeon RX Vega 64 | 4096 SP (64 CU) | 1247 MHz | N/A | 1546 MHz | 2048-bit HBM2 | 1.89 GT/s | GCN 5, 14nm GloFo | $499 |
| Radeon RX Vega 56 | 3584 SP (56 CU) | 1156 MHz | N/A | 1471 MHz | 2048-bit HBM2 | 1.6 GT/s | GCN 5, 14nm GloFo | $399 |
| Radeon RX 590 | 2304 SP (36 CU) | 1469 MHz | N/A | 1545 MHz | 256-bit GDDR5 | 8 GT/s | GCN 4, 12nm GloFo | $279 |

So right away, the above chart will probably give gerbils pause. “What is ‘Game clock’?” you wonder. Simply put, AMD’s old “Boost clock” was the maximum clock rate the card could hit. Since that was more of a theoretical peak that the card wouldn’t always sustain during gameplay, AMD is now offering this “Game clock” metric for added transparency, to give gamers a better idea of the card’s typical clock rate while gaming.

With that curiosity resolved: AMD is launching two new video cards in July, the Radeon RX 5700XT and a slightly de-tuned version that drops the “XT” suffix. The return of the “XT” suffix on the top model is a nostalgic touch, even more so than the familiar “5700 series” moniker. As the company said at Computex, the new cards are based on the RDNA architecture, which is derived from, but not identical to, the GCN architecture that has powered the company’s cards since 2011.

AMD CEO Lisa Su holds a Radeon RX 5700XT Anniversary Edition card bearing her signature on the shroud.

There’s also a factory-overclocked model of the faster card on the way to celebrate the company’s 50th anniversary. That move reminds us of competitor Nvidia’s “Founders Edition” cards, as well as AMD’s own Radeon Vega Frontier Edition. The grey-and-gold heatsink shroud comes with Lisa Su’s signature, and AMD says the Anniversary Edition will only be available direct from the company’s website.


AMD compared itself to the competition in World War Z.

On stage, AMD once again compared the Radeon RX 5700XT to Nvidia’s GeForce RTX 2070, as it did at Computex, and this time claimed victory in a brief World War Z benchmark. We’ll quickly note that World War Z is a Vulkan title that performs very well on AMD hardware, so take these results with a grain of salt (as you should any vendor-provided benchmark).

The company then compared the Radeon RX 5700 against the GeForce RTX 2060 in an odd impromptu “benchmark” in Apex Legends, where one character spammed incendiary grenades at another. The RX 5700 held a more stable frame rate than the GeForce card, but we can’t say how representative this test is.

AMD went on to talk about some of the new software coming to Radeon cards, including FidelityFX, Radeon Image Sharpening, and Radeon Anti-Lag. The company demoed each feature only briefly. FidelityFX appears to be a red-team take on Nvidia’s GameWorks library, offering AMD-authored visual effects for developers to use in their games, although AMD’s version is open-source. It’s not at all clear what Radeon Image Sharpening does; we’ll have to try to get more details from AMD about the feature.

Meanwhile, AMD claims Radeon Anti-Lag actually reduces input lag, or “motion to photon latency.” The “demo” of this feature was little more than an on-screen number decreasing, and honestly was a little underwhelming. However, if it works as described, it could be pretty great for reaction-heavy games.

AMD didn’t offer many new details about the RDNA architecture on the stream, and unfortunately, we’re not at E3 to ask the company about the new chips. However, the boys from AnandTech are on the scene, and Ryan Smith over there already has a pretty solid preliminary write-up posted. Check out his article for more info on RDNA.

On the CPU side of things, AMD covered the new Ryzen CPUs it announced at Computex pretty thoroughly at the beginning of its E3 show, and we—like most viewers, we imagine—tuned out afterward, feeling a bit let down by the lack of a 16-core CPU announcement. As it turns out, Lisa Su saved the best for last and introduced the Ryzen 9 3950X to close out the show.

Yes, indeed: it has 16 cores and 32 threads, runs at a 3.5-GHz base clock, and boosts to 4.7 GHz. It has 64 MB of L3 cache, and it still fits in Socket AM4 at a 105-W TDP. It’s impressive stuff, and while the $749 price tag seems high, consider that AMD probably can’t build very many of these chips. We reckon those big 64-core EPYC CPUs get dibs on the best fully enabled Zen 2 chiplets.

AMD also announced release dates for all the new stuff. Release “date,” anyway—aside from the Ryzen 9 3950X (which is coming some time in September), everything else is launching worldwide on July 7.

236 Comments
dragontamer5788
3 years ago

Dwarf Fortress and Factorio, the two games of highest simulation detail I’m aware of, are RAM-latency bottlenecked.

Updating megabytes, or gigabytes, of data every second, a little bit at a time (a dwarf’s left arm gets injured: that’s the level of detail of Dwarf Fortress), is RAM latency. There’s too much information to keep in cache, and it’s single-thread bound because simulations are very difficult to write in a parallel manner.
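
dragontamer’s latency point is easy to demonstrate with a pointer-chasing microbenchmark. The following C sketch is purely illustrative (our own, not from the comment): it walks a random cyclic permutation, so every load depends on the previous one and the hardware prefetcher can’t help. Once the working set outgrows the last-level cache, each hop pays full DRAM latency.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a random cyclic permutation of n_elems slots `hops` times and
   return the average nanoseconds per hop. Each load depends on the
   previous one, so this measures latency, not bandwidth. */
static double chase(size_t n_elems, size_t hops)
{
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) { perror("malloc"); exit(1); }
    for (size_t i = 0; i < n_elems; i++)
        next[i] = i;
    /* Sattolo's algorithm: shuffle into a single cycle so the walk
       visits every slot in an order the prefetcher can't predict. */
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }
    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t h = 0; h < hops; h++)
        p = next[p];                  /* the serial dependency chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile size_t sink = p;         /* keep the loop from being optimized out */
    (void)sink;
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 +
            (t1.tv_nsec - t0.tv_nsec)) / (double)hops;
}

int main(void)
{
    const size_t hops = 20 * 1000 * 1000;
    /* ~32 KB fits comfortably in L1/L2; ~256 MB is far beyond any L3. */
    printf("cache-resident: %.1f ns/hop\n",
           chase(32 * 1024 / sizeof(size_t), hops));
    printf("DRAM-resident:  %.1f ns/hop\n",
           chase(256UL * 1024 * 1024 / sizeof(size_t), hops));
    return 0;
}
```

Compiled with gcc -O2, the DRAM-resident walk typically lands an order of magnitude or more above the cache-resident one in nanoseconds per hop; the exact figures vary by CPU and memory.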

Taxythingy
3 years ago
Reply to  derFunkenstein

Me to Supreme Commander: “Would it help if I got out and pushed?”

There were three things that helped: MHz, IPC and Sorian AI. The last one helped by reducing CPU unit spam – harder by being smarter, not bigger.

tipoo
3 years ago
Reply to  blastdoor

It would have been kind of interesting to see the what-if-machine world where there was no 360 and developers just had to bite the bullet and learn the Cell way of doing things. The small handful of games that did use it well were impressive for the gen, but very few titles pulled ahead of what also ran on Xenon.

sleeprae
3 years ago
Reply to  blastdoor

I didn’t introduce the term (in fact I re-used it in quotes), but if you look at the context in which it was first used, clearly the OP did not intend it in the traditional sense.

But thank you for being pedantic, that definitely serves the conversation.

blastdoor
3 years ago
Reply to  tipoo

It’s always fun to see what people can do when pushed. All the excuses for why something can’t be done evaporate under pressure.

blastdoor
3 years ago
Reply to  sleeprae

https://en.m.wikipedia.org/wiki/Predatory_pricing

derFunkenstein
3 years ago
Reply to  DoomGuy64

It was cache starved, so of course memory speed and timings had a big impact.

Mr Bill
3 years ago
Reply to  jihadjoe

That reminded me of ‘Mafia Wars’. I would get 10 or so browser windows open and then cycle through them as fast as I could mouse click the attack button in each window. It worked great!

Anonymous Coward
3 years ago
Reply to  DoomGuy64

Well, it’s kind of old news, but the reason there isn’t much margin to exploit these days is that the manufacturers have become very adept at exploiting it themselves; arguably, margin-harvesting has been a big part of the increase in performance for many years.

Anonymous Coward
3 years ago
Reply to  tipoo

Not just double the cache: unify it and then double it, last I heard. Just the kind of solution I always wanted to see someone try.

Spunjji
3 years ago
Reply to  srg86

Early indications are that on Intel’s side of things, the lower clock speeds currently attainable on 10nm mostly offset the performance increase from the core enhancements.

I don’t think they’ll do terribly, but I’m also not expecting Intel to suddenly leap back into a definitive lead.

Spunjji
3 years ago
Reply to  Ifalna

You were correct! 9800 Pro and an overclock was the way to go ;D

ronch
3 years ago
Reply to  Redocbew

Lighten up.

DoomGuy64
3 years ago

Perhaps, but A64 performance also depended on memory speed and timings, and a 64-bit OS helped (not Vista). I used one of the Socket 939 Opterons overclocked with DDR500 at a 1:1 FSB ratio; it ran faster than anything else out there. It even ran id’s Rage when it was serving as the secondary PC.

Considering how far people could push the A64 and Phenom II with overclocking and high-speed memory, Ryzen feels like a letdown in terms of being able to push the envelope. It really doesn’t like overclocking or high-speed RAM, but it still does well where you can push it.

jihadjoe
3 years ago
Reply to  ronch

It can be if you’re a multi-boxer =)

tipoo
3 years ago

Multicore game code was dragging its feet until the 8th-gen consoles with their seven available, individually weak Jaguar cores: you HAD to use them all because each one sucked on its own. Hopefully this gen sees another big increase in multithreaded code, as it’ll have 16 threads.

tipoo
3 years ago

Yeah, that 32 MB of L3 (for the 8-core) is interesting. I like how their solution to cross-CCX communication limiting max framerates in games was “well, we’ll double the cache,” lol

Anonymous Coward
3 years ago
Reply to  derFunkenstein

I rigged up some CPU-metrics gathering on some gaming boxes here and played SupCom 2v2 with AI opponents, lots of death, but I wasn’t really very impressed by the extent to which it used a quad core. This includes a Q9550, which is certainly robust enough for the game, yet still similar to a top-end CPU from when the game launched.

SupCom does appear to value a decent-sized L2/L3 though. (Hmm, 32 MB L3.) Ran like crap on those A64 X2s with 512-KB L2s, I recall.

derFunkenstein
3 years ago

Supreme Commander came about as close as I’ve ever seen to totally killing a CPU. The Forged Alliance expansion came out in 2007 but I couldn’t reliably play a 4-player game (3 CPU opponents) without the game world slowing down below real-time speed until I had a Phenom II X4.

ronch
3 years ago

As someone who’s had his FX-8350 for a long time, I couldn’t agree with you more. But by the time those days come around, I’ll probably have long since upgraded to a Ryzen.

Anonymous Coward
3 years ago
Reply to  ronch

I’d love simulation-style games with AI or other world updates that could consume that much CPU, though. A world of stuff happening, ecosystems to play with, etc. So let’s see high core counts and hope for the best.

Redocbew
3 years ago
Reply to  ronch

I get it dude. Stories like this are the tabloids of the tech press. Most of the people that follow them do so for entertainment rather than information, but I don’t have much interest in speculation without some reasonable basis in fact. That’s in seriously short supply here. Without that we may as well be listening to the homeless dude on the corner with a blanket in one hand and a sock monkey in the other screaming at anyone who will listen.

ronch
3 years ago

We all know 16 cores isn’t for gaming.

ronch
3 years ago
Reply to  Redocbew

Speculation is part of every product launch. Especially with AMD. And it’s fun.

charged3800z24
3 years ago
Reply to  jihadjoe

It seems I must have confused people with my previous reply. Your point is null. Intel matched AMD’s price, or was sometimes higher, during the Athlon 64 era. Intel’s P4 XE chips not only got beat by the same-priced Athlon 64 FX, but also by the Athlon 64 3500+, which, as I said, cost dramatically less than the P4 XE chips. That is why Intel gets crap about the pricing: they never offer the best bang for your buck, and they charge a premium whether they are in the lead or not. AMD prices where they fit. And with the new 3000-series CPUs…

anotherengineer
3 years ago

So how many games can use all 16 cores? Is there a list somewhere with all the games and how many threads they can use?

Redocbew
3 years ago
Reply to  ronch

We don’t know. That’s my point. Everyone has different reasons for rationalizing the cost of an expensive product, and they don’t always have to make perfect sense, but I don’t think we have enough information to figure out exactly what the value proposition is yet.

ronch
3 years ago
Reply to  Redocbew

Um, that was only for illustration. The point is, if AMD can give you, as they usually do, better performance/$, then is it really a bad deal?

Redocbew
3 years ago
Reply to  ronch

You know this is the same benchmark which routinely puts iPads and Xeons on equal footing, right?

ronch
3 years ago
Reply to  Sahrin

Yeah, it’s $750, but if it can beat $1,000 or $2,000 Intel chippery then DAMMIT it’s a good deal!!

Anonymous Coward
3 years ago
Reply to  blastdoor

Yeah, they and all the big Unix-RISC clients were kicking butt then, also the big-cache Xeons. Then they refactored the code to work in smaller caches! Noooooo.

ronch
3 years ago
Reply to  ShowsOn

I’m not sure who’s really running TR these days though. There’s only Adam Eiberger and Bruno. Seth Colaner is officially the head honcho but I don’t think he really wants the job. So it’s like TR doesn’t really have a crew running the ship these days.

blastdoor
3 years ago

I’m remembering when the PPC G3 crushed the Pentia in running SETI@home, due to the big L2 cache on the G3.

It’s always great when things fit in a cache.

Wonders
3 years ago
Reply to  ShowsOn

At the end, Kanter will be giving away a pony to one very lucky gerbil!

Wonders
3 years ago
Reply to  K-L-Waster

Top-shelf banter is what this is, right here folks.
Young’uns, take note.

Anonymous Coward
3 years ago
Reply to  blastdoor

A related thing about bandwidth that has impressed me is the experience of running a database on what is essentially storage over Ethernet, as is the modern thing in, for example, Amazon’s RDS. It doesn’t work nearly as badly as you’d expect, depending on what you are doing. A database… on sometimes less than a gigabit of bandwidth. It’s nuts.

I think those dual memory channels should do OK in a number of non-saturation situations.

ronch
3 years ago
Reply to  tootercomputer

It’s good for epeen, of course.

blastdoor
3 years ago

Yeah.

Also, the memory controller is now a little bit less integrated than before.

Can that big L3 cache overcome both a somewhat non-integrated memory controller AND just 1/8 of a memory channel per core?

Color me skeptical.
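
The fraction blastdoor cites is simple arithmetic: two 64-bit DDR4 channels shared across 16 cores. Here’s a quick, illustrative C sketch that puts numbers on it (DDR4-3200 is our assumed memory speed for the sake of the math, not a figure from the presentation):

```c
#include <stdio.h>

/* Back-of-the-envelope bandwidth per core for a 16-core part on
   dual-channel memory. DDR4-3200 is an assumed (illustrative) speed. */
int main(void)
{
    const double transfers_per_sec = 3200e6; /* DDR4-3200: 3200 MT/s */
    const double bytes_per_xfer    = 8.0;    /* 64-bit channel width */
    const int    channels          = 2;      /* dual-channel Socket AM4 */
    const int    cores             = 16;     /* Ryzen 9 3950X */

    double total_gbps = transfers_per_sec * bytes_per_xfer * channels / 1e9;
    printf("peak bandwidth: %.1f GB/s total, %.1f GB/s per core if all are busy\n",
           total_gbps, total_gbps / cores);  /* ~51.2 and ~3.2 */
    return 0;
}
```

Roughly 3 GB/s per core with everything loaded, which is exactly why that big L3 is doing so much work in this design.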

K-L-Waster
3 years ago
Reply to  srg86

Ice Lake etc. only help if they actually ship.

srg86
3 years ago
Reply to  ronch

Definitely sounds like what Intel did with Haswell: changing things around on the back end to increase resources.

As for rocking Intel’s boat: for Coffee Lake, sure, but for Ice Lake with Sunny Cove I highly doubt it, since these “Haswell”-type revisions are being performed there again (and possibly to an even larger extent).

Anonymous Coward
3 years ago
Reply to  blastdoor

I recall waaaaay back in the day that AMD had both single- and dual-channel sockets available for single-core chips, and I guess the dual-channel option performed better. Now we get to see how it goes with 8 cores per channel.

Chrispy_
3 years ago
Reply to  barich

The underdog is only good insofar as their competition drives down the high prices of the market leader(s). For our own benefit as consumers, competition is good, so the established leader ‘winning’ is bad for us and the underdog ‘winning’ is good for us. The minute AMD has enough power and influence to control pricing such that other companies need to undercut them and outperform them to gain market position, AMD becomes the ‘bad guys’. All corporations are facelessly selfish, profit-driven bad guys, but in the relative effect they have on the end…

ronch
3 years ago

For those interested in Zen 2’s microarchitectural improvements, head over to AnandTech. It’s an interesting read, and I reckon the changes from Zen 1 to Zen 2 are similar to the changes from Sandy/Ivy to Haswell: bigger micro-op cache, smaller but faster L1I cache, better branch prediction, one more AGU (making it 3 AGUs) so it’s a slightly wider core, full-width 256-bit AVX2, more instructions in flight, better hardware security, new cache-specific instructions… it’s a pretty thoroughly revised core. I think Zen 2 is gonna rock Intel’s boat. They also talked about Zen 3, Zen 4 and even Zen…

jihadjoe
3 years ago
Reply to  ShowsOn

Ah it was such a joy every time a new TR podcast episode went up.

kuraegomon
3 years ago
Reply to  Gadoran

I wouldn’t bet too much money that Intel’s feeling very comfortable right now. ASPs are absolutely going to drop across the board, which means that margins are going to drop, and Intel remains capacity-constrained into 2020. Not much comfort to be found there.

kuraegomon
3 years ago
Reply to  Gadoran

Yield levels aren’t static over a particular vendor/process/layout combination’s lifetime. The vendor figures out improvements to a process and works with the customer to figure out how to leverage those improvements for a given chip layout. Both parties are heavily incentivized to improve yields, for obvious reasons, and they always do so. The exact amount of those yield improvements is widely variable… but they always occur.

Zizy
3 years ago
Reply to  DPete27

Yup, I agree completely.

And figure out a way to properly test this “lower latency” mode, please. It would be a shame to see an important feature (purportedly pushed forward by your previous boss!) go untested.

ronch
3 years ago
Reply to  Goty

Yeah. Ditto.

Redocbew
3 years ago

It’s a good one, but don’t use it to describe robots. They hate it when you do that.

charged3800z24
3 years ago
Reply to  sleeprae

Yeah, but for a price near $1,000, AMD offered the fastest CPU. In the same time frame as the Athlon 64 FX-57, Intel had Pentium 4 XE CPUs that cost as much as the AMD FX and were beaten by the Athlon 64 3500+. And that cost half the price!
