
AMD drops new Radeon RX 5700 details and a 16-core Ryzen 9 at E3

Zak Killian

AMD just wrapped up its Next Horizon Gaming event at E3, and as promised at Computex, it served up some details on its upcoming Radeon products. That’s not all we got, though—in the presentation’s final moments, AMD dropped a sixteen-core bomb on us in the form of the Ryzen 9 3950X. I’ll get to that shortly; first, let’s talk about these new Radeons.

| AMD’s Next Horizon Gaming | Core config | Base clock | Game clock | Boost clock | Memory config | Memory speed | Architecture & process | Price (USD) |
|---|---|---|---|---|---|---|---|---|
| Radeon RX 5700XT Anniversary | 2560 SP (40 CU) | 1680 MHz | 1830 MHz | 1980 MHz | 256-bit GDDR6 | 14 GT/s | RDNA, 7nm TSMC | $499 |
| Radeon RX 5700XT | 2560 SP (40 CU) | 1605 MHz | 1755 MHz | 1905 MHz | 256-bit GDDR6 | 14 GT/s | RDNA, 7nm TSMC | $449 |
| Radeon RX 5700 | 2304 SP (36 CU) | 1465 MHz | 1625 MHz | 1725 MHz | 256-bit GDDR6 | 14 GT/s | RDNA, 7nm TSMC | $379 |
| Radeon RX Vega 64 | 4096 SP (64 CU) | 1247 MHz | N/A | 1546 MHz | 2048-bit HBM2 | 1.89 GT/s | GCN 5, 14nm GloFo | $499 |
| Radeon RX Vega 56 | 3584 SP (56 CU) | 1156 MHz | N/A | 1471 MHz | 2048-bit HBM2 | 1.6 GT/s | GCN 5, 14nm GloFo | $399 |
| Radeon RX 590 | 2304 SP (36 CU) | 1469 MHz | N/A | 1545 MHz | 256-bit GDDR5 | 8 GT/s | GCN 4, 12nm GloFo | $279 |

So right away, the above chart will probably give gerbils pause. “What is ‘Game clock’?” you wonder. Simply put, AMD’s old “Boost clock” figure was the maximum clock rate the card could hit. That was more of a theoretical ceiling, and the card wouldn’t always reach that speed during gameplay, so for added transparency, AMD now offers this “Game clock” metric to give gamers a better idea of a card’s typical clock rate while gaming.

With that curiosity resolved: AMD is launching two new video cards in July, the Radeon RX 5700XT and a slightly de-tuned version that drops the “XT” suffix. The return of the “XT” branding for the top model is even more nostalgic than the use of the familiar “5700 series” moniker. As the company said at Computex, the new cards are based on the RDNA architecture, which is derived from, but not identical to, the GCN architecture that has powered the company’s cards since 2011.

AMD CEO Lisa Su holds a Radeon RX 5700XT Anniversary Edition card bearing her signature on the shroud.

There’s also a factory-overclocked model of the faster card on the way to celebrate the company’s 50th anniversary. That move reminds us of competitor Nvidia’s “Founders Edition” cards, as well as AMD’s own “RX Vega Frontier Edition” card. The grey-and-gold heatsink shroud comes with Lisa Su’s signature, and AMD says the Anniversary Edition will only be available direct from the company’s website.


AMD compared itself to the competition in World War Z.

On stage, AMD once again compared the Radeon RX 5700XT to Nvidia’s GeForce RTX 2070, as it did at Computex, and this time claimed victory in a brief World War Z benchmark. We’ll quickly note that World War Z is a Vulkan title that performs very well on AMD hardware, so take these results with a grain of salt (as you should any vendor-provided benchmark).

The company then compared the Radeon RX 5700 against the GeForce RTX 2060 in an odd impromptu “benchmark” in Apex Legends, where one character spammed incendiary grenades at another. The RX 5700 held a more stable frame rate than the GeForce card, but we can’t say how representative this test is.

AMD went on to talk about some of the new software coming to Radeon cards, including FidelityFX, Radeon Image Sharpening, and Radeon Anti-Lag. The company demoed each feature very briefly. FidelityFX appears to be a red-team take on Nvidia’s GameWorks library, offering AMD-authored visual effects for developers to use in their games, although AMD’s version is open-source. It’s not at all clear what Radeon Image Sharpening actually does; we’ll have to try to get more details from AMD.

Meanwhile, AMD claims Radeon Anti-Lag actually reduces input lag, or “motion to photon latency.” The “demo” of this feature was little more than an on-screen number decreasing, and honestly was a little underwhelming. However, if it works as described, it could be pretty great for reaction-heavy games.

AMD didn’t offer many new details about the RDNA architecture on the stream, and unfortunately, we’re not there at E3 to talk to the company about the new chips. However, the boys from Anandtech are on the scene, and Ryan Smith over there already has a pretty solid preliminary write-up posted. Check out his article for some info about RDNA.

On the CPU side of things, AMD covered the new Ryzen CPUs that it announced at Computex pretty thoroughly at the beginning of its E3 show, and we—like most viewers, we imagine—tuned out afterward, feeling a bit let down by the lack of a 16-core CPU announcement. As it turns out, Lisa Su saved the best for last and introduced the Ryzen 9 3950X to close out the show.

Yes, indeed: it has 16 cores and 32 threads, runs at a 3.5 GHz base clock, and boosts to 4.7 GHz. It has 64 MB of L3 cache, and it still fits in Socket AM4 at a 105 W TDP. It’s impressive stuff, and while the $749 price tag seems high, consider that AMD probably can’t build that many of these chips. We reckon those big 64-core EPYC CPUs get dibs on the best fully enabled Zen 2 chiplets.

AMD also announced release dates for all the new stuff. Release “date,” anyway—aside from the Ryzen 9 3950X (which is coming some time in September), everything else is launching world-wide on July 7.

Questions & Answers (236)

  1. So how many games can use all 16 cores? Is there a list somewhere with all the games and how many threads they can use?

      • I’d love simulation-style games with AI or other world updates that could consume that much CPU, though. A world of stuff happening, ecosystems to play with, etc. So let’s see high core counts and hope for the best.

        • As someone who’s had his FX-8350 for a long time I couldn’t agree with you more. But by the time we are in those days I probably would’ve long upgraded to a Ryzen.

        • Supreme Commander came about as close as I’ve ever seen to totally killing a CPU. The Forged Alliance expansion came out in 2007 but I couldn’t reliably play a 4-player game (3 CPU opponents) without the game world slowing down below real-time speed until I had a Phenom II X4.

        • I rigged up some CPU metrics gathering on some gaming boxes here and played SupCom 2v2 with AI opponents, lots of death, but I wasn’t really very impressed by the extent to which it used a quad core. This includes a Q9550 that is certainly robust enough for the game, yet still similar to a top-end CPU when the game was launched.

          SupCom does appear to value a decent-sized L2/L3 though. [i<](Hmm, 32MB L3.)[/i<] Ran like crap on those A64 X2's with 512k L2's I recall.

        • Yeah that 32MB L3 (for the 8 core) is interesting. I like how their solution to cross CCX communication limiting max framerates in games was “well, we’ll double the cache”, lol

        • Not just double the cache, unify it and [i<]then[/i<] double it, last I heard. Just the kind of solution I always wanted to see someone try.

        • Perhaps, but A64 performance also depended on memory speed and timings, and a 64-bit OS helped (Not Vista). I used one of the 939 Opterons overclocked with DDR 500 @ 1:1 FSB, ran faster than anything else out there. Even ran id’s Rage when it was being the secondary PC.

          Considering how far people could push the A64 and Phenom II with overclocking and high speed memory, Ryzen feels like a let down in terms of being able to push the envelope. It really doesn’t like overclocking, or high speed ram, but still does well where you can push it.

        • Well, it’s kind of old news, but the reason there isn’t much margin to exploit these days is that the manufacturers have become very adept at exploiting it themselves; arguably margin-harvesting has been a big part of the increase in performance for many years.

        • It was cache starved, so of course memory speed and timings had a big impact.

        • Me to Supreme Commander: “Would it help if I got out and pushed?”

          There were three things that helped: MHz, IPC and Sorian AI. The last one helped by reducing CPU unit spam – harder by being smarter, not bigger.

        • Dwarf Fortress and Factorio, the two games of highest simulation detail I’m aware of, are RAM-latency bottlenecked.

          Updating MB, or GB… of data every second (a dwarf’s left arm gets injured: that’s the level of detail of Dwarf Fortress) a little bit at a time is bound by RAM latency. There’s too much information to keep in cache, and it’s single-thread bound because simulations are very difficult to write in a parallel manner.

        • That reminded me of ‘Mafia Wars’. I would get 10 or so browser windows open and then cycle through them as fast as I could mouse click the attack button in each window. It worked great!

    • Multicore game code was dragging its feet until the 8th-gen consoles, with their seven available (and weak) Jaguar cores: you HAD to use them all because they all sucked individually. Hopefully this gen sees another big increase in multithreaded code, since it’ll have 16 threads.

      • It’s always fun to see what people can do when pushed. All the excuses for why something can’t be done evaporate under pressure.

        • It would have been kind of interesting to see the what-if-machine world where there was no 360 and developers just had to bite the bullet and learn the Cell way of doing things. The small handful of games that did use it well were impressive for the gen, but so few titles really distanced themselves from what also ran on Xenon.

  2. For those interested in Zen 2’s microarchitectural improvements, head over to Anandtech. It’s an interesting read, and I reckon the changes from Zen 1 to Zen 2 are similar to the changes from Sandy/Ivy Bridge to Haswell. Bigger micro-op cache, smaller but faster L1I cache, better branch prediction, one more AGU (making it 3 AGUs) so it’s a slightly wider core, support for AVX2, more instructions in flight, better hardware security, new cache-specific instructions… it’s a thoroughly revised core. I think Zen 2 is gonna rock Intel’s boat. They also talked about Zen 3, Zen 4, and even Zen 5, and about AMD’s intentions with Zen 2 and the challenges they faced.

    Now go. Read all about it.

    • Definitely sounds like what Intel did with Haswell: changing things around on the back end to increase resources.

      As for rocking Intel’s boat: for Coffee Lake, sure, but for Ice Lake with Sunny Cove I highly doubt it, since those “Haswell”-type revisions are being performed again (and possibly to an even larger extent).

      • Early indications are that on Intel’s side of things, the lower clock speeds currently attainable on 10nm mostly offset the performance increase from the core enhancements.

        I don’t think they’ll do terribly, but I’m also not expecting Intel to suddenly leap back into a definitive lead.

  3. Some interesting tidbits regarding the 3950X:

    [url<]https://www.tomshardware.com/news/amd-ryzen-3950x-vs-intel-i9-9980xe-geekbench,39640.html[/url<]

    If it can really beat a $2,000 Intel chip then $750 is a steal.

    • Sure, if you keep piling one assumption on top of another, then you can land anywhere you want with that.

        • 1. This is an engineering sample.
          2. Because it’s an engineering sample it might be some undetermined amount slower than a final retail product.
          3. In this hypothetical test of a final product the chip will stay pinned at the max boost clock for the duration of the test.
          4. In the test shown, the chip did [i<]not[/i<] stay at its max clock speed, but stayed pinned at its base clock speed for the duration of the test. Otherwise the delta between base and boost clock performance won’t be as big as they said it would be.
          5. Performance will scale linearly with clock speed.
          6. There isn’t some oddball configuration to the system which invalidates the test.
          7. Geekbench is accurate in predicting your primary workload.

          If any of that doesn’t hold, then everything falls apart. It’s a house of cards that can’t be reproduced and can’t be verified, but rah rah go team anyway?

        • Look dude, I was just sharing that. Don’t blast me like I made that up. I provided a link. At this point it’s obvious that we don’t have solid information yet but these things serve as a rough guide and should be taken with a grain of salt. Don’t get all worked up, yeah?

        • There’s no emotion here, and no suspense either. We’ve all seen rumors like this circulate around unreleased products enough to know how the story ends, but the fact that it’s unreleased does not mean you can elevate a bad source into being a good one just because you’re hopeful.

  4. Have not posted for several years, and have been out of the loop of PC hardware for same. My last build was 2016. Am glad to see AMD continuing to be back in the cpu game.

    So I’ll toss out this question: at this point in time, what is a 16 core/32 thread cpu good for? For what tasks would it really kick butt for the home computer enthusiast? Thanks.

    marty

    • If you’ve got an HTPC it’d be good for transcoding video. There’s plenty of other use cases that benefit, but a lot of them are harder to come by in a home PC. A VM homelab is one that I might be needing here soon, but that’s for work, and I just happen to work at home.

    • Gaming and Streaming in high quality from one PC. The extra cores can handle encoding the video stream (compressing the data) with minimal impact on gaming performance.

    • It depends on what the home computer enthusiast is enthusiastic about. But I think this is one of those things where if you have to ask, it isn’t for you. That is, if you’re enthusiastic about something that would benefit from 16 cores, then the benefits of more cores would already be apparent to you.

      Also, this is a case of 16 cores with just two memory channels. That should give anyone pause.

      • I recall [i<]waaaaay[/i<] back in the day that AMD had both single- and dual-channel sockets available for single-core chips, and I guess the dual channel option performed better. Now we get to see how it goes with 8 cores per channel.

        • Yeah.

          Also, the memory controller is now a little bit less integrated than before.

          Can that big L3 cache overcome both a somewhat non-integrated memory controller AND just 1/8 of a memory channel per core?

          Color me skeptical.

        • A related thing about bandwidth that has impressed me is the experience of running a database on what is essentially storage over Ethernet, as is the modern thing in, for example, Amazon’s RDS. It doesn’t work nearly as badly as you’d expect, depending on what you are doing. A database… on sometimes less than a gigabit of bandwidth. It’s nuts.

          I think those dual memory channels should do OK in a number of non-saturation situations.

        • I’m remembering when the PPC G3 crushed the Pentia in running SETI@home, due to the big L2 cache on the G3.

          It’s always great when things fit in a cache.

        • Yeah, them and all the big Unix/RISC clients were kicking butt then, also the big-cache Xeons. Then they refactored the code to work in smaller caches! Noooooo.

  5. Hopefully the Tech Report podcast is revived for a special 3 hour analysis of Zen 2 with David Kanter.

    • I’m not sure who’s really running TR these days though. There’s only Adam Eiberger and Bruno. Seth Colaner is officially the head honcho but I don’t think he really wants the job. So it’s like TR doesn’t really have a crew running the ship these days.

  6. So if the 5700xt is on par with the 2070, then it’s on par with the VII. Will the price of the VII come down??

    • AMD says the 5700 XT is on par with the 2070. Independent reviews may tell a different story. After all, AMD would have you believe that the Radeon VII is on par with the 2080, though this performance aggregate puts it somewhere between the 2070 and 2080: [url<]https://www.techpowerup.com/reviews/AMD/Radeon_VII/28.html[/url<]

      • This, and the VII is probably priced as low as AMD can comfortably afford to sell it. Monolithic 7nm dies and 16GB of HBM2 at 1TB/sec don’t come cheap.

  7. It feels like AMD is back to its old game of “we have one chip to cover as much of the market as possible, so let’s clock it well beyond its peak efficiency because money.”

    So the reviews will be interesting, but I feel my ideal card would be a downclocked/volted 5700XT that is the full chip but with much better efficiency. And probably competing with the RTX 2060, rather than the 2070. For 2060 money, because you may as well ask for everything.

    • On one hand, you would hope that AMD would have learned their lesson in this regard. On the other, I had a lot of fun probing the limits of undervolting my RX480 and Vega 64 at the same clocks. I don’t intend to jump on the RX 5700 train, but it’ll be interesting to read how things go in that arena this time around.

  8. The 3950X should sell like hot cakes, if they can make enough of them. I’d rather get the 3800X, which is already crazy expensive, but worth the upgrade from the 3700X.

    • Maybe you should define “sell like hot cakes.” You know how many people have and are willing to spend 750 bucks on a single CPU (that isn’t Xeon/Epyc)? Prooobbbbbbbbbbbably not as many as you think.

      • It will sell like the old K8-based FXs did back in the day. I suspect the hot seller will end up being the 3600X.

        • Yes, but not with 16C/32T and 64MB L3. Note I don’t necessarily agree with “hotcakes”

        • Not sure how gaming-focused a PC with 16 cores is anyway. Doesn’t matter what AMD’s branding is, there’s very little difference in game performance between a 2600X and a 2700X as it is.

        • I tend to game while encoding videos quite a bit, and the slowdown is pretty noticeable with a 6700K.

          I’m thinking a 3950X with half or 2/3 of the cores dedicated to encoding would let me still game while the encodes are running without too much interruption.

          Yes, dumb use case, but it is something that’s painful now.

        • With streaming becoming so common, yours is not that much of an edge case.

          Plus, who wants to have to pause or halt their encodes every time the urge strikes to game?

        • Sure, it’s what you need for something other than just gaming. That kinda proves the point. 😉

      • Especially considering that for gaming extra threads only help you so far. Take a look at the gaming performance of the 2700X vs. the various Threadrippers for example — the 2700X is usually marginally better because of faster per-core clocks.

        • Well now stop right there, you know going NUMA messes with stuff. This should be a clearly better way to package 16 cores for gaming. It remains to be seen if it [i<]helps[/i<] over 8 cores of course. That huge L3, maybe it does good stuff. 😀

        • I thought only the 2990 was using NUMA and the rest were not? (Could be wrong though….)

          IIRC the 2700X and the 2950 were more or less even with the 2700X being a smidge in front on gaming performance.

          Extra cache could certainly help, of course.

        • They can pretend that it’s not NUMA, but the RAM is connected to different CPU dies, so some accesses are slower than others and the average latency goes up. There aren’t going to be all that many things the v1/v2 Threadrippers will beat a dual-die Ryzen v3 at. (I wonder how AMD will position Threadripper now, actually.)

        • I’m guessing ThreadRipper will move upmarket — basically Epyc pitched at workstation users rather than servers. The 12 core models become redundant, 16 core is an iffy proposition compared to the 3950X (16 cores with quad channel memory and more PCI lanes could appeal to some users, but I expect most of those users would go for even more cores than that if they are available).

        • I think a 16-core EPYC2 with a TDP of 150W may be very interesting in the server arena, or even a 24/32-core at 180-240W. (I believe the rumor was that AMD was upping the max TDP on EPYC2 to 240W.) Imagine 4 x 6-cores, or basically two 3900Xs, in an EPYC CPU. They should be able to have some super clock speeds on any 24-32 core EPYC2s, even with speeds cut down from the desktop range.

          EPYC 7451 24-core: 3.2GHz max / 2.9GHz all-core boost / 2.3GHz base, approx. $2,500
          EPYC2 24-core: 4.5GHz max / 3.5-3.8GHz all-core boost / 3.0GHz base (conservatively based off the 105W TDP 3900X with its 3.8GHz base / 4.6GHz boost) and double the AVX/AVX2 performance.

        • Zen2 has uniform memory access. All chiplets go through the same IO die, which is connected to all memory. The chiplets only see one, uniform way to access the memory. The IO die talks the same way to all chiplets. That’s the beauty of the design.

      • I’m betting that 105 watt value isn’t at the boost clock of 4.7 – so there’s your “Hot”. (..hmm, maybe the “Cake” is if you use a cheap thermal paste. ..but remember: “the Cake is a lie!”)

    • [quote<]if they can make enough of them[/quote<]

      Remember that the “they” is TSMC, not GloFo. TSMC has already made hundreds of millions of 7 nanometer mobile SoCs. Something like an A12X is bigger and more complicated than these little 8 core chiplets. So, I do not understand how AMD could be capacity constrained here. I think they will be demand constrained.

        • That question is only relevant if TSMC were supply constrained. Apple’s iPhone sales have been lower than expected, which I’m guessing means TSMC isn’t capacity constrained right now. If anybody knows otherwise, though, I’d like to hear it.

      • I think they first want to sell EPYC chiplets, not desktop variants. So they would have to saturate server demand first. Also remember that these are really cherry picked to reach the highest single-thread clock. So probably not many.

        I guess we’ll see. Would be a nice upgrade in 1-1.5 years at $500…

    • …. And here I thought 8 cores was too many (or certainly sufficient) for just about anything.

      • What’s not to like about more cache and higher boost clock? Plus you know there is power when you need it, ie for highly parallel applications.

        But I agree that for games 8 is really future proof right now. Six is probably the sweet spot.

    • Same. I think the 3800x is the best buy. I wouldn’t consider the 3900x for the two 6-core dies. The 3950x should do very well in terms of sales.

    • Hope not, maybe like Cinnamon Buns…

      I would rather see AMD’s EPYC2 16-core CPUs selling like hot cakes… that would generate much more revenue (needed for Zen 3, 4 & 5).

      Especially, since the current 16-core EPYC champ (7371) goes for $1550 and EPYC2 16-core should ShredRip its butt.

      I’m thinking $1K per 8-core Zen2 chiplet is a reasonable price in the Server market.

    • Too bad AMD’s small 8-core dies reportedly have a yield rate of only around 70%, a lot lower than Intel’s 90-95% on its current 14nm.
      There will be a shortage of some SKUs for all of Ryzen 2’s life; hopefully 7nm+ EUV will help boost production a bit.
      Intel is in a comfortable situation: AMD cannot meet the demand. That’s the reason Intel launched 10nm only in high-end mobile.

      • I wouldn’t bet too much money that Intel’s feeling very comfortable right now. ASP’s are absolutely going to drop across the board, which means that margins are going to drop – and Intel remains capacity-constrained into 2020. Not much comfort to be found there.

  9. 16 cores but just two memory channels. Hmm.

    Comparing this to the Threadripper 1950X will be very interesting!

    • The latency is much better, the cache is much larger, and achievable memory frequencies are probably also higher (remember, with Zen 1, going past 2666 MHz wasn’t always a given).

      I would expect the new chip to win, most of the time.

  10. If anyone on TR staff is listening:
    Testing that RDNA performance-per-clock improvement could garner a significant amount of site traffic. Forget the “launch day” benchmark review (which everyone and their mother will do quicker than TR anyway) and get this out ASAP.

    Seems the RX 5700, RX 590, and RX 580 have the exact same amount of resources (2304 SPs, 144 TUs, 256-bit memory, etc.); just downclock the core and VRAM on the RX 5700 and let the cards duke it out. Let’s really see how much architectural improvement AMD has made since 2016.
    I’m sure there are a lot more parallels that could be drawn between those cards once you get digging.

      • I think this paragraph from Ryan Smith is telling:
        [quote<]Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.[/quote<]

        3 years.... I don't know much/anything about chip design, and I'm sure/hope there are many other minor tweaks, but that doesn't seem like a great deal of change.

        • [quote<]3 years.... I don't know much/anything about chip design, and I'm sure/hope there are many other minor tweaks, but that doesn't seem like a great deal of change.[/quote<]

          That is actually a huge change to the fundamental architecture of the GPU. It means code like this:

          int i=0;
          while(i<100) i++;

          The above while-loop would have taken 400 clock cycles on Vega, but it will only take 100 clock cycles on NAVI. NAVI is optimizing for smaller compute units and latency, and will likely be weaker from a pure bandwidth perspective. For anyone with more "complex" work to do (more if-statements, more thread divergence, etc.), NAVI will be better, especially because NAVI now has two sALUs per compute unit.

          Crypto-coin miners, however (who have [b<]very[/b<] simple code), will be pissed at these changes. But the overall architecture is clearly better for complex workflows.
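
(Purely to make the cycle arithmetic in the comment above concrete, here is a minimal, assumption-laden sketch: it treats the loop as one instruction per iteration and one SIMD pass per issue, using the GCN and RDNA widths quoted above. It is illustrative arithmetic, not a model of the real hardware scheduler.)

# Back-of-the-envelope model of the wavefront issue math described above.
# Illustrative only -- not how the hardware actually schedules work.

def loop_cycles(wavefront_width, simd_width, iterations):
    """Cycles to push `iterations` one-instruction loop steps through a wavefront."""
    passes_per_issue = wavefront_width // simd_width
    return passes_per_issue * iterations

print(loop_cycles(64, 16, 100))  # GCN/Vega: 4 passes per issue x 100 iterations = 400
print(loop_cycles(32, 32, 100))  # RDNA/Navi: 1 pass per issue x 100 iterations = 100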

        • I get that it’s a 4x turnaround speed, but with 1/2 the info per wave, right? (2x ROPs) It’ll be interesting to see what that equates to in real-world performance uplift (which was the reasoning behind my normalized testing suggestion).

        • [quote<]I get that it's a 4x turnaround speed, but with 1/2 the info per wave right?[/quote<]

          Yeah. But that's still an improvement of 2x bandwidth, even when you hold all else constant. But the number of shaders remained the same. The "bandwidth" of NAVI is equivalent to the RX 580 (ignoring clocks).

          [quote<]Be interesting to see what that equates to in real world performance uplift (which was the reasoning behind my normalized testing suggestion)[/quote<]

          The [b<]real[/b<] efficiency gains come from the smaller wavefront size, due to Amdahl's law. 32-per-wavefront will innately be more efficient than 64-per-wavefront. Doing things with 64-way innate parallelism doesn't mean you're getting 2x the work done. As soon as you come across a complicated set of "if" statements or "loop" constructs, that 64-way parallelism serializes into a one-at-a-time kind of computation. That's the "thread divergence" problem.

          32-per-wavefront, in practice, will only be slowed down to 1/32nd the speed when it comes across a complicated set of if-statements or loops, while 64-per-wavefront will be slowed down to 1/64th speed. Let's say thread #42 in your wavefront needs to loop 200 times to complete its task. On a 64-wide wavefront, that will cause 63 other threads to loop unnecessarily 200 times in this worst-case scenario. But with 32-wide wavefronts, thread #42 (or perhaps thread #10 in a 2nd workgroup) will only waste the time of its 31 neighbors. So you only get 200 cycles of waste x 31 idle cores: half the waste in this case. A CPU core with 1 thread per "workgroup" [b<]never[/b<] has this kind of waste (which is why CPUs are still faster in some cases, especially when thread divergence is high).

          --------

          Long story short: NVidia's use of 32-thread workgroups is a key element of NVidia's efficiency. NVidia works more efficiently because Amdahl's law means a "less parallel" system will always waste less work than a "more parallel" system like AMD's. I guess 32-thread workgroups are the new norm... the "right size" to get the job done with today's workloads.
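
(Same caveats as the sketch above: this just writes out the divergence arithmetic from the comment, using its 200-iteration worst case. It is not a model of any real scheduler.)

# Rough model of the thread-divergence waste described above: one thread in a
# wavefront loops `iterations` times while every other lane sits idle.

def divergence_waste(wavefront_width, iterations):
    """Idle lane-cycles burned by the lanes that are only waiting."""
    return (wavefront_width - 1) * iterations

print(divergence_waste(64, 200))  # 64-wide wavefront: 63 x 200 = 12,600 wasted lane-cycles
print(divergence_waste(32, 200))  # 32-wide wavefront: 31 x 200 =  6,200 wasted lane-cycles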

        • Very informative. Thank you again.
          Makes me interested to see what the “performance/watt” comparisons are to Nvidia this time around then.

        • NVidia still has years of research that AMD hasn’t had the benefits of. AMD’s “dark age” in the 2014 to 2016 era will mean that AMD will remain behind.

          In particular, [url=https://arxiv.org/pdf/1804.06826.pdf<]NVidia's Volta and Turing assembly code has been reverse-engineered[/url<], and there are some very advanced tricks going on in there. NVidia's compiler is likely helping the hardware scheduler figure out the optimal work to do with less power consumption.

          EDIT: NVidia encodes read barriers, write barriers, stall cycles, yield flags, and such. People who make fun of Itanium's absurdly complicated assembly should take note: NVidia really did write a magic compiler (well... PTX assembler) that can take into account all sorts of compile-time information! This is also why PTX exists: the assembler needs to calculate all of these stall cycles before writing the NVidia SASS assembly. AMD doesn't have that compiler or those data structures in its assembly language. To be fair, I'm not 100% sure the control information really saves power. But it "feels" like it should.

    • I’d love to see this.

      Apart from Tonga’s colour compression algorithm, almost all of the headline features of GCN architecture updates seemed to focus on mundane, non-performance stuff such as Freesync, HDMI and Displayport standards, DX12 feature sets, and FF decode hardware.

      The “improvements” in Radeons since January 2012 seem to have come largely from a clock speed and shader count increase, another reason why the venerable 7970 held up so well all these years. It has the same shader count as an RX 570 and in some titles where memory bandwidth/quantity isn’t an issue, it trails the RX570 by about the same percentage as the clockspeed differences.

      A 1.25x architectural improvement is a huge leap forward, period. For AMD, who seem to have been trapped behind GCN’s almost unwavering architecture, it’s even more drastic/impressive. This matters because, although desktop chips are what we PC gamers care about, it has huge ramifications for the low-power, cut-down solutions that will end up in laptops and even tablets.

    • Yup, I agree completely.

      And figure out a way to properly test this “lower latency” mode, please. It would be a shame to see an important feature (purportedly pushed forward by your previous boss!) go untested.

  11. Looks like AMD will not re-route the slope of the price-performance curve as we’d all hoped……bummer.

    • 7nm silicon is too expensive right now and for the medium-term future. It is not even clear if EUV will help enough, given the high cost of the equipment.
      One thing is sadly clear: 7nm cannot reach 14nm yield levels; there is a drop of around 20%. We’ll see if TSMC 5nm and Intel 7nm change the story in 2021.

      • Yield levels aren’t static over a particular vendor/process/layout combination’s lifetime. The vendor figures out improvements to a process, and works with the customer to figure out how to leverage those improvements for a given chip layout. Both parties are [i<]heavily[/i<] incentivized to improve yields - for obvious reasons - and they always do so. The exact [i<]amount[/i<] of those yield improvements is widely variable... but they always occur.

      • Philosophical reasons. I disagree with Nvidia’s closed-source approach to Linux and their practices with game-studio tools. I also don’t like that they pushed G-Sync as a closed standard, even though it’s technically better than Adaptive Sync/FreeSync. AMD also donated Mantle to Vulkan, etc.

        Yes, they are the better solution for gaming, which I don’t do as much. Actually, I’d just use an APU but it’s the same issue. Alas, a nice gerbil (jihadjoe) shared a very informative post with their transcoding tests. It turns out QSV is a lot better than I thought and I’ll just do an Intel 4C i3 for my next build and it will be just a Plex box, not a replacement.
        [url<]https://techreport.com/forums/viewtopic.php?f=2&t=121900&start=30#p1405577[/url<]

        • Hate to keep ragging on this, but, please explain why you think GSync is “technically” superior to Adaptive/FreeSync without mentioning monitor-related specs.

        • Wow, touch much?

          Let’s be clear: a monitor is a bunch of major technical parts, the controller and the panel (and lighting, stands, etc.). According to the article below, ULMB is done on the G-Sync module, and is thus a G-Sync technical advantage, if you feel like getting your panties in a twist over it. Also, the controller does LFC, which is an optional feature of FreeSync. It does this by repeating the frame in low-frame-rate scenarios in the controller.
          [url<]https://www.techspot.com/article/1454-gsync-vs-freesync/[/url<]

          It's not all roses, of course. It's a mess, and I've requested TR do a monitor article for a reason. Ultimately, I'd say a standard that is more tightly controlled and has clearer requirements is better, in the sense of the technical specifications. If I can take the same two monitors and use a crap FreeSync controller, a good FreeSync controller, and a G-Sync controller, then I'd say G-Sync wins, as it does just about everything any FreeSync can do without the guessing (minus the niche HDMI use case).

          While I'm in a grumpy mood, I'll mention that they both suck for not helping to standardize HDR on PCs.
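
(As an aside, a minimal sketch of the low-framerate-compensation behavior described above: when the frame rate falls below the panel's minimum variable-refresh rate, the controller repeats frames so the effective refresh rate lands back inside the VRR window. The 48-144 Hz window and the sample frame rates are placeholders, not any particular monitor's spec.)

# Rough sketch of the LFC idea: repeat each frame enough times to keep the
# effective refresh rate inside the panel's VRR window. Placeholder numbers.
import math

def lfc_refresh(fps, vrr_min=48, vrr_max=144):
    """Return (frame repeat count, effective refresh rate) for a given frame rate."""
    if fps >= vrr_min:
        return 1, fps                                 # already inside the VRR window
    repeats = math.ceil(vrr_min / fps)                # show each frame this many times...
    return repeats, min(fps * repeats, vrr_max)       # ...to land back in the window

print(lfc_refresh(60))  # (1, 60)  -- no compensation needed
print(lfc_refresh(25))  # (2, 50)  -- each frame scanned out twice
print(lfc_refresh(12))  # (4, 48)  -- each frame scanned out four times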

        • You can’t do ULMB and GSync at the same time though, right? (honest question)

          FreeSync supports LFC on monitors with the requisite VRR for such tech.

          [quote<]If I can take the same two monitors and use a crap FreeSync controller, a good FreeSync controller, and a G-Sync controller[/quote<]

          This sentence shows your (and others') misconception. You need to replace "controller" with "panel" for accuracy. FreeSync and GSync do the same thing; it's the monitor/panel that makes the difference. The nice thing (since January) is that now you can [url=https://www.tomshardware.com/news/nvidia-gsync-vs-amd-freesync-test-comparison,39042.html<]test BOTH GSync and FreeSync on the same GPU and largely the same panel.[/url<]

          The crux of FreeSync is that the industry still hasn't adopted any reasonable requirements for VRR spec advertisement. I still have to go to [url=https://www.amd.com/en/products/freesync-monitors<]AMD's curated list.[/url<] If you want to spend extra $$ so you don't have to guess what the VRR specs of a particular monitor are (and want to be locked to Nvidia GPUs for the remaining life of the monitor in order to use VRR), that's fine.

        • ULMB runs at a fixed refresh rate, I believe. But if you’re pushing more than the panel can do, then enable it and do the ye olde V-Sync.

          You’re right that FreeSync can match it in parity. But it’s a mess. Do you really want the industry advertising stuff? HDR is a wreck. Then there’s the USB fiasco.

          The spec is technically superior, is that better?

        • Backlight strobing was a thing before VRR. No reason it can’t be implemented in monitors that support FreeSync.

          HDR is a wreck regardless of GSync/FreeSync. (I personally blame Microsoft for that, but…) In order to even get HDR on GSync you have to get into monitors that have the newer GSync HDR chip which is crazy expensive ($500 just for the chip by some reports IIRC, on top of the higher spec panel required obviously). Meanwhile, a FreeSync 2 capable monitor can do that for no added cost.

          There are equivalent panel specs for both GSync and FreeSync available on the market (one example shown in the linked article from my last post). While GSync is relegated to only the “high end” of the spectrum, FreeSync allows for a wider range of spec offerings, so buy what meets your needs/wants/budget.

          I’m not trying to be an aggressor here. Just trying to cut through the Kool-Aid Nvidia has been poisoning the masses with for many years. One battle at a time.

        • As someone who counts himself as a pro-AMD person, I’m not drinking any juice from Nvidia. FreeSync is better in the sense it is more, well, “free.” But it’s possible to have a crap FreeSync monitor, plain and simple. G-Sync starts at “good” and goes up from there.

        • A knuckle sandwich is free. Would you prefer one of those over sandwiches that aren’t free? G-Sync starts out already superior to the FreeSync mess of monitors. Stockholm syndrome?

        • That’s only because Nvidia chooses not to include gsync in monitors that suck, no?

          Do we care? Maybe not, but if you’re looking for an empirically true point over why one is better than the other I don’t think this is it.

    • Do you run Plex on a machine that also doubles as your gaming rig? If not, just get a Quadro P2000 off of ebay.

      • I’d like to do both on one build, but price is an issue. And if I’m going cheap then Intel wins (and supports Linux out of the box, though Nvidia is reported to work as well).

        • AMD isn’t officially supported on Plex. It also runs like garbage when you do actually use it. My i3-6100 with QuickSync outperforms an R9 290 in Plex.

        • That’s what I read. I also see it’s only on Windows. 🙁 That is unfortunate. I wish AMD would focus on this more. A better decoder and transcoder would make them a clear favorite for me.

        • I’ve run Plex on Linux in the past and it’s been fine. I don’t think I’ve ever used an AMD chip with it though. I ran it on my old NUC which used a Core i5-4250U. I installed it again on my NAS from Synology a while ago, but never got around to setting it up.

        • Plex works well on Linux. AMD GPUs don’t work well with Plex on any OS; they have zero support for anything Plex-related except on Windows, where it sucks hard.
          Ryzen chips work fine on any OS, though.

        • Yep, Intel and nvidia GPUs all work under windows or linux for plex transcoding. There are linux driver patches you can get for GTX cards that will remove the 2 concurrent transcode limit and let them operate like a Quadro. I’m actually planning to test that out this weekend myself with a 1050. But the P2000 will handle quite a few more concurrent streams which will be nice for when a bunch of friends and family are watching stuff off the Plex server.

          Edit: I think the P2000 will handle more even after the driver patch for the 1050. Just no idea how many the 1050 can handle. If it gives me 4 or 5 concurrent transcodes, I may skip the P2000.

        • I didn’t know about that patch. However, I want HDR capabilities. Sure Plex doesn’t support it yet but it will. When they get there I don’t want another upgrade. Though, I’m beginning to feel like it’s pointless because all good HDR content is so DRM’ed that it’s almost not even worth it at this point.

        • I have zero hard info to base it on, but I think HDR hardware transcoding is going to require new GPUs. I remember it being a big deal when Intel’s QuickSync got 10bit video support, I think in Skylake. The fixed function units in the GPUs that decode/encode video won’t change with a driver update. As for Plex supporting HDR, the underlying open source tools have to support it first I’d think and I don’t know when that’s going to happen ¯\_(ツ)_/¯

        • Hmm, you could bulk transcode onto a 4TB hard drive instead.

          Even a small, crappy CPU like an Intel Atom would finish transcoding many TB of videos after a week, and it would be more than sufficient to serve off of a $100 hard drive. That’s the cheapest solution I can think of.

          EDIT: Is there any reason why you need real-time transcoding? If it’s a Plex box, it’s just going to be sitting there idle (and on) all the time anyway, so you might as well have it slowly chugging away at a transcoding job.

        • The real time transcoding is important for remote access. You never know how much bandwidth will be available or even what formats the remote client will actually natively support. Real time transcoding handles that wonderfully.

          Edit: basically a direct play or even direct stream is not always a guaranteed possibility.

        • For personal use, you are exactly right. You know exactly what your clients are and can encode up front to those formats. Even if it takes a day per file, who cares? The speed really doesn’t matter. And generally, hard drive space or bitrate doesn’t have a huge impact either.
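
(For what it's worth, a minimal sketch of the "pre-transcode the library up front" approach discussed above. The paths, codec choices, and bitrate are placeholders, and it assumes ffmpeg is installed; treat it as an illustration of the idea rather than a recommended Plex setup.)

# Hypothetical bulk pre-transcode sketch: walk a library and re-encode files
# ahead of time so a weak CPU can chug through them over days instead of
# transcoding in real time. Paths and settings are placeholders; assumes
# ffmpeg is on the PATH.
import subprocess
from pathlib import Path

SRC = Path("/mnt/media/originals")   # placeholder source library
DST = Path("/mnt/media/plex-ready")  # placeholder output drive

for src in SRC.rglob("*.mkv"):
    out = DST / src.relative_to(SRC).with_suffix(".mp4")
    out.parent.mkdir(parents=True, exist_ok=True)
    if out.exists():
        continue  # already converted on an earlier pass
    subprocess.run(
        ["ffmpeg", "-n", "-i", str(src),
         "-c:v", "libx264", "-preset", "slow", "-crf", "20",
         "-c:a", "aac", "-b:a", "160k",
         str(out)],
        check=True,
    )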

  12. People keep saying Cryptocurrency mining as the reason for high GPU pricing. I disagree.

    Nvidia has been increasing prices before the massive cryptocurrency mining boom of 2017. That’s called GTX 1080. Crypto mining just gives them an excuse, that’s all.

    Look at CPU prices. Look at smartphone prices. TAM (total addressable market) has been reached with most such products, while there’s a constant expectation to improve revenue every year. Stagnant volume means the only way to do that is to increase ASPs, whether directly (Turing) or indirectly by “encouraging” people to move up the stack.

    At one point, a $999 CPU was an Extreme Edition. Now the mainstream lineup is reaching that point, and HEDT platforms are double that. x80 Ti cards have taken up Titan pricing, while Titan cards are at $3K.

    • The crypto boom is the cause of the current GPU price paradigm. It was one “free” marketing experiment that gave Nvidia hard data on how much the market was willing to bear. They didn’t have to take the market-share risk that comes with a typical price hike.

      The 1080 didn’t start anything. Nvidia began the whole mid-tier-design-sold-at-a-high-end-price SKU with the launch of the GeForce GTX 680.

      Titans are just discounted Quadros and Teslas that ISVs didn’t want.

        • The 680 was a mid-tier chip at then-current flagship pricing, but if I remember correctly that price was like $500. It beat a 580 with much less power, and few realized what was actually happening. Then the bigger chips came out for $650, and the 1080 did the same thing again, beating the performance of the 980 Ti with lower power for the same price, solidifying the higher margins for mid-range chips. AMD wants a piece of that pie too, so we’re unlikely to ever see it go back to how it was.

          • I just want to add that the GTX 1080 was a relatively small die, significantly smaller than the 980 Ti’s.

          Yes, prices are higher in the 14nm generation, but Nvidia mostly pocketed the differences as demonstrated by the great increase in revenue.

  13. No mention of power consumption.

    Don’t get me wrong, if the stock price/performance is competitive they’ll likely sell just fine but it’s been a long time since any stock AMD card was power efficient or quiet, let alone both.

    Let’s not forget that a $349 RTX 2060 FE has a good quality cooler and can easily be overclocked to around the performance of a stock 2070, whilst still making less noise and using less power than anything AMD has launched in a long, long while.

    • They showed the 5700XT has an 8-pin + 6-pin power connector. They’ll be slightly below their Vega siblings on average, but I wouldn’t expect them to be under 200W.

      Cheers!

    • Anandtech says 180W for the 5700 and 225W for the XT.

      [url<]https://www.anandtech.com/show/14528/amd-announces-radeon-rx-5700-xt-rx-5700-series[/url<]

  14. Is it possible to be disappointed by something that you expected would disappoint you? After the Radeon VII released I figured AMD weren’t likely to start going for the throat on pricing again any time soon, but the confirmation of that still sucks. :/

  15. In related news,

    [url<]https://hothardware.com/news/intel-trash-talk-zen-2-amd-ryzen-3000[/url<]

    I didn't expect this from Intel. Either they got some bad apples in there or they're just bitter that AMD is getting all the attention these days. Dogs bark when they sense danger, I suppose.

    • Dogs do indeed bark when they are threatened.

      That is a great analogy for Intel, who only seem to make real progress when they’re backed into a corner and otherwise are content to hog the sofa.

      • While “beating us in synthetics means nothing, let’s see you beat us in gaming” is in fact a true statement, it does seem that Intel is backed into a corner here.

        I don’t see AMD changing the landscape of CPU architecture anytime soon. I think they need Intel as a target to meet. If anyone does change it, I think it’s going to have to be Intel, since they have a MUCH bigger R&D budget.

        • Gaming and AVX512 are the two last bastions of Intel in the performance race.

          Given the frequency bumps and IPC gains of Zen2, add all the new MDS mitigations to slow down Intel, and I’m not 100% sure that AMD are even going to lose the gaming tests now.

          Either way, I’m looking forward to the independent reviews.

    • While I was also merely whelmed by the tests AMD showed, isn’t it unlike Intel to directly respond to them at all? They’ve always been the Coke to AMD’s Pepsi in this relationship: the underdog mentions the main rival, but the big guy doesn’t mention the little one. Interesting that they’ve chosen to respond.

      • It is unusual, yes. But AMD’s been doing it to them for years, so maybe they just wanted to return the favor. Lots of that kind of thing going around these days.

        • By replying to AMD though, they are sending a subtle message that they’ve gone down to AMD’s level. Ignoring someone below you sends the message that you don’t even want to acknowledge that person or entity. That you’re too high up to even bother with it.

    • Well, let’s see them go head-to-head in TR’s Inside the Second benchmarks, and we’ll see who wins. Intel will have to drop their prices for us to be impressed, though. I’ll be biding my time to see how it plays out before buying anything, as usual.

  16. MyDNA is telling me to ignore this RDNA nonsense as there is barely any improvement. At least nGreedia wasted die space on ray tracing and NN stuff.

  17. Having a 16-core desktop chip is cool but I think the real news here is RDNA. Eager to see performance and power benchmarks.

    • ronch, you may be right. My internal dialogue is split on which is the bigger news, but RDNA will probably do more for the bottom line. Yes, everyone wants a Vega-killer, but many people here in TR’s comments post that a RX570 8GB is performing surprisingly well for their gaming needs, especially when they pop up in the deals posts. Starting even at that level and working up the Navi chain, cheaper to produce chips will net big gains, even if the high-end isn’t topping the benchmarks. According to Steam, there are LOTS of budget gamers out there.

      • Yeah, but we are at the tail end of the console cycle so of course a 570 does it. Next year the ps5 and Xbox two launch and they’ll bring a huge performance boost.

        Though Asia and other developing markets will continue to drive growth on the low end.

    • I agree that RDNA is the real story here, though not necessarily this implementation of it (RX 5700 series). Here’s hoping that the architecture change will bear fruit and we will see much more competitive products as a result.

      • Zen 2 is just Zen but newer, bigger, and more expensive. The 256-bit data paths are exciting, but Zen 2 is architecturally similar to Zen.

        RDNA is exciting to me if only because it’s a major change to AMD’s GPUs.

        • I agree 100%. I am just saying that I expect that, much like with Zen, further revisions of RDNA will be what really gets AMD back into the game (pun intended).

        • >RDNA is exciting to me if only because its a major change to AMD’s GPUs.

          Major change? As I understand it, it’s mostly tweaks and optimizations to GCN.

        • That was my understanding as well. But SIMD32 is a huge change, which will require code-changes to fully take advantage of.

          SIMD64 still exists for backwards compatibility with legacy GCN code. But moving to SIMD32 will be necessary to maximize performance moving forward. Fortunately, NVidia is already SIMD32, so most developers are probably SIMD32 based.

        • Is that kind of like how Sandy Bridge was mostly tweaks and optimizations to Core, and how Core was mostly tweaks and optimizations to the P6? In other words, we all might as well be running overclocked Celeron As, because really there haven’t been any major changes?

        • Yeah, generally speaking, I think looking at Navi and thinking it’s just a collection of small tweaks because there hasn’t been a huge increase in the number of stream processors is a mistake. We’ll see how the cards perform once they’re released. I doubt it’s going to be a flagship killer, but it should at least be an improvement over what we’d get with yet another refresh.

  18. $750!

    I mean, I get it – it’s actually a pretty substantial price cut from the launch price of the last 16-core chip (2950X launched at $899 with nothing like the IPC boost Zen2 will have).

    “New normal.” Still not nearly as bad as Intel’s predatory pricing, but not as aggressive as Zen and Zen+ were. Hopefully they don’t go any further.

    • AMD can only “take one for the team” for so long. Needs to turn a consistent profit. Such is the cost of doing business with TSMC.

      • Have you looked at their stock? They are doing pretty damn well at this point; people seem to be forgetting that.

        • AMD’s stock is a prediction of its future, not an indicator of the past. It would not be where it is if the future was full of rock-bottom pricing.

        • [quote<]They are doing pretty damn well at this point, people seem to be forgetting.[/quote<] And if they want to continue to do so the last thing they should do is price in-demand products lower than they need to.

    • Why do so many people think that AMD would release a superior product and still keep fire-sale prices?

      • It is a fire-sale price: 16 cores have fallen from $1,000 to $750 new in two years. Add in the platform savings and the total price has dropped $350, a 35% discount over two years.

        • Soooo you want them to drop it even more?

          Why would they? Or should they? If you need 16 cores but don’t need quad channel memory or oodles of PCI lanes, what out there is more compelling?

      • Cause AMD are the “good guys” and Intel are the “bad guys” somehow despite them both being faceless profit-driven corporations like any other. The only reason AMD hasn’t tried any of Intel’s tricks is that it hasn’t been in a position of power to be able to do so.

        • Yeah, it would be nice if we could get past this “rebel alliance vs. the evil empire” narrative in the tech space….

        • I think we can all agree that the only reason we need to root for AMD is that they are incentivizing purchases and causing reverberating effects in the cost and quality of the CPUs being sold, thanks to market competition. Because of AMD, Intel’s 10 years spent at quad core finally shattered and split into 50-100% more cores at each price point. This is great for everybody. Why wouldn’t we root for AMD to continue to deliver?

        • It’s possible to like the effects of competition playing out in the market place without getting emotionally attached to any of the specific participants.

          More to the point, it’s ridiculous to expect AMD to continue to sell at barely above cost when they’ve got a product that will clearly be in demand. The OP seems to basically be expecting that 16 cores / 32 threads / 4.7 boost clock should still be priced cheeep because “it’s AMD, they’re s’posed to care about gamerz, not profitz!?!”

        • The underdog is only good insofar as it has a positive effect on competition, driving down the high prices of the market leader(s).

          For our own benefit as consumers, competition is good, so the established leader ‘winning’ is bad for us and the underdog ‘winning’ is good for us.

          The minute AMD have enough power and influence to control the pricing such that other companies need to undercut them and outperform them to gain market position, AMD become the ‘bad guys’.

          All corporations are facelessly-selfish, profit-driven bad guys – but in the relative effect they have on the end user experience, the underdog is the ‘good guy’ for driving down prices and pushing technology forwards.

    • I once paid about $1000 for a single-core AMD, was it an A64 FX-57? Oh to be young again.

      Also I paid that price class for each of a pair of dual-core Opterons, put those things [i<]to use[/i<]. If they offer performance that Intel can not, then the profit is theirs for the taking.

      • Absolutely. Paid out the nose for an X2 back in the day, which dropped in price significantly after Intel introduced the Core 2.

        Funny how people think AMD is somehow immune from so-called “predatory pricing” when they have it in their own history. When the product and market conditions allow them to do so, they price their products much like Intel would. As a consumer, it sucks, but that’s the free market.

        Seriously considering the 3950X as an upgrade from Sky-X. That would be the first time since the X2 that I would be running AMD for my primary workstation.

        • Very true, X2s were not cheap. I paid $332.00 in 12/2005 for the slowest/cheapest model offered, the 2.0 GHz Athlon X2 3800+, sold it to a friend a few months later, and then picked up on sale the “$1,000 top model” X2 4800+ 2.4 GHz for $330.99 in 7/2006 (right after the C2D Conroe release).

          I really liked the socket 939 chips and ended up building a bunch.

        • I got a small pile of X2 chips; I wish I had pulled every CPU from every computer I threw away. We used to send out pallets of towers to recyclers every year, and we would just destroy the HDDs. Now I pull the CPU and RAM.
          [url<]https://imgur.com/a/bgYzcd2[/url<]

        • I also have an Athlon MP & XP (converted to MP via the lead-pencil trick). I have a computer graveyard in my basement dating mostly to the 2000+ era, but with probably one or two VESA Local Bus PCs.

        • Yeah people always slam Intel for introducing Extreme Edition pricing, but really that was done in response to the announcement of AMD’s FX line. There were even pieces about how AMD was exploring ‘dangerous’ pricing levels back in the day.

          [url<]https://www.theinquirer.net/inquirer/news/1037564/amds-high-athlon-64-pricing-is-a-dangerous-gambit[/url<]

        • It seems I must have confused people with my previous reply. Your point is null. Intel matched AMD’s price or was sometimes higher during the Athlon 64 era. Intel’s P4 XE chips not only got beat by the same-priced FX 64, but also by the Athlon 64 3500+ which, as I said, cost dramatically less than the P4 XE chips. That is why Intel gets crap about the pricing. They never offer the best bang for your buck; they charge that way whether they are in the lead or not. AMD prices where they fit. And with the new 3000-series CPUs they are pricing better than Intel, and people still think AMD has them too high. What the hell, y’all……

        • Yeah, but for a price near $1,000 AMD offered the fastest CPU. In the same time frame as the Athlon 64 FX-57, Intel had Pentium 4 XE CPUs that cost as much as the AMD FX and were beaten by the Athlon 64 3500+. And that cost half the price!

        • I didn’t introduce the term (in fact I re-used it in quotes), but if you look at the context in which it was first used, clearly the OP did not intend it in the traditional sense.

          But thank you for being pedantic, that definitely serves the conversation.

      • That’s part of why I like that TR refers to a corporation as “it” instead of “they”. Take the anthropomorphization out of it.

    • I remember paying that much for Opterons (single and dual core) back in the day, and I was a broke college student then.

      $750 is a steal, comparatively, and it’s always great to see some real competition in the CPU space. It’s been too long.

      My desktop is almost assuredly getting a 16-core Zen injection this fall!

      • No offense, Waco, but someone who pays $750+ for a CPU in their “broke” college days isn’t really broke, man.

        • what if he made the purchase on a credit card with a 35% interest rate? 😆

        • Oh, I was broke, mostly due to purchases like that. Ramen and skimping so that I could buy a new GPU or CPU or something. There were many times where if something happened out of the blue I’d have been negative in my bank account.

          Thankfully Paypal credit didn’t charge interest if you paid it off before 6 months, but I got close a few times…

          Not saying it was smart, but it’s what I did. 🙂

    • Well, the 8C is $400, so I am not really surprised, considering the bins are even better here. The chip is essentially a 2950X’s worth of silicon and comes in $150 cheaper, so it isn’t all that bad from this perspective either.

      But yeah, it is the most expensive “consumer platform” chip right now by a sizeable margin.

    • Yeah, it’s $750, but if it can beat $1,000 or $2,000 Intel chippery then DAMMIT it’s a good deal!!

      • You know this is the same benchmark which routinely puts iPads and Xeons on equal footing, right?

        • Um, that was only for illustration. The point is, if AMD can give you, as they usually do, better performance/$, then is it really a bad deal?

        • We don’t know. That’s my point. Everyone has different reasons for rationalizing the cost of an expensive product, and they don’t always have to make perfect sense, but I don’t think we have enough information to figure out exactly what the value proposition is yet.

        • Speculation is part of every product launch. Especially with AMD. And it’s fun.

        • I get it dude. Stories like this are the tabloids of the tech press. Most of the people that follow them do so for entertainment rather than information, but I don’t have much interest in speculation without some reasonable basis in fact. That’s in seriously short supply here. Without that we may as well be listening to the homeless dude on the corner with a blanket in one hand and a sock monkey in the other screaming at anyone who will listen.

  19. Is the AMD Radeon Graphics Group becoming the AMD CPU of 2009?

    If Nvidia’s latest GPUs, with RTX and DLSS, were crap value for money, then at this performance AMD’s are beyond redemption.

    Guess it will be Nvidia, or a bit of a wait for Intel Xe, for my RX 580 replacement.

  20. It’s too bad the 5700 XT brings back more memories of GeForce FX cards than it does Radeon 9000 cards. Nobody wants to remember GeForce FX cards, almost as much as no one wants to remember GeForce4 MX cards.

    • To be perfectly honest, it reminds me more of the X8xx family trying to trade blows with the newly launched GeForce 6xxx family.

  21. DAWBench has, for whatever reason, typically favored Intel. Even with fewer cores. Lots fewer cores:

    [url<]https://techreport.com/review/34253/intel-core-i9-9980xe-cpu-reviewed/8[/url<]

    • [quote<]for whatever reason[/quote<]

      TR’s DAWBench seems to make heavy use of AVX2, as you can see in: [url<]https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/12[/url<]

      There’s a rather big difference between the i7-3770K and the i7-4790K. The 500 MHz uplift of the 4790K only explains a small part of that difference; most of it is down to the addition of AVX2 in the 4790K.

      Because of this I’m expecting a respectable improvement in Ryzen 3000’s DAWBench scores over the previous generation, at least in the <=8-core range. For >8 cores I’ve no idea at what point the dual-channel memory limitation will become an issue, and how much the massive 72 MB of cache can offset it.
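
      Purely as an illustration of why that instruction-set difference matters for DAW-style workloads, here’s a minimal C sketch of the kind of inner loop a softsynth or effect might run. It is not TR’s actual benchmark code, the function name is made up, and strictly speaking the big win on Haswell is the 256-bit FMA units that arrived alongside AVX2, but the idea is the same: eight samples per iteration instead of one.

      [code<]
      #include <immintrin.h>  /* AVX/FMA intrinsics */
      #include <stddef.h>

      /* Hypothetical example: apply a gain and mix one voice into an output
         buffer, eight float samples at a time. Chips with native 256-bit FMA
         (i7-4790K, Ryzen 3000) chew through this far faster than the scalar
         tail loop alone. */
      static void mix_voice_avx2(float *out, const float *voice,
                                 float gain, size_t n)
      {
          __m256 vgain = _mm256_set1_ps(gain);
          size_t i = 0;

          /* out[i] += gain * voice[i], eight samples per iteration */
          for (; i + 8 <= n; i += 8) {
              __m256 o = _mm256_loadu_ps(out + i);
              __m256 v = _mm256_loadu_ps(voice + i);
              o = _mm256_fmadd_ps(vgain, v, o);
              _mm256_storeu_ps(out + i, o);
          }

          /* scalar tail for any leftover samples */
          for (; i < n; i++)
              out[i] += gain * voice[i];
      }
      [/code<]

      Build with something like gcc -O2 -mavx2 -mfma; on older chips the compiler (or a runtime dispatcher in a real plugin) has to fall back to plain AVX or SSE paths without FMA, which is the sort of gap I’m pointing at between those two CPUs.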

  22. I’m very disappointed in the pricing. I wonder if they’re being pressured by high supplier margins (especially on TSMC’s 7nm process). Or maybe AMD doesn’t care too much about the desktop market, since they’re doing so well in consoles. I can’t imagine they’re going to sell many of these at these price points, when Nvidia is likely to have more performant products at the same price points in the near future.

    If that pricing holds I guess I’m passing on this generation too. Maybe I’ll pick one up when the next generation drops.

    • Blame the crypto-currency boom for proving that the market is more than willing to bear the current price points. AMD RTG and Nvidia are not charities. Intel might end up being the one to instigate a price war with their discrete GPUs.

    • [quote<] I wonder if they're being pressured by high supplier margins (especially on TSMC's 7nm process)[/quote<]Cost/transistor has been going up for every process since 28nm. That is only going to get worse, not better, as processes continue to shrink and move towards EUV.

      • Yields are also getting worse, so near-perfect and perfect-grade silicon is going to be in much shorter supply.

  23. For that high price tag, they had better deliver on expectations. Nice presentation, but I’ll leave it to the reviews to see who really wins.

    • The crypto-currency GPU price boom has irrevocably changed the pricing paradigm. The days of SKUs ranging from $99-$399 are going to be a thing of the past. The new normal is $249-$999+, whether we like it or not.

      • And yet the median price of popular graphics cards remains the same. Two decades ago the mainstream cards that were actually popular cost $150-200; now they’re $275-350, which is roughly the same price point adjusted for two decades of inflation (rough math at the end of this comment).

        The market share of $700-and-up GPUs is vanishingly small if you look at the Steam hardware survey:

        People are still running cards that cost ~$250 three-plus years ago, or they’re mostly buying the $279 cards because that’s the price/performance sweet spot. Everything beyond that curve hits diminishing returns, and right now every tech media outlet on the web is highlighting RX 570 deals at $150 and below. The vast majority of the market isn’t at the high end; it’s at the low end.

        I know a lot of rational PC gamers who have just decided to spend the cost of a GPU on a console instead. $200 buys you a PS4/XBone and $350 will get you an X or a Pro edition of either, capable of playing all the AAA games with tuned settings at 4K resolutions from your sofa. Suddenly, tinkering with add-in cards and driver installations for a less-optimised, more expensive experience isn’t holding the appeal that it used to, and that’s assuming that the rest of your rig is ready for a high-end GPU to be slotted into it as well.

        Until mainstream AAA games come along that are visually compromised or simply don’t run acceptably on sub-$250 hardware, high-end cards are a tough sell, and will probably continue to be until the next generation of consoles launches, gains momentum, and overtakes the current generation at a stable price point. That’s late 2020 or 2021, more likely.
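
        Rough math on that inflation point, as promised, written out as a trivial C one-off. The ~2.2% average annual inflation figure is my own rough assumption for 1999-2019, not an official CPI number:

        [code<]
        #include <stdio.h>
        #include <math.h>

        /* Compound a circa-1999 card price forward 20 years at an assumed
           average inflation rate. */
        int main(void)
        {
            const double rate  = 0.022;             /* assumed ~2.2%/yr */
            const double years = 20.0;
            const double then[] = { 150.0, 200.0 }; /* 1999 mainstream prices */

            for (int i = 0; i < 2; i++) {
                double now = then[i] * pow(1.0 + rate, years);
                printf("$%.0f then is roughly $%.0f now\n", then[i], now);
            }
            return 0;  /* prints ~$232 and ~$309, the same ballpark as $275-350 */
        }
        [/code<]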

    • Indeed, there are RTX 2070s on sale for ~$450 right now. It doesn’t look like the RX 5700XT will beat the RTX 2070 in all benchmarks, so it seems… courageous of AMD to compete with Nvidia dollar-for-dollar.

      • It’s probably not going to sell at MSRP. I believe they just want to give the impression that they are not cheap and that this is a premium product.

  24. Navi is going to either force AMD to drop the price on Vega VII or stop pumping those cards out for consumers. Matching an RTX 2080 at RTX 2080 prices without RTX features was already a middling value proposition. If Su’s promises hold true, the RX 5700XT is going to come up right behind Vega VII in performance [i<]and[/i<] knock more than a third off the price. 1.25x IPC plus a faster clock rate is a nice bump in theory…

    …as long as driver performance is there at launch.

    • Vega 56/64 are being discontinued. They are too expensive to make to justify their current price points. These new cards are meant to be direct replacements that are cheaper to produce in bulk while carrying a decent profit margin.

      • I said Vega VII (where the GPU itself will still be produced for the MI60 regardless of whether they sell it to consumers), not Vega 56 or Vega 64.

        • Radeon VII will not see any further price drops until a direct replacement comes around. The units are barely making any decent profit at their current price points. These Navi units are going to be slower, have less VRAM and have vastly inferior general compute performance.

          Nvidia has no reason to drop prices on 2080 and 2080Ti either until next-generation of products come around.

        • 99% of users don’t need the general compute performance though, even if it is the way a lot of Radeon VII owners justify their purchase.

        • Radeon VII was never a good value for gaming usage patterns, even for the AMD RTG camp. The Vega 56/64 have always been better deals, and their Navi “replacements” will continue that tune. The Radeon VII was only worth it if you dabbled in general compute or had a usage pattern that took advantage of the 16GiB of VRAM.

          The 2080 is a superior choice at its price point for gaming usage patterns and only falls behind the Radeon VII when 8GiB of VRAM isn’t enough. However, under such conditions both cards start to struggle anyway, and you’ll probably want something more powerful.

    • If the rumours of them losing money on every VII are true, they really don’t want to sell that many of them….

      • They aren’t losing money. It is more like they aren’t making much money on them via margins and volume. Besides, Radeon VIIs are just a way of selling excess or “failed” Instinct SKUs.

        Nvidia has been doing the same thing with the Titan brand, where those are really just excess or “failed” Quadros/Teslas.

    • Vega VII is probably closer to a Titan with respect to the feature set, especially floating point rate and DL. Could be wrong, as we don’t have any hard numbers at this stage, but I would expect the VII to outperform the 5700 when it comes to pure calculation…

      • One of the things I hope that AMD has corrected with Navi is all the wasted compute power for non-graphics tasks. To put it another way, I hope that it’s more balanced. The huge majority of folks who bought Vega (outside of Eth miners) did so for games that care nothing for double-precision horsepower.

      • Seriously considering Ryzen 2 for my NAS build. Even without HEDT-level PCIe lane counts, having 4.0 means good enough bandwidth to the SATA controllers on the chipset (rough numbers at the end of this comment).

        I’m good on GPUs until Nvidia’s move to 7nm though. Nvidia is just so far ahead that their older stuff on an older process is as good if not better on both performance and performance/watt.
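
        On the NAS point, here’s the quick bandwidth math I mentioned, as a throwaway C calculation. The PCIe 4.0 x4 chipset uplink and ~550 MB/s per SATA SSD are assumptions on my part (spinning drives are far slower), not measured numbers:

        [code<]
        #include <stdio.h>

        /* Back-of-the-envelope: how many SATA devices can a PCIe 4.0 x4
           chipset uplink feed at full speed? */
        int main(void)
        {
            const double gts_per_lane = 16.0;          /* PCIe 4.0: 16 GT/s per lane */
            const double encoding     = 128.0 / 130.0; /* 128b/130b line code        */
            const double lanes        = 4.0;           /* assumed x4 uplink          */
            const double sata_mbs     = 550.0;         /* ~real-world SATA SSD, MB/s */

            double uplink_mbs = gts_per_lane * encoding * lanes * 1000.0 / 8.0;
            printf("Uplink ~%.0f MB/s, enough for ~%.1f SATA SSDs flat out\n",
                   uplink_mbs, uplink_mbs / sata_mbs);
            return 0;  /* ~7877 MB/s, roughly 14 SSDs; plenty for a NAS */
        }
        [/code<]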
