As far as we know, the next big release from AMD's CPU division should be the Ryzen-based APUs code-named "Raven Ridge." These chips are rumored to pair up to four Ryzen CPU cores with a graphics processor based on Vega's "NCU" design. A result popped up on Geekbench yesterday for just such a chip: a quad-core, eight-thread APU listed as the Ryzen 5 2500U. Grab the salt shaker.
The page identifies the purported chip as a Raven Ridge APU in several ways. Besides the codename itself, the Processor ID appears accurate, and the listed specifications match up with what we would expect from a hypothetical Ryzen APU. A few things stand out, though—the CPU identifies itself as being clocked at 2 GHz in its CPUID string. Geekbench reports that the chip has 4MB of L3 cache—if true, a welcome change from the previous-generation APUs, which had none. Perhaps most curiously, Geekbench lists the system as having approximately 8 GB of memory clocked at just 300 MHz—almost certainly a reporting failure on the app's part.
So how does the chip perform? Well, our purported processor running at 2 GHz gleefully stomps all over the existing A12-9800 APU. This Geekbench result from a week ago shows the 3.8 GHz Bristol Ridge APU scoring 2675 in the single-threaded tests and 6775 in multi-core tests. Meanwhile, the "Ryzen 5 2500U" posts up 3561 in single-core and 9421 in multi-core tests. That's around a 30% gain over a CPU supposedly clocked a whole 1.8 GHz higher. Impressive stuff, to be sure, but keep in mind that a Ryzen APU would likely boost well above the listed 2 GHz.
These actually aren't the first benchmarks that have leaked out for Raven Ridge. Late last month, results for "Ryzen 5 2500U" and "Ryzen 7 2700U" APUs appeared in the GFXBench database. The results there are equally inspiring. Comparing the supposed Ryzen 7 APU's results to the A12-9800E, the older chip once again looks quite dated. Of particular note here are the "offscreen" results that use a standardized resolution. In every game-like test, the Ryzen 7 2700U posts double or more the performance of the Bristol Ridge chip. That's despite the fact that the two processors have similar scores in the "ALU 2" GPU compute test.
Much of the information that we have about Raven Ridge came from slides talking about its embedded cousin, "Great Horned Owl." Based on those slides, we'd expect the top-end Raven Ridge APU to sport a graphics core with 704 stream processors. We do know that the graphics processor in the Ryzen APUs will at least be a very close relative of Vega, because the Linux drivers for the chip describe it as such. In the company's post-launch Reddit AMA (Ask Me Anything) about Ryzen, AMD employees promised the Raven Ridge chips would arrive at the end of this year, and said that mobile variants would arrive first. Stay tuned for more information about Raven Ridge in the coming weeks.
Those L1D/L1I sizes are clearly wrong.
Every x86 L1D cache in recent memory has been virtually indexed/physically tagged, which puts a soft cap of 32 kB on their size (4 kB per way, limited by the x86 page size, times eight ways, about the maximum reasonable for parallel tag matching), and Zeppelin is 32 kB L1D/64 kB L1I as well.
(AMD K10 was actually one of the exceptions with 64 kB and a more complex tag matching and invalidation scheme. However, the cache design was largely viewed as lackluster, and I don’t see them going back.)
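The arithmetic behind that cap, as a quick sketch (the 4 kB page and eight-way figures are the ones from the comment above):

[code]
/* Sketch of the VIPT size cap: with a virtually indexed, physically
 * tagged cache, the index bits must fit within the page offset, so
 * max size = page size x associativity. */
#include <stdio.h>

int main(void)
{
    const unsigned page_size = 4096; /* x86 base page size, bytes */
    const unsigned max_ways  = 8;    /* max reasonable parallel tag match */

    printf("VIPT L1 cap: %u kB\n", page_size * max_ways / 1024); /* 32 kB */

    /* K10's 64 kB L1D was 2-way: at 32 kB per way, each way spans
     * eight pages, hence the extra alias-handling logic mentioned above. */
    return 0;
}
[/code]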
Don’t know if it’s a simple typographical error or not, but it certainly doesn’t make things look more credible.
Allow me to formally request that TR do a review of Raven Ridge, comparing the IGP with the RX 550, GT 1030, GTX 750 Ti, GT 740, HD 7750, GT 730 (mostly GDDR5 models) and other popular entry-level gaming cards from years past. This is a huge market for PC hardware, and yet every tech site seems to be letting YouTube and Geekbench (and once in a great while Tom's Hardware) monopolize the reviews of this segment. These are the cards people buy to play games on the cheap, yet when they Google them to find reviews, there is hardly any accurate information available. TR, you could have the only review worth reading in these searches…
Although I completely agree, I suspect that people reading TR are more likely to buy high-end hardware and, conversely, people who buy low-end hardware may not even be reading TR. I also find myself looking for low-end hardware reviews whenever I want to build a cheap system for friends or family.
Anyway, it’s worth adding this content and seeing the amount of interest it generates.
— EDIT
Why the downvotes? As stated above, I would be happy to see the comparison with low-budget cards. I'm just wondering whether this reflects the general interests of the TR readership. How about a poll?
It’s been a disappointingly long time since Tech Report reviewed low-end hardware, but this is needed. Although as enthusiasts we may want to ooh and ahh over the top-end equipment, it’s the low-end stuff that sells in the millions, and about which our friends and family need to know.
When the desktop versions of Raven Ridge appear, we’ll want to compare them to similarly-priced Intel processors as well as to combinations of really cheap CPUs and cheap graphics cards.
People doing Google searches for these things may actually learn something useful if TR is in the results. That’s the point.
It's a sad day if being an enthusiast requires a big budget.
Not only that, but it's good to know how current games stack up on these IGPs.
Llano was supposed to be the CPU/graphics balance for the masses, but thanks to Bulldozer being a big pile of bull****, it flopped in the market. Intel’s IGPs have improved massively since then, but they still leave most casual gamers with an undesirable gaming experience.
If Raven Ridge can provide entry-level PC buyers with a 60 fps experience, rather than ~20 fps on minimum settings, it'll mean a lot for the PC games industry. Just look at the Steam survey results: integrated-graphics users make up about 1 in 6 users, and if you count awful graphics cards like the GT 610/620/630 or GT 710/720/730, it's 1 in 4.
Bear in mind that Steam has 125 million users, so for Steam alone, that's roughly 30 million people trying to game on an Intel IGP or equivalent.
I'm usually interested in A vs. B vs. C hardware performance comparisons, but one evaluation that might be interesting would be to take a few popular games, including just-released and somewhat older titles, and determine just how far one must turn down various graphics quality settings and/or resolution to achieve a tolerable gaming experience. Previous generations of AMD APUs (e.g., 2014's [url=http://products.amd.com/en-us/search/APU/AMD-A-Series-Processors/AMD-A10-Series-APU-for-Desktops/A10-7800-with-Radeon%E2%84%A2-R7-Series/9]A10-7800[/url]) were already capable of playing games like Guild Wars 2 at acceptable performance levels.
The A12-9800 is already capable of 40-50fps gaming in many modern games (Doom, BF1). I would expect RR to do much better. With modest settings, 60 fps will certainly be doable in a lot of stuff.
Intel chips are not even in the same league when it comes to graphics. Yes, they have massively improved, but anything beyond casual gaming is hit-and-miss. Broadwell is an exception, one that does not make sense price-wise.
The A12’s with R7 graphics are pretty decent, but they just didn’t get the design wins they deserved.
Sadly, most APUs that hit the market seem to be in cheap laptops and you’re looking at an A10 hobbled by slow generic RAM and sometimes only in a single-channel config.
If Raven Ridge can genuinely outperform the current consoles (on paper, it ought to), then we have a winner.
Llano was actually 32nm K10, pre-Bulldozer.
I still have an OC’d A8-3500M, and it’s faster than any of the piledriver-based successors.
I was under the impression that dozer versions clearly beat Llano, despite all the enthusiast frowning in the general direction of the construction cores.
On desktops, yeah. It clocked much higher.
But in power-limited laptops, it’s a whole different story.
I don’t know how typical that is around here, but I have a GTX 750.
I’d love to see this too! It’s one of the reasons I’ve been trying to maintain the “low profile GPU” thread in the forums – there’s just not a lot of reviews out there for the entry level.
I have some low-end GPUs that I'm sure I'll eventually have time to benchmark and add data to the thread, but it'd be awesome if the pros would take a crack at it.
Let's not forget the GTS 450.
+3. There's so much confusion and shenanigans at the low end… a TR-quality roundup (with IGPs for comparison) would not only be a huge service to the community; I'm sure we'd also pick up some new readers.
I agree, even if I’m unlikely to buy in that segment—for myself, anyway.
It is nice to know how the lower-tier cards compare, especially between generations. Even for enthusiasts. E.g., how worthwhile is it to reuse or hand down the old Radeon XXXX vs buying a new entry-level card?
Low-end GPUs and integrated graphics are most suitable for a gaming-lite HTPC, which, I imagine, is something a lot of enthusiasts have considered at least occasionally.
Most of us are probably called on for recommendations by friends and family. It would be good to have trustworthy information for all segments.
AuthenticAMD is still alive as a CPU string? Amazing! I thought those old CPU strings like GenuineIntel and CyrixInstead were long gone.
Changing the CPUID string would certainly break compatibility with older software for no obvious benefit. Imagine having your program crash just because its code could not identify your CPU.
AuthenticAMD is here to stay for a very long time.
Totally valid. It's just that I'd totally forgotten about those CPU strings. AuthenticAMD harkens back to the days when AMD was still trying to make a name for itself as an alternative supplier of x86 CPUs.
CPUID was introduced with the K5 (AMD's "Pentium"), if I'm not mistaken, together with the model-specific registers and other "meta" instructions like the timestamp counter.
Indeed, this is old stuff from the ’90s.
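For anyone who wants to poke at it, here's a minimal sketch of reading that vendor string with GCC/Clang's <cpuid.h> (leaf 0 returns it split across EBX, EDX, and ECX, in that order):

[code]
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0}; /* 12 chars plus terminating NUL */

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1; /* CPUID not supported */

    /* The vendor string is packed into EBX, EDX, ECX -- in that order. */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    printf("%s\n", vendor); /* "AuthenticAMD", "GenuineIntel", ... */
    return 0;
}
[/code]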
I dunno why, but I've always thought of APUs as a no-go for me unless they're real cheap or I'm building a small box where a graphics card won't fit in nicely. If I want a serious gaming box, I'd go with an i7 (yeah right, you always say it's Ryzen for you. – Ed), and if I'm building a cheap office box, then an i3 with good-enough graphics is the way to go.

Why would I hinder the thermal headroom of either the CPU cores or the graphics processor by cramming them both into a single package when there's tons of space in even a microATX enclosure to accommodate a proper RX 480 or GTX 1060 and let those GPU cores run loose, independent of the CPU's thermal headroom? And if I'm budget-constrained, I'd just save up a bit more to get a proper setup, not force myself into something I might regret later.

I dunno folks, even as an AMD fan I just don't really see the value of APUs except for small builds or a granny box, but then for her I'll just grab a $65 A8-7600 and call it a day. Who cares if FM2 is a dead end? It's a granny box and she only needs to check her email and Facebook anyway. It'll likely stop working way before she'll need an upgrade.
These first two models are notebook APUs. They could offer tolerable low-end 1080p gaming performance without the need for an expensive discrete GPU.
In a non-portable application, you should think a lot smaller than microATX. You could find one of these APUs in something like a Gigabyte Brix hanging off the back of a monitor or TV, turning it into a capable do-everything PC.
For serious gaming with a PC enclosure that can accept a graphics card, the niche that APUs occupy is small: the performance gap below the $130 Radeon RX 560 4GB and the $150 GeForce GTX 1050 Ti 4GB. Any add-in graphics cards less powerful than those are a poor value. These new Zen+Vega APUs could open up PC gaming to folks who are unwilling or unable to spend $150 on an add-in graphics card.
How would AMD compete with the i3 system you mentioned above? Think of it as a Ryzen 5 1400 that doesn't need a graphics card just to run a monitor.
That's admittedly an issue with Ryzen: no IGP. So with these APUs, AMD can have an alternative to the i3 for lower-end machines. With its presumably stronger CPU cores and weaker graphics, the i3 seems the more suitable choice for an office box, where the CPU cores have to offer as much future-proofing as possible and GPU performance isn't needed much. In fact, too much GPU could limit office-box appeal: you don't want employees having enough GPU grunt to play LoL nicely. I once caught an employee with a few games installed on her machine.
I guess this will be the only way to get Vega graphics safe from miner-induced inflation.
…until AMD releases an SDK update that enables killer blockchain-optimized HSA computing?
Grandma's PC is going to be pretty decent for light gaming to keep the grandkids happy when they visit, haha.
This is a decent little chip.
Come on, AMD. You did it with X399, the least you could do is name this one the R5-9500U.
Actually, I wonder if we see an Intel i5 9550 or 9590 next year…
Something goofy is going on here. A Ryzen 5 1400 running at 3.2 GHz scores around 3100. It is difficult to imagine that an APU with a similar four-core, eight-thread configuration would score 10% higher with 1200 MHz knocked off the clock speed.
I'm assuming the 2 GHz is some type of minimum base clock that probably only gets used when the IGP is railing at 100% utilization. In a microbenchmark like Geekbench, the actual clock speed of the CPU cores was probably much, much higher.
The chip is not likely running at its base clock for the duration of the test, which could certainly explain the majority of that difference. The rest is likely explained by the fact that Geekbench scores are roughly as indicative of a CPU's performance as the way it smells.
The memory report was weirdly off, remember. Wild speculation here, but what if this CPU comes mounted on an HBM2 block? We all know Ryzen loves RAM speed. So does Vega…
Well, I can dream.
I think anyone hoping for a Raven Ridge SKU with HBM is going to be sorely disappointed, because of both cost and availability concerns.
Also, the main thing about HBM isn't that it's "fast"; it's that it's "wide." That's great for applications that aren't particularly sensitive to latency (e.g., GPUs), but it won't help Ryzen (and could potentially hurt it). As I understand it, Ryzen benefits from faster memory because the speed of the Infinity Fabric is tied to the speed of the memory controller, so faster memory means lower latency between CCXs. Running at the sub-1GHz memory clocks of HBM2 would probably be fairly detrimental.
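To put rough numbers on that (a napkin sketch; assuming, per the above, the fabric runs at the memory controller clock, i.e. half the DDR transfer rate, and taking 1600 MT/s as a plausible HBM2 pin rate):

[code]
#include <stdio.h>

int main(void)
{
    const double ddr4_mt_s = 3200.0; /* DDR4-3200 transfer rate, MT/s */
    const double hbm2_mt_s = 1600.0; /* assumed HBM2 pin rate, MT/s */

    /* Memory (and thus, per the argument above, fabric) clock is
     * half the DDR transfer rate. */
    printf("DDR4-3200 fabric clock: %.0f MHz\n", ddr4_mt_s / 2.0); /* 1600 */
    printf("HBM2 fabric clock:      %.0f MHz\n", hbm2_mt_s / 2.0); /*  800 */
    return 0;
}
[/code]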
I’m really looking forward to these things.
My home laptop is still a 13″ 2009 MacBook Pro (Core 2 Duo, GeForce 9400M), and I've been putting off replacing it until something like Raven Ridge is available (a decent CPU for mobile development, and a GPU strong enough for casual gaming when traveling).
Key to single-thread performance here is the boost speed. Current estimates are between 3.1 and 3.4 GHz.
At 2.0 GHz, the multi-core numbers are poor compared to Intel's i7 and i5 8xx0U series:

i7-8550U: [url]https://browser.geekbench.com/v4/cpu/3986326[/url]
i5-8250U: [url]https://browser.geekbench.com/v4/cpu/3983155[/url]

And don't forget, "U" has different meanings for Intel and AMD. For Intel, "U" means "ultra-low power" ([url]https://www.intel.fr/content/www/fr/fr/processors/processor-numbers.html[/url]). For AMD, "U" means "standard mobile," and the low-power symbol is "M," for "low-power mobile" ([url]https://www.profesionalreview.com/wp-content/uploads/2017/03/amd-ryzen-nomenclatura.jpg[/url]).

Compared to the older 7xx0 mobile series, the Raven Ridge numbers are quite decent, but the new 8xx0 series is blowing things away.
Those iX-8XXXU numbers are odd.
Around the same as an i7-7820HQ, which has almost identical silicon but 3x the TDP.
[url]https://browser.geekbench.com/v4/cpu/search?utf8=%E2%9C%93&q=7820hq[/url]

Could they have been faked? Or is Geekbench just a terrible benchmark?
Geekbench is a terrible benchmark.
Cinebench shows it scoring in the low 5xx range, where the 7820 scores in the mid-to-upper 7xx range.
I don't put much trust in Geekbench. Also, the iGPU of the AMD part is certainly going to crush the Intel iGPU.
Anyway, we’ll have to wait for the review. It’s too early to say anything about RR.
Yup, that's something people seem to ignore or forget: AMD's strength with Raven Ridge isn't about being as fast as or faster than Intel. It's about releasing something that, CPU-wise, can be 80% as good (so most likely no one will really notice) while having a far better GPU that lets people play games at much better quality without a dedicated graphics card.
A 4C APU is interesting, but I would kill for an 8C, triple or quad channel APU. That would make for one heck of a portable workstation or HTPC.
Seeing how the 1700 already squeezes into 65W at 3 GHz, it seems totally plausible… Raven Ridge Ripper, maybe?
Ryzen’s efficiency sweet spot seems to be right around 3GHz so it should certainly be doable, but maybe not in some of the thinner designs.
If the chiplet thing works out, maybe in 2019 or 2020.
The real killer (as with any APU) will be memory bandwidth. They'd need to support significantly higher speeds than JEDEC certifies for desktop DDR4 if limited to four channels or fewer. However, if they're going to dedicate the die (and socket!) space, maybe they can add some eDRAM to the package, as Intel does with its Iris Pro line?
eDRAM cache would fit AMD’s unified memory pool effort.
A lot of people suggest a separate HBM2 stack, but at that point you might as well separate the whole thing into two chips (aka a dGPU).
I guess you could use HBM2 as main memory for the whole APU. It would save a ton of space on the mobo, but you’d also have to pay through the nose for 8GB+ of HBM and an interposer.
Or cut the dies into CPU, GPU, IO and HBM and throw them all into a common interposer.
Those used to be called "motherboards" in the old days, weren't they?
Yeah but I don’t think these will be user serviceable to swap components. 🙂
No, that ended in the '80s. They're called system planars.
HBM would be a very, very bad idea for CPU main memory. The latencies are tremendous, even worse than the GDDR5(X) technologies. So you'd still want a regular DDR4 pool attached, and with the complexity and cost involved, you might as well just go discrete, even for mobile.
Why would you need all that power for an HTPC?
SVP and Vapoursynth filters. Also Plex until they support hardware encoding.
OK, that’s a tiny niche… But a light mobile workstation is a much bigger (and higher margin) niche.
Ah yes, SLI in a socket. 🙂
Interesting that this is a quad-core CPU with only 4 MB of L3 cache.
It looks like the main difference between the 5-series APUs and the 7 series APUs will be that the 7 series turns on the full L3 cache.
“Cautiously optimistic”.
I’m still drooling a bit though.
Cheers!
APUs could be a golden opportunity for AMD over Intel. Assuming the power consumption is good, these could come to dominate the market with their solid CPUs and vastly stronger GPUs.
After so many years floating around with similar products, it would be impressive to see this one gain traction. They’ve always been interesting but never good enough. Has AMD really made so much progress with Zen, or is 14nm giving them a big lift?
Most people are interested in the iGP scores, no? It’s pretty easy to predict CPU performance based on existing Ryzen parts.
Yes and no. Operating frequency is part of the performance equation and in a laptop there are more variables than a desktop.
Wait, they are using the Ryzen 3/5/7 branding for APUs?
That sounds like AMD wants to charge me Ryzen 7 prices for an APU with four cores and an IGP.
If they can effectively cram RX460 levels of IGP performance into a Ryzen 5 CPU, would you really expect them to charge the same amount as a GPU-less Ryzen 5? Let's face it, the only reason I can think of to buy a Raven Ridge APU is if you don't need a huge amount of GPU horsepower.
I doubt they’ll get RX460 levels of actual gaming performance in there. The memory bandwidth is just too constrained.
True, that's a lofty goal… RX550, maybe?
I’ve heard up to 768 stream processors, minus some for bandwidth constraints and clockspeeds….
I’d take that. If they can get that in a 14″ notebook, I’d even sell my Inspiron 7000 2-in-1 to buy it.
There's no way they're going to come close to the performance of GDDR5-equipped dedicated graphics cards with DDR4 unless there is some kind of large cache available, especially when most DDR4 being put into pre-built systems isn't over 2400 MHz.
The RX550 has 112GB/s of bandwidth. DDR4-2400 can theoretically provide up to ~40GB/s in dual channel, which is shared with the CPU. So let's call it a third of the bandwidth of an RX550. That's certainly going to have an impact on gaming performance.
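For reference, the arithmetic behind those figures (theoretical bandwidth is just transfer rate times bus width; the RX 550's reference configuration is 7 Gbps GDDR5 on a 128-bit bus):

[code]
#include <stdio.h>

/* Theoretical bandwidth in GB/s: transfers/s times bus width in bytes. */
static double gb_per_s(double mt_per_s, unsigned bus_bits)
{
    return mt_per_s * 1e6 * (bus_bits / 8.0) / 1e9;
}

int main(void)
{
    /* Dual-channel DDR4-2400: two 64-bit channels. */
    printf("DDR4-2400 dual channel: %.1f GB/s\n", gb_per_s(2400.0, 128)); /* 38.4 */
    /* RX 550: 7 Gbps GDDR5 on a 128-bit bus. */
    printf("RX 550 GDDR5:           %.1f GB/s\n", gb_per_s(7000.0, 128)); /* 112.0 */
    return 0;
}
[/code]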
I'd be surprised if Raven Ridge didn't end up having the most powerful mainstream IGP to date (I'm not counting Iris Pro, because hardly anyone has those and they're extremely expensive), but without some method of getting more than the usual DDR4 bandwidth, there's no way we're going to see RX550-level performance from them outside of situations that don't need memory bandwidth.
I have to wonder what would happen if you paired this chip with Optane memory (not that you'd be able to on an X399 board).
They may not match the 550, but I’m not so sure there’s “no way they’re going to come close.”
The 550’s bandwidth is the same as the 560’s, though it has only half the compute power. So it’s probably very rarely bandwidth bottlenecked; something with the same compute power and moderately less bandwidth could perform identically in many situations.
40GB/s (or 50GB/s with exceptionally fast DDR4) is less by a more than moderate amount, so Raven Ridge [b]will[/b] likely be bandwidth-limited, but a couple of other factors may mitigate that a bit. If the GPU has access to the L3$, or if it has its own similar cache, that could help even though it's small; remember that Intel said when they first introduced Iris that 32MB would have given them almost all the benefit afforded by the full 128MB. And AMD's more recent architectures have a few tricks for better use of memory (delta color compression, better on-GPU caches, etc).
Remember, 40GB/s is shared with the 4C8T CPU, and if I remember correctly, Ryzen seems to like memory bandwidth, so they will both want a large chunk of that.
Here is a basic comparison of a DDR3 vs GDDR5 Radeon 7750:
[url]https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/[/url]

I can't imagine that the ~25GB/s the DDR3 HD 7750 gets is too far off from what the GPU on Raven Ridge will have available to it in gaming after sharing with the CPU. Look at the massive improvement GDDR5 provides, even with such an old GPU. It's 40-50%.

I don't doubt that Raven Ridge APUs will be good chips for light gaming, but I would be very surprised if 4MB of L3 cache (also shared with the CPU) were enough to push them into the territory of a modern $75+ dedicated card with GDDR5. More likely, they'll do well in situations that absolutely do not require much memory bandwidth, but performance is going to take a dive any time memory bandwidth is needed.

You can bet Intel would have dropped the large (expensive) cache for its Iris GPUs if it really wasn't needed to get "discrete level" gaming performance. Remember, when Intel introduced Iris and 128MB of eDRAM/L4 cache was more than enough, it was back in 2012, when most gamers were still rocking Fermi, Kepler, TeraScale, or GCN cards with 1GB to 2GB of memory, "4K" wasn't a buzzword yet, and the PS4 and Xbox One were still a year away.

Still, if the price is right, you can bet that any of these APUs will outperform similar Intel offerings in their price range (in IGP gaming). I just wouldn't expect them to provide multiple times the performance across the board, the way a dedicated card would.
[quote="ozzuneoj"]You can bet that [i]Raven Ridge[/i] APUs will outperform similar Intel offerings in their price range (in IGP gaming), I just wouldn't expect them to provide multiple times the performance across the board, the way a dedicated card would.[/quote]

That's the niche for APUs on the desktop: performance that is good enough for some gaming for less money than a [url=https://pcpartpicker.com/products/video-card/#xcx=0&r=32768,24576,16384,12288,11264,8192,6144,4096,3072&sort=price&c=395&page=1]$130[/url] Radeon RX 560 or [url=https://pcpartpicker.com/products/video-card/#xcx=0&c=380&r=32768,24576,16384,12288,11264,8192,6144,4096,3072&sort=price&page=1]$145[/url] GeForce GTX 1050Ti 4GB discrete graphics card.
I was posting the comparison between the DDR3 and GDDR5 7750s [url=https://techreport.com/news/25864/amd-sheds-more-light-on-kaveri-announces-new-mobile-radeons?post=791278]almost four years ago[/url] to explain, months ahead of launch, why the 512SP Kaveri APUs would probably be no better than the 384SP ones. So I'm well aware of the difference bandwidth makes.

1. I very much doubt the CPU will monopolize the bandwidth as much as you claim. "Ryzen seems to like memory bandwidth" is the wrong takeaway from the Ryzen DDR4 scaling tests; rather, higher DDR4 speeds affect the speed the Infinity Fabric interconnect runs at, and they also usually help with memory latency. The speed increases are nothing like proportional to the bandwidth increases, and most of the benchmarks you're thinking of are not all that bandwidth-heavy.

2. Again, as I already said, AMD has made notable strides in more efficient use of bandwidth since the 7xxx series, so the bandwidth a Vega-based chip needs for a given level of performance won't be as high as what the 7xxx needed for the same performance. The 7xxx was well behind nV in this area, and AMD has largely caught up.

3. [url=http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/3]Here's documentation of Intel saying they saw negligible benefits beyond 32MB of eDRAM[/url]. eDRAM is much cheaper than SRAM, and once Intel decided they were putting it on a different chip, the marginal cost of increasing the size was not that huge. So, seeing as it was a halo product and an experiment, they went ahead with a huge 128MB. In more recent versions, most Crystalwell SKUs have cut back to 64MB. This is not a case of Intel suddenly deciding that 128MB is now necessary or even helpful. I'm not saying 4MB of L3 will fully compensate for not having 32MB of L4, but it is still a significant source of bandwidth that other AMD APUs and low-end GPUs have not had access to. (The L3$ has lower latency than Crystalwell's eDRAM and 10x the bandwidth, a large benefit for whatever fits in cache when compared to Kaveri or the 77xx.)

4. I'm not so sure that the balance of compute required vs. memory required has shifted in favor of memory as you claim. Your view of industry trends isn't really that persuasive: most gamers still have 2GB or less of VRAM, "4K" is utterly irrelevant to APUs, as is 1440p, etc.

Another thing to remember as an aside: nV's 1030 has only 48GB/s of bandwidth, costs $70, and is intended to compete with the 550 (though it's apparently a little slower). Again, I'm not saying Raven Ridge will match the 550, just that "not anywhere close" is a poor guess.
Valid points. It'd be easier to estimate these things accurately if there were some decent reviews of IGPs and low-end discrete GPUs (like the 1030). The IGP reviews I'm finding generally invalidate their results by using super-low resolutions and graphics settings, and synthetic benchmarks seem to make integrated graphics and cards with lower memory bandwidth look far worse.
I guess we'll see. Honestly, I hope you're right. Affordable mid-range graphics performance would be nice, especially in laptops and big-box retail systems.
How much cache will the GPU contain internally, anyway? On the high-end parts I recall something measured in a low number of MBs. Maybe I'm wrong.
Both AMD and nV have 4MB L2 on their highest end parts. The RX 550 and the GT 1030 both have 512KB L2. No third level cache.
So these GPU caches… at the risk of being lazy by not looking it up myself, I wonder if they're intended for instructions rather than data. Or at least not texture-like data, which would quickly blow through a small cache.
The Ryzen memory controller is much, much better than the bulldozer memory controller. Also, the recent GCN iterations have much more effective compression.
I don't expect RX460 performance, but anything below that will certainly be threatened.
The lowest power parts are high margin. Ask Intel about that.
How does this compare to an intro CPU?
Intro CPU?
I think he means entry-level like a Bobcat or Atom, and there’s no comparison in terms of performance or power consumption (15W vs 4W is still more than a 3x difference, so in absolute terms it’s still a huge difference).
OK then. Thanks for clearing up what ‘Intro CPU’ means. You just brought me much joy.
I don’t know that’s what he meant, but it’s the best guess I could come up with.
It’s gonna be real interesting to see what these chips can do at 15W TDP.
Short answer: not very much.

Longer answer: 15W is the realm of "race to sleep" processors. You can hide its shortcomings on the CPU side, but for anything that requires a little GPU grunt, you're stuffed…
Based on EPYC (32 cores in under 200W), it should be able to run at 60-80% of full speed when constrained to 15W. I would guess at least 2.8 GHz all-core sustained, possibly even 3.0 GHz. I'm not factoring the iGPU into the equation.
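One crude way to sanity-check that range: if power scales roughly as f·V² and voltage tracks frequency, power goes roughly as f³. A sketch under those assumptions (the 65 W / 3.6 GHz full-power point is hypothetical, and real chips have leakage and fixed power floors, so treat this as a napkin estimate only):

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double p_full = 65.0; /* assumed full-power TDP, W */
    const double p_cap  = 15.0; /* mobile power cap, W */
    const double f_full = 3.6;  /* assumed full-power all-core clock, GHz */

    /* P ~ f^3  =>  f scales with the cube root of the power ratio. */
    double scale = cbrt(p_cap / p_full); /* ~0.61 */
    printf("~%.0f%% of full speed, ~%.1f GHz\n",
           scale * 100.0, f_full * scale); /* ~61%, ~2.2 GHz */
    return 0;
}
[/code]

That lands at the low end of the 60-80% range; better-than-cubic scaling at low clocks would push it toward the commenter's 2.8 GHz guess.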
But what happens when you allocate 10 watts to the GPU? That's what everyone wants and expects from an AMD APU: good graphics…
Well, don’t mix the desktop APUs with the notebook APUs.
The gaming APUs are going to be 65-95W and can easily fit 30+ W for the GPU cores. This is more than it sounds like, given that a dGPU also has to power VRMs, memory, memory controllers, etc.
The notebook APUs are not meant for serious gaming. I suppose at 35W you could probably do some light gaming. The 15W variants will probably be limited to casual games. At best some older games with low requirements and at the lowest settings. Maybe CS:GO and DOTA2 at 720p. If you are lucky.
The lappy APU I'm expecting from AMD will have a CPU equivalent to a Pentium/i3 and graphics double what Intel has, in a 35-45 W TDP. Anything slower is kind of pointless outside of signage, point-of-sale boxes, etc.
Judging from the success of the ultra-low-power variants in Apple devices (Macbook non-pro and Air), I think there is potential value in a 15W version. I don’t know whether it makes sense for AMD to target this market, because it depends almost exclusively on OEM demand.
But I agree with what you are saying. A laptop CPU at 35-45W (configurable) is probably best for most people. I wouldn’t expect this to run any games though. OK, maybe at 720p/low.